AI

Over the past few years, I have done many projects related to AI. I believe task specific AI with objectively measurable inputs and outputs is useful in a variety of areas.

Clash of Empires: One of my primary goals with this game was to have a competitive AI. More detail about what I did can be found here.

Here are the AI fields I have explored:

  • Computer vision
  • Genetic algorithms
  • Neural Networks
  • Othello AI
  • Pathfinding
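As an illustration of one of these fields, here is a minimal pathfinding sketch using breadth-first search on a small grid. The grid and the start/goal cells are made-up examples, not taken from any of the projects above.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Return a shortest path of (row, col) cells, or None if unreachable.

    Grid cells: 0 = open, 1 = wall.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # maps each visited cell to its predecessor
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk predecessors back to the start to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (0, 2)))  # 7-cell path around the wall
```

Because BFS expands cells in order of distance from the start, the first time it reaches the goal it has found a shortest path, which is the kind of objectively checkable output this page is about.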

These are some examples of where I have found automation useful:

  • AI for computer opponents in a real-time strategy game.
  • Automated trains on the DC Metro.
  • Database integration tests for the dmvboardgames.com backend.
  • Weather forecasts.

Also, I do not use generative AI, with two exceptions:

  • Testing generative AI products to understand their capabilities.
  • Creating generative AI output as evidence to convince people that it is a flawed technology.

Currently, generative AI is based on LLMs, which are a fundamentally flawed technology. Here are the problems I have with generative AI:

  • Anti-human outputs: LLMs are programmed to generate statistically optimized responses without considering their effects on people. LLMs are computer programs that calculate output by doing math, and human thoughts and feelings cannot be described by pure math. They also lack the context to understand human needs.
  • Corporate subsidies: LLMs are being subsidized by billions of dollars in corporate investment, loans, and circular financing deals. This money could be directed toward products that provide value to users. These subsidies are unsustainable and will cause widespread economic disruption that spreads beyond tech. Keeping the subsidies going longer will only make the aftermath worse.
  • Data centers: LLMs require large data centers that harm local neighborhoods. People living near data centers have to deal with polluted water and noise. Data centers also do not pay for all the electricity they use, passing the costs on to residents, and they consume vast amounts of resources, especially computing hardware. If I'm constantly sending data to an AI data center, I'm not sure where else that data is going. If I work offline, I don't have to worry about unauthorized access to my data; a security breach at a data center means my personal data could be exploited by scammers or random companies sending me targeted ads.
  • Exploited labor: To make LLMs cheaper, companies rely on underpaid workers, including data labelers who are often forced to view abusive material. LLMs also train on writing and art created by other people without credit or compensation.
  • Inaccurate responses: Generative AI has a tendency to confidently give wrong answers that can easily be mistaken for valid ones. The effects range from minor annoyance to serious harm. It is impossible to supply enough training data and computing power to resolve this issue. Also, generative AI companies will always need to be run by humans, and the inefficiencies of human organization will be a fundamental limit on generative AI development.
  • Loss of critical thinking: Using LLMs is correlated with a loss of critical thinking skills, which creates a dependence on them. LLMs aren't universal oracles, and it's important to know how to solve problems when an LLM isn't going to work.

Using an LLM is simply a matter of prompting it by typing text and doesn't require much skill. If LLMs stick around after the AI bubble bursts and turn into something useful, then skills unrelated to prompting AI are going to become more valuable.

However, I see AI in the form of automation as useful. I think the keys are well-defined inputs and outputs, combined with a quantitative measure of accuracy.
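To make that concrete, here is a minimal sketch of what a quantitative accuracy measure looks like in practice. The classify() function and the labeled test cases are hypothetical stand-ins for a real task-specific model and its evaluation set.

```python
def classify(x: int) -> str:
    """Hypothetical task-specific model: label a number as even or odd."""
    return "even" if x % 2 == 0 else "odd"

def accuracy(model, labeled_cases) -> float:
    """Fraction of cases where the model's output matches the expected label."""
    correct = sum(1 for x, expected in labeled_cases if model(x) == expected)
    return correct / len(labeled_cases)

cases = [(2, "even"), (3, "odd"), (10, "even"), (7, "odd")]
print(accuracy(classify, cases))  # prints 1.0
```

Because the inputs, outputs, and scoring rule are all defined up front, anyone can rerun the evaluation and verify the number, which is exactly what generative AI output resists.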