The case for not criticizing people for using generative AI

While generative AI is a flawed technology that should not be used, I think we should avoid criticizing others for using it.

In the instances where people are using generative AI for malicious purposes, I think it’s better to focus on what they are doing wrong instead of on the generative AI use. For example, if a criminal is using generative AI to generate automated phishing emails, they should be prosecuted for phishing. Criminals can create phishing emails without AI, so getting them to stop using generative AI is not helpful.

Also, people can end up in situations where they have to use generative AI or aren’t familiar with why they shouldn’t use it. Criticizing them for using generative AI doesn’t help them find alternatives.

Some of my experiences with focusing too much on technology, and a teacher’s unconventional approach

From 2008 to 2010, I overused AI due to unrealistic expectations. I was working on my own real-time version of Risk, and I started focusing on making a skilled AI opponent for it. My high expectations culminated in an attempt to use a genetic algorithm to create an AI through simulated evolution. I got to the point where the algorithm could evolve equations that evaluated to specific target numbers, and then progress stalled.
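The kind of experiment I was doing can be sketched like this. This is a toy reconstruction, not my original code: a minimal genetic algorithm that evolves a short arithmetic expression until it evaluates to a target number, using truncation selection, single-point crossover, and random mutation.

```python
import random

# Toy reconstruction (not the original code): evolve a short arithmetic
# expression of the form "digit op digit op digit" toward a target value.
TARGET = 42
OPS = ['+', '-', '*']

def random_individual(length=5):
    # Genes alternate digit, operator, digit, ... e.g. ['6', '*', '7', '+', '1']
    return [str(random.randint(1, 9)) if i % 2 == 0 else random.choice(OPS)
            for i in range(length)]

def value(ind):
    # eval is safe here: the expression only ever contains digits and + - *
    return eval(''.join(ind))

def fitness(ind):
    # Higher is better; 0 means the expression hits the target exactly
    return -abs(value(ind) - TARGET)

def crossover(a, b):
    # Single-point crossover; both parents share the digit/op layout,
    # so any cut point produces a structurally valid child
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(ind, rate=0.2):
    # Replace each gene with a random gene of the same type, at the given rate
    out = list(ind)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] = str(random.randint(1, 9)) if i % 2 == 0 else random.choice(OPS)
    return out

def evolve(pop_size=50, generations=200):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 0:
            break  # exact match found
        parents = pop[:10]  # truncation selection; parents survive (elitism)
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
print(''.join(best), '=', value(best))
```

Even on a problem this small, a run can stall at a near-miss rather than an exact answer, which matches my experience of progress stalling.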

For a while, I focused on front-end development with React, and took the idea of building single-page applications in React too far. This included using React to build a static site that simply listed board game groups.

Also, I used to dismiss the importance of liberal arts subjects for a software engineer. In my high school, there was a math and computer science teacher, Michael Stueben, who gave extra credit for writing book reports. I thought the idea of writing a book report for a math or computer science class was a waste of time, and I was not sure why Mr. Stueben was a fan of them.

Recently, I’ve come to the conclusion that asking computer science students to write book reports, as Mr. Stueben did, is a great way to practice written communication. Coding is only a small part of creating software. Communicating with others is an essential part of software development, and it will likely take more time than coding. I’m currently working on a decentralized open source alternative to Meetup.com, and there are days where I don’t write a single line of code because of a focus on communication.

Mr. Stueben also had a unique sense of humor and would make jokes completely unrelated to computer science. Someone claimed he gave the following quiz, which was a logic puzzle.


DIRECTIONS: Choose the best answer to the following question from the choices below.

(Chinese text you don't need to know to solve it)

A. All of the below.
B. None of the below.
C. All of the above.
D. One of the above.
E. None of the above.
F. None of the above.

I also remember a conversation I had with a friend about the computer science programs at UC Berkeley. They mentioned that it was very hard to get into the engineering department’s undergraduate computer science program. However, they also mentioned that getting into the College of Letters and Science (the liberal arts version) was far easier. At the time, I thought that getting a liberal arts degree for computer science was not worthwhile.

How luck with getting good information helped me understand technology

During the peak of my AI overuse, I was in the fortunate position of learning the fundamentals of AI through a class, which helped me develop a more realistic view of the technology. With different luck, I could have found myself in a situation where I was not getting accurate information, and I think that is the case for many people today. Also, outside of some websites, I didn’t hear anything about genetic algorithms, so my unrealistic view of their usefulness was not being reinforced.

One of the factors that stopped my AI overuse in 2010 was learning about neural networks in the AI class I was taking. Neural networks were presented as an innovative AI technology, and I thought that their resemblance to a human brain would make them better than genetic algorithms. I decided to spend time on my own learning about neural networks, and found that they had fundamental limitations on their effectiveness.

Having a good teacher, and being around motivated students who were eager to learn about AI, helped. The class was also held in a computer lab run by student volunteers who set up all the computers to run Linux. During the multiple classes I took in that lab, students fostered an environment of trying different things and talking about newer tech.

Here are some examples of other students’ interest in trying new things and thinking about tech.

  • I mentioned that I didn’t enjoy using a terminal, and other students tried to convince me that the terminal was more powerful. Now, I use the terminal far more often.
  • One student made a comment about using the recently released Microsoft Bing for search. Others started vocally calling Bing a joke. On a related note, I tried Bing around that time and found Google search to be better.
  • Someone started talking about the Go programming language shortly after it was announced. While they found Go interesting, they did not talk about actively using it. I appreciated their perspective of acknowledging a new technology without immediately rushing to use it.

When I first heard about generative AI through ChatGPT, I became concerned that a superintelligent AI would develop and take everyone’s jobs. I thought ChatGPT was using some new advanced technology. However, I then found out that ChatGPT and other generative AI tools were advanced neural networks. Afterwards, I started doing more research on generative AI, and it became increasingly clear that the technology was flawed. If I hadn’t learned about neural networks years earlier, I probably would have had a different and inaccurate understanding of generative AI.

In summary, I learned a lot about technology because of the environment I happened to be in. If I had instead lived somewhere else, with a different teacher and minimal interaction with other students, I would have a more limited understanding of technology. When someone is overusing AI, it is important to consider that they may not have been receiving accurate information.

Other reasons why I avoid criticizing people for using generative AI

If someone is using generative AI, adjusting to working without it involves short-term challenges while getting used to different habits. Often, people don’t have time to deal with these short-term challenges due to other priorities. I’ve experienced a similar pattern across multiple jobs with tech debt that was slowing down development: we often did not have time to refactor code and reduce tech debt because of other, higher priorities.

Also, people may be in situations where they have to use generative AI, even if they don’t like it. Working with others or doing a job has sometimes required me to use technologies I did not like. Software development is a team activity, and that means considering the preferences of other developers or the requirements given to you. For example, I have not enjoyed using AWS since I started using it close to 10 years ago. However, I used AWS at my previous job because I was contributing to existing software deployed on AWS, and switching to a different cloud provider was not an option.

How I make sure generative AI is not used on my projects

I make sure new projects aren’t created with any generative AI help, which establishes a norm of not using it. Once generative AI is part of a project, it becomes far more difficult to stop using it. Every one of my projects has been started with zero generative AI use, and I am not going to add generative AI to them. I also ban generative AI contributions on my open source projects.

Using task-specific AI automation

Although I don’t use generative AI, I think task-specific AI automation is useful when clear quantitative expectations for input and output can be defined, and the accuracy can be measured. This kind of automation also has clear boundaries on where it is useful.

For example, predicting the next day’s high temperature is a great use of AI. The accuracy of the prediction can be easily evaluated by comparing it with the measured temperature, and being off by a few degrees is not a major issue. There is also a clear boundary between what the AI is doing and what a human meteorologist is doing.

A meteorologist will read the prediction, and can then communicate the relevant information along with the consequences of uncertainty. In the case where the predicted temperature is around 32 degrees, and a slight difference could mean rain, snow, or ice, a human meteorologist will communicate that uncertainty and mention the possibility of frozen precipitation so that people can prepare.
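The accuracy check described above is straightforward to express in code. This is a minimal sketch with made-up numbers, not real forecast data: it scores a model’s predicted daily highs against the measured highs using mean absolute error.

```python
# Toy numbers, not real forecasts: scoring daily high-temperature predictions.
predicted = [71, 68, 75, 80, 77]  # model's predicted highs (degrees F)
observed = [70, 66, 76, 82, 77]   # measured highs (degrees F)

def mean_absolute_error(pred, obs):
    """Average absolute difference between predictions and measurements."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(pred)

print(f"MAE: {mean_absolute_error(predicted, observed):.1f} degrees")
# prints "MAE: 1.2 degrees"
```

Because the expected output is a single measurable number, it is easy to decide whether this kind of automation is accurate enough to rely on, which is exactly the property generative AI output lacks.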