
Generic AI rant

As someone who codes for a living, I’ve been forced to interact with AI on many occasions: doing “prompt engineering”, collecting data for training, even designing models. Mostly because the managerial class is entirely hype-driven and has an endless desire to grift. It’s whatever, as I’m just here for the money, but out of curiosity and boredom, I decided to dig a little deeper and use AI in my day-to-day coding, just to see how it is. (DISCLAIMER: the model used was ChatGPT 3.5.)

And… it’s alright, but the problem is that the technology is deceptive - it fools the unintelligent and makes it very easy for them to make my time on this earth worse. Not that they don’t already, but with AI, they have been given a big red button that says “RUIN EVERYTHING”, and they can’t stop hitting it like old, mentally ill people at the slot machines.

The idea itself is simple: throw a LOT of GPUs at a model, and it learns to predict the next word - a better version of autocomplete. The answers are sometimes quite impressive, especially for the simple stuff. I’ve been getting back into writing JavaScript (unfortunately) recently, and ChatGPT has been useful in helping me pick up the garbage-tier syntax of the language again. Yes, I can just “read the docs”, but with stuff like this, where I can trust the model not to hallucinate, I do find it a little more convenient than Googling.
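To make the “better autocomplete” point concrete, here’s a toy sketch of next-word prediction (my own illustration, not how ChatGPT works internally - real models are neural networks over tokens with billions of parameters, not a lookup table): count which word most often follows which in a corpus, then greedily “predict” the next one.

```javascript
// Toy next-word predictor: a bigram frequency table over a tiny corpus.
// The corpus and all names here are made up for illustration.

function buildBigrams(corpus) {
  const counts = {};
  const words = corpus.toLowerCase().split(/\s+/);
  for (let i = 0; i < words.length - 1; i++) {
    const cur = words[i];
    const next = words[i + 1];
    counts[cur] = counts[cur] || {};
    counts[cur][next] = (counts[cur][next] || 0) + 1;
  }
  return counts;
}

function predictNext(counts, word) {
  const followers = counts[word.toLowerCase()];
  if (!followers) return null; // never seen this word before
  // Greedily pick the most frequent follower, like naive autocomplete.
  return Object.entries(followers).sort((a, b) => b[1] - a[1])[0][0];
}

const corpus = "the model predicts the next word the model predicts tokens";
const bigrams = buildBigrams(corpus);
console.log(predictNext(bigrams, "the"));   // "model" ("the model" appears twice, "the next" once)
console.log(predictNext(bigrams, "model")); // "predicts"
```

Scale that table up to the entire internet, swap the counting for a transformer, and you have the basic flavor of the thing.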

It’s also good at writing boilerplate or very simple functions. I write a lot of Flutter code, and it’s nice that I can get a page set up in 10 seconds with “Write me a Flutter page with a state” and then start working on the page elements. It’s not much (realistically, it’s a minute down to 10 seconds), but it’s a nice little boost in productivity.
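For reference, the kind of boilerplate I mean is roughly this - a minimal sketch, where names like `MyPage` and the counter are placeholders, and any real page would grow from here:

```dart
import 'package:flutter/material.dart';

// A bare-bones stateful page: the standard StatefulWidget + State pair.
class MyPage extends StatefulWidget {
  const MyPage({super.key});

  @override
  State<MyPage> createState() => _MyPageState();
}

class _MyPageState extends State<MyPage> {
  int _counter = 0; // example piece of mutable state

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: const Text('My Page')),
      body: Center(child: Text('Count: $_counter')),
      floatingActionButton: FloatingActionButton(
        onPressed: () => setState(() => _counter++),
        child: const Icon(Icons.add),
      ),
    );
  }
}
```

Nothing clever, which is exactly why handing it off to a model works.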

For anything harder than that, though, it falls apart, unsurprisingly. Ask a question about a less popular library, and it will make up functions and variables. Sometimes I also have to word my questions very carefully, with as much detail as possible, which is annoying and just makes the process slower than Googling. Speaking of which:

Google searches have been entirely NUKED by AI. For a given technical question, the first result is, of course, Stack Overflow, but instead of a decent answer, or someone with no friends being unnecessarily snarky, the answer is now generated by someone with no more than 90 IQ trying to farm karma so they can land a shit job writing CRUD applications. I, frustrated, click the second result, and it’s a webcrawler reposting the same Stack Overflow question with the same garbage answer. Same goes for the 3rd, 4th, and 5th results. (Google Images has also been destroyed by AI, but that’s a different topic.)

This leads to my main observation: While AI isn’t objectively bad, the fact that stupid people are allowed to use it has caused irrecoverable damage to human society.

AI is VERY confident: whatever question it gets, it answers, and it delivers that answer like it’s a fundamental law of the universe, even when it’s wrong. If you have more than 2 brain cells, you verify the answer and disregard it if it’s wrong. This should be very simple, and yet a large part of the population, particularly the LinkedIn-crypto-corporate-subhuman archetype, is completely enamored with said answer and takes it at face value. This is why you keep seeing them say “AI will replace devs” when AI models cannot be trusted to build a single feature without extremely close supervision and careful editing.

What AI models output is also perfect for them: ChatGPT is very good at writing corporate-sounding emails that don’t say anything, which is exactly the managerial class’s raison d’être. This makes them even MORE confident in AI, since AI can do their HIGHLY challenging job with ease. Therefore, the organization must “pivot” towards AI, which means either a chatbot with a custom prompt (30 minutes of work), or training custom models that will, of course, be much worse than what Microsoft or Google has - but at least you get to say they’re proprietary!

But even the corporate obsession with AI is fine by me - this is what I signed up for. I get paid (poorly), and most importantly, these are GREAT opportunities to pad out my CV for future jobs. The products I make are obviously minimum viable products, with nominal value, but at least there’s arguably SOME utility - so, business as usual in the software industry. Then why do I say that stupid people being allowed to use AI ruined human existence?

The TRUE evil is that BOTTOM OF THE BARREL “humans” are allowed to use them, and I mean that seriously. The deeply unintelligent, the desperate for money, the “entrepreneurs”, “founders”, and “market disruptors” of the world. With the confidence of AI outputs, the ease of access to these models, and how easy they are to use, these “people” can churn out convincing-looking (to them only, because they are THAT stupid) “products” and “content” with zero value. At best it’s a generic piece of trash that is dead on arrival; at worst, it poisons social media and the wider internet.

To make matters worse, current society lets people like this force themselves onto us simply by paying money (advertising, Twitter Blue, etc). I am unfortunately forced to see the sludge from these losers: ads for their NFTs (in 2024!), crypto “pump groups” (market manipulation being actively encouraged is actually very funny), automated Twitter accounts posting unrelated images and videos under popular tweets for interactions, and the list goes on. To the average user, the experience of using the internet is essentially destroyed; a quick scroll on Twitter shows that the notion of discourse is gone. No more interesting topics, just an endless stream of absolute garbage. Who would’ve thought allowing braindead midwits to promote themselves was a good idea? (I know, it’s Elon Musk, the most influential midwit in human history!)

It’s fascinating to see what these “people” are like. Just look at your typical AI art defender: they are completely serious in arguing that their anime slop-image, a poorly veiled outlet for their “ephebophilia”, is a legitimate form of self-expression. This is pretty disturbing to think about - to these supposed “humans”, this is the MOST they can do to express themselves. With all of their lived experiences and emotions, the best they can do is a 100-word prompt with 85 adjectives, clicking “generate” again and again. Anyway, the simple point that AI image generation models are trained on images from artists without their consent is undeniable, and yet this very black-and-white moral issue is entirely lost on them - in fact, it enrages them. They reply, with their AI-generated profile picture of themselves losing 50 pounds and growing a real jaw, saying that artists should simply “use the tool” and “stop complaining”.

In my (very clear) nerd rage, it’s easy to assume that I think these people are evil, or ontologically bad in some way. I do not. While yes, I think they are deeply stupid, and it takes a legitimate moment of reflection for me to admit they deserve rights, these are often desperate people, trying their best to improve their conditions by latching onto the AI boom. They have simply been fooled by the outputs of the model and don’t know any better. It’s hard to explain why these models are unreliable without a small background in mathematics; to a layman, all you can say is “yeah, it’s wrong sometimes”, which is very unconvincing. It’s all too easy to blame individual actors for failures of the system, and I don’t want to make that mistake.

It’s my personal belief that AI generation models, for both text and images, should never have been released in their current state. There are simply too many issues, and the implications of these models being freely available are too great - hallucinations, the immense potential for misinformation, the moral issues of faking images of real people, questionable sources for training data, and the list goes on. In a better world, companies would care about their social responsibilities and be careful with the technologies they develop, and the state would step in if something went wrong.

Unfortunately, we are in hell, where companies have no responsibility but profit, and governments can be bought (“lobbied”) with said profit. So all of these models are just unleashed onto the public by psychopathic billionaires, and we have no choice but to live in the ruins.

Written 18.01.2024