Just a few decades ago, AI (Artificial Intelligence) meant a robot or a computer with human-like thinking. Think of HAL 9000, or the robots in *I, Robot*. All of those things were "AI".

Today any remotely automatic algorithm on the planet is labeled with this nonsensical term. 

LLMs (Large Language Models) are just statistical analysis algorithms. Yes, they are quite clever, and quite complex. But at the end of the day, they are mathematical statistics predictors. There used to be words for this sort of thing. We used to call it "Statistics" or "Analytics". But now it's called "AI".

The same thing is happening with other uses of applied mathematical Analytics, like automatic image generation (producing images that statistically satisfy a description, the "prompt"). This is also a clever way to do "Statistics". Why are we calling it "AI"?

I get calling LLMs "AI" to some extent. They are very similar (to the uninitiated) to the way AI is portrayed in the sci-fi stories that I started this article with. But why are we calling image generation algorithms "AI"?

In a way, it seems like every remotely automatic algorithm nowadays is "AI". Is it a marketing gimmick to make people care? To make people notice? Are all those renamings of everything into "AI" just an attempt to hop on the hype wave of bullshit?

Let's think about it. Analytics. Hm...

"AI" still seems to be closely tied to anything machine-learning-related. LLMs are not algorithms carefully assembled by linguists to "understand" the prompts. And the answers the LLMs give are not carefully put-together information curated by experts, or at least by people who know a thing or two about the topics in question. That would be Wikipedia, not LLMs.

Instead, LLMs are algorithms that analyze large quantities of text to pick up on patterns: how often is this or that word, or this or that letter, used in this or that context? Then, given enough source data, the algorithm can "randomly assemble" a sequence of letters that, statistically speaking, appears to belong in the source data.
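That "count the contexts, then sample" idea can be sketched as a toy character-level Markov chain. To be clear, this is my own vastly simplified stand-in, not how real LLMs work internally (those use neural networks with billions of parameters), but it shows the same principle: tally which letter follows which context in the source data, then statistically stitch letters together.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Count which character was observed after each `order`-length context."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=40, rng=None):
    """Repeatedly sample a statistically plausible next character."""
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        context = out[-len(seed):]
        choices = model.get(context)
        if not choices:
            break  # context never seen in the source data
        out += rng.choice(choices)
    return out

corpus = "the cat sat on the mat. the cat ate the rat. "
model = build_model(corpus)
print(generate(model, "th"))
```

Every character it emits was seen after that exact context in the source text, so the output "appears to belong" in the corpus without the program understanding a single word of it.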

Image generators and the like seem to do the same thing. But instead of letters in sequence, they analyze pixels on a 2D plane.

It seems like, instead of actually trying to figure out and solve the problem of language by building a very robust, very useful, very clever algorithm, the developers of AI decided to cheat the system: to bypass the hard work by offloading everything to an algorithm.

That, at the end of the day, yielded yet another algorithm. An algorithm that can very convincingly pretend to solve a problem. An algorithm that, poetically, people started using whenever they don't want to actually solve a problem themselves.

Think about it. When you ask an LLM to do the "thinking" for you, to "solve" something for you, you are not solving it yourself. You are cheating. But the ironic thing is, the LLM itself is cheating too, because the engineers who designed it didn't want to solve the problem themselves. They wanted a computer to statistically arrive at an algorithm by itself.

There is not just no intelligence in the LLM; there was no intelligence in the creation of the LLM in the first place. If there is any intelligence in it, it is just pretend intelligence: Artificial Intelligence.

Wait...

So, given this logic, "AI" is the best term to describe this madness? Hm...

I get it now. "AI" does not refer to the algorithm used. "AI" (Artificial Intelligence) is the term that describes its users.

**Happy Hacking!!!** 