Anxaity
March 28, 2023•1,069 words
Anxaity is the feeling that, as an entrepreneur, you are missing out on the current AI rush. But there’s also a certain anxaity one feels in not fully fathoming the nature of the ongoing explosion in scientific discovery. It’s hard to get an explanation of how it all works beyond “it predicts the next word in the sequence.” Because that certainly doesn’t look like what’s happening, right? It seems far more magical than that. So how does it all work? What follows is my best attempt at a very high-level overview, using analogies that take shortcuts for the sake of imparting some semblance of resonance, along with some thoughts on whether LLMs are really “AI” or whether that label is hyperbole.
The true magic sauce is neural networks. Neural networks are math equations with hundreds of billions of variables. With that many variables, it’s absolutely futile to attempt to understand what each variable represents or even what the whole equation represents. The purpose of a neural network is to approximate some physical phenomenon that is otherwise impossible to model due to its vastness and intricacy, like human language.
To what values do you set each variable to get the desired result? This can’t be forecast. Instead, you iterate via trial and error: you give the neural network examples and solutions, and have it more or less randomly alter its variables until it starts to produce something intelligible to you. After an obscene amount of training, the result is a math equation with hundreds of billions of parameters, finely tuned to produce results that look good to you but absolutely inscrutable internally.
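To make the trial-and-error picture concrete, here is a deliberately tiny sketch in Python. The “network” is just a two-variable equation, and training is exactly the random nudging described above (real systems use gradient descent and billions of parameters, but the spirit is the same):

```python
import random

# Toy "neural network": an equation with two tunable variables.
# Real models have hundreds of billions of them.
def predict(params, x):
    w, b = params
    return w * x + b

# Examples and solutions: inputs paired with the answers we want.
examples = [(1, 3), (2, 5), (3, 7)]  # hidden rule: y = 2x + 1

def error(params):
    # How far off the equation's outputs are from the solutions.
    return sum((predict(params, x) - y) ** 2 for x, y in examples)

# Trial and error: randomly nudge the variables and keep any
# change that makes the output look better (lower error).
random.seed(0)
params = [0.0, 0.0]
for _ in range(5000):
    candidate = [p + random.uniform(-0.1, 0.1) for p in params]
    if error(candidate) < error(params):
        params = candidate

print(predict(params, 10))  # close to 21, the hidden rule's answer
```

The program is never told the rule; its two variables simply drift toward values that reproduce it. Scale the same loop up by eleven orders of magnitude and you get the inscrutability described above.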
The big scientific discovery made recently is that there is a math equation one can build that models human language. That is, if you put together some equation with hundreds of billions of parameters and feed it a word, running that word through each step of the function will produce some other word, or word-like thing, that makes sense to you.
This is quite remarkable because it means that human language can be modeled using math, and is not as divinely out of reach for computers as we once thought it was.
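A cartoon version of “feed it a word, get a word back” fits in a few lines. The vocabulary and weights below are made up by hand for illustration; an actual model learns billions of weights from data rather than four rows of guesses:

```python
# A drastically simplified "language model": each word is an index,
# and a weight matrix scores every possible next word.
vocab = ["the", "cat", "sat", "down"]
index = {w: i for i, w in enumerate(vocab)}

# Hypothetical weights a trained model might have arrived at:
# row = current word, column = score for each candidate next word.
weights = [
    [0.0, 0.9, 0.1, 0.0],  # after "the": probably "cat"
    [0.0, 0.0, 0.8, 0.2],  # after "cat": probably "sat"
    [0.0, 0.1, 0.0, 0.9],  # after "sat": probably "down"
    [0.5, 0.2, 0.2, 0.1],  # after "down": maybe "the"
]

def next_word(word):
    # Run the word through the math and return the best-scoring word.
    scores = weights[index[word]]
    return vocab[scores.index(max(scores))]

print(next_word("the"))  # "cat"
print(next_word("cat"))  # "sat"
```

Nothing in the arithmetic “knows” English; the knowledge lives entirely in the numbers, which is why nobody can point at a single weight and say what it means.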
So, if large language models like ChatGPT are just math equations with lots of parameters, how do they exhibit creativity and understanding? Well, it’s just that—creativity as expressed by language is in the language itself. It turns out language is a tool invented by humans, kind of like math, and you can operate on words and sentences mathematically to yield results that more or less follow certain laws. Neural networks don’t know what those laws are, but they can approximate them to a high degree.
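One classic illustration of doing math on words: if words are represented as lists of numbers, relationships between them can behave like arithmetic. The two-dimensional vectors below are hand-picked toys, not learned embeddings, but they mimic the famous king − man + woman ≈ queen result from real word-vector models:

```python
# Toy word vectors (hand-picked, not learned) illustrating that
# relationships between words can behave like arithmetic.
# Dimensions here are roughly [royalty, masculinity].
vectors = {
    "king":  [1.0, 1.0],
    "queen": [1.0, 0.0],
    "man":   [0.0, 1.0],
    "woman": [0.0, 0.0],
}

def nearest(v):
    # Return the vocabulary word whose vector is closest to v.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(vectors, key=lambda w: dist(vectors[w], v))

# king - man + woman lands on queen's vector.
v = [k - m + w for k, m, w in
     zip(vectors["king"], vectors["man"], vectors["woman"])]
print(nearest(v))  # "queen"
```

These are the “laws” the paragraph above gestures at: nobody wrote them down, but once language is encoded as numbers, they fall out of ordinary arithmetic.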
To what extent is creativity mechanical, then? I would say probably all of it. If you think of classical creative endeavors like writing, speaking, and drawing, they are all eventually made manifest in the physical realm via very narrow, mechanical outlets, like pen to paper. These outlets can easily be represented as arrays of numbers and operated on by neural networks to yield results which are meaningful to humans.
So is that artificial intelligence? Well, only if we define our own intelligence by our ability to manipulate language. If humans are intelligent simply because our brains can do math on words internally in a very fast, transparent manner, then we are indeed screwed, and LLMs are truly, truly artificial intelligence, with no trace of hyperbole whatsoever. If, however, you can take language away from a human and there remains some profound intelligence, then we are at a vast distance from true AI.
In conducting a thought experiment where you try to imagine a human who cannot use language (and thus, paradoxically, cannot run thought experiments), I struggle to define what really remains, and whether it could be described as intelligent. Perhaps comparing humans to languageless species is apt, in which case what remains is a simple neural network that detects patterns correlated to physical impulses like harm and hunger. Seemingly no different from an autonomous-driving AI or a captcha solver.
In which case, to my dismay, I do not find it a stretch to call LLMs artificial intelligence. But it’s just that: artificial. It’s a model of intelligence. An approximation. Models are not the real thing; a map of the world is not the world. Of course, even if LLMs only approximated lingual intelligence to 80%, that would be quite sufficient to take over a lot of tasks humans do today, at what is practically light speed. 80% of human intelligence at 1000x the speed is indeed very disruptive, very scary. Could you ever build a 100% model? I don’t think so; a model that matched the real thing completely would no longer be a model, so the question itself is oxymoronic.
What’s interesting about neural networks is that because they are just a balancing act between billions and trillions of variables, you could arrive at variations of weights and parameters that yield something on top of human language. That is, human language, but in some derivative form that could very well be far more useful than what we know as language today. So instead of having time iterate on language over the next 100,000 years to yield more and more useful human lingual technology, you can have computers iterate on language in a much shorter timespan. The results stand to be both kaleidoscopic and cataclysmic.
Is your job safe from AI? Well, here’s a heuristic that I think works: does your job consist of mapping human language and so-called creativity to some narrow, mechanical form, like a keyboard and mouse? AI. Straight to AI. Anything that can be fed in as an array of numbers, like keyboard and mouse inputs, can be passed to and approximated by a neural network. So yes, writing, sketching, designing, coding, music production, video production: all of these are easily expressed through mechanical tips and taps on a keyboard and mouse, and are fair game for being done at 1000x the speed and 80–99% of the quality.
Even you on a Zoom meeting are just inputs to a light sensor and an audio sensor, and thus, again, easily approximated by neural networks. Many things will be automated, but those who can produce at the top 20% of quality globally will likely not only retain their jobs but be extremely sought after, if for nothing more than to train the next generation of algorithms.