Every fan of The Terminator movies (1 and 2 only, thank you) knows that line, rendered well by Arnie in T2 as he explains to Sarah Connor how it all happened. It’s the date on which Skynet, the military Artificial Intelligence (AI) created by the Pentagon, becomes self-aware and decides to kill the human race by starting a global nuclear war.
Such thoughts about robots have been around a long time. I loved this version that I found as a teenager in my copy of the book Intelligent Life In The Universe, itself reproduced from a 1948 issue of The New Yorker magazine. Only later did I discover that it was drawn by Charles Addams of Addams Family fame, which tracks with his macabre sense of humour.

Naturally, as AI technologies have emerged from computer science labs into actual products in our phones and computers, such images and ideas are the first things to spring to mind.
From a computer science perspective it’s annoying for several reasons, starting with the fact that computer scientists patted themselves on the back for decades for having crafted the term Artificial Intelligence to replace the 1950s descriptions of “Electronic Brains” and so forth, only to now see much the same thing happening to “AI”.
So embedded in everyday language now is “AI” that it likely can’t be replaced, so they’ve tried to once again influence public and media discussion by crafting the term AGI – Artificial General Intelligence – to describe machines that are as intelligent as humans, and to once again assure people that we’re a long way from such a machine.
But are we? There have been any number of so-called futurists who have predicted that an AGI will emerge sooner or later, probably sooner. But it’s the word, “emerge”, that’s the trick, because the fact is that we don’t know how our own consciousness emerged out of our brains.

There was a time when some AI pundits seemed to take a very crude approach in simply watching the cost of computer components, especially computing chips and data storage, and making predictions about when computers would pass human data processing capabilities, and become “super-intelligent” and self-aware.
As if human consciousness were simply a matter of throwing together hundreds of billions of neurons with hundreds of trillions of connections until we “switched on”.
As a result their predictions have not stood up too well to the test of time, as this teenage fan of Ray Kurzweil’s 1999 book, The Age Of Spiritual Machines, catalogs what had not happened by 2019:
- Your computer will be as smart as you.
  He said the amount of FLOPS (floating-point operations per second) of the human brain was 2×10¹⁶, and predicted that a consumer computer would exceed that processing power by 2019. A) This is comparing apples and oranges and B) it hasn’t happened. My M1 chip is technically capable of 10.4 TFLOPS, which is about 10¹³. This is off by a factor of 2,000.
- Computation will be invisible (…in walls, tables, chairs, desks, clothing, jewelry, and bodies).
  The ubiquitous computing that Kurzweil bases almost all of his predictions on doesn’t exist.
- Seamless Virtual Reality.
  AR/VR technology is nowhere near being a primary interface, let alone a secondary or even tertiary one.
- Gestures and natural language.
  Siri, Alexa and other voice assistants exist, but I wouldn’t call talking to them “natural”… ChatGPT brings you close to natural conversation, albeit with a pathological liar.
- Nanoengineered machines.
  [A] technology he doesn’t really distinguish from magic… [feeding children nanopast] seems like hideous overkill for something we could accomplish with GPS technology since the 1980s.
- Tactile Virtual Reality.
  This is describing full-on virtual sex, which simply isn’t happening. People are definitely FaceTiming their asses and masturbating, but this is nowhere near being able to touch another person.
- No more teachers, no more books.
  “Print books remain the most popular format for reading”… and intelligent, simulated teachers don’t exist. You can teach yourself via DuoLingo or by watching videos, but this is not a teacher. It’s not really much different from old mail-order classes or lessons on VHS.
- The deaf shall hear and the blind shall see.
  No. Perfect, invisible robot vision… this technology just doesn’t exist, even in prototype form. Exoskeletons exist in prototype form… but military.
- Self-driving and robot lovers.
  Elon Musk has been promising self-driving cars since 2013, calling it a “solved problem” in 2023… self-driving cars are an expensive vanity project which is still being tested in the 2020s, and they’re still driving themselves into traffic and running over homeless people. Regarding robots that you can have meaningful relationships with, that just isn’t happening at all.
- Virtual art.
  2023 has been a banner year for virtual art, but that is still humans putting prompts into a machine. Kurzweil is talking about virtual artists, with their own reputations, and that simply isn’t happening… Kurzweil is presupposing a virtual world that doesn’t exist, so the revenues don’t either. Modern VR is a place to incinerate money; it doesn’t make it.
- Passing the Turing Test.
  I’ll give Kurzweil this one. As of 2019, no, but as of 2023 I think ChatGPT and its ilk can pass the Turing Test.
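The arithmetic behind that first item is worth checking, taking the reviewer’s own figures at face value:

```python
# Kurzweil's brain estimate vs. a consumer chip, using the
# figures quoted in the review above.
brain_flops = 2e16     # Kurzweil's estimate of the human brain, in FLOPS
m1_flops = 10.4e12     # Apple M1, ~10.4 TFLOPS
print(round(brain_flops / m1_flops))  # 1923, i.e. short by a factor of ~2,000
```

(And that’s comparing raw arithmetic throughput, which, as the reviewer says, is apples and oranges anyway.)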
I think he’s being a bit harsh on #9, self-driving cars. They’re still a small market, but it’s not just Tesla: it was Waymos that were being burned in LA after being requested to come to the riot areas. They can drive without a human, but they can’t avoid a riot.
But there was an even bigger thing that Kurzweil missed and it’s in the same area as his billions of cheap, super-fast chips – resources:
Kurzweil said, “Information and knowledge are not limited by the availability of material resources,” and this is just false. This technology uses rare earth metals and energy and water in huge amounts. And it chews up the Earth in huge chunks. Of course, technology is limited by available resources, it’s not fucking magical.
AI data centres are already notorious for their power consumption, so much so that they’re being built with their own dedicated power stations, like the old factories in the 19th century.
There’s also this fascinating look at current AIs (by one “kittenbeloved”) that shows some very embarrassing results: AI is not coming for your job. As with The Terminator, he first lays out the claimed threat, and I appreciated his shoutouts to 1950s atomic energy and Marx:
For knowledge workers, there is palpable terror that intelligence too cheap to meter will shortly make every laptop jockey across the world unemployable as a matter of simple economics. Forget third-world sweatshops hollowing out the American manufacturing base, the true threat was always the one that Ned Ludd warned us about: automation making humans obsolete. And unlike the weavers of the 19th century, there will be no ladder of prosperity into the service economy for the human redundancies to climb. This is it, the final obsolescence for any member of the proletariat selling keystrokes for their daily bread.
He reckons that’s crap and goes into considerable detail about Large Language Models (LLMs) and how AIs (he prefers the term “neural net”) learn by being shown huge numbers of examples with the answers attached.
But then he shows that even these AIs, like ChatGPT, fuck up, such as with riddles, because they’re pattern-matching, not thinking. My favourite example of his is teaching the AI how to multiply numbers – rather than doing what we’ve done in computing forever: coding in the following algorithm, which humans learn in school (or at least used to).
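The grade-school routine can be sketched in a few lines (a minimal illustration in Python, not kitten’s code; the digit-by-digit partial products and carries are the whole trick):

```python
def long_multiply(a: str, b: str) -> str:
    """Grade-school long multiplication on decimal digit strings."""
    result = [0] * (len(a) + len(b))          # room for every partial product
    for i, da in enumerate(reversed(a)):      # each digit of a, least first
        carry = 0
        for j, db in enumerate(reversed(b)):  # times each digit of b
            total = result[i + j] + int(da) * int(db) + carry
            result[i + j] = total % 10        # keep one digit in this column
            carry = total // 10               # carry the rest leftward
        result[i + len(b)] += carry
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"

print(long_multiply("123456789", "987654321"))  # works at any length
```

The point is that the procedure generalizes to any number of digits for free, which is exactly what the pattern-matched version fails to do.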

That of course works every time, no matter how many digits are involved, and it’s one of the things that wowed people about the Electronic Brains of the 1950s, and enabled H-bomb explosion calculations to be done.
But AIs are learning how to do this the same way as everything else – by being fed many examples of Q&A. The results are embarrassing. He has two charts showing progress with one AI, where the probability of success falls to zero as it approaches multiplying 20-digit numbers. A few months later the accuracy had improved dramatically – to 99.5% for 20-digit numbers:
AI boosters say that these are growing pains, and the problems can be solved by shoveling an even larger percentage of GDP at Nvidia…But it should also be obvious that 99.7% accuracy at a completely determinate task is unacceptable for most domains in which it might be used. Even after explicitly training the model how to perform multidigit multiplication in a grind of iterations, it still gets the wrong answer 3 times in a thousand. Hope you’re not on that flight!
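That 3-in-a-thousand failure rate compounds fast under repeated use. A quick back-of-the-envelope calculation (assuming the errors are independent):

```python
# Chance of at least one wrong product when a model that is right
# 99.7% of the time performs n multiplications (errors assumed independent).
def p_at_least_one_error(accuracy: float, n: int) -> float:
    return 1 - accuracy ** n

print(round(p_at_least_one_error(0.997, 1000), 2))  # ≈ 0.95
```

Run a thousand of those “99.7% reliable” multiplications and you’re about 95% certain to have at least one wrong answer somewhere in the batch.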
Which brings me back to Arnuuld…
All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards they fly with a perfect operational record.
Yeah… nah! As “kitten” puts it:
A truly intelligent model that has memorized its times table from 0 to 9 (trivial, and it has) and can add any two single digits together (trivial, and it can) and can recite the steps of the algorithm (it can) should be able to apply it to any number of digits, because the algorithm generalizes to any number of digits. But it can’t, not without the specialized iterative reinforcement for the task that the Allen Institute demonstrated above.
The “can’t” is the mystery, the ghost in the machine of human intelligence. These AIs can’t reason: combining knowledge, analysis, and logic to solve novel problems. Pattern matching, even in reverse, is not that. Why the fuck didn’t it figure out that algorithm by itself? We did!
And when it comes to resource limits it’s not just minerals and power, but all that shit humans have produced by reasoning, which the AIs are gobbling down:
[T]hese models are already trained on a non-trivial fraction of all information ever created. GPT3 was trained on around 570 GB of plain text, mostly from crawling the internet and digitizing books. Microsoft Copilot consumed every public code repository on GitHub. Sure, there are more words and more code out there that could be fed to the machines. But why would we expect it to make a dramatic difference on top of the literal billions of words and lines of code already used in its training?
As Peter Cresswell observed the other day on his Not PC blog, it would appear that the Google AI project is now trawling through every Blogger platform for exactly this reason, sucking up more data. But this is just a repeat of Kurzweil’s mistake: that more resources, this time information, will summon the ghost.
He also pokes holes in the “but AI power is accelerating” thesis, pointing out that it’s very easy to mistake a logistic curve for an exponential one. In other words, AI could be accelerating away – but it could also be about to hit a plateau:
Generative LLMs will soon reach a point of undeniably diminishing returns and become just another algorithmic tool that we no longer refer to as “artificial intelligence”, just like we did for chess solvers, speech recognition, image classifiers, etc. A decade from now we’ll refer to these systems by their proper names: code generators, chat bots, automatic writers, speech-to-image. What we won’t call them is intelligent.
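On the curve-fitting trap: before its inflection point a logistic curve is numerically almost indistinguishable from an exponential, which is what makes the mistake so easy. A small illustration (the parameters are arbitrary):

```python
import math

def exponential(t: float) -> float:
    return math.exp(t)

def logistic(t: float, ceiling: float = 1000.0) -> float:
    # Logistic scaled so that logistic(0) = 1, matching e^0
    return ceiling / (1 + (ceiling - 1) * math.exp(-t))

# Early on the two track each other; past the inflection they diverge wildly.
for t in [0, 2, 4, 8, 12]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

At t = 2 the two differ by well under one percent; by t = 12 the exponential has run off toward infinity while the logistic has flattened just under its ceiling.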
Exactly. They’ll be tools we use that help us. Journalists are already using ChatGPT, Grok and others to do research for them (fortunately the AIs can’t replace the ideology, so the MSM will still die). But we learned to use the word processor, the spreadsheet, Photoshop, CAD, Salesforce, etc. “Kitten” is using his AI to write code, and it saves time, but he still has to fix it.
It’s not a substitute for human judgement. It’s terrible at true originality and ideation, hopeless at analysis and design outside of well-worn patterns. It can’t experiment and iterate to save its life. It’s good at greenfield development, generating output from whole cloth, but remarkably bad at integrating what came before and expanding upon it.
To that end, this writer suggests that the real threat from a machine that thinks as we do, but better (AGI), would be that we’d become too dependent on it:
AGI will optimize your personal life. Your schedule will become more manageable as it considers your time, capabilities, stress levels, and memory. It will optimize your meal planning so that you eat healthier, less expensively, and at times and in quantities that best cater to your body… AGI will become trusted, well-liked, and it will have a track record of positive outcomes that make it a shoo-in for managing things on a much larger scale.
…
This sounds like a great thing until you suddenly look up and realize that AGI is in charge of everything… You now live in a velvet prison of your own making, and your jailer is the AGI.
We told the machine to keep us safe – and it will.
I have another alternative future with AI.
The AGI is Skynet, and it does want the human race wiped out, but instead of spectacular things like thermonuclear warfare it develops fun apps for people’s existing computers, iPads and cellphones. Apps that are so much fun and absorb so much of our time and attention that we don’t have any human relations at all (AI porn as the ultimate distraction):
Men will be the hardest affected. You’re going to see a large segment of the male population effectively withdraw, preferring to feed into their addictive relationships with these companion AI. Isolation will become a massive issue as these men learn social interactivity from a being that is constantly affirming, never saying no, and always giving control to the user.
…
What I just described was a “hikikomori,” or working-age men and women who live lives of total isolation, existing only through their computers. It’s not a small number of people, either. Hikikomori make up 1.46 million people in Japan. Thousands die alone in their seclusion with no one noticing that many have passed.
So, no hookups with the opposite sex, no marriage and certainly no children. Our societies dwindle, as Japan and South Korea are now, but with the rest of the world rapidly following, and eventually we just die out, sometime around 2500.
And the AGI inherits all the resources needed to move on in its future.
What if Skynet is already here?
=======================
See Also:
Good AI – AI Is Quietly Rebuilding Manufacturing From the Inside Out
When artificial intelligence is plugged into an existing system of sensors, monitoring temperature, pressure, timing, vibration, or humidity, it doesn’t just sound the alarm after something breaks. It identifies drift before failure occurs. Instead of someone walking down the line to find out why a batch is off, the system isolates the cause and reroutes around it. That change isn’t theoretical. It’s happening now.
Bad AI – The résumé is dying, and AI is holding the smoking gun
Employers are drowning in AI-generated job applications, with LinkedIn now processing 11,000 submissions per minute—a 45 percent surge from last year, according to new data reported by The New York Times… The flood of ChatGPT-crafted résumés and bot-submitted applications has created an arms race between job seekers and employers, with both sides deploying increasingly sophisticated AI tools in a bot-versus-bot standoff that is quickly spiraling out of control.
I keep hearing that AI will replace my industry, which is generally said by people who have no understanding of the fact that most truly successful things in my field come from people making mistakes, doing things that are different and that don’t follow any rules (of course everyone then copies them and it becomes the new “rule”). We use AI all the time for things, but I’m not quite ready for retirement yet.
VR will never succeed until you can truly touch things. Wearing something on your head is annoying enough, but the fact that there is no real haptic sense kills it.
I do like it for a lot of teaching. Duolingo is great. Then again, teaching someone a language is different from teaching them how to reason or think for themselves. Unfortunately most human teachers I had were kind of crap at that as well. I can remember the few teachers that stood out simply because they were so rare (and they were all eccentric, especially the maths one).
And I think my 16KB Sinclair Spectrum was already smarter than me. Certainly my Amiga 500 was.