No Minister

Posts Tagged ‘Artificial Intelligence’

A different kind of war

with 5 comments

Putin and his cronies have recently rattled their nuclear sabres over Ukraine, in a rather pathetic effort to frighten the West into stopping its support for Ukraine and thus salvage some sort of Russian success in the conventional war.

Given that no Western nation has raised its nuclear alert level, it’s clear that they don’t take this threat of nuclear war seriously, and I think they’re correct (circumstances can change, of course).

But there are other ways to wage war. We’ve already seen how central drones have become to the fighting in Ukraine, a capability that has built up steadily over the last twenty years. Like aircraft in WWI, they have gone from mere battlefield observation to combat, and their capabilities are sure to expand given that a $10 million tank can now be destroyed by a $300,000 drone.

But one thing that has not been considered very much is the prospect of a full-scale Cyber War. If that sounds relatively harmless, read this:

The event that would come to be known as “Cyber Harbor,” or “Cyber 11th,” started small. One morning, the “autopilot” mode on some Tesla cars started going haywire. First, dozens, then thousands of cars began veering into oncoming traffic all across the country. Emergency rooms were swamped with crash victims. Then, office workers in dozens of industries watched in shock as their computers began spontaneously deleting files. It took about 24 hours for officials to realize that these scattered problems were connected. The power grid was next: Blackouts began in California and soon rolled across most of the U.S. The Internet started crumbling as well. Routine communications became impossible.

It took only a few days for grocery-store shelves to go bare. Gas stations put out “No Fuel” signs. Even if supplies of food and gas were available, trucks couldn’t deliver them. The country’s banking system had collapsed; with credit cards and ATMs disabled, truckers had no way to buy diesel fuel. The backup generators powering hospitals, police stations, water-treatment plants, and other critical infrastructure eventually drained their fuel tanks and went silent.

There is also an eerie similarity to nuclear warfare in that you can’t really defend against this sort of attack either, because doing so would mean rebuilding our IT control networks and systems on a scale far beyond even what’s required to build a missile defence. As with nuclear warfare, the best that can be hoped for is to make it clear to potential attackers that you have the same weapons and they have the same vulnerabilities.

But what if an enemy calculates that dragging the West down to its level would enable a second war, more conventional, to be won? The old nuclear First Strike scenario but without the radioactivity?

Written by Tom Hunter

May 22, 2022 at 10:29 am

It sure beats a backpack weed sprayer

with 5 comments

Very cool news on the farm weed-spraying front.

Farming robot kills 100,000 weeds per hour with lasers

Carbon Robotics has unveiled the third-generation of its Autonomous Weeder, a smart farming robot that identifies weeds and then destroys them with high-power lasers.

As it drives itself down rows of crops, its 12 cameras scan the ground. An onboard computer, powered by AI, identifies weeds, and the robot’s carbon dioxide lasers then zap and kill the plants.

The Autonomous Weeder can eliminate more than 100,000 weeds per hour and weed 15 to 20 acres of crops in one day — for comparison, Myers said a laborer can weed about one acre of his onions per day.

A robot!

With lasers!

Shit, it doesn’t get any better than that, and this is the 21st century after all. I’m sure it can be adapted to work in the slightly more random world of the average Kiwi sheep, cattle and dairy farm.
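The article’s description boils down to a simple sense-decide-act loop: the cameras scan, the AI classifies each plant, the laser fires only at weeds. A toy sketch of that loop in Python – everything here (the species names, the stand-in classifier) is made up for illustration and is not Carbon Robotics’ actual code:

```python
# Toy sketch of the weeder's sense-decide-act loop: classify each plant the
# cameras see, fire the laser only at weeds. The classifier is a stand-in
# for the robot's trained vision model; the species names are illustrative.

WEED_SPECIES = {"fat hen", "nightshade", "pigweed"}  # hypothetical list

def classify(plant):
    # Stand-in for the onboard AI's prediction: "weed" or "crop".
    return "weed" if plant in WEED_SPECIES else "crop"

def scan_row(plants):
    zapped = []
    for plant in plants:
        if classify(plant) == "weed":
            zapped.append(plant)  # i.e. aim and fire the CO2 laser
    return zapped

print(scan_row(["onion", "pigweed", "onion", "fat hen"]))  # → ['pigweed', 'fat hen']
```

The real trick, of course, is the classifier itself; the loop around it is the easy part.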

Written by Tom Hunter

December 1, 2021 at 12:14 pm

Big Brains.

with 2 comments

The photo shown here is a typical CPU (Central Processing Unit) silicon chip used in desktop and laptop computers.

Specifically this is an AMD Ryzen 3 2200G, a so-called “entry-level” chip for people building their own desktop gaming computers. Just a few years ago the power of such chips was still talked of in terms of the number of transistors they held, with one of the classics, the Intel 8086 of the late 1970s, having 29,000.

That AMD chip has 5 billion transistors.

As such, people nowadays usually talk about other measures of power, such as clock speed and “cores”. What’s a core? It’s a processing unit, a computer in itself. If the original silicon chips were said to be “a computer on a chip”, then a 4-core chip like the Ryzen 3 2200G has four computers on a chip, and that’s pretty ordinary now. There are retail chips with 64 cores.

Why do this? Why not just keep making a single core ever larger? Because there are scaling problems, not just in the hardware but in using a single processor to do a job. Instead, use is made of something called parallel computing, where one job is split into many smaller ones, all run at the same time. It started off as something used only by the supercomputers simulating things like nuclear explosions; parallel processing makes it possible to perform ever larger data-processing jobs in human time.
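To make the idea concrete, here’s a minimal sketch of parallel computing in Python: one job (summing squares) split into chunks that run on separate cores at the same time. The chunk count and the workload are arbitrary illustrations:

```python
# Minimal parallel computing: one job (summing squares) split into four
# chunks, each handled by a separate worker process at the same time.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker computes its share independently of the others.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(100_000))
    # Deal the data out into four interleaved chunks, roughly one per core.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        # map() runs all four partial sums concurrently, then collects them.
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```

Same answer as a single loop, but the work is spread across cores – which is exactly why chip makers stopped chasing one ever-bigger core.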

The thing is that the human brain is also, basically, a parallel processor (as Marvin Minsky argued), and a pretty massive one at that: roughly 100 billion neurons, each of which is like a transistor but with thousands of connections (synapses) to other neurons. Those connections, about 100 trillion in all, multiply the brain’s data storage and deliver processing power beyond what should be possible given how much slower nerve impulses are than electronics.

The SF author Arthur C. Clarke, in the book version of 2001: A Space Odyssey, actually references Minsky’s research on neural networks when explaining how the infamous computer HAL 9000 was developed, which shows you the sort of background study Clarke did for his stories. Minsky was an advisor on the movie.

For decades, most of these neural networks amounted to creating artificial “neurons” in software, which was basically a clunking simulation, ultimately limited by the hardware it ran on. You could do interesting things, just slowly.
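For the curious, a software “neuron” of that clunking-simulation sort is only a few lines: a weighted sum of inputs squashed by an activation function. The weights here are hand-picked for illustration, not trained:

```python
# A toy artificial "neuron": weighted sum of inputs plus a bias, squashed
# into the range (0, 1) by a sigmoid activation. Networks of that era were
# just many of these, simulated in software. Weights are hand-picked here.
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

out = neuron([1.0, 0.5], [0.8, -0.4], bias=0.1)
print(round(out, 3))  # → 0.668
```

Training is just nudging those weights until the outputs stop being wrong, millions of times over, which is where the hardware limits bite.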

Which brings me to this news story, World’s Largest Chip Unlocks Brain-Sized AI Models With Cerebras CS-2.

Cores? It has 850,000 of them!

Cerebras Systems today announced that it has created what it bills as the first brain-scale AI solution – a single system that can support 120-trillion parameter AI models, beating out the 100 trillion synapses present in the human brain. In contrast, clusters of GPUs, the most commonly-used device for AI workloads, typically top out at 1 trillion parameters. Cerebras can accomplish this industry-first with a single 850,000-core system, but it can also spread workloads over up to 192 CS-2 systems with 162 million AI-optimized cores to unlock even more performance. 

Are you scared?

I am, and I’ve been in this world for most of my life.

Of course it’s going to take some time to develop the software to truly use this baby, but if we get to the point where the software-hardware configuration can start truly learning and growing itself, well…

Written by Tom Hunter

October 7, 2021 at 9:07 am

Funny … and scary?

leave a comment »

Things are perhaps getting too depressing and heated.

So it’s time for a KitKat moment.

I’m putting both of these videos up together with a simple question for readers.

Rating both videos, are they:

a) Scary

b) Funny

c) Funny and Scary

d) Neither Funny nor Scary

First up is “Inside Biden’s Office”:

Next up is our (slightly?) more distant future. You may have seen YouTube clips published by a robotics company called Boston Dynamics, showing off the robots they’re developing for various uses, starting with “pack dogs” for the US military. They are eerily like animals, and their humanoid robots increasingly move like humans.

Those videos have led to this parody – at least I hope it’s a parody. I’ve never had a lot of faith in the idea of built-in, programmed restrictions on robotic brains, starting with Asimov’s Three Laws of Robotics.

Still, in this case I can’t blame the robot at all.

That’s what’s scary about it. 🙂

Written by Tom Hunter

August 21, 2021 at 1:26 pm

Well, this is depressing

with 4 comments

No need for this level of complexity

Specifically the news that the race is on to build killer robot armies.

They won’t look anything like James Cameron’s famous images from his dystopian hell of The Terminator movies.

(By the way, watch only the first two of the series. After the 1991 sequel they’re totally derivative crap designed only to pull money from your wallet – a warning from friends that merely confirmed what I’d already guessed, having avoided them.)

Blonde and here to kill you.

Still less is it going to look like the Cylons of Battlestar Galactica (BSG), such as “Six”, more’s the pity.

No, as is often the way of reality vs fantasy, they’ll look a lot more mundane, probably not too different from the sort of drones you can buy off-the-shelf nowadays.

And that’s what’s really frightening about them. Unlike nuclear weapons, it doesn’t take a lot of infrastructure or resources to build large numbers of these things.

Also, don’t imagine that an “AI killer robot” is going to have some sort of human-level intelligence, or need to.

That’s not what Artificial Intelligence is really about, despite decades of SF stories like BSG.

The “AI” in this case will amount to little more than the ability to do the following:

  • Recognise a human target, which could be any human at all, or a specific one identified through facial or body recognition (or by their cellphone).
  • Control flight and/or other movements towards the target.
  • Trigger a lethal munition to kill the target. Lethal meaning something as small as a single bullet.

It should be noted that all these capabilities are here now.

The temptation to open Pandora’s Box is irresistible. In early March, the U.S. National Security Commission (NSC) on Artificial Intelligence completed its two-year inquiry, publishing its findings in a dense 750-page report. Its members unanimously concluded that the United States has a “moral imperative” to pursue the use of lethal autonomous weapons, a.k.a. “killer robots.” Otherwise, we risk bringing a rusty knife to a superhuman gunfight.

Citing the threat of China or Russia leading the global artificial intelligence (AI) arms race, the commission’s chairman, former Google CEO Eric Schmidt, urged President Biden to reject a proposed international ban on AI-controlled weapons. Schmidt rightly suspects our major rivals won’t abide by such a treaty, warning U.S. leaders, “This is the tough reality we must face.”

If other superpowers are going to unleash demonic drone swarms on the world, the logic goes, the United States should be the first to open the gates of Hell.

Of course we already have things like the General Atomics Predator drones (“General Atomics”, how 1950s is that?) and others, which have been launching missiles at people for over a decade now. But those have humans in the decision loop, and they’re still big and relatively expensive, although much cheaper than a human-piloted fighter-bomber.

The attack drones currently on the market are plenty dangerous as is. A good example is the KARGU Loitering Munitions System, currently deployed by Turkish forces. This lightweight quadcopter “can be effectively used against static or moving targets through its … real-time image processing capabilities and machine learning algorithms.”

KARGU’s mode of attack is full-on kamikaze. It hovers high in the air as the operator searches for victims. When one is located, the drone dive-bombs its target and explodes. If the concussion doesn’t kill them, the shrapnel will. Just imagine what a thousand could do.

That last is the future. What we’re talking about here is a swarm of such machines and, again unlike SF, these don’t need any centrally organised intelligence, human or AI, to operate. For twenty years now computer simulations have mimicked the swarming movements of schools of fish and flocks of birds using just three simple rules.
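Those simulations are the classic “boids” model, and its three rules – separation (don’t crowd your neighbours), alignment (match their heading) and cohesion (drift toward their centre) – can be sketched in a few lines. This toy version compares every agent against every other; real implementations use only nearby neighbours, and it is of course a flocking demo, not any weapon system:

```python
# A bare-bones "boids" swarm step. Each agent steers by three local rules -
# separation, alignment, cohesion - with no central controller anywhere.

def swarm_step(positions, velocities, sep=0.05, align=0.05, coh=0.01):
    n = len(positions)
    new_vels = []
    for i in range(n):
        px, py = positions[i]
        vx, vy = velocities[i]
        # Averages over all the other agents (toy version of "neighbours").
        cx = sum(p[0] for j, p in enumerate(positions) if j != i) / (n - 1)
        cy = sum(p[1] for j, p in enumerate(positions) if j != i) / (n - 1)
        avx = sum(v[0] for j, v in enumerate(velocities) if j != i) / (n - 1)
        avy = sum(v[1] for j, v in enumerate(velocities) if j != i) / (n - 1)
        vx += coh * (cx - px) + align * (avx - vx)  # cohesion + alignment
        vy += coh * (cy - py) + align * (avy - vy)
        for j in range(n):  # separation: push away from anyone too close
            if j != i:
                dx, dy = px - positions[j][0], py - positions[j][1]
                if dx * dx + dy * dy < 1.0:
                    vx += sep * dx
                    vy += sep * dy
        new_vels.append((vx, vy))
    new_pos = [(p[0] + v[0], p[1] + v[1]) for p, v in zip(positions, new_vels)]
    return new_pos, new_vels

pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vel = [(0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
pos, vel = swarm_step(pos, vel)
```

Run that step in a loop and coherent flocking emerges from purely local decisions, which is the point: no commander, no single brain to take out.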

Once you get into such swarms we’re no longer talking about just picking off a few selected targets:

To raise awareness of this imminent threat, the Future of Life Institute produced the alarming, if poorly acted film Slaughterbots. The finale shows dissident college students having their brains blown out by bird-sized quadcopters.

In a 2018 study conducted for the US Air Force, drone specialist Zachary Kallenborn correctly argued that lethal drone swarms should be declared weapons of mass destruction.

Cheap weapons of mass destruction, too.

Even without that miserable conclusion from the USNSC I would have found it hard to believe that various nations could be held back from pursuing development of these things.

In the future how tempted would a POTUS be by the idea that the entire North Korean nuclear team, military and scientists, could be taken out in one hit by such a swarm, leaving nobody to launch a nuclear counter-strike? Or imagine an Israeli leader looking at the Iranian nuclear group? And that’s in democratic nations. What brakes might there be on the likes of Xi Jinping, Putin and Erdogan?

Of course every weapon system has been countered sooner or later. In this case it may be that in future we’ll each be guarded by a small swarm of counter-drones, starting with the wealthy members of society like Eric Schmidt:

In 2019, PAX published a list of the global corporations most likely to develop lethal autonomous weapon systems. Among the U.S. companies ranked as “high risk” are Amazon, Microsoft, and Oracle, as well as Intel, Palantir, Neurala, Corenova, and Heron Systems. It’s worth noting that the top members of the National Security Commission on AI—all of whom support using these murder machines—include chiefs from Amazon, Microsoft, and Oracle.

Written by Tom Hunter

July 15, 2021 at 8:13 am

Random Fun with Fake News

leave a comment »

First, let us once again admire this Tweet from 2016: “Wakey wakey!”


Next, more humour, as a 16-year-old girl continues to be the perfect human shield for Warmists – and Trump falls into their trap because, as a man in his 70s, he thinks Time Magazine and its Person of the Year are still a thing. Somebody needs to tell him that Time’s once-great rival Newsweek sold for $1.00 a few years ago.

The first thing that came to my mind when I saw this was the following quote:

If you want a vision of the future, Winston, imagine a boot stamping on a human face – forever.

Speaking of visions of the future, here’s some very scary stuff from the land of AI-enhanced computer graphics with Deep Fakes:

Yes, that is Arnold’s face perfectly applied over Clint Eastwood’s for two minutes of the famous “Do you feel lucky, Punk?” scene from the movie Dirty Harry. They used a voice actor and he hasn’t got it quite right, but that’s next on the AI-CGI front. What they have to date is scary-incredible.

So in 2023, or perhaps even in 2020, we could see Topham Guerin producing an advert of Jacinda Ardern admiring Trump and comparing the strengths and weaknesses of a Glock handgun vs an AR-15.

I suggest they use Brenna Spencer as the base:

On second thoughts that might give Jacinda a landslide election win. Perhaps instead an “interview” of her praising Corbyn and Hugo Chavez, along with plans for Landcorp to take over the dairy farms in NZ that they don’t already own?

Maybe that’s the flaw in using Deep Fakes: in trying to reduce or collapse the opposition’s vote you might end up throwing your own voters over to them.

Lastly a reminder of the philosophical basis behind all this: Post-Modernism unleashed.

Written by Tom Hunter

December 16, 2019 at 1:27 am

How to write like a Snowflake

leave a comment »

Since joining No Minister early this year I’ve observed that several of our commentators have a very basic style of writing. Normally I would add “thought”, but that’s nowhere in sight with these comments.

So I decided to look back through many of them, including deleted ones that I copied before they vanished, to see what the common elements were. Obviously, with our new high and strict standards in place, we may see some of the following go away, but I reckon a few will reappear sooner or later as their ids break free of self-constraint.

Meantime, you can have some fun using this deconstructive analysis to look at future comments, and perhaps even put your own comment together (it’s very easy). A chocolate fish to the first commentator who can prove that they’ve turned these rules into AIML algorithms.

Step 1, an expression of surprise:

  • Jesus Christ
  • My god
  • Goodness gracious
  • Fuck me.
  • Jesus
  • Christ

Step 2, abusive, childish name change of the blogger:

  • Gravetodger
  • Adolt
  • Chunter / Bunter 
  • Tom Tom / Tommy

Step 2a, add an adjective to imply the lowly state of the blogger:

  • poor
  • wee
  • little
  • old
  • aged
  • sad
  • lil’

Step 3: provide a straw man of what the blogger said, basically variations on the now famous “So What You’re Saying Is…”:

  • you love racists / xenophobes / white supremacists / misogynists / sexists / bigots / Far-Right / Alt-Right
  • X was a commie
  • Y is a liar
  • you defend racists / xenophobes / white supremacists / misogynists / sexists / bigots / Far-Right / Alt-Right
  • you miss / wish for / desire / want / hope for:

– the Southern Confederacy
– slavery
– Apartheid
– Fascism
– the past
– death squads

  • totalitarianism is on the rise.
  • the Commies are taking over
  • Coloured folk are taking over
  • Women are taking over

Step 3a, combine these with sophisticated-sounding terms…

  • crypto
  • neo
  • pseudo
  • hyper
  • arch
  • faux
  • tautological

Step 3b, mix in some adverbs for excessive and unnecessary use:

  • strange /ly
  • utter /ly
  • total /ly
  • monstrous /ly
  • complete /ly
  • definite /ly
  • undoubted /ly
  • obvious /ly
  • clear /ly
  • definitive /ly
  • interesting /ly
  • facile /ly

Step 4, move to abuse of the intelligence of the blogger:

  • no / lacking / bereft of / without / missing ….
  • … evidence / logic / sense / facts
  • just lumps in
  • facts dressed up as… 
  • cognitive decline
  • senile
  • conspiracy / conspiratorial
  • clueless
  • simple minded
  • flailing
  • impotent
  • clowns
  • witless
  • trite
  • facile

Step 5, mix in some talk about the emotional state of the blogger:

  • confused
  • hysterical
  • shrieking
  • mad
  • rage
  • angry
  • hate-filled
  • gutless
  • feeble
  • desperate
  • raving
  • bad-hair day
  • aggressive
  • lathered up
  • victim
  • lies / liar / lying
  • incandescent with rage
  • verbosity and bombast

Step 6, throw in a conclusion about the blogger, based on the sophisticated analysis already provided:

  • moron / cretin / idiot / imbecile / tool / fascist / numbskull

… and spice it up a bit by adding the following…

  • fucking / utter / total / complete

Step 7, make sure you come across as relaxed and unconcerned in order to avoid revealing how angry and unhinged you really are (combine with Step 3b):

  • amusing
  • amused
  • bemused
  • funny
  • laugh / laughing / laughter
  • sad / sadly
  • disappointing / disappointed
  • regretfully
  • hilarious

Step 8, wrap the whole thing up with a little fake concern as to why this was written and express hope for future improvements:

  • If you had…
  • you would be on to something if…
  • if only…
  • so that’s something
  • keep trying
  • Give it time…
  • Better luck…
  • My work here is done
  • Back to studying…
  • Come on, let’s go
  • Back to the asylum / pensioners home / street / gutter
  • Here’s your chance…
  • Come on. 
  • Teachable moment.
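And since a chocolate fish is on offer: the eight steps above really are trivially mechanisable. A throwaway Python sketch (plain Python rather than AIML, with hypothetical word lists lifted from the steps, one option per step here for brevity):

```python
# A throwaway "snowflake comment" generator built from the eight steps above.
# The word lists are small illustrative samples; a fixed seed keeps the
# output reproducible.
import random

STEPS = [
    ["My god", "Goodness gracious"],                 # 1: surprise
    ["poor wee Tommy", "sad old Bunter"],            # 2/2a: name + adjective
    ["so what you're saying is you miss the past"],  # 3: straw man
    ["utterly bereft of logic"],                     # 3b/4: adverb + abuse
    ["hysterical, rage-filled"],                     # 5: emotional state
    ["complete numbskull"],                          # 6: conclusion
    ["I'm bemused, frankly"],                        # 7: relaxed pose
    ["Better luck next time."],                      # 8: fake concern
]

def generate_comment(seed=0):
    rng = random.Random(seed)
    return " ... ".join(rng.choice(options) for options in STEPS)

print(generate_comment())
```

Hardly sophisticated, which is rather the point.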

Written by Tom Hunter

July 11, 2019 at 11:10 pm

No, Siri!

leave a comment »

We live in an age of marvels, although I’m sure anybody driving a car and watching planes in the 1910s would have said much the same.

One of those marvels is the so-called “Smart Phone”, or just “phone” (typically “iPhone”) to anybody under the age of 40! 😀  It was the introduction of the original iPhone in 2007 that really started the ball rolling on these devices, and given the impact they’ve had on work and business, it’s sometimes hard to believe that they’ve been around only a little over a decade.

A whole host of devices, such as video cameras, SLR cameras (really, any cameras at all), music players, voice recorders and so forth, have been marginalised by the miraculous smartphone. In a rare case of successful cannibalisation, Apple even thrived as the iPhone destroyed its previous wonder-toy, the iPod, of which Podcasts are the only living remnant.

The apps available have also meant that many young people don’t even bother much with a laptop, let alone a desktop computer. There are apps for navigation, weather forecasts, fishing, star-gazing and a million other uses, not to mention the horrors of Facebook, Twitter, Snapchat and Instagram – although gaming message systems such as Discord have been supplanting the first two. The young’uns at Beloved Child’s university all went with Discord for group work, the Gen Z hatred of Facebook and Twitter being the driver. There are even apps that give the average rifle shooter the sort of detailed ballistics calculations which, as recently as a decade ago, were available only to military snipers.

But there are still limits on what these things can do, and my recent trip to Wellington showed that, in vivid and annoying detail, and also what it means to become reliant on technology.

A year or so ago, Beloved and I were driving to Auckland on the Southern Motorway when Siri told us to pull off at Papakura. Mystified and not pressed for time, we decided to follow the directions through the backstreets before rejoining the motorway. At which point we found out that there had been a crash on the motorway that had caused huge traffic jams.

Clever Siri.

Clearly, it was getting information from the NZTA, as has been the case with overseas traffic-control systems for some years now.

And so we come to this week’s trip: a joyride to Wellington. As those of you in the region will know, there are some fairly extensive local roadworks that have been underway over the last year. Not just Transmission Gully, which has actually not interfered that much, but works from Levin through Otaki to join the Northern end of the Kapiti Expressway. Otaki in particular has been a pain, with a new roadbridge being built over the top of the railway. This is probably also enabling the new Expressway, but it’s hard to tell. In any case it’s a mess and when my son and I drove down last year we hit terrible traffic jams in the area. The second time, Siri sent us through country back roads to the East, which landed us back in Otaki but past the worst of the traffic. I expected the same this time, and leaving from the Waikato that’s what Siri showed.

Until Taihape, where we were suddenly presented with this on the car’s screen:

No, Siri!

A diversion through Palmerston North (shudder), and then East through the Wairarapa? Or a longer divert through Feilding, turning off in the heart of the Rangitikei River area. Okay, so vineyards, but…

So what the hell did Siri know about the coastal SH1 roadworks that we did not? It was a real problem, because we had figured on being in Wellington by 3:30 or so, just in time to pick up Beloved Child at the end of the varsity day for a birthday dinner. This was now looking like a pickup from the flat at 5-ish: a pain in the ass for everybody. We decided to press on to Bulls/Sanson and see what she said then.

No, Siri.

Note that at no stage was the SH1 “option” even presented, which would be a problem in the tap-and-go mode with which drivers are supposed to be able to use these things while driving, the same way as changing songs. Luckily there were two of us.

With a little effort, I was able to force it to show Levin to Wellington and get the key data: 1 hour 26 minutes. OK, so what the hell were all the diversions about? Again we ignored her and pressed on to Levin. Only to be presented with this “option”:

No, Siri

Seriously? An almost four-hour diversion, involving backtracking a good 50–80 km? Bugger that for a game of soldiers. But of course the worry you get nowadays is wondering whether Siri is right and you’re wrong. What then? A completely stuffed-up birthday?

In the end, it turned out that the new Otaki roadbridge had been opened to traffic, and while work continued all around it and beyond, there were no traffic problems at all, certainly not like 2018. We were in the Capital by 3:40pm and all was well.

What I think has happened here is that while Siri may get updates from the NZTA, they’re not fully up to date and they probably don’t include actual traffic volumes. Even so, if I could work out a route from Taihape with a total time of about three hours, why couldn’t Siri? Isn’t she supposed to be “Smart”?
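For what it’s worth, the route arithmetic itself is simple: a navigation app computes shortest paths over a road graph weighted by travel time. A toy sketch with made-up minutes (only the Levin–Wellington 1h26m figure comes from the trip above; the rest are invented for illustration):

```python
# A toy shortest-path route calculation (Dijkstra's algorithm) over a road
# graph weighted by travel time in minutes. Most times here are made up -
# this just illustrates what any navigation app computes under the hood.
import heapq

ROADS = {  # town: [(neighbour, minutes), ...]
    "Taihape": [("Bulls", 50), ("Palmerston North", 80)],
    "Bulls": [("Levin", 45), ("Palmerston North", 30)],
    "Palmerston North": [("Levin", 40)],
    "Levin": [("Wellington", 86)],  # the 1h26m leg from the trip
    "Wellington": [],
}

def fastest_time(start, goal):
    # Classic Dijkstra: always expand the closest-so-far town next.
    queue = [(0, start)]
    best = {start: 0}
    while queue:
        minutes, town = heapq.heappop(queue)
        if town == goal:
            return minutes
        for neighbour, cost in ROADS[town]:
            t = minutes + cost
            if t < best.get(neighbour, float("inf")):
                best[neighbour] = t
                heapq.heappush(queue, (t, neighbour))
    return None

print(fastest_time("Taihape", "Wellington"))  # → 181 (50 + 45 + 86)
```

The hard part isn’t the algorithm, it’s keeping those edge weights honest against real roadworks and real traffic, which is exactly where Siri fell over.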

We were ready for her on the return trip, having seen the conditions ourselves, and as you might guess, this is what she showed us as the optimum choices for driving North:

No, Siri.

She’ll improve of course, as all these things do. But what happens when people stop looking at the world outside their screens, when what the computer displays for you becomes your reality? When you’re living in the world of Neuromancer?

Written by Tom Hunter

April 12, 2019 at 2:31 am

Posted in New Zealand
