Right now there is an explosion of public interest in artificial intelligence, or AI for short.
This was sparked by the mass availability of two artificial intelligence generators, both developed by the Microsoft-backed company OpenAI.
Entering a few specifications into Dall-E will generate an image of your choice. ChatGPT will produce a mostly usable piece of prose on a given topic, tailored to the user's instructions.
Google is working on its ChatGPT alternative called Bard. There is also a program called Jukebox that creates “new” songs.
Not surprisingly, all this artificial intelligence has panicked some graphic designers, writers and musicians who fear they’re on the brink of being made redundant by machines that can do what they do better, faster and cheaper.
AI will certainly cause a major shift in the workplace, just as printing meant unemployment for many scribes, cars all but finished off horse-drawn carriages, and calculators replaced counting houses.
But there is no evidence today that computers armed with artificial intelligence will outwit or replace our species unless we ask them to.
AI is not true intelligence. It does not think creatively for itself, and it may never do so. It is limited by being "artificial" in both senses of the word: "made by human beings rather than occurring naturally" and "false or bogus", a mere substitute for the real thing, like plastic flowers.
AI programs run into problems
The old computing adage – GIGO, Garbage In, Garbage Out – still holds true. Image and prose generators are only as good as the instructions they receive from humans and the source material, created by humans, which they assemble at electronic speed, far faster than we can.
AI programs cannot think for themselves and are already in trouble because of the imperfect thoughts of the human sources they draw from.
Alphabet, the parent company of Google, has just postponed the launch of Bard after a test run in which it repeated a common misconception: that NASA's James Webb Space Telescope took the first pictures of planets beyond the solar system.
In 2016, Microsoft hastily shut down Tay, a chatbot on Twitter, after it started posting sexist and racist messages. Now where could it have gotten those ideas?
Meta, the owner of Facebook, was embarrassed when its BlenderBot sided with the many doom-scrollers, declaring that since deleting Facebook its life had been much better.
Some fear, or hope, that ChatGPT could mean the end of homework, because teachers would be unable to tell whether students had cheated using AI. Proctored written exams would soon put an end to that.
It turns out that AI is a mediocre student
Besides, schoolwork occupies relatively basic rungs on the ladder of learning.
In university and graduate school, AI turns out to be a mediocre student. Tested at the University of Minnesota Law School, ChatGPT scored a lowly C+. Its B at Wharton Business School was slightly better, but still hovers around average.
Mildly concerned, newspaper columnist Hugo Rifkind asked ChatGPT to write an article on a given topic in his style. To his relief, the first attempt was illiterate and unusable.
He then supplied some quotes and jokes to include. Based on this additional input, the second effort was competent, but nowhere near the standard of his own work or worthy of appearing in the Times.
AI is already replacing some drudge work, although it will still need to be told what to do by humans.
Those few journalists tasked with simply rewriting press releases may be an endangered species, though it will be a long time before digital sub-editors can reliably produce acceptable puns. Graphic designers who produce standard images for advertising could be replaced. There is already a raging dispute over the copyright of the original images and written material that inspire the new output.
It may be cheaper and less risky to hire people to do the work instead.
It always works according to the guidelines set by human beings
Artificial intelligence offers humanity great benefits. It can process data with a speed and accuracy we could never match. At this week's massive LEAP technology conference in Riyadh, where more than 200,000 delegates were accredited, attendees heard that artificial intelligence will be able to save at least 10% of electricity consumption simply by monitoring devices and facilities and turning them off when not in use.
Likewise, the autonomous vehicles now being designed, for the roads and the air, will only be able to operate safely because AI can assimilate the myriad inputs from the vehicle itself, from other vehicles and from monitoring systems in the surrounding infrastructure.
None of this means that AI "thinks for itself." It always operates according to guidelines and parameters set by human beings. Clearly there are potential dystopian implications: justice administered by computers on the basis of their inputs; machines programmed to kill autonomously; or repressive and highly effective surveillance of people.
But the malevolent use of all this would first have to be driven by bad people.
We are a long way from the so-called “Singularity”, a hypothetical point, invented by philosophers and science fiction writers, when machines program and build better machines, replacing humans and other carbon-based life forms. It may never happen.
Some techies think we are already on our way to being superseded. Last year, Blake Lemoine – a software engineer who worked on Google's LAMDA (Language Model for Dialog Applications) – was put on leave after claiming the program had the consciousness of a seven- or eight-year-old child, fearing death and knowing "its rights".
He espoused a reductionist view of humanity: that we too are mere stimulus-response machines, incapable of doing anything truly new, merely juggling existing stimuli, just like artificial intelligence.
Musk has talked about augmenting the human brain with microchips
This approach seems to be contradicted by the inexplicable and so far unreplicable phenomena of evolution, genuine creative intelligence, consciousness, and indeed life itself.
Novelist Philip K. Dick explored the possible differences between humans and hypothetical superintelligent machines in Do Androids Dream of Electric Sheep?
Those who have seen the Blade Runner films inspired by the book will remember that the most valuable things in Dick's dystopia are living creatures.
Elon Musk has talked about augmenting the human brain with microchips. This may soon be possible to a limited extent. Philosopher Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, advises caution.
If you could actually upload the contents of your mind to a microchip, whether in your skull or in the cloud, she points out, you would still be dead. Real human intelligence has a lot of life left in it.