The 60 Minutes program (9 July 2023) on “The Revolution” that artificial intelligence (AI) is said to have brought entrenched mistaken beliefs about AI, including the claimed danger that AI could “take over” the world. At the same time, the program failed to emphasize sufficiently the clear and present dangers that AI poses to society, civilization, and humanity by making such powerful tools available to all and sundry.
AI was said to be capable of teaching itself “superhuman skills” and how to speak like a human, including with “creativity”. Reporter Scott Pelley pronounced himself “speechless” at the “breathtaking insights” of a poem written by Bard, Google's AI chatbot.
The program failed to make the essential distinction between AI in general and common-language AI chatbots, which have been so much in the news since ChatGPT was released in November 2022.
AI as a whole uses algorithms that enable computers to “learn” by examining data sets; but it should be emphasized that “learning” here does not mean understanding. It means only the detection of patterns and regularities, which can then be used to make predictions on the assumption that the same regularities will be present in further additions to the data sets. Human beings, however, have learned, at least collectively, that past performance is no guarantee of future performance; that scientific theories have a limited useful lifetime; and, above all, that the fact that something happens to be so does not make it desirable for humankind.
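To make that notion of “learning” concrete, here is a minimal sketch in Python, with invented data, of what pattern detection and prediction amount to: a regularity is fitted to past observations and simply assumed to continue.

```python
# A minimal sketch of machine "learning": detect a regularity in past
# data, then predict by assuming the regularity persists. The data
# here are invented purely for illustration.
import numpy as np

# Past observations that happen to follow a pattern (roughly y = 2x + 1).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# "Learning": fit a straight line to the observed points.
slope, intercept = np.polyfit(x, y, deg=1)

# "Prediction": extrapolate, assuming the same regularity holds at x = 10.
print(slope * 10.0 + intercept)  # about 21 -- right only if the pattern persists

# Note what is absent: any notion of *why* the pattern exists. If the
# underlying process changes beyond x = 4, the prediction fails silently.
```

The point of the sketch is that nothing in the fitted model corresponds to understanding; the prediction is sound only for as long as the world keeps resembling the data.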
Common-language bots add to these deficiencies the ability to communicate in ordinary human language. The criterion of success here seems to be whether a human being conversing with a chatbot is able to detect that the bot is not a human being.
As the 60 Minutes program correctly noted, common-language AI bots will therefore display the same qualities as humans do when they speak. 60 Minutes marveled that Bard had gained “the sum of human knowledge” by reading “everything on the internet”. In other words, an indistinguishable mixture of truths, falsehoods, uncertainties, deliberate deceit, myths, shibboleths, outdated beliefs, and contradictory religious assertions.
As one of Google's experts correctly said, although Bard might seem to be making judgments, it was not really doing so.
Of course not, because Bard understands nothing.
The program also expressed admiration and awe for AI chess programs that are said to have discovered entirely new strategies, for example, “taking unusually early action to create a weakness in the opponent’s king’s position and then using this weakness as a motif throughout the rest of its play” [i]. But this was done by running through enormous numbers of combinations and sequences of moves at amazingly high speed to find the best lines of play at each stage of a particular game, which is not the same as a universal strategy; it is more like the subsidiary generalities human chess players apply at various stages, such as avoiding “backward passed pawns”: general rules occasionally outweighed by other considerations. I wonder what we might learn about AI “learning”, as opposed to understanding, by having AI chess machines play against one another.
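For readers who want to see what this kind of exhaustive look-ahead amounts to, here is a minimal, runnable sketch in Python of minimax game-tree search, a classical brute-force technique of the kind described above (modern engines add pruning and learned evaluations on top). To keep it self-contained it uses the toy game of Nim rather than chess; it is an illustration, not any real engine's code.

```python
# A minimal, runnable sketch of minimax game-tree search, applied to
# the toy game of Nim: players alternately take 1-3 stones, and the
# player who takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def minimax(stones: int, maximizing: bool) -> int:
    """Score a position by exhaustive look-ahead: +1 if the maximizing
    player can force a win from here, -1 if not."""
    if stones == 0:
        # No stones left: the previous player took the last stone and
        # won, so the side now to move has lost.
        return -1 if maximizing else 1
    moves = range(1, min(3, stones) + 1)
    scores = [minimax(stones - take, not maximizing) for take in moves]
    return max(scores) if maximizing else min(scores)

# The search "discovers" the classical strategy without being told it:
# positions with a multiple of 4 stones are lost for the player to move.
for n in range(1, 10):
    print(n, "win" if minimax(n, True) == 1 else "lose")
```

Note that the search “discovers” the well-known rule of leaving the opponent a multiple of four stones, yet it contains no representation of that rule: the strategy emerges from brute enumeration of positions, not from understanding.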
It is fantastic speed together with vast memory that makes AI seem so amazing. That reminded me of a one-time boss of mine, a mathematician turned administrator who had a lovely sense of humor and a facility for witty quips. I often referred to him as “smart”, but whenever I did so in the presence of one of his former math colleagues, that colleague would correct me, saying, “He's quick”, with emphasis on the difference between quick and smart. I recall also the Australian C. J. S. Purdy, one of the best correspondence-chess players in the world in the 1940s and 1950s, who was nowhere near championship strength in over-the-board tournaments, with their strict limits on time.
AI is enormously quick, but it is not smart. The very first axiom of computing still applies: GIGO — Garbage In, Garbage Out. AI is nothing more than a very, very fast and very, very big computer. It learns only from what human beings have produced.
Some eight decades ago, Isaac Asimov explored in science fiction[ii] what would be involved in teaching an android, a robot indistinguishable in appearance from a human being, how to think and make judgments as we would ideally like human beings to do. What those stories succeeded in demonstrating was the impossibility of making a robot behave in all respects like an ideal human being: that is to say, if one could even imagine it, a human being whose free-will choices never disadvantage or offend other humans or humankind as a whole.
All sorts of applications of AI present dangers of the highest order to a decent, equitable, human civilization. Google's CEO, Sundar Pichai, described Bard as a work in progress, but appeared not to understand its inevitable limits and dangers. That AI chatbots deliver false information, such as the titles of non-existent books, is described by the technical gurus as “hallucinating”; in reality it is an obvious, natural, indeed inevitable consequence of feeding everything on the internet into an unthinking computer and then inviting it to speak in ordinary language. Since the bots do not understand what they are doing, they cannot distinguish truth from falsity, or reliable internet sources from unreliable ones, or honest mistakes from deliberate deceit.
Pichai appeared not to recognize the clear, present, enormous dangers of making these tools openly available. Common-language bots only add to the incalculable dangers that AI has already brought, for example by making it possible to create videos in which avatars of living people are completely indistinguishable from the actual people themselves. The same could presumably be done with individuals now dead, if there exists a large enough store of videos in which they move and speak.
This presages the worst possible scenarios for political campaigns.
Consider just one hypothetical example. When President Obama was running against John McCain, McCain gave a movingly decent response to a lady in his audience who had voiced the opinion that Obama was a Muslim. Imagine the impossibility of ever laying that canard to rest if AI had generated a video of an avatar, indistinguishable from a young Obama, ritually washing its hands on a pilgrimage to Mecca.
Google is in competition with other high-tech companies to bring to market the most successful applications of AI, including common-language bots. It is inconceivable that all the competitors would ever call a halt in response to the evident problems. And even if all of the companies in the (relatively) “Free World” were to declare a moratorium, the world's bad actors would still cheerfully proceed with further development.
Pichai’s comments on how to respond to the problems and dangers were perhaps naïve, perhaps disingenuous, perhaps hypocritical and deliberately evasive: he acknowledged that regulation was necessary, and declared himself optimistic that regulation was feasible because so many voices had started to worry about the issue so early in the technology's development. All that was needed, he opined, was for the countries of the world to get together to decide how best to proceed.
Perhaps he was thinking of some other world, one that does not include the actual countries in the present world.
The impossibility of staving off the many ways in which bad actors will use AI technology led me to recall the Fermi Paradox.
According to generally accepted Big-Bang cosmology, the universe is about three times older than our solar system. The probability seems very high, therefore, that intelligent life evolved on some number of other planets several billion years before it did on Earth. Those beings would have been capable of space travel for billions of years, including presumably the capability of distributing at least robotic vehicles through most of the galaxy. Enrico Fermi, the physicist who created the first nuclear pile (reactor), remarked on these probabilities by asking, “Where are they?”, a question that has become iconic as the Fermi Paradox.
A number of solutions to the Paradox have been suggested. One group of solutions holds that advanced civilizations are not necessarily interested in exploring or colonizing the universe. Another holds that the physical form of a greatly advanced civilization is such that we cannot recognize or communicate with it[iii]. A third possibility is that civilizations that attained the capacity for space travel never developed social and political skills to match their technological development, and self-destructed.
The AI capabilities now being developed, together with the current state of international non-coordination, suggest that, for civilizations that developed similarly to ours, the most likely explanation of the Fermi Paradox is self-destruction. These powerful AI applications are being released into a world that is full of bad actors and in which overwhelming pluralities of the general population believe that computers do not make mistakes and that the internet is a source of reliable information.
[i] Matthew Sadler & Natasha Regan, “DeepMind’s superhuman AI is rewriting how we play chess”, Wired; https://www.wired.co.uk/article/deepmind-ai-chess
[ii] Isaac Asimov, I, Robot (1950); a series of stories later collected in book form.
[iii] Pertinent novels include Olaf Stapledon’s Star Maker and Fred Hoyle’s The Black Cloud.