“Intelligence” has been defined in a number of different ways. Few people would quarrel with “the ability to acquire and apply knowledge and skills”: an ability, not simply a store of impersonal knowledge (which is why such usage as in “Military Intelligence” is misleading, since that describes mere knowledge).
As to applying knowledge or skills, the question arises, why do that? To what purpose?
That question cannot be answered without considering what is of concern to human beings.
From that point of view, it is a category mistake to associate “intelligence” with a non-human entity. Few would disagree, at any rate, that the very concept of intelligence is derived from human experience and human behavior. The phrase “artificial intelligence” (henceforth just AI) is therefore an oxymoron, since it suggests that a non-human entity could possess a trait that is fully the same as, identical in all respects with, a trait that is characteristic of human beings.
That misunderstands human nature.
All human behaviors and abilities are governed by the complex processes and interactions of mind and body, which express themselves in physical terms through electrical signals and physiological reactions. Human thinking is inseparable from human emotions. (Telling examples, even proofs, of that are the placebo and nocebo effects.)
For example, human beings on the whole do not approve of obtaining knowledge at the cost of harming other human beings; clinical trials in medicine eschew protocols that would require some participants to be harmed. By contrast, a knowledge-seeking entity that does not care about human suffering would simply pursue the best available knowledge, including through experiments on humans.
The notion that machines, robots, could somehow be designed to assist human beings safely by incorporating “rules” about not harming humans was explored by Isaac Asimov in his sci-fi writings, which should be required reading for AI enthusiasts and entrepreneurs.
The current hysterical fad to develop and apply AI may reflect in some part a faith that sufficiently capacious and speedy data-gathering and data-analysis could somehow be an authentic replication of what we mean by human intelligence. That faith fails to distinguish two quite different types of application: data-gathering, and data-analysis, that is, the interpretation, extraction, or addition of meaning.
To assist in medical diagnosis, for instance, doctors could certainly benefit from huge databases with statistics about possible correlations; but that is entirely distinct from generative AI, which uses models to produce text, images, videos, and so on; for instance ChatGPT, built on a “large language model” (LLM), uses generative-AI techniques to produce human-like text responses and permits unscrupulous students to have essays written for them.
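To make the distinction concrete, here is a minimal sketch (Python, with invented toy data; the records and numbers are purely hypothetical) of the statistical kind of assistance meant here, reporting a correlation from records, as opposed to generating plausible-sounding text:

    # Minimal sketch with invented data: the kind of statistical summary a
    # diagnostic database could report, as distinct from generative text output.
    from statistics import correlation  # Python 3.10+

    # Each record: (symptom present? 1/0, disease confirmed? 1/0) -- toy values
    records = [(1, 1), (1, 1), (0, 0), (1, 0), (0, 0), (1, 1), (0, 1), (0, 0)]
    symptom = [r[0] for r in records]
    disease = [r[1] for r in records]

    # Pearson correlation between symptom and confirmed diagnosis
    r = correlation(symptom, disease)
    print(f"symptom/diagnosis correlation: {r:.2f}")

    # A generative model, by contrast, reports no such statistic: it samples
    # plausible-sounding text conditioned on a prompt, with no guarantee of
    # factual grounding.

The point is only that a correlation is a verifiable summary of data, whereas generated text is not anchored to any such check.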
The time-honored criterion for whether an artificial entity can behave genuinely like a human being is the Turing Test: a human interacts with a machine and tries, from its responses and outputs, to decide whether it is another human being; or, some human adjudicator or judge interacts with both a human and a machine, without knowing which is which, and tries to identify the human by comparing the responses.
Some claims of passing the Turing test have been made recently: “GPT-4.5 is the first AI model to pass an authentic Turing test, scientists say” [1].
However, one might well disagree, given that the finding that “GPT-4.5 could fool people into thinking it was another human 73% of the time” is described as “resoundingly passing” the Turing Test. The “judges” were two groups, of sizes 126 and 158. “Participants had 5 minute conversations simultaneously with another human participant and one of these systems before judging which conversational partner they thought was human”.
Five-minute conversations hardly constitute much of a test. I might suggest not that GPT-4.5 passed a Turing Test, but rather that the judges who were fooled (73% of them) were not very smart. So far, pertinent reports indicate that AI-generated student essays do not fool competent, experienced teachers.
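For what the reported figures amount to arithmetically, a rough check is easy to lay out (a sketch under assumptions of my own, not stated in [1]: that the 73% figure applies uniformly across both groups and that the judgments were independent):

    # Back-of-the-envelope check of the figures reported in [1].
    # Assumptions (mine, not the article's): 73% applies uniformly to all
    # judgments and each judgment is independent.
    from math import comb

    n = 126 + 158             # judges in the two groups reported in [1]
    fooled = round(0.73 * n)  # judgments that picked GPT-4.5 as the human

    # Exact binomial tail: probability of at least `fooled` such judgments
    # out of n if every judge were merely guessing at random (p = 0.5).
    p_tail = sum(comb(n, k) for k in range(fooled, n + 1)) / 2**n

    print(f"{fooled} of {n} judges fooled; "
          f"chance of that by pure guessing: {p_tail:.1e}")

Even granting that 73% is far above the 50% chance baseline, the calculation says only how often judges were fooled in five-minute chats; it says nothing about whether what fooled them deserves to be called intelligence.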
Nevertheless, the hype about AI is roping in gullibles at all levels. The British Government has developed and tested “in house” “Redbox, a generative AI tool” that, some suspect, may be “advising” the Prime Minister [2].
I also label “gullible” (or worse) the suggestion that AI could improve the reliability of Wikipedia [3].
All the GPTs, the “large-language models”, learn from what human beings have created. It remains to be demonstrated that these machines can learn to distinguish reliable from unreliable information or approaches. I doubt that can ever be possible, since human beings have never collectively agreed on a method for doing so. Quite the contrary. In any particular subject or on any particular topic, some sort of consensus forms, which is the lowest common denominator [4] among a sufficiently large majority of the researchers or scholars who are the recognized experts because the topic or subject is their chief preoccupation.
That majority consensus is what the media and the general public come to accept as “true”. The minority of dissenters (experts and well-informed observers and students) are rarely able to survive the “peer review” that largely determines opportunities to publish and to pursue and succeed at careers [5].
Robots that learn from what the human community has come to know and understand could do no more than mention that there are dissenting voices to what is generally believed; and they would merely, as Wikipedia for example has done, add further weight to any given majority consensus — even as the plain facts of human history, in particular the histories of science and religion, demonstrate unequivocally that any given majority consensus has a limited lifetime before being superseded.
It seems to me realistic, therefore, to worry that the hyped potential benefits of AI will be less influential than the damaging suppression of unorthodox minority views that have, in the past, correctly identified flaws in the majority consensus [6].
Most important for human beings and human societies are such things as love, loyalty, empathy; manners; ethics, morals, values. Those are governed by emotions mediated by such factors as knowledge, intelligence, cultural and parental and social environment and history. Nothing “artificial” could come to understand — to feel — those.
Using AI for anything except pure data-gathering and statistical analysis would be a recipe for disaster. Decisions about policies and actions need to remain in the hands of human judgment.
There is a long history of attempts to improve human institutions and practices by consulting “objective” data instead of letting human beings use their judgment. It is far from clear that such initiatives have been beneficial: consider, for example, “judging” scientific or scholarly value by numbers of publications, or numbers of citations of publications, or “impact factors” of journals. Nor have academic departments always prospered better with rotating chairs than with autocratic Herr Professors.
Human beings are not perfect. There is no reason to imagine that human-designed machines or protocols can be perfect either; and they are likely to have unwanted corollaries that are recognized only when it is too late to prevent further harms.
*******************************************************************************************************
[1] Roland Moore-Colyer, “GPT-4.5 is the first AI model to pass an authentic Turing test, scientists say”; 13 April 2025; https://www.livescience.com/technology/artificial-intelligence/open-ai-gpt-4-5-is-the-first-ai-model-to-pass-an-authentic-turing-test-scientists-say
[2] Chris Stokel-Walker, “Is Keir Starmer advised by AI?”, New Scientist, 3 May 2025, p. 12
[3] New Scientist, 9 November 2024, p. 10
[4] More or less inevitably, this is an indication of mediocrity — J. Klein, “Hegemony of mediocrity in contemporary sciences, particularly in immunology”, Lymphology, 18 (1985):122-31.
[5] Henry H. Bauer, “Science in the 21st century: knowledge monopolies and research cartels”, Journal of Scientific Exploration, 18 (2004): 643-60;
also his Dogmatism in Science and Medicine: How dominant theories monopolize research and stifle the search for truth, McFarland, 2012.
[6] For instance, the COVID pandemic was almost globally mis-managed even though well-informed experts knew better and said so publicly — Stephen Macedo & Frances Lee, In Covid's Wake — How Our Politics Failed Us, Princeton University Press, 2025;
Henry H. Bauer, “COVID was mismanaged: How not to repeat that history?”
https://henryhbauer.substack.com/p/covid-was-mismanaged-how-not-to-repeat
AI-generated information, like other computer-generated information, is subject to the GIGO rule (Garbage In, Garbage Out). Those who use it note that it relies on data generated by establishment "experts" and does not think independently. I have also read (N.B.: I never use AI) that some who use AI to obtain references have found that some of the references do not exist, i.e., they were some kind of hallucination by the machine. Fundamentally, AI and AI-run robotics are a threat to the human race, as was acknowledged by the AI enthusiast Elon Musk. He describes a future in which most work is done by AI and robots and the majority of people are unemployable and live on government stipends. He seems okay with that. I think it is nightmarish and worse than Communism. Even the Communists had a system where people worked.
Deep learning may assume greater importance for cloud security and for detecting network threats, beyond its basis in data prompting and modeling.