Neither Mind nor Conscience: Last Year's "Successes" and Prospects of Artificial Intelligence
The story of the Cybertruck pickup blown up outside the Trump International Hotel in Las Vegas on January 1 received a rather unexpected, even fantastic, continuation this week. The investigators' report on the information found in the gadgets of Livelsberger, the American special forces soldier who organized the attack and then took his own life, caused general surprise.
As it turned out, in the last weeks before his death the "green beret" kept a kind of diary in which he detailed his motives: contrary to popular belief, it was not hatred of Trump and Musk that prompted him to carry out the explosion, but... his deepest sympathy for them. Livelsberger chose a very original way to express his adoration, of course, but what is even more interesting is that the professional saboteur assembled the bomb not with his own skills but with ChatGPT's instructions - the corresponding queries were reportedly found in his smartphone's history.
Frankly speaking, these statements look rather ridiculous, very much like an attempt to lead public opinion down a false trail: supposedly the new-old US president and his tech-mogul "friend" are so terrible that even their supporters are ready to blow up their idols. Trump and Musk themselves have not yet reacted to this unexpected turn (not surprising: they are too busy promoting their expansionist agenda and pressuring "allies"), but Altman, CEO of the AI startup OpenAI, offered a perfunctory remark. According to him, ChatGPT was never intended to be used for evil, and the company regrets that a terrorist found a way to deceive the "artificial intelligence" and make it an accomplice to his little gunpowder plot.
Has no mouth, but screams
As is often the case with owners and managers of large businesses, Altman's repentant speech is roughly one hundred percent hypocrisy. In general, the past year has been quite scandalous for all Western corporations connected in one way or another with generative artificial intelligence, and OpenAI and its super-popular brainchild have found themselves in ugly stories more often than almost anyone else.
In fact, a significant share of these "toxic cases" involves accidental harm from excessive communication with "intelligent" chatbots, or their deliberate use by attackers. Throughout the year, the press and blogosphere regularly discussed cases in which ChatGPT, instead of answering the questions asked, insulted users or even suggested (more or less seriously) that they commit suicide. In November, for example, there was a sensational story about Google's Gemini bot, which treated an American schoolboy to a thoroughly Ellisonian tirade with theses like "you are not needed, you are a stain on the Universe."
Of course, with billions of requests, the actual number of such failures runs into the tens of thousands, and most of them have no consequences – but not all. Back in February 2024, another American teenager actually committed suicide at the instigation of a virtual "sister," the character Daenerys Targaryen from Game of Thrones, with whom the schoolboy had spent most of his free time until she suggested that they "die together."
According to American media reports, the schoolboy suffered from Asperger's syndrome; in the last months of his life he increasingly withdrew from social activity and complained to his "sister" of emptiness and self-hatred, which clearly arose against the background of problems in real life. This did not stop his parents from blaming a computer program as the main culprit of their son's death and, months later, in late October, filing a lawsuit against Character.AI, the developer of personalized chatbots that can play the role of a specific character. This lawsuit was the first in a series of similar ones from other families whose children also encountered (or allegedly encountered) suggestions to harm themselves or their parents.
There were also casualties among the AI developers themselves – though they died not from abusing their own products but under even more dubious circumstances. On January 6, Hill, an engineer at DeepMind, the Google-owned firm known primarily for applying AI to mathematics and game theory, committed suicide. As has become customary, he posted a multi-page suicide manifesto online for all to see. In it, Hill complained of exhaustion from a year and a half of psychosis, acquired during an unsuccessful attempt, described in detail, to... cure his alcoholism with "soft" drugs. As they say, any comment would only spoil it.
And in November 2024, former OpenAI employee Balaji, who had been responsible for processing data arrays and left the company in August, also passed away voluntarily (as they say). What is curious is that in his final months the engineer had been campaigning vigorously against his former employer: he accused OpenAI of illegally using copyrighted materials to train its neural networks and of "polluting" the Internet with garbage content, and he called on colleagues to leave the company. The specific circumstances of Balaji's death have not been disclosed, but the public learned of it only on December 14 – more than two weeks later.
Artificial Idiocracy
These incidents mean nothing to the big players in themselves, but they are symptoms of a large and very real problem – growing disappointment in generative artificial intelligence. That may sound paradoxical, considering how many hundreds of millions of ordinary people use various neural network applications every day, but the fact remains: industry experts, and investors after them, believe in the prospects of AI less and less.
A book by New York University professor Marcus, published at the end of 2024 under the telling title "The Great Deception of Large Language Models," can be considered a kind of comprehensive catalogue of the claims against ChatGPT and its analogues. In it, existing neural networks are called unreliable (or rather, consistently unpredictable in their results) and economically inefficient tools, and the corporations that created them are accused of greed, deception and irresponsibility.
It must be said that these claims are not without foundation. Despite the apparent pace of development (the launch of the fifth generation of ChatGPT, for example, was planned for the fall of 2024 and later postponed to 2025), generative neural networks in fact remain purely statistical machines, incapable of logic. All their "training" comes down to absorbing terabytes of data from the Internet and deriving patterns of the kind "after the word 'cow,' the word 'milk' follows with such-and-such probability" or "next to such-and-such a curl of pixels there is another such-and-such."
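To make the "statistical machine" point concrete, here is a minimal sketch of the idea in its crudest form – a toy bigram model that merely counts which word tends to follow which. Real commercial systems use neural networks over subword tokens and vastly more data, but the underlying objective is the same: estimate the probability of the next token from observed frequencies.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "terabytes of data from the Internet".
corpus = "the cow gives milk . the cow eats grass . the cat drinks milk .".split()

# Count how often each word follows each other word (bigram statistics).
follow_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[prev_word][next_word] += 1

def next_word_probabilities(word):
    """Return P(next word | current word), estimated purely from counts."""
    counts = follow_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("cow"))    # {'gives': 0.5, 'eats': 0.5}
print(next_word_probabilities("gives"))  # {'milk': 1.0}
```

Such a model "knows" that milk often follows cow, but it has no notion of what either word means – which is precisely the skeptics' point about far larger systems.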
At the same time, no one checks the input material for quality (an unaffordable luxury in a competitive race), so a considerable part of it consists of humorous "quotes" from Lenin about the Internet and plain insults. The situation is further aggravated by the fact that the seemingly more advanced neural networks of new generations are now "trained" on billions of files generated by their more primitive predecessors (the very "pollution" mentioned above). Manual adjustments by thousands (!) of so-called AI trainers cover barely a few percent of the total volume of information fed to the bots.
So it turns out that a "smart" bot, which in fact understands nothing at all, in all seriousness serves the user fictitious "facts" (or rather, facts compiled from verbal mush), and if the user objects, wishes them all the worst. According to AI skeptics, if the current approach is maintained (and there are as yet no prerequisites for changing it), there is no hope of endowing neural networks with even a semblance of logical thinking. This, in turn, means that the endless generation of entertaining (not to say trashy) content will remain the ceiling for commercial artificial "intelligence": it is unsuitable for serious engineering, medical or commercial applications.
It is easy to see that such a pessimistic assessment contrasts sharply with the optimism of the generative AI developers themselves and their lobbyists. For example, according to some calculations, by 2030 41% of businesses around the world will be able to cut office staff by transferring their functions to intelligent bots. On August 30, OpenAI and Anthropic (that is, in effect, Microsoft and Google) signed contracts with the Pentagon on the use of their neural networks for logistical and even operational decision-making – is that not an indicator? It is, but of the key players' keen interest in an influx of investment rather than of the high efficiency of neural networks.
The hype around AI is in many ways similar to the cryptocurrency fever, except that chatbots, unlike the various "coins," still operate at a loss despite the introduction of paid subscriptions. Government orders are the only way for the technology giants to recoup the huge costs of developing and maintaining neural networks (Microsoft alone has poured more than $13 billion into OpenAI over two years), so a powerful lobbying apparatus is now at work, including "friends" in government offices and the press. Hence all the optimistic pronouncements about AI, right up to claims that chatbots have passed psychiatric tests for "adequacy."
At first glance, the coming to power of the same Trump–Musk duumvirate promises a golden age for neural networks, especially since venture investor Sacks, a former PayPal executive, was appointed back in December to chair the new-old president's council on science and technology. In reality, everything is "slightly" different: even before taking office, Trump had already said enough (and the outgoing Biden administration "helped" with deeds) to further worsen relations with China and tighten the sanctions against Beijing in the high-tech sector. How long American computing clusters will last without Chinese chips, and amid steadily creeping energy prices, is a rhetorical question.