The Evolution of Artificial Intelligence: From Assistance to Super Mind of Artificial General Intelligence? Article 1. Information Technology and Artificial Intelligence: The Past, Present and Some Forecasts


Authors: Grinin, Leonid; Grinin, Anton L.; Grinin, Igor L.
Journal: Social Evolution & History. Volume 23, Number 1 / March 2024

DOI: https://doi.org/10.30884/seh/2024.01.07

Leonid E. Grinin, HSE University; Institute of Oriental Studies, Russian Academy of Sciences, Moscow, Russia

Anton L. Grinin, Lomonosov Moscow State University, Russia

Igor L. Grinin, Volgograd State Technical University, Russia

ABSTRACT

The article is devoted to the history of the development of Information and Communication Technologies (ICT) and Artificial Intelligence (AI), their current and probable future achievements, and the problems (which have already arisen, but will become even more acute in the future) associated with the development of these technologies and their active introduction into society. The close connection between the development of AI and cognitive science, and the penetration of ICT and AI into various fields, in particular health care, is shown. A significant part of the article is devoted to the analysis of the concept of ‘artificial intelligence’, including the definition of generative AI. We analyze recent achievements in the field of Artificial Intelligence, describe the basic models, in particular Large Language Models (LLM), and forecast the development of AI and the dangers that await us in the coming decades. We identify the forces behind the aspiration to create artificial intelligence that increasingly approaches the capabilities of so-called general/universal AI, and also suggest desirable measures to limit and channel the development of artificial intelligence. The authors emphasize that the threats and dangers of the development of ICT and AI are particularly aggravated by the monopolization of their development by the state, intelligence services, large corporations and those often referred to as globalists. The article forecasts the development of computers, ICT and AI in the coming decades, and also shows the changes in society that will be associated with them.

The study consists of two articles. The first, presented below, provides a brief historical overview and characterizes the current situation in the field of ICT and AI; it also analyzes the concepts of artificial intelligence, including generative AI, and the changes in the understanding of AI related to the emergence of the so-called large language models and the new types of AI programs based on them (such as ChatGPT). The article discusses the serious problems and dangers associated with the rapid and uncontrolled development of artificial intelligence. The second article, to be published in the next issue of the journal, describes and comments on current assessments of breakthroughs in the field of AI and analyzes various forecasts, and the authors give their own assessments and forecasts of future developments. Particular attention is given to the problems and dangers associated with the rapid and uncontrolled development of AI, and to the fact that achievements in the field of AI are becoming a powerful means of controlling the population, imposing ideology and choices, influencing election results, and a weapon for undermining security and for geopolitical struggle.

Keywords: information and computer technologies, ICT, artificial intelligence, AI, large language models, LLM, cognitive science, self-managing systems, the Cybernetic Revolution, inforg, technological progress.

1. DEVELOPMENT OF INFORMATION AND COMPUTER TECHNOLOGIES

The problem of AI in science did not arise today, nor even in the 1950s; this field began to develop at the end of the nineteenth century (see Romanov 2012). However, in this article we do not set ourselves the task of describing the history of ideas and the first steps in this direction, important though they are. Nor will we present the history of the development of ICT and AI. The main focus is on their development in the twenty-first century, especially in the last decade, and on this basis some important conclusions and forecasts are made. Nevertheless, it is still necessary to briefly outline some stages of ICT development in the twentieth century (for more details see Grinin 2021a; 2021b; Grinin L., Grinin A. 2015b).

1.1 Development of ICT in the Twentieth Century

During the Second World War, work on computer prototypes was carried out in several countries (Germany, Great Britain, and the USA). The first computer in the United States, the ‘Mark I’, appeared in 1944 (after three years of development and testing) and was housed at Harvard University. However, it worked on the relay principle, that is, like Konrad Zuse's invention, it was not yet an electronic machine.1 The first electronic computer was ENIAC, created in 1946 by the designers John William Mauchly and J. Presper Eckert on the basis of vacuum tubes. Compared to the Mark I, ENIAC was more than a thousand times faster. The operation of its individual blocks was controlled by a master oscillator, which distributed a sequence of clock or synchronization pulses that ‘opened’ and ‘closed’ the corresponding electronic blocks of the machine (Gutter and Polunov 1981). The work on it began during the war years, and it was military needs that made it possible to finance such a large-scale project. In the 1940s and 1950s, intensive computer development took place in the USA, the UK, the USSR and other countries, still linked to military and other government contracts, including space ones. In the 1960s, computers became a ubiquitous phenomenon that continued to amaze society. But the main breakthrough, in the form of mass computerization, occurred a little later.

The development of computer technology was accompanied by the development of programming in different countries. In the 1950s and 1960s, significant progress was made in programming, the creation of new languages and the reduction in the size of computers. At the end of the 1960s, the prototype of the Internet appeared: in 1969, ARPANET was created – the first territorial computer information network, which initially consisted of four nodes linking the University of California at Los Angeles, the University of California at Santa Barbara, the Stanford Research Institute and the University of Utah in Salt Lake City. It was this concept of interconnecting networks that later grew into the Internet. But in reality, the prototype of the global network was created later, in the 1980s. In particular, the year of birth of the Internet is considered to be 1982 (and sometimes 1986, when NSFNET was created – the first high-speed computer information network, on the basis of which the global international Internet was later built).

Thus began the transition to technologies in which working with information plays the major role. Although this transition had already started in the nineteenth and early twentieth centuries with the emergence of technologies that transmitted information in pure form (the telegraph, telephone, radio, audio equipment, television), information was not yet the leading technology in terms of the production volumes it generated. With the advent of information in its pure form, separated from its previously inseparable material carrier (i.e., stored on non-paper media, making it possible to store and retrieve vast amounts of information), and with the powerful development of ICT, the information services sector began to play an increasingly important role – especially in the fifth technological mode, which continues to this day (Grinin 2021b).

The foundations of a new technological mode are prepared within the framework of the old one, until they form a primary system, which gradually begins to serve as the basis of production. Computers (or rather, electronic computing machines) were developed within the fourth mode, at the very end of which personal computers appeared (see Grinin 2021a). However, it was the PC that became the most important part of the fifth technological mode.2 Since its emergence, the computer has become one of the most popular devices, and the number of computers has reached several billion. This mass base became the foundation for the development of all other areas of information technology. In the 1980s, personal computers spread widely, followed by the first laptops. This area developed extremely rapidly, and PC sales grew almost exponentially.

In the 1990s, the Internet became a powerful system. Many companies were created around its development and the construction of its communications, some of which were buried by the dot-com crash of 2001. But the Internet spread at an unprecedented pace, and with it many things changed, including the way trade was conducted – both on exchange markets and in the supply of goods to the population.

The next powerful step in the development of Internet technologies was the emergence of the modern type of social networks (the concept itself was developed in the 1950s). The prototypes of such networks appeared in the 1990s, and their rapid development took place in the 2000s and 2010s. This made it possible to involve hundreds of millions of people in new ways of communicating, so that other devices, especially mobile phones, gradually began to replace computers in many operations. The mobile phone (more precisely, the smartphone) and social networks brought new forms of communication to the masses, which made it possible to use these people's data for business, political and special purposes, including intelligence.

1.2 ICT Development in the Twenty-First Century

Miniaturization as the Main Trend in Technological Progress

In the early 2010s, a new ‘core’ of information technologies emerged, based on the transition from microelectronics to nanoelectronics. We are talking about a rapid reduction in the size of processor chips. The speed of this miniaturization is illustrated by the following data: 1970s – 3 µm (3,000 nm; Zilog and Intel); 1980s – early 1990s – 0.8 µm (800 nm; Intel and IBM); late 1990s – early 2000s – 180–130 nm (a number of companies); early 2010s – 45, 32 and 28 nm process technologies (Intel and others). In 2019, Intel released processors based on a 10 nm process technology (the company could not organize the production of 7 nm chips), while TSMC released chipsets for mobile devices based on a 7 nm process technology (Apple A12, Kirin 980 and Snapdragon 855). At the same time, their production technologies differ noticeably: Intel's 10 nm process can fit up to 100 million transistors on one square millimeter, while TSMC's 7 nm process fits only about 66 million. These miniaturization processes, together with the growth in computing power and the speed of electronic communications, have led to the widest spread of information and computing technologies.
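
As a rough sanity check of these figures (an illustration with assumed numbers: die sizes vary widely, and 100 mm² is only an order-of-magnitude choice), such densities do imply the transistor counts quoted below for modern processors:

    # Rough sanity check of the quoted densities (assumed, illustrative values).
    density_per_mm2 = 100e6    # ~100 million transistors per mm^2 (10 nm class)
    die_area_mm2 = 100         # assumed order of magnitude for a large die
    total = density_per_mm2 * die_area_mm2
    print(f"{total:.1e} transistors")   # ~1.0e+10, i.e. about ten billion,
    # consistent with the 'tens of billions' cited for recent processors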

Then AMD announced the imminent creation of 5 nm processors. In September 2022, the Apple A16 mobile processor was introduced, manufactured on a 4 nm process. More recently, the creation of 3 nm and even 2 nm chips has been announced, and in September 2023 the iPhone 15 Pro was released with a 3 nm chip. There is talk of an imminent qualitative breakthrough in the field of processors. However, the question arises: is it possible to create a fundamentally new generation of computers by reducing feature sizes in this way? Many experts believe that this is unlikely. Even under optimistic assumptions, such processors will offer only slightly higher performance (up to 15 per cent), especially considering that the shortage of chips could seriously hamper the development of these technologies.3 Moreover, the bottleneck through which data must pass between storage and the CPU consumes a lot of power and generates a lot of heat, limiting further improvement (Khel 2015). It is becoming clear that a fundamentally new generation of computers is unlikely to be created in this way. Hence there is more and more information about progress in the field of quantum computing, which, even if it materializes, will not do so anytime soon.

The idea of storing data in a particular medium (e.g., magnetic, electrical or optical) has been around for a long time, and with the advent of nanotechnology it is possible to store information by replacing silicon, which is the basis of today's semiconductor devices, with another material or by using carbon nanotubes.4 In this case, a bit of information can be stored in the form of a certain number of atoms. This would make it possible to reduce the size of processors by an order of magnitude and significantly increase their operating speed. The number of transistors in a processor has now reached tens of billions, and there are plans to create a processor with trillions of transistors (Moore 2021). However, there are doubts that this technology can completely replace traditional transistors.

Speed of Development

Back in the 1960s, the so-called Moore's Law, named after Intel co-founder Gordon Moore, was formulated. It is best described as the empirical observation that the number of transistors on a chip – and with it processor performance – doubles approximately every two years. This observation became very popular and gave rise to the idea that the computing industry had to keep pace with the trend, ensuring that each new semiconductor node arrived within the specified timeframe (Campbell 2021; Tsvetkov 2017). This accelerated the computer race. One way or another, the power of computers grew exponentially, acquiring amazing capabilities. However, since about 2007–2010 Moore's Law no longer holds exactly: actual trends lag behind it in some places and run ahead of it in others (Zhang 2022; Kish 2002).
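
To see what this doubling implies, here is a back-of-the-envelope calculation (a sketch under idealized assumptions: the 1971 Intel 4004 with about 2,300 transistors is taken as the reference point, and a strict two-year doubling is assumed):

    # Idealized Moore's Law: transistor count doubles every two years.
    base_year, base_count = 1971, 2300   # Intel 4004, a common reference point

    def projected(year):
        """Transistor count under a strict two-year doubling."""
        return base_count * 2 ** ((year - base_year) / 2)

    for year in (1971, 1991, 2011, 2021):
        print(year, f"{projected(year):,.0f}")
    # 2021 comes out near 77 billion - the right order of magnitude for the
    # largest chips of that time, though real trends deviate from the curve.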

While Moore's first law states that chips become increasingly complex over a short period (two years), Eugene Meieran's ‘Second Moore's Law,’ formulated in 1998, states that the cost of chip factories grows exponentially as the chips they produce become more complex. We see this today in the monstrous monopolization of chip production, which led first to a deep crisis and then to a ‘chip war.’ In general, this growth in power and productivity means that the level of ICT has approached limits beyond which a new qualitative leap is already emerging. There are different views on when this will happen and what its manifestations will be. Many new information technologies have emerged, which we discuss below.

1.3. Ideas about New Types of Computers

Options for new approaches

So, the development of information technology is now on the eve of the creation of a new type of computer that is fundamentally superior to today's computers. There are different ideas about the basis of such breakthrough computer technologies of the future. The most famous predictions concern the future of quantum computers.5 To put it simply, the fundamental difference between quantum computers and conventional ones lies in the technology of information processing. Conventional processors perceive information in the binary system: data takes the value of either one or zero. Quantum machines perform computations in which information can have the values of one and zero at the same time; calculations are performed not in ordinary bits, but in so-called qubits. Since, unlike bits, qubits can take on different values simultaneously, it becomes possible to perform calculations of which a conventional computer is inherently incapable (Datta et al. 2005; Veldhorst et al. 2015).
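
In the standard textbook notation, the state of a single qubit is a superposition of the two basis states, with complex amplitudes whose squared magnitudes sum to one:

    |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1.

A register of n qubits is then described by 2^n complex amplitudes at once, which is what lets a quantum computer act, in a sense, on exponentially many classical states in parallel; a measurement, however, yields only one classical outcome, which is why quantum algorithms must be specially designed to exploit this parallelism.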

There have also been reports of attempts to create a photonic computer (Barabash 2015). The idea of nanocomputers also lives on. According to Eric Drexler, its basis could be nanomechanics rather than nanoelectronics. He developed mechanical designs for the main components of a nanocomputer: rod-like parts that slide outwards and inwards relative to the core, mutually blocking and thereby registering each other's movements (Drexler 1987; 1992; 2013; Balabanov 2010). There are also reports of a so-called single-atom computer, which operates at a temperature close to absolute zero but represents a significant step towards the creation of a quantum computer.6

Prospects

It is, of course, extremely difficult to predict which type of computer will provide the breakthrough and lead the field in the future. But sooner or later such a breakthrough will happen. However, we believe it will not come as soon as some expect. Let us try to explain why. There is currently no mass market for a new type of computer of the kind the digital giants are accustomed to. The level of development of today's most popular computers quite satisfies users (if someone is personally dissatisfied, it is usually only because they have not updated their model). It is therefore impossible to replace them en masse with much more productive ones, except by coercion, and even that would require the creation of the appropriate infrastructure. This is all the more difficult given that more and more people prefer to use mobile phones. The latter still have prospects for increasing power, but they, too, could be replaced en masse only by coercion. After all, the new devices will clearly be much more expensive (perhaps by an order of magnitude) than the current ones, which are already far from cheap. Even the efforts of governments to replace gasoline cars with electric ones have not yet yielded full results.

In other words, all that remains is the market of the state and of very large companies that would like to replace expensive supercomputers. But this is a limited market. Even if a new type of computer were to appear within a decade, its development would be very limited by this narrow market; that is, it would only prepare the ground for one day becoming the basis of a new technological order. And since the need for such computers, even in the military and aerospace industries, is not extremely high, their technological emergence could be delayed by insufficient funding. However, as the AI race accelerates, efforts by governments, acting directly or through proxy companies (including the leading digital giants), could accelerate the development of new types of ultra-fast computers. In that case, the dangers of uncontrolled use of AI will increase exponentially.

Models within existing technologies. Neuromorphic processors

We would like to point out interesting directions within the framework of existing technologies. These are, in particular, neuromorphic processors, created at the intersection of biology, physics, mathematics, computer science and semiconductor production: they are based on familiar transistors, but with a different organization of the architecture, similar to the structure of neurons in the biological brain. Similar to the biological model, an artificial neuron has an output (axon) whose signal can be sent to a large number of inputs on other neurons and thereby change their state. Enthusiasts believe that neuromorphic processors represent one of the most promising developments in the field of computing. Today they are just a new model of programmable computing, but in the near future they are expected not only to significantly speed up the execution of labor-intensive computing tasks with minimal energy consumption, but also to open up to humanity new harmonious aspects of the digital lifestyle, as seen in living nature. Neuromorphic processors have the potential to eventually extend and complement the capabilities of modern processors with new technologies that will allow future computers to function, adapt and learn using algorithms reminiscent of human thinking (Thakur et al. 2018; Sandomirskaya 2021). We do not at all exclude the possibility of such a bright future, although there are significant doubts that neuromorphic processors will become as widespread as current ones, as they are still intended for a relatively limited circle of users.
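
How such an artificial neuron operates can be shown with a minimal sketch of a leaky integrate-and-fire model, the kind of unit that neuromorphic chips typically implement in hardware (the parameters below are illustrative and not taken from any particular processor):

    # Minimal leaky integrate-and-fire neuron: the membrane potential leaks,
    # integrates incoming current, and emits a spike when it crosses the
    # threshold. Parameters are illustrative, not from any real chip.
    def simulate_lif(inputs, threshold=1.0, leak=0.9):
        potential, spikes = 0.0, []
        for current in inputs:
            potential = potential * leak + current   # leaky integration
            if potential >= threshold:
                spikes.append(1)                     # fire
                potential = 0.0                      # reset after spiking
            else:
                spikes.append(0)
        return spikes

    print(simulate_lif([0.3, 0.4, 0.5, 0.1, 0.9]))   # -> [0, 0, 1, 0, 0]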

Nevertheless, the development of fundamentally new types of processors is underway, and we can expect them to appear in the more or less foreseeable future. But on such a powerful basis, the dangers of misuse of AI will increase manifold, so it is necessary to think in advance about the limits and channelling of such technologies.

2. ARTIFICIAL INTELLIGENCE:
GENERAL CHARACTERISTICS

In recent years, information and communication technologies have clearly moved to a higher level, which can be conventionally described as the level of artificial intelligence. Although this concept is rather vague, and AI has been in use for a long time, its capabilities have now expanded significantly thanks to various new technologies (neural networks, remote contact, linguistic systems, facial recognition, motion control, detection of individual preferences, machine learning, deep learning, etc.). Moreover, more recently (including during, and in connection with, the lockdowns at the height of the COVID-19 epidemic), artificial intelligence has been increasingly used to influence social, control-and-supervisory and even administrative relationships (Grinin, Grinin, and Korotayev 2021).

It is believed that we are living in the era of the third wave of interest in AI. Two waves have already passed: the first (1950s–1960s) was associated with work on machine translation and game programs, and the second (1980s) with the development of expert systems. The third (from the end of the 1990s) is due not only to the increased productivity of computers, but also to significant progress in many areas of AI (Proydakov 2016: 121). We can probably speak of a fourth wave of interest in AI starting from 2023.

In the early 2010s, the concept of four main driving forces in the ICT market was formulated: social networks, mobile solutions, cloud computing and means of processing large amounts of information (Belousov 2016: 22). Since then, interest in AI has increased significantly. The idea that whoever owns artificial intelligence owns the world has become a fully material reality. With new types of AI (such as ChatGPT), this makes ever more sense for the world. The AI race is accelerating, and the scope of artificial intelligence is rapidly expanding. Here is a list of intellectual areas, or branches of AI (or rather, areas of thinking): logical AI; search; pattern recognition; performance; inference; common sense knowledge and reasoning; learning by experience; planning; epistemology; ontology; heuristics; genetic programming (McCarthy 2007; see also Idem 1990; 2000; Mitchell 1997; Shanahan 1997; Thomason 2003).

However, this list, incomplete even at the time of its creation, is quickly being supplemented by general areas of activity (even if they have not yet been subdivided into directions): conducting dialogues and correspondence on various topics; imitation of art (painting, composing, poetry, etc.); compilation of various texts; selection of necessary materials; forecasting, etc.

There are quite a number of definitions of AI.7 It is believed that these definitions follow three directions:

1) emphasizing the similarity of AI to human intelligence, based on the results of actions (in particular, the ability of AI to conduct a dialogue, which we see today in many programs and bots);

2) emphasis on the fact that AI programs are capable of learning and self-development (neural networks, deep learning, etc.);

3) defining AI as a software and hardware complex that, to one degree or another, is capable of solving problems and making decisions (more often than not, providing assistance in making them), as well as mastering other intellectual functions.

AI is considered in two senses: broad and narrow. Artificial intelligence as a working program that replaces a person in one way or another is AI in the narrow sense. That is why the everyday AI we use is called weak or narrow. But there is also the concept of AI in a broad sense, which refers to general artificial intelligence. The latter is also called strong, broad or universal AI in the tradition of the philosophy of artificial intelligence,8 a distinction that arose in theory many decades ago. General AI is a system (or machine, see below) that is capable of understanding the environment and the world at large in the same way as a human, and has the same potential to learn to perform a wide range of tasks. One definition of artificial general intelligence (and there are many) is a hypothetical intelligent agent capable of understanding or learning any intellectual task that can be understood or performed by humans or animals. A more precise definition, in our opinion, is that it is a system capable of performing most of the tasks that a human can perform (Shumikhin 2021; Turing 1950).

It is important to keep in mind that artificial general intelligence does not yet exist; it is a hypothesis. There is an active debate in computer science and industry about how to create it and whether it is even possible (Gates 2023). Despite alarmist, sometimes hysterical statements and forecasts, some clearly intended to shock the public, the creation of such general AI is still far away.

It is worth citing the characteristics of general AI, taken from (Business World 2023) with some of our comments. A general AI system should include: a) abstract reasoning; b) foresight; c) practical wisdom; d) understanding of cause-and-effect relationships; e) transfer learning.9 It is difficult enough to meet a person who combines abstract reasoning, foresight and practical wisdom, and it will be even more difficult to endow AI with these qualities. Other key characteristics include:

Geospatial awareness and navigation. The modern Global Positioning System (GPS) is capable of identifying geographic locations. When general AI is fully developed, it will outperform existing systems in predicting movement in physical domains. It should be noted that great progress has been made in the field of geolocation today, so we can speak of the actual presence of this function.

Understanding 3D images and color. General AI should succeed in subjective forms of perception, such as color recognition. It should also be able to infer size and depth from 2D images.

Learning and creativity. In theory, a general AI system would be able to read, interpret, and extend human-generated code. This means that as AI advances, there will be less demand for programmers' services for less complex code and tasks.

Understanding language in context. Understanding human language is highly context dependent. AGI systems will be equipped with the intuition needed to understand natural languages. Obviously, it will be extremely difficult to develop intuition.

Motor skills. For example, taking keys out of a pocket, which requires creative perception. In other words, a robot should ultimately possess general AI. This means that general AI is not only a system, but also a machine.

Making decisions. Artificial general intelligence will be able to create the required structures for all kinds of tasks, understand value systems and use different types of data in all sorts of ways. This is, of course, the most difficult direction of its development.

Modern forms of artificial intelligence are often referred to as generative artificial intelligence.10 This is a type of artificial intelligence that can create (generate, hence ‘generative’) new content: formulate ideas, conduct dialogues, create works, stories, images, videos and music, as well as edit photos, videos, etc. Any such artificial intelligence is based on machine learning models that are pre-trained on large amounts of data, usually called foundation models (FM).

Basic models of modern AI, such as the GPT models, are usually called Large Language Models (LLMs). They are specifically focused on performing language tasks, including the creation (generation) of texts, blogs and dialogues, and information extraction. Obviously, large language models can perform many more tasks because they contain a huge number of parameters that allow them to learn complex concepts. This is not surprising: after all, they are ‘trained’ on supercomputers worth billions of dollars and by large teams of highly skilled programmers and other specialists, including psychologists and sociologists. Large language models learn to apply their knowledge in a variety of contexts; incredible amounts of information from a wide variety of areas are used to train them. The terms AI (especially generative AI), neural network and large language model are often used as synonyms.
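
To make ‘generative’ concrete: at bottom, a language model repeatedly predicts the next token given the tokens so far. The toy character-level bigram model below shows the principle; a real LLM replaces the count table with a transformer network containing billions of learned parameters, but the generate-next-token loop is analogous:

    import random
    from collections import defaultdict

    # Toy character-level bigram 'language model': count which character
    # follows which in a tiny corpus, then generate by repeated sampling.
    corpus = "artificial intelligence is the art of making machines act intelligently"

    counts = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        counts[a].append(b)

    def generate(seed, length=40):
        out = [seed]
        for _ in range(length):
            out.append(random.choice(counts.get(out[-1], [" "])))
        return "".join(out)

    print(generate("a"))   # plausible-looking gibberish built from bigram statistics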

As mentioned above, it is still fundamentally unclear whether reaching the level of general or universal artificial intelligence is a realistic prospect or just an abstraction. But with the advent of the most advanced large language models and the AIs based on them, such as ChatGPT, it has become clear that, unexpectedly for many specialists (not to mention ordinary people), we have come much closer to the milestone of creating universal AI than seemed possible just recently (see, e.g., Shumikhin 2021). Yet we should take into account that this acceleration is mainly due to the enormous amounts of money being invested in this direction (with the full support of government and law enforcement agencies). Let us repeat: the speed of development of the new chatbots is largely due to the fact that programmers ‘train’ artificial intelligence with the help of supercomputers and huge teams that train large language models. For example, Tesla plans to spend more than $1 billion on the Dojo supercomputer to train AI for self-driving cars. There are also far more expensive plans. Microsoft and OpenAI have been discussing a project called ‘Stargate’, under which Microsoft would spend $100 billion to build a massive supercomputing cluster to support OpenAI's future advanced AI models by 2028. However, we think the announcement of such a gigantic project is more likely intended to sustain the strong stock market rally seen since November 2023 by actively promoting the idea that AI will ensure economic development in the long term. There are serious doubts about the implementation of such a project.

Time will tell whether the leap in AI is the most significant technology breakthrough since the graphical user interface, as Bill Gates believes (Gates 2023), or something less important. But we can already say that this is a serious change, all the risks of which should be identified in order to create, quite quickly, a system for reducing and balancing them.

However, instead of a business approach, many philosophizing AI researchers have begun to develop ideas close to science (or non-science) fiction. One of the AI pioneers, Terry Sejnowski, in his talk ‘ChatGPT and the Talking Dog’ on YouTube (VCUresearch 2023), summarizes the latest approaches to modern AI as follows.

What is Large Language Model AI?

1. A stochastic parrot that understands nothing and merely imitates human intelligence, sorting and combining fragments of meaningful human texts?

2. A rival to the human mind, capable of surpassing it?

3. An ‘alien intelligence’ with other (non-human) mechanisms of thinking and understanding, which allow it to solve intellectual problems without having a picture of the world, simply by possessing gigantic information capacity and computing power?

Sejnowski gives his own rather detailed answer, based on the view that the very formulation of the question about the intelligence of some independent entity (an LLM or a person) is incorrect. But there is no point in analyzing his philosophy in detail in this article. Let us note instead that there have been and are many outstanding people in the field of AI development. It is not possible to discuss them all within the framework of this paper, but we will give a brief summary of a few of the many.

1. Frank Rosenblatt (USA, 1928–1971) was the scientist who created the first functioning neural network. He was a man of a rare cast of mind, which allowed him to become a specialist in several fields at once, such as neurophysiology and the technical sciences (but not only). Rosenblatt devoted himself to studying the behavior of the brain and to attempts to reproduce similar capabilities in a computer. In 1957 he created the first neural network, the perceptron (see the sketch after this list), and three years later the first neurocomputer based on its principles. The computer's capabilities were used, for example, to make weather forecasts. However, his work was later sharply criticized by prominent colleagues. Owing to Rosenblatt's tragic death, these criticisms were never answered, and research in this direction ceased for a long time.

2. John Hopfield (USA, born 1933) is a physicist; among his many achievements is the creation in 1982 of a mathematical model of the so-called Hopfield network, which, because of its relative simplicity, is the most widely used in the technical sciences (a minimal example is sketched after this list). The Hopfield network is a mathematical representation of human memory and is therefore used in computer memory systems. The scientist has continued to develop it: in 2016 significant changes were made, and in 2020 models with a large amount of memory began to work, later called ‘modern Hopfield networks.’

3. Yann LeCun (France, b. 1960) is considered one of the three ‘godfathers of artificial intelligence’ (along with Yoshua Bengio and Geoffrey Hinton, see below), a man who has done much for machine learning. He is the creator of one of the main types of neural networks – convolutional networks. Among his main achievements is the development of ‘computer vision,’ the ability of machines to perceive and understand images. It was used, for example, to create a device that recognizes cheques, which was long in demand in the USA.

4. Yoshua Bengio (Canada, born 1964) is the second of the three ‘godfathers of artificial intelligence’ and one of the most cited scientists in the field of computer science. He is known for his work on deep learning and artificial neural networks, on which many modern concepts and technologies are based.

5. Geoffrey Hinton (UK, born 1947) is the last of the three ‘godfathers of artificial intelligence,’ the man who popularized algorithms for training multi-layer neural networks. He is also one of the creators of the Boltzmann machine, a network that is a variant of the Hopfield network and the first neural network capable of solving complex combinatorial problems. Among other things, Hinton is one of the developers of the AlexNet neural network, which has had a huge impact on the development of computer vision algorithms and machine learning in general.
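
Two of the contributions mentioned above are simple enough to sketch in a few lines of Python. First, Rosenblatt's perceptron learning rule, shown here learning the logical AND function (a minimal illustration of the error-correction rule, not of his original hardware):

    # Rosenblatt's perceptron learning rule on the logical AND function:
    # the weights are nudged whenever the thresholded prediction is wrong.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1

    def predict(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for _ in range(20):                    # a few passes over the data suffice
        for x, target in data:
            error = target - predict(x)    # -1, 0 or +1
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error

    print([predict(x) for x, _ in data])   # -> [0, 0, 0, 1]

Second, a minimal binary Hopfield network: one pattern is stored with the Hebbian rule and recalled from a corrupted cue, which is the sense in which the network models associative memory (again, a purely illustrative sketch):

    import numpy as np

    # Minimal binary Hopfield network: store one pattern with the Hebbian
    # rule, then recover it from a corrupted cue by iterating the update.
    pattern = np.array([1, -1, 1, -1, 1, -1])     # the stored 'memory'
    W = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(W, 0)                        # no self-connections

    state = pattern.copy()
    state[0] = -state[0]                          # corrupt one unit
    for _ in range(5):                            # synchronous updates
        state = np.where(W @ state >= 0, 1, -1)

    print(np.array_equal(state, pattern))         # -> True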

Ability to Make Decisions

Researchers (see, e.g., Zgurovsky, Zaichenko 2013) identify the following capabilities of AI (compared to previous levels of ICT):

1) the presence of a goal or set of functioning goals;

2) the ability to plan one's actions and find solutions to problems;

3) the ability to learn and adapt behavior in the course of work;

4) the ability to work in a poorly formalized environment, under conditions of uncertainty, with unclear instructions;

5) the ability for self-organization and self-development;

6) the ability to understand texts in natural language;

7) the ability to generalize and abstract accumulated information.

So AI is really capable of making decisions. And there are a number of areas where people already entrust ICT and its algorithms with making not just advisory but actually final decisions (such as algorithmic trading on the financial markets; the collection of certain legal evidence using video and infrared cameras, whose data is accepted by courts; the recognition of the contents of luggage and even of people at border controls; decisions to grant loans based on an algorithmic assessment of creditworthiness from indirect signs, etc.). These are primarily areas in which the volume of information and the required speed of its processing are too great for a human. The number of such areas will, of course, grow. But still, in most cases, not only today but also in the future, we expect AI mainly to assist with decision-making or to be part of a decision-making complex embedded in various more complex and technologically integrated areas. Otherwise, entrusting decision-making entirely to AI may create considerable problems in society, as we can already see today, for example, in the relationship between customers and banks.
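
The difference between advisory and final algorithmic decisions can be sketched as follows (a toy example: the features, weights and thresholds are invented for illustration and do not describe any real scoring system). The algorithm settles only the clear-cut cases and refers borderline ones to a human:

    # Illustrative sketch, not a real credit-scoring model: the algorithm
    # makes only the clear-cut decisions and refers borderline cases to a
    # human - AI as part of a decision-making complex, not the final arbiter.
    def score(applicant):
        """Toy creditworthiness score in [0, 1] based on indirect signs."""
        s = 0.5
        s += 0.3 if applicant.get("stable_income") else -0.2
        s -= 0.2 * applicant.get("missed_payments", 0)
        return max(0.0, min(1.0, s))

    def decide(applicant):
        s = score(applicant)
        if s >= 0.8:
            return "approve"            # algorithmically final
        if s <= 0.3:
            return "decline"            # algorithmically final
        return "refer to human"         # borderline: advisory only

    print(decide({"stable_income": True, "missed_payments": 1}))
    # -> 'refer to human'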

Thus, in general, information technology and artificial intelligence have become ubiquitous; in fact, they are present in most areas. As AI develops, the situation is rapidly moving to the point where, metaphorically speaking, artificial intelligence will become a mandatory block of many systems. This AI block can then be integrated, in one configuration or another (from primitive to very complex), into more or less complex systems, depending on the technical and other tasks involved – just as we can now increase the amount of memory in a computer or upgrade a graphics card (for more details see Grinin L., Grinin A. 2021).

In this respect, it is useful to determine the place of AI in the world of self-managing systems11 in which we already live and will continue to live (in the process of what we call the Cybernetic Revolution; see Grinin L., Grinin A. 2015a; 2015b). In this context, we would define artificial intelligence as follows: it is a specific universal technology (and a practical area for implementing the theory) which is an (almost) integral part of self-managing systems, like the electric motor in electric cars or the internal combustion engine (ICE) in cars, tractors and other vehicles. Hundreds and thousands of different machines are based on AI. However, just as an internal combustion engine or an electric motor does not determine the operating principle of all machines, AI does not determine the functions and operating principle of all self-managing systems, but only of a significant part of them (there are self-managing medical, physiological, biotechnological, genetic and other systems).
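
The ‘self-managing’ principle itself can be shown with a minimal closed feedback loop – a thermostat-style proportional controller (all numbers invented for illustration). In a complex self-managing system, an AI block would replace this simple rule as the decision-making component:

    # Minimal self-managing (closed-loop) system: a thermostat-style
    # proportional controller keeps a value near its set point without
    # human intervention. All numbers are invented for illustration.
    target, value = 22.0, 18.0
    for step in range(6):
        error = target - value        # sense the deviation
        value += 0.5 * error          # act to reduce it
        print(f"step {step}: value = {value:.2f}")
    # the value converges toward 22.0 autonomously; an AI block would
    # replace this fixed rule in a more complex self-managing system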

Limitations

The rapid development of AI technologies naturally requires a number of restrictive measures, which unfortunately lag behind. Nevertheless, something is being done. For example, in 2021, after several years of preparation, UNESCO adopted recommendations on the ethical aspects of artificial intelligence (UNESCO 2021). In general, this is a useful document. It states the need for a universal framework of values, operating principles and mechanisms to guide states in developing their laws, policies and other documents related to AI, in accordance with international law, and affirms that human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of AI-based systems. But, of course, this document remains a mere declaration that no one intends to follow.

On October 7, 2022, the White House Office of Science and Technology Policy (OSTP) released the AI Bill of Rights: a set of five principles and associated practices designed to guide the development, use, and implementation of automated systems in order to protect the rights of the American public in the age of artificial intelligence (The White House 2022). But this is, of course, far from sufficient, especially given that government agencies (including intelligence agencies) themselves actively violate these principles and will continue to do so.

At the end of July 2023, shortly after the emergence of ChatGPT and similar generative AI, the UN Security Council met on ‘Artificial Intelligence: Opportunities and Risks for International Peace and Security.’ At the event, the UN Secretary-General supported calls from several member countries for the creation of a new AI body within the global organization that would help address future threats, and for the establishment and implementation of internationally developed monitoring and control mechanisms. Of course, such a body would be useful. But given that the most powerful states seek to make maximum use of generative AI for control, intelligence and military purposes, and to create new and more powerful AI as quickly as possible amid secrecy and growing interstate competition, the capabilities of such a body will obviously be limited. That different countries are striving to use the growing power of AI for political purposes is evidenced, in particular, by the statement of the head of the UK Foreign Office that artificial intelligence should be used to support freedom and democracy (which in practice means being used in the struggle against geopolitical rivals). The first global summit on the security and regulation of artificial intelligence technology was also announced and took place in the UK in autumn 2023. All this also shows that the Group of Seven (G7) is keen to join forces to control the growing power of generative AI.

In March 2024, the European Union enacted the most comprehensive set of protections yet against the risks of the rapidly developing world of artificial intelligence, after the bloc's parliament passed the AI Act. This is an important step in terms of control over AI, and we would like to believe that similar laws will be adopted in other countries.

At the same time, there are powerful forces that want to use AI not just to maintain their own advantages, but to radically reform society, bring it under total control and undermine the possibility of any opposition. It is not for nothing that high-tech gurus and ‘prophets’ from Silicon Valley are creating a new universal narrative that legitimizes the power of algorithms and Big Data. As a result, they will be able to empower Big Brother by giving algorithms the power to make the most important decisions in people's lives. The aim is to take away from the person the right to make the most important (and not only the most important) decisions in his or her own life and to transfer it to artificial intelligence (see the philosophical essay describing and offering an apologia for such an ideology: Harari 2016) – this is the dream of globalists, politicians and financial tycoons. But ‘giving this right to AI’ in reality means that this group of super-powerful people seeks to usurp the right to impose their goals on humanity.

3. ARTIFICIAL INTELLIGENCE AND VIRTUAL REALITY

Despite great successes in the development of ICT and AI, it should be noted that the market for them has begun to grow much more slowly than before (which, incidentally, further explains the digital giants' turn towards generative AI). The reasons for the slowdown are quite understandable: a huge part, perhaps even the majority, of humanity is now already covered by computers, telephones and the Internet. On the one hand, as we have seen, new AI technologies are actively supported and funded by states and their structures, which have become almost the main customers, giving rise to projects gigantic in scale and resources. On the other hand, there are no breakthrough trends comparable to the massive technological development of the previous period.12 The idea of the Internet of Things has not materialized to the extent expected, the development of self-driving cars has slowed considerably, and so on. In general, this fits well with our concept that during the modernization phase of the Cybernetic Revolution, at the end of which we now find ourselves, there is a powerful spread of significantly improved basic technologies whose foundations were laid earlier, while newly emerging technologies no longer look so ground-breaking. At the same time, these improved and significantly changed technologies are already covering the whole of society and attempting to completely reshape it. Also in this phase, limitations and moments of crisis begin to be felt, and the foundations are laid for a new technological breakthrough (for more details, see Grinin L., Grinin A. 2015a; 2015b).

The digital giants are striving to penetrate a number of sectors (including medicine, see below) and, in addition, to create new ones. These include virtual and augmented reality. The most prominent attempt to move towards artificial reality was Facebook's transformation into Meta.13 There have been reports of fairly large-scale transactions within virtual reality, in particular a ‘land auction’ and the sale of digital paintings whose originals the artist undertook to destroy.14 The idea, expressed several years ago, that we are on the threshold of a new technological revolution in which business will migrate from the physical world to the virtual one has not yet manifested itself visibly, but the trend is nevertheless gradually emerging. It is difficult to say how widespread it will become. But the opportunities to change many things, including a person's appearance – in the information space one can now change not only hairstyle, make-up and looks, but also voice – are constantly growing. It is no coincidence that one of Gartner's recent forecasts holds that ‘the evolution of AI towards the creation of hyper-realistic reality’ will lead to a situation in which people will have difficulty believing their own eyes and the transition to a world of zero trust will begin (this includes the capabilities of the aforementioned ChatGPT and its like): no one and nothing in this world will be trusted without confirmation in the form of a cryptographic digital signature. It is possible and even likely that fraudulent bots will join the human fraudsters.15 There are many opportunities for the profitable use of deception technologies. In particular, if chatbots offer investment advice (and traders are already beginning to ask for it), it is easy to imagine how investor sentiment could be manipulated.

IT clones of deceased or living people can also complicate the situation. The South Korean company DeepBrain AI offers its ‘Memory’ program for creating IT clones. This can be a digital copy of a person made during his lifetime, created in the course of long-term contact (up to seven hours of communication with the client is required), so that the neural network can study the person's habits, character, appearance, manner of communication and voice. Alternatively, after death, a digital copy is made on the basis of photos and videos and a short interview with the person's relatives. The relatives can then hold a kind of IT séance with the deceased. But the cost of the procedure is high: an electronic copy costs between 12 and 24 thousand dollars, and each interview costs about $1,200 (Coupeau 2023).

In this way, artificial intelligence can significantly change people's attitudes to life and death, as the deceased will have the opportunity to continue to ‘live’ among us and communicate through their copies. Thanks to AI, their images will even be able to learn new facts and news, which will strengthen the illusion of the presence of a living person. It is likely that for many, the very fact of losing loved ones will become less tragic. But this may also lead to additional problems of disordered consciousness in particularly impressionable people, and to new psychological and social problems. The prevalence of digital copies will increase, though not rapidly.

In our opinion, the development of artificial-reality technologies is unlikely to be as groundbreaking as mobile phones, which made it possible to hold a conversation from anywhere in the world, or smartphones, which completely revolutionized people's understanding of the capabilities of technology, allowing one to do literally anything from anywhere in the world. The phenomenon of artificial reality is economically too weak compared to the telephone, since its consumer properties are very specific and largely intended for the idle rich. But in general, of course, such a movement can significantly change some relationships and things. It is possible that artificial reality will be actively developed (if it is not already being developed) by intelligence and military agencies for the purpose of misinforming the enemy.

Cryptocurrency is becoming a more serious area; states have also started to invest actively in this sector and get involved. But this is a special topic (for more details, see Grinin 2021c). A change in the nature of money and finance, and with it fundamental changes in the financial sector – including the erosion or even destruction of modern financial agents (including banks) and changes in the relationship between financial authorities and the population – could lead to unpredictable consequences, even in terms of undermining the hegemony of the dollar, if and when cryptocurrencies one day become the global equivalent of precious metals, equally accepted in all corners of the globe.

4. AI AND HEALTH CARE

As mentioned above, digital companies are actively penetrating various sectors, and one of the most important of these is healthcare. We will talk about medicine not only in the narrow sense of treatment and rehabilitation, but more broadly as the area of control and self-control of health, lifestyle and life processes, condition monitoring, prognosis, diagnosis, etc. Various devices – watches, wristbands and the like – are being developed to monitor the pulse and various biorhythms, human activity (e.g., the distance, speed and trajectory of a person's movements) and much more.

Moreover, more and more active attempts are being made to track and control a person's location, calls, messages, contacts, activity, sleep, driving, etc. Work is underway to track a person's emotions and attempt to limit their expression. The technology itself is reminiscent of the famous lie detector, as the AI focuses on a person's breathing and heartbeat to determine emotions. But now, instead of sensors on the body, it uses ordinary radio waves, including Wi-Fi signals. This means that in the foreseeable future not only will it be impossible to hide our feelings from an invisible spy hidden in a wireless network, but artificial intelligence will begin to teach us how to behave. Prototypes are already being created. ‘Master, don't get excited, pull yourself together,’ says the new wearable device, vibrating or squeezing your wrist. The development is designed to help people who find it difficult to control their emotions (Glyantsev 2019; 2021).

At first glance, this seems a reasonable thing that can make life easier and help keep people healthy. After all, what could be wrong with using software to measure emotional burnout by heart rhythm? Emotional burnout is a real psychological syndrome caused by constant stress – the scourge of the twenty-first century, according to the authors of the project. Its early diagnosis can prevent overwork, stress, depression, nervousness and aggression. The researchers note that this method can also be used in the complex treatment of mental disorders and headaches (Muraya 2022). Of course, this can be a really good way to control oneself, provided it remains at the level of voluntary use and the data is not used by third parties or ‘leaked’ in order to push various kinds of goods and services. For now, of course, all kinds of devices and smart things are voluntary choices, but we have all seen how voluntariness first becomes voluntary-compulsory and then coercive. Today, a person without a mobile phone is practically helpless and sometimes feels worse off than without an ID: he cannot even open a bank account or log in to his accounts. We can hardly live without Wi-Fi. However, few people know that passive visualization technology – that is, the acquisition of images of spaces and people from these very waves – has been developing for a long time. Recently, very high image resolution has been achieved using commercial Wi-Fi signals and beamforming (MR. E 2023). The image looks as if it had been taken using infrared rays, and all human movements in a room are quite distinguishable. This means that we can now be constantly monitored via Wi-Fi.

So, today there is a clear shift in the application of AI in medicine and, more generally, in the sphere of human lifestyle and activity, health and the quality of biological life. And this trend will intensify.16 Thus, the first prerequisites are being created for what we call a system of techno-medical-biological environment, designed to create a kind of artificial environment for the constant monitoring of people's condition, which we expect to be fully operational within a few decades. Finally, it is extremely important that this trend restrict human rights and freedoms as little as possible (on this, see Grinin, Grinin, Korotayev 2023).

FUNDING

The research was supported by the Russian Science Foundation (project No. 23-11-00160 ‘Modeling and forecasting the development of the BRICS countries in the 21st century in the context of global dynamics’).

NOTES

1 It is believed that the world's first electromechanical computer, called the Z2, was created by the German Konrad Zuse in 1940. Zuse later attempted to create a prototype of an electronic programmable computer. Relay machines were used for a time in the 1950s and even competed (sometimes not without success) with electronic computers (see, e.g., Apokin and Maistrov 1990: 237).

2 The fourth technological mode (1940s–1970s) was based on new chemistry (the chemistry of artificial materials), the automobile industry and non-computer/non-interactive electronics (radio, television, transistors, etc.) combined with automation. The fifth technological mode (the 1980s–2020s) is mainly associated with the development of information and computer, communication, financial, management and remote technologies.

3 In the field of chip production, which has recently found itself at the epicenter of geopolitical interests, other problems may arise in addition to purely technological ones – for example, a shortage of fresh water, since production is extremely water-intensive. In Taiwan, for instance, chip manufacturers and farmers compete for water.

4 Microelectronics is currently undergoing a transition from silicon to silicon carbide and gallium nitride, but this is clearly not enough for the project described.

5 Dozens of such computers have already been created, but there are very serious problems with errors in the calculations. However, there is information about progress in this direction (Stavitsky 2021).

6 For the first time, it has been possible to create a working transistor based on a single atom. For a single-atom transistor to be used in real devices, a single atom must be precisely positioned on a silicon chip. According to Nature Nanotechnology (Fuechsle et al. 2012), this is exactly what the researchers achieved. They used a scanning tunnelling microscope (a device that allows researchers to see and precisely manipulate atoms) to cut a narrow channel into the silicon base. Phosphine gas was then used to place a single phosphorus atom between two electrodes in the desired area. When an electric current passes through such a device, it amplifies and transmits the electric signal, which is the basic operating principle of any transistor (Law... 2015). However, since the above publications, there has been no evidence of further progress in this direction.

7 This is beyond the definitions that consider AI as a theory or a field of study; the latter was founded as an academic discipline in 1956 (Crevier 1993). In this case, AI functions as a special theory of machines – more precisely, a theory of intelligent machines. In the definition introduced in the early 1980s by the computer systems theorists Avron Barr and Edward Feigenbaum, the technology/science is quite clearly distinguished from the systems that use it: ‘Artificial intelligence (AI) is the branch of computer science concerned with the design of intelligent computer systems, that is, systems that have the characteristics which we associate with intelligence in human behavior – understanding language, learning, reasoning, problem solving, etc.’ (Barr and Feigenbaum 1981: 3).

8 In English terminology: Artificial Narrow Intelligence (ANI, Narrow AI); Artificial General Intelligence (AGI, Strong AI).

9 Transfer learning allows knowledge gained in solving one problem to be applied to a different but related problem, as the sketch below illustrates.
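To make the idea concrete, here is a minimal sketch (ours, not the article's) of the most common form of transfer learning in current practice: a model pre-trained on a large source dataset is reused, its learned layers are frozen, and only a small new output layer is trained on the target task. It assumes Python with the torch and torchvision libraries; the five-class target task is purely hypothetical.

```python
# Minimal transfer-learning sketch (illustrative assumption, not from the
# article). Requires: pip install torch torchvision
import torch
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on ImageNet (the "source" problem).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so the knowledge they encode is reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier with a new head for a related "target" problem;
# the 5 output classes here are a hypothetical example.
model.fc = nn.Linear(model.fc.in_features, 5)

# Train only the new head; everything transferred from the source task stays fixed.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```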

10 The ‘GPT’ in ChatGPT stands for Generative Pre-trained Transformer.

11 Self-managing systems are systems of various types that are capable of functioning, and sometimes of making complex decisions, largely without human intervention. They may be purely technical (robots, self-driving cars), bio- and nano-technical (including self-cleaning systems), physiological (artificial organs), or genetic (activating or blocking certain programs of the body). Many examples of such systems already exist, but in the future they will become dominant.

12 The new type of chatbot is likely to be a breakthrough trend, but it is unlikely to become a saleable product for a wide range of buyers. It is no accident that at least some of the services offered by ChatGPT and similar chatbots are provided free of charge.

13 The company is designated as extremist in Russia, and its activities in the country are prohibited.

14 In general, however, the technology has not been very successful. As The Nation writes, the metaverse has crashed: despite investments of several billion dollars, its record daily online audience has not exceeded 40 people, and in all that time it has earned only about $500 (Wagner 2023).

15 Facebook has for some time been developing a negotiation bot that can lie and bargain with humans. As Quartz writes, during training the system used more than 5,800 real human negotiation dialogues, collected via the online crowdsourcing platform Amazon Mechanical Turk (Kornev 2017).

16 It is characteristic that even such a probable future technology as the quantum computer, which should be able to speed up computation by orders of magnitude compared with modern computers, is expected by futurologists to be oriented largely toward calculating various processes in the human body, including the brain, and medicine is expected to become one of its main areas of application (see Bagraev 2015).

REFERENCES

Apokin I. A., Maistrov L. E. 1990. History of Computer Technology. From the Simplest Counting Devices to Complex Relay Systems. Moscow: Nauka. Original in Russian (Апокин И. А., Майстров Л. Е. История вычислительной техники. От простейших счетных приспособлений до сложных релейных систем. М.: Наука).

Bagraev N. T. 2015. Prospects for the Development of Quantum Computing. Report at the Second International Seminar “Basic Technologies of the First Half of the 21st Century (Structural-Cyclic Analysis)”, October 1–2. St. Petersburg: Peter the Great St. Petersburg Polytechnic University. Original in Russian (Баграев Н. Т. Перспективы развития квантовых вычислений. Доклад на Втором международном семинаре «Базисные технологии первой половины XXI века (структурно-циклический анализ)». 1–2 октября. СПб.: СПбПУ им. Петра Великого).

Balabanov V. I. 2010. Nanotechnologies: Truth and Fiction. Moscow: Eksmo. Original in Russian (Балабанов В. И. Нанотехнологии: правда и вымысел. М.: Эксмо).

Barabash A. 2015. Scientists from the USA have Created the First Photonic Processor. Hi-News.ru, December 25. URL: https://hi-news.ru/technology/uchyonye-sozdali-dejstvuyushhij-fotonnyi-processor.html?utm_source=rne.... Original in Russian (Барабаш А. Учёные создали действующий фотонный процессор. Hi-News.ru, 24 декабря).

Barr A., Feigenbaum E. A. (eds.) 1981. The Handbook of Artificial Intelligence. 2 vols. Stanford, CA; Los Altos, CA: Heuris Tech Press, William Kaufmann, Inc.

Belousov D. 2016. Change of Technological Structure: New Formats of Business, State and Society. In Burov, V. V. (ed.), Challenge 2035 (pp. 17–41). Moscow: Olimp-Business. Original in Russian (Белоусов Д. Смена технологического уклада: новые форматы бизнеса, государства и общества. Вызов 2035 / сост. В. В. Буров. М.: Олимп-Бизнес, С. 17–41).

Business World. 2023. Flash-Forward: What is Artificial General Intelligence? Business World, February 2. URL: https://www.businessworldit.com/ai/artificial-general-intelligence/.

Campbell C. 2021. Inside the Taiwan Firm That Makes the World's Tech Run. Time, October 1. URL: https://time.com/6102879/semiconductor-chip-shortage-tsmc/. Accessed August 28, 2023.

Coupeau N. 2023. Re-memory, The AI that Helps Us Talk to the Dead. Greek Reporter, February 1. URL: https://greekreporter.com/2023/02/01/rememory-the-artificial-intelligence-that-helps-us-talk-to-the.... Accessed August 29, 2023.

Crevier D. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Datta A., Flammia S. T., Caves C. M. 2005. Entanglement and the Power of One Qubit. Physical Review A 72 (4) (October 18): 042316. DOI: 10.1103/PhysRevA.72.042316. Accessed August 30, 2023.

Drexler K. E. 1987. Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Press/Doubleday.

Drexler K. E. 1992. Nanosystems: Molecular Machinery, Manufacturing, and Computation. New York: John Wiley & Sons.

Drexler K. E. 2013. Radical Abundance: How a Revolution in Nanotechnology will Change Civilization. New York: Public Affairs.

Fuechsle M., Miwa J. A., Mahapatra S., Ryu Hoon, Lee Sunhee, Warschkow O., Hollenberg L. C. L., Klimeck G., Simmons M. Y. 2012. A Single-Atom Transistor. Nature Nanotechnology 7 (4) (April): 242–246. DOI: 10.1038/nnano.2012.21.

Gates B. 2023. The Age of AI has Begun: Artificial Intelligence is as Revolutionary as Mobile Phones and the Internet. URL: https://www.gatesnotes.com/The-Age-of-AI-Has-Begun. Accessed August 28, 2023.

Glyantsev A. 2019. The New Bracelet Reads the Owner's Emotions. Smotrim.ru, July 2. URL: https://smotrim.ru/article/1216271. Original in Russian (Глянцев А. Новый браслет считывает эмоции владельца. Смотрим.ру, 2 июля). Accessed August 30, 2023.

Glyantsev A. 2021. Big Brother: Our Emotions will be Tracked using Wi-Fi. Smotrim.ru, February 4. URL: https://smotrim.ru/article/2519509. Original in Russian (Глянцев А. Большой брат: наши эмоции будут отслеживать с помощью Wi-Fi. Смотрим.ру). Accessed August 31, 2023.

Grinin L. E. 2021a. The Fourth Technological Mode (the late 1940s – early 1980s): The Rise and Decline of Industrial Capitalism. In Grinin, L. E., Korotayev, A. V. (eds.), Kondratiev Waves: Technological and Economic Aspects (pp. 171–187). Volgograd: Uchitel Publishers. Original in Russian (Гринин Л. Е. Четвертый технологический уклад (конец 1940-х – начало 1980-х гг.): расцвет и закат промышленного капитализма // Кондратьевские волны: технологические и экономические аспекты / отв. ред. Л. Е. Гринин, А. В. Коротаев. Волгоград: Учитель, С. 171–187).

Grinin L. E. 2021b. Fifth Technological Mode (the 1980s – present). In Grinin, L. E., Korotayev, A. V. (eds.), Kondratiev Waves: Technological and Economic Aspects (pp. 187–193). Volgograd: Uchitel Publishers. Original in Russian (Гринин Л. Е. Пятый технологический уклад (1980-е гг. – настоящее время) // Кондратьевские волны: технологические и экономические аспекты / отв. ред. Л. Е. Гринин, А. В. Коротаев. Волгоград: Учитель, С. 187–193).

Grinin L. E. 2021c. Negative Rates and Other New Financial Technologies. Obshchestvo i ekonomika 2: 18–30. Original in Russian (Гринин Л. Е. Отрицательные ставки и другие новейшие финансовые технологии. Общество и экономика. № 2. С. 18–30. DOI: 10.31857/S020736760013634-6).

Grinin L. E., Grinin A. L. 2015a. Cybernetic Revolution and the Sixth Technological Mode. Istoricheskaya psikhologiya i sotsiologiya istorii 8 (1): 172–197. Original in Russian (Гринин Л. Е., Гринин А. Л. Кибернетическая революция и шестой технологический уклад. Историческая психология и социология истории. № 8(1). С. 172–197).

Grinin L. E., Grinin A. L. 2015b. From Choppers to Nanorobots. The World is on the Way to the Epoch of Self-Regulating Systems. Moscow: Moscow branch of Uchitel Publishers. Original in Russian (Гринин Л. Е., Гринин А. Л. От рубил до нанороботов. М.: Моск. ред. изд-ва «Учитель»).

Grinin L. E., Grinin A. L. 2021. Sixth Technological Mode (Forecast). In Grinin, L. E., Korotayev, A. V. (eds.), Kondratiev Waves: Technological and Economic Aspects (pp. 200–216). Volgograd: Uchitel Publishers. Original in Russian (Гринин Л. Е., Гринин А. Л. Шестой технологический уклад (прогноз) // Кондратьевские волны: технологические и экономические аспекты / отв. ред. Л. Е. Гринин, А. В. Коротаев. Волгоград: Учитель, С. 200–216).

Grinin L., Grinin A., Korotayev A. 2021. Does COVID-19 Accelerate the Cybernetic Revolution and Transition from E-government to E-state? In Grinin, L. E., Korotayev, A. V. (eds.), Kondratieff Waves: Processes, Cycles, Triggers, and Technological Paradigms (pp. 95–125). Volgograd: Uchitel.

Grinin L., Grinin A., Korotayev A. 2023. Global Aging and Our Futures. World Futures 79 (5): 536–556. DOI: 10.1080/02604027.2023.2204791.

Gutter R. S., Polunov Yu. L. 1981. From the Abacus to the Computer. Moscow: Znanie. Original in Russian (Гуттер Р. С., Полунов Ю. Л. От абака до компьютера. М.: Знание).

Harari Y. N. 2016. Yuval Noah Harari on Big Data, Google and the End of Free Will. Financial Times, August 26. URL: https://www.ft.com/content/50bb4830-6a4c-11e6-ae5b-a7cc5dd5a28c. Accessed August 29, 2023.

Kish L. B. 2002. End of Moore's Law: Thermal (Noise) Death of Integration in Micro and Nano Electronics. Physics Letters, Section A: General, Atomic and Solid State Physics 305 (3–4): 144–149. DOI: 10.1016/S0375-9601(02)01365-8.

Khel I. 2015. Ten Technologies that 2015 should be Remembered for. Hi-News.ru, March 8. URL: http://hi-news.ru/technology/10-texnologij-kotorymi-dolzhen-zapomnitsya-2015-god.html. Original in Russian (Хель И. 10 технологий, которыми должен запомниться 2015 год. Hi-News.ru).

Kornev A. 2017. Facebook Artificial Intelligence has Learned to Lie and Bargain. C-News, June 15. URL: https://www.cnews.ru/news/top/2017-06-15_iskusstvennyj_intellekt_facebook_nauchilsya_vrat. Original in Russian (Корнев А. Искусственный интеллект Facebook научился врать и торговаться. C-News, 15 июня). Accessed August 31, 2023.

McCarthy J. 1990. Artificial Intelligence, Logic and Formalizing Common Sense. URL: http://www-formal.stanford.edu/jmc/ailogic.pdf. Accessed August 29, 2023.

McCarthy J. 2000. Concepts of Logical AI. URL: http://www-formal.stanford.edu/jmc/concepts-ai.pdf. Accessed August 29, 2023.

McCarthy J. 2007. What is Artificial Intelligence? Computer Science Department, Stanford University. November 12. URL: http://www-formal.stanford.edu/jmc/whatisai/whatisai.html. Accessed August 29, 2023.

Mitchell T. 1997. Machine Learning. New York: McGraw-Hill.

Moore S. K. 2021. Cerebras' New Monster AI Chip Adds 1.4 Trillion Transistors: Shift to 7-nanometer Process Boosts the Second-Generation Chip's Transistor Count to a Mind-Boggling 2.6 Trillion. IEEE Spectrum, April 20. URL: https://spectrum.ieee.org/cerebras-giant-ai-chip-now-has-a-trillions-more-transistors. Accessed August 28, 2023.

MR. E. 2023. Your WiFi Can See You. Bombthrower, September 18. URL: https://bombthrower.com/your-wifi-can-see-you/.

Muraya O. 2022. Russian Application will Determine Burnout by Heart Rhythm. Smotrim.ru. URL: https://smotrim.ru/article/2812453. Original in Russian (Мурая О. Российское приложение определит выгорание по ритму сердца). Accessed August 29, 2023.

Proydakov E. 2016. Robotics in Twenty Years. In Burov, V. V. (ed.), Challenge 2035 (pp. 114–131). Moscow: Olimp-Business. Original in Russian (Пройдаков Е. Робототехника через двадцать лет // Вызов 2035 / сост. В. В. Буров. М.: Олимп-Бизнес, С. 114–131).

Romanov R. V. 2012. Classical Concepts for Solving Problems of Artificial Intelligence (Philosophical Aspect). Filosofskiye nauki 3: 241–247. Original in Russian (Романов Р. В. Классические концепции решения проблем искусственного интеллекта (философский аспект). Философские науки. № 3. С. 241–247).

Sandomirskaya Yu. 2021. Artificial Intelligence and Neuromorphic Computing: A Second Wind. Kommersant Nauka, November 30. URL: https://www.kommersant.ru/doc/5089550. Original in Russian (Сандомирская Ю. Искусственный интеллект и нейроморфные вычисления: второе дыхание. Коммерсант Наука. 30 ноября). Accessed August 30, 2023.

Stavitsky A. 2021. Scientists have Created an Error-Resistant Quantum Computer for the First Time. How will New Technology Change Science and Economics? Lenta.ru, October 14. URL: https://lenta.ru/brief/2021/10/14/quant/. Original in Russian (Ставицкий А. Ученые впервые создали устойчивый к ошибкам квантовый компьютер. Как новая технология изменит науку и экономику? Лента.ру. 2021. 14 октября). Accessed August 27, 2023.

Shanahan M. 1997. Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia. Cambridge, MA: MIT Press.

Shumikhin S. 2021. Artificial General Intelligence – the Search for the Holy Grail of Artificial Intelligence. Habr, April 2. URL: https://habr.com/ru/articles/550292/. Original in Russian (Шумихин С. Artificial General Intelligence – поиски Святого Грааля искусственного интеллекта. Хабр, 2 апреля).

Thakur Ch. S., Lottier Molin J., Cauwenberghs G., Indiveri G., Kumar K., Qiao Ning, Schemmel J., Wang Runchun, Chicca E., Olson Hasler J., Jaesun Seo, Shimeng Yu, Yu Cao, van Schaik A., Etienne-Cummings R. 2018. Large-Scale Neuromorphic Spiking Array Processors: A Quest to Mimic the Brain. Frontiers in Neuroscience 12. DOI: 10.3389/fnins.2018.00891.

Thomason R. 2003. Logic and Artificial Intelligence. In Zalta, E. N. (ed.), The Stanford Encyclopedia of Philosophy. Stanford, CA: The Metaphysics Research Lab. URL: http://plato.stanford.edu/entries/logic-ai/. Accessed August 29, 2023.

The White House. 2022. What is the Blueprint for an AI Bill of Rights? URL: https://www.whitehouse.gov/ostp/ai-bill-of-rights/what-is-the-blueprint-for-an-ai-bill-of-rights/. Accessed August 31, 2023.

Tsvetkov V. Ya. 2017. Moore's Law and Others. Mezhdunarodnyy zhurnal prikladnykh i fundamental'nykh issledovaniy 1–2: 370. URL: https://applied-research.ru/ru/article/view?id=11205. Original in Russian (Цветков В. Я. Закон Мура и другие. Международный журнал прикладных и фундаментальных исследований № 1–2. С. 370).

Turing A. M. 1950. Computing Machinery and Intelligence. Mind 59: 433–460.

UNESCO. 2021. Recommendation on the Ethics of Artificial Intelligence. URL: https://unesdoc.unesco.org/ark:/48223/pf0000380455.

VCUresearch. 2023. Chat GPT and the Talking Dog with Dr. Terry Sejnowski. YouTube. URL: https://www.youtube.com/watch?v=dZOEXNIrZLI.

Veldhorst M., Yang C. H., Hwang J. C. C., Huang W., Dehollain J. P., Muhonen J. T., Simmons S. et al. 2015. A Two-Qubit Logic Gate in Silicon. Nature 526 (7573) (October): 410–414. DOI: 10.1038/nature15263.

Wagner K. 2023. Lessons from the Catastrophic Failure of the Metaverse. The Nation, July 3. URL: https://www.thenation.com/article/culture/metaverse-zuckerberg-pr-hype/. Accessed August 31, 2023.

Zgurovsky M. Z., Zaichenko Yu. P. 2013. Fundamentals of Computational Intelligence. Kyiv: Naukova Dumka. Original in Russian (Згуровский М. З., Зайченко Ю. П. Основы вычислительного интеллекта. Киев: Наукова думка).

Zhang N. 2022. Moore's Law is Dead, Long Live Moore's Law! ArXiv. Preprint ArXiv:2205.15011.