About Artificial Intelligence

In 2024, we were inundated with advertisements touting the possibilities of artificial intelligence (AI). Without a doubt, it is a highly valuable tool that deserves detailed study and careful scientific oversight. The promises are many, ranging from AI focused on narrow, specific tasks (which are more realistic to achieve) to the holy grail: artificial general intelligence, which has yet to be demonstrated even in the laboratory.

The idea behind AI lies in artificial neural networks: intricate mathematical algorithms capable of learning patterns from data. Currently, these algorithms can communicate fluently and retrieve information very quickly. Yet some problems persist, such as AI hallucinations, that is, confident answers that are simply not correct.
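As a minimal illustration of what "learning patterns" means here, a single artificial neuron can adjust its weights by gradient descent until it reproduces a simple logical pattern. This is a toy sketch of the underlying mechanism, not a description of any particular production system:

```python
# Toy sketch: one artificial neuron learning the logical AND pattern
# by gradient descent. Illustrative only; real systems stack millions
# of such units.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: inputs and the target output of the AND pattern.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # weights and bias, starting from zero
lr = 1.0                   # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error with respect to the weights.
        grad = (out - target) * out * (1 - out)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # the neuron has learned the pattern: [0, 0, 0, 1]
```

The neuron never "understands" AND; it only nudges numbers until its outputs match the examples, which is the sense in which these algorithms learn patterns.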

Like every dragon that glitters at the slightest glimpse of golden scales, the market decided to spearhead this revolution, pouring billions of dollars into research and increasing the capacity of these models. The problem is that no one knows for sure if this technology has the disruptive economic impact that was promised. The most down-to-earth analysts today are already questioning whether the bill for these investments can be paid in the future. Investment funds demand returns for this promise. In the US alone, hundreds of billions of dollars are being invested annually in AI, although some estimates vary. The European Union plans to mobilize up to 20 billion euros per year, combining public and private investments. In China, investments are also in the range of tens of billions of dollars. All these countries have plans to expand these investments in the coming years.

To justify such spending, a flurry of propaganda showcasing the qualities of AI is unleashed in the media. One promise is of an artificial intelligence so advanced that it will replace humans in productive tasks and scientific research. This would be the so-called artificial general intelligence, able to adapt to different contexts and problems much as the human brain does, while supposedly behaving in ways far superior to human intelligence.

However, one of the characteristics of intelligence is its intrinsic connection with society. If we took a human being and separated them from their peers, they would simply die of hunger or fall prey to other species. Evolutionarily, it only makes sense to analyze intelligence as a social factor, one that depends on a society and on other humans to prosper. Since everything is interconnected, society, in turn, imposes characteristics on this intelligence. Of course, the phenomenon has its particularities, and we cannot be reductionist and say that intelligence is solely a social phenomenon and not an individual one. It is merely a delineation: we single out one driving force that acts alongside others to produce the final result.

When human beings established themselves in societies, a series of factors molded our intelligence. We began to face obstacles related to how to establish good cultivation to obtain a fruitful harvest and how to set rules for communal living. Solving these problems requires the coordination of the necessary human resources for such feats. The intelligence that was previously used for individual purposes—such as obtaining food, hunting alone, discovering water sources, and finding the best ways to gather fruits, activities typical of non-social species—was transformed, through the necessity of social cooperation, into a more abstract intelligence. This new intelligence is capable of understanding the best ways to perform more complex tasks in strict collaboration with others.

The principle behind increasing intelligence, as addressed by scientists today, faces debates and challenges. While boosting computational power, energy, and data quantity has driven significant advancements, there is a growing recognition that merely scaling resources will not solve all AI problems. Innovative approaches and more efficient algorithms are needed to overcome practical limitations such as high costs, energy sustainability, and diminishing efficiency.

Another issue that needs to be addressed is the subjectivity of what would constitute a boundlessly scaling intelligence, say, IQ 10,000²² Turbo+, whatever that may be. Suppose this intelligence is about discovering new things and applying them to the real world; for now, at least, machines lack the autonomous research capabilities and sensory experiences needed to identify patterns, and they do not understand the limits of what is practical and feasible. They require constant human supervision for occasional corrections. One recognized aspect of human intelligence comes from being part of a society, with the underlying moral factors that implies.

Furthermore, it is important to consider that the real world is not based solely on quantitative additions but on dialectical principles that show us that increases in quantity lead to changes in qualitative states. For example, by gradually adding water to a glass, initially, only the level of the liquid changes; however, upon reaching the brim, any additional drop causes an overflow, qualitatively changing the situation. Another example is heating water: by gradually increasing the temperature (quantity), the water remains liquid until it reaches 100 °C at sea level, when it undergoes a state change and transforms into vapor (quality). These examples illustrate that quantitative increases can lead to qualitative transformations. However, these qualitative changes do not necessarily signify improvement; they can also represent deterioration or aggravation. For instance, the excessive accumulation of CO₂ in the atmosphere not only quantitatively increases greenhouse gases but also leads to drastic climate changes, negatively impacting the environment.
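The water example can be put in miniature code form, a sketch of how many quantitative increments leave the state untouched until one more produces a qualitative jump (the boiling point here is the sea-level value from the text):

```python
# Toy sketch of quantity turning into quality: raising a quantitative
# variable (temperature) one degree at a time until a qualitative state
# change occurs (liquid water becomes vapor at 100 °C at sea level).
BOILING_POINT_C = 100.0  # sea-level boiling point

def state_of_water(temp_c: float) -> str:
    """Return the qualitative state for a given quantitative temperature."""
    if temp_c < 0.0:
        return "solid"
    if temp_c < BOILING_POINT_C:
        return "liquid"
    return "vapor"

states = [state_of_water(t) for t in range(90, 105)]
# Ten one-degree increments (90-99 °C) change nothing qualitatively...
print(states[:10])   # ['liquid', 'liquid', ..., 'liquid']
# ...then a single further increment flips the state.
print(states[10])    # 'vapor'
```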

Current research seems to be betting on an astronomical recipe: just keep increasing the values of the variables, and all problems will be solved. In practice, this approach ignores dialectical principles and the qualitative changes that may occur, which do not necessarily bring improvements (they may even worsen the situation) and do not necessarily lead to significant advances in understanding or in artificial intelligence itself. AI training is itself currently going through a phase of feedback loops, in which data generated by some AIs feed the training of new AIs. This could end up completely disconnecting these models from reality.
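The feedback-loop risk can be sketched with a toy model (an illustrative assumption, not any real training pipeline): treat a "model" as just a mean and spread fitted to data, and train each generation only on samples drawn from the previous generation's model. The diversity of the data collapses over the generations:

```python
# Toy sketch of training on model-generated data. A "model" here is a
# mean/spread pair fitted to its training set; each new generation is
# fitted only to samples produced by the previous generation's model.
import random
import statistics

random.seed(0)

# Generation 0: "real" data drawn from a standard normal distribution.
data = [random.gauss(0, 1) for _ in range(10)]

stds = []
for generation in range(200):
    mu = statistics.fmean(data)     # "train" on the current data
    sigma = statistics.pstdev(data)
    stds.append(sigma)
    # The next generation sees only the current model's own output.
    data = [random.gauss(mu, sigma) for _ in range(10)]

print(f"spread of generation 0:   {stds[0]:.4f}")
print(f"spread of generation 199: {stds[-1]:.10f}")
# The spread shrinks toward zero: each generation preserves only what
# the previous model happened to sample, a toy analogue of models
# drifting away from the original reality.
```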

Moreover, the “more is better” model seems to have already reached its limit in the natural world; the human brain has remained the same for at least 300,000 years. If the problem were simply adding processing power, the evolution of species would likely have attempted this model.

Intelligence has an omnipresent evolutionary aspect. It is a factor of the natural world. We perceive this quality differently from other species for several reasons: opposable thumbs; walking on two limbs, which frees the hands for task execution; the primitive human society that allowed us to hunt in partnership with other members of our tribe, giving us access to rich sources of proteins and calories; the development of language and culture. All these are phenomena anchored in the real world that allowed the evolution of human intelligence. This is intrinsically connected to our life in society and the tools we have to modify and experience the world around us. Human intelligence did not come into existence because one neuron was added to another in a constant cycle. It is the result of the evolutionary advantage that this tool gives us over other species.

In contrast, Artificial Intelligence does not have an actively participative function; it is merely a tool that needs to be activated and accessed to generate results. It does not have a genuine interest in the improvement of society because it does not participate in one. It is not an active member; it has no interests. Its achievements in the physical world are unknown to it. It does not directly benefit from its interaction with humans. Therefore, its only capacity to evolve lies exclusively in the human ability to solve problems, and its limitations will remain as long as this scenario persists.

In any case, two problems not directly related to AI science need to be addressed first. In a society that needs less and less human labor, we must understand the social impact this represents. The other is the amount of money and enthusiasm flowing into the sector, which could inflate another economic bubble like the one behind the 2008 US banking collapse.

