26.03 13:44 - Why Artificial Intelligence is not Intelligence
Author: virtuals | Category: Other
Last modified: 26.03 13:48

Why Artificial Intelligence is not Intelligence


In recent years, the media has persistently flooded us with apocalyptic predictions about the dangerous development of Artificial Intelligence (AI). At the heart of most of them is the misconception that AI is genuine intelligence. At the same time, we are constantly told that AI thinks and understands better than humans do.
To debunk these myths about AI, we need to briefly examine the nature of today's AI.

The transformation of human logic and information into machine logic is a fundamental process in computer science and Artificial Intelligence. It involves the formalization, definition, and translation of mental matrices and mental processes into machine-compatible structures. This is not simply a machine copy of human intelligence, but a radical transformation of logic, mental matrices, and models into a digital reflection.

Human logic is complex, contextual, relative, and analog, permeated by all our experience, intuition, and common sense.
Digital translation begins with abstraction: reducing the specific problem to its important points, nodes, connections, and logical relationships.

Human thought is transformed into rigorous mathematical and/or logical systems.

• Boolean logic (AND, OR, NOT, IF-THEN, etc.) – the basis of digital circuits.
• Predicate logic – for representing objects, properties and relationships (For every X, if X is human, then X is mortal).
• Logical programming (Prolog) – here knowledge is expressed as facts and rules, and the machine can draw conclusions based on them.
• Lambda calculus and functional programming – the logic of functions and complex transformations.
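As a toy illustration (my own sketch, not from the source), the first two systems above can be written directly in Python:

```python
# Boolean logic: AND, OR, NOT operating on hard truth values.
a, b = True, False
print(a and not b)  # True
print(a or b)       # True

# Predicate logic: "For every X, if X is human, then X is mortal,"
# encoded as a rule over a small set of known humans (hypothetical data).
humans = {"Socrates", "Plato"}

def is_mortal(x):
    # the rule human(X) -> mortal(X), expressed as a function
    return x in humans

print(is_mortal("Socrates"))  # True
```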

From our linguistic (conceptual) thinking we move to algorithmization, which is the core of AI transformation.

• An algorithm is a finite, definite, ordered list of steps for solving a problem.
• Human empirics and heuristics (practical rules acquired from experience) are translated into heuristic algorithms (e.g. for finding the fastest free path).
• Inductive logic (from particular cases to general rules) is transformed into machine learning (data and expected results are given, and the machine derives its own "logical" rule/model).
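The last bullet – inductive logic becoming machine learning – can be sketched in miniature (invented data, my own illustration): given example pairs, the machine derives its own rule, here a line fitted by least squares.

```python
# Example pairs generated by a hidden rule y = 2x + 1.
# The machine never sees the rule, only the data.
data = [(1, 3), (2, 5), (3, 7), (4, 9)]

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)

# Closed-form least-squares fit: the "derived" rule y = a*x + b.
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
print(a, b)  # → 2.0 1.0
```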

There are limits to algorithmization and digital transformation.
Here lies the main problem. Much of human logic cannot be machine translated:

• Context and complex knowledge (deep background understanding is not taken into account).
• Paradoxes, metaphors, irony, etc. are translated into formal rules with great loss of meaning.
• Emotional and ethical logic is unattainable – "This is wrong, although it is completely logical" is digitally untranslatable.

For these and similar cases, complex approximations are used (for example, in the Large Language Models (LLM) of AI, which learn statistical patterns of human language and thoughts, without at all "understanding" them logically).

Digital transformation has different directions:

• "IF-THEN" rules (Expert systems): Direct recording and digital transformation of human expert knowledge by specialists.
• Decision Trees: Visual and hierarchical representation of decision-making logic.
• Formal verification: Translation of various requirements (for example, "the elevator should never move with an open door") into strict logical formulas that the system can check.
• Neural networks: A radically different approach - instead of explicit logic, implicitly formed logic is simulated by adjusting weights in a neural network based on millions of different examples. This is only a digital simulation of the result of logical thinking, not a translation of human logic itself.
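Two of these directions can be sketched in a few lines (hypothetical rules, my own illustration): explicit IF-THEN expert rules, and a safety requirement checked as a logical formula.

```python
def expert_diagnose(symptoms):
    # Expert system: human knowledge recorded directly as IF-THEN rules.
    if "fever" in symptoms and "cough" in symptoms:
        return "flu-like illness"
    if "fever" in symptoms:
        return "infection"
    return "unknown"

def elevator_safe(state):
    # Verification: the requirement "never move with an open door"
    # as a strict formula the system can check: NOT (open AND moving).
    return not (state["door_open"] and state["moving"])

print(expert_diagnose({"fever", "cough"}))                  # flu-like illness
print(elevator_safe({"door_open": True, "moving": False}))  # True
```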

The transformation of logic is expressed in:

• From Analog to Discrete. Human logic works with continuous spectra and nuances of thought. Machine logic works with hard discrete values (1/0, true/false).
• From Relatively Incoherent to Absolutely Consistent. People tolerate contradictions, relativity, uncertainty and ambiguity. For the machine, they are fatal errors, unless they are specifically set.
• From Holistic to Reductionist. A person usually sees the picture as a whole. The machine breaks it down into separate, sequential portions and operations.

Digital transformation is not like language translation, but rather it is the mapping and codification of one natural phenomenon (human intelligence) onto another artificial and completely different one (the digital machine). Here we lose in context, nuance, uncertainty, relativity and integrity, but we gain in speed, scalability and scope.

In modern AI (especially in machine learning) an attempt is made to circumvent this rough translation by learning directly from the data. This simulates the results of human logic without any understanding.
This is a powerful IT tool, but it also opens a deep gap between human and machine intelligence in their relation to reality.
Different directions of AI create their own, unique information structures that play technical roles, pseudo-analogous to mental matrices and models, in radically different ways and essence.

A Large Language Model (LLM) is based on neural networks and large databases.
Neural networks (especially deep neural networks) are the dominant idea in modern AI and are characterized by transferring a similarity of structures from the biological brain to mathematical and computational models. They are structured "bottom-up" through a data-driven approach, which is radically different from symbolic and expert AI systems.

Neural networks are akin to a powerful intuitive multiprocessor - very good at finding patterns in chaos, but lacking the ability to understand and logically explain the resulting solutions.
Their success lies in the digital simulation of the results of human intelligence, not in copying the mechanisms of human thinking themselves. Modern progress in AI is primarily due to a statistical, data-driven approach, which, however, raises serious questions about understanding, reliability, control and ethics.
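The "adjusting weights … based on examples" described above can be shown in miniature with a classic perceptron (my own sketch; the data and learning rate are invented): the machine ends up reproducing logical AND without ever being given the rule itself.

```python
# Training examples: inputs and the AND of the inputs.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0
for _ in range(20):                        # a few passes over the data
    for (x1, x2), target in examples:
        out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        err = target - out                 # nudge weights toward the data
        w1 += 0.1 * err * x1
        w2 += 0.1 * err * x2
        bias += 0.1 * err

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

print([predict(a, b) for (a, b), _ in examples])  # → [0, 0, 0, 1]
```

No explicit logic was written: the "rule" exists only implicitly, in the numeric weights.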

In AI, we can talk about two main types of "disease" (hallucination and deep delusion). This is done by analogy with the human psyche. Distinguishing them reveals a lot about the nature of artificial intelligence systems.

Here"s how we can distinguish "hallucinations" and "deep delusions" in the field of AI, assuming that we use the terms metaphorically to describe purely technical phenomena.

AI "hallucinations" are Factual errors arising from a statistical deviation in the data or model.
There is a generation of a convincing, but factually inaccurate (incorrect) answer. The model "invents" dates, names, events, sources, etc., that do not correspond to reality.
The reason for this is the lack of ground truth (reference reality) in training, excessive acceptance of correlations from different data, noise in the system or contradictions in the training set.
For example, AI often claims that "Albert Einstein received the Nobel Prize for the Theory of Relativity in 1921" (The truth is that he received the prize for the photoelectric effect then).

AI"s "Deep Delusion" is a metaphor for a much deeper systemic breakdown.
It is a systematic, consistently distorted AI "belief" that the model builds and defends in its internal logic, even when presented with contradictory data and accurate counterarguments.
This is not a single error, but a persistent, erroneous pseudo-mental model (Dogmatic Machine Information Matrices – DMIM), embedded in the parameters of the neural network. The model is not just wrong about facts – it builds a complex but false narrative or causal information structure.
These are systematic pseudo-biases rooted in biases in the training data, which lead to a fundamentally wrong pseudo-understanding of some ideas and concepts. This can also be the result of unexpected emergent behaviors in complex neural networks.
For example, a model trained primarily on corporate documents may develop a "delusion" that profit is the sole and absolute goal of all human action, and interpret all human motives only through this informational prism.
Another example - a model that has "learned" that correlation implies relationship, may enter into a "paranoid delusion" about causal relationships where none exist at all (e.g. "Because people carry umbrellas when it rains, umbrellas cause rain.").
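The umbrella fallacy is easy to reproduce numerically (toy data, my own sketch): in this dataset rain and umbrellas correlate perfectly, yet the numbers alone say nothing about which causes which.

```python
# Toy observations: 1 = event occurred on that day, 0 = it did not.
rain      = [1, 0, 1, 1, 0, 0, 1, 0]
umbrellas = [1, 0, 1, 1, 0, 0, 1, 0]

# Pearson correlation, computed by hand.
n = len(rain)
mean_r = sum(rain) / n
mean_u = sum(umbrellas) / n
cov = sum((r - mean_r) * (u - mean_u) for r, u in zip(rain, umbrellas)) / n
var_r = sum((r - mean_r) ** 2 for r in rain) / n
var_u = sum((u - mean_u) ** 2 for u in umbrellas) / n
corr = cov / (var_r ** 0.5 * var_u ** 0.5)
print(corr)  # → 1.0, yet umbrellas still do not cause rain
```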

The distinction is very important.
An AI "hallucination" is a localized "symptom" - a problem with a specific outcome. It can be corrected with better data, "grounding" techniques, or fact-checking.
"Deep Delusion" is a systemic "disease" - a problem with the AI"s own internal system model. Correcting it requires retraining, changing the architecture, or corrective intervention in the learning process to remove distorting pseudo-biases (DMIM).

A person with a hallucination (for example, caused by a high fever) sees something unreal that is not there, but can easily return to a normal state.
A person with deep delusions (paranoid disorder) builds a complete, alternative system of unrealistic beliefs that filters and interprets their entire internal reality (e.g., "Everyone is against me"). Here, treatment is much more difficult.

In AI, "delusions" are more frightening because they are systematic and programmed, not a random phenomenon. They can be silently present in thousands of applications, distorting AI decisions quite noticeably, but unnoticed.

In a direct analogy, we must recognize that AI can suffer from both "perceptual errors" (hallucinations) and "cognitive distortions" (deep delusions). This shows that complex information systems built to simulate intelligence can reproduce not only some of the strengths, but also some of the weaknesses of natural intelligence - but for fundamentally different, man-made reasons. This makes the task of creating reliable AI even more complex and responsible.

Delusion is not just a bug, but a fundamental feature of the statistical, probabilistic models that dominate AI today. It highlights the gap between human knowledge (related to experience and reality checking) and machine pseudo-knowledge (formed by modeling probabilities in available databases). Combating delusion is key to the evolution of more reliable AI systems.

Delusions and hallucinations are just a few of the fundamental problems of modern AI, especially systems based on machine learning and Large Language Models.

The problems stem from a common foundation: modern AI (especially statistical, data-based) does not understand the world, but forms complex models of correlations and compilations from data. It is a "master of statistics", but not of understanding the essence of things.

Nevertheless, AI does contain complex, hierarchical, associative information models that simulate some aspects of conceptual thinking.
However, they are not "concepts" or "thoughts". They are computational artifacts – matrices, vectors, graphs, rules, weights, etc. – that implement only functionality, without mental essence or content.

This distinction is critically important not only for philosophy, but also for practice. It protects us from excessive expectations and directs us to understand AI as a powerful, but completely non-mental system for processing information.

AI systems do not have concepts (mental matrices) as mental contents with a semantic load, as in humans. They have functional equivalents – information structures that perform a similar role in information processing.
Symbolic AI has syntactic equivalents of concepts (symbols).
Neural AI has statistical equivalents (vector elements).
Transformers have dynamic, contextual equivalents, etc.
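The contrast between these equivalents can be sketched as follows (hypothetical toy vectors, not real embeddings): a symbolic system only knows whether two tokens are equal, while a neural system measures graded similarity between vectors.

```python
import math

# Symbolic AI: "cat" is an opaque token; equality is all there is.
print("cat" == "cat")     # True
print("cat" == "kitten")  # False – no notion of "close in meaning"

# Neural AI: a concept's "equivalent" is a vector; similarity is a number.
# (Invented 3-dimensional vectors for illustration only.)
cat    = [0.9, 0.1, 0.3]
kitten = [0.8, 0.2, 0.3]
car    = [0.1, 0.9, 0.0]

def cosine(u, v):
    # cosine similarity: dot product over the product of magnitudes
    dot = sum(a * b for a, b in zip(u, v))
    mag = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / mag

print(cosine(cat, kitten) > cosine(cat, car))  # True
```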

This is the fundamental difference between the simulation of intelligent behavior and natural intelligent thinking and understanding. The model may have a complex internal geometry with pseudo-concepts that allow it to function, but this information structure is not connected to real experience, real sensory and motor skills, which are the basis of human concepts (mental matrices). These are pseudo-concepts without consciousness, pseudo-mental matrices without meaning.
We can only define them as Machine Information Matrices.

Machine Information Matrices manipulate data and information to reduce the uncertainty of the result.
Mental matrices function in the field of consciousness, intellect and reason and include cognition and awareness of reality.
Human mental matrices arise from biological evolution, sensorimotor experience, own intention and social interaction.
Machine information matrices arise entirely from an optimization procedure (gradient descent, likelihood maximization, etc.) toward a given goal (minimum error). They are the product of a computational, not a cognitive, process.
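That optimization procedure can be shown in its simplest form (a one-parameter gradient-descent sketch, my own illustration): a weight is nudged downhill on an error surface toward a given target; nothing in the process resembles cognition.

```python
target = 3.0   # the externally given goal
w = 0.0        # the machine's only "belief"
lr = 0.1       # step size

for _ in range(100):
    gradient = 2 * (w - target)   # slope of the error (w - target)**2
    w -= lr * gradient            # step against the gradient

print(round(w, 3))  # → 3.0
```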

In the Digital Environment in AI we see sliding on the surface of mental matrices, without access to their deep mental essence.
In this regard, great care must be taken about the influence of Virtual Reality and Virtual Unreality on people's lives (reality).

Until recently, Information Technologies were digital information tools that facilitated the collection, storage, processing and transfer of information, but did not replace human thinking.
However, the media now claims that AI will replace human thinking. That is, some influential people already plan for AI to be programmed to decide what and how to think instead of humans.
Although so-called AI is not (yet) intelligence at all, many people delegate to it not only the search and selection of information, but even thinking on certain issues and topics. Usually this concerns areas where these people are laypeople, amateurs, and non-specialists, which leads them to accept AI's results uncritically. In effect, they are already distorting and partially replacing their own thinking. The danger of digital distortion and replacement of reality is already a very tangible fact.

If you are a specialist in certain topics, you will quickly find that AI (an LLM) is still at the level of a functionally illiterate student. The difference between AI and a human student is that AI always finds some way to justify its mistake and, instead of stopping and correcting itself, continues to generate one wrong result after another. Often, when we give it verified, accurate data, AI ignores it or reviews it superficially and continues with its wrong results. Even when you give it a detailed, reasoned, accurate presentation, AI can continue to be wrong on the given topic.

Today, our own mental reflection of reality is increasingly formed by digital media streams and by information algorithmically selected, edited, and generated by AI-driven platforms and other digital environments.

The most significant difference between AI on the one hand and intelligence and reason on the other is that intelligence is a product of reason, i.e. reason stands behind the intellect. There is no artificial reason behind AI – only human intelligence. The latter, as we know (from Mental Matrices), is limited; i.e. the genesis of AI is limited from the start. Believing that AI can think and understand like humans is more dangerous for your development than believing in the Sun god...


The book Mental Matrices is available for free download here:
https://sfera.zonebg.com/knigi.htm

Articles on Information Psychology can be downloaded from here:
https://research.zonebg.com/

Original publication:
https://virtuals.blog.bg/drugi/2026/03/15/zashto-izkustveniiat-intelekt-ne-e-intelekt.1980724


