Algorithmic Reality and the Myth of Neutrality: Social Informatics Perspectives on AI‑Mediated Society

Authors: Momčilo Bajac, Milica Njegovan Gužvica

Journal: Social Informatics Journal @socialinformaticsjournal

Issue: Vol. 4, No. 2, 2025.


The rapid expansion of artificial intelligence (AI), algorithmic systems, and platform-based communication technologies is fundamentally changing the way social reality is produced, perceived, and interpreted, particularly among younger generations growing up in digitally mediated environments. This paper aims to demonstrate that the assumption of algorithmic neutrality represents one of the most prominent and dangerous myths in contemporary technological discourse. By uncritically accepting algorithms as objective, value-free, and merely technical instruments, societies risk obscuring their profound normative, epistemological, and political effects. As a conceptual-theoretical paper drawing on insights from social informatics, philosophy of technology, and critical AI studies, we offer a conceptual framework for understanding AI as a generator of mediated reality, rather than merely a passive tool. The paper is intended as a reflective guide for authors and researchers, encouraging critical engagement in exploring the consequences of AI-driven systems across all spheres of social reality.


Keywords: Artificial intelligence, social informatics, algorithmic bias, mediated reality, youth, digital society

Short URL: https://sciup.org/170211463

IDR: 170211463   |   UDC: 004.8:004.021]:316.3   |   DOI: 10.58898/sij.v4i2.01-10


Introduction: From the Information Society to Algorithmic Reality

Modern societies have entered a stage in which digital technologies not only support social processes but actively structure them. Social networks, recommendation systems, generative AI models, and automated decision-making tools increasingly determine what individuals see, read, learn, and believe. In this context, reality itself becomes technologically mediated. For younger generations, often described as digital natives, this mediation is not experienced as an external intervention but as a natural framework of existence.

In such an environment, algorithms function as invisible social actors. They filter information, rank relevance, predict preferences, and shape attention. Google, a pioneer of surveillance capitalism, uses AI to transform data into highly profitable algorithmic products that predict and modify user behavior. This creates the foundation for unprecedented instrumental power, with far-reaching social and political consequences (Zuboff, 2019).
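To make this filtering mechanism concrete, the following minimal sketch (purely illustrative, in Python) shows how an engagement-driven ranker can turn observed behavior into a self-reinforcing selection of content. All fields, weights, and the scoring rule are invented assumptions for the example and do not describe any actual platform.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """A piece of content competing for attention (hypothetical fields)."""
    title: str
    topic: str
    predicted_engagement: float  # the model's estimate of click/watch probability

@dataclass
class User:
    """Minimal user profile: counts of past interactions per topic."""
    interactions: dict = field(default_factory=dict)

def rank_feed(user, items, top_k=3):
    """Score items by predicted engagement, boosted by the user's past behavior.
    Topics the user already clicked on are ranked higher, which produces more
    clicks on those topics: a simple attention feedback loop."""
    def score(item):
        affinity = user.interactions.get(item.topic, 0)
        return item.predicted_engagement * (1.0 + 0.5 * affinity)
    return sorted(items, key=score, reverse=True)[:top_k]

def register_click(user, item):
    """Feed the observed behavior back into the profile used for ranking."""
    user.interactions[item.topic] = user.interactions.get(item.topic, 0) + 1

# Toy run: after a few clicks on one topic, the feed narrows around it.
user = User()
catalog = [
    Item("Election explainer", "politics", 0.40),
    Item("Outrage clip", "politics", 0.70),
    Item("Science feature", "science", 0.55),
    Item("Local news", "local", 0.35),
]
for _ in range(3):
    feed = rank_feed(user, catalog)
    register_click(user, feed[0])  # the user clicks the top-ranked item
print([item.title for item in rank_feed(user, catalog)])
```

The point of the toy example is only that "relevance" here is not discovered but produced: the ranking criterion manufactures the very pattern of attention it then appears to detect.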

  • *Corresponding author: Momčilo Bajac, momcilo.bajac@famns.edu.rs

    © 2025 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ( https://creativecommons.org/licenses/by/4.0/ ).

Social informatics, as a field concerned with the mutual interaction between information technologies and society, is directly called upon to examine the consequences of this interaction. The central question is no longer whether technology affects society, but how deeply algorithmic systems participate in constructing social reality.

The Myth of Algorithmic Neutrality

One of the dominant narratives accompanying the adoption of artificial intelligence is the claim of neutrality. Algorithms are often presented as objective, rational, and immune to human biases. This narrative is deeply misleading. Algorithms are designed, trained, and applied within specific social, economic, and cultural contexts (Filipović et al., 2022). Their outcomes inevitably reflect the values, assumptions, and power relations embedded in the data, design choices, and institutional objectives.
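A minimal, purely illustrative sketch of this point is given below: a trivially simple "model" that merely estimates acceptance rates from historical records will faithfully reproduce whatever disparity those records contain. The dataset, groups, and decision rule are hypothetical and stand in for no real system.

```python
from collections import defaultdict

# Hypothetical historical decisions: (group, qualified, hired).
# The records encode a past practice in which equally qualified candidates
# from group B were hired less often than those from group A.
history = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 50 + [("B", True, False)] * 50
)

def fit_rates(records):
    """'Train' by estimating the historical hiring rate per group."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, _qualified, was_hired in records:
        total[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def recommend(rates, group, threshold=0.6):
    """Recommend hiring when the learned group rate clears the threshold."""
    return rates[group] >= threshold

rates = fit_rates(history)
print(rates)                   # {'A': 0.8, 'B': 0.5}
print(recommend(rates, "A"))   # True  -- advantage inherited from the data
print(recommend(rates, "B"))   # False -- disadvantage inherited from the data
```

Nothing in the sketch is explicitly malicious; the asymmetry enters solely through the data the system is built to reproduce, which is precisely why the claim of neutrality is misleading.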

The myth of neutrality becomes particularly problematic when algorithmic systems are entrusted with epistemic authority, deciding what counts as relevant information, credible knowledge, or acceptable behavior, especially in sensitive situations involving personal data and information (Bajac & Bjelajac, 2022). In such cases, algorithms do not merely process reality; they actively produce it. Neutrality therefore serves as an ideological shield that conceals responsibility and discourages critical scrutiny of the constructed reality.

This myth of algorithmic neutrality is closely intertwined with the myth of the “purity” of artificial intelligence and its ecological neutrality, which has been clearly and convincingly demystified by Kate Crawford in her book Atlas of AI (Crawford, 2021).

AI-Generated Content and the Construction of Meaning

A defining feature of the current technological moment is the mass production of AI-generated content. Texts, images, videos, and even social interactions are increasingly synthesized by machine learning models. For younger users, distinguishing between human-generated and AI-generated content is often irrelevant (Mishra et al., 2025).

This raises profound questions for social informatics. When meaning is algorithmically produced, traditional assumptions about authorship, authenticity, and intent are, to say the least, questionable. Reality becomes partially simulated yet socially effective. Individuals respond emotionally, cognitively, and behaviorally to content regardless of its origin (Carr, 2010). In this sense, AI systems function as co-creators of social reality.

Simulation, Post-Truth, and the Hybridization of Reality

The emergence of AI-generated content intensifies what Jean Baudrillard described, long before the advent of contemporary AI, as the era of simulation: a state in which signs no longer represent reality but replace it (Bodrijar, 1985). In algorithmic environments, simulations are not merely symbolic representations; they are operational structures. They guide behavior, shape expectations, and organize social interaction. Reality, in this sense, becomes performative.

This fact can be productively combined with Berger and Luckmann’s theory of the social construction of reality (Berger & Luckmann, 1966). While earlier forms of socialization relied on relatively stable institutions (family, education, media), algorithmic systems introduce a fluid, continuously adaptive layer of reality construction. Social meanings are no longer stabilized through shared narratives but are dynamically recalibrated through data-driven feedback loops.

Luciano Floridi’s concept of the infosphere further clarifies this transformation (Floridi, 2018). Human existence increasingly unfolds within an informational environment where distinctions between online and offline, artificial and natural, human and machine, lose analytical clarity. Within the infosphere, AI systems act as epistemic agents: they not only transmit information but actively participate in shaping what can be known, remembered, and valued.

Youth, Perception, and Algorithmic Socialization

From the perspective of social informatics, this phenomenon calls for a shift from explanations at the individual level to systemic analysis. The question is not whether young people critically assess information, but whether the informational environments in which they live even allow such critical distance to emerge.

From Media to Environment: The Change in the Ontological Status of Technology

For previous generations, technology was a tool (TV, radio, computer) or a mediator of reality. For younger generations (digital natives), technology is the environment – a primary reality, not a secondary representation. Here, we see a drastic, paradigmatic ontological shift in which reality is no longer equivalent to the physical world. Reality becomes a hybrid of physical, digital, and algorithmic elements.

Beyond these immediate effects, this has direct implications for democracy, education, and political participation. New technologies, social networks, and artificial intelligence do not merely change how younger generations communicate; they actively reconfigure how reality, identity, and authority are perceived. The challenge facing society as a whole is no longer the adoption of technology but the ethical and institutional design capable of preserving human autonomy within algorithmically mediated environments.

Digital Environment vs. Evolutionary Brain Development

Before the digital revolution, information arrived linearly and within natural rhythms (speech, reading, observing the environment), and the brain evolved to filter out the unnecessary and focus on what was relevant for survival and social interactions. In a digital environment, information is multisensory, hyperfast, and fragmented. Constant notifications, feeds, video clips, and interactive content stimulate the dopamine network and reward short-term attention. They interrupt deep, focused thinking and prolonged information processing (deep work).

Working Memory and Cognitive Capacity

Information processing in the brain involves selection, filtering, focus, and heuristic shortcuts. This is a system developed over millennia for controlled attention and problem-solving in the real world. When the volume of input (information) exceeds the capacity of working memory, overload occurs, resulting in reduced decision-making ability, decreased concentration, and fragmented attention (Mayer, 2005).

Impact on Children and Adolescents

Children may become hyper-stimulated, exerting increased mental effort to filter “important from unimportant.” Rapid selection of information, where the brain favors fast, superficial conclusions, reduces critical thinking and reflection, while excessive multitasking decreases working memory capacity, affecting planning, self-control, and long-term focus. Although the brain is flexible and allows adaptation to new conditions, the evolutionary framework was optimized over millennia for linear and contextual processing (Harari, 2015).

Artificial Intelligence, Power, and the Risk of Total Mediation

The popular metaphor of The Matrix, a fully simulated reality that its inhabitants accept as natural, gains new significance in the context of AI-driven societies. Although often dismissed as science fiction, the Matrix metaphor functions as a powerful heuristic for understanding technologically mediated reality (Langoday et al., 2025). It is important to note that such a state does not require a unified, totalitarian system of domination. Partial, fragmented, and normalized mediation can be even more effective.

In this sense, contemporary societies may already inhabit a “soft Matrix”: a reality continuously curated by algorithms. Individuals are not violently separated from reality; rather, reality itself is gradually reconstructed around them. This process closely aligns with the logic of post-truth societies, where emotional resonance, visibility, and algorithmic amplification surpass factual accuracy.

Here, the myth of algorithmic neutrality plays a crucial stabilizing role. By framing algorithmic outputs as objective reflections of reality, mediation systems become epistemically invisible. Users experience algorithmically generated environments as neutral, natural, and inevitable. Over time, this erodes the distinction between representation and reality itself, replacing critical distance with adaptive immersion.

For civilization to “slide” into the Matrix, it is not necessary to fully simulate reality; structuring the conditions of perception is sufficient. Thus, younger generations do not “enter” the Matrix. They grow up within it. Therefore, the most dangerous aspect of AI is not the autonomy of machines, but the transformation of humans who no longer know where their perception ends and the system-Matrix begins. The Matrix is not a technical but an epistemological and emotional prison. Within it, people do not lose freedom by force but by surrendering the interpretation of reality to algorithms. If people lose the ability to create meaning for themselves, AI will not need to control them. They will adapt and submit to the system-Matrix (Langoday et al., 2025).

When discussing the search for meaning in new conditions, Yuval Noah Harari starts from the assumption that a technologically mediated future, especially one shaped by AI, dismantles traditional sources of meaning: religion, nation, ideology, profession, and even personal identity (Harari, 2024). According to Harari, AI systems will not only replace human labor but will surpass humans in many cognitive and creative domains, challenging the idea of human uniqueness. In this context, his central question is not how to control AI, but how to preserve human meaning, autonomy, and dignity in a world where algorithms know more about us than we do ourselves.

According to Harari, meaning is not given by external authorities but learned. Future generations will not inherit ready-made narratives but will need to develop mental flexibility and the ability for constant “reinvention”. The most important skill becomes self-awareness – not technical expertise, but the ability to understand one’s own emotions, fears, and motives. The danger is not AI itself but the manipulation of attention. If algorithms take control of our attention, emotions, and decisions, humans can become “hacked organisms”.

New Reality vs. the Old Economic Paradigm – Do We Have Solutions for the Situation We Are Facing?

Why is responsibility, especially self-responsibility, more necessary today than ever? Forecasts from research institutions and experts suggest that, over the next five years, a significant share of the global workforce could be left without employment due to rapid, paradigmatic change. According to Yuval Noah Harari, the central challenge of an AI-mediated future is not technological domination but the preservation of human meaning in a world where algorithms increasingly shape perception, attention, and decision-making. In a world governed by intelligent systems, meaning becomes a human responsibility rather than a structural guarantee.

In this context, his often-criticized concept of the “useless class”, a class of economically irrelevant, unemployable people, is relevant. This is not a moral judgment but a structural diagnosis, a warning. Harari does not claim that people are useless as human beings; rather, in a system based on the market value of labor and competitiveness, a large number of people can become economically irrelevant. Industrial capitalism created jobs, digital capitalism rationalizes them, and AI capitalism eliminates them faster than it creates new ones. The problem is not unemployment as such, but persistent unemployability.

Smith’s “invisible hand of the market” operates under the assumption that human labor creates value, that resources are scarce, and that interests are coordinated through the market. The AI paradigm breaks the link between labor and value, reduces scarcity in the informational sphere, and introduces coded coordination in place of market coordination. Attempting to plan the AI era solely through Smithian categories is therefore historically anachronistic. This is a cultural, not a technical, problem.

A new economy of ownership is necessary. If AI will produce value in the new reality, who owns the AI? Here there is space for DAO (Decentralized Autonomous Organization) models, for collective ownership of algorithms similar to the Bitcoin community (Bajac & Bjelajac, 2022), and even for AI as a public good.

New social narratives are also required. Without a new narrative, the “unemployed” remain “unsuccessful,” and universal basic income (UBI) remains social assistance rather than a civilizational step forward. Within the logic of capitalism, UBI still assumes the market, money, and consumption as sources of meaning; it does not change the ontology of labor but bypasses it. The problem of funding UBI is not trivial, and there is currently no clear solution. Without changes in the ownership of the means of production, within the logic of capitalism defined by Adam Smith in The Wealth of Nations (1776), UBI becomes a reluctant measure: a redistribution of profits unnatural to capitalism, not a transformation of the system. In other words, technology runs ahead of social theory, economics is placed above ethics, and politics runs ahead of a deep understanding of the new reality.

The new reality currently has no complete solution for the “unemployable class,” into which a large number of young people could easily “slide” if they fail to take their responsibility for seeking meaning seriously in a post-authority and post-truth world. The creators of the “new reality” are still thinking in old economic terms about a new technological ontology.

New Paradigm, the Great Reset, and the Question of Human Nature

Klaus Schwab, in The Fourth Industrial Revolution (Schwab, 2016), suggests a new paradigm emphasizing the convergence of physical, digital, and biological systems. From the perspective of social informatics, this convergence signifies a transition to hybrid reality: a socio-technical state in which human cognition, machine intelligence, and material environments co-produce social experience. But it does not stop there.

By erasing the boundary between physical, digital, and biological, the conditions under which identity, autonomy, labor, the body, and consciousness are formed are changing. While he implies that human nature itself will change, Schwab does not explicitly claim, in a strictly biological sense, that “humans will cease to be human.” His thesis is subtler but far-reaching. In other words, Schwab does not start from the classical philosophical definition of “human nature” but from functional anthropology: how people perceive the world, make decisions, communicate, and experience their body and mind. In this sense, a change in human nature does not necessarily mean a change in DNA (although that is no longer excluded), but a change in the mode of being in the world.

Convergence of Physical, Digital, and Biological: What Specifically Changes?

  •    Physical–Digital Convergence: Always-on connectivity, algorithmic perception of reality, externalization of memory, attention, and orientation, inevitably leading to cognitive dependence on technical systems.

  •    Digital–Biological Convergence: AI in medicine (diagnostics, predictive models), neurotechnologies, bioinformatic personalization (genetics, mRNA therapies). As a result, the body becomes a datafied system.

  •    Physical–Biological Convergence (nanomedicine, biomedicine): Implants, biohacking, life-extension, modification of bodily functions. Consequently, the boundary between natural and artificial bodies blurs, producing hybrid humanity, promoted by the ideology of transhumanism.

Although there are different approaches to defining human nature (essentialist, historical-constructivist, and techno-anthropological), the Fourth Industrial Revolution and its underlying technologies do not directly change human nature, but radically alter the conditions of its formation. When perception, memory, decision-making, and embodiment are systematically delegated to technological systems, the boundary between human and technical becomes ontologically unclear. If human nature has historically always been shaped by the environment, then a highly technological, algorithmically mediated environment has the potential to produce a new form of the human subject. Not biologically different, but epistemologically and existentially transformed.

Technology has no ethics, purpose, or compassion. It only has an internal logic of optimization. Meanwhile, young people remain without clear narratives of meaning, without value authorities, and without “initiatory” education for the new reality.

The Great Reset – Long-Awaited Utopia or Uncertain Dystopia?

Should humans transfer at least part, if not all, of their sovereignty to AI-led technology? Viewed from a historical perspective, humans have continuously produced violence, exploitation, war, and suffering, inflicting pain on one another. If intelligence beyond human passions assumes part of decision-making, chaos may decrease, resource allocation may be rationalized, and tribalism may weaken. On the other hand, intelligence without ethics can become the perfect ruler of injustice, a cold optimizer of suffering. There is a danger that evil will not disappear, but rather become more efficient and invisible.

We are currently entering a period that could be called an anthropological transition, which some may term the Great Reset (neither the first nor the last). It is akin to the transition from hunter-gatherer to agricultural societies or from feudalism to the industrial age (Harari, 2015). During major civilizational transitions, “old wisdom” is often lost or forgotten, violence becomes inevitable, and meaning temporarily disappears. However, the difference today is the speed of paradigm change. Unlike previous transitions, this one occurs within a single generation.

Who controls these changes, and why is the intellectual elite mostly silent?

The world is too complex, driven by many intertwined interests. No one fully controls the changes, but a small number of actors significantly steer them. We are facing what can be described as an alarming concentration of technological power without corresponding political responsibility. The most significant actors are:

  •    Silicon Valley, which holds infrastructural power (know-how, computing power, data, platforms).

  •    Capital, investment funds driven by a growth-at-all-costs logic.

  •    States, mostly reactive rather than proactive.

  •    Military-security sector, quiet but deeply present.

  •    AI systems, which already produce unexpected consequences even for their creators (Harari, 2024).

  •    Individuals such as Peter Thiel, Elon Musk, Sam Altman, and Bill Gates, often called “world rulers,” acting as accelerators, narrative architects, and legitimizers of change. They ride the wave of the new technological paradigm, but the wave is no longer entirely under their control.

Why is the intellectual class largely silent?

  •    Status corruption: Academia has become bureaucratic, grant-oriented, and politically cautious. Radical questions are risky for careers.

  •    Obsolete epistemological tools: Most social sciences think linearly, fail to understand nonlinear systems, and lag (at least) a decade behind technology.

  •    Fear of technological irrelevance: Many feel they are losing their monopoly on, and control over, knowledge, especially technological knowledge. It is easier to remain silent than to admit no longer being a central interpreter of the world.

Implications for Research and Public Discourse

For researchers aiming to contribute meaningfully to the Social Informatics Journal, studies on the impact of new AI-led technologies should not be limited to technical performance or economic efficiency. They must address questions of meaning, power, and social transformation.