Legal Protection of Electronic Contracts in the Era of Artificial Intelligence
Authors: Gridi M., Mouafek N.
Journal: Science, Education and Innovations in the Context of Modern Problems @imcra
Issue: No. 3, Vol. 8, 2025.
With the advent of the Fourth Industrial Revolution, artificial intelligence has undergone rapid development, increasingly transforming human life as the digital world evolves. This technology can disrupt the balance of the social environment, thereby influencing the behaviour of individuals and of society as a whole. Consequently, new legal challenges concerning privacy, security, and legal liability have emerged. These challenges have prompted states to develop legislative frameworks that balance the encouragement of technological advancement against the need to address the associated risks. This is achieved by establishing legal rules that govern the use of artificial intelligence, so that its benefits can be harnessed, its drawbacks mitigated, and the protection of electronic contractual transactions and of users' privacy across digital platforms preserved.
Keywords: Electronic contracts, Artificial intelligence, Legal protection, Informational privacy, Electronic communications
Short address: https://sciup.org/16010536
IDR: 16010536 | DOI: 10.56334/sei/8.3.61
Driven by modern technological progress and continuous scientific development, a new technology known as "artificial intelligence" has emerged as part of the Fourth Industrial Revolution. This technology will transform numerous aspects of human life across various fields, including the legal domain. Artificial intelligence is a branch of computer science aimed at developing programs and mechanisms capable of simulating human cognitive abilities and performing complex tasks. Scientists and researchers have long sought to create an artificial mind that approximates the human intellect, and this pursuit has yielded tangible outcomes that have benefited humanity as a whole. As a result, numerous artificial intelligence systems and technologies have proliferated. The emergence of artificial intelligence presents an opportunity to develop legal frameworks capable of keeping pace with scientific and social advancement, ensuring a secure framework for modern technological interactions and serving the interests of those subject to the law. Accordingly, most laws and regulations must be updated to align with this intelligent technology.2
Significance of the Study
This study is devoted to artificial intelligence and its relationship with electronic contracts, one of the most critical issues of the modern era, given its profound impact on individuals' rights and freedoms and the legal dilemmas this technology poses. The importance of the topic lies in highlighting the implications of using artificial intelligence without a legal framework, which may lead to numerous risks and violations. Keeping pace with global technological developments also requires establishing clear, precise, and specific legal rules to regulate the use of artificial intelligence. Such regulation is vital for defining privacy, accountability, and liability when this technology is employed and causes harm. This calls for a revision of the legal framework in line with technological advancements, particularly in light of new forms of technological transactions and novel crimes in the digital world.3
Research Problem
Therefore, the central research question addressed in this study is as follows: To what extent has the Algerian legislature regulated artificial intelligence within electronic transactions?
Objectives of the Study
This study aims to achieve several objectives, including the following:
– Clarifying the concept of artificial intelligence.
– Examining the legislative framework governing artificial intelligence and electronic contracts and highlighting the connections between them.
Research Methodology
This study adopts a descriptive approach and an analytical methodology that aligns with its content. This research involves describing the concept of artificial intelligence and analysing the relevant legislation to establish a sound legal foundation for regulating its uses. In addition, the comparative method is employed to contrast various legal systems concerning the regulation of issues related to artificial intelligence.
Section One: Artificial Intelligence and Its Role in Technological Innovation
Owing to its ongoing advancements, artificial intelligence is among the modern systems that have received considerable attention from various states. These developments underscore the significant and dynamic role of artificial intelligence in the future of technological innovation, as its impact extends to the development of numerous aspects of life.
Accordingly, this section first defines artificial intelligence and outlines its characteristics (Subsection One), then examines the negative aspects associated with its use (Subsection Two).
Subsection One: The Concept of Artificial Intelligence
This subsection defines artificial intelligence and then discusses its characteristics, as outlined below:
A. Definition of artificial intelligence
In one of his research papers, the computer scientist John McCarthy first used the term artificial intelligence (AI). He suggested that technological advancement could experience a qualitative leap if machines could solve problems requiring human capabilities, and he defined AI as "the science and engineering of making intelligent machines."4 Although the term artificial intelligence first appeared in 1956 and its technologies have only recently begun to proliferate, arriving at a universally agreed-upon and standardised definition remains challenging. This difficulty stems from the complexities of defining both human and artificial intelligence, and from the variety of perspectives from which artificial intelligence is approached. Intelligence, as the core element in achieving goals, differs in level and type among humans, machines, and animals.5 Artificial intelligence is a broad term that encompasses numerous fields. It focuses on developing computer systems that simulate various aspects of biological intelligence, namely human intelligence, such as speech recognition, language translation, visual perception, and decision-making. Accordingly, it may be defined as both a technology and a science concerned with developing techniques that enable machines to perform activities that typically require human intelligence.6
B. Characteristics of artificial intelligence
Artificial intelligence is distinguished by several attributes, features, and capabilities that set it apart from other technological systems. Among these are the following:
– The ability to extract knowledge and apply it effectively within the available resources.7
– The capacity to benefit from past experiences and to apply them more efficiently and rapidly in new situations.
– The ability to provide essential information for real-time decision-making.
– AI applications enable devices to recognise sounds and speech and to manipulate objects.
– AI systems can learn continuously in an autonomous and automated manner, without supervision or oversight.8
– AI technologies can process vast amounts of data and information.
– They allow individuals to interact with AI entities; the easier this interaction is, the greater the level of intelligence within the AI system.
– They possess the ability to innovate and perceive, as well as to learn from mistakes and improve performance through experience, namely through trial and error.
– Reasoning, that is, the ability to infer a specific idea or concept on the basis of the available documents and guiding data, which is essential for establishing electronic contractual relationships and smart contracts.9
– AI uses advanced techniques to solve problems or manage complex and challenging situations even without complete information. This characteristic is one of the most prominent features distinguishing artificial intelligence from traditional machines.10
– The ability to process conflicting data, that is, data that contain errors and inconsistencies.
Subsection Two: Negative Aspects of Artificial Intelligence Use
Despite the increasing use of artificial intelligence systems across various fields, this usage entails several negative aspects, some of which are outlined below:
– Artificial intelligence systems rely on vast amounts of personal data, raising serious security concerns related to data protection and privacy.
– These systems operate through machine-learning algorithms that often function as "black boxes," making it difficult to understand how decisions are made. This lack of transparency and accountability complicates the attribution of responsibility for AI-driven actions, as decision-making processes remain obscure.11
– The rapid evolution of artificial intelligence algorithms has raised significant concerns regarding the swift spread of technological advancements, which may result in unintended consequences such as the absence of accountability, the misuse of AI systems, and insufficient safeguards against harmful applications.12
– Artificial intelligence systems may engage in criminal behaviour, such as infiltrating stock exchanges and financial markets and violating their regulations. Other examples include defamation offences and various forms of legal infringement.13
– Among the most serious negative aspects of artificial intelligence are the risks posed by lethal autonomous weaponry. The New York Times reported that technology companies are attempting to develop AI-powered weapon systems that eliminate human oversight, posing a significant global threat.14 Moreover, artificial intelligence has already been employed in warfare: The Guardian reported that Israel used AI-based software, known as The Gospel, in its military operations in Gaza. The program is fed with data and information to autonomously select individuals for targeting in the Gaza Strip, and it is reportedly capable of predicting the number of civilian casualties in advance.15
Section Two: The Legal Framework for Artificial Intelligence and Electronic Contracts
Establishing a legal framework for artificial intelligence is imperative to ensure that AI promotes innovation while safeguarding individuals' fundamental rights. This section addresses two key topics: the legal personality of artificial intelligence and the liability of AI systems.
Subsection One: Artificial Intelligence and Legal Personality
This topic has given rise to two opposing perspectives: one supporting and the other rejecting the granting of legal personality to artificial intelligence systems.
A. Support for the Granting of Legal Personality to AI Systems
Some modern legal systems are leaning towards recognising artificial intelligence systems as having legal personality. This would involve granting robots and intelligent entities legal rights similar to those enjoyed by natural persons. A notable example is the Kingdom of Saudi Arabia, which granted citizenship to the robot "Sophia" in late 2017 and issued it a passport. This unprecedented measure raised numerous questions regarding the legal basis upon which it was granted and the potential legal consequences, particularly given that a name, nationality, and passport are rights typically reserved for natural persons possessing legal personality.
Specific perspectives have emerged in this context, suggesting the need to grant artificial intelligence systems a form of virtual legal personality. Under comparative civil law, legal personality is not exclusively reserved for human beings; it has also been extended to entities such as companies, associations, foundations, and endowments. Some legal systems have gone even further, granting legal personality to natural sites, such as the Te Urewera Forest and Mount Taranaki in New Zealand, on the basis of the notion that while all humans are persons, not all persons are human.
Supporters of this view argue that recognising the legal personality of intelligent robots is analogous to recognising the legal personality of legal persons. Just as a legal person is a fictional construct, the same can apply to an intelligent robot, which, under this approach, may be granted rights and obligations. Such recognition is not unprecedented, as the legal framework governing legal persons has in many respects been modelled on the framework for natural persons, including the recognition of fundamental rights.16
Some jurists in favour of this proposal point to its potential scope, suggesting that recognising the legal personality of intelligent robots would enable them to enter into contracts, particularly insurance contracts. This would entail granting them a distinct legal patrimony, financed through contributions made by the robot's designer, the owner upon purchase, and the user if different from the owner.
The main advantage of this proposal is that compensation for victims of harm caused by the robot would be swift and effective, given the existence of a dedicated financial fund linked to the robot's legal personality.17
B. Opposition to the Granting of Legal Personality to AI Systems
The idea of granting legal personality to robots has drawn significant criticism, most notably because of the lack of clarity regarding the scope of its application. For example, which types of robots would qualify for such legal personality? What boundaries should be drawn between the various categories of these devices? Moreover, the proposal is dangerous because it risks undermining the fundamental legal distinction between persons and things. Some even argue that equating robots with legal persons capable of bearing responsibility is legally and conceptually flawed.18
Recognising robots as legal persons could eventually lead to the attribution of fundamental rights similar to those granted to natural persons. This could give rise to serious and complex legal problems, even more so than those encountered in the context of legal persons. If views supporting the grant of legal personality to such systems prevail, natural persons may in the future find themselves confronted with fictitious electronic legal entities. Such a development could lead to grave distortions, including the evasion of liability by designers, manufacturers, owners, or users, who may attempt to shift responsibility onto the AI systems themselves.19
Subsection Two: The Legal Liability of Artificial Intelligence Systems
This part addresses both criminal liability for offences committed through artificial intelligence and civil liability for damage caused by such systems, as follows:
A. Criminal Liability for AI-Related Offences
The legal concept of crime is based on three elements: the legal element, the material element, and the moral (or mental) element. Under the legal element, criminal liability is determined according to criminal law. This is affirmed by Article 1 of the Algerian Penal Code, which states, “There is no crime, penalty, or security measure except by virtue of the law.”
The mental element involves the presence of criminal intent, whether general or specific, and in the absence of any of these elements criminal liability cannot arise. Assigning criminal liability to artificial intelligence systems is therefore difficult: while a crime's material element could be fulfilled through an AI system's malfunction or erroneous action, the moral element of criminal intent cannot be attributed to such systems, because artificial intelligence lacks legal personality and cannot possess the deliberate will necessary to commit a crime intentionally.20
Law No. 18-07 on the protection of personal data has established several mechanisms for safeguarding personal data during electronic transactions, even when innovative electronic communication media are involved. This law aims to protect the digital environment and to reinforce the freedom and security of electronic contracting in all its dimensions.
According to another perspective, an invention such as artificial intelligence has the potential to satisfy both the external element (the criminal act) and the internal element (criminal intent), and in practice AI can fulfil both requirements. On this view, no absolute legal barrier prevents the attribution of criminal liability to artificial intelligence. For an AI entity to bear such liability, it must be treated as a legal person, much like companies recognised as legal persons under the law. Accordingly, fundamental constitutional freedoms should be extended to artificial intelligence, similar to those granted to corporate entities.
The primary objective of this perspective is to acknowledge that, as artificial intelligence continues to advance, criminal liability for its actions should not be limited solely to the programmer or owner.21 This aligns with the position supporting the recognition of legal personality for AI systems. This would allow such entities to be held accountable for their actions and to bear liability for the harm they may cause. It would also entail establishing an independent financial patrimony for these systems to enable direct compensation for damage.22
The opposing view maintains that criminal liability can be attributed only to natural persons and cannot be extended to artificial intelligence systems, primarily because of the inapplicability of most criminal sanctions. Moreover, assigning criminal responsibility to AI conflicts with the principle of legality in criminal law, which stipulates that there can be no crime or punishment without an explicit legal provision. Additionally, punishment aims to achieve general and specific deterrence objectives that cannot be realised concerning artificial intelligence systems.23
From our perspective, holding artificial intelligence directly criminally liable is illogical, as AI is not a conscious entity with free will; it operates on the basis of programming and algorithms. Criminal liability requires the presence of intent in its general and specific forms. If an error occurs, it is a result of programming, not deliberate intent on the part of the AI. Therefore, criminal liability should rest with developers, programmers, users, owners, and even companies that control AI applications, as they can influence AI functionality and anticipate potential errors. Consequently, artificial intelligence should be treated as a tool utilised by natural or legal persons and cannot be held independently liable.24
B. Civil Liability for Damage Caused by Artificial Intelligence
Three types of liability can be distinguished in determining civil liability for damage caused by artificial intelligence: personal fault liability, liability for defective products, and liability of the custodian of a thing.
1. Civil Liability for Personal Faults
As mentioned, some jurists support granting legal personality to advanced artificial intelligence systems capable of making autonomous decisions. According to this view, such systems would bear civil liability for the damage they cause, whereby the robot would be obliged to compensate directly for any harm inflicted. These financial obligations would be fulfilled through a dedicated independent patrimony, funded by the AI system's designer, owner, or user.
Certain members of the European Parliament supported this position in the resolution of 16 February 2017 on Civil Law Rules on Robotics, yet it was met with strong opposition.
2. Liability for Defective Products
With respect to civil liability for defective products, responsibility for compensating for damage resulting from artificial intelligence falls primarily on the designer or manufacturer, whether they created the entire system or merely produced some of its components. In certain exceptional cases, liability may extend to the system's owner or user.
Although this approach is considered the most appropriate for determining liability for harm caused by artificial intelligence, the concept of a "defective product" becomes problematic in this context. AI is, by its very nature, an immaterial element, making it difficult for a harmed party to prove the existence of a defect in a non-physical product.25
3. Civil Liability of the Custodian of a Thing
In this context, Article 138 of the Algerian Civil Code governs civil liability arising from the custody of things. It provides that the burden of compensating for damage caused by a thing falls upon its custodian, owing to the three powers the custodian possesses over it: use, management, and control.
Accordingly, the person who controls the artificial intelligence system, or the custodian of the intelligent robot who has the authority to use, direct, or supervise it, is liable for compensating for any damage resulting from its use. While many robots and intelligent systems may operate autonomously and perform specific tasks independently of their custodian, they generally remain under the relative control of their users, who can typically shut them down when necessary. Therefore, liability for damage caused by the behaviour of AI systems remains with those who supervise or operate them.26
C. The Position of the Algerian Legislator on the Legal Regulation of Artificial Intelligence
Algeria is considered one of the pioneering countries in the digital domain through its promotion of technological innovation. Digitalisation has become a fundamental component of the national development strategy, aimed at integrating modern technologies into various aspects of life and ensuring full benefit from digital transformation opportunities. This has necessitated the modernisation of legal and legislative frameworks to keep pace with technological advancements.
Among the legal texts adopted by Algeria in response to technological development in the justice sector is Law No. 09-04, which sets out special rules for preventing and combating crimes related to information and communication technologies. This law criminalises behaviours associated with the misuse of information technologies.
The modernisation of justice has brought several updates to the justice sector, notably through Law No. 15-03 on the modernisation of justice. These include:
– The creation of a centralised information system for the Ministry of Justice and the electronic exchange of documents and judicial writs.
– The creation of a centralised database for criminal records.
– Electronic prosecution.
– The introduction of a unified electronic service desk at all judicial councils.
– Electronic file exchange between public prosecutors and investigating judges.27
– The use of remote video-conferencing technology during the investigation and trial phases, in accordance with Ordinance No. 20-04.28
Despite Algeria's progress in digitalisation and information security, there remains an absence of explicit legal provisions regulating the application of artificial intelligence. Although the technologies currently in use rely on AI-based algorithms, the Algerian legislature has not addressed the regulation of artificial intelligence in its legal texts, which constitutes a shortcoming in the legal framework governing this technology. This is despite the establishment of the National Higher School of Artificial Intelligence, which trains specialists in data science and artificial intelligence engineering. It is an issue that the Algerian legislature must address in the coming years.29
Conclusion
In conclusion, as discussed throughout this study, artificial intelligence has permeated all sectors, including electronic transactions, particularly electronic contracts. It plays a significant role in ensuring informational privacy and security and enhancing transparency and accountability. This contributes to achieving a balance between fostering technological innovation on the one hand and protecting human rights and preserving the public interest on the other hand. Among the key findings reached are the following:
– Artificial intelligence has added a new dimension to the digital revolution, fundamentally transforming how data and digital information are processed and thereby significantly improving quality of life.
– Adopting artificial intelligence technologies requires numerous safeguards to ensure their safe and ethical use, especially safeguards that protect personal data from unauthorised access or unethical use.
– States are rapidly embracing artificial intelligence systems across all areas of life to promote innovation and enhance technical efficiency. However, this rapid adoption is accompanied by the emergence of new risks at various levels, including a new category of AI-related crimes, many of which are not yet addressed by existing criminal laws except in limited cases.
– Current legal texts remain insufficient in the absence of specific legislation regulating artificial intelligence. This legislative gap underscores the urgent need for effective regulation of AI applications.
Recommendations
On the basis of the findings reached, the following recommendations are proposed:
– Strive to enact comprehensive legislation that addresses all aspects of the use of artificial intelligence technologies, particularly the protection of personal data. Such legislation should define the standards and limits for using AI technologies on the basis of the state's societal, religious, and ethical values.
– Amend existing domestic laws, particularly the Penal Code, the Civil Code, and the Intellectual Property Law, to keep pace with the modern development of artificial intelligence systems.
– Promote public awareness and foster technological and legislative advancement in parallel.
– Reassess the legislative framework governing electronic contracts, particularly in light of technological advancement.