193 countries adopt first global AI Ethics Agreement. What does this mean for developers?

On November 26, 2021, 193 member states of UNESCO signed a document that defines the values and principles necessary for the development and use of artificial intelligence (AI).

People use AI-enabled technologies in everyday life — from booking flights and applying for loans to driverless vehicles, medical diagnostics, and even decision-making by national governments. However, according to UNESCO, these technologies also bring unprecedented challenges.

“We see increased gender and ethnic bias, significant threats to privacy, dignity, and agency, dangers of mass surveillance, and increased use of unreliable AI technologies in law enforcement. Until now, there were no universal standards to provide an answer to these issues,” reads the UNESCO statement.

So, the document adopted by UNESCO member states is meant to create a legal infrastructure for developing AI technologies within an ethical framework. It establishes a global normative framework for the industry and makes states responsible for applying it at their level. This is the first document of its kind. You can read the document here.

AIN.UA talked to Mariya Ortynska, patent attorney and CEO of IPStyle, about the legal implications of this document and how it will affect the work of developers who create AI solutions.

Experts have been discussing the ethical and legal regulation of AI for years. For example, can AI qualify as an inventor on a par with humans, as in the case of DABUS, for which the first patent has already been granted?

Or take another example: we are now witnessing, and even participating in, the active development and rollout of driverless transport. But suppose the autopilot malfunctions. Who is responsible for safety? Who is liable for an AI error that causes harm to human life and health?

As of today, there is no consensus, no single answer to these and similar questions. But there is no doubt that the time has come to discuss the future of artificial intelligence policy at the international level. In fact, the document adopted at the UN level gives rise to a normative legal framework regulating the development of AI and the ethics of its use. It also holds states responsible for its application at the national level. Moreover, UNESCO will ask states to report regularly on their progress and practices.

Transparency, clarity, and ethics mean that AI will soon no longer be a “black box” capable of realizing the scariest scenarios of sci-fi books and movies, but a tool the user can actually understand when operating services and devices. For developers, these recommendations can serve as a guide when shaping an AI product concept. And developers’ plans for AI are global.

Thus, the global market for explainable artificial intelligence (XAI) was estimated at $3.50 billion in 2020 and is projected by Research And Markets to reach $21.03 billion by 2030. Explainable artificial intelligence is an emerging field of computer science that builds AI models humans can better understand — for example, how a program makes a decision and what its decision criteria are. Often neither the end users nor the developers themselves can determine exactly how a model reached its conclusions during data processing.
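The core idea behind XAI can be illustrated with a deliberately tiny sketch: instead of returning only a yes/no answer, the model also reports how much each input feature contributed to its decision. The loan-scoring weights, threshold, and applicant data below are invented purely for illustration — real XAI systems explain far more complex models, but the principle is the same.

```python
# Hypothetical toy loan-scoring model with a built-in explanation.
# Weights, threshold, and feature names are illustrative assumptions,
# not taken from any real system.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # minimum score for approval


def explain_decision(applicant: dict) -> dict:
    """Score an applicant and break the score down feature by feature."""
    contributions = {
        feature: WEIGHTS[feature] * value
        for feature, value in applicant.items()
    }
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 2),
        # The per-feature breakdown is the "explanation": it shows
        # which inputs pushed the decision up or down, and by how much.
        "contributions": contributions,
    }


result = explain_decision({"income": 3.0, "debt": 1.0, "years_employed": 2.0})
print(result)  # approved, with income contributing most to the score
```

In this sketch a rejected applicant could be told exactly which factor (say, high debt) drove the decision — the kind of transparency the UNESCO document calls for, and something an opaque model cannot offer.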

UNESCO itself relies on data showing that China and North America will see the greatest economic gains from AI by 2030, accounting for 70% of its global economic impact. Concentrating AI in the hands of a few high-income countries is likely to leave developing countries behind: they will gain little or no benefit from AI and will not own such technologies.