Artificial Intelligence: A Brief Summary of Ethical Issues

5 Feb 2019

AI systems, even those based on machine learning, are always ultimately developed by human beings. The possibility of making mistakes or perpetuating prejudices is always around the corner.


Artificial intelligence and ethics: can these two concepts coexist? These technologies are already having a significant impact on business models and the organization of work, a revolution that forces ICT (Information and Communications Technology) to confront a word it rarely uses: ethics. Leaving aside for a moment the issue of work (will AI steal jobs from human beings?), the question on the lips of experts can be summarized as follows:
How can we be sure that artificial intelligence systems act in an ethical way?

Ethics of AI: the risk of prejudices

The software underlying AI applications and services does not emerge overnight; it is developed by someone. More precisely, machine learning systems need data labeled by human beings (supervised learning) or, at the very least, selected and prepared by them (unsupervised learning). Because they are built by real people, these systems risk reproducing errors or prejudices, even ones introduced involuntarily by developers, and replicating them in every future application.
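As a minimal sketch of this mechanism (all data, labels and the "model" are invented for illustration), a trivial classifier trained on human-labeled examples simply replays whatever pattern the labelers encoded, prejudiced or not:

```python
from collections import Counter

# Supervised learning: every training example carries a human-chosen label.
# The labels below are deliberately skewed to show how labeler bias
# flows straight into the trained model.
training = [
    ("applicant from district X", "reject"),
    ("applicant from district X", "reject"),
    ("applicant from district Y", "approve"),
]

def train_majority(examples):
    """Learn the most common label for each input pattern."""
    by_input = {}
    for text, label in examples:
        by_input.setdefault(text, Counter())[label] += 1
    return {text: counts.most_common(1)[0][0]
            for text, counts in by_input.items()}

model = train_majority(training)
# The "model" faithfully replays the labelers' decisions:
# applicants from district X are always rejected.
```

However simplistic, the same dynamic applies to real models: more sophisticated learners generalize the labelers' pattern instead of memorizing it, which can spread the bias further rather than contain it.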
An illustrative case of this risk has already occurred in the US judicial system, where AI software was used to predict which individuals were more likely than others to become "future criminals": subsequent analysis revealed bias against Black people.
In these cases, how can we talk about an ethical artificial intelligence? What is the boundary between human ethics and the ethical use of artificial intelligence?

Datasets too unbalanced?

When we talk about the ethical use of artificial intelligence, the general risk is the construction of unbalanced datasets, which overestimate or underestimate the weight of certain variables in the cause-and-effect relationships needed to explain certain events and, above all, to predict them.

In a positive scenario, AI systems can be used to improve human judgment and reduce our conscious or unconscious biases. However, data, algorithms and other design choices that influence AI systems can reflect and even amplify the existing cultural assumptions at a specific moment in time and, consequently, inequalities.

Ethics is a theme that must be increasingly present in the future development of artificial intelligence, starting from the basic rule that AI should always be put at the service of people, and not vice versa.

A decalogue for ethical artificial intelligence

These principles are well set out in a decalogue drawn up by UNI Global Union, an international federation of service-sector trade unions. Among the most significant points of the document is the need to develop responsible, safe and useful artificial intelligence, in which machines retain the legal status of tools and human beings always keep control of, and responsibility for, these machines.

This implies that AI systems must be designed and managed in accordance with applicable law, including privacy law. Not only that: workers must have the right to access, manage and control the data generated by AI systems, given the power of such systems to analyze and use this data. In addition, employees must have a "right to explanation" when AI systems are used in human resource activities such as hiring, promotion or dismissal.

As mentioned earlier, in the design and maintenance of artificial intelligence it is essential that the system accounts for negative or harmful human prejudices, and that any bias, whether related to gender, ethnicity, sexual orientation, age or similar attributes, is identified and not propagated by the system.


And what about you? Would you adopt AI technology in your business?

Contact us to learn more about our products.
