The Brazilian lawyer Eduardo Magrani argued on Wednesday that the legal regulation of Artificial Intelligence (AI) should be guided by a deontological approach, because AI is a technological advance that demands regulation, constant public debate and ethics.
Furthermore, before tackling the ethical questions surrounding robots, it is necessary to understand the philosophy of animal ethics, which he said still lags behind, even though the legal recognition of animals as beings with rights has evolved. “Before extending rights to intelligent robots, we have to think about sentient beings, like many animals,” he said at the second edition of the international conference Lisbon, Law and Tech. “Regulating robots will require breaking the ontological and epistemological paradigm,” he concluded.
Eduardo Magrani asked whether society is framing the debate around AI properly and argued that there is a false dichotomy that pits human beings against these cutting-edge technologies. “AI is much more like an Excel spreadsheet than a robot with a machine gun,” he quipped.
The founding partner of Magrani e Pragmácio Advogados and senior fellow of the Konrad Adenauer Foundation argues that there is more strategy than fear at play, citing the example of the United States and China, two of the countries that have national strategic plans for AI.
“We don't have strong AI yet. Today we have weak AI, which cannot adapt to any situation the way a human being can. We are not in that scenario, but development is in full swing,” said the lawyer and academic. “We have to work alongside AI. AI will only replace lawyers in tasks that are repetitive or predictable,” he added at the event, organized by the law firm Abreu Advogados, of which Jornal Económico is a media partner.
We see AI as something that is capable of leveraging human effort - Manuel Levi
This view is shared by the CEO of Enlightenment AI, who warned that seven out of ten executives admit that their investments in AI have had little impact. In his opinion, organizations fail because they lack a data strategy that ensures they have the necessary infrastructure and can meet their objectives.
“Companies often try to move to AI without first doing the controlled-observation work,” warned Manuel Levi during the panel “Global Trends and Regulation in Artificial Intelligence”, moderated by Helder Galvão, a consultant at Abreu.
So why do digitization initiatives fail? Most likely the company was unable to measure the right indicators and did not fully understand what AI is.
“We have noticed that some companies invest so much in integrating AI that they become too dependent on it, which is not the safest approach. We have just seen this with Covid-19: companies that used AI to predict consumption patterns found that the coronavirus completely changed consumer behavior,” the manager said by way of example.
According to Brazilian researcher Dora Kaufman, it is critical to incorporate ethical principles, which is not simple because it is difficult to delimit the territories (jurisdictions) in which an organization operates. Two further variables limit this implementation: the relatively lengthy processes of civil-law legal systems and the technical limitations of deep learning.
The author of “Will artificial intelligence supplant human intelligence?” considers that ethics has become an intrinsic variable, as personal data are part of companies' core business. “The AI behind most implementations today is deep learning, which has intrinsic limitations, such as database bias and opacity (the black box). Any idea of regulation has to take these factors into account,” concluded Dora Kaufman.