According to the Organization for Economic Co-operation and Development (OECD), artificial intelligence (AI), while delivering deep business insight and efficiency gains, threatens to undermine privacy, competition, equal treatment and other foundations of trust in business.
"The increasing complexity of artificial intelligence models and the difficulty – or in some cases impossibility – of explaining how these models produce certain results pose a major challenge to trust and accountability in AI applications," said Mathilde Mesnard, the OECD's Director of Finance and Corporate Affairs.
The OECD believes that AI algorithms can undermine market integrity and stability through unintentional discrimination, diversionary behavior, concentration among dominant market players, cybersecurity vulnerabilities, privacy breaches and other negative outcomes.
According to industry figures, businesses worldwide invested $67.9 billion in artificial intelligence last year, more than five times the 2015 spending. AI algorithms benefit businesses and consumers across the full spectrum of industries, helping to reduce costs, forecast sales and revenue, fight fraud, assess credit risk, and manage employees. AI also translates languages, shortens travel time, connects like-minded people on social media, increases returns for retail and institutional investors, and helps diagnose diseases such as cancer.
"AI applications offer remarkable opportunities for businesses, investors, consumers and regulators. Artificial intelligence can facilitate transactions, increase market efficiency, strengthen financial stability, foster greater financial inclusion and improve the customer experience," the OECD said. It added, however, that the development of artificial intelligence could outpace efforts to curb its potential threats.
"In the financial sector, the increasing complexity of applications supported by artificial intelligence and the features enabled by AI technologies poses risks to fairness, transparency and the stability of financial markets, which may not be properly addressed by the current regulatory framework," the organization pointed out.
According to a survey conducted by the Pew Research Center in late 2019 and early 2020, 53 per cent of people say that artificial intelligence is good for society, while 33 per cent disagree.
According to the OECD, businesses need to inform customers if they use artificial intelligence and must be able to explain how AI algorithms arrive at decisions. In addition, they must be accountable for AI outcomes and meet high standards of data quality and data management.
"Data quality and governance are critical, as the misuse of data in artificial intelligence-based applications and the use of inappropriate data can undermine confidence in the results of artificial intelligence," the organization said.
Artificial intelligence experts and advocates around the world are increasingly concerned about the technology's long-term impact. In a survey by Pew and Elon University, more than two-thirds of the 602 respondents predicted that by 2030 most artificial intelligence systems "will not apply ethical principles focused primarily on the common good".