Talk: Potential Risks of Artificial Intelligence
Artificial Intelligence, or AI, is a hot topic in today’s tech world. AI applications can offer us tremendous benefits in terms of efficiency, accuracy and convenience. However, the rapid innovation in this field also brings potential risks.
To stay informed, we attended the Talk on Potential Risks of AI at the new Forum Groningen. This talk was part of the Forum’s 2020 programme ‘Human & Machine’, exploring the future role of AI in society.
In a session hosted by innovation consultant Nick Stevens, AI expert Jarno Duursma and science debater and researcher Anders Sandberg shared their thoughts on the possible downsides of AI. Let’s fill you in on four key risks highlighted during the talk!
Lack of transparency
Because deep learning systems are generally built from several, often hidden, layers, it can be difficult to trace how they arrive at their output. This ‘black box’ character poses a risk: although an answer may seem valid, it could be based on incorrect or irrelevant details the system happened to focus on. A system’s lack of transparency makes it challenging to judge the validity and trustworthiness of its output.
Bias
Because databases can unintentionally contain biased information, AI algorithms can systematically produce biased outcomes. People often assume that AI systems filter out such issues, but in fact they reflect the bias in their data. This creates ‘self-fulfilling prophecies’: the system’s biased output confirms the biased input. Combined with so-called automation bias, our tendency to trust a computer’s decisions more than a human’s, this risk is particularly pressing.
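To see how biased input turns into biased output, here is a minimal sketch with entirely hypothetical data: a naive model that simply predicts the majority outcome it saw for each group will reproduce whatever skew the historical records contain. The groups, numbers and “hiring” framing are assumptions for illustration, not from the talk.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired) pairs.
# Group A was mostly hired in the past, group B mostly rejected.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 20 + [("B", False)] * 80

def train_majority_model(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, not hired]
    for group, hired in records:
        counts[group][0 if hired else 1] += 1
    # The "model" just predicts the majority outcome seen per group.
    return {g: c[0] >= c[1] for g, c in counts.items()}

model = train_majority_model(history)
print(model)  # the historical skew becomes the decision rule
```

A real system is far more complex, but the feedback loop is the same: if the model’s decisions feed back into future records, the skew confirms itself.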
Biological recognition
The most prominent example of biometric recognition by AI is facial recognition. While these systems are improving rapidly, there is still a considerable risk of false positives, where people are incorrectly identified. On top of that, there are obvious privacy and ethical issues. Because of these risks, Microsoft started campaigning for public regulation and corporate responsibility around facial recognition technology in 2018.
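Why are false positives such a problem even for a seemingly accurate system? Because the people being searched for are rare, most of the matches a system flags can still be wrong. The numbers below (crowd size, error rates) are assumptions chosen only to illustrate the arithmetic:

```python
# Hypothetical scenario: scanning a crowd for one person of interest.
crowd_size = 10_000          # people scanned (assumption)
targets = 1                  # actual persons of interest present (assumption)
false_positive_rate = 0.01   # 1% of innocent people wrongly flagged (assumption)
true_positive_rate = 0.99    # 99% chance the real target is flagged (assumption)

false_alarms = (crowd_size - targets) * false_positive_rate
true_hits = targets * true_positive_rate
precision = true_hits / (true_hits + false_alarms)
print(f"flags raised: {false_alarms + true_hits:.0f}, "
      f"chance a flagged person is a real match: {precision:.1%}")
```

Even with 99% accuracy on both counts, roughly a hundred people get flagged and only about one in a hundred flags points at the actual target.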
Deep fake technology
Another rapidly developing field in AI is deep fake technology. Besides the by now well-known face-swapping software, voice cloning programmes are also becoming increasingly sophisticated. These deep fake tools are becoming easier to use and more widely available, and with that comes the risk of, for example, more convincing fake news stories.
These risks illustrate that, although AI offers great opportunities, keeping humans and their ethics, morals and empathy involved in decision making is key!
Curious about what we have to offer in terms of market intelligence?