Artificial Intelligence (AI) – in the service of mankind

 

AI has arrived in our daily lives. What can AI do, and what can it not? Which societal problems does AI pose? An opinion piece by FNR Secretary General Marc Schiltz.

Artificial Intelligence – a term everyone knows by now. What is it? In a nutshell: AI is a generation of computer programmes with the ability to imitate intelligent human behaviour.

Example: Google’s computer programme AlphaGo Zero has learned to master the ancient Chinese board game Go – even defeating the human world champion. The programme learnt how to win by playing against itself thousands of times.

In the domain of medicine, AI can already analyse scans, X-rays and other data – delivering a diagnosis for certain types of cancer that is more reliable than that of an experienced doctor.

Then there are virtual assistants on smartphones, the likes of Siri and Alexa, which learn to understand and talk with the user. In the future, they will get to know the user even better – their wishes, habits and preferences. AI has by now reached a point where it can recognise and adapt to emotions.

So are humans, with their imperfect intelligence, running the risk of being dominated by AI? Most likely not anytime soon. While current AI programmes often do a better job than humans, they can only do so in highly specific areas: AlphaGo is a champion at the game of Go, but cannot do anything else. The most extraordinary aspect of the human brain is that it can tackle such a wide range of problems.

Human intelligence can also distinguish between cause and effect – it appears AI is not yet able to do this. As Judea Pearl – who championed the probabilistic approach to artificial intelligence – said: “Today’s machine learning programs can’t tell whether a crowing rooster makes the sun rise, or the other way around.”

AI brings with it a range of societal problems. Some jobs will likely not exist in the future, or not in their current form – chauffeurs, travel agents, accountants – and some jobs in medicine could also be affected.

There are also ethical challenges. AI is not completely transparent, as these programmes base their decisions on what they have “learned”. Who controls this? How can it be prevented that this becomes too one-sided? And in the end, how can AI be prevented from manipulating us? “Fake news” should not be followed by “fake intelligence”.

An intense exchange between science, industry, politics and society is needed in order to develop AI in a way that always puts the well-being of humans first.


This opinion piece was originally published as a ‘Carte Blanche’ on rtl.lu in May 2018 (in Luxembourgish)

Marc Schiltz

