Artificial Intelligence: Bias and the case for industry

 

For decades, science fiction has shown us scenarios in which AI surpasses human intelligence and overpowers humanity. As we near a tipping point where AI could feature in every part of our lives – from logistics to healthcare, human resources to civil security – we take a look at the opportunities and ethical questions in AI. In this article, we speak to AI expert Prof Dr Patrick Glauner about AI bias, as well as what impact – good and bad – AI could have on industry and workers.

What about our jobs? Can we trust AI to do what it is meant to, and without bias? What will society look like once we are surrounded by AI? Who will decide how far AI should go? These are some of the ‘frequently asked questions’ when it comes to AI. They were also among the questions participants were encouraged to delve into at the FNR’s science-meets-science-fiction event ‘House of Frankenstein’ – which also sparked the question of what it means to be human in the age of AI.

‘It’s not who has the best algorithm that wins. It’s who has the most data.’

Further reading: Studies have shown racial and ethnic bias in facial recognition technology

“For about the last decade, the Big Data paradigm that has dominated research in machine learning can be summarized as follows: ‘It’s not who has the best algorithm that wins. It’s who has the most data,’” explains Dr Patrick Glauner, who in February 2020 took up a Full Professorship in AI at the Deggendorf Institute of Technology (Germany), at the young age of 30.

In machine learning and statistics, samples of a population are typically used to gain insights or derive generalisations about that population. A biased data set is one that is not representative of the population. Glauner explains that biases appear in nearly every data set.

“The machine learning models trained on those data sets subsequently tend to make biased decisions, too.”
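A minimal sketch of that effect, using synthetic data and scikit-learn (the groups, numbers and model here are illustrative assumptions, not taken from any specific study): a classifier trained on a sample dominated by one group performs noticeably worse on the under-represented group.

```python
# Minimal sketch: a model trained on a non-representative sample
# makes systematically worse predictions for the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two classes per group; group B's feature distribution is shifted.
    x0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 1))  # class 0
    x1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 1))  # class 1
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

# Biased training set: 95% group A, only 5% group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets reveal the disparity in accuracy.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(500, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```

The decision boundary is fitted almost entirely to group A, so the error rate for group B ends up far higher – the same mechanism behind the facial recognition failures described below.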

Cue facial recognition – for example, unlocking your phone by scanning your face. This technology has turned out to have both racial and ethnic bias: personal stories and studies report the technology failing to distinguish between faces of Asian ethnicity, and apps meant to ‘predict’ criminals being biased against people with darker skin. Why? Because it was developed based on data from, for example, predominantly Caucasian men, rather than a representative sample of the population.

Then there is the case of Tay, an AI chatbot that turned racist almost immediately when it was released on Twitter and exposed to its users. This shows that AI currently does not understand what it computes – which is why the term ‘intelligence’ is criticised by part of the AI research community itself. It is crucial to train AI on data sets – but the risk is that AI makes decisions about something it does not understand at all. Decisions which humans then apply without knowing how the AI arrived at them. This is referred to as the ‘explainability’ problem – the black box effect.

Further reading: Tay, the chatbot that only lasted a few hours before it was removed because it immediately turned racist

Another concern is the power that comes with this technology – where should the limits on its use be set? China, for example, has rolled out facial recognition technology that can be used to identify protesters. And not just that – a city in China recently had to apologise for using facial recognition to shame citizens seen wearing their pyjamas in public.

A city in China recently apologised for using facial recognition to shame citizens who wear pyjamas in public

While the EU has drafted ethics guidelines for trustworthy AI, and the CEO of Microsoft has called for global guidelines, ethical guidelines for government use of such technology are yet to be agreed on and implemented. The use of armed drones in warfare is also a concern.

Bias – an old problem on a larger scale

Prof Dr Glauner explains that bias in data is far from new – and that there is a risk that known issues will be carried over to AI if not properly addressed.

“Biases have always been present in the field of statistics. I am aware of statistics papers from 1976 and 1979 that started discussing biases. In my opinion, in the Big Data era, we tend to repeat the very same mistakes that have been made in statistics for a long time, but at a much larger scale.”

Glauner explains that the machine learning research community has recently started to look more actively into the problem of biased data sets – however, he stresses that there needs to be greater awareness of this issue amongst students studying machine learning, as well as amongst professors.

“In my view, it will be almost impossible to entirely get rid of biases in data sets, but that approach [greater awareness] would at least be a great start.”

Glauner also explains that it is imperative to close the gap between AI in academia and industry, emphasising that he will make sure the students he teaches in his professorship learn early on how to solve real-world problems.

AI and jobs

AI has both positive and negative implications for the working world – some tasks will inevitably be handed over to AI, others will continue to require humans, and many will fall somewhere in between. The Luxembourg Government’s ‘Artificial Intelligence: a strategic vision for Luxembourg’ focuses on how AI can improve our professional lives by automating time-consuming data-related tasks, helping us use our time more efficiently in the areas that require social relations, emotional intelligence and cultural sensitivity.

Prof Dr Glauner, whose AI background is rooted in industry, sees AI having a significant impact on the jobs market, both for businesses and workers – not everyone who loses their job to AI will be able to retrain as an AI developer – but he also points out that the job market has always been undergoing change.

“For example, look back 100 years ago: most of the jobs from that time do not exist anymore. However, those changes are now happening more frequently. As a consequence, employees will be forced to potentially undergo retraining multiple times in their career.

For instance, China has become a world-leading country in AI innovation. Chinese companies are using that advantage to rapidly advance their competitiveness in a large number of other industries. If Western companies do not adapt to that reality, they will probably be out of business in the foreseeable future.”

“AI is the next step of the industrial revolution”

“Even though those changes are dramatic, we cannot stop them – AI is the next step of the industrial revolution. 

“While the previous steps addressed the automation of repetitive physical tasks, AI allows us to automate manual decision-making – a discipline in which humans naturally excel. AI’s ability to do so, too, will significantly impact nearly every industry. From a business perspective, this will result in more efficient business processes and new services/products that improve humans’ lives.”

Prof Dr Glauner’s PhD on AI to combat electricity theft and NTL was featured in New Scientist

Prof Dr Glauner’s PhD project is a concrete example of how AI can be used to improve output – and customer experience. Funded by an Industrial Fellowship grant (AFR-PPP at the time) – a collaboration between public research and industry – Glauner developed AI algorithms that detect non-technical losses (NTL) in power grids, which are critical infrastructure assets.

“NTLs include, but are not limited to, electricity theft, broken or malfunctioning meters and arranged false meter readings. In emerging markets, NTL are a prime concern and often range up to 40% of the total electricity distributed.

The annual world-wide costs for utilities due to NTL are estimated to be around USD 100 billion. Reducing NTL in order to increase reliability, revenue, and profit of power grids is therefore of vital interest to utilities and authorities. My thesis has resulted in appreciable results on real-world big data sets of millions of customers.”
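As a hypothetical illustration of the approach – a sketch with synthetic data and illustrative features, not the actual models or pipeline from the thesis – NTL detection can be framed as supervised classification over customers’ consumption histories, where theft often shows up as an unexplained drop in metered usage:

```python
# Hypothetical NTL-detection sketch (synthetic data, illustrative features;
# not the thesis's actual pipeline): classify customers whose recent
# consumption drops sharply relative to their own history.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_customers, n_months = 5000, 24

# Synthetic monthly meter readings; ~5% of customers under-report
# their consumption during the final six months (simulated theft).
usage = rng.gamma(shape=5.0, scale=60.0, size=(n_customers, n_months))
is_ntl = rng.random(n_customers) < 0.05
usage[is_ntl, -6:] *= rng.uniform(0.2, 0.5, size=(is_ntl.sum(), 1))

# Hand-crafted features: recent vs. historical consumption, and variability.
recent = usage[:, -6:].mean(axis=1)
history = usage[:, :-6].mean(axis=1)
X = np.column_stack([recent, history, recent / history, usage.std(axis=1)])

X_train, X_test, y_train, y_test = train_test_split(
    X, is_ntl, test_size=0.3, random_state=0, stratify=is_ntl)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# With only ~5% positives the classes are heavily imbalanced, so recall on
# the NTL class is more informative than raw accuracy.
print("NTL recall:", round(recall_score(y_test, clf.predict(X_test)), 3))
```

In practice, the labelled data loops straight back to the bias problem discussed above: inspections are typically carried out where fraud is already suspected, so the training sample is rarely representative of all customers.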

AI and new industries

The opportunities AI presents for existing industries are manifold, if done right – and AI could pave the way for completely new industries as well: space exploration and space mining would hardly be developing so fast without AI. For example, there is a communication delay between the Earth and the Moon, which makes controlling an unmanned vehicle or machine from Earth challenging, to say the least. However, if the machine were able to navigate on its own and make the most basic of decisions, this communication gap would no longer be much of an obstacle. Find out more about this FNR-funded project.
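A quick back-of-the-envelope check of that delay, using only the average Earth–Moon distance and the speed of light:

```python
# Signal delay between Earth and the Moon at light speed.
EARTH_MOON_KM = 384_400          # average Earth-Moon distance
LIGHT_SPEED_KM_S = 299_792.458   # speed of light in vacuum

one_way = EARTH_MOON_KM / LIGHT_SPEED_KM_S
print(f"one-way: {one_way:.2f} s, round trip: {2 * one_way:.2f} s")
# ~1.3 s one way, ~2.6 s round trip - every command-and-response cycle
# loses seconds, which rules out real-time remote control.
```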

Improve, not replace

AI undoubtedly represents huge opportunities for industry in particular, with the potential to improve performance, output, and worker as well as customer satisfaction, to name only a few. However, it is imperative that the bodies in charge put ethical considerations and the good of society at the heart of their strategies. A balance must be found: the goal has to be to improve society and the lives of the people within it, not to replace them. The same goes for bias in AI – after all, what good can come from algorithms that build their assumptions on non-representative data?


This is the first feature in a mini-series about Artificial Intelligence


About Prof Dr Patrick Glauner

Prof. Dr. Patrick Glauner (right) with Prof. Dr. Peter Sperber (left), President of Deggendorf Institute of Technology. Source: Deggendorf Institute of Technology.

Dr Patrick Glauner became Full Professor of Artificial Intelligence at Deggendorf Institute of Technology in Bavaria, Germany in February 2020. He is also the Founder & CEO of skyrocket.ai, an AI consulting firm.

His work on AI has been featured in New Scientist and cited by McKinsey and others. He was previously Head of Data Academy at Alexander Thamm GmbH, Innovation Manager for Artificial Intelligence at Krones Group, a Fellow at the European Organization for Nuclear Research (CERN), an adjunct faculty member at the Universities of Applied Sciences in Karlsruhe and Trier, and a visiting researcher at the University of Quebec in Montreal (UQAM). He graduated as valedictorian from Karlsruhe University of Applied Sciences with a BSc in Computer Science. He subsequently received an MSc in Machine Learning from Imperial College London, an MBA from Smartly and a PhD in Computer Science from the University of Luxembourg [funded through an FNR Industrial Fellowship / AFR-PPP at the time]. He is an alumnus of the German National Academic Foundation (Studienstiftung des deutschen Volkes).

Find out more in our news item about Patrick Glauner’s Professorship

Related highlights

From lab to startup: LuxAI and QTrobot – a robot to help children with autism

FNR 20 years: An evening with science [fiction] in the House of Frankenstein

Artificial Intelligence (AI) – in the service of mankind
