Daily Bulletin


The Conversation

  • Written by Monique Mann, Senior lecturer, Deakin University
Airlines take no chances with our safety. And neither should artificial intelligence

You’d think flying in a plane would be more dangerous than driving a car. In reality it’s much safer, partly because the aviation industry is heavily regulated.

Airlines must stick to strict standards for safety, testing, training, policies and procedures, auditing and oversight. And when things do go wrong, we investigate and attempt to rectify the issue to improve safety in the future.

It’s not just airlines, either. Other industries where things can go very badly wrong, such as pharmaceuticals and medical devices, are also heavily regulated.

Artificial intelligence is a relatively new industry, but it’s growing fast and has great capacity to do harm. Like aviation and pharmaceuticals, it needs to be regulated.

AI can do great harm

A wide range of technologies and applications that fit under the rubric of “artificial intelligence” have begun to play a significant role in our lives and social institutions. But they can be used in ways that are harmful, which we are already starting to see.

In the “robodebt” affair, for example, the Australian government welfare agency Centrelink used data-matching and automated decision-making to issue (often incorrect) debt notices to welfare recipients. What’s more, the burden of proof was reversed: individuals were required to prove they did not owe the claimed debt.

The New South Wales government has also started using AI to spot drivers with mobile phones. This involves expanded public surveillance via mobile phone detection cameras that use AI to automatically detect a rectangular object in the driver’s hands and classify it as a phone.

Read more: Caught red-handed: automatic cameras will spot mobile-using motorists, but at what cost?

Facial recognition is another AI application under intense scrutiny around the world. This is due to its potential to undermine human rights: it can be used for widespread surveillance and suppression of public protest, and programmed bias can lead to inaccuracy and racial discrimination. Some have even called for a moratorium or outright ban because it is so dangerous.

In several countries, including Australia, AI is being used to predict how likely a person is to commit a crime. Such predictive methods have been shown to impact Indigenous youth disproportionately and lead to oppressive policing practices.

AI that assists train drivers is also coming into use, and in future we can expect to see self-driving cars and other autonomous vehicles on our roads. Lives will depend on this software.

The European approach

Once we’ve decided that AI needs to be regulated, there is still the question of how to do it. Authorities in the European Union have recently released a set of proposals for how to regulate AI.

The first step, they argue, is to assess the risks AI poses in different sectors such as transport, healthcare, and government applications such as migration, criminal justice and social security. They also look at AI applications that pose a risk of death or injury, or have an impact on human rights such as the rights to privacy, equality, liberty and security, freedom of movement and assembly, social security and standard of living, and the presumption of innocence.

The greater the risk an AI application was deemed to pose, the more regulation it would face. The regulations would cover everything from the data used to train the AI and how records are kept, to how transparent the creators and operators of the system must be, testing for robustness and accuracy, and requirements for human oversight. This would include certification and assurances that the use of AI systems is safe, and does not lead to discriminatory or dangerous outcomes.

While the EU’s approach has strong points, even apparently “low-risk” AI applications can do real harm. For example, recommendation algorithms in search engines can also be discriminatory. The EU proposal has been criticised for seeking to regulate facial recognition technology rather than banning it outright.

The EU has led the world on data protection regulation. If the same happens with AI, these proposals are likely to serve as a model for other countries and apply to anyone doing business with the EU or even EU citizens.

What’s happening in Australia?

In Australia there are some applicable laws and regulations, but there are numerous gaps, and they are not always enforced. The situation is made more difficult by the lack of human rights protections at the federal level.

One prominent attempt at drawing up some rules for AI came last year from Data61, the data and digital arm of CSIRO. They developed an AI ethics framework built around eight ethical principles for AI.

These ethical principles aren’t entirely irrelevant (number two is “do no harm”, for example), but they are unenforceable and therefore largely meaningless. Ethics frameworks like this one for AI have been criticised as “ethics washing”, a ploy by industry to avoid hard law and regulation.

Read more: How big tech designs its own rules of ethics to avoid scrutiny and accountability

Another attempt is the Human Rights and Technology project of the Australian Human Rights Commission. It aims to protect and promote human rights in the face of new technology.

We are likely to see some changes following the Australian Competition and Consumer Commission’s recent inquiry into digital platforms. And a long overdue review of the Privacy Act 1988 (Cth) is slated for later this year.

These initiatives will hopefully strengthen Australian protections in the digital age, but there is still much work to be done. Stronger human rights protections would be an important step in this direction, to provide a foundation for regulation.

Before AI is adopted even more widely, we need to understand its impacts and put protections in place. To realise the potential benefits of AI, we must ensure that it is governed appropriately. Otherwise, we risk paying a heavy price as individuals and as a society.

Read more https://theconversation.com/airlines-take-no-chances-with-our-safety-and-neither-should-artificial-intelligence-132580
