
Is there room for ethics in the ‘Wild West’ of AI?

Wed, 13th Jun 2018

Wherever the future of artificial intelligence (AI) is discussed, ethics is never far behind.

With AI being the hottest topic at the 2018 SAS Users of NZ (SUNZ) conference, held yesterday at the Michael Fowler Centre in Wellington, the expert panel on ethics covered a range of fascinating and important questions.

SAS director of product management for cloud and platform technologies Mike Frost is one of the experts, taking his seat directly after delivering the opening keynote.

Matthew Spencer and Rohan Light complete the panel - the Ministry of Social Development's (MSD) chief analytics officer and its lead advisor for responsible information use, respectively.

Finally, Tenzing management consulting director Eugene Cash acts as moderator.

They decide on a working definition of ethics - 'knowing the difference between what you have a right to do, and what is right to do' - an apt one for the purposes of the discussion.

Spencer begins the conversation by talking about the MSD's attempts to guide ethical decision making with the Privacy, Human Rights and Ethics Framework (PHRaE).

The PHRaE is under development by the MSD to ensure that "early decisions to not progress initiatives, or to accept risk if value outweighs risk, can be made if risks cannot be mitigated," according to the PHRaE information sheet.

This line of discussion leads to the controversial Google Duplex, which can pose as a human on a phone call and was debuted to wild applause, seemingly without thought to the myriad ramifications the technology could have.

Cash asks the panellists: does this show that Silicon Valley is 'ethically rudderless'?

"I think that this is pretty typical of some organisations," Frost replies.

"They will try something and then, based on the reaction, they'll withdraw and pull back and say 'we had a right to do it, but maybe we weren't right to do it.

"Do I think that Google cares about the ethics of that? No, they're like us - they're trying to sell software… I don't think that's the right way, I think we should be more proactive rather than reactive… but right now it's a bit of a wild west, and that's how this self-governance materialises.

Spencer points to another controversial use case of AI as an example of how things can go wrong - US courts using the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) system to help judges decide whether a defendant is likely to re-offend, and then sentencing them accordingly.

He notes that in a study on the efficacy of AI, humans achieved an AUR (a statistical measure of accuracy when choosing between two options) of .76, an AI given the same task scored .82, but human and AI together managed .90.
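The "AUR" here most likely refers to AUC, the area under the ROC curve: a score of .5 is coin-flip performance, and 1.0 is perfect ranking. As a minimal sketch of how such a figure is computed - assuming scikit-learn, and using fabricated risk scores rather than data from the study Spencer cites:

```python
# A toy illustration of AUC (assumed to be the "AUR" cited above).
# Labels and scores are fabricated purely for illustration.
from sklearn.metrics import roc_auc_score

# Hypothetical ground truth (1 = outcome occurred, 0 = it did not)
# and the risk score an assessor assigned to each case.
y_true = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
scores = [0.2, 0.65, 0.7, 0.8, 0.3, 0.6, 0.1, 0.9, 0.5, 0.35]

# AUC is the probability that a randomly chosen positive case
# is ranked above a randomly chosen negative one.
print(f"AUC: {roc_auc_score(y_true, scores):.2f}")  # 0.92 on this toy data
```

Read that way, Spencer's figures mean the human-plus-AI pairing ranked a genuinely higher-risk case above a lower-risk one roughly 90% of the time - better than either working alone.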

Whether this is right or wrong is not the point here - what it shows, Spencer says, is that if we are going to use AI to aid important decisions, adding human intelligence to the mix is vitally important.

A bit of humanity is also the solution suggested for perhaps the biggest ethical concern for AI - bigotry.

If the data that we feed into an AI is inherently skewed toward or against a certain kind of person, the results will be just as skewed.
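To make that concrete, here is a minimal, hypothetical sketch - assuming scikit-learn, with data fabricated purely for illustration - of how skew in historical training data resurfaces in a model's predictions:

```python
# A hypothetical demonstration: a model trained on biased historical
# decisions reproduces that bias for otherwise identical people.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)    # 0 or 1: a protected attribute
skill = rng.normal(0, 1, n)     # the legitimate signal

# Biased historical labels: group 0 was favoured at the same
# skill level - this is the skew we feed the model.
label = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n) > 0.5).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# Identical skill, different group: the model echoes the bias.
same_skill = [[0.5, 0], [0.5, 1]]
print(model.predict_proba(same_skill)[:, 1])  # group 0 scores noticeably higher
```

The model has learned nothing wrong in a statistical sense - it has faithfully reproduced the prejudice baked into its training data, which is precisely the problem the panel describes.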

The key to avoiding AI echoing these biases is "human plus AI and constant calibration," Light says.

"There should always be a human involved with the evolution of AI. If you don't have that then the chance that it goes astray increases.

Light also astutely notes the sadly common irony of four white men in suits discussing issues of bigotry and makes it clear that diversity is also an important ingredient in this recipe.

Frost suggests the possibility of a medicine-style ethics panel for data scientists, which would be charged with reviewing uses of AI to ensure they stay within an ethical framework.

The half-hour panel comes to an abrupt end with a dozen thoughts on ethics and AI only half explored.

The main takeaway is that there is still a long way to go.

The panellists agree that, as we move forward, an ethical perspective needs to be an integral part of the development and implementation of AI - but they also recognise that ethics are slippery and culturally specific.

As Frost says, at the end of the day, "Every community will have to set their own standards for what is an appropriate use of technology."

Correction - in the original posting of this story it was erroneously stated that the AUR data came from a study on COMPAS. Spencer clarified, "I referenced the COMPAS issue as an example of how things can go wrong. I also referenced, as a separate example, a study from MIT to illustrate how humans and machines may work together to offer a superior solution in some circumstances. The human factor in the loop allows for greater oversight."
