How Will We Make Sure AI Is Ethical?

Few things inspire more excitement and trepidation than the development of artificial intelligence (AI). In recent years, AI has rapidly evolved from the pages of sci-fi novels into lived reality. Its presence is felt across commercial and consumer life: just ask Alexa or any of the AI-powered digital assistants you talk to.

As the “AI era” unfolds, regulators are now working to make the technology accountable, hoping to spur innovation while keeping its potential hazards in check. The central question: now that AI is finally here, how do we hold it to ethical standards?

“For the first time, the ethics of AI isn’t an abstraction or philosophical exercise,” said Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered AI, at the institute’s recent launch, as reported by Axios. “This tech affects real people, living real lives.”

New EU ethical guidelines raise the bar for industry accountability

Until recently, international governmental bodies had been silent on the question of ethics and AI. That changed on April 8 of this year, when the EU released a set of “Ethical guidelines for trustworthy AI,” making Europe the first mover in a space dominated by U.S. and Chinese AI innovation. The guidelines lay out seven key requirements that AI systems should meet, including human agency and oversight, technical robustness and safety, privacy and data governance, transparency, and the avoidance of bias. The guidelines are non-binding, however, stopping short of regulations such as the EU’s GDPR.

The EU’s action taps into a global debate about the role of business and government in placing ethical constraints on AI software. In particular, the debate highlights the tension between ensuring the ethical use of AI and hampering its development. With governments yet to solidify AI regulations, businesses have so far taken the lead, establishing their own internal guidelines. The EU guidelines, however, will likely serve as a framework, or at least a jumping-off point, for other regulatory bodies and businesses drafting their own ethical standards, notes one report in Lexology.

Curtailing possible negative consequences of AI

This move by the EU comes in the wake of a creeping realization, among industry giants and government leaders alike, that despite AI’s massive value, it may lead (and in some cases already has led) to unforeseen consequences, says an Axios report. This has been most visible in the media sector, where AI-powered bots have helped untrustworthy information circulate across social media.

For industry insiders, this kind of revelation is relatively new. Bill Gates, Microsoft’s co-founder, said at a recent Stanford University event that AI’s original creators, decades ago, were unaware of potential side effects like an “information free-for-all,” according to Axios. “There wasn’t a recognition way in advance that that kind of freedom would have these dramatic effects that we’re just beginning to debate today,” he said.

Today, however, it’s not just AI’s role in media production that has industry insiders concerned. For instance, researchers at Google and Microsoft, as part of the NYU-affiliated group AI Now, recently called for government regulation of facial recognition technology, citing its potential hazards. Indeed, San Francisco has now become the first U.S. city to ban the use of facial recognition by police and other agencies. “I think part of San Francisco being the real and perceived headquarters for all things tech also comes with a responsibility for its local legislators,” Supervisor Aaron Peskin told the New York Times. “We have an outsize responsibility to regulate the excesses of technology precisely because they are headquartered here.”

Others fear the rise of autonomous weaponry, reports Engadget.

How to protect humans from AI without stifling its benefits

Still, policing the private sector’s many applications of AI could prove tricky. At the heart of the matter is the need for transparency about the uses and impacts of AI systems, without forcing businesses to disclose trade secrets or stifling innovation. Experts, however, have offered several paths forward.

For example, AI expert Charles Towers-Clark wrote in Forbes about the development of open-source AI “tracking” software, which can monitor the decision-making of AI systems, and flag any potential biases, without revealing the systems’ internal workings.
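To make that idea concrete, here is a minimal sketch of what black-box decision monitoring can look like, in the spirit of the tracking software described above but not drawn from it. The `audit_outcomes` helper, the record layout, and the stand-in threshold “model” are all assumptions for illustration; the key point is that the auditor queries the system only through its inputs and outputs.

```python
from collections import defaultdict

def audit_outcomes(predict, records, group_key):
    """Audit a black-box model for outcome disparities across groups.

    `predict` is any callable mapping a record to a 0/1 decision; the
    auditor sees only inputs and outputs, never the model's internals.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += predict(record)

    # Positive-decision rate per group.
    rates = {g: positives[g] / totals[g] for g in totals}

    # Disparate-impact ratio: lowest group rate over highest.
    # Values well below 1.0 flag unequal treatment worth investigating.
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Stand-in "model": a simple income threshold rule (hypothetical data).
applicants = [
    {"income": 40, "group": "A"},
    {"income": 90, "group": "A"},
    {"income": 45, "group": "B"},
    {"income": 50, "group": "B"},
]
rates, ratio = audit_outcomes(lambda r: int(r["income"] > 60),
                              applicants, "group")
print(rates, ratio)  # {'A': 0.5, 'B': 0.0} 0.0
```

Because an audit like this needs only query access, a regulator or independent watchdog could run it without the vendor ever publishing its model.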

Another report, in the MIT Technology Review, argues that the decisions of AI systems can be made explainable, and thus legally accountable, in instances of biased or erroneous decision-making, without publishing the blueprints of the systems themselves. The authors point out that such unwanted decisions can likewise be corrected in a similar fashion.
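One widely discussed way to make an individual decision explainable without opening up the model is the counterfactual explanation: report the smallest change to an input that would have flipped the outcome. The sketch below illustrates that general idea; it is not the specific method in the MIT Technology Review report, and the `counterfactual` helper, the feature names, and the single-feature search are simplifying assumptions.

```python
def counterfactual(predict, record, feature, step, max_steps=100):
    """Find a small change to one numeric feature that flips a
    black-box model's decision. Only query access to `predict` is
    needed; the model's parameters and structure stay hidden."""
    original = predict(record)
    probe = dict(record)
    for _ in range(max_steps):
        probe[feature] += step
        if predict(probe) != original:
            return probe  # first probed input that flips the decision
    return None  # no flip found within the search budget

# Stand-in loan model: approve when income exceeds 60 (hypothetical).
model = lambda r: int(r["income"] > 60)
applicant = {"income": 52, "group": "B"}

flipped = counterfactual(model, applicant, "income", step=1)
if flipped is not None:
    print(f"Denied at income={applicant['income']}; "
          f"would be approved at income={flipped['income']}.")
```

An explanation of the form “you were denied at an income of 52 but would have been approved at 61” gives an affected person grounds to contest the decision, and points to how a faulty rule might be corrected, all without disclosing the model’s blueprints.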

Absent further regulation, however, it remains to be seen what shape accountability mechanisms will take, and to which standards AI developers will be held. The EU, for one, is continuing to gather feedback on the matter, and plans to issue a final accountability report in 2020.

How other countries are responding

In the meantime, 18 countries have released their own AI strategies, including Canada, China, Germany and Japan. Each offers its own degree of regulatory oversight and government funding, ranging from Canada’s $21 million investment in AI research to nearly $2 billion in funding from the South Korean government.

The U.S. will soon join the pack. Plans for an “American AI Initiative” were announced this past February. The initiative not only directs federal agencies to prioritize AI in research and funding, but also tasks other governmental bodies, such as the National Institute of Standards and Technology, with creating ethical standards for AI, potentially along the lines of the EU’s. So far, however, the program provides no new funding for development.

What’s clear from this trend is that, alongside AI’s rapid growth, both businesses and governments have taken a serious interest in its ethical implications. For an industry that has largely escaped regulatory scrutiny, this may represent a turning point for accountability. For developers, now is the time to consider how such potential measures may affect industry practice.

About the Author:

Jeff Fochtman is vice president of Consumer Solutions and Global Marketing at Seagate.