AI Ethical Guidelines Raise the Bar for Industry Accountability

Few things inspire more excitement and trepidation than the development of artificial intelligence (AI). AI has rapidly evolved from the pages of sci-fi novels into a lived reality. Its presence is being felt in all areas of commercial and consumer life: just ask Alexa or any of the AI-powered digital assistants you talk to.

As the “AI era” unfolds, regulators are now hoping to make it accountable. In doing so, they aim to spur innovation while keeping the technology’s potential hazards in check. The central question: now that AI is finally here, how do we hold it to ethical standards?

“For the first time, the ethics of AI isn’t an abstraction or philosophical exercise,” says Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered AI. “This tech affects real people, living real lives.”

Forming international ethics standards

Last year the EU released a set of “Ethical guidelines for trustworthy AI”, making Europe the first mover in a space dominated by U.S. and Chinese AI innovation. The guidelines lay out seven key requirements that AI systems should meet, including transparency, human oversight, data privacy and governance, security, and the avoidance of bias. The guidelines are non-binding, however, stopping short of regulations such as the EU’s GDPR.

The EU’s action taps into a global debate about the role of business and government in placing ethical constraints on AI software. In particular, the guidelines highlight the tension between ensuring the ethical use of AI and hampering technology development. Because governments have yet to solidify AI regulations, businesses have so far taken the lead on the issue, establishing their own internal guidelines. The EU guidelines, however, can serve as a framework, or at least a jumping-off point, for other regulatory bodies and businesses drafting their own ethical standards, notes one report in Lexology.

Mitigating the possible consequences of AI

Deliberations on the subject by governments and standards bodies come in the wake of a creeping realization, among industry giants and government leaders alike, that despite AI’s massive value, it has led to unforeseen consequences, according to an Axios report. This has been most visible in the media sector, where AI bots have helped untrustworthy information circulate across social media.

For industry insiders, this realization is relatively new but has gained credence in recent years. Bill Gates, Microsoft’s co-founder, said at a recent Stanford University event that AI’s original creators, decades ago, were unaware of potential side effects like an ‘information free-for-all’, according to Axios. “There wasn’t a recognition way in advance that that kind of freedom would have these dramatic effects that we’re just beginning to debate today,” he said.

It’s not just AI’s role in media production that has industry insiders concerned, however. For instance, researchers at Google and Microsoft, as part of the NYU-affiliated group AI Now, have called for government regulation of facial recognition technology, citing potential hazards. Others fear the rise of technologies such as autonomous weaponry, reports Engadget.

Still, policing various applications of AI in the private sector could prove tricky. At the heart of the matter is the need to make AI systems transparent without forcing businesses to disclose their trade secrets or stifling innovation. Experts, however, have offered several paths forward.

For example, AI expert Charles Towers-Clark wrote in Forbes about the development of open-source AI “tracking” software, which can monitor the decision-making processes, and any potential biases, of AI systems without revealing those systems’ internal workings.

Another report, in the MIT Technology Review, argues that the decisions of AI systems can be made explainable, and thus legally accountable, in instances of biased or erroneous decision-making, without the need to publish the blueprints of those systems. The report also notes that such unwanted decisions can be corrected in a similar fashion.
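Neither report prescribes a specific mechanism, but the basic idea of auditing an AI system from the outside can be illustrated with a short, hypothetical sketch. The example below is not drawn from either report: it assumes a black-box model_predict function that can only be queried for its outputs, and it compares positive-outcome rates across groups to surface potential bias without inspecting the model’s internals.

```python
# Minimal sketch of a black-box bias audit: only the model's predictions are
# queried, never its internals. The model_predict function and group labels
# are hypothetical stand-ins for illustration.

from typing import Callable, Sequence


def demographic_parity_gap(
    model_predict: Callable[[Sequence], Sequence[int]],
    samples: Sequence,
    groups: Sequence[str],
) -> float:
    """Return the largest difference in positive-outcome rates between groups."""
    predictions = model_predict(samples)
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + (1 if pred == 1 else 0))
    positive_rates = [pos / total for total, pos in rates.values() if total > 0]
    return max(positive_rates) - min(positive_rates)


if __name__ == "__main__":
    # Toy stand-in model that "approves" every other applicant.
    toy_model = lambda xs: [i % 2 for i, _ in enumerate(xs)]
    applicants = ["a", "b", "c", "d", "e", "f"]
    group_labels = ["g1", "g1", "g1", "g2", "g2", "g2"]
    gap = demographic_parity_gap(toy_model, applicants, group_labels)
    print(f"Demographic parity gap: {gap:.2f}")
```

A regulator or third-party auditor could run this kind of check against a hosted model without ever seeing its architecture or training data, which is the trade-off the transparency-versus-trade-secrets debate turns on.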

Absent further regulation, however, it remains to be seen what shape accountability mechanisms will take, and to which standards AI developers will be held.

How other countries are responding

In the meantime, 18 countries have released their own AI strategies, from Canada and Germany to Japan and China. Each offers its own degree of regulatory oversight and government funding, ranging from Canada’s $21 million investment to support AI research to nearly $2 billion in funding from the South Korean government.

The U.S. is also joining the pack with its plans for an “American AI Initiative”. The initiative not only directs federal agencies to prioritize AI in research and funding, but also tasks governmental bodies such as the National Institute of Standards and Technology with creating ethical standards for AI, potentially similar to the EU’s.

What’s clear from this trend is that, alongside the rapid growth of AI, both businesses and governments have become interested in the ethical considerations of AI. For an industry that has largely escaped regulatory scrutiny, this may represent a turning point for accountability, for better or worse. For developers, now is the time to consider how such potential measures may impact industry practice.


About the Author:

John Paulsen
John Paulsen is a "Data for Good" advocate with more than 20 years in the data storage industry. He's helped launch many industry firsts, including HAMR technology, 10K-rpm and 15K-rpm hard drives, drives designed specifically for video and for gaming, Serial ATA drives, fluid dynamic HDD motors, 60TB SSDs, and MACH.2 multi-actuator technology.