
Community of ethical hackers needed to prevent AI’s looming ‘crisis of trust’

The Artificial Intelligence industry should create a global community of hackers and “threat modelers” dedicated to stress-testing the harm potential of new AI products in order to earn the trust of governments and the public before it’s too late.

This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge’s Centre for the Study of Existential Risk (CSER), who have authored a new “call to action” published today in the journal Science.

They say that companies building intelligent technologies should harness techniques such as ‘red team’ hacking, audit trails and ‘bias bounties’—paying out rewards for revealing ethical flaws—to prove their integrity before releasing AI for use on the wider public.
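To make the idea of a ‘bias bounty’ more concrete, the minimal sketch below shows the kind of simple disparity check an outside bounty hunter might run and report. It is purely illustrative and not taken from the paper; the groups, approval rates and threshold are hypothetical assumptions.

```python
# Hypothetical bias-bounty style check: measure a demographic parity gap
# between two made-up groups in a model's approval decisions.
import numpy as np

rng = np.random.default_rng(1)

# Simulated model outputs: 1 = approved, 0 = rejected, for groups "A" and "B".
group = rng.choice(["A", "B"], size=5000)
approved = np.where(group == "A",
                    rng.random(5000) < 0.62,   # assumed approval rate for group A
                    rng.random(5000) < 0.48)   # assumed approval rate for group B

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
gap = abs(rate_a - rate_b)

print(f"Approval rate A: {rate_a:.2%}, B: {rate_b:.2%}, parity gap: {gap:.2%}")

# A bounty submission might flag gaps above some agreed threshold, e.g. 5 points.
if gap > 0.05:
    print("Potential bias finding: demographic parity gap exceeds threshold.")
```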

Otherwise, the industry faces a ‘crisis of trust’ in the systems that increasingly underpin our society, as public concern continues to mount over everything from driverless cars and autonomous drones to secret social media algorithms that spread misinformation and provoke political turmoil.

The novelty and ‘black box’ nature of AI systems, and ferocious competition in the race to the marketplace, have hindered the development and adoption of auditing and third-party analysis, according to lead author Dr. Shahar Avin of CSER.

The experts argue that incentives to increase trustworthiness should not be limited to regulation, but must also come from within an industry yet to fully comprehend that public trust is vital for its own future—and trust is fraying.

The idea of AI ‘red teaming’—sometimes known as white-hat hacking—takes its cue from cyber-security.

“Red teams are ethical hackers playing the role of malign external agents,” said Avin. “They would be called in to attack any new AI, or strategise on how to use it for malicious purposes, in order to reveal any weaknesses or potential for harm.”
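As a rough illustration of what such an attack might look like in practice, the sketch below probes a toy classifier with small, gradient-guided input perturbations and reports how often they flip its decision. This is an assumption-laden example for readers, not a method from the article: the model, weights and epsilon value are all hypothetical.

```python
# Illustrative red-team probe: a fast-gradient-sign-style perturbation
# against a toy logistic-regression "deployed model".
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed model weights.
w = rng.normal(size=20)
b = 0.1

def predict_proba(x):
    """Probability the model assigns to the positive class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def perturb(x, epsilon=0.05):
    """Nudge each feature in the direction that most undermines the
    model's confidence in its current prediction (FGSM-style)."""
    p = predict_proba(x)
    label = 1 if p >= 0.5 else 0
    grad = (p - label) * w          # gradient of the loss w.r.t. the input
    return x + epsilon * np.sign(grad)

# How often does a tiny perturbation flip the model's decision?
inputs = rng.normal(size=(1000, 20))
flipped = 0
for x in inputs:
    before = predict_proba(x) >= 0.5
    after = predict_proba(perturb(x)) >= 0.5
    flipped += before != after

print(f"Decisions flipped by small perturbations: {flipped / len(inputs):.1%}")
```

A real red team would go far beyond this, probing data pipelines, misuse scenarios and deployment context, but the principle is the same: systematically search for inputs and strategies that make the system fail or cause harm.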

While a few big companies have internal capacity to ‘red team’—which comes with its own ethical conflicts—the report calls for a third-party community, one that can independently interrogate new AI and share any findings for the benefit of all developers.

A global resource could also offer high-quality red teaming to the small start-up companies and research labs developing AI that could become ubiquitous.