Martin Wright looks at the plethora of efforts to avoid unintended consequences from powerful new technologies

AI and its companion technologies have a huge capacity to transform the world for good, but also a worrying potential for some pretty devastating unintended consequences. When it comes to the question of governance, that poses quite a challenge. The “move fast and break things” culture that has helped drive AI is somewhat at odds with the safety-first, precautionary-principle approach of sustainability. And the fact that machine learning works best “in the wild” – i.e. when it’s operating in the real world, not the confined environment of the lab – adds to the challenge.

Small wonder, then, that the last year or so has seen something approaching a frenzy of initiatives involving academics, tech companies and governments, aimed at setting standards and guidelines for AI.

They vary in breadth and focus, but most come up with strikingly similar sets of recommendations, which largely boil down to calls for the technology to be harnessed for the benefit of all of humanity, while minimising the risks inherent in its exploitation. Effectively – although without saying as much – they all start from the same premise as the Hippocratic Oath (“first, do no harm”): indeed, that might be a helpful preamble to...

This is premium content. Subscribe to read the full article: First, do no harm: regulators and tech industry scramble to tame the AI tiger.


