AI is coming to our phones and our homes, and with it comes a new set of tricky questions. DeepMind, the UK-based AI company that Google acquired in 2014, has declared its intention to tackle these issues by forming a new research team dedicated to the ethics and morality of AI use.
Dubbed DeepMind Ethics & Society, or DMES, the group will research areas of concern such as the potential biases of AI, the impact of increasing automation on the labor market and on legal and healthcare systems, and the need to ensure that AI is developed in ways that benefit society. This isn't about a Skynet-style science fiction doomsday; it's about keeping the human impact in perspective as we rush to adopt exciting new technologies.
Right now DMES is a small team of eight full-time staffers, but it also draws on the expertise of six unpaid fellows from academia and industry think tanks. Among them is Oxford philosopher Nick Bostrom, who literally wrote the book on the risks of AI. DeepMind hopes to more than triple the DMES staff within the year.
Of course, DeepMind isn't alone in wading into the moral quagmire of AI. The six giants of tech (Google, Amazon, Microsoft, Facebook, IBM and Apple) are already collaborating in the Partnership on AI. DMES will also partner with established groups such as the AI Now Institute at NYU and the Leverhulme Centre for the Future of Intelligence.
DMES outlined its mission statement in a blog post by co-leads Verity Harding and Sean Legassick: "If AI technologies are to serve society, they must be shaped by society's priorities and concerns." As examples of ongoing issues, Harding and Legassick point to recent controversies such as racism in criminal justice systems and the various failures of AI systems used in hospitals, credit agencies and courtrooms.
DeepMind itself doesn't have a spotless ethical record. The company was criticized for a deal with the UK's National Health Service (NHS), in which it partnered with three London hospitals to process the medical data of 1.6 million patients. DeepMind failed to inform individuals that their data was being used, and the data transfer was later ruled illegal. DeepMind consequently brought in external reviewers to examine the ethical aspects of future deals.
Privacy concerns are likely to be at the forefront of most users' minds as AI and machine learning increasingly become staples of consumer electronics. At the moment, different companies are trying different approaches. Google, for example, is pushing Google Home and Assistant, whereas Huawei is talking up the Neural Network Processing Unit (NPU) in the Kirin 970 processor for its upcoming Mate 10.
In the future, our smart home assistants, whether from Google, Amazon, Microsoft, Apple or anyone else, may learn a whole lot of intimate details about us and our families, details the manufacturer will also have access to. We've already seen the problems that arise when algorithms curate our news feeds, and emerging technologies such as self-driving cars face hard questions about how an AI should be programmed to react when human life is threatened.
Given how eager tech companies are to adopt AI, it's good to know that they're also making the effort to research the ethical and social questions around it. The technology is still in its infancy, and it's important not only to understand the risks but also to communicate them effectively to the public. The experts are doing their research, but it's also up to all of us, the users, to participate, educate ourselves and voice our concerns. The question is: will powerful tech companies, with an eye on their bottom line, be prepared to listen?
What do you think are the main risks of AI? Can corporations be trusted to use it responsibly?