Google’s new manifesto says AI will be used to benefit society
The company says it won't develop AI applications "that cause or are likely to cause overall harm."
During last month’s Google I/O keynote, CEO Sundar Pichai said something that struck me. Almost as a throwaway, he said that Google’s objective was to “Make information more useful, accessible and beneficial to society.”
Google’s actual mission statement is to “Organize the world’s information and make it universally accessible and useful.” I was struck by the shift in tone; was he test-driving a new mission statement? “Beneficial to society” is a lot stronger (and more demanding) than “Don’t be evil.”
These values matter, and yesterday the company made another strong statement when Pichai blogged a kind of AI manifesto establishing a set of rules and principles to govern AI project development:
We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.
Here are the seven principles:
- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles.
Google says it won’t use AI for:
- Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
This follows on the heels of an internal employee uproar over AI-related work for the US Department of Defense. The employees who objected to using AI for weapons-related systems apparently won out. However, Google is still bidding on cloud contracts for the US military against Microsoft, Oracle and IBM. That bid was reportedly a long shot before; it's an even longer shot now.
Google’s ethical AI principles are welcome and necessary. Microsoft has also been an ethical AI leader. But statements and behavior are often different things (that's the cynical view). In addition, authoritarian states such as China and Russia, among other bad actors, are unlikely to adopt such principles, which will create some very challenging issues and dilemmas in the future.
And while “socially beneficial” outcomes are more likely if strict guidelines govern decision-making, there’s no way to avoid negative outcomes entirely. The genie is out of the bottle.