By Michael Dukakis, Nguyen Anh Tuan, Alex Pentland
Artificial intelligence (AI) and automated systems increasingly affect our daily lives. Banking algorithms decide who is eligible for housing or loans, healthcare algorithms make decisions on coverage and standards of care, and companies use hiring algorithms to sort resumes.
While all of these innovations make life more convenient, they pose risks to the public and are often rife with bias and discrimination.
In October, the White House Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights.” As leaders of the Boston Global Forum, we applaud President Biden and the OSTP for advancing this important measure, which protects people from these threats and defines guardrails on technology to reinforce civil rights, civil liberties, privacy, and equal opportunity, and to ensure access to critical resources and services.
The blueprint outlines common-sense protections with respect to AI:
- It shouldn’t discriminate;
- It shouldn’t violate data privacy;
- We should know when AI is being used;
- We should be able to opt out and talk to a human when we encounter a problem.
It’s not binding legislation, but rather a set of recommendations for government agencies and technology companies using AI.
It’s also a great tool to educate the public as well as organizations responsible for protecting and advancing our civil rights and civil liberties.
On the world stage, bad actors in other nations are increasingly using AI to spread disinformation and propaganda through deep fakes and other manipulated media.
Last year, the Boston Global Forum and the World Leadership Alliance-Club de Madrid brought prominent international leaders together to explore ideas and strategies for a Global Law and Accord on Artificial Intelligence and Digital Rights.
The group established the Global Alliance for Digital Governance (GADG) to coordinate resources among governments, international organizations, corporations, think tanks, civil society, and influencers in service of AI and a digital sphere for good, synthesizing those resources to maximize their impact. It is not an organization but a network for sharing resources and for cooperation among governments. At the core of this initiative is a common understanding of policy and practice, anchored in general principles that help maximize the “good” and minimize the “bad” associated with AI:
- Fairness and justice for all: The first principle is already embraced by the international community as a powerful aspiration. All entities, private and public, should treat others, and be treated, with fairness and justice.
- Responsibility and accountability for policy and decision making, private and public: The second principle recognizes the power of the new global ecology that will increasingly span all entities worldwide, private and public, developing and developed.
- Precautionary principle for innovations and applications: The third principle supports innovation rather than pushing for regulation; it backs initiatives that explore the unknown with care and caution.
- Ethics-in-AI: Fourth is the principle of ethical integrity. At issue is incorporating cultural commonalities into a global ethical system for all phases, innovations, and manifestations of artificial intelligence.
Without adequate guidelines and useful directives, the undisciplined use of AI poses risks to the well-being of individuals and creates fertile ground for economic, political, social, and criminal exploitation.
As we gain consensus on principles and practices among members of the global society, we will generate and enhance social benefits and well-being for all, shared by all.
Michael Dukakis is the former governor of Massachusetts and chairman of the Boston Global Forum. Nguyen Anh Tuan is CEO of the Boston Global Forum. MIT Professor Alex “Sandy” Pentland is a contributor to the book “Remaking the World – Toward an Age of Global Enlightenment” and a board member of the UN Global Partnership for Sustainable Development Data.
This article first appeared in the Boston Business Journal, December 30.