The rapid development of artificial intelligence (AI) has generated much debate across the legal community about its risks and benefits. AI can increase productivity and efficiency, and create new opportunities for communities and businesses. However, its use is not without challenges, including the regulatory demands it places on governments.
What is the UNESCO Recommendation?
In November 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) published its Recommendation on the Ethics of Artificial Intelligence (Recommendation). The Recommendation proposes a global framework of standards for the ethical use of AI to be adopted by UNESCO member states. It examines ethical challenges that may arise, and how AI policies can be designed to ensure AI is used and developed in a way which benefits humanity, individuals, society, and the environment. The Recommendation has been adopted by 193 UNESCO member states.
AI Ethics: Foundational Values and Principles
The Recommendation supports AI policy development and legal reforms which reflect and are guided by four core values:
- “respecting, protecting and promoting human rights and fundamental freedoms, and human dignity;
- environment and ecosystem flourishing;
- ensuring diversity and inclusiveness; and
- living in peaceful, just and interconnected societies.”
These values advance and reflect ten core human rights-based principles:
- AI use should be proportionate and do no harm;
- safety and security risks should be addressed, prevented and eliminated, so that safe AI facilitates sustainable and privacy-focused development and use;
- fairness and non-discrimination should be promoted, consistent with international law, ensuring AI benefits are available and accessible to all;
- sustainability objectives should be prioritised, including AI’s human, social, cultural, economic and environmental impact;
- privacy must be respected, protected and promoted, for example, through data protection and governance mechanisms, and AI actors must be accountable for the systems they develop;
- human and public oversight cannot be ceded, and AI should not replace human responsibility and accountability;
- AI systems must be transparent and explainable so that people are fully informed about decisions made by AI and the reasoning behind those decisions;
- responsibility and accountability are key, and the ethical and legal responsibility for AI must be attributable to AI actors throughout each stage of the AI lifecycle;
- public awareness and AI literacy should be promoted through accessible education and public engagement, to ensure effective public participation;
- AI regulation should be adaptable and collaborative, and must respect international laws and stakeholder interests.
AI Ethics: Key Policy Areas
The Recommendation outlines 11 key policy areas that governments should consider to ensure AI use is ethical, and respects human rights and freedoms. It suggests practical strategies, based on the above values and principles, to responsibly develop, use and regulate AI globally.
Ethical impact assessments
States should introduce frameworks for ethical impact assessments to identify and address AI’s benefits and risks, including any impact on human rights and freedoms, labour rights, the environment, and society. For example, assessing the socio-economic impact of AI to ensure its adoption does not increase the poverty gap or digital divide.
Ethical governance and stewardship
States should develop regulatory mechanisms that are inclusive, transparent, multidisciplinary and multilateral. Harms should be investigated and redressed. AI policy must comply with human rights laws. This could be achieved by:
- establishing a certification process;
- requiring organisations to engage an independent AI Ethics Officer to oversee ethical impact assessments, auditing and continuous monitoring;
- facilitating AI governance forums;
- supporting strategic research on safety and security risks, transparency and explainability, inclusion and literacy;
- introducing liability frameworks to ensure accountability;
- disclosing and combatting stereotypes in AI outcomes to ensure AI does not foster cultural, economic or social inequalities or prejudice, spread misinformation or disinformation, or disrupt freedom of expression or access to information;
- setting clear requirements for AI system transparency and explainability to ensure trustworthiness;
- testing and developing laws by involving all AI actors, for example, through policy prototypes and regulatory sandboxes.
Data policy
States should encourage continuous evaluation of AI training data, including the data collection and selection process, data security and protection measures, and feedback mechanisms. Safeguards should be adopted to protect privacy, and individual rights to personal and sensitive data.
Development and international cooperation
States and organisations should prioritise AI ethics, for example, by:
- including discussions on it in international, intergovernmental and multi-stakeholder fora;
- adhering to the above values and principles;
- contributing expertise, funding, data, knowledge and infrastructure to address AI development issues;
- promoting AI ethics research and innovation.
Environment and ecosystems
States should assess and seek to reduce AI's environmental impact, including its carbon footprint, energy consumption, and the raw material extraction required to support AI infrastructure. Incentives could be introduced to develop and adopt ethical AI solutions for disaster risk resilience, monitoring ecosystems, and protecting the planet. These should involve Indigenous communities, support the economy, and promote sustainable consumption and production patterns.
Gender
States should explore AI’s potential to achieve gender equality, and ensure its use does not compromise the safety and integrity of women. They must ensure gender biases are not replicated in AI systems, and seek to eliminate gender gaps. This might be achieved through increased representation, such as more opportunities for women to participate in STEM, for example, through incentives for women and policies promoting affirmative action and harassment-free environments.
Culture
States should use AI to preserve, enrich, understand, promote and make accessible cultural heritage, such as human language and expression. AI could be used to bridge cultural gaps, increase human understanding, and mitigate the risk of languages disappearing. The relationship between AI and intellectual property (IP) should be researched to develop policies which assess how AI affects IP owners and protect their IP, as well as how best to protect IP rights in works created using AI. Libraries and archives might also use AI to enhance their collections and improve access for users.
Education and research
States should collaborate with educational institutions and organisations to provide AI literacy education, including literacy, numeracy, coding and ethical skills. They should facilitate research on the ethical use of AI in education to ensure AI empowers students and teachers without reducing cognitive abilities, compromising privacy or allowing information to be misused.
Communication and information
States should use AI systems to improve access to information and knowledge by implementing frameworks which encourage transparency in online communications, ensure individuals have access to diverse viewpoints, notify users if and why content has been removed or changed, and provide appeal mechanisms for users to seek redress. States should invest in digital and media literacy to mitigate disinformation, misinformation and hate speech.
Economy and labour
States should assess and address the impact of AI on labour markets, and consider introducing broader interdisciplinary skills in education programs to give workers a better chance of finding jobs in a rapidly changing market. They should implement frameworks for a fair transition for at-risk workers, for example, upskilling and reskilling programs, mechanisms to retain employees, and safety net programs. States should also plan to ensure competitive markets and consumer protection, including by preventing market abuse by AI monopolies.
Health and social wellbeing
States should employ AI to improve health outcomes and protect the right to life, for example, by mitigating and managing disease outbreaks. They must exercise caution when using AI in health, including by ensuring oversight to reduce bias, monitoring privacy risks, obtaining individuals' informed consent, ensuring final decisions are made by humans, investing in ethical research committees, and developing guidelines for human-robot interactions.
Summary
The Recommendation establishes an important foundation to guide the development and use of AI. The above values and principles should underpin AI policy and practices to promote human rights and freedoms, peaceful and interconnected societies, diversity and inclusion, and flourishing environments. Businesses should be aware of the Recommendation as it will continue to inform AI policy development and implementation by governments in the near future.