Whether you've used social media, a navigation app or a picture filter, chances are that Artificial Intelligence (AI) has affected you. It's not just you: AI is impacting human rights worldwide, and this course will show you how your rights are affected by AI and how you can be empowered to guard them.

From the author:

“From algorithms that are designed and used to shape our social media news feeds, to those profiling users and curating the information they receive, Artificial Intelligence (AI) is impacting human rights worldwide. There is an urgent need to inform and educate people on how their rights are affected by AI and to empower them with tools to guard their rights. With that context, welcome to a new micro-learning course jointly developed by UNESCO and UNITAR on AI and Human Rights. This course breaks down complex concepts about AI for youth. Through activities built around our daily technology interactions, there is a strong focus on how freedom of expression, the right to privacy and the right to equality are affected by the use of AI. Happy learning!”
Click through the microlessons below to preview this course. Each lesson is designed to deliver engaging and effective learning to your team in only minutes.
Follow the interactions on each screen or click the arrows to navigate between lesson slides.
Defending Human Rights in the Age of Artificial Intelligence
After taking this course, you will...
- be aware of the implications of AI on freedom of expression, the right to privacy and the right to equality.
- have engaged with practical examples of uses of AI that are problematic from a human rights perspective.
Search engine algorithms help us access the information that we want by rapidly processing data on the internet. The search results tend to be more and more personalized based on a user’s location, gender, language, search history, and other data footprints online.
Job-matching algorithms analyze people’s competencies to show employers suitable candidates for employment.
On-demand video platforms provide personalized recommendations based on our viewing patterns and those of millions of other users. By doing so, they offer advertisers the ability to predict and nudge our attitudes and actions.
Algorithms help judges determine the likelihood of someone committing a crime based on the past record of individuals with a similar profile. Algorithms are also used to suggest the duration of prison sentences.
Digital profiles are used by immigration authorities to approve or reject visa applications.
In this course, we will look at how the use of AI is growing, and why we need to defend three human rights:
- Freedom of Expression
- Right to Privacy
- Right to Equality
Freedom of Expression | Part 1
Equally, freedom of expression is important for personal expression, which enables individual self-realization. (Gilmore, 2011)
The UN’s Human Rights Committee clarifies Article 19 of the ICCPR in its General Comment No. 34 (2011), stating that...
- Article 19 of the International Covenant on Civil and Political Rights (ICCPR) underlines that freedom of opinion and freedom of expression are indispensable conditions for the full development of the person and are foundation stones of every free society.
- The right to freedom of expression in the Universal Declaration of Human Rights (UDHR) and the ICCPR also includes the right to seek, receive and impart information and ideas. (de Zayas & Martín, 2012)
Online Content Moderation
Internet platforms rely on AI techniques to moderate, flag and remove illegal content posted online. Through practices like spam detection, hash-matching technology, keyword filters, natural language processing and other “detection algorithms”, social media companies can remove or reduce the visibility of content perceived as ‘undesirable’ under the company’s policy or the laws of a country. (UNGA A/73/348, 2018)
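To make the idea of automated moderation concrete, here is a minimal sketch of one of the techniques named above, a keyword filter. The blocklist and posts are invented for illustration; real platforms combine many signals rather than a single word list.

```python
# Toy keyword-filter moderation: flag a post if it contains a blocked term.
# BLOCKLIST and the example posts are hypothetical.

BLOCKLIST = {"spamword", "scamlink"}

def moderate(post: str) -> str:
    """Return 'removed' if the post contains a blocked keyword, else 'kept'."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return "removed" if words & BLOCKLIST else "kept"

print(moderate("Check out this scamlink now!"))                 # removed
print(moderate("Historic war photograph shared for news value")) # kept
```

A filter this crude cannot see context or news value, which is one reason human oversight of automated moderation matters.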
AI use for content moderation should be implemented with oversight and a clear process to protect the rights of the users.
In May 2018, a group of organizations, advocates and academic experts proposed the Santa Clara Principles as initial steps to be followed by companies and platforms engaged in content moderation, in order to ensure the fair enforcement of content guidelines. (The Royal Society, 2018)
Freedom of Expression | Scenarios
Facebook 2016 The case of Facebook banning the Pulitzer prize-winning ‘Napalm Girl’ photograph
In September 2016, Facebook decided to remove the iconic photograph of nine-year-old Phan Thi Kim Phúc running naked in the aftermath of a napalm attack during the Vietnam War.
As we saw in the Facebook example, even AI-driven algorithms make mistakes.
Alana is staying with her parents over the weekend. Alana has different political ideologies from that of her parents. They tend to have heated debates over them often, particularly during political crises.
She decides to post about her latest disagreement with her parents on her social media feed. To her relief, she receives a wave of support from her friends and colleagues, asserting that the older generation has no regard for the future generations to come.
However, when using the family computer one evening, Alana keeps seeing recommended articles and sites with political views opposite to her own. She realizes that, since this is a shared device, the algorithm is still recommending links that the last user, her father, would have liked to read. Just as her father was steered toward information that confirmed his biases, she too was being shown information based on her prior views. She is also living in a filter bubble and a social media echo chamber, insulated by her own personal ecosystem of information.
Alana understands that for her to have meaningful debates with her father, they must step out of their filter bubbles and echo chambers.
Alana finds a few online tools that can help them find information that is not aligned with their personal biases.
Every time we click, watch, or even share a comment, social media and search engines collect our information, which generates personalized advertisements. In this sense, our phones and computers are like a one-way mirror into our very minds.
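The mechanism behind a filter bubble can be sketched in a few lines. This toy recommender, with invented article titles and stances, simply scores content by similarity to what the user clicked before, so past clicks narrow future recommendations.

```python
# Minimal filter-bubble sketch: recommend only the stance clicked most often.
# Articles and stances are invented for illustration.
from collections import Counter

articles = {
    "Why policy X is great": "pro",
    "Policy X explained":    "pro",
    "The case against X":    "anti",
}

def recommend(click_history):
    """Recommend articles matching the stance the user has clicked most."""
    stance_counts = Counter(articles[title] for title in click_history)
    favourite = stance_counts.most_common(1)[0][0]
    return [t for t, stance in articles.items() if stance == favourite]

# After two 'pro' clicks, only 'pro' articles come back: a bubble forms.
print(recommend(["Why policy X is great", "Policy X explained"]))
```

Real recommender systems are far more elaborate, but the feedback loop, where past behaviour filters future information, is the same.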
We can avoid filter bubbles by
Right to Privacy | Part 1
Privacy is important since it also enables other human rights and freedoms including:
The right to privacy is enshrined in Article 12 of the UN’s Universal Declaration of Human Rights (UDHR) and Article 17 of the ICCPR, as well as other human rights documents, international instruments and national laws.
Through intensive use of the Internet and the increasing use of Internet of Things (IoT) devices, individuals are generating a vast amount of data. (ARTICLE 19 & Privacy International, 2018)
This can be done intentionally by writing posts, using emojis or posting pictures on social media.
...or unintentionally, by browsing websites, clicking on links, accepting cookies, etc.
Right to Privacy | Part 2
Concerns related to surveillance in public places Select all correct answers
Targeted surveillance of civilians needs to comply with the three-part test that includes: Select all the right answers
Online Tracking and De-Anonymization of Individuals
- The balance between using data and protecting people’s privacy has historically relied on data anonymization, both legally and practically. (Montjoye, Farzanehfar, Hendrickx, & Rocher, 2017)
- Ubiquitous computing and big data are challenging anonymization.
- Predictions drawn from tracked data can be accurate to the point of de-anonymizing web users whose online activities are constantly tracked.
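A minimal sketch can show how de-anonymization works in practice. All data here is invented: an "anonymized" browsing log omits names, yet linking two quasi-identifiers (postal code and birth year) against a hypothetical public directory is enough to re-identify a user.

```python
# De-anonymization by linking quasi-identifiers. All records are invented.

anonymized_log = [
    {"zip": "10115", "birth_year": 1990, "visited": "health-forum.example"},
    {"zip": "80331", "birth_year": 1985, "visited": "news.example"},
]

public_directory = [  # hypothetical publicly available records
    {"name": "A. Schmidt", "zip": "10115", "birth_year": 1990},
    {"name": "B. Weber",   "zip": "80331", "birth_year": 1975},
]

def reidentify(log, directory):
    """Match log entries to named people via (zip, birth_year)."""
    matches = []
    for entry in log:
        for person in directory:
            if (person["zip"], person["birth_year"]) == (entry["zip"], entry["birth_year"]):
                matches.append((person["name"], entry["visited"]))
    return matches

print(reidentify(anonymized_log, public_directory))
# [('A. Schmidt', 'health-forum.example')]
```

Removing names alone did not protect the first user; the combination of seemingly harmless attributes acted as a fingerprint.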
Right to Privacy | Scenarios
Scenario #1 Virtual Assistants Eavesdropping: Amazon Alexa
Alexa had been listening in, recording their background conversation and then sending it to this person on their contact list. The device, however, was not hacked by a third party. Amazon confirmed that the audio was unintentionally broadcast by the device. (Moye, 2018).
Vision Land City has installed cameras that use facial recognition technology everywhere in the city.
Vision Land police had installed these cameras to help them identify criminals. More than 500,000 faces are analyzed and recorded every day, the overwhelming majority belonging to people not suspected of any wrongdoing.
The court rules that the use of facial recognition has led to a breach of privacy. The police officers applied the technology so widely that it was not in line with the reasonable exemptions allowed under the right to privacy. The court further ruled that the police did not carry out any audits on its facial recognition system to ensure that it was not discriminatory against certain social groups and minority communities.
Right to Equality | Part 1
Article 1 of the Universal Declaration of Human Rights (UDHR) proclaims that “all human beings are born free and equal in dignity and rights”.
Article 2 states that “everyone is entitled to all the rights and freedoms set forth in this Declaration without distinction of any kind, such as race, color, sex, language, religion, political or other opinion, national or social origin, property, birth or other status”. (UNGA Resolution 217, 1948)
Article 26 of the ICCPR is broader and explicitly provides protection against discrimination.
“All persons are equal before the law and are entitled without any discrimination to the equal protection of the law. In this respect, the law shall prohibit any discrimination and guarantee to all persons equal and effective protection against discrimination on any ground such as race, color, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.” (UNGA Resolution 2200A, 1966)
Negative discrimination is only morally wrong in the context of racial groups.
Establishing the criteria for recognizing discrimination does not necessarily equip us with tools sufficient for the analysis of algorithmic discrimination. Algorithms can acquire a discriminatory nature in multiple ways. These mainly include technical features of the algorithm being biased, intentionally or unintentionally, by programmers, or through the reinforcement of biases present in the training data of machine learning algorithms. The identification of direct, indirect and institutional discrimination is needed so that we can create regulatory or technical solutions.
Bias in AI comes from...
Many types of discrimination can be indirect.
For example, an algorithm that relies on cell phone usage patterns to determine the credit worthiness of a person is discriminatory if it assigns high credit risk to women in communities that...
There is a strong case for caution in our reliance on algorithms as the final decision-maker as they, at best, provide only useful insights.
Entry points for biases:
1. Programmer-driven bias
2. Data-driven bias
The target variable is the variable to be predicted, i.e., the output of the algorithm. Class labels divide the possible values of the target variable into mutually exclusive categories.
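These two terms can be illustrated with a toy classifier. The loan-screening scenario, the income attribute and the threshold are all invented: the point is only that the returned value is the target variable, and the set of possible return values is the class labels.

```python
# Toy classifier illustrating "target variable" and "class labels".
# The scenario, attribute and threshold are hypothetical.

CLASS_LABELS = {"creditworthy", "not_creditworthy"}  # mutually exclusive

def predict(income: int, threshold: int = 30000) -> str:
    """The returned value is the target variable being predicted."""
    return "creditworthy" if income >= threshold else "not_creditworthy"

prediction = predict(45000)
assert prediction in CLASS_LABELS
print(prediction)  # creditworthy
```

Note that the programmer chose both the attribute (income) and the threshold, which is exactly where programmer-driven bias can enter.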
Programmers choose the attributes of the data that should be observed and used for analysis.
Data-Driven Bias
If the rules extracted by a machine learning algorithm from a dataset are considered legitimate, prejudices and omissions embedded in the dataset will be repeated in the predictive model. Examples of such data-driven biases include:
- Unrepresentative sample data: some people are underrepresented in the data, or simply absent from the dataset because their data is not collected at all.
- Incorrect inferences: correlations between different variables in the data may be spurious, leading to incorrect outcomes.
- Proxy-induced biases: variables like a residential pin code can act as proxies for race or social class.
- Cyclical resource misallocation: selective data is used to determine future resource allocation. For example, if a municipal body uses pothole data recorded by sensors embedded in cars to decide which streets to repair, poorer areas with fewer cars may generate too little data and get fewer resources, while richer areas with more cars, and hence more pothole data, get their roads repaired quickly.
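Proxy-induced bias is easy to demonstrate on invented data. In this sketch the credit rule never sees a "group" attribute, yet because pin code perfectly correlates with group membership in the toy dataset, outcomes still split along group lines.

```python
# Proxy-induced bias sketch. All applicants and attributes are invented.

applicants = [
    {"pin_code": "1001", "group": "A", "repaid": True},
    {"pin_code": "1001", "group": "A", "repaid": True},
    {"pin_code": "2002", "group": "B", "repaid": True},
    {"pin_code": "2002", "group": "B", "repaid": False},
]

def approval_rate_by_pin(data, pin):
    """Naive credit rule: approve pins whose historical repayment >= 75%."""
    rows = [d for d in data if d["pin_code"] == pin]
    rate = sum(d["repaid"] for d in rows) / len(rows)
    return rate >= 0.75

# Group membership was never used, but outcomes track it anyway,
# because pin code is a near-perfect proxy for group here.
print(approval_rate_by_pin(applicants, "1001"))  # True  (all group A)
print(approval_rate_by_pin(applicants, "2002"))  # False (all group B)
```

Dropping a sensitive attribute from the dataset is therefore not enough to guarantee non-discrimination.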
Biased Training Data may look like:
Right to Equality | Scenarios
A good way to reduce bias being introduced into AI algorithms is to adopt diverse hiring practices that ensure a wider range of perspectives.
Summary and Conclusion
Artificial Intelligence (AI) is increasingly being used in such a manner that it is becoming the veiled decision-maker of our times.
The diverse technical applications loosely associated with AI are ever-present in our lives. They scan billions of web pages, digital trails and sensor-derived data within microseconds, using algorithms to prepare and produce consequential decisions.
From algorithms that shape how our social media news feeds are displayed, to those influencing our voting preferences, AI affects our rights to freedom of expression, privacy and equality.
Some AI systems can have very negative impacts on societies and communities, putting them at risk and deepening discrimination amongst them.
Recommendations
1. Advocate for countries to strengthen AI governance in line with international human rights standards, and to develop mechanisms for transparency, accountability and the redress of violations and abuses.
2. Advocate for the private sector and technical community to conduct human rights risk and impact assessments of AI applications, to ensure that these do not interfere with human rights.
3. Participate in rights-oriented research on the social, economic and political effects of AI content personalization, including the consequences of online “echo chambers”.
4. Raise awareness of the implications of AI and advocate for AI development that respects human rights.
5. Support media actors in investigating and reporting on the abuses and biases of AI as well as its benefits, and in harnessing AI to strengthen journalism and media development.