Technology and Access to Information: Effect of Search Algorithms on Access to Information

Author: Wendy Ashikomela Ashilenje
Advocate of the High Court of Kenya

Introduction

Are you accessing all the information on your Facebook, Threads, Google, Bing or Instagram? Technological advances in Africa have been characterised by growing internet use, driven in turn by the growing use of Artificial Intelligence (AI), confirming that we are in the Fourth Industrial Revolution (4IR). Statistics by Statista show that as of 2024, Africa had approximately 646 million internet users, up from 570 million in 2022. As a result of this growth, a vast amount of information is available through the various search engines and social media platforms. The science behind the internet may be complicated, but it can be summed up in one word: algorithms. Algorithms are programmed procedures, increasingly built on machine learning techniques, that produce a particular output based on the information they are fed. Tarleton Gillespie gives context to what algorithms are, describing them as encoded procedures that transform input data into a desired or specific output based on certain calculations.
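
To illustrate the definition, the sketch below (in Python, with invented documents and a made-up scoring rule) shows an encoded procedure in its simplest form: input data goes in, a fixed calculation is applied, and a specific output comes out. It is a simplified illustration only, not how any real search engine works.

```python
# A deliberately simplified, hypothetical illustration of an algorithm as an
# "encoded procedure": input data goes in, a fixed calculation is applied,
# and a specific output (a ranking) comes out.

def rank_documents(query: str, documents: list[str]) -> list[str]:
    """Return the documents ordered by how many query words they contain."""
    query_words = set(query.lower().split())

    def score(doc: str) -> int:
        # The "calculation": count how many query words appear in the document.
        return sum(word in doc.lower() for word in query_words)

    return sorted(documents, key=score, reverse=True)

results = rank_documents(
    "access to information",
    [
        "A guide to the Access to Information Act",
        "Recipes for the holiday season",
        "How algorithms shape the information we access",
    ],
)
print(results)  # the two information-related documents rank above the recipes
```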

Access to the internet is a means by which individuals exercise their freedom of expression and their right of access to information. The internet holds information from all corners of the world, and the question of access to information will therefore always arise with its use. Search engines allow users to access specific information, while social media platforms like Instagram provide information based on the people a user 'follows' on the platform. The right of access to information is governed by various principles such as proactive disclosure, maximum disclosure, processes to facilitate access, costs, the right of appeal, a limited scope of exceptions, the promotion of open government and protection for whistleblowers. Through various international law instruments, states have been mandated to enforce the right of access to information through the enactment of statutes and regulations.

The right of access to information is not an absolute right, and there are situations in which it may be limited. This paper seeks to establish whether all such limitations are justified.

Legal Framework

The digital era has been accompanied by calls for the protection of digital rights. The right of access to information is a universally recognised right. Article 19 of the Universal Declaration of Human Rights (UDHR) provides for the freedom of expression and the right to seek, receive and impart information and ideas through any media regardless of frontiers. Although the UDHR was not drafted in the digital era, its guarantee of the right of access to information is interpreted as applying equally to information accessed online.

Further, the Model Law on Access to Information in Africa elaborates on the right of access to information and the principles thereof. The Model Law acts as a guide to African states on what their access to information laws should entail. This is underpinned by article 9(1) of the African Charter on Human and Peoples' Rights, which provides for the right to receive information and imposes obligations on states to ensure that their citizens have a right of access to information. African states have made strides by enacting access to information legislation subsequent to the adoption of the Model Law. In Kenya, for instance, article 35 of the Constitution of Kenya, 2010 and the Access to Information Act, 2016 are the main laws governing the right of access to information.

How do algorithms affect the right of access to information?

This section explores how algorithms affect the right of access to information on social media platforms and the various search engines. Understanding how these algorithms work is vital to establishing whether or not they affect users' right of access to information. Algorithms have become an essential part of life for internet users: when you want to find the latest movies, music or books, you search on your browser. Gerards argues that algorithms are human constructs, that is, they have been created, programmed and trained by human beings. Gerards further recognises that, as humans, we have inherent biases, and these are likely to surface in the algorithms we programme. Since an algorithm depends on the data it is fed in order to produce a specific output, there is a high probability of bias. Ultimately, the programming of the algorithm controls what we can access through the internet.
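
To make the point concrete, the hypothetical sketch below shows how a single design choice made by a programmer, here an invented weighting that favours already popular pages, determines which results a user ever sees. The figures and weight are made up purely for illustration.

```python
# Hypothetical illustration: a programmer's weighting choice shapes the output.
# The relevance scores and popularity figures below are invented for this example.

pages = [
    {"title": "Established brand smartphone", "relevance": 0.6, "popularity": 0.95},
    {"title": "Small local retailer's phone", "relevance": 0.9, "popularity": 0.10},
]

POPULARITY_WEIGHT = 0.7  # a human choice baked into the "neutral" algorithm

def score(page: dict) -> float:
    # Blend relevance with popularity; the chosen weight decides whose voice is heard.
    return (1 - POPULARITY_WEIGHT) * page["relevance"] + POPULARITY_WEIGHT * page["popularity"]

for page in sorted(pages, key=score, reverse=True):
    print(f'{page["title"]}: {score(page):.2f}')

# The established brand ranks first even though the smaller retailer's page is
# more relevant to the query; the 0.7 weight was a human decision, not a law of nature.
```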

When we log in to various websites, there are specific terms that we consent to before using them. Whether people actually understand the nature of what they are consenting to is debatable. Algorithms have been programmed to study a user's behaviour and collect information that they use to determine the user's behavioural pattern, which in turn determines the output that will be given. The terms of use may state that the user consents to specific information being collected, but the full extent of what is collected is not expressly stated. Is it a coincidence that when you have been researching smartphone prices on Google, your Instagram feed brings you smartphone-related ads? There is a whole discussion of data privacy that we shall not dwell on here; however, it is clear that algorithms only produce information based on the data that they are exposed to, hence a user will only have access to the information that an algorithm deems necessary. That necessity is determined by the specific behavioural patterns that the algorithm has learned from the user.
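
A stripped-down, hypothetical version of this mechanism might look like the sketch below: a profile built from a user's past clicks decides which items are surfaced next, and everything outside that profile is silently left out. The categories and items are invented.

```python
from collections import Counter

# Hypothetical illustration of behaviour-based personalisation: past clicks
# build a profile, and the profile decides what the user is shown next.

click_history = ["smartphones", "smartphones", "laptops", "smartphones"]
profile = Counter(click_history)  # e.g. Counter({'smartphones': 3, 'laptops': 1})

catalogue = {
    "smartphones": ["New flagship phone review", "Budget phone deals"],
    "laptops": ["Lightweight laptop roundup"],
    "books": ["Prize-winning novels of the year"],
}

def recommend(profile: Counter, catalogue: dict, limit: int = 3) -> list[str]:
    """Surface items from the categories the user has clicked most; ignore the rest."""
    ranked_categories = [category for category, _ in profile.most_common()]
    items = [item for category in ranked_categories for item in catalogue[category]]
    return items[:limit]

print(recommend(profile, catalogue))
# Only phone and laptop content is surfaced; the books never appear.
```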

Due to the biases built into algorithms, more popular brands and companies are likely to appear higher in search results than brands that have not yet established a strong market presence. This may be a good marketing strategy, but it limits users' exposure to other brands that they may also be interested in.

Perhaps an algorithm narrowing content down to a user's preferences is not, in itself, detrimental. However, it becomes a problem when it creates 'filter bubbles' and 'echo chambers'. A filter bubble arises where an algorithm, based on its behavioural analysis of the user, recommends only information that resonates with them, so the user hardly ever encounters dissenting opinions. The echo chamber follows: within the bubble, the user's existing ideologies and beliefs are constantly reinforced and appear justified.
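
The feedback loop behind a filter bubble can be sketched in a few lines. In the hypothetical simulation below, every recommendation the user accepts strengthens the very profile that produced it, so the range of perspectives shown shrinks over time. The topics and starting profile are invented.

```python
from collections import Counter

# Hypothetical simulation of a filter-bubble feedback loop: every recommendation
# the user accepts strengthens the very profile that produced it.

profile = Counter({"politics A": 2, "politics B": 1, "sport": 1, "science": 1})

for round_number in range(5):
    # Recommend the single topic the profile currently favours most...
    recommended = profile.most_common(1)[0][0]
    # ...and assume the user clicks it, reinforcing that preference.
    profile[recommended] += 1
    print(f"round {round_number + 1}: recommended {recommended!r}, profile {dict(profile)}")

# After a few rounds only 'politics A' is ever recommended; the other
# perspectives are still in the catalogue, but the user never sees them.
```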

Additionally, access to information may be restricted where states and non-state actors resort to technological tools to block and filter certain content from users. Blocking and filtering of content are often confused. Blocking refers to a situation where access to certain sites is denied completely, while filtering refers to the use of technology to prevent access to certain pages by reference to specific characteristics of their content.
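
The distinction can be illustrated with a hypothetical sketch: blocking rejects every request to a listed site, while filtering inspects individual pages and suppresses only those matching listed characteristics, here simple keywords. The domain names and keywords are invented.

```python
# Hypothetical illustration of the difference between blocking and filtering.
# Domains and keywords are invented; real systems are far more sophisticated.

BLOCKED_DOMAINS = {"blockedsite.example"}          # blocking: the whole site is off limits
FILTERED_KEYWORDS = {"protest", "leaked report"}   # filtering: pages matching characteristics

def is_accessible(domain: str, page_text: str) -> bool:
    if domain in BLOCKED_DOMAINS:
        return False  # blocking: access denied regardless of the page's content
    if any(keyword in page_text.lower() for keyword in FILTERED_KEYWORDS):
        return False  # filtering: this particular page is suppressed
    return True

print(is_accessible("blockedsite.example", "Weather forecast"))           # False (blocked)
print(is_accessible("news.example", "Leaked report on local elections"))  # False (filtered)
print(is_accessible("news.example", "Weather forecast"))                  # True
```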

Many states, when faced with the question of protecting digital rights in the digital era, resolved to establish guidelines for regulating the various platforms. Blocking and filtering of content is one of the ways in which this regulation has been carried out. Consequently, there has been a rise in content moderation and in the number of content moderators. As stated above, the right of access to information is not absolute, and there are circumstances in which it may be limited. These circumstances include content that infringes a person's intellectual property rights, content that violates children's rights such as child pornography, fake news, and defamatory information.

Content moderation can be performed by an algorithm or by humans. However, an algorithm does not understand a people's culture the way human beings do. There is therefore a likelihood that an algorithm will take down content based solely on the characteristics it has been programmed to look for. This limits people's right of access to information, because no human judgement informs the decision before the content is taken down.
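
A hypothetical keyword-based moderation rule illustrates the problem: the rule cannot tell the difference between harmful use of a flagged word and innocent or newsworthy use of the same word. The flagged words and posts below are invented.

```python
# Hypothetical keyword-based moderation rule: it removes anything containing a
# flagged word, with no sense of context. Flagged words and posts are invented.

FLAGGED_WORDS = {"attack", "shoot"}

def moderate(post: str) -> str:
    if any(word in post.lower() for word in FLAGGED_WORDS):
        return "REMOVED"
    return "ALLOWED"

posts = [
    "We will attack anyone from that community",              # harmful: removal is justified
    "Photographers, where do you shoot sunrise in Nairobi?",  # harmless: removed anyway
    "Breaking: journalists report on last night's attack",    # newsworthy: removed anyway
]

for post in posts:
    print(moderate(post), "-", post)

# A human moderator who understands the context would keep the second and
# third posts; the keyword rule cannot make that distinction.
```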

Conclusion

From the foregoing, it is clear that algorithms can be manipulated to produce a specific desired result, which may amount to a violation of a user's right of access to information. We further recognise that the right is not absolute and that there are instances where limitation is justified; however, the line between a justified and an unjustified limitation is blurry, especially where there is no legal basis for the limitation. Such restrictions may be used by some states, particularly during elections, to violate citizens' rights and win those elections.

Noting that algorithms function largely on the basis of the information they have been exposed to, various measures can be taken to avoid limiting users' right of access to information. Since algorithms are human constructs, they are likely to be biased; it is therefore important to pursue neutrality in their programming by involving people from various cultures and origins. If an algorithm is to moderate content in Kenya, it would be ideal for a person who understands Kenyan culture to be involved in its programming. The human aspect of decision-making should also be incorporated into content moderation.

Further, once algorithms have been created and programmed, it is difficult to tell what they can or cannot do after they are deployed. It is therefore important for big tech companies to be transparent about the algorithms they deploy. They need not disclose information that would disadvantage them competitively, but disclosure should be sufficient to justify an algorithm's deployment into the market. For instance, reports of Google interfering with its search algorithms to give priority to recognised brands demonstrate the lack of transparency in the market.

Lastly, international human rights instruments impose obligations on states to ensure that the right of access to information is not violated. Therefore, it is important for states to put in place measures to this effect. This may include regulating the market entry of algorithms and constantly monitoring them to ensure that they have not been manipulated.

 

References

  1. African Union, 'Model Law on Access to Information in Africa' (2010).
  2. Anne-Britt Gran, Peter Booth and Taina Bucher, 'To be or not to be algorithm aware: a question of a new digital divide?' (2021) 24 Information, Communication & Society 1779-1796, DOI: 10.1080/1369118X.2020.1736124.
  3. ARTICLE 19, 'Unfiltered: How Blocking and Filtering Affect Free Speech' (2016).
  4. Engin Bozdag, 'Bias in Algorithmic Filtering and Personalization' (2013) 15(3) Ethics and Information Technology 209-227.
  5. Council of Europe, 'Algorithms and Human Rights: Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications' (DGI Study, 2017) DGI(2017)12.
  6. Fletcher R, Kalogeropoulos A and Nielsen R, 'More diverse, more politically varied: How social media, search engines and aggregators shape news repertoires in the United Kingdom' (2023) 25(8) 2119.
  7. 'How Google Interferes With Its Search Algorithms and Changes Your Results' The Wall Street Journal.
  8. OHCHR, 'Moderating online content: fighting harm or silencing dissent?' (2021) <https://www.ohchr.org/en/stories/2021/07/moderating-online-content-fighting-harm-or-silencing-dissent>.
  9. Janneke Gerards, 'The Fundamental Rights Challenges of Algorithms' (2019) 37(3) 205-206.
  10. M Laeeq Khan and Ika Karlina Idris, 'Recognise misinformation and verify before sharing: a reasoned action and information literacy perspective' (2019) 38 Behaviour & Information Technology, DOI: 10.1080/0144929X.2019.1578828.
  11. Thiago Dias Oliva, 'Content Moderation Technologies: Applying Human Rights Standards to Protect Freedom of Expression' (2020) 20 Human Rights Law Review 607-640.

About the Author:

Wendy Ashikomela Ashilenje is a Kenyan national who obtained her undergraduate degree (Bachelor of Laws) from the University of Nairobi in Kenya. She obtained her postgraduate Diploma from the Kenya School of Law and was admitted to the bar in March 2023 as an Advocate of the High Court of Kenya. She is currently undertaking her LLM in Human Rights (Sexual and Reproductive Rights in Africa) at the Centre for Human Rights, University of Pretoria.

She is a practising Advocate and researcher with a specific interest in Law and Technology, Human Rights advocacy and Corporate Commercial Law. Her research on the nexus between Artificial Intelligence and intellectual property rights has been published by The Platform Magazine in Kenya.
