Human Rights in the Digital Age



The author of this blog is Mr. Archit Vyas, a 2nd Year student at GNLU, Gandhinagar.





Our digitally connected world poses serious challenges to human rights. Mobile connectivity and internet use, low-cost and fast computing systems, and rapid advances in AI have, on the one hand, created new opportunities. On the other hand, they present unique challenges to the protection of core human rights. Emerging technologies like AI have a massive potential to divide societies, violate privacy, and provoke prejudice, extremism, racism, hatred, and violence across the globe in a short span of time. Breaches of data-safety protocols and social media campaigns can create opportunities for blackmail and influence political processes. Defenders of human rights and democracy need to address these challenges as a priority.

Focusing on human rights in the digital age is key. Data collection has already begun on an industrial scale. States, political parties, various organizations, and, in particular, businesses hold remarkably detailed and powerful information about us. More and more aspects of our lives are being digitally tracked, stored, used, and misused.[1]

Digital technology already brings many benefits, and its value for human rights and development is huge. We can connect and communicate around the globe as never before. We can empower, inform, and investigate. We can use encrypted communications, satellite imagery, and data streams to directly defend and promote human rights. We can even use artificial intelligence to predict and prevent human rights violations. There is an urgent need to scrutinize the international treaties and conventions that codify human rights so that they provide strong policy guidelines on international cooperation for the protection of human rights in the digital age.

International courts, tribunals, and national courts, for example, should interpret international human rights conventions and national fundamental rights laws to clarify the duty of care and to refine the right to privacy, freedom of speech, religious freedom, and freedom of association in the digital context. The vulnerability of women and children needs immediate attention. Women experience a higher level of online harassment than men, and children are more exposed to online persecution and sexual exploitation than adults. Privacy protection rules should therefore be made specifically for women and children, with detailed design and data-consent standards for online services. The UK's Age Appropriate Design Code, announced in 2019, and the American Children's Online Privacy Protection Rule of 2013, for example, prescribe such standards for digital services.

Nonetheless, we cannot ignore the dark side: the digital revolution is a major global human rights issue. Its unquestionable benefits do not cancel out its unmistakable risks. We should not feel overwhelmed by the scale or pace of digital development, but we do need to understand the risks associated with it.

A lot of our attention is rightly focused on challenges to freedom of expression online and on incitement to hatred and violence. Online harassment, trolling campaigns, and intimidation have polluted parts of the internet and pose very real offline threats, with a disproportionate impact on women. In the deadliest case, social media posts targeted the Rohingya community in Myanmar in the run-up to the mass killings and rapes in 2017. Human rights investigators found that Facebook, and its algorithmically driven news feed, had helped to spread hate speech and incite violence.[2]

These grave violations of human rights leave no room for doubt. Threats, intimidation, and cyber-bullying on the internet lead to real-world targeting, harassment, violence, and murder, even to alleged genocide and ethnic cleansing. Failure to take action will result in a further shrinking of civic space, decreased participation, increased discrimination, and a continuing risk of lethal consequences, in particular for women, minorities, migrants, and anyone seen as “other”. But over-reaction by regulators seeking to rein in speech and the use of the online space is also a critical human rights issue. Dozens of countries are limiting what people can access, curbing free speech and political activity, often under the pretext of fighting hate or extremism. Internet shutdowns have become a common tool to stifle legitimate debate, dissent, and protest. The NGO Access Now counted 196 shutdowns in 25 states in 2018, almost three times the 75 recorded in 2016.

Some States are deliberately tarnishing the reputations of human rights defenders and civil society groups by posting false information about them or orchestrating harassment campaigns. Others are using digital surveillance tools to track down and target rights defenders and other people perceived as critics.

Digital technologies have put privacy at risk. AI has enormously expanded the possibilities of electronic surveillance and interception. Legitimate national security and business interests therefore need to be balanced against the basic right to privacy. How can the latter be ensured without undermining the former? International agencies like the United Nations should help state parties negotiate and enforce data-protection treaties and laws to ensure that governments, non-state actors, and companies cannot misuse the personal information of citizens.

Reportedly, the 2016 US presidential election and Brexit were shaped by the malicious use of digital technology. It is entirely possible that powerful countries and multinational corporations will employ AI to exploit the economies of under-developed countries and weaken their national security. In view of the increasing misappropriation of digital technology in economic and political affairs, developing countries in particular need to raise their voice at regional and international forums for an effective mechanism of collaboration and protection of the developing world.[3]

The UN, state governments, social media networks, and private businesses must guarantee that digital technology works for the welfare of humanity in a transparent and accountable manner. AI systems must follow stringent ethical standards. It is becoming evident that AI can be used to discriminate, because prejudices can be fed into algorithms to produce a particular pattern or result. For example, artificial intelligence can be misused to decide who is eligible for a particular job or for a public service such as a housing loan or healthcare. There are therefore ongoing global efforts to hold AI developers accountable to the law and to ethical values.[4]
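To make that mechanism concrete, the sketch below is a purely illustrative toy, with hypothetical applicant data and a made-up scoring rule drawn from no real system. It shows how an automated decision rule trained on prejudiced historical records simply reproduces the disparity it was fed.

```python
# Illustrative sketch only: hypothetical historical loan decisions that are
# skewed against "group_b". A system that learns from this record will
# repeat the bias for new applicants, regardless of individual merit.
from collections import defaultdict

historical_records = [
    {"group": "group_a", "approved": True},
    {"group": "group_a", "approved": True},
    {"group": "group_a", "approved": True},
    {"group": "group_a", "approved": False},
    {"group": "group_b", "approved": True},
    {"group": "group_b", "approved": False},
    {"group": "group_b", "approved": False},
    {"group": "group_b", "approved": False},
]

# "Training": learn each group's historical approval rate.
totals, approvals = defaultdict(int), defaultdict(int)
for record in historical_records:
    totals[record["group"]] += 1
    approvals[record["group"]] += int(record["approved"])
approval_rate = {g: approvals[g] / totals[g] for g in totals}

# "Prediction": approve a new applicant only if their group historically
# cleared the threshold -- yesterday's prejudice becomes tomorrow's rule.
def predict(group: str, threshold: float = 0.5) -> bool:
    return approval_rate.get(group, 0.0) >= threshold

print(approval_rate)       # {'group_a': 0.75, 'group_b': 0.25}
print(predict("group_a"))  # True  -> approved
print(predict("group_b"))  # False -> denied
```

The point of the toy is that no one needs to write an explicitly discriminatory rule; training on a biased record is enough for the output to discriminate, which is why accountability for developers and deployers matters.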

Those who develop and deploy AI for political, business, or military purposes must be held responsible for their actions. People are legally responsible for their actions under all legal systems, so those who design, develop, adapt, or deploy AI must also be held answerable for the consequences of their decisions. This legal responsibility becomes even more critical when lethal autonomous weapons systems are used in stark violation of international human rights and humanitarian law. The UN Secretary-General emphasized in 2018 that “machines with the power and will to take lives without human involvement are politically unacceptable, morally objectionable and should be prohibited by international law”.

As we live in an age of digital interconnectedness, governments, human rights defenders, citizens, and AI companies should work together to boost digital cooperation for the protection of human rights. Common human values like equality before the law, dignity, privacy, freedom, inclusiveness, respect, and sustainability should be preserved. These values must serve as a guiding light for our conduct in the digital age.[5]

So while our notions of privacy are evolving along with social media and data-capturing technology, we also need to recognize that it is not “just privacy” that is affected by the digitization of everything. The exercise of all fundamental freedoms is diluted when governments use the new capacities that flow from digitization without regard for human rights. Furthermore, by engaging in tactics that weaken digital security for individuals, for networks, and for data, governments trigger and further encourage a race to the bottom. Practices that weaken digital security will be learned and followed by other governments and non-state actors, ultimately undermining security for critical infrastructure as well as for individual users everywhere. Defending and improving digital security for individuals, for data, for networks, and for critical infrastructure must be treated as a priority for national and global security.

There is already an urgent need for governments, social media platforms, and other businesses to protect the fundamental pillars of a democratic society, the rule of law, and the full range of our rights online: a need for oversight, accountability, and responsibility. As the digital frontiers expand, one of our greatest challenges as a human rights community will be to help companies and societies implement the international human rights framework in territory we have not yet reached. This includes clear guidance on the responsibilities of business as well as the obligations of states.
At its best, the digital revolution will empower, connect, inform, and save lives. At its worst, it will disempower, disconnect, misinform, and cost lives.



[1] Robin Blom. Naming Crime Suspects in the News. Media Law, Ethics, and Policy in the Digital Age, pages 207-225.
[2] Katharine Sarikakis, Izabela Korbiel and Wagner Piassaroli Mantovaneli. (2018). Social control and the institutionalization of human rights as an ethical framework for media and ICT corporations. Journal of Information, Communication and Ethics in Society 16:3, pages 275-289.
[3] Robin Blom. (2020). Naming Crime Suspects in the News. Media Controversy, pages 354-372.
[4] Stephen Cory Robinson. (2015). The Good, the Bad, and the Ugly: Applying Rawlsian Ethics in Data Mining Marketing. Journal of Media Ethics 30:1, pages 19-30.
[5] Corinne Cath. (2019). Internet Governance and Human Rights: A Literature Review. The 2018 Yearbook of the Digital Ethics Lab, pages 105-132.
