
HDPI Working Group on Ethical Decision Making

Misinformation on the Internet – an Ethical Tool to Find the Way Forward

To Regulate or Not to Regulate?

The extraordinary power of the internet to spread information instantly, with little oversight or regulation, has meant that hate speech, intentional misinformation, fake news, and the purposeful spreading of lies are of growing concern.  The world has seen the deleterious impacts of recent misinformation campaigns in multiple contexts – the Russian invasion of Ukraine, the 2020 US election and the COVID-19 pandemic, to name but three.  In the eyes of some, these instances constitute fundamental challenges to society, undermining public trust and order.  Yet others consider proposals to regulate or limit such information as repression, censorship, and the muzzling of free speech.

Regulation of non-internet media – both technical aspects and content – has long existed.  Content regulation has mostly been at the national level, using different frameworks and forms of enforcement.  These regulatory efforts have generally tried to strike a balance between supporting freedom of expression and limiting certain types of content deemed contrary to the public good.  Enforcement is facilitated by easy identification of who is creating and disseminating information – a newspaper, radio or TV station, for example – and by relatively clear jurisdictional authorities.

The internet has transformed communication from local to global, with instantaneous dissemination.  International regulatory standards are not clearly defined and there are, in any case, no international regulatory authorities capable of enforcing them.  Such international bodies as do exist focus more on the efficient working of the internet than on governing its media environment in the way that regulators govern broadcast media.  Each internet platform adopts its own rules regarding content, and certain professional groups adopt their own codes of ethics, but there are rarely any significant sanctions on companies or individuals if such rules or codes are breached.  Content producers are often anonymous.  The increased use of sophisticated technology, such as bots, means that content producers are not always even human; one actor can control countless individual social-media accounts, essentially software masquerading as thousands of separate human users.  Using data collected on the internet, such actors can target their audience and adjust the content of their messaging for maximum influence, based on each target’s individual profile.  Such efforts can be used to influence what people buy, what they believe, or how they vote.

Defining an Approach: Eight Key Questions

If society has decided that it is in the public interest to regulate content on non-internet media, should regulation of content on the internet not follow suit?  There are multiple challenges to doing so: i) determining who is responsible for content – the ISP, the social-media platform, or individual content producers; ii) defining a regulatory framework for a borderless information system of gargantuan proportions and instantaneous dissemination; and iii) keeping up with technological advances, particularly in artificial intelligence (AI), that make it easier for content producers to target and spread information, as well as to mask their identities.  Such challenges are not insurmountable, but they complicate efforts to find a consensus approach that takes into account the broad range of affected parties.

There are methods for sorting out complex interests among numerous parties and moving towards consensus, even on thorny moral issues.  Regulating internet content will require sifting through countless complex questions.  We propose taking one such case – the regulation of misinformation about climate change on social media – and analyzing the proposition using an objective tool for ethical analysis, the Eight Key Questions, developed by James Madison University in Virginia, USA.

The Eight Key Questions (8KQ) framework applies a series of generally recognizable human values to test an ethical proposition, formulating simple questions about fairness, outcomes, responsibilities, character, liberty, empathy, authority and rights in order to get to the heart of a controversy.  Asking each question openly exposes the complexity of the issues at hand, but also situates the issues in a moral context, helping to identify relevant factual elements and increasing confidence that the decision reached is well informed and aligns with shared values.  The Eight Key Questions are as follows:

  • Fairness: Is the decision fair – just and equitable, balancing all interests?
  • Outcomes: What actions achieve the best outcomes (short- and long-term) for everyone?
  • Responsibilities: What responsibilities (duties or obligations) apply?
  • Character: What actions express a personal or corporate ideal?
  • Liberty: What actions best respect the autonomy, integrity, dignity and choice of all involved?
  • Empathy: Do the actions reflect empathy and care for all parties?
  • Authority: What legitimate authority should be considered?
  • Rights: What rights, if any, apply?

The 8KQ has been used effectively to clarify debates in public health, education, disaster management and a variety of other sectors.  The tool does not necessarily make hard decisions easier, but it provides a structured framework that can improve the quality of decision-making.  Key to its success is including input from the full range of those affected by an issue and ensuring that the questions considered are genuinely open.

As of this writing, at the start of the third decade of the 21st century, the vast majority of scientific experts agree that climate change is a demonstrable fact and that its principal cause has been increased carbon emissions due to human activity in the industrialized era.  Measurements of atmospheric temperature increase date back nearly 70 years, and scientific assessments clearly link this rise to fossil fuel consumption.  Rejection of these findings has gradually moved from being a topic of legitimate debate to the realm of wishful thinking at best or conspiracy theory at worst, with certain interested parties deliberately spreading information they know to be incorrect.

Nevertheless, some would argue that limiting debate on the internet about the reality of climate change or its causes actually does more harm than good.  The interest of a society in permitting open challenges even to proven facts, this argument goes, far outweighs the potential confusion or harm that such misinformation may cause.  Better to defend free expression of all opinions, in the interest of liberty and open discussion, trusting in the ultimate triumph of truth.  After all, science itself makes progress through a process of conflicting theories, debate, experimentation and eventual consensus.

In applying the 8KQ to the issue of regulating public speech about climate change, each participant expresses their views but also considers the views of others, refraining to the extent possible from instinctive responses or quick conclusions.  The process is most effective in a group format, with participants representing as inclusive a range of stakeholders as possible.  In the best Socratic fashion, open questions framed along the eight dimensions of the 8KQ guide the participants’ discussion of the issue.  Results can vary from emerging consensus to continued differences, but the process almost always enhances participants’ understanding of the complex mix of actors and interests at play.

To ensure an inclusive discussion of climate-change misinformation on the internet, the mix of stakeholders would need to draw at least from governments, business, academic institutions, and advocacy and other public-interest groups.  The group would need to include members with recognized expertise on the environmental, economic, and social impact of climate change; on internet governance and regulation at the international and national level; and on the legal and human rights impact of both climate change and regulating speech on the internet.  The process would also need members who are formally or informally recognized as spokespersons for affected stakeholders, whether small-island states, fossil-fuel industries, internet social-media platforms, or freedom-of-expression advocates, to name but a few.

Below are examples of elements for a discussion on climate change and misinformation, using the 8KQ framework.

FAIRNESS:  Does misinformation about climate change affect all human beings equally?  Are those spreading misinformation harmed as much as those most affected?  Is it fair to deny individuals the right to express their opinions, even if these are based on erroneous information?  Is it fair to leave the decision on limiting the right to free speech in the hands of a few companies?  Is it fair to leave this decision in the hands of specific governments?  Is it fair to future generations, who will be harmed by unchecked climate change, to preserve “the right to misinform”?

OUTCOMES:  What effect would a reduction of climate misinformation on the internet likely have on public attitudes or behavior?  On government action to manage or diminish the impacts of climate change?  Would measures to regulate the dissemination of information (even false information) on the internet have a negative impact on freedom of expression more generally?  What effect might increased regulation of information on the internet regarding climate change have on other topics or activities on the internet?  What economic impact could such measures have on countries or businesses?

RESPONSIBILITIES:  What is the responsibility of those managing internet content – social-media platforms, internet providers, national authorities, etc. – to moderate false, misleading or dangerous information?  Whose responsibility is it to set international standards?  What is the obligation of national authorities to abide by any eventual international standards?  What mechanism could there be to adjudicate differences or appeals concerning the implementation of international standards?  What responsibility do societies have to rein in speech that may ultimately hurt humanity, even if not maliciously intended by the individuals involved?

CHARACTER:  How does it affect the reputation of certain actors (e.g. governments or businesses) when they give priority to short-term benefits for their population or financial situation over the long-term negative consequences of climate change?  How does it affect the reputation of governments, internet platforms, and businesses when they suppress information because they judge it to be harmful to the greater public good?  What does it say about the character of a nation or society if it preserves the right to freedom of expression even at the cost of damaging human survival on the planet?

LIBERTY:  Placing limits on what information can be disseminated, and how, is by definition placing limits on the liberty to undertake certain acts or express certain opinions.  Are such limits necessary for the common good on topics deemed of critical importance?  If so, how does a society determine when the harm that certain information causes to the common good outweighs the general benefit of the freedoms being restricted?  Are all affected individuals equally free to have their voice heard about the trade-offs between free speech and accurate climate information?  Are there persons or nations affected by the decision whose voice may not be heard?

EMPATHY:  How would an empathetic person balance the long-term interests of the greater public against the short-term interests of specific actors?  What type of information and debate would an empathetic person wish to see on the internet regarding climate change? Would we still support unlimited free speech if we lived on a low-lying small island state, most threatened by climate change?  Would we advocate more limits to free speech on this issue if we lived in a society characterized by high levels of authoritarian control?

AUTHORITY:   Who has the authority to regulate and enforce rules regarding internet content?  Who has the authority to decide what constitutes misinformation, as opposed to just different opinions?  If there are differing national standards, which ones must the internet actors follow?  Should different types of actors, e.g. individuals, companies, governments, have differing levels of responsibility (and penalties?) for knowingly expressing incorrect information?

RIGHTS:  What right does an individual have to express any opinion, even a false one?  What laws apply regarding the dissemination of false or dangerous information?  What moral codes apply regarding putting the long-term greater good ahead of more short-term benefits for certain actors?  What laws (and moral codes) apply to taking care of those most vulnerable to climate change?

Building Blocks of Consensus

If successfully implemented, this inclusive process for considering the views, needs and interests of the broadest range of societal actors affected by climate-change misinformation on the internet should begin to define the outlines of both international standards for regulating such misinformation and an international framework for enforcing those standards.

One could imagine this exercise being part of a larger effort examining the broader question of misinformation on the internet.  If, in addition to climate change, the 8KQ process were similarly applied to other misinformation concerns, such as public health or elections, we expect that some overlap in the results could emerge, in part because of overlap in participants.  These collective results could help the broader effort to identify emerging areas of consensus, or topics requiring a differentiated approach.

Civilization over millennia has been shaped by peoples with differences in understandings, opinions and beliefs, and over that same history, up to the present day, societies have adopted myriad approaches to dealing with such differences.*  Technology with instantaneous and global impact makes it harder to manage differences, but it does not obviate the need for society to address this phenomenon in accordance with what it considers vital to its welfare and development.  By adopting a rigorous, inclusive and ethical process of examining the key questions underlying misinformation on the internet, we can begin to formulate the way forward.

(*“Since no society can exist without order, and no order without regulation, we may take it as a rule of history that the power of custom varies inversely as the multiplicity of thoughts.  Some rules are necessary for the game of life; they may differ in different groups, but within the group they must be essentially the same.  These rules may be conventions, customs, morals, or laws.  Conventions are forms of behavior found expedient by a people; customs are conventions accepted by successive generations, after natural selection through trial and error and elimination; morals are such customs as the group considers vital to its welfare and development.” P36)