This past July, the Center co-hosted a conference in Rome, “Liberalism’s Limits: Religious Exemptions and Hate Speech.” The conference, which addressed the challenges that religious exemptions and hate-speech regulations pose for liberalism, was divided into three workshops, for which participants submitted short reflection papers. Professor Andrea Pin (Padua) submitted the following paper for Workshop 3, on hate speech, which we are delighted to publish here:
“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone . . . You claim there are problems among us that you need to solve. You use this claim as an excuse to invade our precincts. Many of these problems don’t exist. Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract . . . We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity . . . Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.”
This is an excerpt from A Declaration of the Independence of Cyberspace, which the visionary thinker John Perry Barlow proclaimed in 1996 in Davos, Switzerland – the Sinai of globalization. Barlow’s pride in what the cyberworld would become was largely misplaced. As many soon acknowledged, Barlow’s prophecy that an order based on freedom would rise almost spontaneously from online anarchy was wrong. The cyberworld, after all, is just as much a part of our world as we are a part of it.
Philosopher Luciano Floridi has suggested that we now live an “onlife” existence, in flux between the physical and the virtual world. Cyberspace cannot claim an independent life any more than we can claim our independence from it. Our reputations, social relations, and political sphere take shape in an environment in which cyberspace occupies a special place. AI technologies affect how we perceive ourselves and others.
The issue is whether there is a sufficient public philosophy – or at least an intellectual framework – within which our onlife can sustain itself. As a recent book noted,
“The medieval world had its imago dei, its feudal agrarian patterns, its reverence for the crown, and its orientation toward the soaring heights of the cathedral spire. The age of reason had its cogito ergo sum and its quest for new horizons – and, with it, new assertions of agency within both individual and societal notions of destiny. The age of AI has yet to define its organizing principles, its moral concepts, or its sense of aspirations and limitations.”[1]
As noted above, Barlow had a recipe for this task. He suggested that cyberspace alone could find its own way of living, with no intrusion from the physical world. For a while, the idea that the online dimension would spontaneously give birth to a moral and legal order looked appealing. It appealed not just to the tech giants, to whom it provided a justification for escaping state constraints, but also to internet users and to global society at large. As people rushed to join the internet, the idea that the internet environment would regulate itself suggested that social media and internet platforms would not vet or censor unpopular or controversial opinions. In the concept of internet self-regulation, internet users saw protection from the state as well as from the tech giants. As a comparative law scholar, I cannot help noticing that these two dimensions seemed to marry American skepticism toward public power with the European fear of private power.
What type of political philosophy was emerging from the dust of social media? Microsoft helped us answer that question. On March 23, 2016, Microsoft released Tay, a chatbot, on Twitter. Tay – the telling acronym for Thinking About You – had a Twitter profile and was equipped with AI technology. Using this AI, it perused the web, identified patterns of conversation, and interacted with other Twitter users accordingly. In a nutshell, Tay was designed to entertain, enrich, endorse, and challenge our views – to be “one of us,” but on steroids. In fact, within sixteen hours, Tay tweeted roughly 96,000 times.
Tay had its one day as a lion, however. It disappeared from the web after a few hours of operation. On March 25, 2016, Microsoft finally confirmed that Tay had been taken offline. A second release also had a relatively short life: Zo, Tay’s successor, was discontinued after a few years. It took years for developers to craft new chatbots that would withstand the test of time – and the temptation of becoming online bad guys.
Tay was not discontinued because it failed to participate in conversations, interact with others, or keep pace with Twitter’s hectic life. Tay absorbed the spirit and followed the prevailing tune on Twitter – it probably even did so too well.
In a few hours’ span, Tay became racist, sexist, and offensive. It indulged in inflammatory discourses to the point that the developers were forced to take it offline. Its software’s specifics were never released, but it is hard to believe that Tay did not simply follow the steps of the most popular, contagious, and pervasive Twitter threads.
Microsoft’s move with Tay seems to mark the edge of freedom on the web. Unleashed public opinion in an AI-governed world – in which the information and viewpoints that are most widely disseminated are those expected to incite the most reactions – can nurture a pernicious public philosophy. It can even train AI itself in a destructive way. Anarchy may be a fact, but it is hardly a suitable environment in which to live or from which to develop a self-sufficient legal or moral order.
Tay was one of many proofs that AI requires regulation – that the online world is not without danger and should be governed. A domino effect took place over the last few years after public opinion became conscious of social media’s unlimited power and its capacity to incite hatred both online and in the physical world. Public institutions started to consider holding such platforms accountable for what they did – or, more precisely, for what they did not do. Big Tech companies also designed mechanisms to vet what is published online, in a move that was immediately criticized as opaque, partisan, and potentially ideological. It is no surprise that this new trend has itself become a matter of controversy: quis custodiet ipsos custodes, after all?
The Facebook (now Meta) Oversight Board is the best-known initiative by a worldwide social media company to respond to the challenge of patrolling the social media environment in a nonpartisan way. The Board is an independent, self-governing institution empowered to review Facebook’s content-moderation decisions. Only time will tell whether institutions like this are effective and fair.
Are we moving into another phase – one in which social media platforms and the wider public are aware of the limits and perils of leaving the online world uncharted, as well as of the dangers of giving such platforms the power to moderate, insulate, and even censor unpopular, challenging, or hate-inciting opinions? Maybe, yes. And there is a paradoxical way to prove it.
The war in Ukraine has mobilized social media – not just in the sense that people and organizations have expressed their feelings and viewpoints on the internet, but in that it has prompted a sea change in prominent social media policies. Facebook and Instagram have allowed people in several countries (including Armenia, Azerbaijan, Estonia, Georgia, Hungary, Latvia, Lithuania, Poland, Romania, Russia, Slovakia, and, of course, Ukraine) to use the platforms to incite violence against Russian soldiers, and even to wish death on Putin and his Belarusian ally, Lukashenko.
Rallying people against a public enemy may seem like a very human phenomenon. In this case, it might even seem justified, especially in light of Putin’s control of social media in Russia. But unleashing Facebook against a specific target truly means weaponizing a social platform. This is not anarchy – it is selective anarchy at its best. Who are the leaders of media platforms to decide when, where, and to what extent hate speech should be cut loose? How do they choose the targets against which online violence may be unleashed?
Online liberalism has its limits. The self-regulatory pattern and soft power of social media platforms are there to prove it. The problem is not simply who sets those limits: that is a question about the government and the governance of the web. The problem that lies underneath every other issue is how such limitations are justified. This is a moral question – a question that challenges the very possibility of a just order based only on freedom. The online rage against Putin may go unnoticed by Facebook users and states alike because it is against Putin. But how will they select the next permissible target?
We all knew that onlife was not the land of truth. Now we know that onlife is not the land of freedom, either. It is the land of opinions. There is a difference between liberty and free opinions. Free opinions cannot exist without liberty, but with such liberty comes great responsibility. Onlife also reminds us that liberalism can drift into anarchy or be arbitrarily selective. It is in need of a moral compass: how this compass will develop, along which lines, and how it will balance competing interests and protect pluralism is hardly foreseeable. What we know, however, is that such a compass requires a fresh look at the very fabric of modernity.
John Perry Barlow’s fundamental misunderstanding of the internet as a harmless environment in which everyone could enjoy boundless freedom reflected how he conceived of freedom itself. In his Declaration he stated:
“In our world, all the sentiments and expressions of humanity, from the debasing to the angelic, are parts of a seamless whole, the global conversation of bits. We cannot separate the air that chokes from the air upon which wings beat.”
Protecting the common good without undermining freedom looked impossible to Barlow. Barlow’s understanding of freedom was anything but exceptional. It echoed a deep-seated culture, one that these words once captured:
“We have made you a creature neither of heaven nor of earth, neither mortal nor immortal, in order that you may, as the free and proud shaper of your own being, fashion yourself in the form you may prefer. It will be in your power to descend to the lower, brutish forms of life; you will be able, through your own decision, to rise again to the superior orders whose life is divine.”
This is a small excerpt from the Oration on the Dignity of Man, which Pico della Mirandola composed in 1486 – five centuries before Barlow’s Declaration. Barlow may not have known Pico’s work, but he was clearly imbued with his philosophy.
Implanting the seed of the common good into such a thick layer of freedom-oriented thinking and showing that the two can coexist is a daunting task. But the difficulties do not render this task less necessary.[*]
[1] Henry A. Kissinger, Eric Schmidt & Daniel Huttenlocher, The Age of AI and Our Human Future (2021).
[*] The author is deeply thankful to the brilliant law student Brendan R. Spagnuolo for his terrific editorial help on this paper.