Thirty years of the world wide web: utopia or dystopia?
In 1989, Tim Berners-Lee, a scientist working at the CERN research laboratory, proposed a radical new way of linking and sharing information over the internet. Before long, the world wide web had exploded into everyday life, with millions surfing the net, riding the ‘infobahn’ and embracing the innovations the web brought to work, entertainment and social interaction.
Today, many of us are online all the time. But it’s useful to recall the sense of new possibilities generated by the initial proliferation of pioneering channels, platforms and apps. For some, the ease of acquiring a browser, setting up a blog or streaming homemade videos was the chance to democratise knowledge. To enthusiasts, using the web to share ideas, beyond the purview of established editors, aloof experts or a watchful state, was something akin to the arrival of the printing press, an invention that also anticipated a freer society by shifting publication and distribution of books away from the domain of religious and aristocratic elites.
Three decades on, the online future is discussed in distinctly downbeat terms. Huge areas of life have been made easier or more exciting by moving online – from gaming to dating, social networks to educational courses. But whether because of trolling or fake news, hate speech or internet addiction, self-harming videos or terrorist material, many seem drawn to a pessimistic view. Cases such as that of Molly Russell, a schoolgirl whose suicide was linked to viewing images on Instagram, or the Christchurch mosque shootings, when a gunman live-streamed his attack on Facebook Live, have increased concern about the dangers of unregulated spaces. Indeed, with the advent of the so-called ‘intellectual dark web’, even exploring ideas is deemed perilous.
As a result, the recent clamour for new laws and tougher protocols is perhaps unsurprising. Working on the basis that regulation lags behind the spread of dangerous or misleading content, recent government proposals to tackle online harms seek to protect the public by imposing a ‘duty of care’ to safeguard users. With a new regulator ready to step in if companies fall short, and enforcement mechanisms that could include punitive fines for tech companies and powers to block access to non-compliant platforms, what should be society’s response to new online constraints?
Are stricter regulations and tough penalties likely to solve issues such as the rise of cyberbullying, aggressive trolling and revenge pornography – which themselves can span a broad spectrum from outright unlawful to legal but harmful? Where should we draw the line on controls over the online world? With freedom of speech a growing issue, and public figures such as Tommy Robinson and an array of alt-right (and sometimes leftish) individuals now banished from many platforms, some say there’s a whiff of political censorship in the air. How much responsibility should private companies take when it comes to hateful or potentially harmful material carried on their publicly accessible platforms?
Nevertheless, many are convinced that threats to vulnerable groups are significant and should be a priority. With that in mind, should the right to legitimate speech be balanced against the need to protect the vulnerable in law? Do opponents have a point when they argue that censorship simply drives activity towards the darker side of the web, outside the reach of the mainstream? Can society thrive with a totally free web, or should governments and corporations step in to protect the citizens they serve?