Enrichment and Exploitation: How Website Algorithms Affect Democracy

Abstract

This essay discusses the role algorithms in websites, social media and search engines play in the democratic processes of Western societies. As the political mechanisms of Western societies rely increasingly on the Internet to communicate information and to encourage voter participation, the way algorithms are configured to present information to the public is of great importance. Manipulation of search engine rankings or social media news feeds – whether intentional or organic – can have a huge impact on what voters see and think about. Facebook and Google hold effective monopolies on social news feeds and online search respectively, meaning any bias in the way their algorithms function can have ramifications at national and international levels. Evidence exists that manipulation of Facebook’s and Google’s algorithms has contributed to influencing the outcomes of elections on several occasions. Examining how algorithms can affect elections and other civic processes is crucial for the future of healthy democracy in Western societies.

Keywords: algorithm, democracy, Internet, news, search engine, social media, website, Facebook, Google, Instagram, Snapchat, Twitter

Introduction and Research Questions

As of 2017, over half of the world’s population are estimated to be regular Internet users (Kemp 2017, online), and around the same proportion are regular users of social media (Chaffey 2017, online). With such vast amounts of data moving through cyberspace constantly, it makes sense that algorithms should be employed to sort, sift through, and make sense of it all. On the face of things, it seems logical for algorithms to present users of websites, social media and search engines with a selection of information relevant to what they are looking for, from which they can make informed decisions. The problem is that it is often impossible to know how an algorithm has arrived at a decision or set of search results, and many users are not aware that algorithms even exist, never mind how they come to the conclusions they do. With democratic processes now relying so heavily on information shared online, algorithms in websites, social media and search engines have the potential to play a crucial role in democracy. This essay will investigate this issue, and seek to answer the following questions:

-To what extent do algorithms in websites, networking services and social media have a negative effect on democracy in Western societies?

-To what extent, if any, can users of new and digital media be manipulated by algorithms to think or act in certain ways?

-To what extent do search engine algorithms affect democracy in Western societies?

-Which website, networking service or search engine is most likely to affect democracy through its use of algorithms?

Methodology

Search engines, social media, and the algorithms that operate them are now firmly embedded in the everyday fabric of Western societies, and increasingly in their democratic processes, with no indication that this is likely to change at any time in the future. Algorithms used in Facebook and Google have been extensively studied individually, but there has been less research on the overall effect of algorithms in democratic processes in Western societies. This research essay aims to fill that gap.

The essay examines the use of algorithms in websites, networking services and social media, and aims to answer the question of whether they have a negative effect on democracy in Western societies. A detailed literature review of the subject of online algorithms is followed by an examination of algorithms used in Facebook, Google, Twitter, Instagram and Snapchat, with the likely effects of each of their algorithms discussed, most especially in relation to democratic processes in Western societies.

Real-life examples of algorithms affecting democratic processes are examined, and the extent to which algorithms have influenced recent political outcomes discussed. The essay will also discuss how algorithms are likely to affect democracy in the coming years.

Suggestions regarding the way future democratic processes must interact with, and incorporate, algorithm-driven websites, social media, and search engines are made, and conclusions on the future of the algorithm in democracy are drawn.

Literature Review

Origins

In their most basic form, algorithms are defined as “an automated set of rules for sorting data” (Oxford Reference 2017, online), and, in their online form, are concerned with “settings where the input data arrives and the current decision must be made by the algorithm without the knowledge of future input” (Bansal 2012, p.1). Algorithms are “dependent on the quality of their input data and the skills and integrity of their creators” (Devlin 2017, online). By definition, data is historical, with the result that algorithms predict the future based on actions taken in the past; hence their outputs can be repetitive and flawed.

Algorithms were first used in an online sense in the early 1970s, applied to bin-packing problems in early software programs – that is, organising and fitting items into a set space (Fiat & Woeginger 1998, p.7). The field developed further in 1985, when Sleator and Tarjan constructed competitive algorithms to solve the mathematical problems known as the list update problem and the paging problem (Fiat & Woeginger 1998, p.7). In the early 21st century, as the variety and use of digital technologies exploded, algorithms were still relatively harmless. Search engines offered personalised recommendations for products and services, and helped Internet users find what they wanted more quickly. Information was collected from personal meta-data – information gathered from “previous searches, purchases and mobility behaviour, as well as social interactions” (Helbing et al. 2017, online). From these humble beginnings, algorithms have evolved to know everything about us – where we are, what we are doing, and what we are feeling (Helbing et al. 2017, online).
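To make the notion of an online algorithm concrete, the short Python sketch below implements first-fit bin packing: each item must be placed as it arrives, with no knowledge of future input, exactly as Bansal’s definition describes. The code and its example values are illustrative only, not drawn from the cited sources.

def first_fit(items, capacity=10):
    # Online first-fit bin packing: place each arriving item into the
    # first bin with enough room, opening a new bin if none fits.
    # Decisions are final -- the algorithm never revisits them.
    bins = []  # remaining capacity of each open bin
    for size in items:
        for i, space in enumerate(bins):
            if size <= space:
                bins[i] -= size  # commit the item to this bin
                break
        else:
            bins.append(capacity - size)  # open a new bin
    return len(bins)

# The same items, arriving in a different order, can need an extra bin --
# the cost of deciding without knowledge of future input.
print(first_fit([4, 6, 4, 6]))  # 2 bins
print(first_fit([4, 4, 6, 6]))  # 3 bins (an offline algorithm needs only 2)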

Algorithms in the Digital Age

The ubiquity of Internet access and the huge number of ways by which it can be accessed mean it is now a “principal pillar of our information society” (Dusi et al. 2016, p.1805). Online communities have become hugely important and complex places in which people seek and share information (Zhang et al. 2007, p.221). As a result, online algorithms play a huge part in many aspects of our lives. Ellis (2016, online) explains how three factors shape the online lives of citizens of digital societies: “the endless search for convenience, widespread ignorance as to how digital technologies work, and the sacrifice of privacy and security to relentless improvements in the efficiency of e-commerce”. The more our lives come to rely on digital technology, the more we are likely to be influenced by algorithms, from everyday tasks like online shopping to political participation in elections, referendums and other civic activities.

Algorithms still carry out the same relatively harmless tasks as they have since the Internet’s earliest days, including helping online shoppers make choices (“People who bought this book also bought this…” recommendations), matching an online dater with a better-suited partner (Sultan 2016, online), and retrieving search engine results tailored to the user based on past searches. Retail websites such as Amazon also use algorithms to keep pricing competitive – prices can drop several times a day until an item is the cheapest on the market and sells, after which the price rises again (Baraniuk 2015, online).
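A minimal sketch of the idea behind such “also bought” recommendations follows; the baskets are hypothetical and nothing here is drawn from any retailer’s real system – co-purchase counts alone drive the suggestions.

from collections import Counter
from itertools import permutations

# Hypothetical purchase histories; a real retailer would mine millions
# of transactions rather than a hard-coded list.
baskets = [
    {"book_a", "book_b"},
    {"book_a", "book_b", "book_c"},
    {"book_b", "book_c"},
    {"book_a", "book_c"},
]

# Count how often each pair of items appears in the same basket.
co_bought = {}
for basket in baskets:
    for item, other in permutations(basket, 2):
        co_bought.setdefault(item, Counter())[other] += 1

def also_bought(item, n=2):
    # "People who bought this also bought..." -- most frequent co-purchases.
    return [other for other, _ in co_bought.get(item, Counter()).most_common(n)]

print(also_bought("book_a"))  # e.g. ['book_b', 'book_c']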

Algorithms have evolved hugely from their humble beginnings, and can now “recognise handwritten language, describe the contents of photos and videos, generate news content, and perform financial transactions” (Helbing et al. 2017, online). Some can recognise language and patterns “almost as well as humans and even complete some tasks better than them” (Helbing et al. 2017, online). Today’s widespread use of algorithms online has been described in a range of ways, from offering small conveniences to Internet users and making online communities smarter, to the more sinister end of the spectrum: “capturing people and then keeping them on an emotional leash and never letting them go” (Anderson & Horvath 2017, online).

Despite huge advances in technology since the dawn of the Internet, algorithms can still go wrong in spectacular ways, even while conducting relatively simple tasks. An algorithm used by an online t-shirt company to generate variations on the English World War II-era slogan “Keep Calm and Carry On” produced thousands of alternatives, one of which was “Keep Calm and Rape a Lot” (Baraniuk 2015, online). The company faced public condemnation and folded as a result. In 2011, a Massachusetts man who had never committed a traffic offence in his life had his driving licence revoked by a failure of algorithm-driven facial recognition software (Dormehl 2014, online). Similar, and more serious, faults have seen voters removed from electoral rolls, parents mistakenly labelled as abusive, and businesses’ government grants and contracts cancelled (Dormehl 2014, online). Even more problematic is the way in which algorithms falsely profile individuals as terrorists at airports, which happens at a rate of about 1,500 a week in the United States (Dormehl 2014, online). Reduced budgets in law and order services play a large part in this, as staff cuts lead to greater reliance on automated services.

Entering the Democratic Space

Algorithms offer many benefits to the democracies of Western societies, but often in ways that advantage institutions far more than individual users of digital technologies (Ellis 2016, online). The convenience so hungrily sought by end users is a commodity many online businesses are eager to sell, and the hidden clauses are often “unknowable and entirely beyond users’ control” (Ellis 2016, online). End users’ understanding of algorithms’ lack of neutrality is low, and while disclosure policies can help somewhat, the long-winded privacy policies which have become standard on the web are seldom read (Ellis 2016, online).

An example of a society heavily controlled by online data is Singapore. What started as a program set up to protect its citizens from terrorism “has ended up influencing economic and immigration policy, the property market and school curricula” (Helbing et al. 2017, online). China is similar. Baidu, the Chinese equivalent of Google, incorporates a number of algorithms in its search engine to produce a “citizen score” (Helbing et al. 2017, online), which can affect a citizen’s chances of getting a job, a financial loan or a travel visa. This type of algorithmic monitoring of data is certain to affect every aspect of citizens’ lives, from everyday tasks to political participation.

In the world of politics, digital technology and the algorithms it conceals are becoming increasingly popular as tools for ‘nudging’: a behavioural design concept concerned with steering or influencing citizens towards thinking and acting in certain ways (Helbing et al. 2017, online). A government can use this method to ensure the public sees information that supports its agenda – the British government has “used it for everything from reducing tax fraud to lowering national alcohol consumption [while] Barack Obama and several American states have used it to win campaigns and save energy” (The Nudging Company 2017, online). The biggest goal for governments seeking to influence people in this way is known as ‘big nudging’ – the combination of big data and nudging (Helbing et al. 2017, online). While the effectiveness of such methods is difficult to calculate, it has been suggested that they could enable the control of citizens by a “data-empowered ‘wise king’, who would be able to produce desired economic and social outcomes almost as if with a digital magic wand” (Helbing et al. 2017, online). During elections, political parties can use online nudging to influence voters in a major way. In fact, it has been argued that whoever controls this technology can “nudge themselves to power” (Helbing et al. 2017, online).

Critics of the use of online algorithms in Western democracies have pointed to how they can reinforce the ‘filter bubble’: the way in which end users of search engines and social media get “all their own opinions reflected back at them” (Helbing et al. 2017, online). The result is a large degree of societal polarisation, producing sections of society which have little in common and no means of understanding each other’s beliefs. This form of social polarisation through the supply of personalised information can fragment societies, especially in the political arena. Helbing (2017, online) explains that this kind of divide is currently happening in the politics of the United States, where “Democrats and Republicans are increasingly drifting apart, so that political compromises become almost impossible”.

Algorithms and Data Mining

Data mining is the process by which large amounts of raw data are turned into useful information, and it is increasingly becoming a powerful influencing tool online. The practice has been described as “creat[ing] greater potential for violations of personal data” (Makulilo 2017, p.198) via the rise of big data – the vast amounts of statistics in the public domain about people’s lives, money, health, jobs, desires, and more. The availability of all this data means algorithms are increasingly being used to sort and categorise it, as well as to make public policy and other decisions (O’Neil 2016, p.1). In Western democracies, the amount of online data produced doubles every year, and in every minute of every day hundreds of thousands of Google searches and Facebook posts are made (Helbing et al. 2017, online), meaning more potential violations of personal data if it is used for immoral or criminal purposes.

Companies now use algorithms to help them decide who they should hire, banks use them to work out to whom to provide loans, and, increasingly, governments use them to make major policy decisions. Devlin (2017, online) contends that those working in the big data and analytics industries are perhaps the least likely to be surprised that political figures or parties would try to use algorithms to influence public behaviour in their favour, saying that “the application – both overt and covert – of technology to affect election outcomes was arguably inevitable” (Devlin 2017, online). O’Neil (2016, p.1) says that “some of these models are helpful, but many use sloppy statistics and biased assumptions; these wreak havoc on our society and particularly harm poor and vulnerable populations”.

Dormehl (2014, online) explains that not only is the use of algorithms in data mining open to misuse, but also that it is foolish to believe all tasks can be automated in the first place, pointing to data mining as a method of uncovering terrorist plots as an example. Dormehl describes finding terrorist plots as “a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated” (2014, online).

Algorithms and Real-Life Events

Real-life examples of how algorithms can affect major world events are plentiful. Evidence has emerged that algorithms and their associated digital technologies have been used to bring about political outcomes in various countries in recent years, and it is likely that such methods will be an element of many future political campaigns. It has been alleged that online algorithms were deployed to influence voters’ decision-making in the 2016 US presidential election, the 2016 Brexit vote, and the 2017 French presidential election (Devlin 2017, online). Problems arise – and mistrust is created – when algorithms are used in such ways, owing to a lack of transparency and democratic control. The digital methods used to transmit messages and influence audiences evolve more quickly than any regulatory framework can keep up with.

An example of this is the alleged influence of online advertising on the outcome of the Trump-Clinton election, a result which shocked many in the United States and around the world. The innovation of algorithms, according to some analysts, means “even our political leanings are being analysed and potentially also manipulated” (Arvanitakis 2017, online). A prime example was the work of Cambridge Analytica, a data mining organisation that relies on artificial intelligence to manipulate opinions and behaviours “with the purpose of advancing specific political agendas” (Arvanitakis 2017, online) – in this case in favour of Trump. Facebook was the platform on which much of the alleged manipulation took place, with an estimated US$90 million spent on digital advertising to generate US$250 million in fundraising for the eventual winner (Shoval 2017, online). In September 2017, Facebook agreed to provide United States congressional investigators with the contents of 3,000 online advertisements purchased by a Russian advertising agency, alleged to contain evidence of digital interference in the election (ABC News 2017, online). Matthew Oczkowski, Cambridge Analytica’s Head of Product, told a recent interviewer: “We have elections going on in Africa and South America, and eastern and western Europe” (Kuper 2017, online).

Additionally, search engine algorithms and recommendation systems “can be influenced, and companies can bid on certain combinations of words to gain more favourable results” (Helbing et al. 2017, online). These methods have been defended by some who, as Helbing (2017, online) explains, say that political nudging is necessary because people find it hard to make decisions and must therefore be helped – a way of thinking known as paternalism. He counters this by suggesting that nudging is not actually a way of persuading people of a particular opinion, but a method of “exploiting psychological weaknesses in order to bring about certain behaviours” (2017, online). Another critic of the use of algorithms to affect voters’ choices is Gavet (2017, online), who argues that the only result of such methods is self-reinforcing bias, that digital technology of this nature is vulnerable to attack by agencies with potentially harmful agendas, and that all forms of artificial intelligence are a threat to democracy in some way.

In the same way that accurate information can be presented to the public to influence the way they think or act, incorrect information can do the same. The Digital Disinformation Forum, held in California in June 2017, stated that deliberate misinformation is the “most pressing threat to global democracy” (Digital Disinformation Forum 2017, online). Smith (2017, online) agrees, noting that “The insidious thing about information pollution is that it uses the Internet’s strengths, like openness and decentralization, against it”, and that misinformation is a potential “global environmental disaster” that impacts everyone. Immediately after the 1st October 2017 Las Vegas Strip shooting, in which a gunman killed 58 people in the deadliest mass shooting committed by a lone gunman in US history, news spread via Facebook and Google falsely named a suspect, describing them as a “far-left loon” (ABC 2017, online), when the gunman had no known political affiliations. A pro-Trump Facebook page incorrectly named a person as the shooter, and the story became the first result on Google’s search page on the subject (ABC 2017, online). “This should not have appeared,” a Google spokesperson later said, as the information was removed from its search results (ABC 2017, online). Both Facebook and Google came under scrutiny from a variety of political sources for their slow response to requests to remove the information from their platforms (ABC 2017, online).

Adding algorithms to this mix can be dangerous, Smith notes, pointing to the way in which predictive policing algorithms in the United States increase patrols in high-crime areas, but can induce a cycle of violence between police and angry or disenfranchised residents as a consequence (2017, online). O’Neil (2016, p.1) explains that “this type of model is self-perpetuating, highly destructive, and very common.” Perhaps the most damning statement on the use of algorithms in societies based on data comes from Devlin (2017, online), who says that while societies which operate in this way “may seem appealing in the light of current political dysfunction worldwide … it is also deeply inimical to the process we call democracy”.

The Future of Algorithms

What does the future hold for algorithms and their place in Western societies and democracy? Floridi (2017, online) argues that the increasing proliferation of algorithms in digital technology will continue to threaten many aspects of our daily lives, employment most especially. Floridi explains that because digital technology has replaced many tasks traditionally performed by us, “algorithms can step in and replace us”, and the consequence “may be widespread unemployment” (2017, online). It has been estimated that within the next ten years around half of jobs will be threatened by algorithms, and that up to 40% of the world’s top 500 companies will have vanished (Helbing et al. 2017, online). Algorithms may increasingly “take care of mundane administrative jobs, do the analysis of markets and roam through thousands of pages of case law”, as well as creating our news feeds (Stubb 2017, online).

A 2016 Pew Research Center study found it likely that algorithms will “continue to have increasing influence over the next decade, shaping people’s work and personal lives and the ways they interact with information, institutions (banks, health care providers, retailers, governments, education, media and entertainment) and each other” (Ellis 2016, online). The flip side of these likely advantages, the same study found, is the fear that algorithms will “purposely or inadvertently create discrimination, enable social engineering and have other harmful societal impacts” (Ellis 2016, online).

In April 2017, a House of Commons committee in the United Kingdom published the results of its ‘Algorithms in Decision-Making’ inquiry, with the overall conclusion that human intervention is almost always needed before the decisions made by online algorithms can be trusted (House of Parliament 2017, online). Major points from the findings include that algorithms are “subject to a range of biases related to their design, function, and the data used to train and enact these systems”, that “transparency alone cannot address these biases”, and that algorithmic biases have “cultural impacts beyond the specific cases in which they appear” (House of Parliament 2017, online). The inquiry also recommended greater regulation of online algorithms, as transparency alone “doesn’t necessarily create trust” (House of Parliament 2017, online).

A solution to the possibility of algorithmic errors, as suggested by Floridi, is to “put human intelligence back into the equation” (2017, online). This can be done by “designing the right sort of algorithm” (2017, online), ensuring that not all decisions are left to machines, and that humans oversee the decisions machines do make. In the political sphere, some politicians might be jubilant at the decline of journalism, but they should remember that “algorithms will soon be better at legislation than they are” (Stubb 2017, online). Some commentators and experts have gone further with their predictions, with technology visionaries including Bill Gates, Elon Musk and Steve Wozniak warning that algorithms and associated artificial intelligence-based technologies are a “serious danger for humanity, possibly even more dangerous than nuclear weapons” (Helbing et al. 2017, online).

Case Studies

Facebook

“There was no tool where you could go and learn about other people. I didn’t know how to build that so instead I started building little tools,” Mark Zuckerberg said (Carson 2016, online) of the origins of the website that would become a 300-billion-dollar company. In 2004 he launched the social networking site Facebook, and its popularity quickly spread across several universities before it became Facebook.com in August 2005 (Phillips 2007, online). The site’s use grew exponentially; it now has two billion active users per month (Facebook, online) and has recently unveiled a new mission statement: “To give people the power to build community and bring the world closer together” (Facebook, online). According to the site’s own statistics, the average user spends 50 minutes a day on Facebook, Facebook Messenger or Instagram and has 150 Facebook friends (Facebook, online). Until 2012, the site kept advertisements separate from its users’ personal content and did not share any information with marketing agencies. Then flotation brought greater demands from investors for advertising revenue, and its methods changed (Kuper 2017, online).

Perhaps the most notable change this has brought to democracy is the way Facebook now controls how citizens consume news. Most under-35s rely on Facebook for their news, both personal and global (Francis 2015, online; Jain 2016, online; Samler 2017, online), and its algorithms control what information is seen by its users and, hence, what is thought about democratic or political issues. In changing, on such a scale, the fundamental methods by which people receive information, Facebook is disrupting democracy like nothing the Internet has produced before. As Samler (2017, online) explains, Facebook is “one of the Internet’s most radical and innovative children”. The result has been “a loss of focus on critical national issues, an erosion of civil disagreement, and a threat to democracy itself” (O’Neil 2016, online).

As more people get their news from an algorithm-driven news feed, traditional journalism has been greatly affected by the rise of Facebook. The increasing use of social media as a way of sourcing news, real or otherwise, threatens the traditional role of the media as the Fourth Estate. Facebook has been called a “social problem” (Francis 2015, online) that breeds a shallowness now sweeping Western societies, while creating a “world view about as comprehensive as was found in the high school cafeteria” (Francis 2015, online). Global leaders take advantage of its directness to bypass the media and speak directly to the public, and the operators of Facebook and Twitter encourage this behaviour as it increases engagement with their sites. Journalists still attempt to report factual stories, but they are under increasing pressure (Shoval 2017, online), and the disproportionately high financial awards made against newspapers in the courts threaten press freedom at an industry level (Linehan 2017, p.11).

With Facebook now having such a high degree of control over the way in which people consume news, traditional media companies are struggling to reach the public with legitimate news (Shoval 2017, online). After the 2016 US presidential election, Facebook announced its “Facebook Journalism Project” – a project with the aim of forging stronger ties with the journalism industry, including working more closely with local news outlets (Shoval 2017, online). With the number of news consumers who get their news from Facebook’s news feed on the rise, it is difficult to see how this is anything more than an empty platitude.

While Facebook is described as ‘social media’, it is important to remember that its success is premised on using increasingly sophisticated techniques to target users by predicting the content they’ll want to read and watch, “along with the stuff they’ll want to buy from advertisers” (Ellis 2016, online). Facebook is now a “monumentally influential force in the fabric of modern life” (Statt 2017, online), and there now exists Facebook electioneering by major political figures like Canadian Prime Minister Trudeau and French President Macron, in which algorithms play a huge part. Facebook’s algorithm generates a “plethora of ordinary effects” (Bucher 2015, p.44), from the hunt for ‘likes’ to the question “Where did this information that has suddenly popped up come from?” (Bucher 2015, p.44). Francis (2015, online) suggests that the only antidote to relentless Facebook misinformation is to “do some serious fact-checking and research”, while Pennington (2013, p.193) says that while Facebook can be an excellent tool for political participation, the key for the individual user is to “keep an open mind to others instead of falling down the rabbit hole of narcissism”.

Fake news can be defined as “a political story which is seen as damaging to an agency, entity or person” (Merriam Webster Dictionary 2017, online), and the concept and its proliferation on various platforms, including Facebook, have been forced into the public domain by President Trump and the election from which he emerged victorious. Fake news has the power to “damage or even destroy democracy” (Jain 2016, online) if not regulated. During a 2016 press conference, then-President Obama noted that on a social network such as Facebook, “If everything seems to be the same and no distinctions are made, then we won’t know what to protect” and “Everything is true and nothing is true” (Jain 2016, online). Simply sending out Facebook advertisements to see how they are received can help a political party shape its manifesto (Kuper 2017, online). If a large number of users ‘like’ a story about a crackdown on immigration, a party or candidate can make it their official standpoint. Those people can then be targeted with more advertisements and appeals for funding.

The unexpected election of Donald Trump is said to “owe debts to … rampant misinformation” (Heller 2016, online). During the last stages of campaigning by Trump and Clinton, it was obvious that Facebook’s news algorithm was not able to distinguish between real news and completely fabricated news: “the sort of tall tales, groundless conspiracy theories, and oppositional propaganda that, in the Cenozoic era, circulated mainly via forwarded e-mails” (Heller 2016, online).

Zuckerberg rejects the idea that his company played a role in spreading ‘fake news’ about political candidates, saying in an interview: “Voters make decisions based on their lived experience” (Newton 2016, online). At the same time, a study found that “three big right-wing Facebook pages published false or misleading information 38% of the time during the period analysed, and three largely left-wing pages did so in nearly 20% of posts” (Silverman 2016, online). Zuckerberg then committed his company to doing more to fight the spread of fake news and vowed it would be an “arbiter of truth” (Jain 2016, online), while also stating that he runs a “tech company, not a media company” (Samler 2017, online). He also denied that Facebook compounds the problem of its users living in an information ‘filter bubble’, even though his own company quietly released the results of a 2015 study which showed exactly the opposite to be true (Tufekci 2015, p.9), and another study has shown that users are much less likely to click on content that challenges their beliefs (Tufekci 2015, p.9). Western democracies have a liberal left and a conservative right, with “neither being exposed to the reasoned arguments of the other” (Samler 2017, online). Indeed, only 5% of Facebook users and 6% of Twitter users admit to associating on these platforms with people who hold differing political opinions to their own (Samler 2017, online). Critics of how the social media giants generate their users’ news feeds have said that these organisations need to accept that they are no longer solely technology platforms, but media platforms too (Samler 2017, online).

Interestingly, on 30th September 2017, Zuckerberg made a post on his personal Facebook page for the end of Yom Kippur, apologising and seeking forgiveness for any of the ways that his organisation has been “used to divide people rather than bring us together” (Facebook 2017, online). This has been described as a “wholly surprising admission of guilt from someone in the tech world” (Barsanti 2017, online).

The key to Facebook’s ongoing success is keeping its users engaged. Bucher explains that “examining how algorithms make people feel … seems crucial if we want to understand their social power” (2015, p.30) – if, indeed, users are even aware of the power of the algorithm at all. Facebook’s data teams are almost solely focussed on finding ways to increase the amount of time each user remains engaged with the platform; they are not concerned with truth, learning, or civil conversation (O’Neil 2016, online). Success is measured by the number of clicks, ‘likes’, shares and comments, not the quality of the material being engaged with. The greater the engagement, the more data Facebook can use to sell advertisements (O’Neil 2016, online). This seems a fairly obvious business model, but research has shown that many users are unaware of it. In a 2015 study, more than half of Facebook users were unaware of how their news feed was put together (Eslami et al. 2015, p.153). This is problematic, as ignorance of how the site’s algorithm works can wrongly lead some users to “attribute the composition of their feeds to the habits of their friends or family” (Eslami et al. 2015, p.153). This can reinforce the ‘filter bubble’ and lead many users to believe the information they are seeing is trustworthy and correct, while their behaviour is tracked in order to profile their identity.
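Facebook does not publish its ranking code, but the success metrics described above can be caricatured in a few lines of Python. The posts and weights below are invented purely for illustration; the point is that truthfulness appears nowhere in the calculation.

# Invented example posts and weights -- not Facebook's actual model.
posts = [
    {"id": "local_news", "likes": 120, "comments": 30, "shares": 10, "accurate": True},
    {"id": "fake_story", "likes": 900, "comments": 200, "shares": 150, "accurate": False},
    {"id": "council_report", "likes": 40, "comments": 5, "shares": 1, "accurate": True},
]

def engagement_score(post):
    # Success is measured purely by interaction; the 'accurate' field
    # is never consulted when ordering the feed.
    return post["likes"] + 3 * post["comments"] + 5 * post["shares"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # ['fake_story', 'local_news', 'council_report']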

While finding news that fits a user’s news feed, Facebook’s algorithms create other problems, including the “voracious appetite for personal data” (Ellis 2016, online) that ad-supported services such as Facebook need to keep their predictions going. The consequence is an undermining of personal data privacy and an increased likelihood of the site being used for data mining by individuals, organisations or entities with potentially nefarious motives, possibly leading to more “government by algorithmic regulation” (Ellis 2016, online). The potential for abuse is high when algorithms are unregulated and can be used by anyone with the money to invest in them.

Another major problem Facebook’s algorithm creates is one of repetition, which has the potential to prevent democratic processes and decisions from evolving over time. While real life allows the past to be in the past, “algorithmic systems make it difficult to move on” (Bucher 2015, p.42). This is the “politics of the archive” (Bucher 2015, p.42): all decisions an algorithm makes about the information it allows you to see in the future are based on what you did in the past. What is relatable and retrievable from the past shapes the way Facebook’s algorithm works in the present, and will potentially affect the user’s decisions in the future.
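The ‘politics of the archive’ can be simulated directly: if every past click is stored and boosts the probability of seeing similar content, the feed converges on a single viewpoint. This is a toy model built on invented assumptions, not Facebook’s code.

import random

random.seed(1)
topics = ["left", "right", "sport"]
weights = {t: 1.0 for t in topics}  # what the archive 'remembers' per topic

def pick_story():
    # The chance of a topic being shown is proportional to its archived weight.
    return random.choices(topics, [weights[t] for t in topics])[0]

# Simulate a user who only ever clicks 'left' stories; each click is
# archived and raises that topic's future weight.
for _ in range(500):
    if pick_story() == "left":
        weights["left"] += 0.5

total = sum(weights.values())
print({t: round(w / total, 2) for t, w in weights.items()})
# e.g. {'left': 0.98, 'right': 0.01, 'sport': 0.01} -- the past crowds out the rest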

Despite the many negative effects Facebook can have on democracy, it can be a positive force for it too. During elections in the United States in 2010 and 2012, the site conducted experiments with a tool it called the ‘voter megaphone’ (O’Neil 2016, online). The idea was to encourage users to make a post saying they had voted, which would, in turn, remind and encourage others to do the same. Statistics showed 61 million people made such a post, with the likely result of increasing participation in democratic processes, especially among young people (O’Neil 2016, online). Additionally, movements can be organised on social media, including the 2017 women’s marches, which saw about five million women march globally as a result of online organisation (Vestager 2017, online).

Facebook is determined to show that the feed its algorithm creates and controls is an ever-changing and independent tool for good, but the reality is that it is a vital part of its business model. The Facebook algorithm is “biased towards producing agreement, not dissent” (Tufekci 2015, p.9). After all, if its users were continually presented with information they didn’t appreciate, they would simply go elsewhere – and that is not a successful business model, by any definition. How the filter bubbles in which Facebook users’ news feeds exist affect democracy is as simple as it is destructive. Electoral laws are outdated, and “regulators aren’t big or savvy enough to catch transgressors” (Kuper 2017, online). From this alone, we can say that Facebook has changed democracy. Perhaps author and mathematician Cathy O’Neil put it at its simplest and best when she said: “Over the last several years, Facebook has been participating – unintentionally – in the erosion of democracy” (2016, online).

Google

In 1998, university drop-outs Larry Page and Sergey Brin founded Google with the stated hope “to organise the world’s information and make it universally accessible and useful” (Google, online). Its search engine helped unlock many of the so-called ‘walled gardens’ of the Internet, including sites like AOL and Yahoo. Since then, it has set about organising the information on the Internet, and it continues to add many millions of pages to its searchable database every day (Vise & Malseed 2005, p.3).

After going public in 2004, its value and influence grew exponentially, and it began to challenge Microsoft’s dominance of the online world (Vise & Malseed 2005, p.3), overtaking it as the most visited site on the web in 2007 (Strickland 2017, online). The company owes its success to its search engine’s ability to return relevant results at lightning speed. It now has over 50,000 employees globally, has expanded its business interests into artificial intelligence and self-driving cars (Frommer 2014, online), and its search engine is used globally over 6.5 billion times every day (Allen 2017, online).

Google has been called “the keeper of web democracy” (Howie 2011, online), and its search engine is a powerful and vital component of 21st-century Western democratic life, yet its influence is not widely understood or researched (Richey & Taylor 2017, p.1). With 150,000,000 active websites on the Internet today (Strickland 2017, online), it performs an important role in the lives of millions of people. Google has 88% of the market share in search and search advertising (Hazen 2017, online) and, combined with Facebook, has more than a billion regular users. It is partly because of the colossal number of users and the volume of data with which it operates that Google’s algorithms are so complex.

The company markets its algorithm-driven search engine as a tool which will “result in finer detail to make our services work better for you” (Google 2017, online), and, in theory, the first results from a search should be the ones which are most relevant to the keywords searched. This seems, on the face of things, to be a simple and incredibly convenient tool for all its users. Yet critics of its methods and its effects on democracy are plentiful.
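In its simplest form, relevance ranking can be sketched as keyword overlap between the query and each page. Google’s real ranking weighs hundreds of further signals – links, freshness, personalisation and paid placement among them – so the toy below, with hypothetical pages, shows only the baseline idea the company’s marketing describes.

# Hypothetical pages; the score counts how many query keywords each contains.
pages = {
    "candidate_bio": "candidate biography policy record voting history",
    "news_story": "election results debate coverage policy",
    "recipe_blog": "cake recipe baking flour sugar",
}

def rank(query):
    terms = set(query.lower().split())
    scores = {url: len(terms & set(text.split())) for url, text in pages.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank("candidate policy"))  # ['candidate_bio', 'news_story', 'recipe_blog']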

“Unregulated search rankings could pose a significant threat to a democratic system of government,” says Forbes writer Tim Worstall (2013, online), while Hazen (2017, online) explains how Google’s “relentless pursuit of efficiency leads these companies to treat all media as a commodity”. The real value of the platform lies not in the quality, honesty or accuracy of the information it produces, but in the amount of time the user is engaged with it. Hazen goes on to describe how these methods have pushed Page and Brin into the ten wealthiest people in America, each with a personal fortune of over US$37 billion, and suggests that the ways these methods have affected democracy do not seem to have been taken into account at any point in the company’s evolution.

Much like Facebook, Google has been criticised for data mining and, on several occasions, taken to court for mismanaging users’ data (Smith 2016, online). Following United States government whistle-blower Edward Snowden’s leaks, Google’s users have become more savvy about how the site collects and uses their data, and critics have labelled the company’s data mining methods as “purely to benefit Google” (Miller 2012, online). Yet the practice continues. The collection of data, and the profits of around US$40 billion a year it generates, concerns many users of Google, despite the company’s claim that it uses data mining techniques to “find more efficient algorithms for working with massive data sets, developing privacy-preserving methods for classification, or designing new machine learning approaches” (Google 2017, online).

Another way in which the vast amounts of data channelled through Google could be used is in making political predictions, although the usefulness of this is unclear. A real-life example demonstrates the potential: Google data showed that searches for ‘Donald Trump’ accounted for almost 55% of views in the three days before the 2016 presidential election (Allegri 2016, online), when the majority of polls predicted a Clinton victory, and its data predicted his final electoral college vote total to within two votes of the actual number. This made analysts, tech writers and journalists take notice, with the general consensus that it was time to “start taking the electoral prediction powers of Google much more seriously” (Kirby 2016, online).

Consistent accusations of tampering with results have plagued Google throughout its lifetime, and such actions, if true, have the potential to affect democracy negatively. The company’s Vice President Marissa Mayer appeared in a 2011 YouTube video telling an audience that her company regularly, and unashamedly, puts its own services at the top of search results (Howie 2011, online). In 2017, public trust in Google’s algorithm reached an all-time low in Europe, following the proliferation of fake news stories and clearly engineered results. The European Commission advertised for a company to police Google’s algorithm, to determine the extent to which results are deliberately positioned favourably for those who have paid for it, and how much Google is abusing its market dominance (Hall 2017, p.17). The Commission also launched an investigation into the extent to which Google bans competitors from search results and advertisements, with the promise of keeping the issue “on our desks for some time” (Hall 2017, p.17). The way in which Google “uses its dominant search engine to harm rivals” has led critics like Derrick (2017, p.1) to examine how the concentration or monopolisation of services in this way “threatens our markets, threatens our economy, and threatens our democracy”. It is difficult to see how Google’s self-serving behaviour can have anything but a negative overall effect on democracy in Western societies.

Despite the many criticisms of Google’s algorithm and its negative effects on privacy and democracy, its data mining practices have produced some positive outcomes. In 2014, Google found evidence of child pornography in a user’s e-mail account and reported the person to the National Centre for Missing and Exploited Children in the United States, resulting in an arrest (Matterson 2014, online). Google Maps’ ability to identify illegal activities such as marijuana growing and unapproved building works has also been noted as a positive (Google Earth Blog, online).

The future of Google is likely to see it maintain its virtually unchallengeable position at the head of Internet search and advertising revenue generation. The site’s ability to change its algorithms at any time means it can evolve to control the market in any way it wishes, and can control the impact it has on websites, its competitors, and entire industries. The company’s future is not likely to be one of reduced involvement with algorithms, but quite the opposite, says Davies (2017, online). Where once Google’s algorithm had a relatively basic structure, it is now much more complex, and becoming more so. Its advances in artificial intelligence and machine learning are happening at an “amazing if not alarming rate” (Davies 2017, online), meaning its influence on what data we see is likely to grow. “Not since Rockefeller and JP Morgan has there been such a concentration of wealth and power in the hands of so few,” explains Hazen (2017, online).

Twitter

Twitter began as an idea that co-founder Jack Dorsey had in 2006; he originally imagined it as an SMS-based communications platform (MacArthur 2017, online), hence the 140-character limit. Fast forward five years, and it was one of the biggest communication platforms in the world. It now has over 200 million active monthly users, and it is considered vital that every public figure who wishes to engage with their audience have an account, as with Facebook (MacArthur 2017, online).

Studies have shown that political candidates who use Twitter as a means of engaging with voters significantly increase their odds of winning (LaMarre & Suzuki-Lambrecht 2013, p.1). The platform stimulates word-of-mouth marketing and increases audience reach significantly (LaMarre & Suzuki-Lambrecht 2013, p.1), with live information being of particular importance and influence. Sustaining a live connection, via tweeting, through an election cycle has been shown to produce a positive reaction from supporters (LaMarre & Suzuki-Lambrecht 2013, p.1), which can translate into positive results on election day. President Obama’s use of Twitter during his two campaigns is a good example of this.

However, not all use of Twitter is as open and honest as it may seem. During the 2016 US presidential election, 20% of all political tweets made during the three televised debates came from bots – pieces of software designed to execute commands towards a particular goal (Campbell-Dollaghan 2016, online). It was unclear where many of the bots came from or who created them, making it easier to spread fake news stories and potentially influence public opinion. There is also evidence that during the UK Brexit campaign, huge numbers of “fake news stories, false factoids, and absurd claims were passed over social media networks, often by Twitter’s highly automated accounts” (Howard 2016, online). Bots and automated accounts are very easy to make (Campbell-Dollaghan 2016, online) and can amplify misinformation in a political campaign. Twitter allows news stories from untrustworthy sources to “spread like wildfire over networks of family and friends” (Howard 2016, online).

These examples of how Twitter is used to spread information or misinformation strongly suggest that it should now be regarded as a media company. However, much like Facebook, Twitter is not legally obliged to regulate the information passed over its network for quality or accuracy. In fact, it has been given a “moral pass” (Howard 2016, online) on the obligations professional media organisations and journalists are held to.

With the rollout of a 280-character trial in October 2017 (Hale 2017, online), Twitter is arguably positioning itself to be an even more influential transmitter of information, accurate or inaccurate, in future democratic processes. It remains to be seen whether the change will boost engagement with the platform, but the potential is there for it to become an even bigger player in the political arena (Hale 2017, online).

Other Platforms

While the algorithms used by Facebook and Google are the dominant forces in controlling what many people see and think about democracy, other platforms are playing increasing roles. With Facebook and Google now firmly part of the established mainstream, there is space for other social media to fill their previous roles as the newcomer or disruptor on the scene. On photo-sharing platforms such as Instagram and Snapchat, a politician or political party can share images directly with their followers, and engage directly with them while doing so.

The way in which these photo-sharing social media have been used in recent elections suggests they will have a huge role to play in future contests. The 2017 UK general election saw both Theresa May and Jeremy Corbyn use Instagram to a small degree, with surveys showing Corbyn’s use was more effective, although this could also be explained by the fact that younger people are more likely to vote Labour (Kenningham 2017, online). French President Macron used it heavily and swept to power (Kenningham 2017, online), and Indian Prime Minister Modi has a huge following of eight million. In the UK alone, Instagram has 18 million users and Snapchat 10 million – both significant portions of the 65 million total population – so political parties and figures need to use these platforms to be successful in the ever-competitive mediascape.

Instagram’s and Snapchat’s core demographics are much younger, on average, than those of Twitter and Facebook, and the platforms have an ability to reach groups of people who feel permanently disengaged from the political process (Kenningham 2017, online). Ninety percent of Instagram’s users, for example, are under 35, and it is increasingly the platform of choice for image-fixated millennials (Kenningham 2017, online).

While Instagram may be an excellent tool for reaching a younger demographic, its algorithm can be used and abused, as well as negotiated. Much like the Facebook news feed algorithm, Instagram’s algorithm has been described as “mysterious, yet ingenious and brilliant at showing the best content to the best people” (Lua 2017, online). It is driven by seven key factors or elements of a post: engagement, relevancy, relationships, timeliness, profile searches, direct shares, and time spent (Lua 2017, online). A 2016 Instagram study (Instagram 2016, online) found that when posts were listed chronologically, users missed up to 70% of their feeds, and the platform changed to an algorithm-driven method of ordering. Despite some initial opposition to the move, feedback has been generally positive (Lua 2017, online), and the relatively simple nature of Instagram’s algorithm, compared to that of Facebook, means it is easy for users to work with or even “beat” (Chacon 2017, online), as the sketch below suggests.
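Combining the seven factors listed by Lua (2017) into a single ranking might look something like the following sketch; the weights and example posts are invented, as Instagram’s real model is unpublished.

# Seven factors from Lua (2017); the weights are invented for illustration.
FACTORS = ["engagement", "relevancy", "relationships", "timeliness",
           "profile_searches", "direct_shares", "time_spent"]
WEIGHTS = dict(zip(FACTORS, [0.30, 0.20, 0.15, 0.15, 0.05, 0.10, 0.05]))

def feed_score(post):
    # Weighted sum of per-factor scores, each normalised to the range 0..1.
    return sum(WEIGHTS[f] * post.get(f, 0.0) for f in FACTORS)

posts = [
    {"id": "friend_photo", "engagement": 0.4, "relationships": 0.9,
     "timeliness": 0.8, "relevancy": 0.7},
    {"id": "viral_meme", "engagement": 0.95, "direct_shares": 0.9,
     "timeliness": 0.3, "relevancy": 0.5},
]
for post in sorted(posts, key=feed_score, reverse=True):
    print(post["id"], round(feed_score(post), 3))

Because any single factor – engagement above all – can lift a post over the others, a user who understands the weighting can tailor posts to exploit it, which is precisely what makes such a scheme possible to ‘beat’.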

Snapchat is behind Instagram on user numbers but, crucially, it has high levels of engagement, with the average user spending up to 30 minutes per day on the platform (Kenningham 2017, online). Its algorithm, similar to Instagram’s, places certain posts at the top of its feed, which leaves it open to misuse, but it offers a “way to engage with people who normally switch off at the very mention of the word ‘politics’” (Kenningham 2017, online). Jeremy Corbyn used the platform extensively in the 2017 UK election with some success, and all three French presidential candidates used it, most especially the eventual winner (Kenningham 2017, online).

While Instagram and Snapchat have not yet played defining roles in political processes anywhere in the world, and the extent to which their algorithms can be used or manipulated is as yet unclear, they are needed to “become a central part of the democratic process to ensure more people have a say and stake in the future of [political processes]” (Kenningham 2017, online). It is likely that Instagram and Snapchat have had only a positive effect on Western democratic processes thus far.

Summary of Findings

After such a detailed examination of the use of algorithms in social media and search engines, it is important to summarise the findings with reference to the original research questions.

The first research question asked: To what extent do algorithms in websites, networking services and social media have a negative effect on democracy in Western societies?

When the effects of the algorithms used by Facebook, Google and others are examined, it can be said that, in a general sense, these algorithms have a negative impact on Western democracies.

Facebook’s algorithm is probably the biggest offender in this regard. Its aim is not to promote or encourage quality content being uploaded or shared on the platform, but to gather as much personal information about its users as possible and keep them engaged for as long as possible, in order to better target paid advertisements at them. Its success does not rely on the ability to distinguish between quality, truthful information and dishonest, fake information – as long as users are engaged regularly and for lengthy periods, it can sell a large number of advertisements and its financial success is certain. Facebook’s algorithm also perpetuates the ‘filter bubble’ method of news feed generation, in which users are rarely, if ever, exposed to information contrary to their personal beliefs. Its algorithm can be, and has been, manipulated to promote news stories with false or misleading information in order to gain political advantage.

Similarly, Google’s algorithm has many negative effects on democracy. Its search algorithm is designed to produce results based on a user’s previous searches which, like Facebook’s, perpetuates the ‘filter bubble’, and it is designed to soak up as much information about the user as possible in order to target advertisements and generate revenue. Google claims it uses data mining to improve its services for users, yet it makes US$40 billion a year from these practices, so it is difficult to accept that the activity is not self-serving. Additionally, Google’s monopolisation of data and advertising services drives competition out of the market, and the site regularly manipulates search results to place particular results higher than others.

The second research question asked: To what extent, if any, can users of new and digital media be manipulated by algorithms to think or act in certain ways?

Algorithms used by Facebook and Google control what information users have access to in their news feeds and, hence, what issues they are exposed to and are likely to think about (Francis 2015, online). While a small number of writers have argued that technologies like web search and social networks reduce ideological segregation (Flaxman et al. 2016, p.298), there is much evidence showing otherwise (Francis 2015, online). The repetitive nature of web-based algorithms means that information engaged with by users affects their future search results and the content of their news feeds, with similar results or information likely to appear again, perpetuating the ‘filter bubble’. Facebook continually removes or hides news that it believes might offend users, including many investigative journalism pieces (Ingram 2015, online). When the filter bubble and the easy proliferation of untruthful or misleading information are combined, users can be manipulated into thinking certain ways about political or other subjects. The monopolisation of news distribution is arguably not of Facebook’s own doing: so many people use it globally that media companies have no real choice but to use it as a way of interacting with news consumers, yet the way Facebook generates news feeds can differ from one day to the next.

The third research question asked: To what extent do search engine algorithms affect democracy in Western societies?

The answer to this question is: to a huge extent. With a virtual monopoly on search, Google “has the power to flip the outcomes of close elections easily – and without anyone knowing” (Epstein 2014, online). The company has the ability to identify a candidate that best suits its needs, identify undecided voters, and send them customised search results tailored to make that candidate look better, while nobody – candidate, voter or regulator – is any the wiser (Epstein 2014, online). There is no evidence of such direct manipulation, but favouritism can happen ‘organically’ on Google’s search engine – this is what the company claimed was the cause of Barack Obama’s consistently high rankings in the months before the 2008 and 2012 elections (Epstein 2014, online). A 2010 study in which a group of Americans chose between Julia Gillard and Tony Abbott (candidates the test subjects were unfamiliar with) as the ideal candidate for Prime Minister of Australia found that they made their choice based on search rankings (Epstein 2014, online). In future elections, as increasing numbers of undecided voters get their information on political matters through the Internet, the way Google’s algorithm works will have international ramifications. Google is not ‘just’ a platform; it “frames, shapes and distorts how we see the world” (Arvanitakis 2017, online).

The fourth research question asked: Which website, networking service or search engine is most likely to affect democracy through its use of algorithms?

The answer is Facebook, as many recent real-life examples show: its platform and algorithms were used in pursuit of political outcomes in the Brexit referendum, the Trump-Clinton election, the French presidential election, and the UK general election. The most notable case of algorithm-driven influence in politics is the Trump-Clinton election contest. President Trump’s Digital Director, Brad Parscale, admitted that Facebook was massively influential in winning the election for Trump (Lapowsky 2016, online), generating huge sums of money in online fundraising, a large proportion of which went back into digital advertising. Analysts and writers have also pointed to “online echo chambers and the proliferation of fake news as the building blocks of Trump’s victory” (Lapowsky 2016, online) – echo chambers created by Facebook’s algorithm. Trump’s online team took advantage of Facebook’s ability to test audiences with ads, running 175,000 variations of ads on the day of the third presidential debate alone (Lapowsky 2016, online). Cambridge Analytica pulled data from Facebook and paired it with huge amounts of consumer information from data mining companies to “develop algorithms that were supposedly able to identify the psychological make-up of every voter in the American electorate” (Halpern 2017, online).
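
At its core, the record linkage and scoring described here is a join between datasets on a shared identifier, followed by a weighted score. The sketch below is deliberately simplified and entirely hypothetical – the field names, weights and data are invented, and the real models were proprietary and far more complex:

```python
# A deliberately simplified, hypothetical sketch of joining social and
# consumer records on a shared key and computing a naive "persuadability"
# score. Not Cambridge Analytica's actual model.
social = {"voter42": {"likes_outdoors_pages": 1, "shares_news": 0}}
consumer = {"voter42": {"owns_truck": 1, "magazine_subs": 2}}

WEIGHTS = {"likes_outdoors_pages": 0.4, "owns_truck": 0.3, "magazine_subs": 0.1}

def persuadability(voter_id):
    """Join the two data sources on a shared key, then weight each field."""
    profile = {**social.get(voter_id, {}), **consumer.get(voter_id, {})}
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in profile.items())

print(persuadability("voter42"))  # 0.4*1 + 0.3*1 + 0.1*2 = 0.9
```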

The Future of Democracy in an Algorithm-Driven World

Increased use of algorithms and artificial intelligence can bring many benefits to societies. New systems can identify students who need assistance, and data can be used to identify health hazards within a population (Arvanitakis 2017, online). However, a diminished human role in decision-making may have many negative consequences for democracy.

The innovation of algorithms means “our political leanings are constantly being analysed and potentially also manipulated” (Arvanitakis 2017, online), and opaque algorithms can be “very destructive” (O’Neil 2016, p.4). Citizens of Western democracies have always thought that they knew where their information was coming from, but that is no longer the case (Arvanitakis 2017, online). The sources we have come to trust to bring us information have fallen under the influence of powerful, self-serving websites whose algorithms make no distinction between truth and lies, or high-quality information and nonsense. When a list of search results appears after a Google search, it is not clear where the results have come from or why they appear in that order; indeed, it is almost impossible to work out how a search ranking was assembled, and this opacity is what is concerning for healthy democracy. A professor at Bath University explained that “it should be clear to voters where information is coming from, and if it’s not transparent or open where it’s coming from, it raises the question of whether we are actually living in a democracy or not” (Arvanitakis 2017, online).

In order for anything to survive for any length of time, it has to adapt, and the future of democracy increasingly looks like one of constant technological adaptation. Newly emerging social media platforms, which have not yet been absorbed into the mainstream model of collecting data for advertisement placement, are, along with other online platforms, likely to be crucial to political participation for future generations. It is vital that young people are civically engaged (actively working to make a positive difference to their communities) in order to define and address public problems (Levine 2007, p.1), and social media has the potential to play a huge part in this. As the variety of methods it offers for information sharing and interconnectivity increases, social media has the potential to encourage more people to engage with democratic processes.

It is also vital for algorithms to be transparent and accountable (Arvanitakis 2017, online) in order for users of websites, social media and search engines to know how their personal information is being used, and to ensure the information they are seeing is accurate and balanced. “Algorithms are designed with data, and if that data is biased, the algorithms themselves are biased,” explains O’Neil (2016, p.4). Algorithms could be transparent, accountable and objective but, in most cases, are nothing more than “intimidating, mathematical lies” (O’Neil 2016, p.4). Overcoming this is the key to fair and balanced algorithm use in future democratic processes.
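
O’Neil’s point that biased data produces biased algorithms can be made concrete with a toy example (all figures invented): a model that simply learns approval rates from skewed historical decisions reproduces the skew on new cases.

```python
# Toy illustration of "biased data in, biased algorithm out" (invented data).
# Historical decisions under-approved group B, so a model fitted to them
# inherits the disparity even though the bias is never stated explicitly.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

def fit(records):
    """'Learn' the approval rate per group from past decisions."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [y for g, y in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = fit(history)
print(model)                    # {'A': 0.8, 'B': 0.4}
print(model["A"] > model["B"])  # True: the learned rule perpetuates the bias
```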

With a 2017 survey indicating that two-thirds of schoolchildren would not care if social media had never been invented, and 71% admitting to taking “digital detoxes” (The Guardian 2017, online), there are hints that social media use may decline as the next generation of school-aged children reaches adulthood. Many survey respondents believed social media was having a negative effect on their mental well-being, with advertising, fake news and privacy being particular areas of concern (The Guardian 2017, online). Some positives were mentioned, including memes, photo filters, and Snapchat stories, reinforcing the theory that new social media platforms, not Facebook, Twitter or Instagram, may be the future for mass information sharing and for healthy democracy.

Conclusion

It is indisputable that search engines and social media increase the number of ideas, viewpoints, opinions and perspectives available to citizens taking part in democratic processes. An incredibly varied collection of information is available to Internet users at any time, which, at face value, would suggest that citizens should be more informed about political issues than ever before. The Internet is also an effective tool for carrying out successful political campaigns, offering an efficient method by which political groups or individuals can reach audiences with public relations and policy messages.

With these things in mind, it could be easy to move steadily and unquestioningly forward with the idea that software makes our lives more convenient and enjoyable. However, the algorithms controlling data in some of the most popular and widely-used social media and search engines are designed not with the user’s best interests in mind, but with those of the websites themselves – they are businesses, after all. This is a direct and immediate threat to democracy.

The ability to manipulate information online is, similarly, a threat to democratic processes. Evidence and real-life examples show that the control of information and misinformation through search engine and social media manipulation can help bring about desired political results, and the algorithms controlling information in these platforms cannot discern between real and fake, or truth and falsehood. Algorithms functioning to target users with advertising material instead of presenting a fair and balanced variety of information perpetuate the division of society along lines of political belief, and engineer information ‘filter bubbles’. Algorithms operating in this way are a threat to democracy.

It is partly this online environment that has created a divisive populist sentiment that now defines many Western societies, and has left many citizens lacking the full range of knowledge needed to make informed democratic decisions. Thomas Jefferson once proclaimed that “a properly functioning democracy depends on an informed electorate” (Samler 2017, online), but when algorithms are manipulating news feeds and search engine results without regulation, free will in the political arena no longer seems so free.

References

ABC News, 2017. ‘Facebook to Release Russia-Linked Ads to Congress Amid Pressure Over Use in US Election’, online, accessed 26th September 2017: http://www.abc.net.au/news/2017-09-22/facebook-to-release-russia-ads-to-congress-amid-pressure/8973718

ABC News, 2017. ‘Las Vegas Shooting: Politicised “Fake News” of Attack Spread on Google, Facebook’, online, accessed 7th October 2017: http://www.abc.net.au/news/2017-10-03/las-vegas-shooting-false-news-of-attack-spread-google-facebook/9011152

Allegri, C, 2016. ‘Did Google search data provide a clue to Trump’s shock election victory?’, Fox News, online, accessed 30th September 2017

Allen R, 2017. ‘Search Engine Statistics 2017’, Smart Insights, online, accessed 30th September 2017: http://www.smartinsights.com/search-engine-marketing/search-engine-statistics/

Anderson, B & Horvath, B, 2017. ‘The Rise of the Weaponised AI Propaganda Machine’, Scout, online, accessed 16th August 2017: https://scout.ai/story/the-rise-of-the-weaponized-ai-propaganda-machine

Arvanitakis, J, 2017. ‘If Google and Facebook Rely on Opaque Algorithms, What Does That Mean for Democracy?’, ABC, online, accessed 1st October 2017: http://www.abc.net.au/news/2017-08-10/ai-democracy-google-facebook/8782970

Bansal, N, 2012. ‘The Primal-Dual Approach for Online Algorithms’, Approximation and Online Algorithms, Springer, p.1

Baraniuk, C, 2015. ‘The Bad Things That Happen When Algorithms Run Online Shops’, BBC, online, accessed 23rd September 2017: http://www.bbc.com/future/story/20150820-the-bad-things-that-happen-when-algorithms-run-online-shops

Barsanti, S, 2017. ‘Mark Zuckerberg Apologises for Facebook Making Life Worse’, AV Club, online, accessed 2nd October 2017: https://www.avclub.com/mark-zuckerberg-apologizes-for-facebook-making-life-wor-1819042663?rev=1506899047971&utm_content=Main&utm_campaign=SF&utm_source=Facebook&utm_medium=SocialMarketing

Bozdag, E & van den Hoven, J, 2015. ‘Breaking the Filter Bubble: Democracy and Design’, Ethics and Information Technology, Issue 4, p.249

Bucher, T, 2017. ‘The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms’, Information, Communication & Society, pp.30-44

Campbell-Dollaghan, K, 2016. ‘The Algorithmic Democracy’, FastCoDesign, online, accessed 2nd October 2017: https://www.fastcodesign.com/3065582/the-algorithmic-democracy

Carson, B, 2016. ‘Zuckerberg: The Real Reason I Founded Facebook’, Business Insider Australia, online, accessed 26th September 2017: https://www.businessinsider.com.au/the-true-story-of-how-mark-zuckerberg-founded-facebook-2016-2?r=US&IR=T

Chacon, B, 2017. ‘5 Things to Know About the Instagram Algorithm’, Later, online, accessed 1st October 2017: https://later.com/blog/instagram-algorithm/

Chaffey, D, 2017. ‘Global Social Media Research Summary 2017’, Smart Insights, online, accessed 1st October 2017: http://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/

Derrick, J, 2017. ‘Benzinga: Elizabeth Warren: Apple, Google, and Amazon Threaten Our Democracy’, Newstex, p.1

Devlin, B, 2017. ‘Algorithms or Democracy: Your Choice’, TDWI, online, accessed 23rd September 2017: https://tdwi.org/articles/2017/09/08/data-all-algorithms-or-democracy-your-choice.aspx

Digital Disinformation Forum, 2017. Online, accessed 23rd September 2017: https://www.disinforum.org/#intro

Domonoske, C, 2016. ‘Students Have ‘Dismaying’ Inability To Tell Fake News From Real, Study Finds’, NPR, online, accessed 30th September 2017: http://www.npr.org/sections/thetwo-way/2016/11/23/503129818/study-finds-students-have-dismaying-inability-to-tell-fake-news-from-real

Dormehl, L, 2014. ‘Algorithms are Great and All, But They Can Also Ruin Lives’, Wired, online, accessed 23rd September 2017: https://www.wired.com/2014/11/algorithms-great-can-also-ruin-lives/

Dusi, M, Finamore, A, Claffy, K, Brownlee, N & Veitch, D, 2016. ‘Guest Editorial Measuring and Troubleshooting the Internet: Algorithms, Tools and Applications’, IEEE Journal on Selected Areas in Communications, Volume 34, Issue 6, p.1805

Ellis, D, 2016. ‘Why Algorithms are Bad For You’, Life on the Broadband Internet, Pew/Elon

Epstein, R, 2014. ‘How Google Could End Democracy’, US News, online, accessed 1st October 2017: https://www.usnews.com/opinion/articles/2014/06/09/how-googles-search-rankings-could-manipulate-elections-and-end-democracy

Eslami, M, Rickman, A, Vaccara, K & Aleyasen, A, 2015. ‘I Always Assumed That I Was Really Close to Her’, Proceedings of the 33rd Annual SIGCHI Conference on Human Factors in Computing Systems, New York, pp.153-162

Facebook, 2017. Online, accessed various dates: http://www.facebook.com

Fiat, A & Woeginger, GJ, 1998. Online Algorithms: The State of the Art, p.7

Flaxman, S, Goel, S & Rao, JM, 2016. ‘Filter Bubbles, Echo Chambers, and Online News Consumption’, Public Opinion Quarterly, Volume 80, p.298

Floridi, L, 2017. ‘The Rise of the Algorithm Need Not Be Bad News for Humans’, Financial Times, online, accessed 23rd September 2017: https://www.ft.com/content/ac9e10ce-30b2-11e7-9555-23ef563ecf9a

Francis, D, 2015. ‘Facebook Elections, Facebook Candidates, Facebook Democracy’, Huffington Post, online, accessed 27th September 2017: http://www.huffingtonpost.com/dian-m-francis/facebook-elections-facebo_1_b_8271488.html

Frommer, D, 2014. ‘Google’s Growth Since its IPO is Simply Amazing’, Quartz, online, accessed 30th September 2017: https://qz.com/252004/googles-growth-since-its-ipo-is-simply-amazing/

Gavet, M, 2017. ‘Rage Against the Machines: Is AI-Powered Government Worth It?’, We Forum, online, accessed 23rd September 2017: https://www.weforum.org/agenda/2017/07/artificial-intelligence-in-government

Google Earth Blog, online, accessed 30th September 2017: http://www.gearthblog.com

Google, ‘From the Garage to the Googleplex’, online, accessed 30th September 2017: https://www.google.com/intl/en/about/our-story/

Google Research, online, accessed 30th September 2017: http://www.research.google.com/pubs/dataminingandmodeling.html

The Guardian, 2017. ‘Growing Social Media Backlash Among Young People, Survey Shows’, online, accessed 7th October 2017: https://www.theguardian.com/media/2017/oct/05/growing-social-media-backlash-among-young-people-survey-shows

Hale, S, 2017. ‘Twitter Trials 280 Characters, But Its Success in Japan is More Than a Character Difference’, Oxford Internet Institute, online, accessed 2nd October 2017: https://www.oii.ox.ac.uk/blog/success-is-more-than-a-character-difference/

Hall, K, 2017. ‘Europe Seeks Company to Monitor Google’s Algorithm in $10m Deal’, The Register, p.11

Halpern, S, 2017. ‘How He Used Facebook to Win’, NY Books, online, accessed 1st October 2017: http://www.nybooks.com/articles/2017/06/08/how-trump-used-facebook-to-win/

Hazen, D, 2017. ‘Google, Facebook, Amazon Undermine Democracy: They Play a Role in Destroying Privacy, Producing Inequality’, Salon, online, accessed 30th September 2017

Helbing, D, Bruno, S, Gigerenzer, G, Hafen, E, Hagner, M, Hofstetter, Y, van den Hoven, J, Zicari, RV & Zwitter, A, 2017. ‘Will Democracy Survive Big Data and Artificial Intelligence?’, Scientific American, online, accessed 23rd September 2017: https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/

Heller, N, 2016. ‘The Failure of Facebook Democracy’, The New Yorker, online, accessed 28th September 2017: https://www.newyorker.com/culture/cultural-comment/the-failure-of-facebook-democracy

House of Parliament, ‘Algorithms in Decision-Making Inquiry – Publications’, online, accessed 23rd September 2017: https://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/inquiries/parliament-2015/inquiry9/publications/

Howard, J, 2016. ‘Is Social Media Killing Democracy?’, Oxford Internet Institute, online, accessed 2nd October 2017: https://www.oii.ox.ac.uk/blog/is-social-media-killing-democracy/

Howie, P, 2011. ‘The End of the Google Democracy’, Fast Company, online, accessed 30th September 2017: https://www.fastcompany.com/1746616/end-google-democracy

Instagram, 2016. ‘See the Moments You Care About First’, Instagram, online, accessed 1st October 2017: http://blog.instagram.com/post/145322772067/160602-news

Introna, LD & Nissenbaum, H, 2000. ‘Shaping the Web: Why the Politics of Search Engines Matters’, The Information Society, pp.169-185

Jain, A, 2016. ‘Spread of Fake News on Facebook Eroding Democracy: Obama’, Newstex Global Business Blogs, online, accessed 27th September 2017

Kemp, S, 2017. ‘Digital in 2017: Global Overview’, WeAreSocial, online, accessed 1st October 2017: https://wearesocial.com/special-reports/digital-in-2017-global-overview

Kenningham, G, 2017. ‘Instagram and Snapchat are Vital Tools for Activating Democracy’, Cityam.com, accessed 1st October 2017: http://www.cityam.com/266023/instagram-and-snapchat-vital-tools-activating-democracy

Kirby, J, 2016. ‘Google Predicted Donald Trump Would Win the Election’, Macleans, online, accessed 30th September 2017: http://www.macleans.ca/politics/washington/google-predicted-donald-trump-would-win-the-election/

Kuper, S, 2017. ‘How Facebook is Changing Democracy’, Financial Times, online, accessed 28th September 2017: https://www.ft.com/content/a533d5ec-5085-11e7-bfb8-997009366969?mhq5j=e7

LaMarre, HL & Suzuki-Lambrecht, Y, 2013. ‘Tweeting Democracy? Examining Twitter as an Online Public Relations Strategy for Congressional Campaigns’, Public Relations Review, Volume 39, p.1

Lapowsky, I, 2016. ‘Here’s How Facebook Actually Won Trump the Presidency’, Wired, online, accessed 1st October 2017: https://www.wired.com/2016/11/facebook-won-trump-election-not-just-fake-news/

Levine, P, 2007. The Future of Democracy: Developing the Next Generation of American Citizens, UPNE, p.1

Levy, S, 2010. ‘How Google’s Algorithm Rules the Web’, Wired, online, accessed 15th August 2017: https://www.wired.com/2010/02/ff_google_algorithm/

Linehan, H, 2017. ‘Google, Facebook are a Threat to Democracy, says Press Council Chair’, Irish Times, 25th May 2017, p.11

Lua, A, 2017. ‘Understanding the Instagram Algorithm: 7 Key Factors and Why the Algorithm is Great for Marketers’, BufferApp, online, accessed 1st October 2017: https://blog.bufferapp.com/instagram-algorithm

MacArther, A, 2017, ‘The Real History of Twitter, in Brief’, LifeWire, online, accessed 2nd October 2017: https://www.lifewire.com/history-of-twitter-3288854

Makulilo, A, 2017. ‘Rebooting Democracy? Political Data-Mining and Biometric Voter Registration in Africa’, Information and Communications Technology

Marchi, R, 2012. ‘With Facebook, Blogs, and Fake News, Teens Reject Journalistic “Objectivity”’, Journal of Communication Inquiry, Volume 36, p.246

Matteson, S, 2014. ‘Google Turns in a User for Allegedly Possessing Criminal Material’, TechRepublic, online, accessed 30th September 2017: http://www.techrepublic.com

Merriam-Webster Dictionary, 2017. ‘Fake News’, online, accessed 27th September 2017: https://www.merriam-webster.com/words-at-play/the-real-story-of-fake-news

Miller, D, 2012. ‘Google: Let Us Opt Out of Your Data Mining Machine’, Wired, online, accessed 30th September 2017: https://www.wired.com/insights/2012/10/google-opt-out/

Newton, C, 2016. ‘Zuckerberg: The Idea That Fake News on Facebook Influenced the Election is “Crazy”’, The Verge, online, accessed 26th September 2017: https://www.theverge.com/2016/11/10/13594558/mark-zuckerberg-election-fake-news-trump

The Nudging Company, ‘Nudging and Behavioural Design’ online, accessed 23rd September 2017: https://thenudgingcompany.com/en/free-online-workshop-on-nudging-and-behavioral-design/

O’Neil, C, 2016. ‘Commentary: Facebook’s Algorithm vs. Democracy’, PBS, online, accessed 30th September 2017: http://www.pbs.org/wgbh/nova/next/tech/facebook-vs-democracy/

O’Neil, C, 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown Publishing

Oremus, W, 2016. ‘Who Controls Your Facebook Feed’, Slate, online, accessed 16th August 2017: http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.html

Oxford Reference, ‘Algorithm’, A Dictionary of Social Media, Oxford University Press

Pennington, N, 2013. ‘Facebook Democracy: The Architecture of Disclosure and the Threat to Public Life’, European Journal of Communication, p.193

Phillips, S, 2007. ‘A Brief History of Facebook’, The Guardian, online, accessed 26th September 2017: https://www.theguardian.com/technology/2007/jul/25/media.newmedia

Richey, S & Taylor, B, 2017. Google and Democracy, Taylor & Francis Ltd, p.1

Samler, J, 2017. ‘Why Facebook’s Algorithms are Destroying Democracy’, Harbus, online, accessed 30th September 2017: http://www.harbus.org/2017/facebooks-algorithms-destroying-democracy/

Shoval, N, 2017. ‘Facebook is Dangerous for Democracy – Here’s Why’, Mashable, online, accessed 28th September 2017: http://mashable.com/2017/07/17/facebook-social-media-dangerous-for-democracy/#pfqHTN6bRPqo

Silverman, C, 2016. ‘Hyperpartisan Facebook Pages Are Publishing False And Misleading Information At An Alarming Rate’, Buzzfeed, online, accessed 26th September 2017: https://www.buzzfeed.com/craigsilverman/partisan-fb-pages-analysis?utm_term=.gwJPAMm2Qm#.uj5PYxjKRj

Smith, C, 2016. ‘Why Facebook and Google Mine Your Data, And Why There’s Nothing You Can DO to Stop It’, BGR, online, accessed 30th September 2017: http://bgr.com/2016/02/11/why-facebook-and-google-mine-your-data-and-why-theres-nothing-you-can-do-to-stop-it/

Smith, P, 2017. ‘Dear Internet, Can We Talk? We Have an Information Pollution Problem of Epic Proportions’, Misinfocon, online, accessed 26th September 2017: https://misinfocon.com/dear-internet-can-we-talk-we-have-an-information-pollution-problem-of-epic-proportions-a1c31b600fdc

Spinney, L, 2017. ‘Facebook and Instagram, Blurring the Line Between Individual and Collective Memories’, Nature, Volume 543, p.168

Statt, N, 2017. ‘Mark Zuckerberg Just Unveiled Facebook’s New Mission Statement’, The Verge, online, accessed 26th September 2017: https://www.theverge.com/2017/6/22/15855202/facebook-ceo-mark-zuckerberg-new-mission-statement-groups

Strickland, J, 2017. ‘Why is the Google Algorithm So Important?’, How Stuff Works, online, accessed 30th September 2017: http://www.computer.howstuffworks.com/google-algorithm.htm

Stubb, A, 2017. ‘Why Democracies Should Care Who Codes Algorithms’, Financial Times, online, accessed 23rd September 2017: https://www.ft.com/content/0322c920-421b-11e7-9d56-25f963e998b2

Sultan, A, 2016. ‘Matchmaking Sites: An Algorithm of the Heart’, Sydney Morning Herald, online, accessed 23rd September 2017: http://www.smh.com.au/technology/sci-tech/matchmaking-sites-an-algorithm-of-the-heart-20160215-gmuztu.html

Tufekci, Z, 2015. ‘Facebook Said Its Algorithms Do Help Form Echo Chambers, and the Tech Press Missed It’, New Perspectives Quarterly, p.9

Vestager, M, 2017. ‘A Healthy Democracy in a Social Media Age’, European Commission, online, accessed 1st October 2017: https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/healthy-democracy-social-media-age_en

Vise, DA & Malseed, M, 2005. The Google Story, Delacorte Press, pp.1-10

White, J, 2012. Bandit Algorithms for Website Optimization, O’Reilly Media, pp.7-9

Worstall, T, 2013. ‘Google Is A Significant Threat To Democracy: Therefore It Must Be Regulated’, Forbes, online, accessed 30th September 2017: https://www.forbes.com/sites/timworstall/2013/04/02/google-is-a-significant-threat-to-democracy-therefore-it-must-be-regulated/#2523f85227c9

Zhang, J, Ackerman, MS & Adamic, L, 2007. ‘Expertise Networks in Online Communities: Structure and Algorithms’, Proceedings of the 16th International Conference on World Wide Web, ACM, p.221

Digital Technologies and the Erosion of Social Trust


Social trust and the negative impact of its decline have interested and concerned economists and political scientists for some time (Hakansson & Witmer 2015, p.517). As digital technology evolves, modern forms of media communication have become increasingly complex and discursive in terms of developing trust relations (Berry 1999, p.28), and concerns involving social trust and digital technology have become increasingly intertwined. Societies benefit from high levels of social trust, and while we are now communicating more quickly and in a greater variety of ways than ever before, it is not immediately obvious whether the many forms of digital technology and their rapidly-evolving natures have a positive or negative impact on the social trust within a society. Social trust relies on many factors, and while digital technology is far from being the only, or even major, factor influencing the amount of social trust within a society, it can play a major part. This essay will examine the question of whether digital technologies erode social trust, and the potential implications of digital technologies and related issues for social trust.

Social trust is a “belief in the honesty, integrity and reliability of others” (Taylor 2007, p.1). It provides the “cohesiveness necessary for the development of meaningful social relationships” (Welch 2001, p.3) and is highly important for both social and political reasons. The level of social trust within a society has implications in the fields of sociology, economics, psychology, anthropology and others. It contributes to a wide range of social phenomena and attributes, from stable government, social equity, market growth and public harmony to individual-level attributes such as optimism, physical and mental well-being, education, community, and participation (European Social Survey, online). Individuals benefit from being part of a society with high social trust, as well as contributing to, and participating in, it. Social trust is a “deep-seated indicator of the health of societies and our economies” (Halpern 2015, online) and, when averaged across a country, the levels of social trust “predict national economic growth as powerfully as financial and physical capital, and more powerfully than skill levels” (Halpern 2015, online). Abundant social trust in a society is often seen as “a lubricant facilitating all types of economic exchanges” (Krishna 2000, p.71).

In 1994 there were just 10,000 websites globally (Swire 2014, online). This changed with the launch of search engines – particularly market leader Google – as so-called ‘walled gardens’ such as AOL “were killed” (Swire 2014, online), allowing users to easily and quickly find what they were looking for. E-commerce exploded, and by 2001 well over 100 million Americans had purchased a product online (Mutz 2009, p.439). Blogs, chat websites, and early forms of social media followed, and broadband Internet began to increase in availability in 2005. Sites such as YouTube, which allowed users to upload and watch videos, became hugely popular, and social media emerged as a major online presence with the launches of Facebook and Twitter in 2004 and 2006 respectively. Smartphones (particularly Apple’s iPhone) brought the Internet to mobile phones, and by the early 2010s had “completely changed the way that people consume content on a daily basis” (Swire 2014, online). The majority of Internet time is now spent on mobile devices worldwide, and around 50% of people now get their news from a digital source such as a website, app or e-mail alert (American Press Institute 2016, online). The media’s role in mediating experience by bridging the gap between events and audiences is a broad but extremely important one (Berry 1999, p.28), and media organisations must now consider the presentation and delivery of their news more than ever, as users of digital media place high importance on both.

The Internet’s early architecture was built on a foundation of trust (Hurwitz 2013, p.1580), but as it matured, its uses and users became increasingly complex. Online social networks are now a major part of everyday life and the method by which many of us stay connected with friends, consume news, and conduct business. They are a prominent means by which people foster social connections, and the significance and depth of these connections, and their relationship with trust, have been extensively studied. The Internet’s transition from an early “community with a common purpose” to one that “supports myriad, often conflicting, private interests” (Hurwitz 2013, p.1580) has both positive and negative aspects, with corresponding effects on social trust.

Variation across individuals in their levels of trust in the Internet supports the view that the Internet is an ‘experience’ technology – users’ views of it are greatly shaped by their experience of it (Dutton & Shepherd 2003, p.7). The rapid proliferation of social media websites since the mid-2000s has reinforced this notion, as users’ experiences of social media can differ widely. It has been suggested that social networking websites should inform potential users that “risk-taking and privacy concerns are potentially relevant and important concerns” before they sign up to become members (Fogel & Nehmad 2009, p.153), as one of the major negative aspects of social networking sites is the potential for users to cause harm to other users, causing a drop in social trust. Internet users initially experience a high level of trust in online communities, but as time passes, trust rapidly declines (Parker 2015, online).

Social networking on the Internet takes place in a context of trust, but trust is a concept with many dimensions and facets (Grabner-Krauter & Bitter 2013, p.1). Studies suggest that the lay public relies on social trust when making judgements of risks and benefits where personal knowledge about a subject is lacking (Siegrist & Cvetkovich 2000, p.1), so Internet users entrust other Internet users with expertise, identity and personal information – some even with money lending (Lai & Turban 2008, p.387). This can cause distress or harm, with a corresponding drop in social trust. Trust in the Internet and the information obtainable from it is critical to the development of electronic services ranging from public service delivery to online commerce, and these are harmed if social trust is low.

However, Hakansson and Witmer (2015, p.518) argue that greater use of social media and an increase in the number and variety of online communities can affect social trust positively. They suggest that because information and knowledge are vital to building trust, and digital media transmits information much faster than face-to-face relationships, social trust can increase as a result. Social media also makes it easier to find new relationships and opportunities for marketing.

As the Internet has matured and the number of users suffering harm or having a negative experience online has increased, there have been growing calls for Internet providers to mediate use of the Internet, which has caused concern for people who place high value on privacy. Various methods have been proposed to calculate levels of, and manage, social trust in online social networks, but none has proved to work definitively (Carminati et al. 2014, p.16) – the sketch below illustrates one such approach. In today’s Internet, intermediaries are increasingly active (Hurwitz 2013, p.1581), and can protect users from experiencing harm online, thus preventing a drop in social trust. Parigi and Cook (2015, p.19) explain how digital technology operates as an assurance structure when mediation is a factor in interactions. Mediation “reduces overall uncertainty and promotes trust between strangers”. At the same time, it removes many of the human emotions connected with meeting new people. Social interactions become uniform and stripped of uncertainty or individuality, and are therefore devoid of the “cohesiveness necessary for the development of meaningful social relationships” (Welch 2001, p.3) that high social trust requires.
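
To make the idea of computed trust concrete, one family of proposed approaches infers unknown trust values by propagating direct ratings along network paths. The following is a minimal illustrative sketch, not any published metric; all names and ratings are invented:

```python
# Minimal sketch of path-based trust inference: estimate how much 'alice'
# should trust 'dave' by combining her direct ratings with her contacts'
# ratings of him. Invented data; real proposals are far more elaborate.
RATINGS = {  # direct trust ratings in [0, 1]
    ("alice", "bob"): 0.9, ("alice", "carol"): 0.6,
    ("bob", "dave"): 0.8,  ("carol", "dave"): 0.2,
}

def inferred_trust(src, dst):
    """Average the products of ratings along all two-hop paths src -> x -> dst."""
    paths = [RATINGS[(src, x)] * RATINGS[(x, dst)]
             for (a, x) in RATINGS if a == src and (x, dst) in RATINGS]
    return sum(paths) / len(paths) if paths else None

print(inferred_trust("alice", "dave"))  # (0.9*0.8 + 0.6*0.2) / 2 = 0.42
```

Even this toy version shows the fragility Carminati et al. describe: a single inflated or malicious rating shifts every inferred score downstream of it.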

An additional concerning element of the proliferation of intermediaries is that it can often be unclear “which institutions, if any, safeguard users from harm” (Hurwitz 2013, p.1581). In the post-trust Internet, users “cannot embrace active intermediaries without assurances that their data will be handled in accordance with their expectation” (Hurwitz 2013, p.1582). Moving forward, the very nature of the Internet is what allows it to thrive, yet also makes establishing liability for intermediaries extremely difficult. A recent study showed that 48% of Americans expressed concern about corporate intrusion in their Internet activities (Brynko 2011, p.11).

In many cases, attempts to regulate digital technologies can erode social trust. In democratic societies, it is the role of legislators to defend and promote the public interest, but Australia is rare among Western democracies in that it has no constitutional guarantee of media freedom or free expression (Pearson 2012, p.99). Generally, journalists prefer to run their own affairs by creating systems of self-regulation (White 2014, p.4), but are often subject to intense scrutiny. In Australia, a proposed 2010 federal government review was meant to map out the future of media regulation in the digital era (Conroy 2010, online), but fell by the wayside after the News of the World phone hacking scandal shifted attention back to print media (Pearson 2012, p.99). Further government inquiries in 2011 and 2012 sought to establish the extent to which rapidly developing news businesses and their digital platforms required regulation, but no obvious solution was reached (Pearson 2012, p.99). The lack of a written guarantee of media freedom in Australia means that any attempt to regulate media is more of a threat to democracy, and hence to social trust. Enforced self-regulation “is not a suitable option – at least not until free expression earns stronger protection” (Pearson 2012, p.99). A UK study found that current regulation of the Internet is “failing to address the democratic value in enabling citizens to navigate … public space” and “failing to support informed choices about content” (Fielden 2011, p.99).

While the Internet has no guarantees of freedom from regulation, it presents many challenges to those seeking to regulate it. A lack of centralised control, widely-used encryption techniques, its international nature, and the anonymity of its users are just a few of the factors which make regulation of the Internet incredibly difficult. While cyberspace has been described as “a terra nullius in which social relations and laws have no historical existence and must be reinvented” (Chenou 2014, p.205), the nature of the Internet, and therefore its effect on the social trust of a nation or group of people, varies greatly depending on location. For example, Australia has legislation prohibiting abuse of market power to lessen competition, whereas in the United States these laws are not as stringent.

However, not all legislation involving regulation of digital technology is likely to decrease social trust. It could be argued that the Spam Act 2003 is likely to prevent a decrease in a society’s social trust, as it prohibits much online fraud and encourages self-regulation by users. Similarly, regulation of cyberspace for children is almost universally accepted as a reasonable form of mediation in digital technology, with no decrease in social trust likely as a result. While the Australian Labor Party’s 2007 proposal for a blanket ban on content deemed harmful to children was rejected, further legislation has been implemented to protect children online in Australia with the Enhancing Online Safety for Children Act 2015. In the United Kingdom, a 2008 report by the government’s Culture, Media and Sport Select Committee expressed concern about the amount of time taken for the most extreme content to be removed from video-sharing websites such as YouTube (Fielden 2011, p.78). While YouTube introduced a ‘safety mode’ in 2010 to address concerns over parental controls, there is still much concern over the amount of inappropriate material children can access, and the lack of regulation faced by the hosts of this material. As so much data is uploaded to sites such as YouTube every minute, it is physically impossible for every piece of content to be checked, so the future of online content regulation for such sites is, essentially, crowdsourced (Fielden 2011, p.77) – a mechanism sketched after this paragraph. The YouTube community guidelines state: “Every new community feature on YouTube involves a certain level of trust. We trust you to be responsible, and millions of users respect that trust, so please be one of them” (YouTube, online). Discussions at government level concerning further regulation of online content continue in many Western democracies.
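
Crowdsourced moderation of this kind typically queues content for human review once user flags pass a threshold. The sketch below shows the general mechanism only; the threshold, weights and data are invented, and this is not YouTube’s actual system:

```python
# Sketch of threshold-based, crowdsourced content moderation (parameters
# invented). Flags from historically reliable reporters carry more weight,
# and content crossing a threshold is queued for human review.
REVIEW_THRESHOLD = 2.0

def needs_review(flags, reporter_accuracy):
    """Weight each flag by the reporter's past accuracy (default 0.5)."""
    score = sum(reporter_accuracy.get(user, 0.5) for user in flags)
    return score >= REVIEW_THRESHOLD

accuracy = {"u1": 0.9, "u2": 0.2}
print(needs_review(["u1", "u2"], accuracy))              # 1.1 -> False
print(needs_review(["u1", "u2", "u3", "u4"], accuracy))  # 2.1 -> True
```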

Another area with potential for eroding social trust is copyright. Copyright has developed over centuries, and friction between users of digital technologies and regulatory bodies has existed for as long as digital technology has been a medium for communication. The digital age has made many traditional modes of reproduction of intellectual property obsolete, and despite many positive aspects of faster and more widely available communication options, methods of creativity and ownership have been tested in profound ways (Fitzgerald 2008, online). The Digital Millennium Copyright Act 1998 strengthened criminal penalties for copyright infringement on the Internet, but has attracted criticism for overzealous application of its powers and for undermining free speech, and therefore has the potential to erode social trust. Copyright activists argue that overzealous use of copyright laws online restricts access to information (Lessig 2008, online). Organisations such as Creative Commons and the Electronic Frontier Foundation (EFF) provide alternatives to copyright, and aim to protect the public interest regarding new technologies (Lambe 2014, p.448). The EFF is especially active in the fields of intellectual property, free speech, anti-surveillance, and bloggers’ rights, and has been in legal disputes with several commercial entities and law enforcement agencies as a result.

Today, every social media user is a publisher of sorts (Cuddy 2016, online). Social media provides instant access to potentially huge audiences, and huge potential for copyright infringement too. Social networking sites pose perhaps the greatest copyright-related risk to social trust, giving users who share their creative work with the world a platform on which it can be stolen and used by others (Legal Aid NSW 2017, online). Copyright law in Australia covers works that are created or shared online, but a social media website’s terms and conditions may change the rights to the work, and these conditions are not always clear or understood.

Another element of digital technologies with vast potential to erode social trust is government and corporate Internet surveillance. Post 9/11, the United States government and its federal agencies greatly increased surveillance of citizens online and introduced a large amount of cybersecurity legislation as part of their anti-terrorism policy (Nhan & Carroll 2012, p.394). Many watchdog groups expressed concern as a result, although the effect of the legislative and policy changes was perhaps unclear until NSA whistleblower Edward Snowden leaked information about government surveillance of private citizens’ online information and habits. In 2014, a survey found that 60% of respondents had heard of Snowden, and that 39% had changed their online behaviour as a result of the information he leaked (Jardine & Hampson 2016, online). Jardine and Hampson (2016, online) also found that many people’s routine online activity had changed substantially, the most common change being a move from ‘public’ search engines to private search engines with built-in anonymity technology. Similarly, recent scandals in the United States exposing government surveillance of citizens’ online information are likely to have greatly eroded trust in digital media, and thus social trust (Anderson & Rainie 2014, p.20). This supports the theory that digital technology has a negative effect on social trust (Hakansson & Witmer 2015, p.518).

There are many real-life examples of digital technology affecting democracy worthy of study, and many of them display potential to erode social trust. Govier (1997, p.20) points out that distrust in politics is “especially prevalent, and, while it may be well-founded, can have pernicious effects” on a society. The 2016 United States presidential election saw the Electronic Frontier Foundation involve itself in an attempt to force a recount in three key states after computer scientists raised concerns that voting machines and optical scanners may have been manipulated (Hoffman-Andrews 2016, online), potentially affecting the overall result of the election. In its role as the Fourth Estate, the media is hypothetically the guardian of the public interest and the regulator of those holding democratic power. However, as Coronel (2003, p.9) explains, the media are often used “in the battle between rival political groups, in the process sowing divisiveness rather than consensus, hate speech instead of sober debate, and suspicion rather than social trust”. In these cases, media contribute to public cynicism and apathy, have a negative effect on democratic processes, and hence cause a decline in social trust.

President Trump’s first 100 days in office saw him launch numerous verbal attacks on the media, which have likely eroded social trust for many Americans, but interestingly, polls have provided conflicting results on whether the American public trusts the media or the President more (Farber 2017, online; Lima 2017, online; Patterson 2017, online). The goals of advocates for free speech online and anti-regulation groups are often intertwined with those seeking political reform, and those active under the current political administration are no different. Ericson (2015, online) goes so far as to say that Lawrence Lessig has “already transformed intellectual-property law with his Creative Commons innovation, and now he’s focused on an even bigger problem: the US’ broken political system”.

In conclusion, societies function on the basis of trust, and users of digital technology are no different: social trust is paramount to a well-functioning democracy. For a high level of social trust to be maintained, users need to trust the Internet and associated digital technologies to keep their information secure and private. Trust is the bedrock of the Internet, the basis for much of its success, and, in many ways, the philosophy behind much of what keeps it running. However, the Internet provides many opportunities for social trust to be eroded, and trust in digital technologies, and especially the Internet, is arguably declining. When trust in digital technology starts to wane, or government agencies or organisations are shown to be breaching privacy or perceived as dishonest, users change how they behave and social trust declines. Recent copyright and regulatory conflicts, and scandals involving surveillance and privacy, have likely had a negative effect on social trust in many Western democracies. The resulting drop in social trust harms a society in terms of public harmony, economics, and other areas. Social cohesion can be established or demolished by high or low social trust.

References

American Press Institute, 2016. ‘How People Decide What News to Trust on Digital Platforms and Social Media’, online, accessed 13th May 2017: https://www.americanpressinstitute.org/publications/reports/survey-research/news-trust-digital-social-media/

Anderson J & Rainie L, 2014. ‘The Future of the Internet: Net Threats’, Internet and American Life Project, Pew Research Centre, p.20

Berry, D, 1999. Ethics and Media Culture: Practices and Representations, Taylor and Francis, p.28

Brynko, B, 2011. ‘Trust in Social Networking’, Information Today, p.11

Carminati, B, Ferrari, E & Viviani, M, 2014. Security and Trust in Online Social Networks, p.16

Chenou, JM, 2014. ‘From Cyber-Libertarianism to Neoliberalism: Internet
Exceptionalism, Multi-stakeholderism, and the Institutionalisation of Internet Governance in the 1990s’, Globalizations, pp.205-223

Conroy, S, 2010. ‘Convergence Review’, media release, accessed 17th May 2017: http://www.minister.dbcde.gov.au/media/media_releases/2010/115

Coronel, S, 2003. ‘The Role of Media in Deepening Democracy’, United Nations, online, accessed 13th May 2017: http://unpan1.un.org/intradoc/groups/public/documents/un/unpan010194.pdf

Cuddy, RH, 2016. ‘Copyright Issues for Social Media’, Legal Zoom, online, accessed 20th May 2017: https://www.legalzoom.com/articles/copyright-issues-for-social-media

Dutton, WH & Shepherd, A, 2003. ‘Trust in the Internet: The Social Dynamics of an Experience Technology’, Oxford Internet Institute, University of Oxford, p.7

Ericson, B, 2015. ‘Lawrence Lessig: Laws that Choke Creativity’, Communication and New Media, online, accessed 20th May 2017: https://medium.com/communication-new-media/lawrence-lessig-laws-that-choke-creativity-4aa99ded4ce4

European Social Survey, 2017. ‘Social Trust and Its Origin’, online, accessed 13th May 2017: http://essedunet.nsd.uib.no/cms/topics/2/1/

Farber, M, 2017. ‘Sorry President Trump, But Voters Trust the Media More Than You’, Fortune, online, accessed 20th May 2017: http://fortune.com/2017/02/23/voters-trust-media-more-than-trump/

Federal Register of Legislation, Australian Government, ‘Enhancing Online Safety for Children Act 2015’, online, accessed 17th May 2017: https://www.legislation.gov.au/Details/C2015A00024/Controls/

Federal Register of Legislation, Australian Government, Spam Act 2003, online, accessed 17th May 2017: https://www.legislation.gov.au/Series/C2004A01214

Fielden, L, 2011. ‘Standards Regulation in the Age of Blended Media’, Regulating for Trust in Journalism, City University London, pp.78,99

Fitzgerald, B, 2008. ‘Copyright 2010: The Future of Copyright’, European Intellectual Property Review, accessed 20th May 2017: http://eprints.qut.edu.au/

Fogel, J & Nehmad, E, 2009. ‘Internet Social Network Communities: Risk Taking, Trust, and Privacy Concerns’, Computers in Human Behaviour, p.153

Govier, T, 1997. Social Trust and Human Communities, MQUP: Montreal, p.20

Grabner-Krauter, S & Bitter, S, 2013. ‘Trust in Online Social Networks: A Multifaceted Perspective’, Forum for Social Economics, Volume 44, pp.48-68

Hakansson, P & Witmer, H, 2015. ‘Social Media and Trust – A Systematic Literature Review’, Journal of Business and Economics, University of Malmo, pp.517-524

Halpern, D, 2015. ‘Social Trust is One of the Most Important Measures That Most People Have Never Heard Of – and It’s Moving’, The Behavioural Insights Team, online, accessed 13th May 2017: http://www.behaviouralinsights.co.uk/uncategorized/social-trust-is-one-of-the-most-important-measures-that-most-people-have-never-heard-of-and-its-moving/

Hoffman-Andrews, J, 2016. ‘Election Audits Ought to Be Like an Annual Checkup, Not a Visit to the Emergency Room’, Electronic Frontier Foundation, online, accessed 20th May 2017: https://www.eff.org/deeplinks/2016/12/audit-better-faster-cheaper

Hurwitz, J, 2013. ‘Trust and Online Interaction’, University of Pennsylvania Law Review, Volume 161, pp.1580-1590

Jardine, E & Hampson, F, 2016. ‘Trust: The Social Basis of the Internet Ecosystem’, Tripwire, online, accessed 19th May 2017: https://www.tripwire.com/state-of-security/security-awareness/trust-social-basis-internet-ecosystem/

Krishna, A, 2000. ‘Creating and Harnessing Social Capital’, Social Capital: A Multifaceted Perspective, pp.71-93

Lai, LS & Turban, E, 2008. ‘Groups Formation and Operations in the Web 2.0 Environment and Social Networks’, Group Decision and Negotiation, pp.387–402

Lambe, J, 2014. ‘Electronic Frontier Foundation’, Encyclopaedia of Social Media and Politics, California: SAGE Publications, pp.448-449

Legal Aid NSW, ‘Online Social Networking: Copyright’, online, accessed 20th May 2017: http://www.legalaid.nsw.gov.au/publications/factsheets-and-resources/online-social-networking-copyright

Lessig, L, 2008. Remix: Making Art and Commerce Thrive in the Hybrid Economy, online, accessed 20th May 2017: https://archive.org/stream/LawrenceLessigRemix/Remix-o.txt

Lima, C, 2017. ‘Poll: Trump Administration Edges Media in Voter Trust’, Politico, online, accessed 20th May 2017: http://www.politico.com/story/2017/02/trump-media-trust-poll-fox-news-235168

Mutz, D, 2009. ‘Effects of Internet Commerce on Social Trust’, Public Opinion Quarterly, pp.439-461

Nhan, J & Carroll, B, 2012. ‘The Offline Defence of the Internet: An Examination of the Electronic Frontier Foundation’, SMU Science and Technology Law Review, Volume 15, pp.389,394

Parigi, P & Cook, K, 2015. ‘Trust and Economics in the Sharing Economy’, Viewpoints on the Sharing Economy, Sage Journals, p.19

Parker, C, 2015. ‘Trust Erodes Over Time in the Online World, Stanford Experts Say’, Stanford News, online, accessed 13th May 2017: http://news.stanford.edu/2015/03/18/sharing-trust-online-031815/

Patterson, T, 2017. ‘News Coverage of Donald Trump’s First 100 Days’, Harvard Kennedy School, online, accessed 20th May 2017: https://shorensteincenter.org/news-coverage-donald-trumps-first-100-days/?utm_source=POLITICO.EU&utm_campaign=ab6d830a9d-EMAIL_CAMPAIGN_2017_05_19&utm_medium=email&utm_term=0_10959edeb5-ab6d830a9d-189799085

Pearson, M, 2012. ‘The Media Regulation Debate in a Democracy Lacking a Free Expression Guarantee’, Pacific Journalism Review, Griffith University, p.99

Siegrist, M & Cvetkovich, G, 2000. ‘Perceptions of Hazards: The Role of Social Trust and Knowledge’, Risk Analysis, p.1

Swire, R, 2014. ‘The Evolution of Digital Media Over the Past 20 Years’, Parallax, online, accessed 18th May 2017: https://parall.ax/blog/view/3052/the-evolution-of-digital-media-over-the-past-20-years

Welch, MR, 2001. ‘Determinants and Consequences of Social Trust’, Sociology, University of Notre Dame Press, p.3

White, A, 2014. ‘The Trust Factor: An EJN Review of Journalism and Self-regulation’, Ethical Journalism Network, online, accessed 20th May 2017: http://ethicaljournalismnetwork.org/assets/docs/142/118/79dd78e-837b376.pdf

YouTube, 2017. ‘Community Guidelines’, online, accessed 20th May 2017: https://www.youtube.com/yt/policyandsafety/communityguidelines.html


Dark Tourism and Mass Media

[Image: the Killing Fields, Cambodia]

A large amount of tourism literature deals with the marketing and consumption of “pleasant diversions in pleasant places” (Strange & Kempa 2003, p.386), but a number of communications scholars have recently attempted to explore tourism sites of a darker nature. This has helped popularise the form of travel known as dark tourism: tourism which provides “potential spiritual journeys for [those] who wish to gaze upon real and recreated death” (Stone 2006, p.54). In modern Western societies, normal death is hidden from public consumption, yet “extraordinary death is recreated for popular consumption” (Stone 2012, p.1565). Marketing of dark tourism often overlaps with historical or heritage tourism (Mullins 2016, online), and can present promoters with challenges not present with the tourism of ‘pleasant diversion’. This essay will examine some of those challenges and the relationship between mass media and dark tourism in the context of this rapidly developing tourism form.

Dark tourism has a long history, having existed since the earliest pilgrimages and times when people would travel to witness public executions (Jahnke 2013, p.6). When academic research on the topic became significant in the 1990s, at the same time as growing numbers of tourists were seeking these new experiences, the complexities of dark tourism’s relationship with mass media became apparent. Just as all cultural production and consumption is complex and dynamic, the production and consumption of dark tourism has been described variously as “continuous and interrelated as demand appears to be supply‐driven and attraction‐based” (Farmaki 2013, p.281), fuelled by “an increasing supply of carnage and blood” online (Hiebert 2014, online), driven by factors “extend[ing] from an interest in history and heritage to education to remembrance” (Yuill 2004, p.1), and as a “source of private pleasure” (Seaton 1996, p.235).

The issue of how death is presented to mass audiences is particularly complex. In the realm of dark tourism, media can bring about a “neutralisation of death” (Jahnke 2013, p.8), helping tourists to become more aware of the mortality of others and themselves – a mental state which Stone (2012, p.1565) describes as “a space to construct contemporary ontological meanings of mortality”. In many ways, mass media and dark tourism are “in the same business” (Walter 2009, p.41) in that they both mediate death to mass audiences. Many Western societies have relinquished their attachments to the dead, yet retain a vibrant interest in history (Walter 2009, p.40) and the people who inhabited familiar spaces, setting the stage for two key industries to bridge the gap between the dead and the contemporary living: mass media and tourism.

Mass media plays a central role in marketing many dark tourism sites, using tourism literature, Hollywood films, television, newspapers, and comic strips in the role of public relations. Similarly, mass media can keep other sites from public view (Yuill 2004, p.125). By placing sites and events in the forefront of communications, mass media have the ability to attract visitors to dark tourism destinations. Media can provide the public with a general understanding of, and encourage an interest in, dark tourism sites, although Seaton and Lennon (2004, p.62) describe how many Western media outlets tend towards creating a moral panic around dark tourism sites through “sensational exposés of dubiously verified stories”: the result of moral debates about dark tourism within society.

At the same time as promoting and marketing dark tourism destinations, mass media has a distinct influence over public opinion and interpretation of many sites of dark tourism (Ntunda 2014, online). New media technologies can “deliver global events into situations that make them appear to be local” (Lennon & Foley 2000, p.46), embodying simulation and interpretation of historical experiences for a mass audience. Public perception of the importance or prominence of dark tourism sites may also be affected by mass media. Dachau concentration camp, for example, was not one of the largest Nazi extermination camps, yet is one of the most visited, due to its appearance in many films and books (Young 1993, p.10). However, while media is central to understanding and interpreting historical events, it can cause dissatisfaction brought about by constant exposure to simulation (Lennon & Foley 2000, p.47). This can often be countered by the reality of visiting a permanent ruin, monument or preserved space.

Motivations of visitors travelling to dark tourism destinations are varied, and often not directly related to mass media. The need to reconcile comparisons between imagined landscapes and topographical reality (Podoshen 2012, p.263), an interest in history and heritage, educational reasons, collective and personal remembrance (Dunkley & Morgan 2010, p.860), and emotional attachment to a place (Rasul & Mowatt 2011, p.1410), among others, can be important factors encouraging dark tourism. Biran and Hyde (2013, p.191) suggest the primary motivation for many dark tourism participants is to “contemplate life and one’s mortality through gazing upon the significant other dead”, fitting with Stone’s (2012, p.1565) description of dark tourism destinations as “space[s] to construct contemporary ontological meanings of mortality”. Additionally, in the past two decades, many tourists have sought to escape the “sanitised version of reality that tourism has traditionally offered” (Robb 2009, p.51); with many no longer content to lounge by the pool or hotel bar, or embark on guided tours. It could perhaps be argued that each of these motivations could be influenced by mass media to varying degrees, but media is unlikely to be the main driving force. It is also problematic to group all dark tourism destinations together under one category, making it just as difficult to group together motivations for visiting them. Representations of death are unique from site to site and often from visitor to visitor (Robb 2009, p.51). Indeed, many managers of dark tourism sites no longer wish their destinations to be viewed as dark, but as sites of sensitive heritage with a focus on social engagement (Magee & Gilmore 2014, p.898).

In conclusion, it can be said that, despite many challenges, mass media plays a part in encouraging tourists’ interest in dark tourism sites, although it is neither the only, nor arguably the major, driving force in promoting dark tourism destinations. Dark tourism sites are cultural landscapes which can be interpreted in many ways, as can tourists’ motivations for visiting them. Visitors to dark tourism destinations seek a variety of meanings from their experience, and their reasons for visiting sites of real or recreated death are numerous and varied. Dark tourism remains a complex phenomenon, in terms of both its consumption and supply and its relationship with mass media.

References

Biran, A & Hyde, K, 2013. ‘New Perspectives on Dark Tourism’, International Journal of Culture, Tourism and Hospitality Research, pp.191-198

Dunkley, R & Morgan, N, 2010. ‘Visiting the Trenches: Exploring Meanings and Motivations in Battlefield Tourism’, Tourism Management, pp.860-868

Farmaki, A, 2013. ‘Dark Tourism Revisited: A Supply/Demand Conceptualisation’, International Journal of Culture, Tourism and Hospitality Research, p.281

Hiebert, P, 2014. ‘The Growing Quandary of Dark Tourism’, Pacific Standard, online, accessed 9th January 2017: https://psmag.com/the-growing-quandary-of-dark-tourism-733629dd26c5#.xcwen7dal

Jahnke, D, 2013. ‘Dark Tourism and Destination Marketing’, Theseus.Fi, online, accessed 7th January 2016: https://www.theseus.fi/handle/10024/64693

Lennon, J & Foley, M, 2000. ‘Interpretation of the Unimaginable: The U.S. Holocaust Memorial Museum, Washington, D.C., and “Dark Tourism”’, Dark Tourism, pp.46-50

Magee, R & Gilmore, A, 2014. ‘Heritage Site Management: From Dark Tourism to Transformative Service Experience’, The Service Industries Journal, p.898

Mullins, D, 2016. ‘What is Dark Tourism?’, Cultural Tourism, online, accessed 7th January 2016: http://culturaltourism.thegossagency.com/what-is-dark-tourism/

Ntunda, J, 2014. ‘Investigating the Challenges of Promoting Dark Tourism in Rwanda’, Anchor Academic Publishing, online, accessed 7th January 2016: http://www.anchor-publishing.com/e-book/277349/investigating-the-challenges-of-promoting-dark-tourism-in-rwanda

Podoshen, J, 2012. ‘Dark Tourism Motivations: Simulation, Emotional Contagion and Topographic Comparison’, Tourism Management, pp.263-271

Rasul, A & Mowatt, C, 2011. ‘Visiting Death and Life: Dark Tourism and Slave Castles’, Annals of Tourism Research, p.1410

Robb, E, 2009. ‘Violence and Recreation: Vacationing in the Realm of Dark Tourism’, Anthropology and Humanism, p.51

Seaton, AV, 1996. ‘Guided by the Dark: From Thanatopsis to Thanatourism’, International Journal of Heritage Studies, pp.234-244

Seaton, AV & Lennon, J, 2004. ‘Thanatourism in the Early 21st Century: Moral Panics, Ulterior Motives and Ulterior Desires’, in TV Singh (ed.) New Horizons in Tourism: Strange Experiences and Stranger Practices, pp.62–82

Stone, P, 2012. ‘Dark Tourism and Significant Other Death: Towards a Model of Mortality Meditation’, Annals of Tourism Research, Vol. 39, p. 1565

Stone, P, 2006. ‘A Dark Tourism Spectrum: Towards a Typology of Death and Macabre Related Tourist Sites, Attractions and Exhibitions’, Tourism: An Interdisciplinary International Journal, p.54

Strange, C & Kempa, M, 2003. ‘Shades of Dark Tourism: Alcatraz and Robben Island’, Annals of Tourism Research, pp.386–405

Walter, T, 2009. ‘Dark Tourism: Mediating Between the Dead and the Living’, The Darker Side of Travel: The Theory and Practice of Dark Tourism, pp. 39-55

Young, JE, 1993. The Texture of Memory: Holocaust Memorials and Meaning, New Haven: Yale University Press, p.10

Yuill, S, 2004. Dark Tourism: Understanding Visitor Motivation at Sites of Death and Disaster, Texas A&M University, pp.1-125


The Punk Movement in the Realms of Subculture, Fashion and Style

Style and fashion play important roles in distinguishing one social group from another and from the rest of society, and are vital in giving individuals and groups both a sense of belonging and a sense of being unique. Through sartorial and behavioural choices, group identification is produced, and just as fashion encodes style, members of a group bearing a particular fashion reinforce their tribalism. Simultaneously able to be both a whimsical pleasure or novelty and a bold social or political statement, fashion is, in modern society, a functional equivalent to good taste, although the idea of using dress to distinguish oneself is age-old. With their track record of traversing class and social status, fashion and style can be discussed in relation to individuals and groups as diverse as monarchs and heads of state, and gatherings of fans of a particular band or genre of music. This essay will examine the punk fashion and youth movement of the late 1970s in Britain and America within the youth culture that formed it and that it influenced, including how it was received by the wider Western society of the time, and its long-term impacts on Western society as a whole.

There is a strong argument that fashion did not exist in any major sense before the growth of capitalism and the formation of industrial cities in Western Europe, although there is some evidence that ancient Roman and Greek ideas of fashion remained largely static (Wilson 1985, p.16). By the fourteenth century, trade expansion, the growth of urban life, and the increasing sophistication of aristocratic and royal courts had led to an increase in tailoring (Wilson 1985, p.16). Communications technologies introduced at the end of the nineteenth century helped spread knowledge of the latest fashions worldwide, and fashion and style have been a part of Western societies ever since.

Although closely linked, fashion and style can be defined in different ways. A dictionary definition of fashion is “a popular or the latest style of clothing, hair, decoration or behaviour” (Oxford Dictionary, online), whereas style is defined as “a manner or way” (Oxford Dictionary, online). However, the terms have much deeper meanings when explored further and compared.

While Gronow (1993, p.89) describes fashion as a “socially acceptable and safe way to distinguish oneself from others and, at the same time, it satisfies the individual’s need for social adaptation and imitation”, Mauss (1973, p.70) goes further, explaining how even the most mundane bodily activity is a cultural technique. Fashion and style can affect societies and cultures on a much grander scale than the individual; a person’s fashion choices do not merely represent their taste in clothes or hairstyle, but the attitude they adopt to the world and the people and objects with which they choose to surround themselves (Merleau-Ponty 2004, p.63). With the arrival of mass communications technologies in Western societies, it became possible for individuals or entire subcultures to become famous on national or international scales, and for individuals and groups to seek fame through fashion and style.

Style, in a broad sense, has been defined as “the counter-hegemonic practices of youth subcultures” (Hebdige 1979, p.2) and, in Hebdige’s description of style in the realm of subcultures, style is a form of social refusal or “criminal art” (1979, p.2). Like fashion, the concept of style can be relevant when discussing both individuals and cultures.

The idea of ‘youth’ appeared in post-war Britain as one of the most obvious social changes, and the social landscape changed accordingly. The appearance of youth brought about new legislation and official interventions, and was signified as something “we ought to do something about” (Jefferson 1989, p.10). Youth was a metaphor for social change in ways which took many years to pinpoint – an idea aided by media constructions and exaggerations about what was organic and what was forced (Gramsci 1971, p.177). Images of youth were self-destructive, misdirected, criminal, impressionable, apathetic, victimised, cool, and cutting edge (Wilson 2006, p.5). As cultural social groups within the arena of youth developed, identified by their distinct patterns of life, they formed ideas about the meanings and values embodied in institutions and traditional customs (Jefferson 1989, p.11). Youth subcultures formed as “crimes against the social order” (Hebdige 1979, p.3), perpetuated by a change of clothes or hairstyle, or the adoption of fandom of a particular type of music or band.

Subculturalists have been described in many different ways, as both “postmodern in their identification with fragmentation and heterogeneity” and “modern in their commitment to individual freedom and self-expression” (Brodie-Smith 2000, p.174). It has also been argued that subculturisation is the result of urbanism; cities having large heterogeneous populations and thus weaker interpersonal ties (Fischer 1972, p.187). These newly-formed groups engaged in a struggle over cultural ‘space’ and expressed themselves in new ways, but were not able to solve many of the problems associated with the peripheral social position of youth (Hall & Jefferson 1976, p.1).

The exact definition of a subculture is always in dispute and its boundaries remain a problem, but the concept of style is important, as it is “the area in which the opposing definitions clash with the most dramatic force” (Hebdige 1979, p.3). Indeed, possibly the most important aspect of a subcultural group is its use of symbolic style (Brake 2013, p.12), with the dominant values of style being image, demeanour and ‘argot’ – a special vocabulary and how it is delivered (Brake 2013, p.13). The nature of subcultural groups’ clothes is very complex: they are the “system of signals by which [they] broadcast [their] intentions, projection of [their] fantasy selves, weapons, challenges, insults” (Carter 1967, p.10).

Historian Jon Savage wrote that “Many of the people whose lives were touched by punk talk of being in a state of shock ever since” (1991, p.4). It is generally accepted that the punk movement began in America in the early 1970s, but it came to be perhaps most closely associated with Britain in the mid- and late 1970s: a time when an economic recession, with its high unemployment and increase in poverty-line living conditions, provided a catalyst for a new youth movement. When John Lydon – then known as Johnny Rotten – wrote the lyrics to his band the Sex Pistols’ single ‘God Save the Queen’ in late 1976, at a time when young working-class English people were facing grim economic prospects, little did he know of the cultural and social impact his band and songs would have. The social meanings created by English punk bands like the Sex Pistols, The Clash, The Slits and The Damned, their American counterparts the New York Dolls and the Ramones, and Australia’s The Saints have been pored over in the ensuing decades, and for good reason: very few youth subcultures have had such an impact on Western society as punk.

“Punk” is a vague concept, but its origins can be traced to the 1960s as a reaction to the cultural landscape of the time. It was “a subculture that scornfully rejected the political idealism and Californian flower-power silliness of the hippy myth” (Christgau 1976). It also followed the lead of much of the mod youth movement; bands like the MC5 and The Stooges brought a stripped-down version of rock and roll into the arena of popular music. There is an intersection between youth and extreme fashion as a method of asserting an attitude of dissent in times of crisis (Fury 2016, p.230), and strong visual styles accompanied punk’s music, with the American bands sporting mainly black leather jackets and blue jeans, and the British bands tending towards ripped shirts, safety pins, Nazi imagery, and bondage wear in a self-mocking, shocking image (Isler & Robbins 2007, p.23). ‘Porn chic’ was a style of punk clothing which can be viewed as a critique of patriarchal fashion codes, giving female punks a new basis of empowerment and authenticity (Langman 2008, p.1). The power of ‘otherness’ was deliberately harnessed as a tool of protest – a way to provoke and agitate. A post-modern society, transformed by evolving fashions, music, and attitudes, emerged as a challenge to the status quo: the prevailing social and cultural positions of modern life (Chambers & Cohen 1990, p.143).

The British punk movement was a much more politicised version of the American movement, and arguably had a greater cultural impact in its own country. However, it could also be argued that the British movement would not have happened without the American movement occurring first (Henry 1984, p.30). As a social movement it was considered fresh and exciting by many young people, with the common feeling that “it made one feel that maybe music had some sort of relevant part to play in one’s life” (Vermorel 2006, p.1).

The subculture’s high point was reached between 1976 and 1979, but throughout this period it had no set ideology or agenda (Sabin 2002, p.2). However, certain attitudes were prevalent across this time-frame and were common across all geographical locations where the subculture was apparent, including an awareness of class politics, a belief in spontaneity or “do it yourself”, and a focus on negationism (Sabin 2002, p.2). It is generally accepted that the movement ended in 1979, when other youth subcultures became more prevalent as fashion and social culture evolved. The movement’s most prominent band, the Sex Pistols, broke up in acrimony in January 1978 after a chaotic and shambolic North American tour. It has been claimed that the movement died with Sex Pistols bassist Sid Vicious, who overdosed on heroin in early 1979, shortly after the stabbing death of his girlfriend, Nancy Spungen, in unexplained circumstances (Sabin 2002, p.2).

In the summer of 1976 the punk movement gathered speed as the number of participants swelled, reproducing the “entire sartorial history of post-war working class youth cultures in ‘cut-up’ form” (Hebdige 1979, p.27). The rhetoric of punk as a subculture was steeped in apocalyptic words, many of which were painted or stitched brazenly across garments in the style of the movement, yet the movement as a whole was obviously innocent of literature (Hebdige 1979, p.28).

Alienation and cosmetic rage were the manners of choice for all the major participants, in the same way that most youth cultures are a reaction to bourgeois values (Hall & Jefferson 1976, p.232). Langman (2008, p.1) describes how any form of fashion or lifestyle can be understood as a way of “claiming agency to resist domination, invert disciplinary codes and experience ‘utopian moments’”, although this theory has been disputed. The punk subculture was in many ways defined by the idea of its participants being ‘outsiders’, opposed to bourgeois institutions, although it has been argued that the irony of this situation is that the punk movement’s reaction or resistance to bourgeois society took place “as a result of their incorporation into bourgeois institutions” (Hall & Jefferson 1976, p.236). Foucault (1982, p.778) described how, when a human turns himself into a subject, the human subject is “placed in power relations which are very complex”. Describing the punk movement’s reaction to institutional power is not as simple as saying “it was against it”. In examining possible answers to the question “What legitimates power?” (1982, p.778), Foucault suggests that in examining the power relations between two entities, there is more to be learned from the subject of power than from the holder of power.

The idea of punks being oppressed by the state is therefore open to debate; they were self-excluding and went to great lengths to keep it that way. Similarly, it has been suggested that the concept of a ‘generation gap’ is not an adequate explanation for the prevalence of many youth subcultures, including punk: it is inappropriate to blame youth’s reactions and attitudes towards institutions on the institutions themselves, as those responses are likely based on the same value systems used by the institutions (Hall & Jefferson 1976, p.236). It is most likely the combination of elements in their lives – including school, family, job, police, courts, youth clubs, social workers, mass media, and commerce – that decides a young person’s reaction to institutional power (Hall & Jefferson 1976, p.237). Foucault (1985, p.28) describes how all moral action involves both a relationship with the reality in which it is carried out and a relationship with the self. Self-formation as an “ethical subject” concerns a participant deciding on a certain method of being which will serve his moral aims (Foucault 1985, p.28).

Elsewhere, the punk movement has been described as “dole queue rock” (Marsh 1977, p.10), and it has been argued that level of education and income are unrelated to fashion leadership (Goldsmith et al. 1991, p.37). The punk movement was initially a reaction not to institutional power but to the over-inflated ‘superstar’ stadium rock acts of the early and mid-1970s. In an era when technical virtuosity pointed to commercial success and concert ticket prices were often too high for most working-class youth to afford, gaps emerged between millionaire musicians and unemployed fans (Brake 2013, p.77). The punk movement has also been described as a “condition of postmodernity” (Moore 2010, p.305) – a crisis of meaning caused by the commodification of everyday life, bringing about a reaction in the form of a “culture of destruction” (Moore 2010, p.305).

That said, there was more than one class of subculturalist within the movement itself, ranging from the art school students and cultural rebels who developed bohemian careers, to working-class youth who refused to conform to anything and remained unemployed (Brake 2013, p.78). In some cases, the punk fashion movement saw the blurring of boundaries between art, fashion and everyday life; in others, art, fashion and everyday life remained seemingly disparate objects and behaviours (Henry 1984, p.30). There also existed a hierarchy of members based on their perceived level of commitment to the scene (Fox 1987, p.344), and a paradox between the unaffordable fashion items sold by the movement’s primary trendsetting designer, Vivienne Westwood – partner of the Sex Pistols’ manager Malcolm McLaren – and the ‘garbage bag’ fashion she created.

Perhaps the best way to encapsulate how the fashion and music of the punk movement affected Western society in the 1970s is to examine the wider public’s reaction to the Sex Pistols’ ‘Anarchy in the UK’ tour of December 1976. ‘Moral panic’ – a process which “politicians, commercial promoters and media habitually attempt to incite” (McRobbie & Thornton 1995, p.559) – surrounded the band’s concerts, and many were picketed by local residents, cancelled by venue owners, or overcrowded by hostile press. London councillor Bernard Brook-Partridge infamously declared in a television interview: “Some of these groups would be vastly improved by sudden death” (Simpson 2007, online). The moral panic that greeted a youth culture shows the complexity of feeling towards subcultures, and while the response to the Sex Pistols’ ‘Anarchy in the UK’ tour was partially socially constructed by media and politicians, “reactions by trade unionists, students, feminists and socialists show that concerns about British society in 1976 were not confined to religious pressure groups, conservative media commentators and political elites” (Gildart 2015, online). The band played up to their supposed role as trouble-makers, deliberately provoking media and politicians alike, and the result was a general intensification of the moral panic.

Although the intensity of, and participation in, the original punk movement was high for only a short time in the 1970s, it had a sizeable impact on fashion, music, and culture, and thus on wider Western society as a whole. The fashion, music and attitudes of the Sex Pistols, in particular, facilitated a “reframing and a re-imagining of English culture” (Adams 2008, p.469), which has been drawn on by a number of subsequent fashion, art and music subcultures. The evolving punk subculture of the 1980s attempted to tackle many of the problems of inner-city life, most especially on the east coast of the United States, and soon after embraced much larger social and ethical issues (Parkes 2014, p.42). Although the original punk subculture failed to create the revolution in everyday British and American life that many of the bands involved called for in their lyrics, the punk fashion and music movement changed the way people thought about and discussed social stratification in Britain and America from the late 1970s onwards (Simonelli 2010, p.121). Unfortunately for the participants themselves, their efforts – through fashion and music – to protest and agitate against the bourgeois culture of their home countries were doomed to failure, as the main players involved could not resist becoming professionalised themselves (Simonelli 2010, p.121). A prime example came in 2015, when the Sex Pistols’ logo and branding appeared on a Virgin credit card, prompting news headlines to the effect of “Punk Rock Dies a Little” (Tuttle 2015, online).

Towards the end of the 1980s and into the 1990s the punk movement “began to produce its members, as opposed to its members producing it” (Parkes 2014, p.80). The punk aesthetics of class-political awareness, a belief in spontaneity or “doing it yourself”, and a focus on negationism (Sabin 2002, p.2) largely disappeared – an ironic turn typical of subcultural patterns. The musical, and accompanying fashion, form found a new audience in the 1990s, when mainly American bands like Green Day, The Offspring and Blink-182 brought a more pop-oriented version of the genre to the mainstream. The genre as a subcultural movement has since been unable to match its 1970s heyday for social and cultural impact (Brake 2013, p.24). It is no longer the subculture ‘of the moment’ since going mainstream, but the after-effects are still present: it is studied in universities and art colleges across Britain, America and Australia, and its music press has long since made the move from the underground to the overground (Sabin 2002, p.2).

In conclusion, it can be said that the punk fashion and music movement had effects on the wider society of which it was part, albeit lesser ones than many participants of the subculture intended. If culture is defined as “all the characteristic activities and interests of a people” (Hebdige 1979, p.137), then the punk subculture took – however temporarily – a prominent position in, and had an effect on, those cultures. While initially a subculture alienated from contemporary mainstream culture, the movement was absorbed into the mainstream within a few short years, completing what is considered by many to be an inevitable cycle (Hebdige 1979, p.137). This is perhaps best summed up by Barthes (1972, p.10), who wrote: “Everything nourishing is spoiled; every spontaneous event or emotion a potential prey to myth”.

References

Adams, R, 2008. ‘The Englishness of English Punk: Sex Pistols, Subcultures, and Nostalgia’, Popular Music and Society, p.469

Barthes, R, 1972. Mythologies, p.10

Brake, M, 2013. Comparative Youth Culture, Taylor and Francis: London, pp.12-25

Brodie-Smith, A, 2000. ‘Inside Subculture: The Postmodern Meaning of Style’, Library Journal, p.174

Carter, A, 1967. Notes for a Theory of Sixties Style, p.10

Chambers, D and Cohen, H, 1990. ‘New Colours: Post Modernism and the Visual’, Australian Cultural Studies Conference, 1990: Proceedings, University of Western Sydney, p.143

Christgau, R, 1976. ‘Yes, There is a Rock-Critic Establishment (but is That Bad for Rock?)’, Village Voice

Fischer, CS, 1972. ‘Urbanism as a Way of Life: A Review and an Agenda’, Sociological Methods and Research, p.187

Foucault, M, 1982. ‘The Subject and Power’, Critical Inquiry, pp.777-795

Foucault, M, 1985. The Use of Pleasure: The History of Sexuality, p.28

Fox, KJ, 1987. ‘Real Punks and Pretenders: The Social Organization of a Counterculture’, Journal of Contemporary Ethnography, pp.344-370

Fury, A, 2016. ‘Fashion as Protest’, The New York Times, August 21st 2016, p.230

Gildart, K, 2015. ‘The Antithesis of Humankind: Exploring Responses to the Sex Pistols’ Anarchy Tour 1976’, accessed 5th October 2016

Goldsmith, RE, Heitmeyer, JR and Freiden, JB, 1991. ‘Social Values and Fashion Leadership’, Clothing and Textiles Research Journal, pp.37-45

Gramsci, A, 1971. Selections from the Prison Notebooks, Lawrence and Wishart, p.177

Gronow, J, 1993. ‘Taste and Fashion: the Social Function of Fashion and Style’, Acta Sociologica, pp.89-100

Hall, S and Jefferson, T, 1976. ‘Resistance Through Rituals’, Youth Cultures in Post-War Britain, p.1

Hebdige, D, 1979. Subculture: the Meaning of Style, London: Methuen

Henry, T, 1984. Punk and Avant-Garde Art, p.30

Isler, S and Robbins, I, 2007. Richard Hell and the Voidoids, p.23

Jefferson, T, 1989. Resistance Through Rituals, Taylor and Francis: London, pp.10-20

Langman, L, 2008. ‘Punk, Porn and Resistance: Carnivalization and The Body in Popular Culture’, Current Sociology, p.1

Marsh, P, 1977. ‘Dole Queue Rock’, New Society, p.10

Mauss, M, 1973. ‘Techniques of the Body’, Economy and Society, pp.70-88

McRobbie, A and Thornton, SL, 1995. ‘Rethinking “Moral Panic” for Multi-Mediated Social Worlds’, British Journal of Sociology, pp.559-574

Merleau-Ponty, M, 2004. The World of Perception, Abingdon: Routledge, p.63

Moore, R, 2010. ‘Postmodernism and Punk Subculture: Cultures of Authenticity and Deconstruction’, The Communication Review, p.305

Oxford Dictionary, online, accessed 4th October 2016: https://en.oxforddictionaries.com/definition/fashion

Parkes, A, 2014. ‘This Small Word: The Legacy and Impact of New York City Hardcore Punk and Straight Edge in the 1980s’, Digital Commons, online, accessed 7th October 2016: http://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1100&context=forum

Sabin, R, 2002. Punk Rock: So What? The Cultural Legacy of Punk, p.4

Savage, J, 1991. England’s Dreaming, p.4

Simonelli, D, 2010. ‘Anarchy, Pop and Violence: Punk Rock Subculture and the Rhetoric of Class, 1976-78’, Contemporary British History, p.121

Simpson, D, 2007. ‘Memo to the Sex Pistols: Get off Your Arse and Out of London’, The Guardian, online, accessed 6th October 2016: https://www.theguardian.com/music/musicblog/2007/sep/18/memotothesexpistolsgetof

Solomon, MR, 1985. Psychology of Fashion, Lexington Books

Tuttle, B, 2015. ‘Sex Pistols Credit Cards Are Here and Punk Rock Dies a Little’, Time, online, accessed 7th October 2016: http://time.com/money/3914318/sex-pistols-credit-cards/

Vermorel, F, 2006. Sex Pistols: The Inside Story, p.1

Wilson, B, 2006. Fight, Flight, or Chill: Subcultures, Youth, and Rave into the Twenty-First Century, p.5

Wilson, E, 1985. Adorned in Dreams: Fashion and Modernity, IB Tauris Books, p.16
