Enrichment and Exploitation: How Website Algorithms Affect Democracy

Abstract

This essay discusses the role algorithms in websites, social media and search engines play in the democratic processes of Western societies. As the political mechanisms of Western societies rely increasingly on the Internet to communicate information and to encourage voter participation, the way algorithms are configured to present information to the public is of great importance. Manipulation of search engine rankings or social media news feeds – whether intentional or organic – can have a huge impact on what voters see and think about. Facebook and Google hold near-monopolies on news feeds and online search respectively, meaning any bias in the way their algorithms function can have ramifications at national and international levels. Evidence exists that manipulation of algorithms in Facebook and Google has contributed to influencing the outcomes of elections on several occasions. Examining how algorithms can affect elections and other civic processes is crucial for the future of healthy democracy in Western societies.

Keywords: algorithm, democracy, Internet, news, search engine, social media, website, Facebook, Google, Instagram, Snapchat, Twitter

Introduction and Research Questions

As of 2017, it is estimated that over half of the world’s population are regular Internet users (Kemp 2017, online), and around the same percentage are regular users of social media (Chaffey 2017, online). With such vast amounts of data moving through cyberspace constantly, it makes sense that algorithms should be employed to sort, sift through, and make sense of it all. On the face of things, it would seem logical for algorithms to be used to present users of websites, social media and search engines with a selection of information relevant to what they are looking for, from which they can make informed decisions. The problem is that it is often impossible to know how an algorithm has arrived at a decision or set of search results, and many users are not aware that algorithms even exist, never mind how they come to the conclusions they do. With democratic processes now relying so heavily on information shared online, algorithms in websites, social media and search engines have the potential to play a crucial role in democracy. This essay will investigate this issue, and seek to answer the following questions:

- To what extent do algorithms in websites, networking services and social media have a negative effect on democracy in Western societies?

- To what extent, if any, can users of new and digital media be manipulated by algorithms to think or act in certain ways?

- To what extent do search engine algorithms affect democracy in Western societies?

- Which website, networking service or search engine is most likely to affect democracy through its use of algorithms?

Methodology

Search engines, social media, and the algorithms that operate them are now firmly embedded in the everyday fabric of Western societies, and increasingly in their democratic processes, with no indication that this will change. Algorithms used in Facebook and Google have been extensively studied individually, but there has been less research on the overall effect of algorithms on democratic processes in Western societies. This research essay aims to fill that gap.

The essay examines the use of algorithms in websites, networking services and social media, and aims to answer the question of whether they have a negative effect on democracy in Western societies. A detailed literature review of the subject of online algorithms is followed by an examination of algorithms used in Facebook, Google, Twitter, Instagram and Snapchat, with the likely effects of each of their algorithms discussed, most especially in relation to democratic processes in Western societies.

Real-life examples of algorithms affecting democratic processes are examined, and the extent to which algorithms have influenced recent political outcomes discussed. The essay will also discuss how algorithms are likely to affect democracy in the coming years.

Suggestions regarding the way future democratic processes must interact with, and incorporate, algorithm-driven websites, social media, and search engines are made, and conclusions on the future of the algorithm in democracy are drawn.

Literature Review

Origins

In their most basic form, algorithms are defined as “an automated set of rules for sorting data” (Oxford Reference 2017, online), and, in their online form, are concerned with “settings where the input data arrives and the current decision must be made by the algorithm without the knowledge of future input” (Bansal 2012, p.1). Algorithms are “dependent on the quality of their input data and the skills and integrity of their creators” (Devlin 2017, online). By definition, data is historical, with the result that algorithms predict the future based on actions taken in the past; hence their actions can be repetitive and flawed.

Algorithms were first used in an online sense in the early 1970s, applied in early software programs to bin-packing problems: organising and fitting items into a set space (Fiat & Woeginger 1998, p.7). The field evolved in 1985, when Sleator and Tarjan constructed competitive algorithms to solve mathematical problems known as the list update problem and the paging problem (Fiat & Woeginger 1998, p.7). In the early 21st century, as the variety and use of digital technologies exploded, algorithms were still relatively harmless. Search engines offered personalised recommendations for products and services, and helped Internet users find what they wanted more quickly. Information was collected from personal meta-data: information gathered from “previous searches, purchases and mobility behaviour, as well as social interactions” (Helbing et al. 2017, online). From these humble beginnings, algorithms have evolved to know everything about us: where we are, what we are doing, and what we are feeling (Helbing et al. 2017, online).
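To make the notion of an online algorithm concrete, the sketch below implements first-fit bin packing, a classic online heuristic of the kind referred to above: each item must be placed the moment it arrives, with no knowledge of future input. This is a minimal illustration in Python; the item sizes are invented.

```python
# A minimal sketch of an online algorithm: first-fit bin packing.
# Items arrive one at a time and each must be placed immediately,
# without knowledge of the items still to come.

def first_fit(items, capacity=1.0):
    """Place each arriving item in the first bin that can hold it."""
    bins = []  # each entry is the total size already packed in that bin
    for size in items:  # the input arrives as a stream
        for i, used in enumerate(bins):
            if used + size <= capacity:
                bins[i] += size  # the first bin with room takes the item
                break
        else:
            bins.append(size)  # no existing bin fits, so open a new one
    return bins

# An offline algorithm could sort the items first and often do better;
# the online version must commit to every decision as it is made.
print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2]))  # -> [1.0, 0.9, 0.6]
```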

Algorithms in the Digital Age

The ubiquity of Internet access and the huge number of ways by which it can be accessed mean it is now a “principal pillar of our information society” (Dusi et al. 2016, p.1805). Online communities have become hugely important and complex places in which people seek and share information (Zhang et al. 2007, p.221). As a result, online algorithms play a part in a huge number of aspects of our lives. Ellis (2016, online) explains how three factors shape the online lives of citizens of digital societies: “the endless search for convenience, widespread ignorance as to how digital technologies work, and the sacrifice of privacy and security to relentless improvements in the efficiency of e-commerce”. The more reliant our lives become on digital technology, the more we are likely to be influenced by algorithms, from everyday tasks like online shopping to political participation in elections, referendums and other civic activities.

Algorithms still carry out the same relatively harmless tasks as they have done since the Internet’s earliest days, including helping online shoppers make choices (“People who bought this book also bought this…” recommendations), matching an online dater with a partner more suited to them (Sultan 2016, online), and retrieving search engine results better suited to the user based on past searches. Retail websites such as Amazon also use algorithms to keep pricing competitive: prices can drop, sometimes several times a day, until an item is the cheapest on the market and is sold, and then the price goes back up again (Baraniuk 2015, online).
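One well-known technique behind “people who bought this also bought…” recommendations is simple co-occurrence counting over past shopping baskets. The sketch below, with invented basket data, is only a toy illustration of the idea; commercial recommenders are considerably more sophisticated.

```python
# A toy "customers who bought X also bought Y" recommender based on
# co-occurrence counts over past baskets (data invented for illustration).
from collections import Counter
from itertools import combinations

baskets = [
    {"novel", "cookbook"},
    {"novel", "cookbook", "atlas"},
    {"novel", "atlas"},
]

co_counts = {}  # item -> Counter of items bought alongside it
for basket in baskets:
    for a, b in combinations(basket, 2):
        co_counts.setdefault(a, Counter())[b] += 1
        co_counts.setdefault(b, Counter())[a] += 1

def also_bought(item, n=2):
    """Return the n items most often bought together with `item`."""
    return [other for other, _ in co_counts.get(item, Counter()).most_common(n)]

print(also_bought("novel"))  # e.g. ['cookbook', 'atlas']
```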

Algorithms have evolved hugely from their humble beginnings, and can now “recognise handwritten language, describe the contents of photos and videos, generate news content, and perform financial transactions” (Helbing et al. 2017, online). Some can recognise language and patterns “almost as well as humans and even complete some tasks better than them” (Helbing et al. 2017, online). Today’s widespread use of algorithms online has been described in a range of ways, from offering small conveniences to Internet users and making online communities smarter, to the more sinister end of the spectrum: “capturing people and then keeping them on an emotional leash and never letting them go” (Anderson & Horvath 2017, online).

Despite huge advances in technology since the dawn of the Internet, algorithms can go wrong in spectacular ways, even while conducting relatively simple tasks. An algorithm used to generate wording for t-shirts to be sold online, based on the English World War II-era slogan “Keep Calm and Carry On”, produced thousands of alternative options, one result being “Keep Calm and Rape a Lot” (Baraniuk 2015, online). The company faced public condemnation and folded as a result. In 2011, a Massachusetts man who had never committed a traffic offence in his life had his driving licence revoked because of a failure in algorithm-driven facial recognition software (Dormehl 2014, online). Similar, and more serious, faults have meant that voters have been removed from electoral rolls, parents mistakenly labelled as abusive, and businesses have had government grants and contracts cancelled (Dormehl 2014, online). Even more problematic is the way in which algorithms can falsely profile individuals as terrorists at airports, which happens at a rate of about 1,500 a week in the United States (Dormehl 2014, online). Reduced budgets in law and order services have a large part to play in this, as staff cuts lead to a greater reliance on automated services.

Entering the Democratic Space

Algorithms offer many benefits to the democracies of Western societies, but often in a way that has many more advantages for institutions than for individual users of digital technologies (Ellis 2016, online). The convenience so hungrily sought by end-users is a commodity many online businesses are eager to sell, and the hidden clauses are often “unknowable and entirely beyond users’ control” (Ellis 2016, online). Understanding of algorithms’ lack of neutrality is low among end users, and while disclosure policies can help somewhat, the long-winded privacy policies which have become standard on the web are seldom read (Ellis 2016, online).

An example of a society heavily controlled by online data is Singapore. What started as a program set up with the aim of protecting its citizens from terrorism “has ended up influencing economic and immigration policy, the property market and school curricula” (Helbing et al. 2017, online). China is similar: Baidu, the Chinese equivalent of Google, incorporates a number of algorithms in its search engine to produce a “citizen score” (Helbing et al. 2017, online), which can affect a citizen’s chances of getting a job, a financial loan, or a travel visa. This type of algorithmic monitoring of data is certain to affect every aspect of citizens’ lives, from everyday tasks to political participation.

In the world of politics, digital technologies and the algorithms they conceal are becoming increasingly popular as tools for ‘nudging’: a behavioural design concept concerned with steering or influencing citizens towards thinking and acting in a certain way (Helbing et al. 2017, online). A government can use this method to ensure the public sees information that supports its agenda – the British government has “used it for everything from reducing tax fraud to lowering national alcohol consumption [while] Barack Obama and several American states have used it to win campaigns and save energy” (The Nudging Company 2017, online). The ultimate goal for governments seeking to influence people in this way is known as ‘big nudging’: the combination of big data and nudging (Helbing et al. 2017, online). While the effectiveness of such methods is difficult to calculate, it has been suggested that they could allow citizens to be controlled by a “data-empowered ‘wise king’, who would be able to produce desired economic and social outcomes almost as if with a digital magic wand” (Helbing et al. 2017, online). During elections, political parties can use online nudging to influence voters in a major way. In fact, it has been argued that whoever controls this technology can “nudge themselves to power” (Helbing et al. 2017, online).

Critics of the use of online algorithms in Western democracies have pointed to how they reinforce the ‘filter bubble’: the way in which end users of search engines and social media get “all their own opinions reflected back at them” (Helbing et al. 2017, online). The result is a large degree of societal polarisation, producing sections of society which have little in common and no means of understanding each other’s beliefs. This form of social polarisation through the supply of personalised information can lead to the fragmentation of societies, especially in the political arena. Helbing (2017, online) explains that this kind of divide is currently happening in the politics of the United States, where “Democrats and Republicans are increasingly drifting apart, so that political compromises become almost impossible”.

Algorithms and Data Mining

Data mining is the process by which large amounts of raw data are turned into useful information, and it is increasingly becoming an influential tool online. The practice has been described as “creat[ing] greater potential for violations of personal data” (Makulilo 2017, p.198) via the rise and use of big data: the vast amounts of statistics in the public domain about people’s lives, money, health, jobs, desires, and more. The availability of all this data means algorithms are increasingly being used to sort and categorise it, as well as to make public policy and other decisions (O’Neil 2016, p.1). In Western democracies, the amount of online data produced doubles every year, and in every single minute of every day, hundreds of thousands of Google searches and Facebook posts are made (Helbing et al. 2017, online), meaning more potential violations of personal data if it is used for immoral or criminal purposes.

Companies now use algorithms to help them decide whom they should hire, banks use them to work out to whom to provide loans, and, increasingly, governments use them to make major policy decisions. Devlin (2017, online) contends that those working in the big data and analytics industries are perhaps the least likely to be surprised that political figures or parties would try to use algorithms to influence public behaviour in their favour, saying that “the application – both overt and covert – of technology to affect election outcomes was arguably inevitable”. O’Neil (2016, p.1) says that “some of these models are helpful, but many use sloppy statistics and biased assumptions; these wreak havoc on our society and particularly harm poor and vulnerable populations”.

Dormehl (2014, online) explains that not only is the use of algorithms in data mining open to misuse, but that it is foolish to believe all tasks can be automated in the first place, pointing to data mining as a method of uncovering terrorist attacks as an example. Dormehl describes finding terrorist plots as “a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated” (2014, online).

Algorithms and Real-Life Events

Real-life examples of how algorithms can affect major world events are plentiful. Evidence has emerged that algorithms and their associated digital technologies have been used to bring about political outcomes in various countries in recent years, and it is likely that such methods will be an element of many future political campaigns. It has been alleged that online algorithms were deployed to influence voters’ decision-making in the 2016 US presidential election, the 2016 Brexit vote, and the 2017 French presidential election (Devlin 2017, online). Problems arise – and mistrust is created – when algorithms are used in such ways, due to a lack of transparency and democratic control. The digital methods used to transmit messages and influence audiences evolve more quickly than any regulatory framework can keep up with.

An example of this is the alleged influence of online advertising on the outcome of the Trump-Clinton election, a result which shocked many in the United States and around the world. The innovation of algorithms, according to some analysts, means “even our political leanings are being analysed and potentially also manipulated” (Arvanitakis 2017, online). A prime example was the work of Cambridge Analytica, a data mining organisation that relies on artificial intelligence to manipulate opinions and behaviours “with the purpose of advancing specific political agendas” (Arvanitakis 2017, online), in this case in favour of Trump. Facebook was the platform on which much of the alleged manipulation took place, with an estimated US$90 million spent on digital advertising to generate US$250 million in fundraising for the eventual winner (Shoval 2017, online). In September 2017, Facebook agreed to provide to United States congressional investigators the contents of 3,000 online advertisements purchased by a Russian advertising agency, alleged to contain evidence of digital interference in the election (ABC News 2017, online). Matthew Oczkowski, Cambridge Analytica’s Head of Product, told a recent interviewer: “We have elections going on in Africa and South America, and eastern and western Europe” (Kuper 2017, online).

Additionally, search engine algorithms and recommendation systems “can be influenced, and companies can bid on certain combinations of words to gain more favourable results” (Helbing et al. 2017, online). These methods have been defended by some who say that political nudging is necessary because people find it hard to make decisions and it is therefore necessary to help them – a way of thinking known as paternalism (Helbing 2017, online). Helbing rejects this view, suggesting that nudging is not actually a way of persuading people of a particular opinion, but a method of “exploiting psychological weaknesses in order to bring about certain behaviours” (2017, online). Another critic of the use of algorithms to affect voters’ choices is Gavet (2017, online), who argues that the only result of such methods is self-reinforcing bias, that digital technologies of this nature are vulnerable to attack by agencies with potentially harmful agendas, and that all forms of artificial intelligence are a threat to democracy in some way.

In the same way that accurate information can be presented to the public to influence the way they think or act, incorrect information can do the same thing. The Digital Disinformation Forum, held in California in June 2017, identified deliberate misinformation as the “most pressing threat to global democracy” (Digital Disinformation Forum 2017, online). Smith (2017, online) agrees, noting that “The insidious thing about information pollution is that it uses the Internet’s strengths, like openness and decentralization, against it”, and that misinformation is a potential “global environmental disaster” that impacts everyone. Immediately after the 1st October 2017 Las Vegas Strip shooting, in which a gunman killed 58 people in the deadliest mass shooting committed by a lone gunman in US history, news spread by Facebook and Google falsely named a suspect, describing them as a “far-left loon” (ABC 2017, online) when the gunman had no known political affiliations. A pro-Trump Facebook page incorrectly named a person as the shooter, and the story became the first result on Google’s search page on the subject (ABC 2017, online). “This should not have appeared,” a Google spokesperson later said, as the information was removed from its search results (ABC 2017, online). Both Facebook and Google came under scrutiny from a variety of political sources for their slow response to requests to remove the information from their platforms (ABC 2017, online).

Adding algorithms to this mix can be dangerous, Smith notes, pointing to the way in which predictive policing algorithms in the United States increase patrols in high-crime areas, but can induce a cycle of violence between police and angry or disenfranchised residents as a consequence (2017, online). O’Neil (2016, p.1) explains that “this type of model is self-perpetuating, highly destructive, and very common.” Perhaps the most damning statement on the use of algorithms in societies based on data comes from Devlin (2017, online), who says that while societies which operate in this way “may seem appealing in the light of current political dysfunction worldwide … it is also deeply inimical to the process we call democracy”.

The Future of Algorithms

What does the future hold for algorithms and their place in Western societies and democracy? Floridi (2017, online) argues that the increasing proliferation of algorithms in digital technology will continue to threaten many aspects of our daily lives in an increasing number of ways, most especially employment. Floridi explains that because digital technology has replaced many tasks traditionally performed by us, “algorithms can step in and replace us”, and the consequence “may be widespread unemployment” (2017, online). It has been estimated that in the coming ten years, around half of jobs will be threatened by algorithms and up to 40% of the world’s top 500 companies will have vanished (Helbing et al. 2017, online). Algorithms may increasingly “take care of mundane administrative jobs, do the analysis of markets and roam through thousands of pages of case law”, as well as create our news feeds (Stubb 2017, online).

A 2016 Pew Research Centre study found it likely that algorithms will “continue to have increasing influence over the next decade, shaping people’s work and personal lives and the ways they interact with information, institutions (banks, health care providers, retailers, governments, education, media and entertainment) and each other” (Ellis 2016, online). The flip side to the advantages algorithms are likely to bring, the same study found, is the fear that they will “purposely or inadvertently create discrimination, enable social engineering and have other harmful societal impacts” (Ellis 2016, online).

In April 2017, a House of Commons committee in the United Kingdom published the results of its ‘Algorithms in Decision-Making’ inquiry, with the overall conclusion that human intervention is almost always needed if the decisions made by online algorithms are to be trusted (House of Parliament 2017, online). Major findings include that algorithms are “subject to a range of biases related to their design, function, and the data used to train and enact these systems”, that “transparency alone cannot address these biases”, and that algorithmic biases have “cultural impacts beyond the specific cases in which they appear” (House of Parliament 2017, online). The inquiry also recommended greater regulation of online algorithms, as transparency alone “doesn’t necessarily create trust” (House of Parliament 2017, online).

A solution to the possibility of algorithmic errors, as suggested by Floridi, is to “put human intelligence back into the equation” (2017, online). This can be done by “designing the right sort of algorithm” (2017, online), making sure not all decisions are left to machines, and ensuring humans oversee the decisions machines do make. In the political sphere, some politicians might be jubilant at the decline of journalism, but they should remember that “algorithms will soon be better at legislation than they are” (Stubb 2017, online). Some commentators and experts have gone further with their predictions, with technology visionaries including Bill Gates, Elon Musk and Steve Wozniak warning that algorithms and associated artificial intelligence-based technologies are a “serious danger for humanity, possibly even more dangerous than nuclear weapons” (Helbing et al. 2017, online).

Case Studies

Facebook

“There was no tool where you could go and learn about other people. I didn’t know how to build that so instead I started building little tools,” Mark Zuckerberg said (Carson 2016, online) of the origins of the website that would turn into a US$300 billion company. In 2004 he launched the social networking site Facebook, whose popularity quickly spread across several universities before it became Facebook.com in August 2005 (Phillips 2007, online). The site’s use grew exponentially; it now has two billion active users per month and has recently unveiled its new mission statement: “To give people the power to build community and bring the world closer together” (Facebook, online). According to the site’s own statistics, an average user spends 50 minutes a day on Facebook, Facebook Messenger or Instagram and has 150 Facebook friends (Facebook, online). Until 2012, the site kept advertisements separate from its users’ personal content and did not share any information with marketing agencies. Then flotation brought greater demands from investors for advertising revenue, and its methods changed (Kuper 2017, online).

Perhaps one of the more notable changes this has brought to democracy is the way Facebook now controls how citizens consume news. Most under-35s rely on Facebook for their news, both personal and world (Francis 2015, online; Jain 2016, online; Samler 2017, online), and its algorithms control what information its users see and, hence, what they think about democratic or political issues. In changing the fundamental methods by which people receive information on such a scale, Facebook is disrupting democracy like nothing the Internet has produced before. As Samler (2017, online) explains, Facebook is “one of the Internet’s most radical and innovative children”. The result has been “a loss of focus on critical national issues, an erosion of civil disagreement, and a threat to democracy itself” (O’Neil 2016, online).

As more people get their news from an algorithm-driven news feed, traditional journalism has been greatly affected by the rise of Facebook. The increasing use of social media as a way of sourcing news, real or otherwise, threatens the traditional role of the media as the Fourth Estate. Facebook has been called a “social problem” (Francis 2015, online) that breeds a shallowness now sweeping Western societies, while creating a “world view about as comprehensive as was found in the high school cafeteria” (Francis 2015, online). Global leaders are taking advantage of its directness to bypass the media and speak directly to the public, and the operators of Facebook and Twitter are enthusiastic about this behaviour as it increases engagement with their sites. Journalists are still attempting to report factual stories, but are under increasing pressure (Shoval 2017, online), and the disproportionately high financial awards made against newspapers in the courts threaten press freedom at an industry level (Linehan 2017, p.11).

With Facebook now exercising such a high degree of control over the way people consume news, traditional media companies are struggling to reach the public with legitimate news (Shoval 2017, online). After the 2016 US presidential election, Facebook announced its “Facebook Journalism Project”: a project with the aim of forging stronger ties with the journalism industry, including working more closely with local news outlets (Shoval 2017, online). With the number of news consumers who get their news from Facebook’s news feed on the rise, it is difficult to see this as anything more than an empty platitude.

While Facebook is described as ‘social media’, it is important to remember that its success is premised on using increasingly sophisticated techniques to target users by predicting the content they’ll want to read and watch, “along with the stuff they’ll want to buy from advertisers” (Ellis 2016, online). Facebook is now a “monumentally influential force in the fabric of modern life” (Statt 2017, online), and there now exists Facebook electioneering by major political candidates like Canadian Prime Minister Trudeau and French President Macron, in which algorithms play a huge part. Facebook’s algorithm generates a “plethora of ordinary effects” (Bucher 2015, p.44), from the hunt for ‘likes’ to the question “Where did this information that has suddenly popped up come from?” (Bucher 2015, p.44). Francis (2015, online) suggests that the only antidote to relentless Facebook misinformation is to “do some serious fact-checking and research”, while Pennington (2013, p.193) says that while Facebook can be an excellent tool for political participation, the key for the individual user is to “keep an open mind to others instead of falling down the rabbit hole of narcissism”.

Fake news can be defined as “a political story which is seen as damaging to an agency, entity or person” (Merriam-Webster Dictionary 2017, online), and the concept and its proliferation on various platforms, including Facebook, have been forced into the public domain by President Trump and the election from which he emerged victorious. Fake news has the power to “damage or even destroy democracy” (Jain 2016, online) if not regulated. During a 2016 press conference, then-President Obama noted that “If everything seems to be the same and no distinctions are made, then we won’t know what to protect” and that “Everything is true and nothing is true” (Jain 2016, online) on a social network such as Facebook. Simply sending out Facebook advertisements to see how they are received can help a political party shape its manifesto (Kuper 2017, online). If a large number of users ‘like’ a story about a crackdown on immigration, a party or candidate can make it their official standpoint. Those people can then be targeted with more advertisements and with appeals for funding.

The unexpected election of Donald Trump is said to “owe debts to … rampant misinformation” (Heller 2016, online). During the last stages of campaigning by Trump and Clinton, it was obvious that Facebook’s news algorithm was not able to distinguish between real news and completely fabricated news: “the sort of tall tales, groundless conspiracy theories, and oppositional propaganda that, in the Cenozoic era, circulated mainly via forwarded e-mails” (Heller 2016, online).

Zuckerberg rejects the idea that his company played a role in spreading ‘fake news’ about political candidates, saying in an interview: “Voters make decisions based on their lived experience” (Newton 2016, online). Yet a study found that “three big right-wing Facebook pages published false or misleading information 38% of the time during the period analysed, and three largely left-wing pages did so in nearly 20% of posts” (Silverman 2016, online). Zuckerberg then committed his company to doing more to fight the spread of fake news and vowed it would be an “arbiter of truth” (Jain 2016, online), while also stating that he runs a “tech company, not a media company” (Samler 2017, online). He has also denied that Facebook compounds the problem of its users living in an information ‘filter bubble’, even though his own company quietly released the results of a study in 2015 which showed exactly the opposite to be true (Tufekci 2015, p.9), and another study has shown that users are much less likely to click on content that challenges their beliefs (Tufekci 2015, p.9). Western democracies have a liberal left and a conservative right, with “neither being exposed to the reasoned arguments of the other” (Samler 2017, online). Indeed, only 5% of Facebook users and 6% of Twitter users admit to associating on these platforms with people who hold differing political opinions to their own (Samler 2017, online). Critics of how the social media giants generate their users’ news feeds have said that these organisations need to accept that they are no longer solely technology platforms, but media platforms too (Samler 2017, online).

Interestingly, on 30th September 2017, Zuckerberg made a post on his personal Facebook page to mark the end of Yom Kippur, apologising and seeking forgiveness for any of the ways his organisation has been “used to divide people rather than bring us together” (Facebook 2017, online). This has been described as a “wholly surprising admission of guilt from someone in the tech world” (Barsanti 2017, online).

The key to Facebook’s ongoing success is to keep its users engaged. Bucher explains that “examining how algorithms make people feel … seems crucial if we want to understand their social power” (2015, p.30), if, indeed, users are even aware of the power of the algorithm at all. Facebook’s data teams are almost solely focussed on finding ways to increase the amount of time each user remains engaged with the platform; they are not concerned with truth, learning, or civil conversation (O’Neil 2016, online). Success is measured by the number of clicks, ‘likes’, shares and comments, not the quality of the material being engaged with. The greater the engagement, the more data Facebook can use to sell advertisements (O’Neil 2016, online). This seems like a fairly obvious business model, but research has shown that many users are unaware of it. In a 2015 study, more than half of Facebook users were unaware of how their Facebook news feed was put together (Eslami et al. 2015, p.153). This is problematic, as ignorance of how the site’s algorithm works can wrongly lead some users to “attribute the composition of their feeds to the habits of their friends or family” (Eslami et al. 2015, p.153). This can reinforce the ‘filter bubble’ and lead many users to believe the information they are seeing is trustworthy and correct, while their behaviour is tracked in order to profile their identity.
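The engagement-first logic described above can be illustrated with a toy ranking function: a hypothetical weighted sum of predicted clicks, ‘likes’, comments and shares. The weights, field names and example posts below are entirely invented; Facebook’s actual ranking model is proprietary and far more complex.

```python
# Illustrative only: rank posts by predicted engagement, not accuracy.
# The weights and predictions are invented for this sketch.
WEIGHTS = {"click": 1.0, "like": 2.0, "comment": 4.0, "share": 8.0}

def engagement_score(post):
    """Weighted sum of predicted engagement signals for one post."""
    return sum(WEIGHTS[s] * post["predicted"][s] for s in WEIGHTS)

posts = [
    {"id": "sober-analysis",
     "predicted": {"click": 0.20, "like": 0.05, "comment": 0.01, "share": 0.005}},
    {"id": "outrage-bait",
     "predicted": {"click": 0.60, "like": 0.30, "comment": 0.20, "share": 0.15}},
]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # the outrage-bait post ranks first
```

Note that nothing in such a scoring rule rewards truthfulness: by construction, a false but engaging post outranks an accurate but dull one.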

While finding news that fits a user’s news feed, Facebook’s algorithms can create other problems, including the “voracious appetite for personal data” (Ellis 2016, online) that ad-supported services such as Facebook need to keep their predictions going. The consequence is an undermining of personal privacy and an increased likelihood of the site being used for data mining by individuals, organisations or entities with potentially nefarious motives, possibly leading to more “government by algorithmic regulation” (Ellis 2016, online). The potential for abuse is high when algorithms are unregulated and can be used by anyone with the money to invest in them.

Another major problem Facebook’s algorithm creates is one of repetition, which has the potential to prevent democratic processes and decisions from evolving over time. While real life allows the past to be in the past, “algorithmic systems make it difficult to move on” (Bucher 2015, p.42). This is the “politics of the archive” (Bucher 2015, p.42): all the decisions an algorithm will make about the information it allows you to see in the future are based on what you did in the past. What is relatable and retrievable from the past shapes the way Facebook’s algorithm works in the present, and will potentially affect the user’s decisions in the future.
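This self-reinforcing use of the archive can be simulated in a few lines. In the hypothetical feed below, each click is stored and future stories are sampled in proportion to past engagement, so early choices snowball and the feed narrows; all names and numbers are invented.

```python
# A minimal simulation of the "politics of the archive": the stories a
# user is shown are drawn from the record of what they clicked before.
import random

random.seed(1)
topics = ["immigration", "economy", "climate", "health"]
history = {t: 1.0 for t in topics}  # accumulated engagement per topic

def next_item():
    """Sample the next story in proportion to past engagement."""
    weights = [history[t] for t in topics]
    return random.choices(topics, weights=weights)[0]

for _ in range(200):
    history[next_item()] += 1.0  # each click feeds the archive that chose it

print(history)  # early clicks snowball: a few topics come to dominate
```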

Despite the many negative effects Facebook can have on democracy, it can be a positive force for it too. During elections in the United States in 2010 and 2012, the site conducted experiments with a tool it called the ‘voter megaphone’ (O’Neil 2016, online). The idea was to encourage users to make a post saying they had voted, which would, in turn, remind and encourage others to do the same. Statistics showed 61 million people made such a post, with the likely result of increased participation in democratic processes, especially among young people (O’Neil 2016, online). Additionally, movements can be organised on social media, including the 2017 women’s marches, which saw about five million women march globally as a result of online organisation (Vestager 2017, online).

Facebook is determined to show that the feed its algorithm creates and controls is an ever-changing and independent tool for good, but in reality it is a vital part of the company’s business model. The Facebook algorithm is “biased towards producing agreement, not dissent” (Tufekci 2015, p.9). After all, if its users were continually presented with information they did not appreciate, they would simply go elsewhere. And that is not a successful business model, by any definition. The way the filter bubbles in which Facebook users’ news feeds exist affect democracy is as simple as it is destructive. Electoral laws are outdated, and “regulators aren’t big or savvy enough to catch transgressors” (Kuper 2017, online). Drawing conclusions from this alone, we can say that Facebook has changed democracy. Perhaps author and mathematician Cathy O’Neil put it at its simplest and best when she said, “Over the last several years, Facebook has been participating – unintentionally – in the erosion of democracy” (2016, online).

Google

In 1998, university drop-outs Larry Page and Sergey Brin founded Google with the stated aim “to organise the world’s information and make it universally accessible and useful” (Google, online). Its search engine helped unlock many of the so-called ‘walled gardens’ of the Internet, including sites like AOL and Yahoo. Since then, it has set about organising the information on the Internet, continuing to add many millions of pages to its searchable database every day (Vise & Malseed 2005, p.3).

After going public in 2004, its value and influence grew exponentially, and it began to challenge Microsoft’s dominance of the online world (Vise & Malseed 2005, p.3), overtaking it as the most visited site on the web in 2007 (Strickland 2017, online). The company owes its success to its search engine’s ability to return relevant results at lightning speed. It now has over 50,000 employees globally, has expanded its business interests into the fields of artificial intelligence and self-driving cars (Frommer 2014, online), and its search engine is used globally over 6.5 billion times every day (Allen 2017, online).

Google has been called “the keeper of web democracy” (Howie 2011, online) and its search engine is a powerful and vital component of 21st-century Western democratic life, yet its influence is not widely understood or researched (Richey & Taylor 2017, p.1). With 150 million active websites on the Internet today (Strickland 2017, online), it performs an important role in the lives of millions of people. Google has 88% of the market share in search and search advertising (Hazen 2017, online) and, combined with Facebook, more than a billion regular users. It is partly because of the colossal amounts of users and data with which it operates that Google’s algorithms are so complex.

The company markets its algorithm-driven search engine as a tool which will “result in finer detail to make our services work better for you” (Google 2017, online); in theory, the first results from a search should be those most relevant to the keywords searched. This seems, on the face of things, a simple and incredibly convenient tool for all its users. Yet critics of its methods and its effects on democracy are plentiful.
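Google’s original, publicly documented ranking idea was PageRank, which scores a page by how likely a ‘random surfer’ following links is to land on it. The sketch below runs PageRank’s power iteration on a tiny invented four-page web; the production search engine layers hundreds of additional signals, including the personalisation discussed here, on top of anything like this.

```python
# PageRank power iteration on an invented four-page link graph.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}
damping = 0.85  # the commonly cited damping factor

for _ in range(50):  # iterate until the scores (roughly) converge
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += damping * rank[p] / len(outs)  # share rank along links
    rank = new

# 'c', the most linked-to page, ranks first on this toy graph
print(sorted(pages, key=rank.get, reverse=True))
```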

“Unregulated search rankings could pose a significant threat to a democratic system of government,” says Forbes writer Tim Worstall (2013, online), while Hazen (2017, online) explains how Google’s “relentless pursuit of efficiency leads these companies to treat all media as a commodity”. The real value of the platform lies not in the quality, honesty or accuracy of the information it produces, but in the amount of time the user is engaged with the platform. Hazen goes on to describe how these methods have pushed Page and Brin into the top ten most wealthy people in America, each with a personal fortune of over US$37 billion, and suggests that the ways these methods have affected democracy do not seem to have been taken into account at any point in the company’s evolution.

Much like Facebook, Google has been criticised for data mining and has, on several occasions, been taken to court for mismanaging users’ data (Smith 2016, online). Following United States government whistle-blower Edward Snowden’s leaks, Google’s users have become more savvy about how the site collects and uses their data, and critics have labelled the company’s data mining methods as “purely to benefit Google” (Miller 2012, online). Yet the practice continues. The collection of data, and the profits of around US$40 billion a year made from these practices, is concerning to many users of Google, despite the company’s claim that it uses data mining techniques to “find more efficient algorithms for working with massive data sets, developing privacy-preserving methods for classification, or designing new machine learning approaches” (Google 2017, online).

Another way in which the vast amounts of data channelled through Google could be used is in making political predictions, although the usefulness of this is unclear. A real-life example demonstrates this: Google data showed that searches for ‘Donald Trump’ accounted for almost 55% of views in the three days before the 2016 presidential election (Allegri 2016, online), when the majority of polls predicted a Clinton victory, and its data predicted his final electoral college vote total to within two votes of the actual number. This made analysts, tech writers and journalists take notice, with the general consensus that it was time to “start taking the electoral prediction powers of Google much more seriously” (Kirby 2016, online).

Consistent accusations of tampering with results have plagued Google throughout its lifetime, and such actions have the potential to affect democracy negatively if true. The company’s Vice President Marissa Mayer appeared in a 2011 YouTube video telling an audience how her company regularly, and unashamedly, puts its own services at the top of search results (Howie 2011, online). In 2017, public trust in Google’s algorithm in Europe reached an all-time low, following the proliferation of fake news stories and clearly engineered results. The European Commission advertised for a company to police Google’s algorithm, to determine the extent to which results are deliberately positioned favourably for those who have paid for placement, and how far Google is abusing its market dominance (Hall 2017, p.17). The Commission also launched an investigation into the extent to which Google banned competitors from search results and advertisements, with the promise of keeping the issue “on our desks for some time” (Hall 2017, p.17). The way in which Google “uses its dominant search engine to harm rivals” has led critics like Derrick (2017, p.1) to examine how the concentration or monopolisation of services in this way “threatens our markets, threatens our economy, and threatens our democracy”. It is difficult to see how Google’s self-serving behaviour can have anything but an overall negative effect on democracy in Western societies.

Despite the many criticisms of Google’s algorithm and its negative effects on privacy and democracy, its data mining practices have produced some positive outcomes. In 2014, Google found evidence of child pornography in one user’s e-mail account and reported the person to the National Centre for Missing and Exploited Children in the United States, resulting in an arrest (Matterson 2014, online). Google Maps’ ability to reveal illegal activities such as marijuana growing and unapproved building works has also been noted as a positive (Google Earth Blog, online).

The future of Google is likely to see it maintain its virtually unchallengeable position at the head of Internet search and advertising revenue generation. The site’s ability to change its algorithms at any time means it can evolve to control the market in any way it wishes, and can control the impact it has on websites, its competitors, and entire industries. The company’s future is not likely to be one of reduced involvement with algorithms, but quite the opposite, says Davies (2017, online). Where Google’s algorithm once had a relatively basic structure, it is now much more complex, and becoming more so. Its advances in artificial intelligence and machine learning are happening at an “amazing if not alarming rate” (Davies 2017, online), meaning its influence on what data we see is likely to grow. “Not since Rockefeller and JP Morgan has there been such a concentration of wealth and power in the hands of so few,” explains Hazen (2017, online).

Twitter

Twitter began as an idea that co-founder Jack Dorsey had in 2006; he originally imagined it as an SMS-based communications platform (MacArthur 2017, online), hence the 140-character limit. Fast forward five years, and it was one of the biggest communication platforms in the world. It now has over 200 million active monthly users, and it is considered vital that every public figure who wishes to engage with their audience have an account, along with one on Facebook (MacArthur 2017, online).

Studies have shown that political candidates who use Twitter as a means of engaging with voters significantly increase their odds of winning (LaMarre & Suzuki-Lambrecht 2013, p.1). The platform stimulates word-of-mouth marketing and increases audience reach significantly, with live information being of particular importance and influence (LaMarre & Suzuki-Lambrecht 2013, p.1). Sustaining a live connection, via tweeting, through an election cycle has been shown to produce a positive reaction from supporters (LaMarre & Suzuki-Lambrecht 2013, p.1), which has the potential to translate into positive results on election day. President Obama’s use of Twitter during his two campaigns is a good example of this.

However, not all use of Twitter is as open and honest as it may seem. During the 2016 US presidential election, 20% of all political tweets made during the three televised political debates were made by bots (Campbell-Dollaghan 2016, online): pieces of software designed to execute commands with a particular goal. It was unclear where many of the bots came from or who created them, making it easier to spread fake news stories and potentially influence public opinion. There is also evidence that during the UK Brexit campaign, huge numbers of “fake news stories, false factoids, and absurd claims were passed over social media networks, often by Twitter’s highly automated accounts” (Howard 2016, online). Bots and automated accounts are very easy to make (Campbell-Dollaghan 2016, online), and can amplify misinformation in a political campaign. Twitter allows news stories from untrustworthy sources to “spread like wildfire over networks of family and friends” (Howard 2016, online).

These examples of how Twitter is being used to spread information or misinformation strongly suggest that it should now be regarded as a media company. However, much like Facebook, Twitter is not legally obliged to regulate the information passed over its network for quality or accuracy. In fact, it has been given a “moral pass” (Howard 2016, online) on the obligations to which professional media organisations and journalists are held.

Having rolled out a 280-character trial in October 2017 (Hale 2017, online), Twitter is arguably positioning itself to be an even more influential transmitter of information, accurate or inaccurate, in future democratic processes. It remains to be seen whether the change will increase engagement with the platform, but the potential is there for Twitter to become an even bigger player in the political arena (Hale 2017, online).

Other Platforms

While the algorithms used by Facebook and Google are the dominant forces in controlling what many people see and think about democracy, other platforms are playing increasing roles. With Facebook and Google now firmly part of the established mainstream, there is space for other social media, most notably the photo-sharing platforms Instagram and Snapchat, to fill their previous roles as the newcomer or disruptor on the scene. A politician or political party can share images directly with their followers, and can engage directly with them while doing so.

The way in which these photo-sharing social media have been used in recent elections suggests they will have a huge role to play in future contests. The recent UK general election saw both Theresa May and Jeremy Corbyn use Instagram to a small degree, with surveys showing Corbyn’s use was more effective, although this could also be explained by the fact that younger people are more likely to vote Labour (Kenningham 2017, online). French President Macron used it heavily and swept to power (Kenningham 2017, online), and Indian Prime Minister Modi has some eight million followers. In the UK alone, Instagram has 18 million users and Snapchat 10 million – both significant portions of the 65 million total population – so political parties and figures need to use these platforms to be successful in the ever-competitive mediascape.

Instagram’s and Snapchat’s core demographics are much younger, on average, than those of Twitter and Facebook, and the platforms have an ability to reach groups of people who feel permanently disengaged from the political process (Kenningham 2017, online). Ninety percent of Instagram’s users, for example, are under 35 years old, and it is increasingly becoming the platform of choice for image-fixated millennials (Kenningham 2017, online).

While Instagram may be an excellent tool for reaching a younger demographic, its algorithm can be used and abused, as well as negotiated. Much like the Facebook news feed algorithm, Instagram’s algorithm has been described as “mysterious, yet ingenious and brilliant at showing the best content to the best people” (Lua 2017, online). It is driven by seven key factors or elements of a post: engagement, relevancy, relationships, timeliness, profile searches, direct shares, and time spent (Lua 2017, online). A 2016 Instagram study (Instagram 2016, online) found that, when posts were listed chronologically, users missed up to 70% of their feeds, and the platform changed to an algorithm-driven method of ordering. Despite some initial opposition to the move, feedback has been generally positive (Lua 2017, online), and the relatively simple nature of Instagram’s algorithm, compared to that of Facebook, means it is easy for users to work with or even “beat” (Chacon 2017, online).
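One plausible, purely hypothetical reading of those seven factors is a weighted blend per post, as sketched below. The weights and example scores are invented for illustration; Instagram has not published its actual model.

```python
# Hypothetical blend of the seven reported factors (Lua 2017).
# All weights and example scores are invented for illustration.
FACTORS = ["engagement", "relevancy", "relationships", "timeliness",
           "profile_searches", "direct_shares", "time_spent"]
WEIGHTS = dict(zip(FACTORS, [3.0, 2.5, 2.0, 1.5, 1.0, 1.0, 1.0]))

def feed_score(post):
    """Combine per-factor estimates (each 0..1) into a single score."""
    return sum(WEIGHTS[f] * post.get(f, 0.0) for f in FACTORS)

post = {"engagement": 0.8, "relevancy": 0.6, "relationships": 0.9,
        "timeliness": 0.4, "time_spent": 0.5}
print(round(feed_score(post), 2))  # 6.8 on these invented numbers
```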

Snapchat is behind Instagram in user numbers, but crucially, it has high levels of engagement, with the average user spending up to 30 minutes per day on the platform (Kenningham 2017, online). Its algorithm, similar to Instagram’s, places certain posts at the top of its feed, which leaves it open to misuse, but it offers a “way to engage with people who normally switch off at the very mention of the word ‘politics’” (Kenningham 2017, online). Jeremy Corbyn used the platform extensively in the 2017 UK election with some success, and all three French presidential candidates used it, most especially the eventual winner (Kenningham 2017, online).

While Instagram and Snapchat have not yet played defining roles in political processes anywhere in the world, and the extent to which their algorithms can be used or manipulated in doing so is as yet unclear, they are called on to “become a central part of the democratic process to ensure more people have a say and stake in the future of [political processes]” (Kenningham 2017, online). It is likely that Instagram and Snapchat have had only a positive effect on Western democratic processes thus far.

Summary of Findings

After such a detailed examination of the use of algorithms in social media and search engines, it is important to summarise findings, with reference to the original research questions.

The first research question asked: To what extent do algorithms in websites, networking services and social media have a negative effect on democracy in Western societies?

When the effects of the algorithms used by Facebook, Google and others are examined, it can be said that, in a general sense, these algorithms have a negative impact on Western democracies.

Facebook’s algorithm is probably the biggest offender in this regard. Its aims are not to promote or encourage quality content being uploaded or shared on the platform, but to gather as much personal information about its users as possible and keep them engaged for as long as possible, in order to better target paid advertisements at them. Its success does not rely on any ability or need to distinguish between quality, truthful information and dishonest, fake information: as long as users are engaged regularly and for lengthy periods, it can sell a large number of advertisements and its financial success is certain. Facebook’s algorithm also perpetuates the ‘filter bubble’ method of news feed generation, in which users are rarely, if ever, exposed to information that is contrary to their personal beliefs. Its algorithm can be, and has been, manipulated to promote news stories with false or misleading information in order to gain political advantage.

Similarly, Google’s algorithm has many negative effects on democracy. Its search engine’s algorithm is designed to produce results based on a user’s previous searches, which, like Facebook’s, perpetuates the ‘filter bubble’, and it is designed to soak up as much information about the user as possible in order to target advertisements and generate revenue. Google claims it uses data mining to improve its services for users, yet it makes US$40 billion a year from these practices, so it is difficult to accept that this is not a self-serving activity. Additionally, Google’s monopolisation of data and advertising services drives competition out of the market, and the site also regularly manipulates data and search results to place particular results higher than others.

The second research question asked: To what extent, if any, can users of new and digital media be manipulated by algorithms to think or act in certain ways?

Algorithms used by Facebook and Google can control what information users have access to in their news feeds and, hence, what issues they are exposed to and are likely to think about (Francis 2015, online). While a small number of writers have argued that technologies like web search and social networks reduce ideological segregation (Flaxman et al. 2016, p.298), there is much evidence showing otherwise (Francis 2015, online). The repetitive nature of web-based algorithms means that the information users engage with affects their future search results and the content of their news feeds: similar search results or information is likely to appear again, perpetuating the ‘filter bubble’. Facebook continually removes or hides news that it believes might offend users, including many investigative journalism pieces (Ingram 2015, online). When the filter bubble is combined with the easy proliferation of untruthful or misleading information, users can be manipulated into thinking in certain ways about political or other subjects. The monopolisation of news distribution is arguably not of Facebook’s own doing, since so many people use it globally that media companies have no real choice but to use it as a way of interacting with news consumers, but Facebook’s approach to how news feeds are generated can differ from one day to the next.

The third research question asked: To what extent do search engine algorithms affect democracy in Western societies?

The answer to this question is: to a huge extent. With a virtual monopoly on search, Google “has the power to flip the outcomes of close elections easily – and without anyone knowing” (Epstein 2014, online). The company has the ability to identify a candidate that best suits its needs, identify undecided voters, and send them customised search results tailored to make that candidate look better, while nobody – candidate, voter or regulator – is any the wiser (Epstein 2014, online). There is no evidence of such direct manipulation, but favouritism can happen ‘organically’ on Google’s search engine: this is what the company claimed was the cause of Barack Obama’s consistently high rankings in the months just before the 2008 and 2012 elections (Epstein 2014, online). A study of a group of Americans’ preferences for either Julia Gillard or Tony Abbott (candidates the test subjects were unfamiliar with) as the ideal Prime Minister of Australia in the 2010 election found that they made their choice based on search rankings (Epstein 2014, online). In future elections, as increasing numbers of undecided voters get their information on political matters through the Internet, the way Google’s algorithm works will have international ramifications. Google is not ‘just’ a platform; it “frames, shapes and distorts how we see the world” (Arvanitakis 2017, online).
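The mechanism behind this ‘search engine manipulation effect’ is position bias: users click the top results far more often than lower ones, so reordering results redistributes attention. The click-through rates in the sketch below are invented, but they decay with rank in the way industry studies typically report.

```python
# Toy model of position bias: attention falls sharply with rank.
# The click-through rates are illustrative assumptions only.
ctr_by_position = [0.30, 0.15, 0.10, 0.07, 0.05]  # ranks 1..5

def expected_clicks(results, searches=100_000):
    """Expected clicks per result over a given number of searches."""
    return {r: round(ctr * searches)
            for r, ctr in zip(results, ctr_by_position)}

print(expected_clicks(["candidate_a", "candidate_b", "news", "blog", "wiki"]))
print(expected_clicks(["candidate_b", "candidate_a", "news", "blog", "wiki"]))
# Swapping ranks 1 and 2 halves candidate_a's exposure without any
# change to the underlying information.
```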

The fourth research question asked: Which website, networking service or search engine is most likely to affect democracy through its use of algorithms?

The answer is Facebook, as many real-life examples show. Recent examples include algorithm-driven manipulation of information around the Brexit referendum, the Trump-Clinton election, the French presidential election, and the UK general election. The most notable case of algorithm-driven influence in politics is the Trump-Clinton election contest. President Trump’s Digital Director, Brad Parscale, said that Facebook was massively influential in winning the election for Trump (Lapowsky 2016, online), generating huge sums of money in online fundraising, a large proportion of which went back into digital advertising. Analysts and writers have also pointed to “online echo chambers and the proliferation of fake news as the building blocks of Trump’s victory” (Lapowsky 2016, online) – echo chambers created by Facebook’s algorithm. Trump’s online team took advantage of Facebook’s ability to test audiences with ads, running 175,000 variations of its ads on the day of the third presidential debate alone (Lapowsky 2016, online). Cambridge Analytica pulled data from Facebook and paired it with huge amounts of consumer information from data-mining companies to “develop algorithms that were supposedly able to identify the psychological make-up of every voter in the American electorate” (Halpern 2017, online).
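The mechanics of testing ad variations at scale are well documented; White (2012), listed in the references below, describes bandit algorithms for website optimisation. The epsilon-greedy sketch that follows is a generic illustration of how an automated system concentrates budget on the best-performing variant; the click rates are invented, and this is not a description of the Trump campaign’s actual tooling.

```python
# Epsilon-greedy bandit: mostly show the best-performing ad so far,
# occasionally explore the others.
import random

def epsilon_greedy(true_click_rates, trials=10_000, epsilon=0.1):
    n = len(true_click_rates)
    shows, clicks = [0] * n, [0] * n
    for _ in range(trials):
        if random.random() < epsilon:          # explore a random variant
            arm = random.randrange(n)
        else:                                  # exploit the best estimate
            arm = max(range(n),
                      key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)
        shows[arm] += 1
        if random.random() < true_click_rates[arm]:
            clicks[arm] += 1
    return shows

# Three hypothetical ad variants with different (hidden) click rates:
print(epsilon_greedy([0.02, 0.05, 0.03]))  # impressions pile onto variant 2
```

Scaled up to thousands of variants and demographic slices, this kind of automation is what makes running 175,000 ad variations in a single day feasible.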

The Future of Democracy in an Algorithm-Driven World

Increased use of algorithms and artificial intelligence can bring many benefits to societies. New systems can identify students who need assistance, and data can be used to identify health hazards within a population (Arvanitakis 2017, online). However, a diminished human role in decision-making may have many negative consequences for democracy.

The innovation of algorithms means “our political leanings are constantly being analysed and potentially also manipulated” (Arvanitakis 2017, online), and opaque algorithms can be “very destructive” (O’Neil 2016, p.4). Citizens of Western democracies have always believed they knew where their information was coming from, but that is no longer the case (Arvanitakis 2017, online). The sources we have come to trust to bring us information have fallen under the influence of powerful, self-serving websites whose algorithms make no distinction between truth and lies, or between high-quality information and nonsense. When a list of search results appears after a Google search, it is almost impossible to work out where the results have come from or why they are ordered as they are, and this opacity is what is concerning for healthy democracy. A professor at Bath University explained that “it should be clear to voters where information is coming from, and if it’s not transparent or open where it’s coming from, it raises the question of whether we are actually living in a democracy or not” (Arvanitakis 2017, online).

In order for anything to survive for any length of time, it has to adapt, and the future of democracy increasingly looks like one of constant technological adaptation. Newly emerging social media platforms, which have not yet been absorbed into a mainstream whose sole purpose is to collect data for advertisement placement, are, along with other online platforms, likely to be crucial to political participation for future generations. It is vital that young people are civically engaged (actively working to make a positive difference to their communities) in order to define and address public problems (Levine 2007, p.1), and social media has the potential to play a huge part in this. As the variety of methods it offers for information sharing and interconnectivity increases, social media has the potential to encourage more people to engage with democratic processes.

It is also vital for algorithms to be transparent and accountable (Arvanitakis 2017, online) so that users of websites, social media and search engines know how their personal information is being used, and so that the information they see is accurate and balanced. “Algorithms are designed with data, and if that data is biased, the algorithms themselves are biased,” explains O’Neil (2016, p.4). Algorithms could be transparent, accountable and objective, but, in most cases, are nothing more than “intimidating, mathematical lies” (O’Neil 2016, p.4). Overcoming this is the key to fair and balanced algorithm use in future democratic processes.
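O’Neil’s point about biased data can be shown in miniature: a model fitted to skewed historical records faithfully reproduces the skew and then presents it as objective output. The sketch below uses a deliberately crude majority-label ‘model’ and a fabricated dataset, purely for illustration.

```python
# Train a trivially simple "model" on skewed historical decisions.
from collections import Counter

def train_majority_label(dataset):
    """Learn the most common label per group - a deliberately crude model."""
    by_group = {}
    for group, label in dataset:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# Fabricated history that favours group "A" over group "B":
data = ([("A", "approve")] * 80 + [("A", "deny")] * 20 +
        [("B", "approve")] * 20 + [("B", "deny")] * 80)

model = train_majority_label(data)
print(model)  # {'A': 'approve', 'B': 'deny'} - the old bias, now automated
```

The algorithm itself contains no prejudice; it simply encodes whatever pattern the data hands it, which is exactly why transparency about training data matters.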

With a 2017 survey indicating that two-thirds of schoolchildren would not care if social media had never been invented, and 71% admitting to taking “digital detoxes” (The Guardian 2017, online), there is a hint that social media use may decline as the next generation of school-aged children reaches adulthood. Many respondents believed social media was having a negative effect on their mental well-being, with advertising, fake news and privacy being particular areas of concern (The Guardian 2017, online). Some positives were mentioned, including memes, photo filters, and Snapchat stories, reinforcing the theory that new social media platforms, rather than Facebook, Twitter or Instagram, may be the future of mass information sharing and of healthy democracy.

Conclusion

It is indisputable that search engines and social media increase the number of ideas, viewpoints, opinions and perspectives available to citizens taking part in democratic processes. An incredibly varied collection of information is available to Internet users at any time, which, at face value, would suggest that citizens should be more informed about political issues than ever before. The Internet is also an effective tool for carrying out successful political campaigns, offering an efficient method by which political groups or individuals can reach audiences with public relations and policy messages.

With these things in mind, it could be easy to move steadily and unquestioningly forward with the idea that software makes our lives more convenient and enjoyable. However, the algorithms controlling data in some of the most popular and widely used social media and search engines are designed not with the user’s best interests in mind, but with those of the websites themselves – they are businesses, after all. This is a direct and immediate threat to democracy.

The ability to manipulate information online is, similarly, a threat to democratic processes. Evidence and real-life examples show that the control of information and misinformation through search engine and social media manipulation can help bring about desired political results, and the algorithms controlling information on these platforms cannot distinguish between real and fake, or between truth and dishonesty. Algorithms that target users with advertising material instead of presenting a fair and balanced variety of information perpetuate the division of society along political lines, and engineer information ‘filter bubbles’. Algorithms operating in this way are a threat to democracy.

It is partly this online environment that has created a divisive populist sentiment that now defines many Western societies, and has left many citizens lacking the full range of knowledge needed to make informed democratic decisions. Thomas Jefferson once proclaimed that “a properly functioning democracy depends on an informed electorate” (Samler 2017, online), but when algorithms are manipulating news feeds and search engine results without regulation, free will in the political arena no longer seems so free.

References

ABC News, 2017. ‘Facebook to Release Russia-Linked Ads to Congress Amid Pressure Over Use in US Election’, online, accessed 26th September 2017: http://www.abc.net.au/news/2017-09-22/facebook-to-release-russia-ads-to-congress-amid-pressure/8973718

ABC News, 2017. ‘Las Vegas Shooting: Politicised “Fake News” of Attack Spread on Google, Facebook’, online, accessed 7th October 2017: http://www.abc.net.au/news/2017-10-03/las-vegas-shooting-false-news-of-attack-spread-google-facebook/9011152

Allegri, C, 2016. ‘Did Google search data provide a clue to Trump’s shock election victory?’, Fox News, online, accessed 30th September 2017

Allen R, 2017. ‘Search Engine Statistics 2017’, Smart Insights, online, accessed 30th September 2017: http://www.smartinsights.com/search-engine-marketing/search-engine-statistics/

Anderson, B & Horvath, B, 2017. ‘The Rise of the Weaponised AI Propaganda Machine’, Scout, online, accessed 16th August 2017: https://scout.ai/story/the-rise-of-the-weaponized-ai-propaganda-machine

Arvanitakis, J, 2017. ‘If Google and Facebook Rely on Opaque Algorithms, What Does That Mean for Democracy?’, ABC, online, accessed 1st October 2017: http://www.abc.net.au/news/2017-08-10/ai-democracy-google-facebook/8782970

Bansal, N, 2012. ‘The Primal-Dual Approach for Online Algorithms’, Approximation and Online Algorithms, Springer, p.1

Baraniuk, C, 2015. ‘The Bad Things That Happen When Algorithms Run Online Shops’, BBC, online, accessed 23rd September 2017: http://www.bbc.com/future/story/20150820-the-bad-things-that-happen-when-algorithms-run-online-shops

Barsanti, S, 2017. ‘Mark Zuckerberg Apologises for Facebook Making Life Worse’, AV Club, online, accessed 2nd October 2017: https://www.avclub.com/mark-zuckerberg-apologizes-for-facebook-making-life-wor-1819042663?rev=1506899047971&utm_content=Main&utm_campaign=SF&utm_source=Facebook&utm_medium=SocialMarketing

Bozdag, E & van den Hoven, J, 2015. ‘Breaking the Filter Bubble: Democracy and Design’, Ethics and Information Technology, Issue 4, p.249

Bucher, T, 2017. ‘The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms’, Information, Communication & Society, pp.30-44

Campbell-Dollaghan, K, 2016. ‘The Algorithmic Democracy’, FastCoDesign, online, accessed 2nd October 2017: https://www.fastcodesign.com/3065582/the-algorithmic-democracy

Carson, B, 2016. ‘Zuckerberg: The Real Reason I Founded Facebook’, Business Insider Australia, online, accessed 26th September 2017: https://www.businessinsider.com.au/the-true-story-of-how-mark-zuckerberg-founded-facebook-2016-2?r=US&IR=T

Chacon, B, 2017. ‘5 Things to Know About the Instagram Algorithm’, Later, online, accessed 1st October 2017: https://later.com/blog/instagram-algorithm/

Chaffey, D, 2017. ‘Global Social Media Research Summary 2017’, Smart Insights, online, accessed 1st October 2017: http://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/

Derrick, J, 2017. ‘Benzinga: Elizabeth Warren: Apple, Google, and Amazon Threaten Our Democracy’, Newstex, p.1

Devlin, B, 2017. ‘Algorithms or Democracy: Your Choice’, TDWI, online, accessed 23rd September 2017: https://tdwi.org/articles/2017/09/08/data-all-algorithms-or-democracy-your-choice.aspx

Digital Disinformation Forum, 2017. Online, accessed 23rd September 2017: https://www.disinforum.org/#intro

Domonoske, C, 2016. ‘Students Have ‘Dismaying’ Inability To Tell Fake News From Real, Study Finds’, NPR, online, accessed 30th September 2017: http://www.npr.org/sections/thetwo-way/2016/11/23/503129818/study-finds-students-have-dismaying-inability-to-tell-fake-news-from-real

Dormehl, L, 2014. ‘Algorithms are Great and All, But They Can Also Ruin Lives’, Wired, online, accessed 23rd September 2017: https://www.wired.com/2014/11/algorithms-great-can-also-ruin-lives/

Dusi, M, Finamore, A, Claffy, K, Brownlee, N & Veitch, D, 2016. ‘Guest Editorial Measuring and Troubleshooting the Internet: Algorithms, Tools and Applications’, IEEE Journal on Selected Areas in Communications, Volume 34, Issue 6, p.1805

Ellis, D, 2016. ‘Why Algorithms are Bad For You’, Life on the Broadband Internet, Pew/Elon

Epstein, R, 2014. ‘How Google Could End Democracy’, US News, online, accessed 1st October 2017: https://www.usnews.com/opinion/articles/2014/06/09/how-googles-search-rankings-could-manipulate-elections-and-end-democracy

Eslami, M, Rickman, A, Vaccara, K & Aleyasen, A, 2015. ‘I Always Assumed That I Was Really Close to Her’, Proceedings of the 33rd Annual SIGCHI Conference on Human Factors in Computing Systems, New York, pp.153-162

Facebook, 2017. Online, accessed various dates: http://www.facebook.com

Fiat, A & Woeginger, GJ, 1998. Online Algorithms: The State of the Art, p.7

Flaxman, S, Goel, S & Rao, JM, 2016. ‘Filter Bubbles, Echo Chambers, and Online News Consumption’, Public Opinion Quarterly, Volume 80, p.298

Floridi, L, 2017. ‘The Rise of the Algorithm Need Not Be Bad News for Humans’, Financial Times, online, accessed 23rd September 2017: https://www.ft.com/content/ac9e10ce-30b2-11e7-9555-23ef563ecf9a

Francis, D, 2015. ‘Facebook Elections, Facebook Candidates, Facebook Democracy’, Huffington Post, online, accessed 27th September 2017: http://www.huffingtonpost.com/dian-m-francis/facebook-elections-facebo_1_b_8271488.html

Frommer, D, 2014. ‘Google’s Growth Since its IPO is Simply Amazing’, Quartz, online, accessed 30th September 2017: https://qz.com/252004/googles-growth-since-its-ipo-is-simply-amazing/

Gavet, M, 2017. ‘Rage Against the Machines: Is AI-Powered Government Worth It?’, We Forum, online, accessed 23rd September 2017: https://www.weforum.org/agenda/2017/07/artificial-intelligence-in-government

Google Earth Blog, online, accessed 30th September 2017: http://www.gearthblog.com

Google, ‘From the Garage to the Googleplex’, online, accessed 30th September 2017: https://www.google.com/intl/en/about/our-story/

Google Research, online, accessed 30th September 2017: http://www.research.google.com/pubs/dataminingandmodeling.html

The Guardian, 2017. ‘Growing Social Media Backlash Among Young People, Survey Shows’, online, accessed 7th October 2017: https://www.theguardian.com/media/2017/oct/05/growing-social-media-backlash-among-young-people-survey-shows

Hale, S, 2017. ‘Twitter Trials 280 Characters, But Its Success in Japan is More Than a Character Difference’, Oxford Online Institute, online, accessed 2nd October 2017: https://www.oii.ox.ac.uk/blog/success-is-more-than-a-character-difference/

Hall, K, 2017. ‘Europe Seeks Company to Monitor Google’s Algorithm in $10m Deal’, The Register, p.11

Halpern, S, 2017. ‘How He Used Facebook to Win’, NY Books, online, accessed 1st October 2017: http://www.nybooks.com/articles/2017/06/08/how-trump-used-facebook-to-win/

Hazen, D, 2017. ‘Google, Facebook, Amazon Undermine Democracy: They Play a Role in Destroying Privacy, Producing Inequality’, Salon, online, accessed 30th September 2017

Helbing, D, Bruno, S, Gigerenzer, G, Hafen, E, Hagner, M, Hofstetter, Y, van den Hoven, J, Zicari, RV & Zwitter, A, 2017. ‘Will Democracy Survive Big Data and Artificial Intelligence?’, Scientific American, online, accessed 23rd September 2017: https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/

Heller, N, 2016. ‘The Failure of Facebook Democracy’, The New Yorker, online, accessed 28th September 2017: https://www.newyorker.com/culture/cultural-comment/the-failure-of-facebook-democracy

House of Parliament, ‘Algorithms in Decision-Making Inquiry – Publications’, online, accessed 23rd September 2017: https://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/inquiries/parliament-2015/inquiry9/publications/

Howard, J, 2016. ‘Is Social Media Killing Democracy?’, Oxford Internet Institute, online, accessed 2nd October 2017: https://www.oii.ox.ac.uk/blog/is-social-media-killing-democracy/

Howie, P, 2011. ‘The End of the Google Democracy’, Fast Company, online, accessed 30th September 2017: https://www.fastcompany.com/1746616/end-google-democracy

Instagram, 2016. ‘See the Moments You Care About First’, Instagram, online, accessed 1st October 2017: http://blog.instagram.com/post/145322772067/160602-news

Introna, LD & Nissenbaum, H, 2000. ‘Shaping the Web: Why the Politics of Search Engines Matters’, The Information Society, pp.169-185

Jain, A, 2016. ‘Spread of Fake News on Facebook Eroding Democracy: Obama’, Newstex Global Business Blogs, online, accessed 27th September 2017

Kemp, S, 2017. ‘Digital in 2017: Global Overview’, WeAreSocial, online, accessed 1st October 2017: https://wearesocial.com/special-reports/digital-in-2017-global-overview

Kenningham, G, 2017. ‘Instagram and Snapchat are Vital Tools for Activating Democracy’, Cityam.com, accessed 1st October 2017: http://www.cityam.com/266023/instagram-and-snapchat-vital-tools-activating-democracy

Kirby, J, 2016. ‘Google Predicted Donald Trump Would Win the Election’, Macleans, online, accessed 30th September 2017: http://www.macleans.ca/politics/washington/google-predicted-donald-trump-would-win-the-election/

Kuper, S, 2017. ‘How Facebook is Changing Democracy’, Financial Times, online, accessed 28th September 2017: https://www.ft.com/content/a533d5ec-5085-11e7-bfb8-997009366969?mhq5j=e7

LaMarre, HL & Suzuki-Lambrecht, Y, 2013. ‘Tweeting Democracy? Examining Twitter as an Online Public Relations Strategy for Congressional Campaigns’, Public Relations Review, Volume 39, p.1

Lapowsky, I, 2016. ‘Here’s How Facebook Actually Won Trump the Presidency’, Wired, online, accessed 1st October 2017: https://www.wired.com/2016/11/facebook-won-trump-election-not-just-fake-news/

Levine, P, 2007. The Future of Democracy: Developing the Next Generation of American Citizens, UPNE, p.1

Levy, S, 2010. ‘How Google’s Algorithm Rules the Web’, Wired, online, accessed 15th August 2017: https://www.wired.com/2010/02/ff_google_algorithm/

Linehan, H, 2017. ‘Google, Facebook are a Threat to Democracy, says Press Council Chair’, Irish Times, 25th May 2017, p.11

Lua, A, 2017. ‘Understanding the Instagram Algorithm: 7 Key Factors and Why the Algorithm is Great for Marketers’, BufferApp, online, accessed 1st October 2017: https://blog.bufferapp.com/instagram-algorithm

MacArther, A, 2017, ‘The Real History of Twitter, in Brief’, LifeWire, online, accessed 2nd October 2017: https://www.lifewire.com/history-of-twitter-3288854

Makulilo, A, 2017. ‘Rebooting Democracy? Political Data-Mining and Biometric Voter Registration in Africa’, Information and Communications Technology

Marchi, R, 2012. ‘With Facebook, Blogs, and Fake News, Teens Reject Journalistic “Objectivity”’, Journal of Communication Inquiry, Volume 36, p.246

Matteson, S, 2014. ‘Google Turns in a User for Allegedly Possessing Criminal Material’, TechRepublic, online, accessed 30th September 2017: http://www.techrepublic.com

Merriam-Webster Dictionary, 2017. ‘Fake News’, online, accessed 27th September 2017: https://www.merriam-webster.com/words-at-play/the-real-story-of-fake-news

Miller, D, 2012. ‘Google: Let Us Opt Out of Your Data Mining Machine’, Wired, online, accessed 30th September 2017: https://www.wired.com/insights/2012/10/google-opt-out/

Newton, C, 2016. ‘Zuckerberg: The Idea That Fake News on Facebook Influenced the Election is “Crazy”’, The Verge, online, accessed 26th September 2017: https://www.theverge.com/2016/11/10/13594558/mark-zuckerberg-election-fake-news-trump

The Nudging Company, ‘Nudging and Behavioural Design’ online, accessed 23rd September 2017: https://thenudgingcompany.com/en/free-online-workshop-on-nudging-and-behavioral-design/

O’Neil, C, 2016. ‘Commentary: Facebook’s Algorithm vs. Democracy’, PBS, online, accessed 30th September 2017: http://www.pbs.org/wgbh/nova/next/tech/facebook-vs-democracy/

O’Neil, C, 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown Publishing

Oremus, W, 2016. ‘Who Controls Your Facebook Feed’, Slate, online, accessed 16th August 2017: http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.html

Oxford Reference, ‘Algorithm’, A Dictionary of Social Media, Oxford University Press

Pennington, N, 2013. ‘Facebook Democracy: The Architecture of Disclosure and the Threat to Public Life’, European Journal of Communication, p.193

Phillips, S, 2007. ‘A Brief History of Facebook’, The Guardian, online, accessed 26th September 2017: https://www.theguardian.com/technology/2007/jul/25/media.newmedia

Richey, S & Taylor, B, 2017. Google and Democracy, Taylor & Francis Ltd, p.1

Samler, J, 2017. ‘Why Facebook’s Algorithms are Destroying Democracy’, Harbus, online, accessed 30th September 2017: http://www.harbus.org/2017/facebooks-algorithms-destroying-democracy/

Shoval, N, 2017. ‘Facebook is Dangerous for Democracy – Here’s Why’, Mashable, online, accessed 28th September 2017: http://mashable.com/2017/07/17/facebook-social-media-dangerous-for-democracy/#pfqHTN6bRPqo

Silverman, C, 2016. ‘Hyperpartisan Facebook Pages Are Publishing False And Misleading Information At An Alarming Rate’, Buzzfeed, online, accessed 26th September 2017: https://www.buzzfeed.com/craigsilverman/partisan-fb-pages-analysis?utm_term=.gwJPAMm2Qm#.uj5PYxjKRj

Smith, C, 2016. ‘Why Facebook and Google Mine Your Data, And Why There’s Nothing You Can DO to Stop It’, BGR, online, accessed 30th September 2017: http://bgr.com/2016/02/11/why-facebook-and-google-mine-your-data-and-why-theres-nothing-you-can-do-to-stop-it/

Smith, P, 2017. ‘Dear Internet, Can We Talk? We Have an Information Pollution Problem of Epic Proportions’, Misinfocon, online, accessed 26th September 2017: https://misinfocon.com/dear-internet-can-we-talk-we-have-an-information-pollution-problem-of-epic-proportions-a1c31b600fdc

Spinney, L, 2017. ‘Facebook and Instagram, Blurring the Line Between Individual and Collective Memories’, Nature, Volume 543, p.168

Statt, N, 2017. ‘Mark Zuckerberg Just Unveiled Facebook’s New Mission Statement’, The Verge, online, accessed 26th September 2017: https://www.theverge.com/2017/6/22/15855202/facebook-ceo-mark-zuckerberg-new-mission-statement-groups

Strickland, J, 2017. ‘Why is the Google Algorithm So Important?’, How Stuff Works, online, accessed 30th September 2017: http://www.computer.howstuffworks.com/google-algorithm.htm

Stubb, A, 2017. ‘Why Democracies Should Care Who Codes Algorithms’, Financial Times, online, accessed 23rd September 2017: https://www.ft.com/content/0322c920-421b-11e7-9d56-25f963e998b2

Sultan, A, 2016. ‘Matchmaking Sites: An Algorithm of the Heart’, Sydney Morning Herald, online, accessed 23rd September 2017: http://www.smh.com.au/technology/sci-tech/matchmaking-sites-an-algorithm-of-the-heart-20160215-gmuztu.html

Tufekci, Z, 2015. ‘Facebook Said Its Algorithms Do Help Form Echo Chambers, and the Tech Press Missed It’, New Perspectives Quarterly, p.9

Vestager, M, 2017. ‘A Healthy Democracy in a Social Media Age’, European Commission, online, accessed 1st October 2017: https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/healthy-democracy-social-media-age_en

Vise, DA & Malseed, M, 2005. The Google Story, Delacorte Press, pp.1-10

White, J, 2012. Bandit Algorithms for Website Optimization, O’Reilly Media, pp.7-9

Worstall, T, 2013. ‘Google Is A Significant Threat To Democracy: Therefore It Must Be Regulated’, Forbes, online, accessed 30th September 2017: https://www.forbes.com/sites/timworstall/2013/04/02/google-is-a-significant-threat-to-democracy-therefore-it-must-be-regulated/#2523f85227c9

Zhang, J, Ackerman, MS & Adamic, L, 2007. ‘Expertise Networks in Online Communities: Structure and Algorithms’, Proceedings of the 16th International Conference on World Wide Web, ACM, p.221

Orientalism and the Media’s Treatment of the 1st January Istanbul Nightclub Attack


The entire study of mass communication is “based on the premise that media have significant effects” (McQuail 1994, p.327). In the realm of hard news reporting, this can be especially true when negativity and sensationalism are used to skew perception, exploit fear, or craft a news story so that it appeals to as many people as possible. In today’s mediascape, in which a large number of organisations compete for audiences’ attention, a news story may be presented or framed in many different ways. Examining how this is done, and its likely outcomes, is valuable in understanding the functions and effects of mass communication. This essay will examine four news organisations’ – two English, one Turkish, and prominent Qatari broadcaster Al Jazeera – coverage of the 1st January Istanbul nightclub attack in the days immediately after the incident. Instances of media framing and use of rhetoric will be recorded, and potential motivations for framing suggested. Edward Said’s work on Orientalism provides a theoretical framework in which media framing of this news story can be contextualised.

Public communication occurs when individuals or organisations communicate with a large audience: the effects and implications of which have been scrutinised for decades. Framing by news organisations can influence the actions and choices an audience makes with a piece of information (Scheufele 1999, p.114). Entman (1993, p.52) described media framing as “select[ing] some aspects of a perceived reality and mak[ing] them more salient in a communicating text”, with the aim of “promot[ing] a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation”.

Edward Said wrote that the West’s view of the East – the societies and countries in the Middle East and Asia – is a “regular constellation of ideas” created as a “system of knowledge”, providing Europeans with an identity which is a “superior one in comparison with all the non-European peoples and cultures” (Williams & Chrisman 1993, p.133). In Orientalism (1978), Said is concerned with establishing the context of the East as an “arena of continual imperial ambition” (Scott 2008, p.64) and describes the West’s ‘othering’ of the East as being forged in the realms of empire, patronisation and interference. This otherness is described as being created over centuries by Westerners viewing the East as a place of despotism, arbitrary lawlessness, and servility (Lockman 2004, p.48), which creates a “willed, imaginative and geographic distinction … between East and West” (Said 1978, p.140). While some critics have charged Said with cherry-picking evidence to create a case of Western racism against the East (Lockman 2004, p.182; Scott 2008, p.64), his work on Orientalism has been hugely influential since it was published. It is still relevant today, in that it can provide a framework for examining how the gap that exists between one human consciousness or set of societies and another can widen rapidly and tragically under “circumstances of time, distance, or oppression” (Scott 2008, p.64). Prejudice against Muslims preceded the 9/11 attacks and the so-called ‘War on Terror’, but those events and the many terrorist attacks which followed have created a climate of distrust surrounding many Muslim communities (Ogan et al. 2013, p.28). These feelings of distrust continue to be perpetuated by some Western media organisations.

In the early hours of 1st January 2017, a gunman opened fire in Istanbul’s Reina nightclub, which was filled with revellers celebrating New Year’s Eve. Thirty-nine people were killed and dozens wounded before the gunman fled the scene. Citizens of Morocco, Lebanon, Libya, Belgium, Saudi Arabia and France were killed, officials later said (Pamuk & Tattersall, 2017, online). Witnesses said the gunman shouted Islamist slogans as he discharged his weapon. He was arrested by Turkish authorities on 16th January, and it was reported he had links with Islamist militant groups (Arslan, 2017, online). News organisations picked up the story within minutes of the incident happening.

The Guardian has traditionally operated and been regarded as a left-wing or centre-left publication on the left-right political spectrum, and has been known for portraying Middle-Eastern refugees with empathy (Pupavac 2008, p.270). On 1st January, it first reported the nightclub attack story with a piece entitled ‘Turkey nightclub shooting: Istanbul on alert after gunman kills dozens’ (The Guardian 2017, online). The story labels the perpetrator a “gunman” and “attacker”, and by the fifth sentence, notes that “no group has claimed responsibility for the attack”, before moving on to describe the known series of events in simple, factual detail, including the number of dead, their nationalities, and details of the police search for the attacker. The publication quickly began a blog of rolling coverage of the news item, running through the following 24 hours (The Guardian 2017, online). Again, writers described the perpetrator as the “assailant”, “attacker” and “gunman”, with no reference to nationality, religion, or skin colour. A single mention of religion exists in a quote by Turkey’s most senior cleric, who condemned the attack as “savagery … that no Muslim conscience can accept” (The Guardian 2017, online). In a story published on 5th January entitled ‘Istanbul nightclub gunman identified, says Turkish foreign minister’, The Guardian reported that the identity of the gunman had been established, but did not give further details as his identity was not yet confirmed. In the same article, it was mentioned that “Isis claimed responsibility for the attack” (The Guardian 2017, online) and that Turkey is a NATO member working with the United States against Isis in Syria and Iraq. No direct implication was made that this fact and the attack were linked.

The Daily Mail has traditionally operated and been regarded as a conservative or right-wing publication, and has received criticism for portraying Middle-Eastern refugees in a negative fashion (Khosravinik 2009, p.477). Shortly after the attack took place, the Daily Mail reported the story in a piece entitled ‘Terrifying moment terrorist dressed as Santa stalks Istanbul nightclub where he killed 39 and wounded 69 before leaving his weapon behind – as funerals are held for victims just 13 hours after the atrocity’ (Daily Mail 2017, online). The fourth sentence in the story includes the words “it is unclear who carried out the shooting, however recent terror attacks in Turkey have been carried out by groups such as ISIS and Kurdish militants” (Daily Mail 2017, online), immediately suggesting the motivations or background of the attacker. Several sentences later, it is noted that the Turkish President “has vowed to fight to the end against all forms of attack by terror groups and their backers” and that the attack “had been carried out with Kalashnikov rifles” (Daily Mail 2017, online), again framing the attack as having been carried out by a terror group of ‘Eastern’ origin. On 2nd January, the Daily Mail ran a story with the headline ‘ISIS claim responsibility for Istanbul nightclub atrocity as police hunt gunman who murdered 39 revellers in five-minute shooting spree’ (Daily Mail 2017, online). The first sentence of the story begins with the words “ISIS fanatics…”, mentions the type of weapon as a Kalashnikov, states that the killer “shouted in Arabic during the attack”, and lists a series of unrelated attacks which occurred in Turkey throughout 2016, before moving the focus to the United Kingdom by describing London as being on “high alert” and having an increased number of police officers on patrol (Daily Mail 2017, online). The Daily Mail published further stories daily until 16th January with a heavy focus on the attacker’s supposed links to ISIS, along with a ‘selfie’ photograph of the alleged attacker described as “menacing” (Daily Mail 2017, online).

Al Jazeera, despite its relatively short history, has been described as having “changed the face of a formerly parochial Arab media” (Zayani 2005, p.1) and as an organisation that has “scooped” Western media many times (El-Nawawy 2003, p.1). The broadcaster has helped to shape Arab identities in the public sphere, while “rattling the status quo” in the West (Seib 2008, p.7). On 1st January, Al Jazeera first reported the story under the headline ‘Istanbul attack: Dozens dead at Reina nightclub’ (Al Jazeera 2017, online), referring to the perpetrator as “attacker” and quoting a Turkish minister as “hunting one ‘terrorist’”. The story mentions that no claim of responsibility has been made for the attack, but that “experts say the needle of suspicion points at” ISIS (Al Jazeera 2017, online), and goes on to describe other terrorist attacks which occurred in Turkey during the previous twelve months. On 2nd January, Al Jazeera published a story with the headline ‘Istanbul: Police release photo of Reina attack suspect’ (Al Jazeera 2017, online), which displayed the photo with no accompanying description. The article quotes the Turkish Deputy Prime Minister on the country’s state of emergency and reports the attack as being claimed by ISIS, but does not state this as fact or make unsubstantiated claims on terrorism-related activity. By 17th January, in a story published with the headline ‘Istanbul Reina club suspect “confesses”: official’, Al Jazeera quotes Istanbul’s governor as saying that a suspect, Uzbekistan national Abdulgadir Masharipov, has confessed to the attack, and that it was “carried out in the name of [ISIS]” (Al Jazeera 2017, online). The story again sticks to quoting officials rather than making firm statements about the perpetrator’s arrest or possible motivations for the attack. Interestingly, the writer of the story deems it important to mention that the perpetrator was found and arrested in the Esenyurt district, which is “on Istanbul’s European side” (Al Jazeera 2017, online). This is not mentioned in any of the Western-published stories on the arrest.

Turkey ranks low on the Reporters Without Borders press freedom index (Solmaz 2015, online), but only one of its top-four-selling newspapers is pro-government: the Daily Sabah. On 1st January the Daily Sabah reported the attack with a story headlined ‘Terror attack on Istanbul nightclub leaves 39 dead, 65 wounded’, which describes the perpetrator simply as an “assailant” (Daily Sabah 2017, online) and makes no mention of religion. On 2nd January, a story was published with the headline ‘US denies having intelligence on Istanbul nightclub attack which killed 39’ (Daily Sabah 2017, online), bringing a potentially important new issue to the public’s attention, and one which was not mentioned anywhere in Western media. The story quotes the nightclub owner, Mehmet Koçarslan, as claiming U.S. sources had intelligence on the attack (Daily Sabah 2017, online). On the same day, the story ‘Istanbul nightclub attacker’s identity coming to light as Turkish police deepens probe’ was published (Daily Sabah 2017, online), in which the alleged perpetrator’s wife is reported as saying she was unaware of her husband’s “sympathies with the Daesh terrorist organisation”. Again, the name ‘ISIS’ is not mentioned. Use of the term ‘Daesh’ in media has been described as a better choice by a range of world leaders, including French Foreign Minister Laurent Fabius, who said “This is a terrorist group and not a state … the term Islamic State blurs the lines between Islam, Muslims, and Islamists” (Khan 2014, online).

Aristotle described rhetoric as fundamentally “the political art of persuasion” (Varisco 2011, p.96), and this ‘art’ was present to varying degrees in the news organisations’ stories analysed. From this analysis, it can be said that The Guardian reported the story with little to no framing of the attack as being of ‘Eastern’ origin, and mentions of the religion and appearance of the attacker were minimal or non-existent. The Guardian showed very little evidence of Said’s description of the West ‘othering’ the East. The Daily Mail almost immediately framed the attacker as an “ISIS fanatic” (Daily Mail 2017, online), and the majority of related stories in the days following the attack mentioned ISIS in the headline or opening paragraphs. The Daily Mail was the only publication to mention that the weapons used were Kalashnikovs, and described the alleged perpetrator’s unremarkable photograph as “menacing” (Daily Mail 2017, online). Known for portraying Muslims as an “alien other” (Saeed 2007, p.1), the Daily Mail displayed the largest amount of reporting which fitted Said’s description of the West’s ‘othering’ of the East. This framing fits with Entman’s description of “select[ing] some aspects of a perceived reality and mak[ing] them more salient in a communicating text” with the aim of “promot[ing] a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation” (1993, p.52). A likely result is that Europeans are presented with an identity which Said described as a “superior one in comparison with all the non-European peoples and cultures” (Williams & Chrisman 1993, p.133), and that the resulting idea of Europe is “a collective notion identifying ‘us’ Europeans as against all those non-Europeans” (Said 1978, p.134). Al Jazeera displayed restraint in not making unsubstantiated claims about the attacker’s identity or links to terrorist groups in the days following the attack, instead quoting the Turkish Prime Minister and government officials. It is interesting to note that Al Jazeera found it necessary to mention that the perpetrator was found and arrested “on Istanbul’s European side” (Al Jazeera 2017, online), perhaps confirming its status as a broadcaster which “rattl[es] the status quo” in the West (Seib 2008, p.7). The Daily Sabah, it could be argued, was bold in raising the question of whether the United States had any prior warning of the attack, and was the most careful of the news organisations analysed in labelling the group allegedly responsible as Daesh, not ISIS (Daily Sabah 2017, online).

In conclusion, it can be said that Western media frames a vision of the East through its mass media organisations, although the extent to which this occurs varies depending on an organisation’s traditional position on the left-right political spectrum. Reporting news stories concerning terrorism or religious extremism in the East can be particularly problematic for Western news organisations. Said’s theory that the West allows the East into its consciousness through a filtered grid – a complex relationship between power, domination and varying degrees of hegemony – is still as relevant today as it was in the late 1970s. It could be argued that Qatari broadcaster Al Jazeera provides a reliable, alternative option to Western media for coverage of stories concerning the Middle East and Asia.

References

Al Jazeera, ‘Istanbul attack: Dozens dead at Reina nightclub’, online, accessed 2nd February 2017: http://www.aljazeera.com/news/2017/01/scores-dead-attack-istanbul-nightclub-170101003450788.html

Al Jazeera, ‘Istanbul Reina club suspect “confesses”: official’, online, accessed 2nd February 2017: http://www.aljazeera.com/news/2017/01/istanbul-reina-club-suspect-confesses-official-170117084328630.html

Al Jazeera, ‘Istanbul: Police release photo of Reina attack suspect’, online, accessed 2nd February 2017: http://www.aljazeera.com/news/2017/01/istanbul-police-release-photo-reina-attack-suspect-170103052219132.html

Arslan, R, 2017. ‘Abdulkadir Masharipov: Who is Istanbul Gun Attack Suspect?’, BBC European News, online, accessed 28th January 2017: http://www.bbc.com/news/world-europe-38648350

Daily Mail, ‘”I had no idea he was an ISIS sympathiser – we came to Turkey for work”: Istanbul nightclub gunman’s wife tells police how she discovered he had murdered 39 people when she saw it on TV’, online, accessed 2nd February 2017: http://www.dailymail.co.uk/news/article-4083438/A-massacre-military-planning-Highly-trained-Istanbul-nightclub-killer-used-FLARES-light-targets-weeks-entering-Turkey-wife-two-children.html

Daily Mail, ‘Is this the face of a cold-eyed killer? Menacing SELFIE released of suspected ISIS gunman goading Turkish secularists by posing in Taksim Square protest site as different CCTV clips shows the wanted man roaming Istanbul before the massacre’, online, accessed 2nd February 2017: http://www.dailymail.co.uk/news/article-4082290/Turkish-police-release-film-footage-ISIS-gunman-murdered-39-Istanbul-nightclub-quiz-eight-suspects-raid-homes.html

Daily Mail, ‘ISIS claim responsibility for Istanbul nightclub atrocity as police hunt gunman who murdered 39 revellers in five-minute shooting spree’, online, accessed 2nd February 2017: http://www.dailymail.co.uk/news/article-4079942/Pictured-Female-security-guard-27-gunned-Istanbul-New-Year-terror-attack-nightclub.html

Daily Mail, ‘Terrifying moment terrorist dressed as Santa stalks Istanbul nightclub where he killed 39 and wounded 69 before leaving his weapon behind – as funerals are held for victims just 13 hours after the atrocity’, online, accessed 2nd February 2017: http://www.dailymail.co.uk/news/article-4079942/Pictured-Female-security-guard-27-gunned-Istanbul-New-Year-terror-attack-nightclub.html

Daily Sabah, ‘Terror attack on Istanbul nightclub leaves 39 dead, 65 wounded’, online, accessed 2nd February 2017: http://www.dailysabah.com/istanbul/2017/01/01/terror-attack-on-istanbul-nightclub-leaves-39-dead-65-wounded

Daily Sabah, ‘US denies having intelligence on Istanbul nightclub attack which killed 39’, online, accessed 2nd February 2017: http://www.dailysabah.com/war-on-terror/2017/01/01/us-denies-having-intelligence-on-istanbul-nightclub-attack-which-killed-39

El-Nawawy, M & Iskandar, A, 2003. Al-Jazeera: The Story of the Network that is Rattling Governments and Redefining Modern Journalism, Basic Books, p.1

Entman, RM, 1993. ‘Framing: Toward Clarification of a Fractured Paradigm’, Journal of Communication, Volume 43, p.52

Fortenbaugh, WW, 2007. ‘Aristotle’s Art of Rhetoric’, A Companion to Greek Rhetoric, Wiley, pp.107-123

The Guardian, ‘Istanbul attack: Manhunt for attacker who killed 39 in nightclub – as it happened’, online, accessed 2nd February 2017: https://www.theguardian.com/world/live/2017/jan/01/istanbul-nightclub-attack-dozens-killed-new-years-eve-mass-shooting-live-updates

The Guardian, ‘Istanbul nightclub gunman identified, says Turkish foreign minister’, online, accessed 2nd February 2017: https://www.theguardian.com/world/2017/jan/04/istanbul-nightclub-gunman-identified-says-turkish-foreign-minister

The Guardian, ‘Turkey nightclub shooting: Istanbul on alert after gunman kills dozens’, online, accessed 2nd February 2017: https://www.theguardian.com/world/2016/dec/31/turkey-armed-attacker-opens-fire-in-istanbul-nightclub-reports

Khan, Z, 2014. ‘Words Matter in “Isis” War, So Use “Daesh”’, The Boston Globe, online, accessed 2nd February 2017: https://www.bostonglobe.com/opinion/2014/10/09/words-matter-isis-war-use-daesh/V85GYEuasEEJgrUun0dMUP/story.html

Khosravinik, M, 2009. ‘The Representation of Refugees, Asylum Seekers and Immigrants in British Newspapers During the Balkan Conflict (1999) and the British General Election (2005)’, Discourse & Society, Volume 20, p.477

Lockman, Z, 2004. Contending Visions of the Middle East: The History and Politics of Orientalism, Cambridge, pp.48,182

McQuail, D, 1994. Mass Communication Theory: An Introduction, Sage, p.327

Ogan, C, Willnat, L, Pennington, R & Bashir, M, 2013. ‘The Rise of Anti-Muslim Prejudice: Media and Islamophobia in Europe and the United States’, The International Communication Gazette, Sage, p.28

Pamuk, H & Tattersall, N, 2017. ‘Gunman Kills 39 in Istanbul Nightclub, Manhunt Under Way’, Reuters World News, online, accessed 28th January 2017: http://www.reuters.com/article/us-turkey-attack-idUSKBN14K0NH

Pupavac, V, 2008. ‘Refugee Advocacy, Traumatic Representations and Political Disenchantment’, Government and Opposition, Volume 43, p.270

Saeed, A, 2007. ‘Media, Racism and Islamophobia: The Representation of Islam and Muslims in the Media’, Sociology Compass, Wiley Online, p.1

Said, E, 1978. Orientalism, New York: Vintage, pp.130-140

Seib, P, 2008. The Al Jazeera Effect: How the New Global Media are Reshaping World Politics, Potomac Books, p.7

Scheufele, D, 1999. ‘Framing as a Theory of Media Effects’, Journal of Communication, Volume 49, pp.103-122

Scott, M, 2008. ‘Edward Said’s Orientalism’, Essays in Criticism, Volume 58, p.64

Solmaz, M, 2015. ‘The Other Side of the Coin in Turkish Media’, Middle East Eye, online, accessed 2nd February 2017: http://www.middleeasteye.net/columns/other-side-coin-turkish-media-707841943

Varisco, DM, 2011. Publications on the Near East: Reading Orientalism: Said and the Unsaid, University of Washington Press, p.96

Williams, P & Chrisman, L, 1993. ‘Orientalism’, Colonial Discourse and Post-Colonial Theory, Wheatsheaf, pp.132-149

Zayani, M, 2005. The Al Jazeera Phenomenon: Critical Perspectives on New Arab Media, Georgetown University, p.1

Digital Technologies and the Erosion of Social Trust


Social trust, and the negative impact of its decline, has interested and concerned economists and political scientists for some time (Hakansson & Witmer 2015, p.517). As digital technology evolves, modern forms of media communication have become increasingly complex and discursive in terms of developing trust relations (Berry 1999, p.28), and concerns involving social trust and digital technology have become increasingly intertwined. Societies benefit from high levels of social trust, and while we are now communicating faster and in a greater variety of ways than ever before, it is not immediately obvious whether the many forms of digital technology, with their rapidly evolving natures, have a positive or negative impact on the social trust within a society. Social trust relies on many factors, and while digital technology is far from being the only, or even the major, factor influencing the amount of social trust within a society, it can play a major part. This essay will examine the question of whether digital technologies erode social trust, and the potential implications of digital technologies and related issues for social trust.

Social trust is a “belief in the honesty, integrity and reliability of others” (Taylor 2007, p.1). It provides the “cohesiveness necessary for the development of meaningful social relationships” (Welch 2001, p.3) and is highly important for both social and political reasons. The level of social trust within a society has implications in the fields of sociology, economics, psychology, anthropology and others. It contributes to a wide range of social phenomena and attributes, ranging from stable government, social equity, market growth and public harmony to elements on an individual level, such as optimism, physical and mental well-being, education, community, and participation (European Social Survey, online). Individuals benefit from being part of a society with high social trust, as well as contributing to, and participating in, it. Social trust is a “deep-seated indicator of the health of societies and our economies” (Halpern 2015, online) and, when averaged across a country, levels of social trust “predict national economic growth as powerfully as financial and physical capital, and more powerfully than skill levels” (Halpern 2015, online). Abundant social trust in a society is often seen as “a lubricant facilitating all types of economic exchanges” (Krishna 2000, p.71).

In 1994 there were just 10,000 websites globally (Swire 2014, online). This changed with the launch of search engines – particularly market leader Google – as so-called ‘walled gardens’ such as AOL “were killed” (Swire 2014, online), allowing users to easily and quickly find what they were looking for. E-commerce exploded, and in 2001, well over 100 million Americans had purchased a product online (Mutz 2009, p.439). Blogs, chat websites, and early forms of social media followed, and broadband Internet began to increase in availability in 2005. Sites such as YouTube, which allowed users to upload and watch videos, became hugely popular, and social media emerged as a major online presence with Facebook and Twitter in 2004 and 2006 respectively. Smart phones (particularly Apple’s iPhone) brought the Internet to mobile phones in the early 2010s and have “completely changed the way that people consume content on a daily basis” (Swire 2014, online). The majority of Internet time is now spent on mobile devices worldwide, and around 50% of people now get their news from a digital source such as a website, app or e-mail alert (American Press Institute 2016, online). The media’s role in mediating experience by bridging the gap between events and audiences is a broad but extremely important one (Berry 1999, p.28), and media organisations now have to take into account the presentation of their news more than ever, as users of digital media place high importance on the presentation and delivery of news.

The Internet’s early architecture was built on a foundation of trust (Hurwitz 2013, p.1580), but as it matured, its uses and users became increasingly complex. Online social networks are now a major part of everyday life and the method by which many of us stay connected with friends, consume news, and conduct business. They are a prominent method by which people foster social connections, and the significance and depth of these connections and their relationship with fostering trust has been extensively studied. The Internet’s transition from an early “community with a common purpose” to one that “supports myriad, often conflicting, private interests” (Hurwitz 2013, p.1580) has both positive and negative aspects, with corresponding effects on social trust.

Variation across individuals in their levels of trust in the Internet supports the view that the Internet is an ‘experience’ technology – users’ views of it are greatly shaped by their experience (Dutton & Shepherd 2003, p.7). The rapid proliferation of social media websites since the mid-2000s has accelerated this notion, as users’ experiences of using social media can differ widely. It has been suggested that social networking websites should inform potential users that “risk-taking and privacy concerns are potentially relevant and important concerns” before they sign up to become members (Fogel & Nehmad 2009, p.153), as one of the major negative aspects of social networking sites is the potential for users to cause harm to other users, causing a drop in social trust. Internet users initially experience a high level of trust in online communities, but as time passes, trust rapidly declines (Parker 2015, online).

Social networking on the Internet takes place in a context of trust, but trust is a concept with many dimensions and facets (Grabner-Krauter & Bitter 2013, p.1). Studies suggest that the lay public relies on social trust when making judgements of risks and benefits where personal knowledge about a subject is lacking (Siegrist & Cvetkovich 2000, p.1), so Internet users entrust other Internet users with expertise, identity, and personal information, and some even with money lending (Lai & Turban 2008, p.387). This often causes distress or harm, with a corresponding drop in social trust. Trust in the Internet, and in the information obtainable from it, is critical to the development of electronic services ranging from public service delivery to online commerce, and these are harmed if social trust is low.

However, Hakansson and Witmer (2015, p.518) argue that greater use of social media and an increase in the number and variety of online communities can affect social trust positively. They suggest that because information and knowledge are vital to building trust, and digital media transmits information much faster than face-to-face relationships, social trust can be increased as a result. Social media also makes it easier to find new relationships and opportunities for marketing.

As the Internet has matured and the number of users suffering harm or having a negative experience online has increased, there have been growing calls for Internet providers to mediate use of the Internet, which has caused concern for people who place high value on privacy. Various methods have been proposed to calculate levels of, and manage, social trust in online social networks, but none has proved to work definitively (Carminati et al. 2014, p.16). In today’s Internet, intermediaries are increasingly active (Hurwitz 2013, p.1581), and can protect users from experiencing harm online, thus preventing a drop in social trust. Parigi and Cook (2015, p.19) explain how digital technology operates as an assurance structure when mediation is a factor in interactions: mediation “reduces overall uncertainty and promotes trust between strangers”, but at the same time it removes the human emotions connected with meeting new people. Social interactions become uniform and stripped of uncertainty or individuality, and are therefore devoid of the “cohesiveness necessary for the development of meaningful social relationships” (Welch 2001, p.3) that high social trust requires.
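One family of the proposed methods surveyed by Carminati et al. (2014) estimates trust between strangers by propagating trust ratings along chains of mutual acquaintances. The sketch below is a simplified, hypothetical version of that idea; the names, ratings and multiplicative decay rule are all assumptions, not a specific published metric.

```python
# Transitive trust: multiply trust ratings along a path, take the best path.
def path_trust(graph, source, target, visited=None):
    """Best multiplicative trust along any acyclic path from source to target."""
    if visited is None:
        visited = {source}
    if source == target:
        return 1.0
    best = 0.0
    for neighbour, weight in graph.get(source, {}).items():
        if neighbour not in visited:
            best = max(best, weight * path_trust(graph, neighbour, target,
                                                 visited | {neighbour}))
    return best

# Hypothetical direct trust ratings between users (0 = none, 1 = complete):
graph = {
    "alice": {"bob": 0.9},
    "bob":   {"carol": 0.8},
}
print(path_trust(graph, "alice", "carol"))  # 0.72: trust decays with distance
```

The multiplicative decay captures why mediated, long-distance relationships carry less of the trust that direct acquaintance provides.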

An additional concerning element of the proliferation of intermediaries is that it can often be unclear “which institutions, if any, safeguard users from harm” (Hurwitz 2013, p.1581). In the post-trust Internet, users “cannot embrace active intermediaries without assurances that their data will be handled in accordance with their expectation” (Hurwitz 2013, p.1582). Moving forward, it is the very nature of the Internet that both allows intermediaries to thrive and makes establishing liability for them extremely difficult. A recent study showed that 48% of Americans expressed concern about corporate intrusion in their Internet activities (Brynko 2011, p.11).

In many cases, attempts to regulate digital technologies can erode social trust. In democratic societies, it is the role of legislators to defend and promote the public interest, but Australia is rare among Western democracies in that it has no constitutional guarantee of media freedom or free expression (Pearson 2012, p.99). Generally, journalists prefer to run their own affairs by creating systems of self-regulation (White 2014, p.4), but are often subject to intense scrutiny. In Australia, a proposed 2010 federal government review was meant to map out the future of media regulation in the digital era (Conroy 2010, online), but fell by the wayside after the News of the World phone-hacking scandal shifted attention back to print media (Pearson 2012, p.99). Further government inquiries in 2011 and 2012 sought to establish the extent to which rapidly developing news businesses and their digital platforms required regulation, but no obvious solution was reached (Pearson 2012, p.99). The lack of a written guarantee of media freedom in Australia means that any attempts to regulate media are more of a threat to democracy, and hence to social trust. Enforced self-regulation “is not a suitable option – at least not until free expression earns stronger protection” (Pearson 2012, p.99). A UK study found that current regulation of the Internet is “failing to address the democratic value in enabling citizens to navigate … public space” and “failing to support informed choices about content” (Fielden 2011, p.99).

While the Internet has no guarantees of freedom from regulation, it presents many challenges to those seeking to regulate it. A lack of centralised control, widely used encryption techniques, its international nature, and the anonymity of its users are just a few of the factors which make regulation of the Internet incredibly difficult. While cyberspace has been described as “a terra nullius in which social relations and laws have no historical existence and must be reinvented” (Chenou 2014, p.205), the nature of the Internet, and therefore its effect on the social trust of a nation or group of people, varies greatly depending on location. For example, Australia has legislation prohibiting abuse of market power to lessen competition, whereas in the United States, these laws are not as stringent.

However, not all legislation involving regulation of digital technology is likely to decrease social trust. It could be argued that the Spam Act 2003 is likely to prevent a decrease in a society’s social trust as it greatly prohibits online fraud and encourages self-regulation by users. Similarly, regulation of cyberspace for children is almost universally accepted as a reasonable form of mediation in digital technology with no decrease in social trust likely as a result. While the Australian Labor Party’s 2007 proposal for a blanket ban on content deemed harmful to children was rejected, further legislation has been implemented to protect children online in Australia with the Enhancing Online Safety for Children Act 2015. In the United Kingdom, a 2008 report by the government’s Culture, Media and Sport Select Committee expressed concern about the amount of time taken for the most extreme content to be removed from video-sharing websites such as YouTube (Fielden 2011, p.78). While YouTube introduced a ‘safety mode’ in 2010 to address concerns over parental controls, there is still much concern over the amount of inappropriate material children can access, and the lack of regulation faced by the hosts of this material. As so much data is uploaded to sites such as YouTube every minute, hour and day, it is physically impossible for every piece of content to be checked, so the future of online content regulation for sites such as these is, essentially, crowdsourced (Fielden 2011, p.77). The YouTube community guidelines state: “Every new community feature on YouTube involves a certain level of trust. We trust you to be responsible, and millions of users respect that trust, so please be one of them” (YouTube, online). Discussions at government level concerning the possibility of further regulation of online content still exist in many Western democracies.

Another area with the potential to erode social trust is copyright. Copyright has developed over centuries, and friction between users of digital technologies and regulatory bodies has existed for as long as digital technology has been a medium for communication. The digital age has made many traditional modes of reproduction of intellectual property obsolete, and despite many positive aspects of faster and more widely available communication options, methods of creativity and ownership have been tested in profound ways (Fitzgerald 2008, online). The Digital Millennium Copyright Act 1998 criminalised copyright infringement on the Internet, but has attracted criticism for overzealous application of its powers and for undermining free speech, and therefore has the potential to erode social trust. In the digital age, copyright activists argue that overzealous use of copyright laws online restricts access to information (Lessig 2008, online). Organisations such as Creative Commons and the Electronic Frontier Foundation (EFF) provide alternatives to copyright, and aim to protect the public interest regarding new technologies (Lambe 2014, p.448). The EFF is especially active in the fields of intellectual property, free speech, anti-surveillance, and bloggers’ rights, and has been in legal disputes with several commercial entities and law enforcement agencies as a result.

Today, every social media user is a publisher of sorts (Cuddy 2016, online). Social media provides instant access to potentially huge audiences, and huge potential for copyright infringement too. Social networking sites pose perhaps the greatest risk of eroding social trust in the realm of copyright: they provide a platform through which users who share their creative work with the world can have it stolen and used by others (Legal Aid NSW 2017, online). Copyright law in Australia covers works that are created or shared online, but a social media website’s terms and conditions may change the rights to the work, and these conditions are not always clear or well understood.

Another element of digital technologies with vast potential to erode social trust is government and corporate Internet surveillance. Post 9/11, the United States government and its federal agencies greatly increased surveillance of citizens online and introduced a large amount of cybersecurity legislation as part of their overall anti-terrorism policy (Nhan & Carroll 2012, p.394). Many watchdog groups expressed concern as a result, although the effects of the legislative and policy changes were perhaps unclear until notorious NSA whistleblower Edward Snowden leaked information regarding government surveillance of private citizens’ online information and habits. In 2014, a survey found that 60% of respondents had heard of Snowden, and that 39% had changed their online behaviour as a result of the information he leaked (Jardine & Hampson 2016, online). Jardine and Hampson (2016, online) also found that many people’s routine online activity had changed substantially, with the most common change being a move from ‘public’ search engines to private search engines with built-in anonymity technology. Similarly, recent scandals in the United States exposing government surveillance of citizens’ online information are likely to have greatly eroded trust in digital media, and thus, social trust (Anderson & Rainie 2014, p.20). This supports the theory that digital technology has a negative effect on social trust (Hakansson & Witmer 2015, p.518).

There are many real-life examples of digital technology affecting democracy worthy of study, and many of them display potential to erode social trust. Govier (1997, p.20) points out that distrust in politics is “especially prevalent, and, while it may be well-founded, can have pernicious effects” on a society. The 2016 United States presidential election saw the Electronic Frontier Foundation involve itself in an attempt to force a recount in three key states amid concerns that hackers may have manipulated voting machines and optical scanners (Hoffman-Andrews 2016, online), potentially affecting the overall result of the election. In its role as the Fourth Estate, the media is hypothetically the guardian of the public interest and the regulator of those holding democratic power. However, as Coronel (2003, p.9) explains, the media are often used “in the battle between rival political groups, in the process sowing divisiveness rather than consensus, hate speech instead of sober debate, and suspicion rather than social trust”. In these cases, media contribute to public cynicism and apathy, have a negative effect on democratic processes, and hence contribute to a decline in social trust.

President Trump’s first 100 days in office have seen him launch numerous verbal attacks on the media, which have likely eroded social trust for many Americans, but interestingly, polls have provided conflicting results on whether the American public trusts the media or the President more (Farber 2017, online; Lima 2017, online; Patterson 2017, online). The goals of advocates for free speech online and anti-regulation groups are often intertwined with those seeking political reform, and those active under the current administration are no different. Ericson (2015, online) goes so far as to say that Lawrence Lessig has “already transformed intellectual-property law with his Creative Commons innovation, and now he’s focused on an even bigger problem: the US’ broken political system”.

In conclusion, it can be said that, as societies function on the basis of trust, and users of digital technology are no different, social trust is paramount to a well-functioning democracy. For a high level of social trust to be maintained, users need to trust the Internet and associated digital technologies to keep their information secure and private. Trust is the bedrock of the Internet, the basis for much of its success, and, in many ways, the philosophy behind much of what keeps it running. However, the Internet provides many opportunities for social trust to be eroded, and trust in digital technologies, and especially the Internet, is arguably declining. When trust in digital technology starts to wane, or government agencies or organisations are shown to be breaching privacy or are perceived as being dishonest, users change how they behave and social trust declines. Recent copyright and regulatory conflict, and scandals involving surveillance and privacy, have likely had a negative effect on social trust in many Western democracies. The resulting drop in social trust harms a society in terms of public harmony, economics, and other areas. Social cohesion can be built up or broken down by high or low social trust.

References

American Press Institute, 2016. ‘How People Decide What News to Trust on Digital Platforms and Social Media’, online, accessed 13th May 2017: https://www.americanpressinstitute.org/publications/reports/survey-research/news-trust-digital-social-media/

Anderson, J & Rainie, L, 2014. ‘The Future of the Internet: Net Threats’, Internet and American Life Project, Pew Research Center, p.20

Berry, D, 1999. Ethics and Media Culture: Practices and Representations, Taylor and Francis, p.28

Brynko, B, 2011. ‘Trust in Social Networking’, Information Today, p.11

Carminati, B, Ferrari, E & Viviani, M, 2014. Security and Trust in Online Social Networks, p.16

Chenou, JM, 2014. ‘From Cyber-Libertarianism to Neoliberalism: Internet Exceptionalism, Multi-stakeholderism, and the Institutionalisation of Internet Governance in the 1990s’, Globalizations, pp.205-223

Conroy, S, 2010. ‘Convergence Review’, media release, accessed 17th May 2017: http://www.minister.dbcde.gov.au/media/media_releases/2010/115

Coronel, S, 2003. ‘The Role of Media in Deepening Democracy’, United Nations, online, accessed 13th May 2017: http://unpan1.un.org/intradoc/groups/public/documents/un/unpan010194.pdf

Cuddy, RH, 2016. ‘Copyright Issues for Social Media’, Legal Zoom, online, accessed 20th May 2017: https://www.legalzoom.com/articles/copyright-issues-for-social-media

Dutton, WH & Shepherd, A, 2003. ‘Trust in the Internet: The Social Dynamics of an Experience Technology’, Oxford Internet Institute, University of Oxford, p.7

Ericson, B, 2015. ‘Lawrence Lessig: Laws that Choke Creativity’, transcript, Communication and New Media, online, accessed 20th May 2017: https://medium.com/communication-new-media/lawrence-lessig-laws-that-choke-creativity-4aa99ded4ce4

European Social Survey, 2017. ‘Social Trust and Its Origin’, online, accessed 13th May 2017: http://essedunet.nsd.uib.no/cms/topics/2/1/

Farber, M, 2017. ‘Sorry President Trump, But Voters Trust the Media More Than You’, Fortune, online, accessed 20th May 2017: http://fortune.com/2017/02/23/voters-trust-media-more-than-trump/

Federal Register of Legislation, Australian Government, ‘Enhancing Online Safety for Children Act 2015’, online, accessed 17th May 2017: https://www.legislation.gov.au/Details/C2015A00024/Controls/

Federal Register of Legislation, Australian Government, ‘Spam Act 2003’, online, accessed 17th May 2017: https://www.legislation.gov.au/Series/C2004A01214

Fielden, L, 2011. ‘Standards Regulation in the Age of Blended Media’, Regulating for Trust in Journalism, City University London, pp.78, 99

Fitzgerald, B, 2008. ‘Copyright 2010: The Future of Copyright’, European Intellectual Property Review, accessed 20th May 2017: http://eprints.qut.edu.au/

Fogel, J & Nehmad, E, 2009. ‘Internet Social Network Communities: Risk Taking, Trust, and Privacy Concerns’, Computers in Human Behavior, p.153

Govier, T, 1997. Social Trust and Human Communities, MQUP: Montreal, p.20

Grabner-Krauter, S & Bitter, S, 2013. ‘Trust in Online Social Networks: A Multifaceted Perspective’, Forum for Social Economics, Volume 44, pp.48-68

Hakansson, P & Witmer, H, 2015. ‘Social Media and Trust – A Systematic Literature Review’, Journal of Business and Economics, University of Malmo, pp.517-524

Halpern, D, 2015. ‘Social Trust is One of the Most Important Measures That Most People Have Never Heard Of – and It’s Moving’, The Behavioural Insights Team, online, accessed 13th May 2017: http://www.behaviouralinsights.co.uk/uncategorized/social-trust-is-one-of-the-most-important-measures-that-most-people-have-never-heard-of-and-its-moving/

Hoffman-Andrews, J, 2016. ‘Election Audits Ought to Be Like an Annual Checkup, Not a Visit to the Emergency Room’, Electronic Frontier Foundation, online, accessed 20th May 2017: https://www.eff.org/deeplinks/2016/12/audit-better-faster-cheaper

Hurwitz, J, 2013. ‘Trust and Online Interaction’, University of Pennsylvania Law Review, Volume 161, pp.1580-1590

Jardine, E & Hampson, F, 2016. ‘Trust: The Social Basis of the Internet Ecosystem’, Tripwire, online, accessed 19th May 2017: https://www.tripwire.com/state-of-security/security-awareness/trust-social-basis-internet-ecosystem/

Krishna, A, 2000. ‘Creating and Harnessing Social Capital’, Social Capital: A Multifaceted Perspective, pp.71-93

Lai, LS & Turban, E, 2008. ‘Groups Formation and Operations in the Web 2.0 Environment and Social Networks’, Group Decision and Negotiation, pp.387–402

Lambe, J, 2014. ‘Electronic Frontier Foundation’, Encyclopaedia of Social Media and Politics, California: SAGE Publications, pp.448-449

Legal Aid NSW, 2017. ‘Online Social Networking: Copyright’, online, accessed 20th May 2017: http://www.legalaid.nsw.gov.au/publications/factsheets-and-resources/online-social-networking-copyright

Lessig, L, 2008. Remix: Making Art and Commerce Thrive in the Hybrid Economy, online, accessed 20th May 2017: https://archive.org/stream/LawrenceLessigRemix/Remix-o.txt

Lima, C, 2017. ‘Poll: Trump Administration Edges Media in Voter Trust’, Politico, online, accessed 20th May 2017: http://www.politico.com/story/2017/02/trump-media-trust-poll-fox-news-235168

Mutz, D, 2009. ‘Effects of Internet Commerce on Social Trust’, Public Opinion Quarterly, pp.439-461

Nhan, J & Carroll, B, 2012. ‘The Offline Defence of the Internet: An Examination of the Electronic Frontier Foundation’, SMU Science and Technology Law Review, Volume 15, pp.389, 394

Parigi, P & Cook, K, 2015. ‘Trust and Economics in the Sharing Economy’, Viewpoints on the Sharing Economy, Sage Journals, p.19

Parker, C, 2015. ‘Trust Erodes Over Time in the Online World, Stanford Experts Say’, Stanford News, online, accessed 13th May 2017: http://news.stanford.edu/2015/03/18/sharing-trust-online-031815/

Patterson, T, 2017. ‘News Coverage of Donald Trump’s First 100 Days’, Harvard Kennedy School, online, accessed 20th May 2017: https://shorensteincenter.org/news-coverage-donald-trumps-first-100-days/

Pearson, M, 2012. ‘The Media Regulation Debate in a Democracy Lacking a Free Expression Guarantee’, Pacific Journalism Review, Griffith University, p.99

Siegrist, M & Cvetkovich, G, 2000. ‘Perceptions of Hazards: The Role of Social Trust and Knowledge’, Risk Analysis, p.1

Swire, R, 2014. ‘The Evolution of Digital Media Over the Past 20 Years’, Parallax, online, accessed 18th May 2017: https://parall.ax/blog/view/3052/the-evolution-of-digital-media-over-the-past-20-years

Welch, MR, 2001. ‘Determinants and Consequences of Social Trust’, Sociology, University of Notre Dame Press, p.3

White, A, 2014. ‘The Trust Factor: An EJN Review of Journalism and Self-regulation’, Ethical Journalism Network, online, accessed 20th May 2017: http://ethicaljournalismnetwork.org/assets/docs/142/118/79dd78e-837b376.pdf

YouTube, 2017. ‘Community Guidelines’, online, accessed 20th May 2017: https://www.youtube.com/yt/policyandsafety/communityguidelines.html


Dark Tourism and Mass Media

[Image: the Killing Fields, Cambodia]

A large amount of tourism literature deals with the marketing and consumption of “pleasant diversions in pleasant places” (Strange & Kempa 2003, p.386), but a number of communications scholars have recently attempted to explore tourism sites of a darker nature. This has helped popularise the form of travel known as dark tourism: tourism which provides “potential spiritual journeys for [those] who wish to gaze upon real and recreated death” (Stone 2006, p.54). In modern Western societies, normal death is hidden from public consumption, yet “extraordinary death is recreated for popular consumption” (Stone 2012, p.1565). Marketing of dark tourism often overlaps with historical or heritage tourism (Mullins 2016, online), and can present promoters with challenges not present with the tourism of ‘pleasant diversion’. This essay will examine some of those challenges and the relationship between mass media and dark tourism in the context of this rapidly developing tourism form.

Dark tourism has a long history, having existed since the earliest pilgrimages and the times when people would travel to witness public executions (Jahnke 2013, p.6). When academic research on the topic became significant in the 1990s, at the same time as growing numbers of tourists were seeking these new experiences, the complexities of dark tourism’s relationship with mass media became apparent. Just as all cultural production and consumption is complex and dynamic, the production and consumption of dark tourism has been described variously as “continuous and interrelated as demand appears to be supply‐driven and attraction‐based” (Farmaki 2013, p.281), fuelled by “an increasing supply of carnage and blood” online (Hiebert 2014, online), driven by factors “extend[ing] from an interest in history and heritage to education to remembrance” (Yuill 2004, p.1), and as a “source of private pleasure” (Seaton 1996, p.235).

The issue of how death is presented to mass audiences is particularly complex. In the realm of dark tourism, media can bring about a “neutralisation of death” (Jahnke 2013, p.8), helping tourists to become more aware of the mortality of others and of themselves, a mental state which Stone (2012, p.1565) describes as “a space to construct contemporary ontological meanings of mortality”. In many ways, mass media and dark tourism are “in the same business” (Walter 2009, p.41) in that they both mediate death to mass audiences. Many Western societies have relinquished their attachments to the dead, yet retain a vibrant interest in history (Walter 2009, p.40) and the people who inhabited familiar spaces, setting the stage for two key industries to bridge the gap between the dead and the contemporary living: mass media and tourism.

Mass media plays a central role in marketing many dark tourism sites, with tourism literature, Hollywood films, television, newspapers, and comic strips performing the role of public relations. Conversely, mass media can keep other sites from public view (Yuill 2004, p.125). By placing sites and events in the forefront of communications, mass media have the ability to attract visitors to dark tourism destinations. Media can provide the public with a general understanding of, and encourage an interest in, dark tourism sites, although Seaton and Lennon (2004, p.62) describe how many Western media outlets tend towards creating a moral panic around dark tourism sites through “sensational exposés of dubiously verified stories”: the result of moral debates about dark tourism within society.

At the same time as promoting and marketing dark tourism destinations, mass media has a distinct influence over public opinion and interpretation of many sites of dark tourism (Ntunda 2014, online). New media technologies can “deliver global events into situations that make them appear to be local” (Lennon & Foley 2000, p.46), embodying simulation and interpretation of historical experiences for a mass audience. Public perception of the importance or prominence of dark tourism sites may also be affected by mass media. Dachau concentration camp, for example, was not one of the largest Nazi camps, yet is one of the most visited, due to its appearance in many films and books (Young 1993, p.10). However, while media is central to understanding and interpreting historical events, it can cause dissatisfaction brought about by constant exposure to simulation (Lennon & Foley 2000, p.47). This can often be countered by the reality of visiting a permanent ruin, monument or preserved space.

Motivations of visitors travelling to dark tourism destinations are varied, and often not directly related to mass media. The need to reconcile comparisons between imagined landscapes and topographical reality (Podoshen 2012, p.263), an interest in history and heritage, educational reasons, collective and personal remembrance (Dunkley & Morgan 2010, p.860), and emotional attachment to a place (Rasul & Mowatt 2011, p.1410), among others, can be important factors encouraging dark tourism. Biran and Hyde (2013, p.191) suggest the primary motivation for many dark tourism participants is to “contemplate life and one’s mortality through gazing upon the significant other dead”, fitting with Stone’s (2012, p.1565) description of dark tourism destinations as “space[s] to construct contemporary ontological meanings of mortality”. Additionally, in the past two decades, many tourists have sought to escape the “sanitised version of reality that tourism has traditionally offered” (Robb 2009, p.51), with many no longer content to lounge by the pool or hotel bar, or to embark on guided tours. It could perhaps be argued that each of these motivations could be influenced by mass media to varying degrees, but media is unlikely to be the main driving force. It is also problematic to group all dark tourism destinations together under one category, making it just as difficult to group together motivations for visiting them. Representations of death vary from site to site and often from visitor to visitor (Robb 2009, p.51). Indeed, many managers of dark tourism sites no longer wish their destinations to be viewed as dark, but as sites of sensitive heritage with a focus on social engagement (Magee & Gilmore 2014, p.898).

In conclusion, it can be said that, despite many challenges, mass media plays a part in encouraging tourists’ interest in dark tourism sites, although it is neither the only, nor arguably the major, driving factor in promoting dark tourism destinations. Dark tourism sites are cultural landscapes which can be interpreted in many ways, as can tourists’ motivations for visiting them. Visitors to dark tourism destinations seek a variety of meanings from their experience, and their reasons for visiting sites of real or recreated death are numerous and varied. Dark tourism is a complex issue, in terms of consumption and supply, and in its relationship with mass media.

References

Biran, A & Hyde, K, 2013. ‘New Perspectives on Dark Tourism’, International Journal of Culture, Tourism and Hospitality Research, pp.191-198

Dunkley, R & Morgan, N, 2010. ‘Visiting the Trenches: Exploring Meanings and Motivations in Battlefield Tourism’, Tourism Management, pp.860-868

Farmaki, A, 2013. ‘Dark Tourism Revisited: A Supply/Demand Conceptualisation’, International Journal of Culture, Tourism and Hospitality Research, p.281

Hiebert, P, 2014. ‘The Growing Quandary of Dark Tourism’, Pacific Standard, online, accessed 9th January 2017: https://psmag.com/the-growing-quandary-of-dark-tourism-733629dd26c5#.xcwen7dal

Jahnke, D, 2013. ‘Dark Tourism and Destination Marketing’, Theseus.Fi, online, accessed 7th January 2016: https://www.theseus.fi/handle/10024/64693

Lennon, J & Foley, M, 2000. ‘Interpretation of the Unimaginable: The U.S. Holocaust Memorial Museum, Washington, D.C., and “Dark Tourism”’, Dark Tourism, pp.46-50

Magee, R & Gilmore, A, 2014. ‘Heritage Site Management: From Dark Tourism to Transformative Service Experience’, The Service Industries Journal, p.898

Mullins, D, 2016. ‘What is Dark Tourism?’, Cultural Tourism, online, accessed 7th January 2016: http://culturaltourism.thegossagency.com/what-is-dark-tourism/

Ntunda, J, 2014. ‘Investigating the Challenges of Promoting Dark Tourism in Rwanda’, Anchor Academic Publishing, online, accessed 7th January 2016: http://www.anchor-publishing.com/e-book/277349/investigating-the-challenges-of-promoting-dark-tourism-in-rwanda

Podoshen, J, 2012. ‘Dark Tourism Motivations: Simulation, Emotional Contagion and Topographic Comparison’, Tourism Management, pp.263-271

Rasul, A & Mowatt, C, 2011. ‘Visiting Death and Life: Dark Tourism and Slave Castles’, Annals of Tourism Research, p.1410

Robb, E, 2009. ‘Violence and Recreation: Vacationing in the Realm of Dark Tourism’, Anthropology and Humanism, p.51

Seaton, AV, 1996. ‘Guided by the Dark: From Thanatopsis to Thanatourism’, International Journal of Heritage Studies, pp.234-244

Seaton, AV & Lennon, J, 2004. ‘Thanatourism in the Early 21st Century: Moral Panics, Ulterior Motives and Ulterior Desires’, in TV Singh (ed.) New Horizons in Tourism: Strange Experiences and Stranger Practices, pp.62–82

Stone, P, 2012. ‘Dark Tourism and Significant Other Death: Towards a Model of Mortality Meditation’, Annals of Tourism Research, Volume 39, p.1565

Stone, P, 2006. ‘A Dark Tourism Spectrum: Towards a Typology of Death and Macabre Related Tourist Sites, Attractions and Exhibitions’, Tourism: An Interdisciplinary International Journal, p.54

Strange, C & Kempa, M, 2003. ‘Shades of Dark Tourism: Alcatraz and Robben Island’, Annals of Tourism Research, pp.386–405

Walter, T, 2009. ‘Dark Tourism: Mediating Between the Dead and the Living’, The Darker Side of Travel: The Theory and Practice of Dark Tourism, pp. 39-55

Young, JE, 1993. The Texture of Memory: Holocaust Memorials and Meaning, New Haven: Yale University Press, p.10

Yuill, S, 2004. Dark Tourism: Understanding Visitor Motivation at Sites of Death and Disaster, Texas A&M University, pp.1-125

 

Twin Peaks as Complex Television: An Evaluative Critique

[Image: Twin Peaks town sign]

Television in the 21st century is more complex than that of the late 1980s and prior (Hundley 2007, p.3). This is largely due to the increase in complex narratives, characterisations and interesting plots that require stricter viewer attention: elements which have become commonplace in television series since they were first seen in the early 1990s. Much of this new complexity was conceived in the science fiction genre, with programs such as Twin Peaks, The X-Files and Lost ushering in a new era of complex television. The popularity of these shows had ramifications across all areas of television, transforming the mainstream television arena and enabling the success of complex storylines by “weaning audiences onto them” (Hundley 2007, p.6). This influence is still evident in the production of quality television today. This essay will present an evaluative critique of the American television series Twin Peaks (1990-1991), examining how it accords with the definition of complex television, including both its textual and contextual dimensions, and the various factors which played out in the series’ making.

Complex television is described by Mittell as an “alternative to the conventional episodic and serial forms that have typified most American television since its inception” (2015, p.17), and he explains that the viewer can derive pleasure from trying to figure out the kernels and satellites in plotlines of complex narratives (2015, p.24). Complex television texts are encoded with dense meaning and imagery, often including multiple characterisations and intricate plotlines. Narrative complexity can be considered a distinct narrational mode, or a “historically distinct set of norms of narrational construction and comprehension” (Bordwell 1985, p.1) that allows for “a range of potential storytelling possibilities” (Mittell 2015, p.22), and in which oscillation between long-term story arcs and stand-alone episodes is possible. A prominent example is the 1990s American television series The X-Files, which Sconce (2004, p.93) describes as having both an “ongoing, highly elaborate conspiracy plot” and “self-contained ‘monster-of-the-week’ stories”. Complex television rejects the need for plot closure within every episode, employs a range of serial techniques that build over time, is not as uniform as traditional serial norms, creates an elaborate network of characters, and is often highly unconventional in many ways (Mittell 2015, p.17; Booth 2011, p.371).

Twin Peaks was created jointly by David Lynch and Mark Frost, and premièred in the United States in April 1990. It was a ground-breaking series that “changed most norms about television at that time” (Hundley 2007, p.6), and despite consisting of only two series of 29 episodes in total, has inspired numerous complex debates about its interactions with its medium (Baderoon 1999, p.94). Nominated for fourteen Emmys and broadcast in 55 non-American markets (Muir 2001, p.250), Twin Peaks was described as “revolutionary” at the time of its release (Hundley 2007, p.24), and is still considered so today. The series primarily centred on an investigation by FBI Special Agent Dale Cooper (Kyle MacLachlan) into the murder of Laura Palmer (Sheryl Lee): a beloved high school student, homecoming queen, and native of the fictional small town of Twin Peaks, close to the Canadian border. Cooper’s investigations quickly lead him to discover that Palmer was not as innocent as she might have seemed. He learns that the teenager lived a precarious, multi-layered life, that the town and its people are full of secrets and mystery, and that the surrounding woods are home to something supernatural and possibly evil. The viewer quickly becomes aware that Twin Peaks is a series “full of secrets, variegated orders, ambiguous characters and [with a] supernatural overtone” (Loacker & Peters 2015, p.624). In the company of an array of complex characters who “cheat, steal, kill, rape, and deal drugs” (Hundley 2007, p.24), Cooper solves the murder at the end of the eight-episode first series. The second, and much longer, series moves the narrative ever deeper into the realm of science fiction, as Cooper investigates the malevolent spirit, Bob, who possessed Laura’s father, and visits the Black Lodge in the woods. Ratings dropped in the second series, perhaps due to the increase in some of the more bizarre science-fiction-oriented elements of the show, and the fact that the murder mystery had been largely resolved, but it is the aforementioned ingredients and the quality of their presentation which made Twin Peaks such a highlight of modern television.

Twin Peaks presents an isolated community beset by evil forces, and its narrative is driven by a murder investigation: an event which reverberates through the close-knit community. It has been argued that the first series constituted little more than an “above-average, literarily-allusive, highly exploitative mini-series about an honours student cheerleader by day/prostitute-drug dealer by night” (Dolan 1995, p.43), but this description does not begin to scratch the surface of the series’ depth. Twin Peaks was partially marketed as a police procedural (Collins 1992, p.345) and has many elements of a classic detective story, in which the investigator is the “traditionally-expected centre of signification” (Carrion 1993, p.242). It is easy to suggest that Dale Cooper is the “literal hero” of Twin Peaks (Baderoon 1999, p.94) and that the series revisits the staples of traditional televisual story-telling by “inhabiting the genres of detective series and soap opera” (Fiske 1987, p.237). However, the way in which each episode feeds back onto itself as the narrative progresses towards a conclusion moves the story away from the traditional detective format and into a space much more complex and interesting. The narrative is also constantly undermined by evil forces, and by many other televisual devices introduced by Lynch, which remove ontological certainty in the text and add to viewing enjoyment. There is an ominous sense that anything could befall any of the characters at any time (Woodward 1990, p.50), and deciphering and understanding the intricacies of their fates “became a national pastime and a boon for TV and film critics alike” (Muir 2001, p.251). The presence of these elements in Twin Peaks again points to its accordance with the definition of complex television, fitting with Mittell’s description of complexity as being when the “ongoing narrative pushes outward, spreading characters across an expanding story world” (2015, p.52). Its multiple complexities led to it being labelled a “genre-splicing work of film art, a parodic, convention-defying detective story” (Lavery 1996, p.16).

Thompson (2003, p.120) suggests that Twin Peaks can be described as “art television, or television which brings elements from art cinema to the small screen”, and for Lynch, film and television are “art medium[s] that subvert and play with well-known boundaries, meanings – and with our senses” (Loacker & Peters 2015, p.621). He seemed thoroughly determined to push these boundaries throughout the entirety of Twin Peaks, with the most obvious challenge to reason and convention being the development of the story of Laura Palmer (Telotte 1995, p.162). Her double or ‘phantom’ life obscures the viewer’s desire to see her lead a normal existence; instead, “drugs, illicit sex, sadomasochism, and hints of devil worship are or were the hidden, yet real, highlights of Laura’s after-school life” (Telotte 1995, p.162), and become inseparable from her identity. Additionally, eccentric characters with sometimes odd or silly mannerisms are deployed generously throughout the narrative to challenge convention and question normality. Even Agent Cooper, the “literal hero” (Baderoon 1999, p.94), uses peculiar methods to solve cases, including speaking to a tape recorder, using dreams and visions, and practising Tibetan meditation.

Multiple uses of complexity on concurrent levels mean Twin Peaks’ narration is extremely effective at “frustrat[ing] the resolution of the murder mystery by revealing ever more elaborate networks of connections” (Baderoon 1999, p.102). It offers a radical rereading of the detective story and, at its close, “disavows the implications of its own subversiveness” (Baderoon 1999, p.94). In combining elements of a police investigation with soap opera and strong surreal elements, the series “prominently alters and undermines ‘normal’ orders, established boundaries and the ‘grid’ of common meaning – in television narratives, but also far beyond” (Telotte 1995, p.165). In the closing scene of the final episode of series two, in which Cooper is possessed by Bob, the hero of the story occupies the position held by the female victim in the opening scene. The audience is faced with a narration “simultaneously subversive and ambivalent” (Baderoon 1999, p.105), as well as dramatic and gripping.

Since the series aired, Twin Peaks has increasingly been framed in the context of science fiction (Weinstock & Spooner 2015, p.161), and it is useful to examine this contextualisation to see how it confirms the series as complex television. Agent Cooper faces evil forces from not only within the town, but also the surrounding woods – a historical link could be drawn to many 1950s science fiction films, such as Invasion of the Body Snatchers, which presented monsters as a displaced form of communism threatening the country both internally and externally. Lynch also includes many direct links to the decade throughout the series, from the casting of actors who rose to prominence in the 1950s (Piper Laurie, Russ Tamblyn and Richard Beymer) to the fashion, style and musical taste of Audrey Horne (Sherilyn Fenn), and the pristine image of the 1950s diner. As the second series moves deeper into the realm of science fiction, Major Briggs’ superiors further reference the era by warning Cooper that Briggs’ abduction “could make the Cold War seem like a case of the sniffles” (Hundley 2007, p.26).

Additionally, much of the ambiguity concerning the natural and supernatural elements of the murder of Laura can be seen as being influenced by 1980s science fiction (Hundley 2007, pp.26-27). Lingering doubt over the role Leland Palmer’s possession played in Laura’s murder, and the cliffhanger ending in which Agent Cooper is himself possessed by Bob, leave the audience unsure of many elements of the story. It is uncommon for a traditional detective story to leave unresolved issues, further cementing the idea that Twin Peaks fits Mittell’s (2015, p.17) definition of complex television, in that it is “highly unconventional in many ways”.

Another element of Twin Peaks‘ complexity, which can be seen throughout the history of horror and science fiction, is the inclusion of sites of deviance or different behaviour (Loacker & Peters 2015, p.622): places where otherworldly occurrences take place. These include The Great Northern Hotel, The Roadhouse and One-Eyed Jacks, and other sites which appear in an imaginary or dreamlike state: the Red Room, the Black and White Lodges, and the Ghostwood Country Club and Estate – a “space in the business imagination of Benjamin Horne” (Loacker & Peters 2015, p.622). Similar sites serve as spaces of deviance throughout television and film history, from the Overlook Hotel in Stanley Kubrick’s The Shining to the Bates Motel in Hitchcock’s Psycho, and in several screen adaptations of Stephen King’s work. The sites in Twin Peaks which exist between the real and the imaginary bring about many rapid changes in the narrative, add many layers of complexity to plotlines, and can leave the viewer puzzled or intrigued (Davis, 2010). There are also sites which are presented as less deviant or evil, but which are often just as effective in altering the course of the narrative: the Double R Diner or the Twin Peaks Sheriff’s Department, for example (Loacker & Peters 2015, p.622). Agent Cooper’s meditative states and dreams are also arguably sites of deviance, although they are used for good in the solving of crime. Thus, it could be argued, the physical landscape of the town of Twin Peaks, and hence the series itself, is a “maze” (Blassmann 1999, p.49), made up of “multiple, seemingly contradicting and obscure formulas, codes and landmarks” (Westwood 2004, p.775): again adding to the complexity and overall quality of the viewing experience.

It is also useful to examine television’s history to see which factors may have influenced Twin Peaks‘ production, and to contextualise it within the evolution of television in the United States over a number of decades. Beginning with visual and narrative style, it can be argued that Twin Peaks has been influenced by film noir: a genre which emerged in the 1940s and 1950s, consisting of drama infused with fear, crime, shadows and violent death, or “films filled with trust and betrayal” (Duncan 2000, p.7). Lynch has drawn on many themes and styles from film noir throughout his career, most especially in his choice of settings in Mulholland Drive and Lost Highway. In Twin Peaks, the ‘otherness’ of the cold northern climate mirrors the psychological state of many of its characters. In his version of small-town America, a majority of characters feel and act like outsiders.

The town of Twin Peaks itself has multiple significances, and is the basis for much of the complexity throughout the narrative. Dienst (1994, p.95) explains that Lynch and Frost wrote the first storylines for the series based on an idea of the town, rather than any particular plotline. Small towns have a long tradition in the American narrative and are often mythologised in American television (Carroll 1993, p.288), but this concept is quickly revealed to be a construct in Twin Peaks (Pollard 1993, p.303).

Much of Twin Peaks‘ style is deeply steeped in the Gothic genre of television: a genre generally including plot devices which “produce fear or dread, the central enigma of the family, and a difficult narrative structure or one that frustrates attempts at understanding” (Ledwon 1993, p.260). The Gothic is “a literature of nightmare” (MacAndrew 1979, p.3), where “fear is the motivating and sustaining emotion” (Gross 1989, p.1), and in Twin Peaks, the viewer is exposed to devices such as “incest, the grotesque, repetition, interpolated narration, haunted settings, mirrors, doubles, and supernatural occurrences” (Ledwon 1993, p.260). Its narrative breaks away from the uniformity of traditional television through transgression and uncertainty in a distinctly post-modern fashion. Lynch combines the mundane with the horrific repeatedly throughout the series; most especially when the evil Bob appears to Laura while she is performing simple tasks like writing in her diary or changing clothes. By “exploit[ing] the … potential of Gothic devices to the hilt” and “challeng[ing] the most deep-seated expectations of … television” (Ledwon 1993, p.269), Lynch blurs the distinction between the normal and the abnormal, the everyday and the extraordinary, so that the Gothic becomes normal.

Additionally, the influence of many cultural factors is evident in Twin Peaks‘ narratives and its modes of production, and the combination of these lends further complexity to the series. A prominent cultural factor is that of gender and its treatment within the series. Following a decade in which concepts of masculinity and feminism had undergone significant public shifts and homosexuals had “moved from a position of outlaw to one of respectable citizen” (Rich 1986, p.532), Twin Peaks‘ writers were freer to challenge gender boundaries and “open up space for a wider range of acceptable masculinities” (Comfort 2009, p.44). This is done partly through giving value to a wide range of eccentric characters: many of the main male characters exhibit eccentric behaviours, and it can be argued that traditional gender roles are “freed up” (Comfort 2009, p.44) and that the idea of what masculinity entails is opened up to greater scope as a result. This is most evident in the inclusion of the character of DEA Agent Dennis/Denise (David Duchovny, future star of The X-Files), who reveals to Cooper that he is heterosexual despite dressing as a woman. In one short scene, the idea of masculinity is challenged and eccentricity is accepted at the same time.

However, another element of the culture which influenced Twin Peaks is of a more unsavoury nature. The series suggests that “the worst secrets of all … are the secret connections between culture and self that allow men to brutalise women” (Davenport 1993, p.258). Laura Palmer is first presented as a “stunning corpse wrapped in plastic” (Moore 2015, online), and while Lynch extended the narrative possibilities of television, he did so by telling the story of a girl whose downfall consisted of being abused – sexually and otherwise – by a variety of powerful men, although it has also been argued that Lynch is simply following a well-known formula of “exploiting our love affair with … sex and death” (George 1995, p.110). It is easy to ignore the reality of violence in Twin Peaks, as, when watching TV, people are “in their own homes and…well placed for entering into a dream” (Henry 1999, p.103), a mode of viewing that often overrides the opportunity television gives us to “critically and creatively reflect upon established, often idealizing images” (Weiskopf 2014, p.152). Upon release of the series, Lynch downplayed the violence, describing the plot as simply being “about a woman in trouble … and that’s all I want to say about it” (Blassmann 1999, online).

Storey (2015, p.210) describes all television as “hopelessly commercial”, and Twin Peaks displays commercial intertextuality, ranging from its follow-up feature film and The Secret Diary of Laura Palmer to international sales of T-shirts featuring the words ‘I Killed Laura Palmer’. The series was produced to win back sections of a fragmented audience partially lost to cable, cinema and video (Storey 2015, p.210), and was marketed to different audiences in various ways, drawing on elements of “Gothic horror, police procedural, science fiction and soap opera” (Collins 1992, p.345). Producers hoped the series would “appeal to fans of Hill Street Blues, St Elsewhere and Moonlighting, along with people who enjoyed nighttime soaps” (Allen 1992, p.342). This attempt to create new, post-modern productions is now well-established in complex television (Nelson 1996, p.677).

In conclusion, if complex television texts can be defined as being encoded with dense meaning and imagery, employing a range of serial techniques that build over time, containing elaborate networks of characters, and being highly unconventional in many ways, then Twin Peaks clearly qualifies as complex television. Its signs and codes are open to a range of interpretations, and its influences are as varied as the range of television shows it went on to influence in turn. A plethora of factors played out in the making of the series: historical, institutional, economic and cultural. It presents many different genre resonances to audiences, and can be considered a particularly high-quality example of complex television: the wealth of academic study it has attracted is evidence of this. Twin Peaks is an important example of everything television can be.

References

Allen, RC, 1992. Channels of Discourse Reassembled, London: Routledge, p.342

Baderoon, G, 1999. ‘Happy Endings: The Story of Twin Peaks’, Journal of Literary Studies, Volume 15, pp.94-107

Blassmann, A, 1999. ‘The Detective in Twin Peaks’, online, accessed 4th February 2017: http://www.thecityofabsurdity.com

Booker, MK, 2002. Strange TV: Innovative Television Series from the Twilight Zone to the X-Files, Westport, Connecticut: Greenwood Press, p.98

Booth, P, 2011. ‘Memories, Temporalities, Fictions: Temporal Displacement in Contemporary Television’, Television & New Media, Sage, p.371

Bordwell, D, 1985. Narration in the Fiction Film, Madison: University of Wisconsin Press, p.1

Carrion, MM, 1993. ‘Twin Peaks and the Circular Ruins of Fiction’, Literature/Film Quarterly, Volume 21, p.242

Carroll, M, 1993. ‘Agent Cooper’s Errand in the Wilderness: Twin Peaks and American Mythology’, Literature/Film Quarterly, Volume 21, p.288

Collins, J, 1992. ‘Television and Postmodernism’, The Politics of Postmodernism, p.345

Comfort, B, 2009. ‘Eccentricity and Masculinity in Twin Peaks’, Gender Forum, Volume 27, p.44

Davenport, R, 1993. ‘The Knowing Spectator of Twin Peaks: Culture, Feminism, and Family Violence’, Literature/Film Quarterly, Volume 21, pp.255-259

Dienst, R, 1994. Still Life in Real Time: Theory after Television, Durham & London: Duke University Press, pp.95,99

Dolan, M, 1995. ‘The Peaks and Valleys of Social Creativity: What Happened to/on Twin Peaks’, Full of Secrets: Critical Approaches to Twin Peaks, Detroit, Michigan: Wayne State University Press, pp.33-50

Duncan, P, 2000. Film Noir, Pocket Essentials, p.7

Fiske, J, 1987. Television Culture, London & New York: Routledge, p.237

George, DH, 1995. ‘Lynching Women: A Feminist Reading of Twin Peaks’, Full of Secrets: Critical Approaches to Twin Peaks, Wayne State University Press, p.110

Gross, LS, 1989. Redefining the American Gothic: From Wieland to Day of the Dead, Ann Arbor: UMI Research, p.1

Henry, M, 1999. ‘David Lynch: A 180-Degree Turnaround’, in Barney, RA (ed.), David Lynch: Interviews, Jackson: University Press of Mississippi, p.103

Hundley, K, 2007. ‘Narrative Complication through Science Fiction Television: From “Twin Peaks” to “The X-Files” and “Lost”’, Theater and Film, University of Kansas, pp.1-15

Jensen, PM & Waade, AM, 2013. ‘Nordic Noir Challenging “the Language of Advantage”: Setting, Light and Language as Production Valued in Danish Television Series’, Journal of Popular Television, Volume 1, pp.259-265

Lavery, D, 1996. ‘Introduction’, in Lavery, D, (ed.), Full of Secrets: Critical Approaches to Twin Peaks, Detroit: Wayne State University Press, p.16

Ledwon, L, 1993. ‘Twin Peaks and the Television Gothic’, Literature/Film Quarterly, Volume 21, pp.260-270

Loacker, B & Peters, L, 2015. ‘Exploring Absurdity and Sites of Alternate Ordering in Twin Peaks’, Ephemera, Volume 15, pp.621-649

Lost, 2004-2010. Television series, Touchstone Television/ABC Studios, United States

MacAndrew, E, 1979. The Gothic Tradition in Fiction, New York: Columbia UP, p.3

Marc, D, 1987. ‘Beginning to Begin Again’, Television: The Critical View, New York: Oxford University Press, pp.323-360

Mittell, J, 2015. Complex TV: The Poetics of Contemporary Television Storytelling, New York University Press, pp.17-25

Moore, S, 2015. ‘Never Mind How “Cool” Twin Peaks is, What About Taking it Seriously?’, The Guardian, online, accessed 6th February 2017: https://www.theguardian.com/commentisfree/2015/apr/06/cool-twin-peaks-david-lynch-abuse-sexual-murder-young-women

Muir, J, 2001. Terror Television: American Series, 1970-1999, Jefferson, North Carolina: McFarland & Co, pp.250-251

Nelson, R, 1996. ‘From Twin Peaks, USA, to Lesser Peaks, UK: Building the Postmodern TV Audience’, Media, Culture and Society, Sage: London, p.677

Newman, M, 2006. ‘From Beats to Arcs: Toward a Poetics of Television Narrative’, Velvet Light Trap, pp.16-28

Pollard, S, 1993. ‘Cooper, Details, and the Patriotic Mission of Twin Peaks’, Literature/Film Quarterly, Volume 21, p.303

Rich, R, 1986. ‘Feminism and Sexuality in the 1980s’, Feminist Studies, University of Maryland Press, p.532

Storey, J, 2015. Cultural Theory and Popular Culture: An Introduction, Routledge, p.210

Telotte, JP, 1995. ‘The Disorder of Things in Twin Peaks’, in Lavery, D, (ed.), Full of Secrets: Critical Approaches to Twin Peaks, Detroit: Wayne State University Press, p.165

Thompson, K, 2003. Storytelling in Film and Television, Cambridge: Harvard University Press, p.120

Twin Peaks, 1990-1991. Television series, Lynch/Frost Productions, United States

The X-Files, 1993-2002. Television series, 20th Century Fox Television, United States

Weinstock J & Spooner, C, 2015. Return to Twin Peaks: New Approaches to Materiality, Theory and Genre, Palgrave, p.161

Weiskopf, R, 2014. ‘Ethical-Aesthetic Critique of Moral Organization: Inspirations from Michael Haneke’s Cinematic Work’, Culture and Organization, Volume 20, pp.152-174

Westwood, R, 2004. ‘Comic Relief: Subversion and Catharsis in Organizational Comedic Theatre’, Organization Studies, Volume 25, pp.775-795

Woodward, RB, 1990. ‘A Dark Lens on America’, in Barney, RA, (ed.) David Lynch: Interviews, Jackson: University Press of Mississippi, p.50