Enrichment and Exploitation: How Website Algorithms Affect Democracy

Abstract

This essay discusses the role that algorithms in websites, social media and search engines play in the democratic processes of Western societies. As the political mechanisms of Western societies rely increasingly on the Internet to communicate information and to encourage voter participation, the way algorithms are configured to present information to the public is of great importance. Manipulation of search engine rankings or social media news feeds – intentionally or organically – can have a huge impact on what voters see and think about. Facebook and Google hold near-monopolies on news feeds and online search respectively, meaning any bias in the way their algorithms function can have ramifications at national and international levels. Evidence exists that manipulation of algorithms in Facebook and Google has contributed to influencing the outcomes of elections on several occasions. Examining how algorithms can affect elections and other civic processes is crucial for the future of healthy democracy in Western societies.

Keywords: algorithm, democracy, Internet, news, search engine, social media, website, Facebook, Google, Instagram, Snapchat, Twitter

Introduction and Research Questions

As of 2017, over half of the world’s population are estimated to be regular Internet users (Kemp 2017, online), and around the same proportion are regular users of social media (Chaffey 2017, online). With such vast amounts of data constantly moving through cyberspace, it makes sense that algorithms should be employed to sort, sift through, and make sense of it all. On the face of things, it seems logical for algorithms to present users of websites, social media and search engines with a selection of information relevant to what they are looking for, from which they can make informed decisions. The problem is that it is often impossible to know how an algorithm has arrived at a decision or set of search results, and many users are not aware that algorithms even exist, never mind how they come to the conclusions they do. With democratic processes now relying so heavily on information shared online, algorithms in websites, social media and search engines have the potential to play a crucial role in democracy. This essay investigates this issue, and seeks to answer the following questions:

- To what extent do algorithms in websites, networking services and social media have a negative effect on democracy in Western societies?

- To what extent, if any, can users of new and digital media be manipulated by algorithms to think or act in certain ways?

- To what extent do search engine algorithms affect democracy in Western societies?

- Which website, networking service or search engine is most likely to affect democracy through its use of algorithms?

Methodology

Search engines, social media, and the algorithms that operate them are now firmly embedded in the everyday fabric of Western societies, and increasingly in their democratic processes, with no indication that this is likely to change. Algorithms used in Facebook and Google have been extensively studied individually, but there has been less research on the overall effect of algorithms on democratic processes in Western societies. This research essay aims to fill that gap.

The essay examines the use of algorithms in websites, networking services and social media, and aims to answer the question of whether they have a negative effect on democracy in Western societies. A detailed literature review of the subject of online algorithms is followed by an examination of the algorithms used in Facebook, Google, Twitter, Instagram and Snapchat, with the likely effects of each discussed, especially in relation to democratic processes in Western societies.

Real-life examples of algorithms affecting democratic processes are examined, and the extent to which algorithms have influenced recent political outcomes is discussed. The essay will also discuss how algorithms are likely to affect democracy in the coming years.

Finally, suggestions are made regarding the way future democratic processes must interact with, and incorporate, algorithm-driven websites, social media and search engines, and conclusions are drawn on the future of the algorithm in democracy.

Literature Review

Origins

In their most basic form, algorithms are defined as “an automated set of rules for sorting data” (Oxford Reference 2017, online), and, in their online form, are concerned with “settings where the input data arrives and the current decision must be made by the algorithm without the knowledge of future input” (Bansal 2012, p.1). Algorithms are “dependent on the quality of their input data and the skills and integrity of their creators” (Devlin 2017, online). By definition, data is historical, with the result that algorithms predict the future based on actions taken in the past; their outputs can therefore be repetitive and flawed.

The first use of algorithms in an online sense occurred in the early 1970s, when they were applied in early software programs to bin-packing problems: organising and fitting items into a set space (Fiat & Woeginger 1998, p.7). This evolved in 1985, when Sleator and Tarjan constructed competitive algorithms to solve mathematical problems known as the list update problem and the paging problem (Fiat & Woeginger 1998, p.7). In the early 21st century, as the variety and use of digital technologies exploded, algorithms were still relatively harmless. Search engines offered personalised recommendations for products and services, and helped Internet users find what they wanted more quickly. Information was collected from personal meta-data – information gathered from “previous searches, purchases and mobility behaviour, as well as social interactions” (Helbing et al. 2017, online). From these humble beginnings, algorithms have evolved to know everything about us – where we are, what we are doing, and what we are feeling (Helbing et al. 2017, online).
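
To make the ‘online’ constraint concrete, the following sketch implements first-fit bin packing in Python: each item must be placed the moment it arrives, with no knowledge of future input. The item sizes are invented for illustration, and first-fit is only one of many bin-packing heuristics.

```python
# A minimal sketch of an online algorithm: first-fit bin packing.
# Each item must be placed the moment it arrives, without knowledge
# of future input -- the defining constraint of online algorithms.

def first_fit(items, capacity=1.0):
    """Place each arriving item into the first bin that has room for it."""
    bins = []  # each entry is the used capacity of one bin
    for item in items:
        for i, used in enumerate(bins):
            if used + item <= capacity:
                bins[i] += item
                break
        else:
            bins.append(item)  # no existing bin fits, so open a new one
    return bins

# The same items in a different arrival order can need more bins,
# because the algorithm cannot look ahead:
print(len(first_fit([0.4, 0.6, 0.4, 0.6, 0.4, 0.6])))  # 3 bins
print(len(first_fit([0.4, 0.4, 0.4, 0.6, 0.6, 0.6])))  # 4 bins
```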

Algorithms in the Digital Age

The ubiquity of Internet access and the huge number of ways by which it can be accessed mean it is now a “principal pillar of our information society” (Dusi et al. 2016, p.1805). Online communities have become hugely important and complex places in which people seek and share information (Zhang et al. 2007, p.221). As a result, online algorithms play a huge part in many aspects of our lives. Ellis (2016, online) explains how three factors shape the online lives of citizens of digital societies: “the endless search for convenience, widespread ignorance as to how digital technologies work, and the sacrifice of privacy and security to relentless improvements in the efficiency of e-commerce”. The more our lives become reliant on digital technology, the more we are likely to be influenced by algorithms, from everyday tasks like online shopping to our political participation in elections, referendums and other civic activities.

Algorithms still carry out the same relatively harmless tasks as they have since the Internet’s earliest days, including helping online shoppers make choices (“People who bought this book also bought this…” recommendations), matching an online dater with a more suitable partner (Sultan 2016, online), and retrieving search engine results better suited to the user based on past searches. Retail websites such as Amazon also use algorithms to keep pricing competitive – prices can drop, sometimes several times a day, until an item is the cheapest on the market and is sold, after which the price rises again (Baraniuk 2015, online).
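
The co-purchase idea behind such recommendations can be sketched in a few lines: suggest the items that most often appear in the same orders as the item being viewed. The orders and item names below are invented for illustration; production recommender systems are far more sophisticated.

```python
# A toy "people who bought this also bought..." recommender:
# count how often other items co-occur with the target item
# in past orders, and suggest the most frequent ones.
from collections import Counter

orders = [
    {"book_a", "book_b", "book_c"},
    {"book_a", "book_b"},
    {"book_b", "book_c"},
    {"book_a", "book_c"},
]

def also_bought(item, orders, top_n=2):
    """Return the items most often bought together with `item`."""
    co_counts = Counter()
    for order in orders:
        if item in order:
            co_counts.update(order - {item})
    return [other for other, _ in co_counts.most_common(top_n)]

print(also_bought("book_a", orders))  # e.g. ['book_b', 'book_c']
```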

Algorithms have evolved hugely from their humble beginnings, and can now “recognise handwritten language, describe the contents of photos and videos, generate news content, and perform financial transactions” (Helbing et al. 2017, online). Some can recognise language and patterns “almost as well as humans and even complete some tasks better than them” (Helbing et al. 2017, online). Today’s widespread use of algorithms online has been described in a range of ways, from offering small advantages to Internet users and making online communities smarter, to the more sinister end of the spectrum, entailing “capturing people and then keeping them on an emotional leash and never letting them go” (Anderson & Horvath 2017, online).

Despite huge advances in technology since the dawn of the Internet, algorithms can go wrong in spectacular ways even while conducting relatively simple tasks. An algorithm used to generate variations on the English World War II-era slogan “Keep Calm and Carry On” for a company selling printed t-shirts online produced thousands of alternatives, one of which was “Keep Calm and Rape a Lot” (Baraniuk 2015, online). The company faced public condemnation and folded as a result. In 2011, a Massachusetts man who had never committed a traffic offence in his life had his driving licence revoked because of a failure in algorithm-driven facial recognition software (Dormehl 2014, online). Similar, and more serious, faults have meant that voters have been removed from electoral rolls, parents mistakenly labelled as abusive, and businesses have had government grants and contracts cancelled (Dormehl 2014, online). Even more problematic is the way in which algorithms can falsely profile individuals as terrorists at airports, which happens at a rate of about 1500 a week in the United States (Dormehl 2014, online). Reduced budgets in law and order services have a large part to play in this, as staff cuts lead to a greater reliance on automated services.

Entering the Democratic Space

Algorithms offer many benefits to the democracies of Western societies, but often in a way that has many more advantages for institutions than for individual users of digital technologies (Ellis 2016, online). The convenience so hungrily sought by end-users is a commodity many online businesses are eager to sell, and the hidden clauses are often “unknowable and entirely beyond users’ control” (Ellis 2016, online). Awareness of algorithms’ lack of neutrality is low among end users, and while disclosure policies can help somewhat, the long-winded privacy policies which have become standard on the web are seldom read (Ellis 2016, online).

An example of a society heavily controlled by online data is Singapore. What started as a program set up with the aim of protecting its citizens from terrorism “has ended up influencing economic and immigration policy, the property market and school curricula” (Helbing et al. 2017, online). China is similar. Baidu, the Chinese equivalent of Google, incorporates a number of algorithms in its search engine to produce a “citizen score” (Helbing et al. 2017, online), which can affect a citizen’s chances of getting a job, a financial loan, or a travel visa. This type of monitoring of data using algorithms is certain to affect every aspect of citizens’ lives, from everyday tasks to political participation.

In the world of politics, digital technology and the algorithms it conceals are becoming increasingly popular as tools for ‘nudging’: a behavioural design concerned with trying to steer or influence citizens towards thinking and acting in a certain way (Helbing et al. 2017, online). A government can use this method to ensure the public sees information that supports its agenda – the British government has “used it for everything from reducing tax fraud to lowering national alcohol consumption [while] Barack Obama and several American states have used it to win campaigns and save energy” (The Nudging Company 2017, online). The biggest goal for governments seeking to influence people in this way is known as ‘big nudging’: the combination of big data and nudging (Helbing et al. 2017, online). While the effectiveness of such methods is difficult to calculate, it has been suggested that they could enable citizens to be controlled by a “data-empowered ‘wise king’, who would be able to produce desired economic and social outcomes almost as if with a digital magic wand” (Helbing et al. 2017, online). During elections, political parties can use online nudging to influence voters in a major way. In fact, it has been argued that whoever controls this technology can “nudge themselves to power” (Helbing et al. 2017, online).

Critics of the use of online algorithms in Western democracies have pointed to how they can reinforce the ‘filter bubble’, or the way in which end users of search engines and social media get “all their own opinions reflected back at them” (Helbing et al. 2017, online). The result is a large degree of societal polarisation, creating sections of society which have little in common and no means of understanding each other’s beliefs. This form of social polarisation through the supply of personalised information can lead to the fragmentation of societies, especially in the political arena. Helbing (2017, online) explains that this kind of divide is currently happening in the politics of the United States, where “Democrats and Republicans are increasingly drifting apart, so that political compromises become almost impossible”.
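
A toy simulation can illustrate how such a bubble can emerge from engagement alone. In the sketch below, a feed serves one of two ‘sides’, users click agreeing content slightly more often, and every click makes the feed serve more of that side; all probabilities and weights are invented assumptions, not measured values.

```python
import random

# Toy filter-bubble dynamic: the feed learns from clicks which side
# a user engages with, and serves progressively more of it.

def simulate(lean=0.1, rounds=5000, seed=42):
    random.seed(seed)
    weights = {"left": 1.0, "right": 1.0}  # the feed's serving weights
    shown_right = 0
    for _ in range(rounds):
        total = weights["left"] + weights["right"]
        side = "right" if random.random() < weights["right"] / total else "left"
        shown_right += (side == "right")
        # the user clicks agreeing content more often (a slight right lean)
        p_click = 0.5 + lean if side == "right" else 0.5 - lean
        if random.random() < p_click:
            weights[side] += 1.0  # reinforcement: this side is served more
    return shown_right / rounds

# A small initial lean typically comes to dominate what the feed shows:
print(simulate(lean=0.1))  # fraction of 'right' items shown, well above 0.5
```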

Algorithms and Data Mining

Data mining is the method by which large amounts of raw data are turned into useful information, and it is increasingly becoming a useful influencing tool online. The practice has been described as “creat[ing] greater potential for violations of personal data” (Makulilo 2017, p.198) via the rise and use of big data: the vast amounts of statistics in the public domain about people’s lives, money, health, jobs, desires, and more. The availability of all this data means algorithms are increasingly being used to sort and categorise it, as well as to make public policy and other decisions (O’Neil 2016, p.1). In Western democracies, the amount of online data produced doubles every year, and in every single minute of every day hundreds of thousands of Google searches and Facebook posts are made (Helbing et al. 2017, online), meaning more potential violations of personal data if it is used for immoral or criminal purposes.

Companies now use algorithms to help them decide who they should hire, banks use them to work out to whom to provide loans, and, increasingly, governments use them to make major policy decisions. Devlin (2017, online) contends that those working in the big data and analytics industries are perhaps the least likely to be surprised that political figures or parties would try to use algorithms to influence public behaviour in their favour, saying that “the application – both overt and covert – of technology to affect election outcomes was arguably inevitable” (Devlin 2017, online). O’Neil (2016, p.1) says that “some of these models are helpful, but many use sloppy statistics and biased assumptions; these wreak havoc on our society and particularly harm poor and vulnerable populations”.

Dormehl (2014, online) explains that not only is the use of algorithms in data mining open to misuse, but that it is foolish to believe all tasks can be automated in the first instance, and points to data mining as a method of uncovering terrorist attacks as an example. Dormehl describes finding terrorist plots as “a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated” (2014, online).

Algorithms and Real-Life Events

Real-life examples of how algorithms can affect major world events are plentiful. Evidence has emerged that algorithms and their associated digital technologies have been used to bring about political outcomes in various countries in recent years, and it is likely that such methods will be an element of many future political campaigns. It has been alleged that online algorithms were deployed to influence voters’ decision-making in the 2016 US presidential election, the 2016 Brexit vote, and the 2017 French presidential election (Devlin 2017, online). Problems arise – and mistrust is created – when algorithms are used in such ways due to a lack of transparency and democratic control. The digital methods used to transmit messages and influence audiences evolve quicker than any regulatory framework can keep up with them.

An example of this is the alleged influence of online advertising on the outcome of the Trump-Clinton election, the result of which shocked many in the United States and around the world. The innovation of algorithms, according to some analysts, means “even our political leanings are being analysed and potentially also manipulated” (Arvanitakis 2017, online). A prime example of this was undertaken by Cambridge Analytica, a data mining organisation that relies on artificial intelligence to manipulate opinions and behaviours “with the purpose of advancing specific political agendas” (Arvanitakis 2017, online), in this case in favour of Trump. Facebook was the platform on which much of the alleged manipulation took place, with an estimated US$90 million spent on digital advertising to generate US$250 million in fundraising for the eventual winner (Shoval 2017, online). In September 2017, Facebook agreed to provide United States congressional investigators with the contents of 3000 online advertisements purchased by a Russian advertising agency, alleged to contain evidence of digital interference in the election (ABC News 2017, online). Matthew Oczkowski, Cambridge Analytica’s Head of Product, told a recent interviewer: “We have elections going on in Africa and South America, and eastern and western Europe” (Kuper 2017, online).

Additionally, search engine algorithms and recommendation systems “can be influenced, and companies can bid on certain combinations of words to gain more favourable results” (Helbing et al. 2017, online). These methods have been defended by some, as Helbing (2017, online) explains, who say that political nudging is necessary because people find it hard to make decisions and it is therefore necessary to help them – a way of thinking known as paternalism. He rejects this view, suggesting that nudging is not actually a way of persuading people of a particular opinion, but a method of “exploiting psychological weaknesses in order to bring about certain behaviours” (2017, online). Another critic of the use of algorithms to affect voters’ choices is Gavet (2017, online), who argues that the only result of such methods is self-reinforcing bias, that digital technologies of this nature are vulnerable to attack by agencies with potentially harmful agendas, and that all forms of artificial intelligence are a threat to democracy in some way.

In the same way that accurate information can be presented to the public to influence the way they think or act, incorrect information can do the same thing. The Digital Disinformation Forum, held in California in June 2017, stated that deliberate misinformation is the “most pressing threat to global democracy” (Digital Disinformation Forum 2017, online). Smith (2017, online) agrees, noting that “The insidious thing about information pollution is that it uses the Internet’s strengths, like openness and decentralization, against it”, and that misinformation is a potential “global environmental disaster” that impacts everyone. Immediately after the 1st October 2017 Las Vegas Strip shooting, in which a gunman killed 58 people in the deadliest mass shooting committed by a lone gunman in US history, news spread by Facebook and Google falsely named a suspect, describing them as a “far-left loon” (ABC 2017, online) when the gunman had no known political affiliations. A pro-Trump Facebook page incorrectly named a person as the shooter, and the story became the first result on Google’s search page on the subject (ABC 2017, online). “This should not have appeared,” a Google spokesperson later said, as the information was removed from its search results (ABC 2017, online). Both Facebook and Google came under scrutiny from a variety of political sources for their slow response to requests to remove the information from their platforms (ABC 2017, online).

Adding algorithms to this mix can be dangerous, Smith notes, pointing to the way in which predictive policing algorithms in the United States increase patrols in high-crime areas, but can induce a cycle of violence between police and angry or disenfranchised residents as a consequence (2017, online). O’Neil (2016, p.1) explains that “this type of model is self-perpetuating, highly destructive, and very common.” Perhaps the most damning statement on the use of algorithms in societies based on data comes from Devlin (2017, online), who says that while societies which operate in this way “may seem appealing in the light of current political dysfunction worldwide … it is also deeply inimical to the process we call democracy”.

The Future of Algorithms

What does the future hold for algorithms and their place in Western societies and democracy? Floridi (2017, online) argues that the increasing proliferation of algorithms in digital technology will continue to threaten many aspects of our daily lives in increasing numbers of ways – employment most especially. Floridi explains that because digital technology has replaced many tasks traditionally performed by us, “algorithms can step in and replace us”, and the consequence “may be widespread unemployment” (2017, online). It has been estimated that within the coming ten years around half of jobs will be threatened by algorithms, and that up to 40% of the world’s top 500 companies will have vanished (Helbing et al. 2017, online). Algorithms may increasingly “take care of mundane administrative jobs, do the analysis of markets and roam through thousands of pages of case law”, as well as creating our news feeds (Stubb 2017, online).

A 2016 Pew Research Centre study found it likely that algorithms will “continue to have increasing influence over the next decade, shaping people’s work and personal lives and the ways they interact with information, institutions (banks, health care providers, retailers, governments, education, media and entertainment) and each other” (Ellis 2016, online). The flip side to the advantages algorithms are likely to bring, the same study found, is the fear that they will “purposely or inadvertently create discrimination, enable social engineering and have other harmful societal impacts” (Ellis 2016, online).

In April 2017, a House of Commons committee in the United Kingdom published the results of its ‘Algorithms in Decision-Making’ inquiry, with the overall conclusion that human intervention is almost always needed when it comes to trusting the decisions made by online algorithms (House of Parliament 2017, online). Among the major findings were that algorithms are “subject to a range of biases related to their design, function, and the data used to train and enact these systems”, that “transparency alone cannot address these biases”, and that algorithmic biases have “cultural impacts beyond the specific cases in which they appear” (House of Parliament 2017, online). The inquiry also recommended greater regulation of online algorithms, as transparency alone “doesn’t necessarily create trust” (House of Parliament 2017, online).

A solution to the possibility of algorithmic errors, as suggested by Floridi, is to “put human intelligence back into the equation” (2017, online). This can be done by “designing the right sort of algorithm” (2017, online), making sure not all decisions are left to machines, and ensuring humans oversee the decisions machines do make. In the political sphere, some politicians might be jubilant at the decline of journalism, but should remember that “algorithms will soon be better at legislation than they are” (Stubb 2017, online). Some commentators and experts have gone further with their predictions, with technology visionaries including Bill Gates, Elon Musk and Steve Wozniak warning that algorithms and associated artificial intelligence-based technologies are a “serious danger for humanity, possibly even more dangerous than nuclear weapons” (Helbing et al. 2017, online).

Case Studies

Facebook

“There was no tool where you could go and learn about other people. I didn’t know how to build that so instead I started building little tools,” Mark Zuckerberg said (Carson 2016, online) of the origins of the website that would turn into a US$300 billion company. In 2004 he launched the social networking site Facebook, which quickly spread across several universities before becoming Facebook.com in August 2005 (Phillips 2007, online). The site’s use grew exponentially; it now has two billion active users per month (Facebook, online) and has recently unveiled its new mission statement: “To give people the power to build community and bring the world closer together” (Facebook, online). According to the site’s own statistics, an average user spends 50 minutes a day on Facebook, Facebook Messenger or Instagram and has 150 Facebook friends (Facebook, online). Until 2012, the site kept advertisements separate from its users’ personal content and did not share any information with marketing agencies. Then flotation brought greater demands from investors for advertising revenue, and its methods changed (Kuper 2017, online).

Perhaps one of the more notable effects on democracy this has brought is the way Facebook now controls how citizens consume news. Most under-35s rely on Facebook for their news, both personal and world (Francis 2015, online; Jain 2016, online; Samler 2017, online), and its algorithms control what information is seen by its users and, hence, what is thought about democratic or political issues on the basis of that information. In changing the fundamental methods by which people receive information on such a scale, Facebook is disrupting democracy like nothing the Internet has produced before. As Samler (2017, online) explains, Facebook is “one of the Internet’s most radical and innovative children”. The result has been “a loss of focus on critical national issues, an erosion of civil disagreement, and a threat to democracy itself” (O’Neil 2016, online).

As a result of more people getting their news from an algorithm-driven news feed, traditional journalism has been greatly affected by the rise of Facebook. The impact of the increasing use of social media as a way of sourcing news, real or otherwise, is of concern for the traditional role of the media as the Fourth Estate. Facebook has been called a “social problem” (Francis 2015, online) that breeds the shallowness sweeping Western societies, while creating a “world view about as comprehensive as was found in the high school cafeteria” (Francis 2015, online). Global leaders are taking advantage of its directness to bypass the media and speak directly to the public, and the operators of Facebook and Twitter are enthusiastic about this behaviour as it increases engagement with their sites. Journalists are still attempting to report factual stories, but are under increasing pressure (Shoval 2017, online), and the disproportionately high financial awards made against newspapers in the courts threaten press freedom at an industry level (Linehan 2017, p.11).

With Facebook now having such a high degree of control over the way in which people consume news, traditional media companies are struggling to reach the public with legitimate news (Shoval 2017, online). After the 2016 US presidential election, Facebook announced its “Facebook Journalism Project” – a project with the aim of forging stronger ties with the journalism industry, including working more closely with local news outlets (Shoval 2017, online). With the number of news consumers who get their news from Facebook’s news feed on the rise, it is difficult to see this as anything more than an empty platitude.

While Facebook is described as ‘social media’, it is important to remember that its success is premised on using increasingly sophisticated techniques to target users by predicting the content they’ll want to read and watch, “along with the stuff they’ll want to buy from advertisers” (Ellis 2016, online). Facebook is now a “monumentally influential force in the fabric of modern life” (Statt 2017, online), and there now exists Facebook electioneering by major political candidates like Canadian Prime Minister Trudeau and French President Macron, in which algorithms play a huge part. Facebook’s algorithm generates a “plethora of ordinary effects” (Bucher 2015, p.44), from the hunt for ‘likes’ to asking the question “Where did this information that has suddenly popped up come from?” (Bucher 2015, p.44). Francis (2015, online) suggests that the only antidote to relentless Facebook misinformation is to “do some serious fact-checking and research”, while Pennington (2013, p.193) says that while Facebook can be an excellent tool for political participation, the key for the individual user is to “keep an open mind to others instead of falling down the rabbit hole of narcissism”.

Fake news can be defined as “a political story which is seen as damaging to an agency, entity or person” (Merriam Webster Dictionary 2017, online), and the concept and its proliferation on various platforms, including Facebook, have been forced into the public domain by President Trump and the election from which he emerged victorious. Fake news has the power to “damage or even destroy democracy” (Jain 2016, online) if not regulated. During a 2016 press conference, then-President Obama noted that on a social network such as Facebook, “If everything seems to be the same and no distinctions are made, then we won’t know what to protect” and “Everything is true and nothing is true” (Jain 2016, online). Simply sending out Facebook advertisements to see how they are received can help a political party shape its manifesto (Kuper 2017, online). If a large number of users ‘like’ a story about a crackdown on immigration, a party or candidate can make it their official standpoint. Those same people can then be targeted with more advertisements and appeals for funding.

The unexpected election of Donald Trump is said to “owe debts to … rampant misinformation” (Heller 2016, online). During the last stages of campaigning by Trump and Clinton, it was obvious that Facebook’s news algorithm was not able to distinguish between real news and completely fabricated news: “the sort of tall tales, groundless conspiracy theories, and oppositional propaganda that, in the Cenozoic era, circulated mainly via forwarded e-mails” (Heller 2016, online).

Zuckerberg rejects the idea that his company played a role in spreading ‘fake news’ about political candidates, saying in an interview: “Voters make decisions based on their lived experience” (Newton 2016, online). At the same time, a study found that “three big right-wing Facebook pages published false or misleading information 38% of the time during the period analysed, and three largely left-wing pages did so in nearly 20% of posts” (Silverman 2016, online). Zuckerberg then committed his company to doing more to fight the spread of fake news, while insisting it would not become an “arbiter of truth” (Jain 2016, online) and stating that he runs a “tech company, not a media company” (Samler 2017, online). He also denied that Facebook compounded the problem of its users living in an information ‘filter bubble’, even though his own company quietly released the results of a study in 2015 which showed exactly the opposite to be true (Tufekci 2015, p.9), and another study has shown that users are much less likely to click on content that challenges their beliefs (Tufekci 2015, p.9). Western democracies have a liberal left and a conservative right, with “neither being exposed to the reasoned arguments of the other” (Samler 2017, online). Indeed, only 5% of Facebook users and 6% of Twitter users admit to associating on these platforms with people who have political opinions different from their own (Samler 2017, online). Critics of how the social media giants generate their users’ news feeds have said that these organisations need to accept that they are no longer solely technology platforms, but media platforms too (Samler 2017, online).

Interestingly, on 30th September 2017, Zuckerberg made a post on his personal Facebook page for the end of Yom Kippur, apologising and seeking forgiveness for any of the ways that his organisation has been “used to divide people rather than bring us together” (Facebook 2017, online). This has been described as a “wholly surprising admission of guilt from someone in the tech world” (Barsanti 2017, online).

The key to Facebook’s ongoing success is to keep its users engaged. Bucher explains that “examining how algorithms make people feel … seems crucial if we want to understand their social power” (2015, p.30) – if, indeed, users are even aware of the power of the algorithm at all. Facebook’s data teams are almost solely focussed on finding ways to increase the amount of time each and every user remains engaged with the platform; they are not concerned with truth, learning, or civil conversation (O’Neil 2016, online). Success is measured by the number of clicks, ‘likes’, shares and comments, not the quality of the material being engaged with. The greater the engagement, the more data Facebook can use to sell advertisements (O’Neil 2016, online). This seems like a fairly obvious business model, but research has shown that many users are unaware of it. In a 2015 study, more than half of Facebook users were unaware of how their Facebook news feed was put together (Eslami et al. 2015, p.153). This is problematic, as ignorance of how the site’s algorithm works can wrongly lead some users to “attribute the composition of their feeds to the habits of their friends or family” (Eslami et al. 2015, p.153). This can reinforce the ‘filter bubble’ and lead many users to believe the information they are seeing is trustworthy and correct, all while their behaviour is tracked in order to profile their identity.
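
A minimal sketch of purely engagement-driven ranking makes the point concrete. The posts, counts and weights below are invented, and this is not Facebook’s actual model; the point is simply that nothing in the scoring function measures truth or quality.

```python
# Posts are ordered purely by predicted interaction. Note that the
# score has no input for accuracy, quality, or truthfulness.

posts = [
    {"id": 1, "clicks": 120, "likes": 300, "shares": 40,  "comments": 25},
    {"id": 2, "clicks": 90,  "likes": 80,  "shares": 200, "comments": 60},
    {"id": 3, "clicks": 30,  "likes": 45,  "shares": 5,   "comments": 10},
]

WEIGHTS = {"clicks": 1.0, "likes": 2.0, "shares": 4.0, "comments": 3.0}

def engagement_score(post):
    """Weighted sum of interaction counts -- nothing here measures truth."""
    return sum(WEIGHTS[k] * post[k] for k in WEIGHTS)

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # [2, 1, 3]: most 'engaging' first
```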

While finding news that fits a user’s news feed, Facebook’s algorithms can create other problems, including the “voracious appetite for personal data” (Ellis 2016, online) that ad-supported services such as Facebook need to keep their predictions going. The consequence is an undermining of personal privacy and an increased likelihood of the site being used for data mining by individuals, organisations or entities with potentially nefarious motives, possibly leading to more “government by algorithmic regulation” (Ellis 2016, online). The potential for abuse is high when algorithms are unregulated and can be used by anyone with the money to invest in them.

Another major problem Facebook’s algorithm creates is one of repetition, with the potential to prevent democratic processes and decisions from evolving over time. While real life allows the past to be in the past, “algorithmic systems make it difficult to move on” (Bucher 2015, p.42). This is the “politics of the archive” (Bucher 2015, p.42): all the decisions an algorithm will make about the information it allows you to see in the future are based on what you did in the past. What is relatable and retrievable from the past shapes the way Facebook’s algorithm works in the present, and will potentially affect the user’s decisions in the future.

Despite the many negative effects on democracy Facebook can have, it can be a positive force for it too. During elections in the United States in 2010 and 2012, the site conducted experiments with a tool it called the ‘voter megaphone’ (O’Neil 2016, online). The idea was to encourage users to make a post saying they had voted, which would, in turn, remind and encourage others to do the same. Statistics showed 61 million people made such a post, with the likely result of increasing participation in democratic processes, especially among young people (O’Neil 2016, online). Additionally, movements can be organised on social media, including the women’s marches of 2017, which saw about five million women march globally as a result of online organisation (Vestager 2017, online).

Facebook is determined to show that the feed its algorithm creates and controls is an ever-changing and independent tool for good, but the reality is that it is a vital part of its business model. The Facebook algorithm is “biased towards producing agreement, not dissent” (Tufekci 2015, p.9). After all, if its users were continually presented with information they didn’t appreciate, they would simply go elsewhere – and that is not a successful business model, by any definition. The way the filter bubbles in which Facebook users’ news feeds exist affect democracy is as simple as it is destructive. Electoral laws are outdated, and “regulators aren’t big or savvy enough to catch transgressors” (Kuper 2017, online). From this alone, we can say that Facebook has changed democracy. Perhaps author and mathematician Cathy O’Neil put it at its simplest and best when she said: “Over the last several years, Facebook has been participating – unintentionally – in the erosion of democracy” (2016, online).

Google

In 1998, university drop-outs Larry Page and Sergey Brin founded Google with the stated aim “to organise the world’s information and make it universally accessible and useful” (Google, online). Its search engine helped unlock many of the so-called ‘walled gardens’ of the Internet, including sites like AOL and Yahoo. Since then, it has set about organising the information on the Internet, and it continues to add many millions of pages to its searchable database every day (Vise & Malseed 2005, p.3).

After going public in 2004, its value and influence grew exponentially, and it began to challenge Microsoft’s dominance in the online world (Vise & Malseed 2005, p.3), overtaking it as the most visited site on the web in 2007 (Strickland 2017, online). The company owes its success to its search engine’s ability to return relevant results at lightning speed. It now has over 50,000 employees globally, has expanded its business interests into the fields of artificial intelligence and self-driving cars (Frommer 2014, online), and its search engine is used globally over 6.5 billion times every day (Allen 2017, online).

Google has been called “the keeper of web democracy” (Howie 2011, online) and its search engine is a powerful and vital component of 21st century Western democratic life, yet its influence is not widely understood or researched (Richey & Taylor 2017, p.1). With 150,000,000 active websites on the Internet today (Strickland 2017, online), it plays an important role in the lives of millions of people. Google has 88% of the market share in search and search advertising (Hazen 2017, online), and, combined with Facebook, has more than a billion regular users. It is partly because of the colossal numbers of users and amounts of data with which it operates that Google’s algorithms are so complex.

The company markets its algorithm-driven search engine as a tool which will “result in finer detail to make our services work better for you” (Google 2017, online), and, in theory, the first results from a search should be the ones which are most relevant to the keywords searched. This seems, on the face of things, to be a simple and incredibly convenient tool for all its users. Yet critics of its methods and its effects on democracy are plentiful.
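
As an illustration of what ranking by relevance means, the sketch below scores documents against a query using TF-IDF, the classic textbook relevance measure. The pages and query are invented, and Google’s actual algorithm, which is not public, is vastly more complex.

```python
import math
from collections import Counter

docs = {
    "page_a": "election results and voter turnout in the election",
    "page_b": "voter registration guide",
    "page_c": "recipes for dinner",
}

def tf_idf_score(query, doc, corpus):
    """Score a document against a query: term frequency x rarity (IDF)."""
    words = doc.split()
    tf = Counter(words)
    score = 0.0
    for term in query.split():
        doc_freq = sum(term in d.split() for d in corpus.values())
        if doc_freq:
            idf = math.log(len(corpus) / doc_freq)  # rarer terms weigh more
            score += (tf[term] / len(words)) * idf
    return score

query = "election voter"
ranked = sorted(docs, key=lambda name: tf_idf_score(query, docs[name], docs),
                reverse=True)
print(ranked)  # ['page_a', 'page_b', 'page_c']: ordered by estimated relevance
```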

“Unregulated search rankings could pose a significant threat to a democratic system of government,” says Forbes writer Tim Worstall (2013, online), while Hazen (2017, online) explains how Google’s “relentless pursuit of efficiency leads these companies to treat all media as a commodity”. The real value of the platform lies not in the quality, honesty or accuracy of the information it produces, but in the amount of time the user is engaged with the platform. Hazen goes on to describe how these methods have pushed Page and Brin into the ten wealthiest people in America, each with a personal fortune over US$37 billion, and suggests that the ways in which these methods have affected democracy do not seem to have been taken into account at any point in the company’s evolution.

Much like Facebook, Google has been criticised for data mining and, on several occasions, taken to court for mismanaging users’ data (Smith 2016, online). Following United States government whistle-blower Edward Snowden’s leaks, Google’s users have become more savvy about how the site collects and uses their data, and critics have labelled the company’s data mining methods as “purely to benefit Google” (Miller 2012, online). Yet the practice continues. The collection of data, and the profits of around US$40 billion a year it makes from these practices, is concerning to many users of Google, despite the fact the company claims it uses data mining techniques to “find more efficient algorithms for working with massive data sets, developing privacy-preserving methods for classification, or designing new machine learning approaches” (Google 2017, online).

Another way in which the vast amounts of data channelled through Google could be used is in making political predictions, although the usefulness of this is unclear. This can be demonstrated with a real-life example: Google data showed that searches for ‘Donald Trump’ accounted for almost 55% of views in the three days before the 2016 presidential election (Allegri 2016, online), when the majority of polls predicted a Clinton victory, and its data predicted his final electoral college vote total to within two votes of the actual number. This made analysts, tech writers and journalists take notice, with the general consensus being that it was time to “start taking the electoral prediction powers of Google much more seriously” (Kirby 2016, online).

Consistent accusations of tampering with results have plagued Google throughout its lifetime, and such actions have the potential to affect democracy negatively if true. The company’s Vice President Marissa Mayer appeared in a 2011 YouTube video telling an audience how her company regularly, and unashamedly, puts its own services at the top of search results (Howie 2011, online). In 2017, public trust in Google’s algorithm in Europe reached an all-time low, following the proliferation of fake news stories and clearly engineered results. The European Commission advertised for a company to police Google’s algorithm to determine the extent to which results are deliberately positioned favourably for those who have paid for it, and how much Google is abusing its market dominance (Hall 2017, p.17). The Commission also launched an investigation into the extent to which Google banned competitors from search results and advertisements, with the promise of keeping the issue “on our desks for some time” (Hall 2017, p.17). The way in which Google “uses its dominant search engine to harm rivals” has led critics like Derrick (2017, p.1) to examine how the concentration or monopolization of services in this way “threatens our markets, threatens our economy, and threatens our democracy”. It is difficult to see how Google’s self-serving behaviour can have anything but an overall negative effect on democracy in Western societies.

Despite the many criticisms of Google’s algorithm and its negative effects on privacy and democracy, its data mining practices have produced some positive outcomes. In 2014, Google found evidence of child pornography in one of its users’ e-mail accounts and reported the person to the National Centre for Missing and Exploited Children in the United States, resulting in an arrest (Matterson 2014, online). Google Maps’ ability to identify illegal activities such as marijuana growing and unapproved building works has also been noted as a positive (Google Earth Blog, online).

The future of Google is likely to see it maintain its virtually unchallengeable position at the head of Internet search engine use and advertising revenue generation. The site’s ability to change its algorithms at any time means it can evolve to control the market in any way it wishes, and can control the impact it has on websites, its competitors, and entire industries. The company’s future is not likely to be one of reduced involvement with algorithms, but quite the opposite, says Davies (2017, online). Where once Google’s algorithm had a relatively basic structure, it is now much more complex, and becoming more so. Its methods of pushing forward artificial intelligence and machine learning are developing at an “amazing if not alarming rate” (Davies 2017, online), meaning its influence on what data we see is likely to grow. “Not since Rockefeller and JP Morgan has there been such a concentration of wealth and power in the hands of so few,” explains Hazen (2017, online).

Twitter

Twitter began as an idea that co-founder Jack Dorsey had in 2006, originally imagined as an SMS-based communications platform (MacArthur 2017, online), hence the 140-character limit. Fast forward five years, and it was one of the biggest communication platforms in the world. It now has over 200 million active monthly users, and it is considered vital, along with Facebook, that every public figure who wishes to engage with their audience have an account (MacArthur 2017, online).

Studies have shown that political candidates who use Twitter as a means of engaging with voters significantly increase their odds of winning (LaMarre & Suzuki-Lambrecht 2013, p.1). The platform stimulates word-of-mouth marketing and increases audience reach significantly (LaMarre & Suzuki-Lambrecht 2013, p.1), with live information being of particular importance and influence. Sustaining a live connection, via tweeting, through an election cycle has been shown to result in a positive reaction from supporters (LaMarre & Suzuki-Lambrecht 2013, p.1), which has the potential to translate into positive results on election day. President Obama’s use of Twitter during his two campaigns is a good example of this.

However, not all use of Twitter is as open and honest as it may seem. During the 2016 US presidential election, 20% of all political tweets made during the three televised political debates were made by bots: pieces of software designed to execute commands with a particular goal (Campbell-Dollaghan 2016, online). It was unclear where many of the bots came from or who created them, making it easier to spread fake news stories and potentially influence public opinion. There is also evidence that during the UK Brexit campaign, huge numbers of “fake news stories, false factoids, and absurd claims were passed over social media networks, often by Twitter’s highly automated accounts” (Howard 2016, online). Bots and automated accounts are very easy to make (Campbell-Dollaghan 2016, online), and can amplify misinformation in a political campaign. Twitter allows news stories from untrustworthy sources to “spread like wildfire over networks of family and friends” (Howard 2016, online).

These examples of how Twitter is being used to spread information or misinformation strongly suggest that it should now be regarded as a media company. However, much like Facebook, Twitter is not legally obliged to regulate the information passed over its network for quality or accuracy. In fact, it has been given a “moral pass” (Howard 2016, online) when it comes to the obligations professional media organisations and journalists are held to.

Twitter rolled out a 280-character trial in October 2017 (Hale 2017, online), arguably positioning itself to be an even more influential transmitter of information, accurate or inaccurate, in future democratic processes. It remains to be seen whether the change will increase engagement with the platform, but the potential is there for Twitter to become an even bigger player in the political arena (Hale 2017, online).

Other Platforms

While the algorithms used by Facebook and Google are the dominant forces in controlling what many people see and think about democracy, other platforms are playing increasing roles. With Facebook and Google now firmly part of the established mainstream, there is space for other social media to fill their previous roles as the newcomer or disruptor on the scene. On photo-sharing platforms such as Instagram and Snapchat, a politician or political party can share images directly with their followers, and engage directly with them while doing so.

The way in which these photo-sharing social media have been used in recent elections suggests they will have a huge role to play in future contests. The 2017 UK general election saw both Theresa May and Jeremy Corbyn use Instagram to a small degree, with surveys showing Corbyn’s use was more effective, although this could also be explained by the fact that younger people are more likely to vote Labour (Kenningham 2017, online). French President Macron used it heavily and swept to power (Kenningham 2017, online), and Indian Prime Minister Modi has a huge following of eight million. In the UK alone, Instagram has 18 million users and Snapchat 10 million – both significant portions of the 65 million total population – so political parties and figures need to use these platforms to be successful in the ever-competitive mediascape.

Instagram’s and Snapchat’s core demographics are much younger, on average, than those of Twitter and Facebook, and the platforms have an ability to reach groups of people who feel permanently disengaged from the political process (Kenningham 2017, online). Ninety percent of Instagram’s users, for example, are under 35 years old, and it is increasingly becoming the platform of choice for image-fixated millennials (Kenningham 2017, online).

While Instagram may be an excellent tool for reaching a younger demographic, its algorithm can be used and abused, as well as negotiated. Much like the Facebook news feed algorithm, Instagram’s algorithm has been described as “mysterious, yet ingenious and brilliant at showing the best content to the best people” (Lua 2017, online). It is driven by seven key factors: engagement, relevancy, relationships, timeliness, profile searches, direct shares, and time spent (Lua 2017, online). A 2016 Instagram study (Instagram 2016, online) found that, when posts were listed chronologically, users missed up to 70% of their feeds, and the platform changed to an algorithm-driven method of ordering. Despite some initial opposition to the move, feedback has been generally positive (Lua 2017, online), and the relatively simple nature of Instagram’s algorithm, compared to that of Facebook, means it is easy for users to work with or even “beat” (Chacon 2017, online).
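
One plausible way to picture such a multi-factor algorithm is as a weighted score over those seven signals, as in the sketch below. The linear form, the weights and the example values are all assumptions made for illustration; Instagram has not published its actual formula.

```python
# A hedged sketch of a multi-factor feed score combining the seven
# signals named above. Weights and values are invented assumptions.

FACTORS = ["engagement", "relevancy", "relationship", "timeliness",
           "profile_searches", "direct_shares", "time_spent"]

WEIGHTS = dict(zip(FACTORS, [0.25, 0.20, 0.20, 0.15, 0.05, 0.10, 0.05]))

def feed_score(signals):
    """Combine per-post signals (each normalised to 0..1) into one score."""
    return sum(WEIGHTS[f] * signals.get(f, 0.0) for f in FACTORS)

post = {"engagement": 0.8, "relevancy": 0.6, "relationship": 0.9,
        "timeliness": 0.4, "profile_searches": 0.1, "direct_shares": 0.3,
        "time_spent": 0.5}
print(round(feed_score(post), 2))  # 0.62 under these assumed weights
```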

Snapchat is behind Instagram in user numbers, but crucially it has high levels of engagement, with the average user spending up to 30 minutes per day on the platform (Kenningham 2017, online). Its algorithm, similar to that of Instagram, places certain posts at the top of its feed, which leaves it open to misuse, but it offers a “way to engage with people who normally switch off at the very mention of the word ‘politics’” (Kenningham 2017, online). Jeremy Corbyn used the platform extensively in the 2017 UK election with some success, and all three French presidential candidates used it, most especially the eventual winner (Kenningham 2017, online).

While Instagram and Snapchat have not yet played defining roles in political processes anywhere in the world, and the extent to which their algorithms can be used or manipulated in doing so is as yet unclear, there are calls for them to “become a central part of the democratic process to ensure more people have a say and stake in the future of [political processes]” (Kenningham 2017, online). It is likely that Instagram and Snapchat have had only a positive effect on Western democratic processes thus far.

Summary of Findings

After such a detailed examination of the use of algorithms in social media and search engines, it is important to summarise the findings with reference to the original research questions.

The first research question asked: To what extent do algorithms in websites, networking services and social media have a negative effect on democracy in Western societies?

When the effects on Western democracies of algorithms used by Facebook, Google and others are examined, it can be said that, in a general sense, these algorithms have a negative impact on Western democracies.

Facebook’s algorithm is probably the biggest offender in this regard. Its aims are not to promote or encourage quality content being uploaded or shared on the platform, but to gather as much personal information about its users as possible and keep them engaged for as long as possible, in order to better target paid advertisements to them. Its success does not rely on any ability or need to distinguish between quality, truthful information and dishonest, fake information – as long as users are engaged regularly and for lengthy periods, it can sell a large amount of advertising and its financial success is certain. Facebook’s algorithm also perpetuates the ‘filter bubble’ method of news feed generation, in which users are rarely, if ever, exposed to information that is contrary to their personal beliefs. Its algorithm can be, and has been, manipulated to promote news stories with false or misleading information in order to gain political advantage.

Similarly, Google’s algorithm has many negative effects on democracy. Its search engine’s algorithm is designed to produce results based on a user’s previous searches, which, like Facebook’s, perpetuates the ‘filter bubble’ and is designed to soak up as much information about the user as possible in order to target advertisements and generate revenue. Google claims it uses data mining to improve its services for users, yet makes US$40 billion a year from these practices, so it is difficult to accept that it is not a self-serving activity. Additionally, the monopolization of data and advertising services by Google drives competition out of the market, and the site also regularly manipulates data and search results to place particular results higher than others.

The second research question asked: To what extent, if any, can users of new and digital media be manipulated by algorithms to think or act in certain ways?

Algorithms used by Facebook and Google can control what information users have access to in their news feeds and, hence, what issues they are exposed to and are likely to think about (Francis 2015, online). While a small number of writers have argued that technologies like web search and social networks reduce ideological segregation (Flaxman et al. 2016, p.298), there is much evidence showing otherwise (Francis 2015, online). The repetitive nature of web-based algorithms means that information engaged with by users affects their future search results and the content of their news feeds, and similar search results or information is likely to appear again, perpetuating the ‘filter bubble’. Facebook continually removes or hides news that it believes might offend users, including many investigative journalism pieces (Ingram 2015, online). When the filter bubble is combined with the easy proliferation of untruthful or misleading information, users can be manipulated to think in certain ways about political or other subjects. The monopolization of news distribution is arguably not of Facebook’s own doing, as so many people use it globally that media companies have no real choice but to use it as a way of interacting with news consumers, but Facebook’s approach to how news feeds are generated can differ from one day to the next.

The third research question asked: To what extent do search engine algorithms affect democracy in Western societies?

The answer to this question is: to a huge extent. With a virtual monopoly on search, Google “has the power to flip the outcomes of close elections easily – and without anyone knowing” (Epstein 2014, online). The company has the ability to identify a candidate that best suits its needs, identify undecided voters, and send them customised search results tailored to make that candidate look better, while nobody – candidate, voter or regulator – is any the wiser (Epstein 2014, online). There is no evidence of such direct manipulation, but favouritism can happen ‘organically’ on Google’s search engine – this is what the company claimed was the cause of Barack Obama’s consistently high rankings in the months just before the 2008 and 2012 elections (Epstein 2014, online). A 2010 study of a group of Americans’ preferences for either Julia Gillard or Tony Abbott (people the test subjects were unfamiliar with) as the ideal candidate for the position of Prime Minister of Australia found that they made their choice based on search rankings (Epstein 2014, online). In future elections, as increasing numbers of undecided voters get their information on political matters through the Internet, the way Google’s algorithm works will have international ramifications. Google is not ‘just’ a platform; it “frames, shapes and distorts how we see the world” (Arvanitakis 2017, online).

The fourth research question asked: Which website, networking service or search engine is most likely to affect democracy through its use of algorithms?

The answer is Facebook, as many recent real-life examples show: its algorithms have been used in pursuit of political outcomes in the Brexit referendum, the Trump-Clinton election, the French presidential election, and the UK general election. The most notable case of algorithm-driven influence in politics is the Trump-Clinton contest. President Trump’s Digital Director, Brad Parscale, admitted that Facebook was massively influential in winning the election for Trump (Lapowsky 2016, online), generating huge sums of money in online fundraising, a large proportion of which went back into digital advertising. Analysts and writers have also pointed to “online echo chambers and the proliferation of fake news as the building blocks of Trump’s victory” (Lapowsky 2016, online) – echo chambers created by Facebook’s algorithm. Trump’s online team took advantage of Facebook’s ability to test audiences with ads, running 175,000 variations of its ads on the day of the third presidential debate alone (Lapowsky 2016, online). Cambridge Analytica pulled data from Facebook and paired it with huge amounts of consumer information from data mining companies to “develop algorithms that were supposedly able to identify the psychological make-up of every voter in the American electorate” (Halpern 2017, online).
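
A figure like 175,000 variations sounds implausible until it is seen as simple combinatorics. The sketch below uses invented component counts – not the campaign’s actual assets or tooling – to show how a handful of interchangeable parts multiplies into that many distinct ads to test.

```python
# How 175,000 ad variations can arise from a few interchangeable parts
# (component counts are invented for illustration): variants multiply
# combinatorially across headlines, images, calls to action and audiences.

from itertools import product

headlines = [f"headline_{i}" for i in range(35)]
images    = [f"image_{i}"    for i in range(20)]
ctas      = [f"cta_{i}"      for i in range(25)]   # calls to action
audiences = [f"audience_{i}" for i in range(10)]

variants = list(product(headlines, images, ctas, audiences))
print(len(variants))  # 175000 distinct ads, each testable against its audience
```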

The Future of Democracy in an Algorithm-Driven World

Increased use of algorithms and artificial intelligence can bring many benefits to societies. New systems can identify students who need assistance, and data can be used to identify health hazards within a population (Arvanitakis 2017, online). However, a diminished human role in decision-making may have many negative consequences for democracy.

The innovation of algorithms means “our political leanings are constantly being analysed and potentially also manipulated” (Arvanitakis 2017, online), and opaque algorithms can be “very destructive” (O’Neil 2016, p.4). Citizens of Western democracies have long believed that they knew where their information was coming from, but that is no longer the case (Arvanitakis 2017, online). The sources we have come to trust to bring us information have fallen under the influence of powerful, self-serving websites whose algorithms make no distinction between truth and lies, or between high-quality information and nonsense. When a list of search results appears after a Google search, it is almost impossible to work out where the results have come from or why they appear in that particular order, and this opacity is what is concerning for healthy democracy. A professor at Bath University explained that “it should be clear to voters where information is coming from, and if it’s not transparent or open where it’s coming from, it raises the question of whether we are actually living in a democracy or not” (Arvanitakis 2017, online).

In order for anything to survive for any length of time, it has to adapt, and the future of democracy increasingly looks like one of constant technological adaptation. Newly emerging social media platforms – those which have not been pulled into the mainstream model whose sole purpose is to collect data for advertisement placement – are, along with other online platforms, likely to be crucial to political participation for future generations. It is vital that young people are civically engaged (actively working to make a positive difference to their communities) in order to define and address public problems (Levine 2007, p.1), and social media has the potential to play a huge part in this. As the variety of methods it offers for information sharing and interconnectivity increases, social media may encourage more people to engage with democratic processes.

It is also vital for algorithms to be transparent and accountable (Arvanitakis 2017, online), so that users of websites, social media and search engines know how their personal information is being used, and can trust that the information they are seeing is accurate and balanced. “Algorithms are designed with data, and if that data is biased, the algorithms themselves are biased,” explains O’Neil (2016, p.4). Algorithms could be transparent, accountable and objective, but, in most cases, they are nothing more than “intimidating, mathematical lies” (O’Neil 2016, p.4). Overcoming this is the key to fair and balanced algorithm use in future democratic processes.
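
O’Neil’s point can be shown in miniature. The toy ‘model’ below is an assumed example, not any real system: it simply learns the majority label for each group, and when trained on historical decisions that disadvantaged one group, it reproduces that disadvantage exactly, however neutral the algorithm itself is.

```python
# Bias in, bias out: a toy classifier (assumed example, not a real system)
# that learns the majority label per group from historical decisions.

from collections import Counter, defaultdict

def train(examples):
    """Learn the most common historical label for each group."""
    seen = defaultdict(Counter)
    for group, label in examples:
        seen[group][label] += 1
    return {group: counts.most_common(1)[0][0] for group, counts in seen.items()}

# Invented history in which group_b applicants were disproportionately rejected
biased_history = ([("group_a", "approve")] * 90 + [("group_a", "reject")] * 10
                  + [("group_b", "approve")] * 40 + [("group_b", "reject")] * 60)

model = train(biased_history)
print(model)  # {'group_a': 'approve', 'group_b': 'reject'} -- the bias is reproduced
```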

With a 2017 survey indicating that two-thirds of schoolchildren would not care if social media had never been invented, and 71% admitting to taking “digital detoxes” (The Guardian 2017, online), there is a hint of a possibility that social media use may decline as the next generation of school-aged children reaches adulthood. Many respondents to the survey believed social media was having a negative effect on their mental well-being, with advertising, fake news and privacy being particular areas of concern (The Guardian 2017, online). Some positives were mentioned, including memes, photo filters, and Snapchat stories, reinforcing the theory that new social media platforms – not Facebook, Twitter or Instagram – may be the future for mass information sharing and for healthy democracy.

Conclusion

It is indisputable that search engines and social media increase the number of ideas, viewpoints, opinions and perspectives available to citizens taking part in democratic processes. An incredibly varied collection of information is available to Internet users at any time, which, at face value, would suggest that citizens should be more informed about political issues than ever before. The Internet is also an effective tool for carrying out political campaigns, offering an efficient method by which political groups or individuals can reach audiences with public relations and policy messages.

With these things in mind, it would be easy to move steadily and unquestioningly forward with the idea that software makes our lives more convenient and enjoyable. However, the algorithms controlling data in some of the most popular and widely-used social media and search engines are designed not with the user’s best interests in mind, but with those of the websites themselves – they are businesses, after all. This is a direct and immediate threat to democracy.

The ability to manipulate information online is similarly a threat to democratic processes. Evidence and real-life examples show that controlling information and misinformation through search engine and social media manipulation can help bring about desired political results, and the algorithms controlling information on these platforms cannot discern between real and fake, or truth and falsehood. Algorithms that target users with advertising material instead of presenting a fair and balanced variety of information perpetuate the division of society along political lines and engineer information ‘filter bubbles’. Algorithms operating in this way are a threat to democracy.

It is partly this online environment that has created a divisive populist sentiment that now defines many Western societies, and has left many citizens lacking the full range of knowledge needed to make informed democratic decisions. Thomas Jefferson once proclaimed that “a properly functioning democracy depends on an informed electorate” (Samler 2017, online), but when algorithms are manipulating news feeds and search engine results without regulation, free will in the political arena no longer seems so free.

References

ABC News, 2017. ‘Facebook to Release Russia-Linked Ads to Congress Amid Pressure Over Use in US Election’, online, accessed 26th September 2017: http://www.abc.net.au/news/2017-09-22/facebook-to-release-russia-ads-to-congress-amid-pressure/8973718

ABC News, 2017. ‘Las Vegas Shooting: Politicised “Fake News” of Attack Spread on Google, Facebook’, online, accessed 7th October 2017: http://www.abc.net.au/news/2017-10-03/las-vegas-shooting-false-news-of-attack-spread-google-facebook/9011152

Allegri, C, 2016. ‘Did Google Search Data Provide a Clue to Trump’s Shock Election Victory?’, Fox News, online, accessed 30th September 2017

Allen R, 2017. ‘Search Engine Statistics 2017’, Smart Insights, online, accessed 30th September 2017: http://www.smartinsights.com/search-engine-marketing/search-engine-statistics/

Anderson, B & Horvath, B, 2017. ‘The Rise of the Weaponised AI Propaganda Machine’, Scout, online, accessed 16th August 2017: https://scout.ai/story/the-rise-of-the-weaponized-ai-propaganda-machine

Arvanitakis, J, 2017. ‘If Google and Facebook Rely on Opaque Algorithms, What Does That Mean for Democracy?’, ABC, online, accessed 1st October 2017: http://www.abc.net.au/news/2017-08-10/ai-democracy-google-facebook/8782970

Bansal, N, 2012. ‘The Primal-Dual Approach for Online Algorithms’, Approximation and Online Algorithms, Springer, p.1

Baraniuk, C, 2015. ‘The Bad Things That Happen When Algorithms Run Online Shops’, BBC, online, accessed 23rd September 2017: http://www.bbc.com/future/story/20150820-the-bad-things-that-happen-when-algorithms-run-online-shops

Barsanti, S, 2017. ‘Mark Zuckerberg Apologises for Facebook Making Life Worse’, AV Club, online, accessed 2nd October 2017: https://www.avclub.com/mark-zuckerberg-apologizes-for-facebook-making-life-wor-1819042663?rev=1506899047971&utm_content=Main&utm_campaign=SF&utm_source=Facebook&utm_medium=SocialMarketing

Bozdag, E & van den Hoven, J, 2015. ‘Breaking the Filter Bubble: Democracy and Design’, Ethics and Information Technology, Issue 4, p.249

Bucher, T, 2017. ‘The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms’, Information, Communication & Society, pp.30-44

Campbell-Dollaghan, K, 2016. ‘The Algorithmic Democracy’, FastCoDesign, online, accessed 2nd October 2017: https://www.fastcodesign.com/3065582/the-algorithmic-democracy

Carson, B, 2016. ‘Zuckerberg: The Real Reason I Founded Facebook’, Business Insider Australia, online, accessed 26th September 2017: https://www.businessinsider.com.au/the-true-story-of-how-mark-zuckerberg-founded-facebook-2016-2?r=US&IR=T

Chacon, B, 2017. ‘5 Things to Know About the Instagram Algorithm’, Later, online, accessed 1st October 2017: https://later.com/blog/instagram-algorithm/

Chaffey, D, 2017. ‘Global Social Media Research Summary 2017’, Smart Insights, online, accessed 1st October 2017: http://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/

Derrick, J, 2017. ‘Benzinga: Elizabeth Warren: Apple, Google, and Amazon Threaten Our Democracy’, Newstex, p.1

Devlin, B, 2017. ‘Algorithms or Democracy: Your Choice’, TDWI, online, accessed 23rd September 2017: https://tdwi.org/articles/2017/09/08/data-all-algorithms-or-democracy-your-choice.aspx

Digital Disinformation Forum, 2017. Online, accessed 23rd September 2017: https://www.disinforum.org/#intro

Domonoske, C, 2016. ‘Students Have ‘Dismaying’ Inability To Tell Fake News From Real, Study Finds’, NPR, online, accessed 30th September 2017: http://www.npr.org/sections/thetwo-way/2016/11/23/503129818/study-finds-students-have-dismaying-inability-to-tell-fake-news-from-real

Dormehl, L, 2014. ‘Algorithms are Great and All, But They Can Also Ruin Lives’, Wired, online, accessed 23rd September 2017: https://www.wired.com/2014/11/algorithms-great-can-also-ruin-lives/

Dusi, M, Finamore, A, Claffy, K, Brownlee, N & Veitch, D, 2016. ‘Guest Editorial Measuring and Troubleshooting the Internet: Algorithms, Tools and Applications’, IEEE Journal on Selected Areas in Communications, Volume 34, Issue 6, p.1805

Ellis, D, 2016. ‘Why Algorithms are Bad For You’, Life on the Broadband Internet, Pew/Elon

Epstein, R, 2014. ‘How Google Could End Democracy’, US News, online, accessed 1st October 2017: https://www.usnews.com/opinion/articles/2014/06/09/how-googles-search-rankings-could-manipulate-elections-and-end-democracy

Eslami, M, Rickman, A, Vaccara, K & Aleyasen, A, 2015. ‘I Always Assumed That I Was Really Close to Her’, Proceedings of the 33rd Annual SIGCHI Conference on Human Factors in Computing Systems, New York, pp.153-162

Facebook, 2017. Online, accessed various dates: http://www.facebook.com

Fiat, A & Woeginger, GJ, 1998. Online Algorithms: The State of the Art, p.7

Flaxman, S, Goel, S & Rao, JM, 2016. ‘Filter Bubbles, Echo Chambers, and Online News Consumption’, Public Opinion Quarterly, Volume 80, p.298

Floridi, L, 2017. ‘The Rise of the Algorithm Need Not Be Bad News for Humans’, Financial Times, online, accessed 23rd September 2017: https://www.ft.com/content/ac9e10ce-30b2-11e7-9555-23ef563ecf9a

Francis, D, 2015. ‘Facebook Elections, Facebook Candidates, Facebook Democracy’, Huffington Post, online, accessed 27th September 2017: http://www.huffingtonpost.com/dian-m-francis/facebook-elections-facebo_1_b_8271488.html

Frommer, D, 2014. ‘Google’s Growth Since its IPO is Simply Amazing’, Quartz, online, accessed 30th September 2017: https://qz.com/252004/googles-growth-since-its-ipo-is-simply-amazing/

Gavet, M, 2017. ‘Rage Against the Machines: Is AI-Powered Government Worth It?’, We Forum, online, accessed 23rd September 2017: https://www.weforum.org/agenda/2017/07/artificial-intelligence-in-government

Google Earth Blog, online, accessed 30th September 2017: http://www.gearthblog.com

Google, ‘From the Garage to the Googleplex’, online, accessed 30th September 2017: https://www.google.com/intl/en/about/our-story/

Google Research, online, accessed 30th September 2017: http://www.research.google.com/pubs/dataminingandmodeling.html

The Guardian, 2017. ‘Growing Social Media Backlash Among Young People, Survey Shows’, online, accessed 7th October 2017: https://www.theguardian.com/media/2017/oct/05/growing-social-media-backlash-among-young-people-survey-shows

Hale, S, 2017. ‘Twitter Trials 280 Characters, But Its Success in Japan is More Than a Character Difference’, Oxford Online Institute, online, accessed 2nd October 2017: https://www.oii.ox.ac.uk/blog/success-is-more-than-a-character-difference/

Hall, K, 2017. ‘Europe Seeks Company to Monitor Google’s Algorithm in $10m Deal’, The Register, p.11

Halpern, S, 2017. ‘How He Used Facebook to Win’, NY Books, online, accessed 1st October 2017: http://www.nybooks.com/articles/2017/06/08/how-trump-used-facebook-to-win/

Hazen, D, 2017. ‘Google, Facebook, Amazon Undermine Democracy: They Play a Role in Destroying Privacy, Producing Inequality’, Salon, online, accessed 30th September 2017

Helbing, D, Bruno, S, Gigerenzer, G, Hafen, E, Hagner, M, Hofstetter, Y, van den Hoven, J, Zicari, RV & Zwitter, A, 2017. ‘Will Democracy Survive Big Data and Artificial Intelligence?’, Scientific American, online, accessed 23rd September 2017: https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/

Heller, N, 2016. ‘The Failure of Facebook Democracy’, The New Yorker, online, accessed 28th September 2017: https://www.newyorker.com/culture/cultural-comment/the-failure-of-facebook-democracy

House of Parliament, ‘Algorithms in Decision-Making Inquiry – Publications’, online, accessed 23rd September 2017: https://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/inquiries/parliament-2015/inquiry9/publications/

Howard, J, 2016. ‘Is Social Media Killing Democracy?’, Oxford Internet Institute, online, accessed 2nd October 2017: https://www.oii.ox.ac.uk/blog/is-social-media-killing-democracy/

Howie, P, 2011. ‘The End of the Google Democracy’, Fast Company, online, accessed 30th September 2017: https://www.fastcompany.com/1746616/end-google-democracy

Instagram, 2016. ‘See the Moments You Care About First’, Instagram, online, accessed 1st October 2017: http://blog.instagram.com/post/145322772067/160602-news

Introna, LD & Nissenbaum, H, 2000. ‘Shaping the Web: Why the Politics of Search Engines Matters’, The Information Society, pp.169-185

Jain, A, 2016. ‘Spread of Fake News on Facebook Eroding Democracy: Obama’, Newstex Global Business Blogs, online, accessed 27th September 2017

Kemp, S, 2017. ‘Digital in 2017: Global Overview’, WeAreSocial, online, accessed 1st October 2017: https://wearesocial.com/special-reports/digital-in-2017-global-overview

Kenningham, G, 2017. ‘Instagram and Snapchat are Vital Tools for Activating Democracy’, Cityam.com, accessed 1st October 2017: http://www.cityam.com/266023/instagram-and-snapchat-vital-tools-activating-democracy

Kirby, J, 2016. ‘Google Predicted Donald Trump Would Win the Election’, Macleans, online, accessed 30th September 2017: http://www.macleans.ca/politics/washington/google-predicted-donald-trump-would-win-the-election/

Kuper, S, 2017. ‘How Facebook is Changing Democracy’, Financial Times, online, accessed 28th September 2017: https://www.ft.com/content/a533d5ec-5085-11e7-bfb8-997009366969?mhq5j=e7

LaMarre, HL & Suzuki-Lambrecht, Y, 2013. ‘Tweeting Democracy? Examining Twitter as an Online Public Relations Strategy for Congressional Campaigns’, Public Relations Review, Volume 39, p.1

Lapowsky, I, 2016. ‘Here’s How Facebook Actually Won Trump the Presidency’, Wired, online, accessed 1st October 2017: https://www.wired.com/2016/11/facebook-won-trump-election-not-just-fake-news/

Levine, P, 2007. The Future of Democracy: Developing the Next Generation of American Citizens, UPNE, p.1

Levy, S, 2010. ‘How Google’s Algorithm Rules the Web’, Wired, online, accessed 15th August 2017: https://www.wired.com/2010/02/ff_google_algorithm/

Linehan, H, 2017. ‘Google, Facebook are a Threat to Democracy, says Press Council Chair’, Irish Times, 25th May 2017, p.11

Lua, A, 2017. ‘Understanding the Instagram Algorithm: 7 Key Factors and Why the Algorithm is Great for Marketers’, BufferApp, online, accessed 1st October 2017: https://blog.bufferapp.com/instagram-algorithm

MacArther, A, 2017, ‘The Real History of Twitter, in Brief’, LifeWire, online, accessed 2nd October 2017: https://www.lifewire.com/history-of-twitter-3288854

Makulilo, A, 2017. ‘Rebooting Democracy? Political Data-Mining and Biometric Voter Registration in Africa’, Information and Communications Technology

Marchi, R, 2012. ‘With Facebook, Blogs, and Fake News, Teens Reject Journalistic “Objectivity”’, Journal of Communication Inquiry, Volume 36, p.246

Matteson, S, 2014. ‘Google Turns in a User for Allegedly Possessing Criminal Material’, TechRepublic, online, accessed 30th September 2017: http://www.techrepublic.com

Merriam-Webster Dictionary, 2017. ‘Fake News’, online, accessed 27th September 2017: https://www.merriam-webster.com/words-at-play/the-real-story-of-fake-news

Miller, D, 2012. ‘Google: Let Us Opt Out of Your Data Mining Machine’, Wired, online, accessed 30th September 2017: https://www.wired.com/insights/2012/10/google-opt-out/

Newton, C, 2016. ‘Zuckerberg: The Idea That Fake News on Facebook Influenced the Election is “Crazy”’, The Verge, online, accessed 26th September 2017: https://www.theverge.com/2016/11/10/13594558/mark-zuckerberg-election-fake-news-trump

The Nudging Company, ‘Nudging and Behavioural Design’ online, accessed 23rd September 2017: https://thenudgingcompany.com/en/free-online-workshop-on-nudging-and-behavioral-design/

O’Neil, C, 2016. ‘Commentary: Facebook’s Algorithm vs. Democracy’, PBS, online, accessed 30th September 2017: http://www.pbs.org/wgbh/nova/next/tech/facebook-vs-democracy/

O’Neil, C, 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown Publishing

Oremus, W, 2016. ‘Who Controls Your Facebook Feed’, Slate, online, accessed 16th August 2017: http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.html

Oxford Reference, ‘Algorithm’, A Dictionary of Social Media, Oxford University Press

Pennington, N, 2013. ‘Facebook Democracy: The Architecture of Disclosure and the Threat to Public Life’, European Journal of Communication, p.193

Phillips, S, 2007. ‘A Brief History of Facebook’, The Guardian, online, accessed 26th September 2017: https://www.theguardian.com/technology/2007/jul/25/media.newmedia

Richey, S & Taylor, B, 2017. Google and Democracy, Taylor & Francis Ltd, p.1

Samler, J, 2017. ‘Why Facebook’s Algorithms are Destroying Democracy’, Harbus, online, accessed 30th September 2017: http://www.harbus.org/2017/facebooks-algorithms-destroying-democracy/

Shoval, N, 2017. ‘Facebook is Dangerous for Democracy – Here’s Why’, Mashable, online, accessed 28th September 2017: http://mashable.com/2017/07/17/facebook-social-media-dangerous-for-democracy/#pfqHTN6bRPqo

Silverman, C, 2016. ‘Hyperpartisan Facebook Pages Are Publishing False And Misleading Information At An Alarming Rate’, Buzzfeed, online, accessed 26th September 2017: https://www.buzzfeed.com/craigsilverman/partisan-fb-pages-analysis?utm_term=.gwJPAMm2Qm#.uj5PYxjKRj

Smith, C, 2016. ‘Why Facebook and Google Mine Your Data, And Why There’s Nothing You Can DO to Stop It’, BGR, online, accessed 30th September 2017: http://bgr.com/2016/02/11/why-facebook-and-google-mine-your-data-and-why-theres-nothing-you-can-do-to-stop-it/

Smith, P, 2017. ‘Dear Internet, Can We Talk? We Have an Information Pollution Problem of Epic Proportions’, Misinfocon, online, accessed 26th September 2017: https://misinfocon.com/dear-internet-can-we-talk-we-have-an-information-pollution-problem-of-epic-proportions-a1c31b600fdc

Spinney, L, 2017. ‘Facebook and Instagram, Blurring the Line Between Individual and Collective Memories’, Nature, Volume 543, p.168

Statt, N, 2017. ‘Mark Zuckerberg Just Unveiled Facebook’s New Mission Statement’, The Verge, online, accessed 26th September 2017: https://www.theverge.com/2017/6/22/15855202/facebook-ceo-mark-zuckerberg-new-mission-statement-groups

Strickland, J, 2017. ‘Why is the Google Algorithm So Important?’, How Stuff Works, online, accessed 30th September 2017: http://www.computer.howstuffworks.com/google-algorithm.htm

Stubb, A, 2017. ‘Why Democracies Should Care Who Codes Algorithms’, Financial Times, online, accessed 23rd September 2017: https://www.ft.com/content/0322c920-421b-11e7-9d56-25f963e998b2

Sultan, A, 2016. ‘Matchmaking Sites: An Algorithm of the Heart’, Sydney Morning Herald, online, accessed 23rd September 2017: http://www.smh.com.au/technology/sci-tech/matchmaking-sites-an-algorithm-of-the-heart-20160215-gmuztu.html

Tufekci, Z, 2015. ‘Facebook Said Its Algorithms Do Help Form Echo Chambers, and the Tech Press Missed It’, New Perspectives Quarterly, p.9

Vestager, M, 2017. ‘A Healthy Democracy in a Social Media Age’, European Commission, online, accessed 1st October 2017: https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/healthy-democracy-social-media-age_en

Vise, DA & Malseed, M, 2005. The Google Story, Delacorte Press, pp.1-10

White, J, 2012. Bandit Algorithms for Website Optimization, O’Reilly Media, pp.7-9

Worstall, T, 2013. ‘Google Is A Significant Threat To Democracy: Therefore It Must Be Regulated’, Forbes, online, accessed 30th September 2017: https://www.forbes.com/sites/timworstall/2013/04/02/google-is-a-significant-threat-to-democracy-therefore-it-must-be-regulated/#2523f85227c9

Zhang, J, Ackerman, MS & Adamic, L, 2007. ‘Expertise Networks in Online Communities: Structure and Algorithms’, Proceedings of the 16th International Conference on World Wide Web, ACM, p.221

A Critical Analysis of Blade Runner as a Resource for Speculation on the Possibilities of New Media and Cyberspace

Humanity has a complex and sometimes concerning relationship with technology and the roles it plays in our lives, and this relationship has been examined in an array of cultural forms and contexts throughout history. Technology not only provides us with new tools for communication and expression, but continually-evolving social contexts for our daily existence (Lunenfeld 2000, p.1). Since humans first engaged with technology, there has been concern about its potential to bring about the end of humanity (Hansen 2004, p.14), and while this may seem an extreme way of viewing technological progress, these fears remain today. Some of the most powerful arguments for and against the use of technology in our lives have been made in utopian or dystopian texts. Film has, for many decades, been a vehicle for bringing these arguments to mass audiences, both in their historical context and in their possibilities for the future. Technology, or new media, has at different times been shown on film as the saviour or the downfall of humanity, or sometimes both simultaneously. This essay will critically analyse the 1982 film Blade Runner in the realm of new media and technology as resources for speculation and possibility, along with the science fiction genre of cyberpunk from which it was spawned, and show that it is a culturally-significant example of technology and humanity colliding in fiction.

Directed by Ridley Scott, Blade Runner is partly based on Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep?. The plot follows ‘blade runner’ Rick Deckard as he hunts four renegade human-like androids, or ‘replicants’, who are on the run from authorities in a dystopian Los Angeles. The clash between “creatures engineered in biomedical laboratories and those who create them to achieve colonial ends” (Lussier & Gowan 2012, p.165) forms the basis of the dramatic narrative. Originally released in 1982 to a poor showing at the international box office and mixed reviews, the film gained a cult following in the decades that followed and, today, is “consistently listed as one of the most important science fiction movies ever made” (Latham & Hicks 2015, p.1). Hailed for its production design depicting a retrofitted future, Blade Runner remains a leading example of the neo-noir genre, is heavily indebted to the femme fatale and cyberpunk traditions, and has been the subject of much scholarly debate and examination since its release. It is a difficult film to examine as it exists in so many states, having been re-released in 1992 with a ‘director’s cut’ label, and again as a ‘final cut’ in 2007 (Dienstag 2015, p.107); later versions of the film increasingly suggest that Deckard himself may be a replicant. Despite this, it is a prominent early depiction of the questions posed by combining high technology and humanity.

An obvious question posed by Blade Runner and similar texts is one which concerns the point at which technology moves from being beneficial to humanity to being a threat. The complex relationship between the two principal characters, Deckard and Rachael, could be seen as symbolising the relationship between humanity and technology. At first they are highly sceptical of each other: Deckard because Rachael is not human, and Rachael because Deckard is a murderer (Dienstag 2015, p.108). At the start of the film, Deckard remarks “replicants are like any other machine” (Locke 2009, p.115), and when he meets Rachael, asks her maker “how can it not know what it is?” (Locke 2009, p.115). Immediately, Rachael challenges Deckard’s ideas about the difference between human and machine by asking him “Have you ever retired a human by mistake?” (Locke 2009, p.115). Deckard breaks the news to her that she is a replicant and a single tear is shown to fall down her face. In this scene, it is the machine which is shown to have emotion, while Deckard remains cold and detached. As the story progresses, the pair come to respect and rely on each other, to the point at which their lives become irreversibly intertwined and they escape to be together. In a speech delivered four years after the publication of Do Androids Dream of Electric Sheep?, Dick suggested that: “In a very real sense our environment is becoming alive, or at least quasi-alive, and in ways specifically and fundamentally analogous to ourselves” (Galvan 1997, p.413). Suggesting that technology is infiltrating our lives and changing our characters in subtle ways, Dick said we risk being reduced to “humans of mere use – men made into machines” (Galvan 1997, p.414). The film tempts viewers to imagine themselves in Deckard’s position and wonder if they would succumb to the same temptations posed by technology. The answer, in most cases, is likely to be in the affirmative.

The film also explores what separates humanity from technology (Locke 2009, p.113). The most obvious answer is the ability to feel emotion, or most importantly, empathy. However, the suggestion that artificial intelligence has the potential to become ‘human-like’ while humans themselves become increasingly less so is perhaps one of the most interesting areas for speculation within the film. The predominantly human traits of community and togetherness are more apparent in the replicant world of Blade Runner than in Deckard’s lonely existence – the replicants fight to survive together and mourn when one of their group is killed. Batty, leader of the replicants, is at times playful and amiable, despite the certainty of his impending doom. He exhibits a sense of high culture and “proves his humanity by demonstrating that he is physically, intellectually, and even morally, superior to everyone else in the film, humans as well as slave” (Locke 2009, p.120). At the film’s climax, Batty’s exemplary and human-like behaviour as he dies on the rooftops – fighting the blade runner while also saving his life – sees him transfer his freedom to Deckard (Lussier & Gowan 2012, p.165). Deckard realizes that the similarities between men and replicants “run deeper than their differences and that they are in fact the same type of man, ‘brothers’, regardless of any distinction between human and android” (Locke 2009, p.138). He comes to grips with his own humanity by falling in love with a replicant and deciding that he wants to live his life with her.

Writers have also, at different times, used analysis of the themes in Blade Runner to explore issues affecting vast numbers of people in the world today. Workman likens themes in the film to issues of a medical nature, comparing the replicants’ desire not to die an early death to that of people with fatal diseases. “Almost all of us shall feel the pain and frustration that comes from living with the knowledge that we will in some sense die prematurely” (2006, p.95), he suggests. It has also been argued that Blade Runner uses the relationship between technology and humanity to make political statements. The film’s humanization of its replicants is a “compelling statement against exploitation and domination” (Dienstag 2015, p.101), although this could be tempered with the argument that it is necessary for humanity to control technology to prevent technology from controlling it. Dienstag (2015, p.108) argues that Blade Runner shows us that to “live freely in any regime, we must understand the dangers of representation, even if, in a large state, we must continue to make use of it”. If the success of democracy relies solely on representation, it risks being dehumanized, much like the initial relationships in the film (Dienstag 2015, p.119). Furthermore, Brooker (2009, p.79) proposes that the ‘final cut’ of the film constructs it as “a fictional world with some parallels to contemporary transmedia franchises”, as it creates a narrative path with several possible routes.

Blade Runner is an early example of a film containing cyberpunk elements, fitting Bukatman’s definition of cyberpunk as being particularly concerned with the “interface of technology and human subject” (1993, p.54). Mead (1991, p.350) describes cyberpunk as depicting the type of radical technological change seen in Blade Runner as an opportunity to positively change the “perceptual and psychic definitions of what it means to be human”. Deckard fits the description of an archetypal cyberpunk character perfectly: he is a “marginalized, alienated loner who live[s] on the edge of society in [a] generally dystopic future, where daily life [is] impacted by rapid technological change, an ubiquitous datasphere of computerized information, and invasive modification of the human body” (Person 1998, online).

Despite its roots lying in the cyberpunk genre, Blade Runner offers much more thematically. It considers political, moral and technological issues, has stood the test of time, and is more popular today than when it was released, unlike many early cyberpunk works. Many of those works set out to demythologise technology and failed, but, interestingly, Blade Runner found a greater number of fans as time passed and technology – especially the Internet and robotics – evolved and flourished. It also influenced films featuring similar android or human-like robot storylines which are still popular in cinema in the 21st century (such as The Terminator series of films). In this way, it could be argued that Blade Runner contributed to expanding our ideas about the limits of technology, and how it interacts with humanity, in exciting and possibly concerning ways.

Blade Runner also sits thematically within the postmodernist movement, adopting and putting creative spins on many of its assertions about society and technology, although it has also been argued that the differences between the 1982 and 1992 versions “thus establish a foundational tension that fuels both modern and postmodern interpretations” (Begley 2004, p.186). Jameson explains that “cyberpunk offers privileged insights into contemporary culture providing a cognitive space through which we can understand the postmodern condition” (1991, p.96). Harvey (1990, p.323) suggests that “Blade Runner hold[s] up to us, as in a mirror, many of the essential features of the condition of postmodernity”, while Clayton explains that “[s]ince its first release in 1982, Blade Runner has been taken by critics as a vision of a particular historical epoch, the period many people today are calling postmodernism” (1996, p.15). The film rejects the idea of social progress and promotes pluralism in the form of multiple, co-existing realities, while the human-replicant bond between Deckard and Rachael “manifests a form of hybridized love” (Lussier & Gowan 2012, p.165). This bond becomes a crucial plot device for the film, as well as contributing to the “continued relevance of Romanticism for postmodernism” (Lussier & Gowan 2012, p.165). The film’s depiction of Los Angeles is of an orientalised, postmodern, noir-ish city – an archetypal cyberpunk landscape offering the viewer a glimpse of both a high level of technological advancement and increasing social breakdown. A dark, despoiled environment, dominated by the towering pyramid of the Tyrell Corporation headquarters – a metaphor for the class system depicted in the film – is the setting in which the story plays out. The postmodern cityscape depicted “shares the attributes of the globalised, transnational, borderless space” similar to the notion of cyberspace (Yu 2008, p.46).

In conclusion, Blade Runner’s many narrative and thematic complexities offer ample opportunity to explore the world of new media and technology as resources for speculation and possibility. The relationship between technology and humanity is at the core of the film, which, in essence, tells the story of one individual’s gradual acceptance of the changing parameters of how technology and humanity interact and operate together. How this happens is a complex tale with many elements open to interpretation. The ability of artificial intelligence to show humanity, while humans simultaneously become increasingly dehumanized, is perhaps the most interesting subject presented by the film, and one worthy of further examination. The system of master and slave is turned on its head by the very suggestion that machines may have the ability to show humanity. In being saved from death and set free by Batty, has Deckard been set free by technology, or set free by humanity? It is an interesting question which leaves plenty of room for speculation and possibility.

References

Begley, V, 2004. ‘Blade Runner and the Postmodern: A Reconsideration’, Literature/Film Quarterly, Volume 32, pp.186-192

Bukatman, S, 1993. Terminal Identity: The Virtual Subject in Postmodern Science Fiction, Duke University Press, London, p.54

Clayton, J, 1996. Concealed Circuits: Frankenstein’s Monster, the Medusa, and the Cyborg, Raritan, p.15

Dienstag, JF, 2015. ‘Blade Runner’s Humanism: Cinema and Representation’, Contemporary Political Theory, Volume 14, pp.101-119

Galvan, J, 1997. ‘Entering the Posthuman Collective in Philip K. Dick’s “Do Androids Dream of Electric Sheep?”’, Science Fiction Studies, Volume 24, pp.413-429

Hansen, MB, 2004. New Philosophy for New Media, MIT Press, p.14

Harvey, D, 1990. The Condition of Postmodernity: An Enquiry into the Origins of Cultural Change, Blackwell, p.323

Jameson, F, 1991. Postmodernism, or, The Logic of Late Capitalism, Duke University Press, Durham, p.96

Latham, R & Hicks, J, 2011. ‘Blade Runner’, Cinema and Media Studies, Oxford University Press, p.1

Locke, B, 2009. ‘White and Black Politics versus Yellow: Metaphor and Blade Runner’s Racial Politics’, The Arizona Quarterly, Volume 65, pp.113-138

Lunenfeld, P, 2000. The Digital Dialectic: New Essays on New Media, MIT Press, p.1

Lussier, M & Gowan, K, 2012. ‘The Romantic Roots of Blade Runner’, Wordsworth Circle, Volume 43, p.165

Mead, D, 1991. ‘Technological Transformation in William Gibson’s Sprawl Novels: Neuromancer, Count Zero, and Mona Lisa Overdrive’, Extrapolation, Volume 32, pp.350-60

Person, L, 1998. ‘Notes Toward a Post-Cyberpunk Manifesto’, Nova Express, online, accessed 22nd April 2017: https://slashdot.org/story/99/10/08/2123255/notes-toward-a-postcyberpunk-manifesto

Workman, S, 2006. ‘Blade Runner’, BMJ: British Medical Journal, Volume 332, p.695

Yu, T, 2008. ‘Oriental Cities, Postmodern Futures: “Naked Lunch, Blade Runner” and “Neuromancer”’, MELUS, Volume 33, p.46

Twin Peaks as Complex Television: An Evaluative Critique

Television in the 21st century is more complex than that of the late 1980s and earlier (Hundley 2007, p.3). This is largely due to the increase in complex narratives, characterisations and intricate plots requiring closer viewer attention: elements which have become commonplace in television series since they first appeared in the early 1990s. Much of this new complexity was conceived in the science fiction genre, with programs such as Twin Peaks, The X-Files and Lost ushering in a new era of complex television. The popularity of these shows had ramifications across all areas of television, transforming the mainstream television arena and enabling the success of complex storylines by “weaning audiences onto them” (Hundley 2007, p.6). This influence is still evident in the production of quality television today. This essay will offer an evaluative critique of the American television series Twin Peaks (1990-1991), examining how it accords with the definition of complex television in both its textual and contextual dimensions, and the various factors which played out in the series’ making.

Complex television is described by Mittell as an “alternative to the conventional episodic and serial forms that have typified most American television since its inception” (2015, p.17), and he explains that the viewer can derive pleasure from trying to figure out the kernels and satellites in plotlines of complex narratives (2015, p.24). Complex television texts are encoded with dense meaning and imagery, often including multiple characterisations and intricate plotlines. Narrative complexity can be considered a distinct narrational mode, or a “historically distinct set of norms of narrational construction and comprehension” (Bordwell 1985, p.1) that allows for “a range of potential storytelling possibilities” (Mittell 2015, p.22), and in which oscillation between long-term story arcs and stand-alone episodes is possible. A prominent example is the 1990s American television series The X-Files, which Sconce (2004, p.93) describes as having both an “ongoing, highly elaborate conspiracy plot” and “self-contained ‘monster-of-the-week’ stories”. Complex television rejects the need for plot closure within every episode, employs a range of serial techniques that build over time, is not as uniform as traditional serial norms, creates an elaborate network of characters, and is often highly unconventional in many ways (Mittell 2015, p.17; Booth 2011, p.371).

Twin Peaks was created jointly by David Lynch and Mark Frost, and premièred in the United States in April 1990. It was a ground-breaking series that “changed most norms about television at that time” (Hundley 2007, p.6), and despite consisting of only two series of 29 episodes in total, has inspired numerous complex debates about its interactions with its medium (Baderoon 1999, p.94). Nominated for fourteen Emmys and broadcast in 55 non-American markets (Muir 2001, p.250), Twin Peaks was described as “revolutionary” at the time of its release (Hundley 2007, p.24), and is still considered so today. The series primarily centred on an investigation by FBI Special Agent Dale Cooper (Kyle MacLachlan) into the murder of Laura Palmer (Sheryl Lee): a beloved high school student, homecoming queen, and native of the fictional small town of Twin Peaks, close to the Canadian border. Cooper’s investigations quickly lead him to discover that Palmer was not so innocent as she might have seemed. He learns that the teenager lived a precarious, multi-layered life, that the town and its people are full of secrets and mystery, and that the surrounding woods are home to something supernatural and possibly evil. The viewer quickly becomes aware that Twin Peaks is a series “full of secrets, variegated orders, ambiguous characters and [with a] supernatural overtone” (Loacker & Peters 2015, p.624). In the company of an array of complex characters who “cheat, steal, kill, rape, and deal drugs” (Hundley 2007, p.24), Cooper pursues the murderer across the first series of eight episodes and into the second, much longer, series, which moves the narrative ever deeper into the realm of science fiction as Cooper investigates the malevolent spirit, Bob, who possessed Laura’s father, and visits the Black Lodge in the woods. Ratings dropped in the second series, perhaps due to the increase in some of the more bizarre science-fiction-oriented elements of the show, and the fact that the murder mystery had by then been largely resolved, but it is the aforementioned ingredients, and the quality of their presentation, which made Twin Peaks such a highlight of modern television.

Twin Peaks presents an isolated community beset by evil forces, and its narrative is driven by a murder investigation: an event which reverberates through the close-knit community. It has been argued that the first series constituted little more than an “above-average, literarily-allusive, highly exploitative mini-series about an honours student cheerleader by day/prostitute-drug dealer by night” (Dolan 1995, p.43), but this description does not begin to scratch the surface of the series’ depth. Twin Peaks was partially marketed as a police procedural (Collins 1992, p.345) and has many elements of a classic detective story, in which the investigator is the “traditionally-expected centre of signification” (Carrion 1993, p.242). It is easy to suggest that Dale Cooper is the “literal hero” of Twin Peaks (Baderoon 1999, p.94) and that the series revisits the staples of traditional televisual story-telling by “inhabiting the genres of detective series and soap opera” (Fiske 1987, p.237). However, the way in which each episode feeds back onto itself as the narrative progresses towards a conclusion moves the story away from the traditional detective format and into a space much more complex and interesting. The narrative is also constantly undermined by evil forces, and by many other televisual devices introduced by Lynch, which remove ontological certainty in the text and add to viewing enjoyment. There is an ominous sense that anything could befall any of the characters at any time (Woodward 1990, p.50), and deciphering and understanding the intricacies of their fates “became a national pastime and a boon for TV and film critics alike” (Muir 2001, p.251). The presence of these elements in Twin Peaks again points to its accordance with the definition of complex television, fitting Mittell’s description of complexity as being when the “ongoing narrative pushes outward, spreading characters across an expanding story world” (2015, p.52). Its multiple complexities led to it being labelled a “genre-splicing work of film art, a parodic, convention-defying detective story” (Lavery 1996, p.16).

Thompson (2003, p.120) suggests that Twin Peaks can be described as “art television, or television which brings elements from art cinema to the small screen”, and for Lynch, film and television are “art medium[s] that subvert and play with well-known boundaries, meanings – and with our senses” (Loacker & Peters 2015, p.621). He seemed thoroughly determined to push these boundaries throughout the entirety of Twin Peaks, with the most obvious challenge to reason and convention being the development of the story of Laura Palmer (Telotte 1995, p.162). Her double or ‘phantom’ life obscures the viewer’s desire to see her lead a normal existence; instead, “drugs, illicit sex, sadomasochism, and hints of devil worship are or were the hidden, yet real, highlights of Laura’s after-school life” (Telotte 1995 p.162), and become inseparable from her identity. Additionally, eccentric characters with sometimes odd or silly mannerisms are deployed generously throughout the narrative to challenge convention and question normality. Even Agent Cooper, the “literal hero” (Baderoon 1999, p.94), uses peculiar methods to solve cases, including speaking to a tape recorder, the use of dreams and visions, and Tibetan meditation.

The use of complexity on multiple concurrent levels means Twin Peaks’ narration is extremely effective at “frustrat[ing] the resolution of the murder mystery by revealing ever more elaborate networks of connections” (Baderoon 1999, p.102). It offers a radical rereading of the detective story and, at its close, “disavows the implications of its own subversiveness” (Baderoon 1999, p.94). In combining elements of a police investigation with soap opera and strong surreal elements, the series “prominently alters and undermines ‘normal’ orders, established boundaries and the ‘grid’ of common meaning – in television narratives, but also far beyond” (Telotte 1995, p.165). In the closing scene of the final episode of series two, in which Cooper is possessed by Bob, the hero of the story occupies the position held by the female victim in the opening scene. The audience is faced with a narration “simultaneously subversive and ambivalent” (Baderoon 1999, p.105), as well as dramatic and gripping.

Since the series aired, Twin Peaks has increasingly been framed in the context of science fiction (Weinstock & Spooner 2015, p.161), and it is useful to examine this contextualisation to see how it confirms the series as complex television. Agent Cooper faces evil forces from not only within the town but also the surrounding woods – a historical link could be drawn to many 1950s science fiction films which presented monsters as a displaced form of communism, an internal as well as external threat to the country (Invasion of the Body Snatchers, for example). Lynch also includes many direct links to the decade throughout the series, from the casting of actors who rose to prominence in the 1950s – Piper Laurie, Russ Tamblyn and Richard Beymer – to the fashion, style and music taste of Audrey Horne (Sherilyn Fenn), and the pristine image of the 1950s diner. As the second series moves deeper into the realm of science fiction, Major Briggs’s superiors further reference the era by warning Cooper that Briggs’s abduction “could make the Cold War seem like a case of the sniffles” (Hundley 2007, p.26).

Additionally, much of the ambiguity concerning the natural and supernatural elements of the murder of Laura can be seen as being influenced by 1980s science fiction (Hundley 2007, pp.26-27). Lingering doubt over the role Leland Palmer’s possession played in Laura’s murder, and the cliffhanger ending as Agent Cooper is himself possessed by Bob, leave the audience unsure of many elements of the story. It is uncommon for a traditional detective story to leave issues unresolved, further cementing the idea that Twin Peaks fits Mittell’s (2015, p.17) definition of complex television, in that it is “highly unconventional in many ways”.

Another element of Twin Peaks’ complexity, one which can be seen throughout the history of horror and science fiction, is the inclusion of sites of deviance or different behaviour (Loacker & Peters 2015, p.622): places where otherworldly occurrences take place. These include The Great Northern Hotel, The Roadhouse and One-Eyed Jacks, along with sites which appear in an imaginary or dreamlike state: the Red Room, the Black and White Lodges, and the Ghostwood Country Club and Estate – a “space in the business imagination of Benjamin Horne” (Loacker & Peters 2015, p.622). Similar sites are used as spaces of deviance throughout television and film history, from the Overlook Hotel in Stanley Kubrick’s The Shining and the Bates Motel in Hitchcock’s Psycho to several screen adaptations of Stephen King’s work. The sites in Twin Peaks which exist between the real and the imaginary bring about many rapid changes in the narrative, add layers of complexity to plotlines, and can leave the viewer puzzled or intrigued (Davis, 2010). There also exist sites which are presented as less deviant or evil, but which are often just as effective in altering the course of the narrative: the Double R Diner or the Twin Peaks Sheriff’s Department, for example (Loacker & Peters 2015, p.622). Agent Cooper’s meditative states and dreams are also arguably sites of deviance, although they are used for good in the solving of crime. Hence, it could be argued, the physical landscape of the town of Twin Peaks, and so the series itself, is a “maze” (Blassmann 1999, p.49), made up of “multiple, seemingly contradicting and obscure formulas, codes and landmarks” (Westwood 2004, p.775): again adding to the complexity and overall quality of the viewing experience.

It is also useful to examine television’s history to see which factors may have influenced Twin Peaks’ production and to contextualise it within the evolution of television in the United States over a number of decades. Beginning with visual and narrative style, it can be argued that Twin Peaks has been influenced by film noir; a genre of film which emerged in the 1940s and 1950s consisting of drama infused with fear, crime, shadows and violent death, or “films filled with trust and betrayal” (Duncan 2000, p.7). Lynch has drawn on many of the themes and styles from film noir throughout his career, most especially in his choice of settings in Mulholland Drive and Lost Highway. In Twin Peaks, the ‘otherness’ of the cold northern climate mirrors the psychological state of many of its characters. In his version of small-town America, a majority of characters feel and act like outsiders.

The town of Twin Peaks itself has multiple significances, and is the basis for much of the complexity throughout the narrative. Dienst (1994, p.95) explains that Lynch and Frost wrote the first storylines for the series based on an idea of the town, rather than any particular plotline. Small towns have a long tradition in the American narrative and are often mythologised in American television (Carroll 1993, p.288), but this concept is quickly revealed to be a construct in Twin Peaks (Pollard 1993, p.303).

Much of Twin Peaks’ style is deeply steeped in the Gothic genre of television: a genre generally including plot devices which “produce fear or dread, the central enigma of the family, and a difficult narrative structure or one that frustrates attempts at understanding” (Ledwon 1993, p.260). The Gothic is “a literature of nightmare” (MacAndrew 1979, p.3), where “fear is the motivating and sustaining emotion” (Gross 1989, p.1), and in Twin Peaks, the viewer is exposed to devices such as “incest, the grotesque, repetition, interpolated narration, haunted settings, mirrors, doubles, and supernatural occurrences” (Ledwon 1993, p.260). Its narrative breaks away from the uniformity of traditional television through transgression and uncertainty in a distinctly post-modern fashion. Lynch combines the mundane with the horrific repeatedly throughout the series; most especially when the evil Bob appears to Laura while she is performing simple tasks like writing in her diary or changing clothes. By “exploit[ing] the … potential of Gothic devices to the hilt” and “challeng[ing] the most deep-seated expectations of … television” (Ledwon 1993, p.269), Lynch blurs the distinction between the normal and abnormal, the everyday and the extraordinary, so that the Gothic becomes normal.

Additionally, the influence of many cultural factors is evident in Twin Peaks’ narratives and its modes of production, and the combination of these lends further complexity to the series. A prominent cultural factor is that of gender and its treatment within the series. Following a decade in which concepts of masculinity and feminism had undergone significant public shifts and homosexuals had “moved from a position of outlaw to one of respectable citizen” (Rich 1986, p.532), Twin Peaks’ writers were freer to challenge gender boundaries and “open up space for a wider range of acceptable masculinities” (Comfort 2009, p.44). This is done partly through giving value to a wide range of eccentric characters: many of the main male characters exhibit eccentric behaviours, and it can be argued that traditional gender roles are “freed up” (Comfort 2009, p.44) and the idea of what masculinity entails is given greater scope as a result. This is most evident in the inclusion of the character of DEA Agent Denis/Denise (David Duchovny, future star of The X-Files), who indicates to Cooper that he is heterosexual despite dressing as a woman. In one short scene, the idea of masculinity is challenged and eccentricity is accepted at the same time.

However, another element of the culture which influenced Twin Peaks is of a more unsavoury nature. The series suggests that “the worst secrets of all … are the secret connections between culture and self that allow men to brutalise women” (Davenport 1993, p.258). Laura Palmer is first presented as a “stunning corpse wrapped in plastic” (Moore 2015, online), and while Lynch extended the narrative possibilities of television, he did so by telling the story of a girl whose downfall consisted of being abused – sexually and otherwise – by a variety of powerful men, although it has also been argued that Lynch is simply following a well-known formula of “exploiting our love affair with … sex and death” (George 1995, p.110). It is easy to ignore the reality of violence in Twin Peaks: when watching TV, people are “in their own homes and…well placed for entering into a dream” (Henry 1999, p.103), a mode of viewing that often overrides the opportunity television gives us to “critically and creatively reflect upon established, often idealizing images” (Weiskopf 2014, p.152). Upon the series’ release, Lynch downplayed the violence, describing the plot as simply being “about a woman in trouble … and that’s all I want to say about it” (Blassmann 1999, online).

Storey (2015, p.210) describes all television as “hopelessly commercial”, and Twin Peaks displays commercial intertextuality ranging from its follow-up feature film and The Secret Diary of Laura Palmer to international sales of T-shirts featuring the words ‘I Killed Laura Palmer’. The series was produced to win back sections of a fragmented audience partially lost to cable, cinema and video (Storey 2015, p.210) and was marketed to different audiences in various ways, drawing on “Gothic horror, police procedural, science fiction and soap opera” (Collins 1992, p.345). Producers hoped the series would “appeal to fans of Hill Street Blues, St Elsewhere and Moonlighting, along with people who enjoyed nighttime soaps” (Allen 1992, p.342). This attempt to create new, post-modern productions is now well-established in complex television (Nelson 1996, p.677).

In conclusion, if complex television texts are defined as those encoded with dense meaning and imagery, employing a range of serial techniques that build over time, containing elaborate networks of characters, and breaking with convention in many ways, then Twin Peaks qualifies as complex television. Its signs and codes are open to a range of interpretations, and its influences are as varied as the range of television shows it went on to influence in turn. Historical, institutional, economic and cultural factors all played out in the making of the series, and it presents many different genre resonances to its audiences. It can be considered a particularly high-quality example of complex television, as the wealth of academic study it has attracted attests. Twin Peaks is an important example of everything television can be.

References

Allen, RC, 1992. Channels of Discourse Reassembled, London: Routledge, p.342

Baderoon, G, 1999. ‘Happy Endings: The Story of Twin Peaks’, Journal of Literary Studies, Volume 15, pp.94-107

Blassmann, A, 1999. ‘The Detective in Twin Peaks’, online, accessed 4th February 2017: http://www.thecityofabsurdity.com

Booker, MK, 2002. Strange TV: Innovative Television Series from the Twilight Zone to the X-Files, Westport, Connecticut: Greenwood Press, p.98

Booth, P, 2011. ‘Memories, Temporalities, Fictions: Temporal Displacement in Contemporary Television’, Television & New Media, Sage, p.371

Bordwell, D, 1985. Narration in the Fiction Film, Madison: University of Wisconsin Press, p.1

Carrion, MM, 1993. ‘Twin Peaks and the Circular Ruins of Fiction’, Literature/Film Quarterly, Volume 21, p.242

Carroll, M, 1993. ‘Agent Cooper’s Errand in the Wilderness: Twin Peaks and American Mythology’, Literature/Film Quarterly, Volume 21, p.288

Collins, J, 1992. ‘Television and Postmodernism’, The Politics of Postmodernism, p.345

Comfort, B, 2009. ‘Eccentricity and Masculinity in Twin Peaks’, Gender Forum, Volume 27, p.44

Davenport, R, 1993. ‘The Knowing Spectator of Twin Peaks: Culture, Feminism, and Family Violence’, Literature/Film Quarterly, Volume 21, pp.255-259

Dienst, R, 1994. Still Life in Real Time: Theory after Television, Durham & London: Duke University Press, pp.95, 99

Dolan, M, 1995. ‘The Peaks and Valleys of Social Creativity: What Happened to/on Twin Peaks’, Full of Secrets: Critical Approaches to Twin Peaks, Detroit, Michigan: Wayne State University Press, pp.33-50

Duncan, P, 2000. Film Noir, Pocket Essentials, p.7

Fiske, J, 1987. Television Culture, London & New York: Routledge, p.237

George, DH, 1995. ‘Lynching Women: A Feminist Reading of Twin Peaks’, Full of Secrets: Critical Approaches to Twin Peaks, Detroit: Wayne State University Press, p.110

Gross, LS, 1989. Redefining the American Gothic: From Wieland to Day of the Dead, Ann Arbor: UMI Research, p.1

Henry, M, 1999. ‘David Lynch: A 180-Degree Turnaround’, in Barney, RA (ed.), David Lynch: Interviews, Jackson: University Press of Mississippi, p.103

Hundley, K, 2007. ‘Narrative Complication through Science Fiction Television: From “Twin Peaks” to “The X-Files” and “Lost”’, Theater and Film, University of Kansas, pp.1-15

Jensen, PM & Waade, AM, 2013. ‘Nordic Noir Challenging “the Language of Advantage”: Setting, Light and Language as Production Values in Danish Television Series’, Journal of Popular Television, Volume 1, pp.259-265

Lavery, D, 1996. ‘Introduction’, in Lavery, D, (ed.), Full of Secrets: Critical Approaches to Twin Peaks, Detroit: Wayne State University Press, p.16

Ledwon, L, 1993. ‘Twin Peaks and the Television Gothic’, Literature/Film Quarterly, Volume 21, pp.260-270

Loacker, B & Peters, L, 2015. ‘Exploring Absurdity and Sites of Alternate Ordering in Twin Peaks’, Ephemera, Volume 15, pp.621-649

Lost, 2004-2010. Television series, Touchstone Television/ABC Studios, United States

MacAndrew, E, 1979. The Gothic Tradition in Fiction, New York: Columbia University Press, p.3

Marc, D, 1987. ‘Beginning to Begin Again’, Television: The Critical View, New York: Oxford University Press, pp.323-360

Mittell, J, 2015. Complex TV: The Poetics of Contemporary Television Storytelling, New York University Press, pp.17-25

Moore, S, 2015. ‘Never Mind How “Cool” Twin Peaks is, What About Taking it Seriously?’, The Guardian, online, accessed 6th February 2017: https://www.theguardian.com/commentisfree/2015/apr/06/cool-twin-peaks-david-lynch-abuse-sexual-murder-young-women

Muir, J, 2001. Terror Television: American Series, 1970-1999, Jefferson, North Carolina: McFarland & Co, p.250

Nelson, R, 1996. ‘From Twin Peaks, USA, to Lesser Peaks, UK: Building the Postmodern TV Audience’, Media, Culture and Society, London: Sage, p.677

Newman, M, 2006. ‘From Beats to Arcs: Toward a Poetics of Television Narrative’, Velvet Light Trap, pp.16-28

Pollard, S, 1993. ‘Cooper, Details, and the Patriotic Mission of Twin Peaks’, Literature/Film Quarterly, Volume 21, p.303

Rich, R, 1986. ‘Feminism and Sexuality in the 1980s’, Feminist Studies, University of Maryland Press, p.532

Storey, J, 2015. Cultural Theory and Popular Culture: An Introduction, Routledge, p.210

Telotte, JP, 1995. ‘The Disorder of Things in Twin Peaks’, in Lavery, D, (ed.), Full of Secrets: Critical Approaches to Twin Peaks, Detroit: Wayne State University Press, p.165

Thompson, K, 2003. Storytelling in Film and Television, Cambridge: Harvard University Press, p.120

Twin Peaks, 1990-1991. Television series, Lynch/Frost Productions, United States

Weinstock, J & Spooner, C, 2015. Return to Twin Peaks: New Approaches to Materiality, Theory and Genre, Palgrave, p.161

The X-Files, 1993-2002. Television series, 20th Century Fox Television, United States

Weiskopf, R, 2014. ‘Ethical-Aesthetic Critique of Moral Organization: Inspirations from Michael Haneke’s Cinematic Work’, Culture and Organization, Volume 20, pp.152-174

Westwood, R, 2004. ‘Comic Relief: Subversion and Catharsis in Organizational Comedic Theatre’, Organization Studies, Volume 25, pp.775-795

Woodward, RB, 1990. ‘A Dark Lens on America’, in Barney, RA, (ed.) David Lynch: Interviews, Jackson: University Press of Mississippi, p.50