Personalisation as currency

Renée Ridgway, PhD fellow at Copenhagen Business School, and Research Associate with the Digital Cultures Research Lab, Leuphana University.

Capital burns off the nuance in a culture. Foreign investment, global markets, corporate acquisitions, the flow of information through transnational media, the attenuating influence of money that’s electronic… untouched money… the convergence of consumer desire (DeLillo 785).

‘Cybercapitalism’, commonly termed ‘digital capitalism’, refers to business models that seek financial profit within the territory of the Internet, or ‘cyberspace’. Both cybercapitalism and cyberspace refer back to the etymology of their prefix, cyber, from the Greek ‘kybernetes’ (steersman, governor), the root of cybernetics as the science of governance or stewardship; yet the inherent complexity of cyberspace reflects communication between peoples, societies and cultures in virtual reality. With the application of media technologies, social interactions occur in “the place between” or “the indefinite place out there” (Sterling 11), where people interconnect and navigate through computational networks. Drawing upon the metaphor of a wider cyberculture in literature, cyberspace alludes to information streaming across a borderless world of “unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data” (Gibson 69).

However, hierarchies do exist in cyberspace: the same infrastructures that prohibit access for some people enable ‘cybercapitalism’ to take hold. Not just streams of social communication but flows of money at nearly the speed of light constitute a globalised world of commerce. “The predominant economic model behind most internet services is to offer the service for free, attract users, collect information about and monitor these users, and monetize this information” (Mikians et al 1). We search, tweet, post, blog and upload, giving away our words, thoughts, images and intimacies. As a consequence of ‘the network effect’, more people contribute online because others also choose to do so, causing the value and power of the network to increase exponentially as it grows (Leach). This donation of data is reciprocated in the form of power constructs by the private sector (Google, Facebook, Twitter, Yahoo). Google, for example, is dependent on users willingly furnishing data that is then filtered, as value is simultaneously extracted from it. This gives Google a database acquired entirely for free; by designing specific algorithms that crawl and index the Internet, it then provides ‘relevant’ results.

Cybercapitalism is structured by a highly intricate series of communication networks that connect us through our participation on social platforms, but outside of these platforms how do we navigate and explore this information superhighway? We do so predominantly through search requests. Algorithms ostensibly know what we want before we even finish typing, as with Google’s ‘autocomplete’. Search is thus not merely an abstract logic but a lived practice that helps manage and sort the nature of the information we seek as well as the direction of our queries. Google’s ‘PageRank’ (Page and Brin) is based on hyperlinks and has emerged not only as an algorithm for sorting and indexing information on the world wide web but also as a dominant paradigm that establishes the new social, cultural and political logics of search-based information societies, a phenomenon that Siva Vaidhyanathan characterizes as the “googlization of everything” (20). Whether search will become more semantic or contextual, understanding what words mean, what their intent is and how they relate to other concepts, is currently under research and development. As of this writing, however, Google is the world’s most used search engine, answering over 3 billion requests per day (Wikipedia). The implications of this hegemony for questions of identity, free speech, control, mobilization and so on should not be underestimated.

Fig. 1. Excerpt from “The Anatomy of a Large-Scale Hypertextual Web Search Engine”, Page, Lawrence and Brin, Sergey (1999), p. 12.
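
The logic of the ranking system excerpted above can be illustrated with a short sketch. The following Python fragment implements the iterative PageRank formula as given in Page and Brin’s paper; the four-page link graph is invented for illustration, and a production search engine naturally operates over billions of pages rather than four.

```python
# Minimal sketch of PageRank as formulated by Page and Brin (1999):
# PR(A) = (1 - d) + d * sum(PR(T) / C(T)) over all pages T linking to A,
# where d is the damping factor and C(T) the number of outlinks of T.
# The four-page link graph below is invented for illustration.

links = {
    'A': ['B', 'C'],   # page A links to pages B and C
    'B': ['C'],
    'C': ['A'],
    'D': ['C'],
}

def pagerank(links, d=0.85, iterations=50):
    ranks = {page: 1.0 for page in links}        # uniform initial ranks
    for _ in range(iterations):                  # power iteration
        new_ranks = {}
        for page in links:
            inbound = sum(ranks[t] / len(links[t])
                          for t in links if page in links[t])
            new_ranks[page] = (1 - d) + d * inbound
        ranks = new_ranks
    return ranks

for page, rank in sorted(pagerank(links).items(), key=lambda x: -x[1]):
    print(f'{page}: {rank:.3f}')   # C ranks highest: every other page links to it
```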

Are most users aware of the hidden control of search algorithms and how they affect the results obtained, whether for the production of knowledge, information retrieval or just surfing? Since December 4, 2009, Google has used ‘personalisation’: it captures and logs users’ histories and folds previous search queries into real-time search results, even if one is not signed into a Google account. This search engine bias retains user data as algorithms gather, extract, filter and monitor our online behaviour, offering suggestions for subsequent search requests. In exchange for our data we receive ‘tailored’ advertising, making things fit, turning ourselves into commodities for advertisers and receiving free Internet services. As we search every day, many of us allow this personalisation to occur without deleting the cookies or installing the plug-ins that would inhibit it. This personalisation becomes a currency in the online marketing of our data.
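
The skewing effect of a logged history can be sketched with a deliberately crude toy model. This is an illustration of the principle only, not Google’s actual (proprietary) mechanism: results whose terms overlap with stored queries are boosted, so an ambiguous query such as ‘jaguar’ resolves towards whatever the history suggests.

```python
# Toy model of history-based personalisation (not Google's actual
# mechanism): results sharing terms with the logged query history are
# boosted. All queries, results and scores are invented for illustration.

search_history = ['jaguar cars price', 'used car dealers', 'car insurance quotes']

results = [  # (title, generic relevance score for the query 'jaguar')
    ('Jaguar - Wikipedia (big cat)', 1.0),
    ('Jaguar Cars - official site', 0.9),
    ('Jaguar habitat and diet', 0.8),
]

def personalise(results, history, boost=0.5):
    history_terms = {term for query in history for term in query.lower().split()}
    reranked = []
    for title, score in results:
        overlap = len(set(title.lower().split()) & history_terms)
        reranked.append((title, score + boost * overlap))
    return sorted(reranked, key=lambda x: -x[1])

for title, score in personalise(results, search_history):
    print(f'{score:.2f}  {title}')   # the car-related result now ranks first
```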

By participating in online activities we enable this voluntary ‘personalisation as currency’ with our data, or, in the words of venture capitalists, our ‘powerful information’. The selling of our individual desires, wants and needs to large multinational corporations on the internet was already articulated by ‘Humdog’ in her prescient 1994 text, pandora’s vox: on community in cyberspace, where she argued that computer networks had not led to a reduction in hierarchy but rather to a commodification of personality and a complex transfer of power and information to companies (Hermosillo). By remitting all of this information to corporations (Google, Apple, Facebook, Amazon) we receive the benefit of supposedly incredible recommendations. Nowadays it has become clear that users pay with their data, which increasingly finances these corporations’ growth as they sell it to third-party advertisers. It is a transaction, and in the exchange we get relevance. But is this really true?

Aestheticisation of personalisation

Perhaps our futures are bound now inextricably by two works of literature. Orwell declared with 1984 that we will be destroyed by the things we fear and we will have a surveillance state whereas Aldous Huxley in Brave New World claimed that we would be destroyed by the things that delight us. We now have the Orwellian surveillance companies who produce the things we really like: social networking, cloud computing, free email, iPhone, all in one package and all in one generation. (Leach)

In Spike Jonze’s recent film Her, it is not so much the operating system called Samantha that captivates as Element Software, the company from which Theodore, the film’s main protagonist, purchases her. “In Jonze’s all too plausible dystopia, we are enslaved not to robots but corporations, and the invisibility, even desirability of that enslavement is what makes Her so chilling.” (Farago)

Fig. 2. Poster for the film Her by Spike Jonze

The hidden aspect of corporate control is not something new; advertising has long drawn successfully on the emotions of consumers to create brand loyalty and sell products. In the here and now, all manner of psychological techniques are applied to coax us into buying things we don’t need, while the process by which we are manipulated remains hidden. Advertising agencies incorporate users’ wants and desires as they capture the data and then attempt to predict what the audience will consume. Most people enjoy the recommendations they receive from Amazon or the suggestions based upon what their friends like on Facebook. This is the power of suggestion at work. The efficacy with which Google delivers (popular) results when we type in keywords reinforces its dominance. Google earns 96% of its revenue from advertising.[1]

Technology and how it controls our attention is emerging as a 21st-century zeitgeist. However, certain information on the Internet is kept invisible and obscured; thus we are deterred from learning about things we do not already know. Eli Pariser’s The Filter Bubble reminds us that the information age not only spews data but also creates a sense of deprivation. This leads to the ‘distortion effect’, one of the challenges posed by personalised filters.

Like a lens, the filter bubble invisibly transforms the world we experience by controlling what we see and don’t see. It interferes with the interplay between our mental processes and our external environment. In some ways it can act as a magnifying glass, helpfully expanding our view of a niche area of knowledge. (Pariser 82-83)

At the same time, these filters limit what we are exposed to and therefore affect our ability to think and learn. In this way, personalisation has legitimised an online public sphere that is manipulated by algorithms.

Semantic Capitalism

We don’t want to know everything about you. What we want to do is to try to help to connect you with the people, ideas, and things you are looking for. You decide which information you give to us. It is a utility that improves if you decide to share information. (Google spokesperson)

Google states above that the more information users share, the more relevant the search results they will obtain. In order to test this statement, Martin Feuz, Matthew Fuller and Felix Stalder designed the empirical study Personal Web Searching in the age of Semantic Capitalism: Diagnosing the mechanisms of personalisation. Published in the peer-reviewed online journal First Monday in February 2011, the research had been carried out with great difficulty over the preceding years: Google interfered with the testing while it was under way, blocking IP addresses and adding personalisation. The study began with the premise that, because not all users are looking for the same information when they type in a keyword, the quality of generic search results is decreasing. To combat this problem, search engines (Google in particular) had been working on ways of obtaining better results for the user, one of these being personalisation. The researchers first assigned identities (one Gmail account each) to Immanuel Kant, Friedrich Nietzsche and Michel Foucault, representing the 18th, 19th and 20th centuries respectively, equipped each with a vocabulary of likely search keywords, and then programmed thousands of search requests from the same server in London.
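
The mechanics of such an experiment can be sketched in outline. The fragment below is a simplified illustration of the study’s method rather than the researchers’ actual code: each persona issues queries from its own vocabulary through a persistent session, so the search engine can accumulate a history per identity. (Automated querying of this kind breaches Google’s terms of service and, as the study itself found, may simply be blocked.)

```python
# Illustrative sketch of 'training' search personas, after Feuz, Fuller
# and Stalder: one persistent cookie jar per identity, repeated queries
# drawn from persona-specific vocabularies. The vocabularies below are
# invented samples; the study automated thousands of such requests.

import time
import requests

personas = {
    'kant':      ['categorical imperative', 'synthetic a priori', 'enlightenment'],
    'nietzsche': ['will to power', 'eternal recurrence', 'genealogy of morals'],
    'foucault':  ['biopolitics', 'panopticon', 'discipline and punish'],
}

def train_persona(name, vocabulary, rounds=3):
    session = requests.Session()               # cookies persist per identity
    for _ in range(rounds):
        for keyword in vocabulary:
            session.get('https://www.google.com/search',
                        params={'q': keyword},
                        headers={'User-Agent': f'persona-study-{name}/0.1'})
            time.sleep(2)                      # pace the requests
    return session

sessions = {name: train_persona(name, vocab)
            for name, vocab in personas.items()}
```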

Fig. 3. Infographic: Hypothesis 3

For this empirical enquiry they tested three ways of profiling, on the assumption that Google does something similar in order to produce personalised search results, and in doing so developed new digital methods. The three types of profiling were labelled ‘the knowledge person’, ‘the social person’ and ‘the embodied person’. The first looked at what people are interested in, based on search histories. The second looked at networks: who the person is connected with through email, social networks and communication technology. The third looked at the person’s physical environs and bodily state. By merging the three profiles, Google promises to deliver relevant search results for each individual user, the machine interpreting the user’s behaviour and deciding what is relevant for them.

Their findings suggest that Google’s personalised search “does not fully provide the much-touted benefits for its search users. More likely, it seems to serve the interest of advertisers in providing more relevant audiences to them” (Feuz, Fuller, Stalder). What can be drawn from the research is that the benefits of personalised search go to the advertiser; Google has, in effect, sold us, the audience, to them. Google draws on the well-known business model of television, which involves giving away content for free in order to attract an audience, which is then sold to the advertisers who have paid the channel for airtime. Yet Google does not stream the same ad to its billions of users, and users do not type in the same query; instead it deals in targeted advertising. This exchange provides search results to users and sells users to advertisers. Also of importance is that this study produced the first evidence that:

Google is actively matching people to groups, which are produced statistically, thus giving people not only the results they want (based on what Google knows about them for a fact), but also generating results that Google thinks might be relevant for users (or advertisers) thus more or less subtly pushing users to see the world according to criteria pre–defined by Google. (Feuz, Fuller, Stalder)

This type of ‘collaborative filtering’ continues today with machine-learning algorithms as the amount of data captured and correlated increases exponentially.
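
A minimal sketch shows the principle of user-based collaborative filtering: a user is matched to statistically similar users, and items those neighbours rated highly are recommended back. The ratings below are invented for illustration; production recommenders use far richer signals and models.

```python
# Sketch of user-based collaborative filtering: recommend items that
# statistically similar users liked. All ratings are invented.

from math import sqrt

ratings = {
    'ann':  {'book_a': 5, 'book_b': 3, 'book_c': 4},
    'ben':  {'book_a': 4, 'book_b': 3, 'book_d': 5},
    'cara': {'book_b': 1, 'book_c': 2, 'book_d': 4},
}

def similarity(u, v):
    shared = set(u) & set(v)                 # items both users rated
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm = sqrt(sum(x * x for x in u.values())) * \
           sqrt(sum(x * x for x in v.values()))
    return dot / norm

def recommend(user):
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], their)
        for item, rating in their.items():
            if item not in ratings[user]:    # only unseen items
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores.items(), key=lambda x: -x[1])

print(recommend('ann'))   # suggests book_d, liked by ann's nearest neighbours
```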

Hidden Infrastructures

In 1967, advertising executive Robert MacBride’s The Automated State already described modern computer systems that would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do” (MacBride qtd. in Morozov). In his book Protocol, Alexander Galloway exposes the hidden infrastructures that enable the Internet to function, drawing on TCP/IP, DNS and HTML and arguing that code is a language that can be analysed like any other. If the Internet enables communication between people, it is also the greatest surveillance machine ever invented. Control is exercised through covert operations that include surveillance but are not limited to the form of the panopticon. Rather, according to Wendy Hui Kyong Chun in Control and Freedom: Power and Paranoia in the Age of Fiber Optics, “The problem is not with the control protocols that drive the Internet – which themselves assume the networks’ fallibility – but rather with the way these protocols are simultaneously hidden and amplified.” (6)

Search algorithms crawl vast amounts of data and organise it according to, for example, what an advertiser has paid the algorithms’ programmers to surface. As they sort through the data, a hyper-complex infrastructure of daily search requests emerges. We cannot, however, see the mechanisms by which our searches are manipulated by the estimated 200-plus proprietary signals employed by Google. Search is thus a ‘hidden organisation’, a hidden organising process that keeps its secrets of control sequestered from the user. The act of concealment, when we hide and do not want to participate, could be considered an act of critique. In other words, being so overt that we are covert might be the only way to escape capture. “But when do we reach a point where not using them (corporate algorithms) is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?” (Morozov)

Fig. 4. Excerpt from “The Anatomy of a Large-Scale Hypertextual Web Search Engine”, Page, Lawrence and Brin, Sergey (1999), p. 12.

In an era of Big Data (Mayer-Schönberger and Cukier), where information about everything and everyone is collated and gathered, only machines can process all of the data, and what becomes visible is translatable only as correlation. Antoinette Rouvroy’s assertion that “with big data we have the impression that knowledge is not constructed anymore” suggests a transformation in the field of visibility in which not everything will be allowed to be utterable. “All we need is the automatic, a processing of algorithms on huge databases in order for knowledge to surface, as if by magic?” (Rouvroy, Society of the Query #2). The knowledge is already there, hidden in huge databases; it is, however, signified through calculation, in the form of data visualizations and statistics. Data mining operates according to a new statistical practice in which notions of causality have given way to correlations, as computer systems aggregate data from different entities and synthesize the information in order to identify patterns of behaviour and make predictive assessments.

Data Behaviourism

What we experience then is a new truth regime, what Rouvroy calls ‘data behaviourism’, “anchored in the purely statistical observations of correlations (independent from any kind of logic) among data collected in a variety of heterogeneous contexts” (Rouvroy, The end(s) of critique 8). Although predictive personalisation has been shown to tailor recommendations to specific users based on search histories, as demonstrated by the empirical study discussed above, personalisation offers suggestions to the user based on past preferences that have been assigned to groups. “A query is now evaluated in the context of a user’s search history and other data compiled into a personal profile and associated with statistical groups” (Feuz, Fuller, Stalder). Based on buying habits, search histories and so on, the user is first classified and assigned according to demographics: not addressed as an individual, but through mass personalisation. Behavioural targeting schemes use analogous technology, collating data to define ‘audience segments’ made up of users with similar profiles.
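
The statistical grouping itself can be sketched with a simple clustering routine. This is an illustrative toy, not any ad network’s actual pipeline: users are reduced to behavioural feature vectors, k-means produces the ‘audience segments’, and a new user is then targeted according to the nearest segment rather than as an individual.

```python
# Sketch of 'audience segment' assignment via k-means clustering.
# Feature columns and all values are invented for illustration:
# [purchases per month, searches per day, avg. session minutes]

import numpy as np

users = np.array([
    [12, 40, 35], [10, 38, 30],    # heavy users
    [1,  5,  4],  [2,  6,  5],     # light users
    [6, 20, 15],  [5, 18, 12],     # mid-range users
], dtype=float)

def kmeans(data, k=3, steps=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(steps):
        distances = np.linalg.norm(data[:, None] - centroids[None, :], axis=2)
        labels = np.argmin(distances, axis=1)       # nearest centroid per user
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return centroids, labels

centroids, labels = kmeans(users)
new_user = np.array([11, 35, 28], dtype=float)
segment = int(np.argmin(np.linalg.norm(centroids - new_user, axis=1)))
print(f'new user targeted as segment {segment}, not as an individual')
```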

Significance becomes calculable without signification and is therefore rendered meaningless. This unseen structure has become an increasingly prominent issue in the way we seek knowledge, not only from an epistemological point of view but also with regard to how this infrastructure orders and classifies knowledge in taxonomies of computable data. ‘Welcome to the City of Discipline’ (Foucault): we govern ourselves through our ‘behaviours’ being captured and cultivated in ‘personalised’ machines, sharing everything we do as huge amounts of data, surrendering our privacy for free services and participation in the attention economy. This state of discipline is reflected in the logistical capture of our data, preferences, intimacies and search queries as our subjectivity is exploited in these deterritorialised spaces.

The environs of digital labour are subject to what Félix Guattari termed ‘deterritorialisation’, describing how the classic Fordist modes of production have moved from the factory to the Internet and lost their ‘territorial identity’ (Lazzarato 16). The disciplinary societies of the 18th and 19th centuries succeeded the societies of sovereignty and were instead spaces of enclosure defined by families, schools, barracks, factories and possibly hospitals or prisons (Foucault qtd. in Deleuze 1). In the 19th century, nation states were constructed because capitalism had become deterritorialised through colonialism and the industrial revolution. The 20th century saw the dawn of the society of control as, after WWII, liberating and enslaving forces confronted one another and the factories were replaced by corporations (Deleuze 4). Now this control is modulated through code and, in order to survive, capitalism brings deterritorialisation back to individualisation, placing individual initiative in the foreground.

Everyone is required to become an entrepreneur selling their own brand of ‘creative’ activity, leaving traces of data everywhere. It is then the entrepreneur, embodied as both producer and consumer (prosumer), whose behaviour and daily online activities are monitored by algorithms. The technological advancement of instant communication through email, VOIP, comment writing, posts, likes and visits to websites comprises not only knowledge production but also telecommunications. The concept of subjectivity produced by this reproduction of communication is what underpins Post-Fordist activities. The new technologies invest in human subjectivity through social networks and user-generated content, and in this they differ from those of the industrial era. The subject is the consumer, and this consumption is captured in the net of big data. Deleuze’s profound description of capitalism in the control society, in which services are sold and stocks are bought, finds that “individuals have become ‘dividuals’ and masses, samples, data, markets, or banks” (Deleuze 5). Where has agency gone when our subjectivities are objectified, reified, datafied and commensurated?

Fig. 5. The Personal Data Ecosystem: A Complex Web from Data creation to Data consumption

Data as an asset class

In a digital economy that bids farewell to the client and welcomes instead the user/collaborator (prosumer), the personalisation of searches has become commonplace, while the infrastructures that enable these protocols remain hidden. With personalised search, our subjectivity is correlated through algorithmic technologies as our personal information (data) is acquired by marketers, or third parties.[2] The serendipity of searching online ended with personalisation. Now we search through hyperlinks on Twitter, social media platforms and apps, as the exponentially increasing use of mobile phones enables 24/7 connectivity. As search migrates from desktop computers and social media to mobile phones, the integration of mobile operating systems with the web ensures that we become ever more entrenched in the filter bubble.

Instead of supplying our data, we could be hiding it, or in control of it, and therefore need not give it away in exchange for free services. Well-designed browser extensions such as AdNauseam “obfuscate browsing data and protect users from surveillance and tracking by advertising networks”.[3] Working in conjunction with Adblock Plus,[4] an open-source plug-in that removes ads whilst browsing, this intervention clicks all ads, concomitantly visualizing them over time. By “clicking ads so you don’t have to”, it addresses the lack of standards for tracking, privacy issues, user profiling and “excessive universal surveillance” (Nissenbaum, Howe, Zer-Aviv).
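
The obfuscation idea can be sketched schematically. The fragment below is not AdNauseam’s actual browser-extension code; the page URL and the ad-detection heuristic are invented. The point is the strategy: request every detected ad link so that the genuine click signal drowns in noise.

```python
# Schematic sketch of AdNauseam-style obfuscation: 'click' every ad
# link found on a page. The CSS class used to detect ads is an invented
# heuristic; the real extension hooks into Adblock Plus's filter lists.

import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup    # pip install beautifulsoup4

def click_all_ads(page_url):
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, 'html.parser')
    ad_links = [a['href'] for a in soup.select('a.ad[href]')]
    for href in ad_links:
        requests.get(urljoin(page_url, href), timeout=10)  # a meaningless click
    return len(ad_links)

print(f"obfuscated {click_all_ads('https://example.com')} ad clicks")
```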

Fig. 6. AdNauseam

The application of search algorithms to various calculation models, in advertising, risk calculation and the crawling of vast amounts of data, has been developed within the industry itself. One such innovation employs machine learning, with search algorithms that can handle dynamic, large-scale data variables ad infinitum, to enhance the credit scoring of the subprime population. The more signals feeding an expansive big data model, the better the ability to underwrite, that is, to determine the creditworthiness of an individual seeking financial support (insurance, a mortgage, credit, and so on).[5] Ostensibly the calculation is based on personal data, never knowing when to begin or, for that matter, where to stop with the collation of data, because “all data is credit data, we just do not know how to use it properly” (Merrill, “Alle Daten sind Kreditdaten”). Privacy is becoming more difficult to protect and anonymisation has almost become obsolete, as individual consent is reduced to ‘agreeing’ to the Terms of Service. Many consumers remain unaware of the specific nature of these data collection activities and do not exercise their rights to access, opt out of or delete ‘their’ data. With big data techniques, value resides not in the data’s primary purpose but rather in innovative, secondary uses that were not even imagined when it was first collected (Mayer-Schönberger and Cukier).
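
A toy version of such scoring might look as follows. The behavioural ‘signals’, the training data and the model are all invented for illustration; systems like the one Merrill describes reportedly ingest thousands of variables rather than five.

```python
# Illustrative sketch of 'all data is credit data': a logistic
# regression over miscellaneous behavioural signals instead of a
# traditional credit file. All features and values are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: [income, late payments, seconds spent on application form,
#           wrote in ALL CAPS (0/1), number of social connections]
X = np.array([
    [52000, 0, 310, 0, 480],
    [18000, 4,  45, 1,  60],
    [34000, 1, 150, 0, 220],
    [21000, 3,  60, 1,  90],
    [61000, 0, 400, 0, 530],
    [15000, 5,  30, 1,  40],
])
y = np.array([1, 0, 1, 0, 1, 0])     # 1 = repaid, 0 = defaulted

model = LogisticRegression(max_iter=1000).fit(X, y)
applicant = np.array([[28000, 2, 75, 1, 120]])
prob = model.predict_proba(applicant)[0, 1]
print(f'estimated repayment probability: {prob:.2f}')
```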

The issue is not just one of data’s contemporary use value but of its value as a future investment. The monetization of data is presently a $156-billion-a-year industry for the data brokers and the companies who trade in such commodities (Pasquale). Sometimes data is transacted at a few cents per name; insurance companies, for instance, use it to calculate premiums. For some brokers of personal data, pricing is based on the attributes of individual accounts, ranked high, medium or low in value; at present, the attributes concerning a person’s spending habits are valued highest.[6] Organisations such as the OECD are well aware of the value of data in the information economy and of the benefits and costs of disclosed and protected data (Acquisti 4). “As some put it, personal data will be the new ‘oil’ – a valuable resource in the 21st century. It will emerge as a new asset class touching all aspects of society.” (World Economic Forum 5) Data is the ‘raw’ material of business; markets will be created with this data (Mayer-Schönberger and Cukier). We are then the greatest asset, reflected in our personal data. Determining whether the ownership of this data belongs to the subject, who will have access to this ‘natural resource’, and how this raw material may be recycled remains an open-ended debate.

Fig. 7. Money Walks

In the recent article Money Walks: A Human-Centric Study on the Economics of Personal Mobile Data, the authors investigated the monetary value that participants assigned to different kinds of PII (Personally Identifiable Information) collected by their mobile phones, including location and communication information, focussing only on web browsing (Staiano et al 1). Over a period of 60 days, qualitative surveys were conducted along with analysis of behavioural attitudes towards sharing and the value attached to this activity. The data were divided into four categories, communications, apps, location and media, and then auctioned. The study found that communication data was more saleable than locative data, which accrues more value the greater the distance travelled. “Several participants also expressed that they did not want to be geolocalized and considered location information to be highly sensitive and personal” (Staiano et al 10). Concerning the economic valuation, some participants would allow access to their data only if they were well paid, while others, less concerned with privacy, would exchange it for a few cents.[7] “The overall median bid value in the study was x̃ = €2” (Staiano et al 9). In reality, however, people give away their data to companies all of the time. Notably, participants who exercised intentional control when disclosing personal information were more aware of the monetary value of their data. Another important conclusion that could be drawn from the study concerns trust. Participants were asked who they would trust to handle their information, ordering the entities from most to least trusted: individuals overwhelmingly trusted themselves the most with their personal data (.997), followed by banks (.537), telcos (.513), government (.49) and insurance companies (.46). The authors conclude by suggesting the adoption of a decentralised, user-centric architecture for personal data management (Staiano et al 8).
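
The study’s headline statistics are simple to restate in code. The bid values below are invented sample data chosen to yield the reported median of €2; the trust scores are the ones quoted above.

```python
# Median bid and trust ordering after Staiano et al. The bids are
# invented sample data (the paper reports only the median, EUR 2);
# the trust scores are those cited in the text.

import statistics

bids_in_eur = [0.10, 0.50, 1.00, 2.00, 2.00, 3.00, 10.00, 25.00]
print(f'median bid: EUR {statistics.median(bids_in_eur):.2f}')   # EUR 2.00

trust = {'self': 0.997, 'banks': 0.537, 'telcos': 0.513,
         'government': 0.49, 'insurance companies': 0.46}
for entity, score in sorted(trust.items(), key=lambda x: -x[1]):
    print(f'{entity}: {score}')
```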

Trading privacy for personalisation and convenience has become the default modus operandi, as the tools we use every day, from smartphones to search engines and websites, capture our personal data. This data is traded, reused, repurposed, auctioned off, sold and resold. Obviously our data has value to the many third parties who know how to use it, but who owns ‘our’ data? Whether we will be coerced into negotiating our rights to its retention, enact the “right to be forgotten”, or be forced to make a living selling our data instead of giving it away has yet to be determined. The question of what our data is actually worth to us remains open.


Fig. 8. Individual end users are at the center of diverse types of personal data

Notes

[1] Although AdWords is central to the monetization of search queries and is Google’s greatest source of revenue, it is beyond the scope of this short article to go into greater detail. However, the recently launched Google Contributor enables one to pay a monthly fee so as not to see ads, although at the moment it is by invitation only. <https://www.google.com/contributor/welcome/>

[2] In the latest update of Apple’s operating system, Yosemite, the default setting uploads users’ search terms in ‘Spotlight’ directly to Apple’s servers. If enabled, both ‘location services’ and ‘commonly searched terms’ are sent to Microsoft’s Bing. One solution is to download and install developer Landon Fuller’s Python script, which respects users’ privacy, as should have been the default in the first place, even when using Safari’s ‘Spotlight Suggestions’. <http://www.wired.com/2014/10/how-to-fix-os-x-yosemite-search/?mbid=social_fb>

[3] AdNauseam. <http://dhowe.github.io/AdNauseam/>

[4] Adblock Plus. <https://adblockplus.org/>

[5] Douglas Merrill (ZestFinance CEO) is a former Google CIO who previously worked for the RAND Corporation. <http://www.rand.org/>

[6] According to Datacoup, “The foundation of our pricing model is based on the individual data attributes within each account. When you connect an account, we check for each attribute within the account. If it’s available, then we factor that attribute into the final price. Based on initial conversations with many potential data purchasers, we’ve ranked data attributes as either high, medium or low value. As of writing, spending data attributes have the highest value in our pricing model.” <https://datacoup.com/docs#how-it-works>

[7] The total amount won by participants in the form of auction awards was € 262, which was paid in Amazon vouchers.

Works cited

Acquisti, Alessandro. “The Economics of Personal Data and the Economics of Privacy.” Organisation for Economic Co-operation and Development. Presented at Joint WPISP-WPIE (Working Party for Information Security and Privacy (WPISP); Working Party on the Information Economy (WPIE) Roundtable: “The Economics of Personal Data and Privacy: 30 Years after the OECD Privacy Guidelines” at OECD Conference Centre, 1 December 2010. Chapter 4, p.3. Web.

Böhm, Steffen & Land, Chris. “The new ‘hidden abode’: reflections on value and labour in the new economy”. The Sociological Review, Volume 60, Issue 2 (2012). Print.

Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fiber Optics. Cambridge: MIT Press, 2005. Print.

Deleuze, Gilles. “Postscript on the Societies of Control”. October 59 (Winter 1992). Web.

DeLillo, Don. “Das Kapital” Underworld. New York: Scribner, 1997. Print.

Feuz, Martin; Fuller, Matthew; Stalder, Felix. “Personal Web Searching in the Age of Semantic Capitalism: Diagnosing the Mechanisms of Personalisation”. First Monday 16.2 (February 2011). Web. <http://firstmonday.org/article/view/3344/2766>

Farago, Jason. “Her is the Scariest Movie of 2013”. New Republic. 2013. Web. <http://www.newrepublic.com/article/116063/spike-jonzes-her-scariest-movie-2013>

Foucault, Michel. Discipline & Punish: The Birth of the Prison. London: Vintage Books, 1975. Print.

Galloway, Alexander. Protocol: How Control Exists after Decentralization. Cambridge: MIT Press, 2006. Print.

Gibson, William. Neuromancer. New York: Ace Books, 1984. Print.

Hermosillo, Carmen a.k.a. Humdog. pandora’s vox: on community in cyberspace. 1994. Web. <https://gist.github.com/kolber/2131643>

Jonze, Spike, dir. Her. 2013. Film.

Lazzarato, Maurizio. “Conversation with Maurizio Lazzarato”, from “Exhausting Immaterial Labour in Performance”, Le Journal des Laboratoires and TkH Journal for Performing Arts Theory 17, Public Editing Session #3, June 23, 2010; published October 2010. Web.

Leach, Fiona. “The Quantified Self: Can Life Be Measured?” Analysis, 2014. Web.

MacBride, Robert. The Automated State: Computer Systems as a New Force in Society. Philadelphia: Chilton Book Co., 1967. Print.

Mayer-Schönberger, Viktor; Cukier, Kenneth. Big Data: A Revolution That Will Transform How We Live, Work, and Think. London: John Murray. 2013. Print.

Merrill, Douglas. “Alle Daten sind Kreditdaten, wir wissen nur noch nicht, wie wir sie richtig einsetzen” [“All data is credit data, we just don’t yet know how to use it properly”]. Web. <http://www.faz.net/aktuell/feuilleton/debatten/eine-neue-studie-ueber-kommerzielle-ueberwachung-13253649.html>

Mikians, Jakub; Gyarmati, László; Erramilli, Vijay; Laoutaris, Nikolaos. “Detecting Price and Search Discrimination on the Internet”. HotNets-XI: Proceedings of the 11th ACM Workshop on Hot Topics in Networks, 2012, pp. 79-84. Web.

Morozov, Evgeny. “The Rise of Data and the Death of Politics”. The Guardian, 20 July 2014. Web. <http://www.theguardian.com/technology/2014/jul/20/rise-of-data-death-of-politics-evgeny-morozov-algorithmic-regulation>

Nissenbaum, Helen; Howe, Daniel C.; Zer-Aviv, Mushon. AdNauseam. 2014. Web. <http://dhowe.github.io/AdNauseam/>

Page, Lawrence & Brin, Sergey. The Anatomy of a Large-Scale Hypertextual Web Search Engine (1999). Web. <http://infolab.stanford.edu/~backrub/google.html>

Pariser, Eli. The Filter Bubble. New York: Penguin Books, 2012. Print.

Pasquale, Frank. “The Dark Market for Personal Data”. New York Times, October 16, 2014. Web. <http://www.nytimes.com/2014/10/17/opinion/the-dark-market-for-personal-data.html?_r=0>

Rouvroy, Antoinette. “Algorithmic Governmentality and the End(s) of Critique,” Society of the Query #2. Web. <http://vimeo.com/79880601>

Rouvroy, Antoinette. “The end(s) of critique: Data behaviourism versus due process.” Privacy, due process and the computational turn: the philosophy of law meets the philosophy of technology. Mireille Hildebrandt and Katja de Vries (eds.) New York: Routledge, Taylor & Francis Group, 2013. Print.

Staiano, Jacopo; Oliver, Nuria; Lepri, Bruno; de Oliveira, Rodrigo; Caraviello, Michele; Sebe, Nicu. “Money Walks: A Human-Centric Study on the Economics of Personal Mobile Data”. ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2014). Web.

Sterling, Bruce. The Hacker Crackdown. New York: Bantam, 1992. Print.

Vaidhyanathan, Siva. The Googlization of Everything (And Why We Should Worry). Oakland: University of California Press, 2011. Print.

Wikipedia. “Google Search”. Web. <http://en.wikipedia.org/wiki/Google_Search>

World Economic Forum. “Personal Data: The Emergence of a New Asset Class”. Initiative of the World Economic Forum, in collaboration with Bain & Company, Inc. January 2011. Web.
