Posts Tagged ‘privacy’

“Assessing the value of anonymous communication online”

April 6, 2012

PowerPoint and outline of recent invited talk, “Assessing the value of anonymous communication online” by Robert Bodle, PhD (USC), Associate Professor of Communication and New Media Studies, College of Mount St. Joseph.


  • “We are all Khaled Said” – anonymity as political freedom
  • Facebook wants anonymity to go away
    • real name only policy
    • sociotechnical design
    • culture and condition for sharing
  • fb’s arguments for upholding fixed user identity
    • safety
    • the civilizing effect
    • market incentives
  • double edged attributes of anonymity
    • prevents accountability
    • disinhibition
    • depersonalization
  • human rights dimensions of anonymity
    • privacy
    • freedom of assembly
    • freedom of expression
  • democratic freedoms and anonymous speech
    • tolerate offensive speech
    • free speech, free press
    • encourage uninhibited, robust and wide open speech
    • avoid elements of an authoritarian regime (Bollinger)
  • human rights and democratic freedoms online and offline/assert the conditions for sharing
    • pseudonyms
    • anonymous communication
    • privacy
    • freedom from surveillance

Monetizing Social Media: The conditions for sharing

November 18, 2011

A presentation for the Critical and Cultural Studies Division, “Voices for Sale: Monetizing Social Media” National Communication Association, New Orleans, LA (November 2011), organized by Christopher M. Boulton, University of Massachusetts, Amherst; with Kathleen M. Kuehn, Christopher Newport University; and James Hamilton, University of Georgia.

Monetizing Social Media: The conditions for sharing by Robert Bodle, College of Mount St. Joseph

I have been researching, writing, and talking about the conditions for sharing on Google, Facebook, and other online spaces and services for some time.

Mostly, I have been interrogating the tacit agreement between Internet companies and users: that we get wonderful applications for free, though not really for free, but at a cost.

This cost, as many of us already know, is the disclosure of our personal information (and the sale of that info to advertisers).

But so what? Why does this matter?

I was worried about this question, especially when I first began to cover this tacit agreement in my new media and society classes.

After covering at length how our information is shared online, many of my students were somewhat concerned but mostly appreciative of the benefits of Google, Facebook, and other online services, regardless of how they function.

Today, I am happy to see that my students are becoming more aware of the conditions of sharing on SNSs without my help; a freshman recently told me that she was repelled by the “creep factor” of FB’s “Real-Time Activity” feature in the upper right corner of her Facebook wall. Hooray!

Recent polls conducted by the Pew Internet and American Life Project indicate that people across all demographic groups are very concerned about their privacy, bearing out my anecdote.

Perhaps this is why there are rumors circulating in the industry press that Facebook will start to make its privacy changes opt-in, instead of opt out (by default).
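The opt-in versus opt-out distinction can be made concrete with a toy sketch. This is hypothetical code, not Facebook's actual settings model; the point is only that for the many users who never touch a setting, the default regime is the policy.

```python
# Toy illustration: under opt-out, a sharing feature is on unless the user
# disables it; under opt-in, it stays off until the user enables it.
def effective_setting(user_choice, regime):
    """user_choice is True/False if the user acted, None if they never did."""
    if user_choice is not None:
        return user_choice
    # For silent users, the default regime decides.
    return regime == "opt-out"

print(effective_setting(None, "opt-out"))  # sharing enabled by default
print(effective_setting(None, "opt-in"))   # sharing disabled by default
```

Either way, an explicit user choice wins; the regimes differ only in what happens when no choice is made.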

Yet, while we’re becoming more concerned about how our information may be used, it is still difficult to get the full picture, and of course, this is intentional.

To get a fuller picture of the conditions of our sharing, I ask: how are we sharing? What is being shared? What are the market incentives and the consequences?

I cover “the how” at length in another article, Regimes of Sharing.

But quickly: in order to participate in the main currents of social and political life, many of us are participating on social media sites whether we are actually on these sites or not (e.g., “Liking” an op-ed on . . .).

And while we are sharing with one another, and moving from site to site, we are also sharing with our social media services, advertisers, and other third-party websites, both within and outside of social media spaces. (Technically, this is done with the help of cookies and open APIs.)

What was once surreptitious gathering of our data, beneath the awareness of users and without our consent (e.g., Beacon),

is now freely given through social plug-ins, such as the “Like” button.

Whenever we “Like” something using FB’s Like button on websites outside of Facebook, we share this preference,

which is combined with much more personally identifiable data, including a running log of our web browsing session that tracks the “date, time and web address of the webpage you’ve clicked to . . . IP address, screen resolution, operating system and browser version” for 90 days.
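The mechanism described above can be sketched roughly. The class and field names below are hypothetical, not Facebook's actual implementation; the sketch shows only that merely loading an embedded button is enough to report the visited page, tied to an identity cookie, back to the button's provider, with the log trimmed to a 90-day retention window.

```python
from datetime import datetime, timedelta, timezone

class SocialNetworkTracker:
    """Simulates server-side logging triggered by an embedded widget."""
    RETENTION = timedelta(days=90)  # retention window mentioned in the post

    def __init__(self):
        self.browsing_log = []  # one entry per widget load, keyed by cookie

    def handle_widget_request(self, cookie_id, referer_url, ip, user_agent):
        # Even without a click, loading the button reports the visited page.
        self.browsing_log.append({
            "user": cookie_id,       # identity cookie set when the user logged in
            "page": referer_url,     # the third-party page being read
            "ip": ip,
            "user_agent": user_agent,
            "time": datetime.now(timezone.utc),
        })

    def pages_seen_by(self, cookie_id):
        # Return pages this cookie was seen on within the retention window.
        cutoff = datetime.now(timezone.utc) - self.RETENTION
        return [e["page"] for e in self.browsing_log
                if e["user"] == cookie_id and e["time"] >= cutoff]

tracker = SocialNetworkTracker()
# The same identity cookie shows up on every site that embeds the widget,
# so unrelated visits are stitched into one browsing history.
tracker.handle_widget_request("cookie-abc", "https://news.example/op-ed",
                              "203.0.113.7", "Mozilla/5.0")
tracker.handle_widget_request("cookie-abc", "https://shop.example/shoes",
                              "203.0.113.7", "Mozilla/5.0")
print(tracker.pages_seen_by("cookie-abc"))
```

The design point is that the user never sends anything to the tracker directly; the embedding pages do it on the user's behalf, which is why the process is invisible.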

What is not clearly known is how much of this information is being shared with Facebook’s 300 million third-party websites, including powerful yet little-known marketing and data-processing firms like Acxiom Corporation (the largest data-mining company in the world).

Again, the obvious answer as to why this information is shared is the profit motive: our social labor is being converted into financial value for companies. Some have suggested that we should not only get the use of online spaces but also get a cut. I would argue that we should at least have the choice, because

The condition for use on social media sites and cloud services is that we submit to surveillance, monitoring, and targeted advertising for “personalized web experiences.”

My central claim is that we need to change the conditions for sharing: granular control, transparency about how our information is used, and security measures to protect our data.

Yet, we might still ask, “what’s the big deal?”

There are intended and unintended consequences.

Among the intended consequences are that our online history is used to serve relevant and interesting news and information, advertisements, and money saving deals that coincide with our past interests.

Another intended consequence is that in turning our information into a product (making us the product), Facebook takes away what Tavani calls our informational privacy, or “control over one’s daily activities, personal lifestyle, finances, medical history, and academic achievement stored and transmitted over ICTs.”

This loss of control over one’s own information deprives us of the freedom to make informed decisions on our own behalf.

When we lose the ability to control our information, we lose our autonomy and self-determination.

In political economy terms, this loss of control over our information reconfigures social relations between social media sites and individuals

establishing a power imbalance,

where we become more dependent and vulnerable to the SNS, and to the intended and unintended consequences such as government access to our data, and the implications of a personalized Web . . .

As our information is shared with third parties to serve ads inside and outside of Facebook, it helps to construct

what Eli Pariser calls the Filter Bubble or the personalization of content,

Through a human/algorithmic hybridization, our past clicks help rank most of what we see online, including:

our friends’ feeds, news stories, and search queries.
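This kind of click-history personalization can be illustrated with a toy ranker. This is not any real platform's algorithm; it only shows the structural effect: items whose topics match past clicks rise, so the feed drifts toward what the user already engages with.

```python
from collections import Counter

def personalize(items, click_history):
    """Rank items by overlap between item topics and past-click topics."""
    interest = Counter(t for clicked in click_history for t in clicked["topics"])
    def score(item):
        # Counter returns 0 for unseen topics, so unfamiliar items sink.
        return sum(interest[t] for t in item["topics"])
    return sorted(items, key=score, reverse=True)

history = [{"topics": ["politics-left"]},
           {"topics": ["politics-left", "tech"]}]
feed = [
    {"title": "Opposing view op-ed", "topics": ["politics-right"]},
    {"title": "Gadget review", "topics": ["tech"]},
    {"title": "Partisan take", "topics": ["politics-left"]},
]
for item in personalize(feed, history):
    print(item["title"])
# The dissenting piece sinks to the bottom of the feed.
```

Each click reinforces the weights that produced it, which is the feedback loop behind the filter bubble.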

This personalization makes us four times more likely to click on a link (Fast Company), which is the intended consequence, but

it also helps “to shape the information diets of its users” (Novey, July 1st, 2011).

The problem is that this process is invisible; we’re not aware it is going on because we don’t see it happening, and ultimately,

it limits our exposure to different points of view, enabling entrenched political polarization and preventing real consensus, critical thinking, tolerance of diversity, and appreciation of our irreducible differences in society.

My question, then, is this: by the time we all come around to what is happening, when navigating the Web feels like exploring our own subconscious (as in Being John Malkovich), will the conditions for sharing matter to us?

Thank you.

Cloud computing can rein in generativity, reducing its subversive potential

July 22, 2009

Zittrain’s op-ed, on a topic I’ve written about recently (waiting for editors to review), applies his generativity argument to reasons why we should worry about the cloud from a development perspective. Issues we should worry about include privacy, lack of control over our data, and lack of functionality (preventing the freedom to innovate). However, third parties are not mentioned, and they pose an increasing privacy risk on sites like Facebook, where over 950,000 application developers access user data for secondary purposes (see: Facebook needs to improve privacy practices, investigation finds).

The chief worry is that our computing and content will exist in an environment controlled by a cabal of “gated cloud communities,” providing platforms that discriminate against developers, “hindering revolutionary software.” Zittrain’s recommendations for a better cloud environment include:

  • requiring companies, under fair-practices law, to allow users to access and erase their digital dossiers
  • requiring companies to adopt more secure communication practices and password protections
  • requiring companies to keep their word about how users can use content sold and accessed online (in the cloud)
  • applying regulatory requirements, with governments or independent judiciaries demanding better safeguards for data held in the cloud
  • providing a “subtle set of incentives . . . tax breaks and liability relief”

Zittrain’s most emphatic point, again, is the generativity argument. Cloud-computing environments controlled by “mighty incumbents” like Google, Apple, and Facebook are gated. That is, they prevent the freedom to develop applications for these sites and services, thereby controlling their uses and reining in the radical potential of ICT innovation. When we fight against poor applications and wonder why there aren’t better ones that enable more interoperability and more syndication features, it’s due to a closed “cloud-computing infrastructure” that prevents them.


Filtering PCs in China, and monitoring the filtering

June 17, 2009

EFF reports: “The Chinese Ministry of Industry and IT’s announcement that all PCs sold in China must include government-approved filtering software is a profoundly worrying development for online privacy and free speech in that country.”

The software called “Green Dam Youth Escort” would be able to “collect IM and email conversations, install keyloggers, relay microphone and webcam recordings. It could prevent or detect the use of web proxies (the primary method of Chinese citizens seeking an uncensored Internet), and scan for privacy-protecting software like Tor and PGP.”

“Herdict Web” – Berkman Center for Internet and Society’s  tool for “tracking global web (in)accessibility” is now available in Mandarin.

Not sure if this tool will be able to monitor the new filtering by PCs, but if Herdict Web itself is filtered, how will Chinese users know what they are not getting access to?

Is the government spying on Americans? Yes, yes they are…

October 13, 2008

NSA whistleblowers blow the lid off a program that violates US citizens’ privacy. “Intercept operators” have come forward about their jobs spying on Americans, including listening in on “pillow talk between US military officers and their spouses.” President Bush assured us that they only listened in on phone calls that involved al Qaeda; it turns out they were instructed to listen to and store everything. Egregious: an indictment of all “who have been saying that the executive branch can be trusted with surveillance powers that are essentially unchecked,” said Jameel Jaffer, director of the national security program at the American Civil Liberties Union.

Government data mining an exercise in futility for war on terror

October 13, 2008

Data mining and behavioral surveillance technologies currently being used by federal agencies to identify potential terrorists have been found to be “too unreliable to be of any real value” and “not feasible,” according to a new 376-page report, “Protecting Individual Privacy in the Struggle Against Terrorists,” issued by the National Research Council (NRC).

Interesting to note the failure of data mining for identifying terrorists when it is driving online ad-targeting practices (ad serving) based on behavioral advertising methods (DoubleClick). Maybe terrorists give up far less data than regular online consumers, and thus cannot generate a rich profile.

New paper predicts a storm of unprecedented and invasive ISP surveillance

September 9, 2008

Excerpt from: Ohm, Paul, “The Rise and Fall of Invasive ISP Surveillance” (August 30, 2008). Available here.

“Nothing in society poses as grave a threat to privacy as the Internet Service Provider (ISP). ISPs carry their users’ conversations, secrets, relationships, acts, and omissions. Until the very recent past, they had left most of these alone because they had lacked the tools to spy invasively, but with recent advances in eavesdropping technology, they can now spy on people in unprecedented ways. Meanwhile, advertisers and copyright owners have been tempting them to put their users’ secrets up for sale, and judging from a recent flurry of reports, ISPs are giving in to the temptation and experimenting with new forms of spying. This is only the leading edge of a coming storm of unprecedented and invasive ISP surveillance.”

Companies can surveil your web browsing

August 4, 2008

Web Filtering Moves to the Cloud