Limits to Growth

In 1972 the Club of Rome commissioned a study on growth trends in world population, industrialisation, pollution, food production, and resource depletion, eventually published as the book “The Limits to Growth.” The authors simulated different scenarios projecting developments up to 2100, depending on whether or not humanity takes decisive action on environmental and resource issues. Forty years later, the world pretty much matches the worst-case prediction.

Someone Ate This

Someone Ate This is a great collection of bad taste and horrific laziness in food preparation … 😀

http://someoneatethis.tumblr.com/post/86554707126/yes-all-things-i-associate-with-a-kraft-single-on

http://someoneatethis.tumblr.com/post/87446400439/if-youre-confused-about-whether-to-eat-this-with

http://someoneatethis.tumblr.com/post/86339378489/i-appreciate-the-attempt-at-fancy-plating-but-it

http://someoneatethis.tumblr.com/post/86317788836/crap-i-forgot-i-invited-30-people-over-for-a

http://someoneatethis.tumblr.com/post/86647480714/ugghhhh-kill-it-with-fire

http://someoneatethis.tumblr.com/post/94776189565/i-kinda-feel-like-someone-made-this-just-to-get-on

Less “Social Media,” More Passive Data Collection, Yay!

Foursquare had a great idea:

  • remove the social aspect of sharing, just track people silently all the time, it’s easier anyway
  • why bother with user-generated content, just have them follow “experts” and feed them ads dressed up as tips

Among the great features of the revamped app are:

  • tracking your location all the time
  • virtually no privacy controls
  • virtually no way to interact
  • suggestions based almost solely on “expert opinions and tips” (read: paid advertisements)
  • promise of more targeted ads outside of Foursquare

ArsTechnica has a nice quote on this:

This is the cleverest portion of the service’s revamp: make customers feel like they are sharing nothing, when in reality they are sharing everything. Passive information sharing and collection without the social friction—why didn’t anyone think of this before? The tragic, realistic answer is most likely “battery life.”
— Casey Johnston, ArsTechnica

Individual Mass Manipulation

There is great commentary on how and why Facebook’s infamous “emotion study” is unethical, the main point being that the researchers and Facebook violated the “informed consent” principle of research on human subjects.

There have been other “individual mass manipulation” studies, e.g. showing that you could tip the outcome of close elections by manipulating search results. But manipulating people’s mood on a massive scale is “new.” To be clear, I don’t mean this in the sense of “they try to influence what we’re thinking through TV and ads.” I mean individual manipulation: different things are manipulated, in varying amounts, for everyone individually … basically anything that claims to “only show you the X most relevant to you” falls into this category (especially if it doesn’t offer a way out of the filter bubble).
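To make the “individually tailored” part concrete, here is a toy sketch of per-user relevance filtering. This is not Facebook’s (or anyone’s) actual system; the names Item, score, feed and the mood_bias knob are made up for illustration. The point is only that each user sees a differently filtered slice of reality, and whoever controls the scoring function can quietly tilt it.

```python
# Toy illustration of per-user "relevance" filtering (hypothetical, not any real platform's code).
# Each user only ever sees the top-k items scored against their own profile;
# whoever controls the scoring function controls, individually, what gets seen.

from dataclasses import dataclass


@dataclass
class Item:
    text: str
    mood: float  # -1.0 very negative … +1.0 very positive


def score(item: Item, user_profile: dict, mood_bias: float = 0.0) -> float:
    """Relevance = overlap with the user's interests, plus an invisible mood bias."""
    interest = sum(user_profile.get(word, 0.0) for word in item.text.lower().split())
    return interest + mood_bias * item.mood


def feed(items: list[Item], user_profile: dict, k: int = 3, mood_bias: float = 0.0) -> list[Item]:
    """Return only the k "most relevant" items; everything else silently disappears."""
    return sorted(items, key=lambda it: score(it, user_profile, mood_bias), reverse=True)[:k]


if __name__ == "__main__":
    items = [
        Item("great news about your favorite band", +0.8),
        Item("terrible accident downtown", -0.9),
        Item("new cat cafe opens", +0.5),
        Item("layoffs announced at local factory", -0.7),
    ]
    alice = {"band": 1.0, "cat": 0.5}
    # Same user, same items - only the hidden mood_bias knob differs:
    print([it.text for it in feed(items, alice, k=2, mood_bias=+1.0)])  # upbeat feed
    print([it.text for it in feed(items, alice, k=2, mood_bias=-1.0)])  # gloomy feed
```

Run it and the same user gets either the two upbeat items or the two gloomy ones, without any visible change on their end, which is essentially what the emotional-contagion experiment exploited.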

But what should we do, now that we know we have the tools to manipulate emotions? Why not actually press the “button of happiness”?

Imagine if Facebook could have a button which says “make the billion people who use Facebook each a little bit happier”. It’s quite hard to imagine a more effective, more powerful, cheaper way to make the world a little bit better than for that button to exist. I want them to be able to build the button of happiness. And then I want them to press it.

My dystopian senses tell me: it will be used, but not in the way suggested above. We can probably draw some conclusions from the fact that the work of one of the authors is funded by the DoD. Why would the DoD (or any military/government organization, for that matter) fund anything useful to the general good of mankind?

I see three use cases for manipulating emotions:

Or to put it more eloquently:

… large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams.
[…]
I identify this model of control as a Gramscian model of social control: one in which we are effectively micro-nudged into “desired behavior” as a means of societal control. Seduction, rather than fear and coercion are the currency, and as such, they are a lot more effective. (Yes, short of deep totalitarianism, legitimacy, consent and acquiescence are stronger models of control than fear and torture—there are things you cannot do well in a society defined by fear, and running a nicely-oiled capitalist market economy is one of them).

I think netzpolitik.org put it best in their conclusion (German):

The problem posed by these kinds of experiments, and by the systems that actually enable them, is not that they are illegal, or creatively or intentionally evil. This isn’t the case, even if it might feel like it.
Instead [the problem is] that they’re only a tiny step away from legitimate everyday practice. That they look a lot like ordinary ads. That they sit on top of an already-accepted construction of reality by non-transparent providers. That because of their scale and stealth they can be hidden so efficiently and easily. That they don’t create our loss of control, but merely exploit it.

The actual study: “Experimental evidence of massive-scale emotional contagion through social networks” (PDF)

They Used to Say That About Content

Facebook wants you to help them optimize their ads. You’re supposed to tell them which ones you like or dislike so they can replace the ones you didn’t like with others you might “like more.” … This seems so bizarre … In essence, Facebook is telling you to curate their ad stream for you the way you curate your own content stream. In doing so they blurt out things like

giving people more control over the ads they see

and

show you the ads that are most relevant to you

Is it just me, or is this exactly the way they used to talk about content?!? o.O