Individual Mass Manipulation

There is great commentary on how and why Facebook’s infamous “emotion study” is unethical, the main point being that the researchers and Facebook violated the “informed consent” principle of research on human subjects.

There have been other “individual mass manipulation” studies; for example, one showing that you could tip the outcome of close elections by manipulating search results. But manipulating the mood of people on a massive scale is “new.” Don’t get me wrong, I don’t mean it like “they try to influence what we’re thinking through TV and ads.” I mean individual manipulation: different things are manipulated in varying amounts for everyone individually … basically, anything that claims “to only show you the X most relevant to you” falls into this category (especially if it doesn’t offer a way out of the filter bubble).

But what should we do, now that we know we have the tools to engineer emotions? Why not actually press the “button of happiness”?

Imagine if Facebook could have a button which says “make the billion people who use Facebook each a little bit happier”. It’s quite hard to imagine a more effective, more powerful, cheaper way to make the world a little bit better than for that button to exist. I want them to be able to build the button of happiness. And then I want them to press it.

My dystopian senses tell me: it will be used, but not in the way suggested above. We can probably draw some conclusions from the fact that the work of one of the authors is funded by the DoD. Why would the DoD (or any military/government organization, for that matter) fund anything useful to the general good of mankind?

I see three use cases for manipulating emotions:

Or to put it more eloquently:

… large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams.
[…]
I identify this model of control as a Gramscian model of social control: one in which we are effectively micro-nudged into “desired behavior” as a means of societal control. Seduction, rather than fear and coercion are the currency, and as such, they are a lot more effective. (Yes, short of deep totalitarianism, legitimacy, consent and acquiescence are stronger models of control than fear and torture—there are things you cannot do well in a society defined by fear, and running a nicely-oiled capitalist market economy is one of them).

I think netzpolitik.org put it best in their conclusion (German):

The problem that these kinds of experiments, and the systems that actually enable them, pose is not that they are illegal, or creatively or intentionally evil. That isn’t the case, even if it might feel like it.
Instead, [the problem is] that they’re only a tiny step away from legitimate everyday practice. That they look a lot like ordinary ads. That they sit on top of an already-accepted construction of reality by non-transparent providers. That, because of their scale and stealth, they can be hidden so efficiently and easily. That they don’t create our loss of control, but merely exploit it.

The actual study: “Experimental evidence of massive-scale emotional contagion through social networks” (PDF)

Declaring People Terrorists so They Don’t Become One

Remarkable (para-)phrase attributed to French examining magistrate Marc Trévidic:

Declaring people terrorists (who are not) so they don’t become one.

This is to become the basis for new French “anti-terror” legislation.

I couldn’t find the original (French) quote, only a German translation of it.

Es gibt Leute, die man als Terroristen kennzeichnet, damit sie es nicht werden.

Which translates into something like

There are people who are branded terrorists so that they don’t become one.

Filming Passwords and Fingerprints with a Smartphone

Researchers at TU Berlin have found that smartphone front cameras now resolve well enough that passwords can be read off the reflections in a user’s eyes or glasses.

They also managed to film fingerprints with the rear camera as users reached for the device.

… it can also be seen as an addendum to this paper.

Patlabor 2

This movie was truly ahead of its time … living in the post-9/11 world, seeing that this plot is from 1993 gives me goose bumps. *shiver*

Anyway, one of the most beautiful scenes has two characters engage in a rather philosophical discussion, set against a very “dreamy” (almost hypnotic) visual and audio backdrop:

Arakawa:
What are you, the police officer, and I the JSDF officer, trying to defend? It’s been half a century since the last war. Neither you nor I have experienced a war. “Peace” … Peace is what we’re supposed to defend. But what is the peace of this city, this nation? The all-out war and the defeat. The US occupation policy. The Cold War under the nuclear umbrella and the proxy wars. And civil wars still go on in many nations of the world. Ethnic clash, military conflict. Blood-drenched economic prosperity created and sustained by those countless wars. That’s what’s behind our peace. Peace created by an indiscriminate fear of war. An unjust peace that is maintained by having the wars elsewhere, but we keep denying ourselves this truth.

Goto:
No matter how phony the peace may be, it’s our job to defend it. No matter how unjust it may be, it’s better than a just war.

Arakawa:
I understand how you hate “just wars.” Whoever said that word was never half decent. History is filled with people who fell from grace believing in that. But you know only too well that there isn’t much of a difference between a just war and an unjust peace. Ever since the word “Peace” became the excuse of liars, we lost our faith in peace. Just as war creates peace, peace also creates war. A make-believe peace that’s merely the period between two wars will eventually give way to real war. Have you ever thought about that?
While receiving the benefits of war, they’re hiding the truth behind the TV screen. Forgetting that they’re merely at the rear of the battle front … or rather pretending to forget about it. Such deceit will be punished sooner or later.

Schizophrenics Were Right, Probably, Maybe, Hopefully Not …

An interesting article on how schizophrenics’ delusions of being controlled by an outside power, or of living in a world crafted just for them, have become a real possibility for all of us – or “how reality caught up with paranoid delusions.” Given the advances in technology, its ubiquity, and the way we consume it, we all perceive an altered *cough* enriched and augmented version of the world around us. We silently ignore that this allows us to be toyed with and manipulated without necessarily noticing it.

This is an interesting phenomenon that is not widely known and mostly ignored. But the fact of the matter is: if you put two computers side by side, open a browser on each and search for the exact same thing, you won’t get the same list of results. The same happens on social networks: try looking for something that isn’t a person and compare the results and their order.
Search gurus will tell you this is the magic of “personalized results” and of finding the things “most interesting to you” … but what they don’t tell you is that this comes at the price of losing the ability to do a global, unbiased search.

Any search you do is biased: by the region you are accessing the internet from (continent, country, city), your browsing history, your search history, your language preferences, the time of day … basically, anything quantifiably different will alter your search results. You can’t (even if you try) do an unfiltered, repeatable, global search on the internet. And anything you click in those already tailored results will only reinforce your perceived “interest.”
Eli Pariser also talks about this in his “Beware online filter bubbles” TED talk, where he quotes Google’s Eric Schmidt:

It will be very hard for people to watch or consume something that has not in some sense been tailored for them.
Eric Schmidt, Google
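
To make the mechanism concrete, here is a minimal sketch (in Python) of how per-user signals could re-rank the very same underlying results differently for every person. All signal names, weights, and data below are invented for illustration; no search provider publishes its actual ranking function.

```python
# Hypothetical sketch: the same base results, re-ranked per user.
# Every signal name and weight here is made up for illustration.

base_results = [
    {"url": "example.org/a", "relevance": 0.90, "topics": {"politics"}},
    {"url": "example.org/b", "relevance": 0.85, "topics": {"sports"}},
    {"url": "example.org/c", "relevance": 0.80, "topics": {"politics", "local"}},
]

def personalized_score(result, user):
    score = result["relevance"]
    # Click history: boost topics the user has clicked before.
    score += 0.20 * len(result["topics"] & user["clicked_topics"])
    # Region: boost results tagged as local to the user's location.
    if "local" in result["topics"] and user["region"] == "Berlin":
        score += 0.10
    return score

def search(user):
    # Same query, same index -- but the order depends on the user.
    return sorted(base_results,
                  key=lambda r: personalized_score(r, user),
                  reverse=True)

alice = {"region": "Berlin", "clicked_topics": {"politics"}}
bob = {"region": "Paris", "clicked_topics": {"sports"}}

print([r["url"] for r in search(alice)])  # politics/local results first
print([r["url"] for r in search(bob)])    # sports result first
```

Note the feedback loop baked into even this toy version: every click feeds back into the user’s click history, so the “personalization” only ever reinforces itself.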

So, what would prevent any of those search providers from deliberately manipulating results? Actually, pretty much nothing. The amount of manipulation needed, say, to influence voter preferences in an already close election would probably be too small to be noticed, and it wouldn’t even be illegal. That is why people like Bruce Schneier demand regulation of the secret algorithms that have become part of our infrastructure.
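
As a back-of-the-envelope illustration of why such a nudge could stay invisible, here is a toy calculation; all the numbers are made up, only the arithmetic is real:

```python
# Toy model with invented numbers: how small a nudge needs to be
# to matter in a close election.
electorate = 10_000_000   # eligible voters
undecided = 0.10          # share still making up their minds
via_search = 0.80         # share of those who research candidates online
nudged = 0.02             # share actually swayed by a biased ranking

shifted = electorate * undecided * via_search * nudged
print(f"{shifted:,.0f} votes shifted")  # 16,000 -- decisive in a tight race
```

A two-percent sway among undecided online researchers would be lost in the noise of any turnout statistic, yet 16,000 votes can flip a race decided by a fraction of a percent.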

One thing is clear: it can’t stay the way it is now.

OK … enough dystopic thoughts for today. 😛

Update 2013-12-08:
Seems like reality caught up already. Case in point: South Korea.