What’s so new about RC cars? Oh … these are real, “off-the-shelf” cars, the kind people actually ride in … and they can be hijacked by anybody with the same cellular provider … over the Internet, no direct access required … o.O … WIRED has a piece.
Messenger apps show your friends’ online status. Anytime you open the app, it notifies the service that you’re “online” at the moment, and everybody else can see that in their contact lists.
And by everybody I mean anybody! If you have someone’s phone number, you can check that person’s online status as often as you want, from wherever you want (no need to be friends or anything).
That’s exactly what a group of researchers at the Friedrich-Alexander-Universität Erlangen-Nürnberg did. They used this “feature” to “find out how frequently and how long users spent with their popular messenger,” tracking a random sample of 1,000 people in different countries for over eight months.
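To make the monitoring idea concrete, here is a rough Python sketch. All names here are my own; `is_online` is a hypothetical stand-in for whatever presence query a given messenger actually exposes, since none of the services document one:

```python
import time

def is_online(phone_number):
    """Hypothetical placeholder: real messengers answer this query for
    any phone number, friend or not. That is the whole problem."""
    raise NotImplementedError

def poll(phone_number, interval=10, rounds=6):
    """Sample a user's presence every `interval` seconds."""
    samples = []
    for _ in range(rounds):
        try:
            online = is_online(phone_number)
        except NotImplementedError:
            online = False  # no real API is wired up in this sketch
        samples.append((time.time(), online))
        time.sleep(interval)
    return samples

def sessions(samples):
    """Collapse (timestamp, online) samples into (start, end) usage
    sessions. Months of these yield the profiles the study built."""
    result, start = [], None
    for ts, online in samples:
        if online and start is None:
            start = ts
        elif not online and start is not None:
            result.append((start, ts))
            start = None
    if start is not None:
        result.append((start, samples[-1][0]))
    return result
```

From the session list alone you can read off sleep rhythms, work hours, and who is chatting with whom (correlated online times), which is exactly what the researchers demonstrate.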
Looking through the project’s website should make it clear how little the creators of those apps care …
Moreover, we were able to run our monitoring solution against the WhatsApp services from July 2013 to April 2014 without any interruption. Although we monitored personal information of thousands of users for several months — and thus strongly deviated from normal user behaviour — our monitoring efforts were not inhibited in any way.
… and that they don’t want you to be able to care.
Unfortunately, affected messenger services (like WhatsApp, Telegram, etc.) currently provide no option for disabling access to a user’s “online” status. Even WhatsApp’s newly introduced privacy controls fail to prevent online status tracking, as users still cannot opt-out of disclosing their availability to anonymous parties.
Researchers seem to have found a way to tell apart students who will do well in computer science classes from those who won’t. More eloquently put, they’ve devised a way “[to] separate programming sheep from non-programming goats.” 😀
And they come to an interesting conclusion:
Formal logical proofs, and therefore programs – formal logical proofs that particular computations are possible, expressed in a formal system called a programming language – are utterly meaningless. To write a computer program you have to come to terms with this, to accept that whatever you might want the program to mean, the machine will blindly follow its meaningless rules and come to some meaningless conclusion. In the test the consistent group showed a pre-acceptance of this fact: they are capable of seeing mathematical calculation problems in terms of rules, and can follow those rules wheresoever they may lead. The inconsistent group, on the other hand, looks for meaning where it is not. The blank group knows that it is looking at meaninglessness, and refuses to deal with it.
— Saeed Dehnadi and Richard Bornat, 2006, “The camel has two humps (working title)”
I have accepted it. -.-
In 1972 the Club of Rome commissioned a study on growth trends in world population, industrialisation, pollution, food production, and resource depletion, which was eventually published as a book called “The Limits to Growth.” The authors simulated different scenarios of what would happen until 2100, depending on whether humanity takes decisive action on environmental and resource issues. Forty years later, the world pretty much matches the worst-case scenario.
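The “overshoot and collapse” dynamic behind those scenarios can be shown with a toy simulation. To be clear, this is my own minimal sketch of the qualitative mechanism (exponential growth drawing on a finite stock), not the World3 model the study actually used:

```python
def simulate(years=300, pop=1.0, stock=500.0, growth=0.03):
    """Toy overshoot-and-collapse run: each year the population consumes
    one unit of a non-renewable resource per capita, grows while its
    needs are met, and shrinks once the stock runs out."""
    history = []
    for year in range(years):
        demand = pop
        harvest = min(stock, demand)
        stock -= harvest
        if harvest >= demand:
            pop *= 1 + growth   # needs met: keep growing
        else:
            pop *= 0.9          # scarcity: decline
        history.append((year, round(pop, 2), round(stock, 2)))
    return history
```

Growth looks perfectly healthy right up until the stock is gone, which is the uncomfortable point the book makes: the curves give little warning before the turn.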
There is great commentary on how and why Facebook’s infamous “emotion study” is unethical. The main point: the researchers and Facebook violated the “informed consent” principle of research on human subjects.
There have been other “individual mass manipulation” studies; e.g., you could tip the outcome of close elections by manipulating search results. But manipulating people’s mood on a massive scale is “new.” Don’t get me wrong, I don’t mean it like “they try to influence what we’re thinking through TV and ads.” I mean individual manipulation: different things are manipulated in varying amounts for everyone individually … basically, anything that claims “to only show you the X most relevant to you” falls into this category (especially if there is no way out of the filter bubble).
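The mood-biasing mechanism is simpler than it may sound. Here is a minimal sketch of a per-user “relevance” ranking with a hidden sentiment weight; the names, word lists, and scoring are made up for illustration, not taken from the study:

```python
POSITIVE = {"great", "happy", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def sentiment(post):
    """Crude word-list sentiment score: positive minus negative hits."""
    words = set(post.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def rank_feed(posts, relevance, mood_bias=0.0):
    """Order posts by 'relevance', nudged per user by a hidden mood_bias.

    mood_bias > 0 favours positive posts, < 0 favours negative ones.
    The user only ever sees 'the most relevant' ordering and cannot
    tell that the weighting differs from person to person.
    """
    return sorted(posts,
                  key=lambda p: relevance(p) + mood_bias * sentiment(p),
                  reverse=True)
```

Flip the sign of `mood_bias` for half the users and you have, in miniature, the structure of the Facebook experiment: same feed, same “relevance” claim, different emotional diet per person.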
But what should we do, now that we know we have the tools to engineer emotions? Why not actually press the “button of happiness”?
Imagine if Facebook could have a button which says “make the billion people who use Facebook each a little bit happier”. It’s quite hard to imagine a more effective, more powerful, cheaper way to make the world a little bit better than for that button to exist. I want them to be able to build the button of happiness. And then I want them to press it.
My dystopian senses tell me: it will be used, but not in the way suggested above. We can probably draw some conclusions from the fact that one of the authors’ work is funded by the DoD. Why would the DoD (or any military/government organization for that matter) fund anything useful to the general good of mankind?
I see three use cases for manipulating emotions:
- “Protecting” friendly governments from “civil unrest” either by manipulating search results in favor of a friendly faction or by discrediting the opposing faction with false information.
- Trying to “topple” unfriendly governments.
- Driving individuals into depression and/or suicide.
Or to put it more eloquently:
… large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams.
I identify this model of control as a Gramscian model of social control: one in which we are effectively micro-nudged into “desired behavior” as a means of societal control. Seduction, rather than fear and coercion are the currency, and as such, they are a lot more effective. (Yes, short of deep totalitarianism, legitimacy, consent and acquiescence are stronger models of control than fear and torture—there are things you cannot do well in a society defined by fear, and running a nicely-oiled capitalist market economy is one of them).
I think netzpolitik.org put it best in their conclusion (German):
The problem posed by these kinds of experiments, and by the systems that enable them in the first place, is not that they are illegal or deliberately, intentionally evil. That isn’t the case, even if it might feel like it.
Instead, [the problem is] that they’re only a tiny step away from legitimate everyday practice. That they look a lot like ordinary ads. That they sit on top of an already-accepted construction of reality by non-transparent providers. That, because of their scale and stealth, they can be hidden so efficiently and easily. That they didn’t invent our loss of control, but merely exploit it.
The actual study: “Experimental evidence of massive-scale emotional contagion through social networks” (PDF)
All “Galaxy compatible” headphones work with a Nexus 5. 😉
I’ve actually tried the following headphones:
- Samsung EHS64
- Sennheiser MM 30G
Both the call/mute and the volume buttons work.
Researchers at TU Berlin have found that smartphone front cameras resolve so well that passwords can be read from their reflections in the user’s eyes or glasses.
They also managed to film fingerprints with the rear camera while the user reaches for the device.
… you can also see it as a follow-up to this paper.
Interesting chart about the lexical distance among the languages of Europe.