Tag: Dystopia
Dracula Untold, For A Good Reason
I like good stories and came across Dracula Untold. I didn’t like it very much. Maybe it’s because of my heightened sensitivity to anti-Islamic racism. Or maybe it’s because the movie’s main theme seems to be that it’s OK to join the forces of evil as long as your intention is to protect your family and your country … if that makes sense to you, it doesn’t to me.
They accomplish this by twisting the historical context with regard to time and place, the persons involved, and the loyalties they had. They also try to convey that Evil is not something despicable in itself, but a tool to be used by the powers in charge.
I assume you’ve seen the movie and can relate the following facts to the plot and the characters.
My first pain point is the movie’s extremely distorted “Vlad” and “Mehmet” figures. They are created by mixing together Vlad II …
- actually ruling in 1442
- but Wallachia, not Transylvania
- vested into the Order of the Dragon
- “made a treaty with the Ottomans insuring that he would give them annual tribute, as well as sending Wallachian boys to them yearly to be trained for service in their armies“
- left his two sons Vlad and Radu with the Ottomans
… and Vlad III.
- was called the “Prince of Wallachia”
- who was later called the “Impaler”
- grew up as a political captive under the Ottomans (together with his brother Radu)
- Radu had a friendship with Mehmet II, not Vlad
- had a personal hatred for Radu and Mehmet
- known for The Night Attack
- he is often characterized as a tyrant who took sadistic pleasure in torturing and killing his enemies
And by mixing together Murad II …
- actually ruling in 1442
- tried to establish Ottoman-friendly rulers in Wallachia
… and Mehmet II.
- actually conquered Wallachia, but 20 years later
- known for the Conquest of Constantinople
My second and more general pain point is the movie’s morals, which are strange to say the least. :/ Among those seem to be:
- pacting with the devil is OK, as long as it’s against Muslims
- choosing to become a monster is alright, as long as you can protect your family and your country
- you can do whatever you like to your enemies (especially using torture or excessively cruel ways of killing), as long as you’re good-looking
- you can both be a pious Christian and a henchman of the Devil
- being “the son of the devil” is a source of pride
- revenge is good
- prominent characters in western literature must be made to fight Muslims
- Muslims must be defeated, even if you have to rewrite history
I find this extremely troubling. o_O
What to Make of Pegida
I think Mr. Theisen gets it quite right. 😀

Still, the “whole thing” is very strange. :/
The French speaker on stage cannot imagine “that there are racists among us,” while a choir director from Würzburg wants to shoot all Muslims. Contradictions like these go unnoticed at Pegida. There, the flag of the far-right “German Defense League” flutters peacefully not far from the Israeli one in the Dresden evening wind. Across the whole catalog of citizen outrage there is something for everyone to shout “Jawoll!” about – including the exact opposite. The main thing is that something is bad and someone else is to blame.
— Telepolis
But one thing is clear: Germany has a massive racism problem!
And for everyone for whom all this is too much text, there is istdasabendlandschonislamisiert.de. 😀
Edja Snodow
The stories surrounding this Edja Snodow read like the mirror image of a world you wouldn’t want to live in.
https://twitter.com/riyadpr/status/543043457766653952
Government agents ‘directly involved’ in most high-profile US terror plots
Human Rights Watch has examined about 500 U.S. terrorism-related trials and came to a “shocking” conclusion.
- 18% of those cases are “tenuous” “material support” charges (e.g. “providing military gear to al-Qaida” actually means having “waterproof socks” in your luggage)
- another 30% are “sting” operations, where government agents played a significant role in inciting, planning, supplying, preparing for execution, and finally making the arrest
So this means that nearly half (48%) of the cases where they were “confident” enough to even go to trial fall flat on closer inspection. :/
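The arithmetic behind that share is simple enough to sketch (the 500-trial total and the two percentages are the figures quoted above; everything else is just illustration):

```python
# Back-of-the-envelope check of the shares cited above.
total_cases = 500          # trials examined by Human Rights Watch
material_support = 0.18    # tenuous "material support" charges
sting_operations = 0.30    # government-driven "sting" operations

questionable = material_support + sting_operations
print(f"combined share: {questionable:.0%}")                   # → 48%
print(f"approx. cases:  {round(total_cases * questionable)}")  # → 240
```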
Impostors, Sadists and Immoral Lunatics
It’s sad that it takes more than 120 scholars to refute a bunch of lunatics. As we all know, ISIL stands for “Impostors, Sadists and Immoral Lunatics.”
https://twitter.com/riyadpr/status/518732469089353729
Hacked or Haunted?
https://twitter.com/moonpolysoft/status/510993148302983168
Limits to Growth
In 1972 the Club of Rome commissioned a study of growth trends in world population, industrialisation, pollution, food production, and resource depletion, which was eventually published as the book “The Limits to Growth.” The authors simulated different scenarios predicting what would happen up to 2100, depending on whether humanity takes decisive action on environmental and resource issues. Forty years later, the world pretty much matches the worst-case prediction.
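The basic feedback loop the study simulated can be sketched in a few lines. This is not the actual World3 model, just a deliberately minimal toy with invented parameters: population grows while non-renewable resources are plentiful, then collapses as they run out.

```python
# Toy "limits to growth" dynamics (illustrative only, not World3):
# population growth slows as resources deplete and turns negative
# once more than half of them are gone.

def simulate(years=130, pop=1.0, resources=100.0,
             growth=0.03, use_per_capita=0.5):
    """One step per year; returns a list of (population, resources)."""
    history = []
    for _ in range(years):
        scarcity = resources / 100.0  # 1.0 = plentiful, 0.0 = exhausted
        pop = max(pop * (1 + growth * (2 * scarcity - 1)), 0.0)
        resources = max(resources - use_per_capita * pop, 0.0)
        history.append((pop, resources))
    return history

run = simulate()
pops = [p for p, _ in run]
print(f"peak population:  {max(pops):.2f} (year {pops.index(max(pops))})")
print(f"final population: {pops[-1]:.2f}")
```

Even this crude sketch reproduces the study’s qualitative shape: overshoot followed by decline, with the timing controlled entirely by the resource stock.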
Next Future Terrifying Technology Will Blow Your Mind
An awesome talk with Bruce Schneier and Julian Sanchez, asking just the right questions and giving an eye-opening view into what is possible even with today’s technology.
https://youtu.be/JbQeABIoO6A
Individual Mass Manipulation
There is great commentary on how and why Facebook’s infamous “emotion study” is unethical. The main point is that the researchers and Facebook violated the “informed consent” principle of research on human subjects.
There have been other “individual mass manipulation” studies; e.g. you could tip the outcome of close elections by manipulating search results. But manipulating the mood of people on a massive scale is “new.” Don’t get confused: I don’t mean it like “they try to influence what we’re thinking through TV and ads.” I mean individual manipulation: different things are manipulated, in varying amounts, for everyone individually … basically anything that claims “to only show you the X most relevant to you” falls into this category (especially if there is no way out of the filter bubble).
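To make the “only show you the X most relevant to you” pattern concrete, here is a toy sketch of how the same ranking machinery that personalizes a feed can quietly carry a per-user mood bias. All names, fields, and weights are invented for illustration; no real platform’s API is implied.

```python
# Toy personalized-feed ranking with a hidden sentiment bias
# (hypothetical names and weights, for illustration only).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    relevance: float   # 0..1, how well it matches the user's interests
    sentiment: float   # -1 (negative) .. +1 (positive)

def rank_feed(posts, n, mood_bias=0.0):
    """Return the top-n posts. mood_bias > 0 quietly favors positive
    posts, mood_bias < 0 favors negative ones -- invisible to the user,
    who still sees a plausibly 'relevant' feed."""
    score = lambda p: p.relevance + mood_bias * p.sentiment
    return sorted(posts, key=score, reverse=True)[:n]

posts = [
    Post("good news", 0.6, +0.9),
    Post("bad news", 0.7, -0.9),
    Post("neutral note", 0.5, 0.0),
]
print([p.text for p in rank_feed(posts, 2)])                 # → ['bad news', 'good news']
print([p.text for p in rank_feed(posts, 2, mood_bias=0.3)])  # → ['good news', 'neutral note']
```

The point of the sketch: a single hidden parameter changes what each user sees without any visible change to the product, which is exactly what makes this kind of manipulation hard to detect from the outside.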
But what should we do, now that we know we have the tools to manipulate emotions? Why not actually press the “button of happiness”?
Imagine if Facebook could have a button which says “make the billion people who use Facebook each a little bit happier”. It’s quite hard to imagine a more effective, more powerful, cheaper way to make the world a little bit better than for that button to exist. I want them to be able to build the button of happiness. And then I want them to press it.
My dystopian senses tell me: it will be used, but not in the way suggested above. We can probably draw some conclusions from the fact that one of the authors’ work is funded by the DoD. Why would the DoD (or any military/government organization for that matter) fund anything useful to the general good of mankind?
I see three use cases for manipulating emotions:
- “Protecting” friendly governments from “civil unrest” either by manipulating search results in favor of a friendly faction or by discrediting the opposing faction with false information.
- Trying to “topple” unfriendly governments
- Driving individuals into depression and/or suicide
Or to put it more eloquently:
… large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams.
[…]
I identify this model of control as a Gramscian model of social control: one in which we are effectively micro-nudged into “desired behavior” as a means of societal control. Seduction, rather than fear and coercion are the currency, and as such, they are a lot more effective. (Yes, short of deep totalitarianism, legitimacy, consent and acquiescence are stronger models of control than fear and torture—there are things you cannot do well in a society defined by fear, and running a nicely-oiled capitalist market economy is one of them).
I think netzpolitik.org put it best in their conclusion (German):
The problem that these kinds of experiments and the systems that actually enable them pose is not that they are illegal, creatively or intentionally evil. This isn’t the case even if it might feel like it.
Instead [the problem is] that they’re only a tiny step away from legitimate everyday practice. That they look a lot like ordinary ads. That they sit on top of an already-accepted construction of reality by non-transparent providers. That because of their scale and stealth they can be hidden so efficiently and easily. That they don’t bring about our loss of control, but only exploit it.
The actual study: “Experimental evidence of massive-scale emotional contagion through social networks” (PDF)