After wrapping Flot for Polymer I needed an element that would present a sparkline-style graph.
I made one and put it into a Gist along with a demo on how to use it.
Flot in Polymer
I was playing with Polymer at work, building a service monitor with it. At some point I needed charts and Flot seemed to be the simplest solution.
So after a little work I managed to wrap Flot for Polymer (on GitHub).
Making RabbitMQ Recover from (a)Mnesia
In the company I work for we’re using RabbitMQ to offload non-time-critical processing of tasks. To be able to recover in case RabbitMQ goes down, our queues are durable and all our messages are marked as persistent. We generally have a very low number of messages in flight at any moment in time. There’s just one queue with a decent amount of them: the “failed messages” dump.
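For illustration, declaring such a durable queue and publishing a persistent message looks roughly like this (a minimal pika sketch, not our actual code; the queue name and host are made up):

import pika

# Hypothetical example: a durable queue plus persistent messages, so both
# survive a broker restart.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# durable=True makes the queue definition itself survive a restart
channel.queue_declare(queue="tasks", durable=True)

# delivery_mode=2 marks the message as persistent, i.e. written to disk
channel.basic_publish(
    exchange="",
    routing_key="tasks",
    body="do this later",
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()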
The Problem
It so happens that after a botched update to the most recent version of RabbitMQ (3.5.3 at the time) our admins had to nuke the server and install it from scratch. They had made a backup of RabbitMQ’s Mnesia database and I was tasked to recover the messages from it.
This is the story of how I did it.
Since our RabbitMQ was configured to persist all the messages, this should generally be possible. Surely I wouldn’t be the first one to attempt this.
Looking through the Internet it seems there’s no way of exporting or importing a node’s configuration if it’s not running. I couldn’t find any documentation on how to import a Mnesia backup into a new node or how to extract data from it into a usable form.
The Idea
My idea was to set up a virtual machine (running Debian Wheezy) with RabbitMQ and then to somehow make it read, recover, and run the broken server’s database.
In the following you’ll see these placeholders:
- RABBITMQ_MNESIA_BASE will be /var/lib/rabbitmq/mnesia on Debian (see RabbitMQ’s file locations)
- RABBITMQ_MNESIA_DIR is just $RABBITMQ_MNESIA_BASE/$RABBITMQ_NODENAME
- BROKEN_NODENAME is the $RABBITMQ_NODENAME of the broken server we have backups from
- BROKEN_HOST is the hostname of said server
One more thing before we start: if I say “fix permissions” below I mean
sudo chown -R rabbitmq:rabbitmq $RABBITMQ_MNESIA_DIR
1st Try
My first try failed: I just copied the broken node’s Mnesia files to the VM’s $RABBITMQ_MNESIA_DIR. The files contained node names that RabbitMQ tried to reach but that were unreachable from the VM.
Error description:
   {could_not_start,rabbit,
       {{failed_to_cluster_with,['$BROKEN_NODENAME'],
            "Mnesia could not connect to any nodes."},
        {rabbit,start,[normal,[]]}}}
So I tried to be a little more picky about what I copied.
First I had to reset $RABBITMQ_MNESIA_DIR by deleting it and having RabbitMQ recreate it. (I needed to do this way too many times.)
sudo service rabbitmq-server stop
sudo rm -r $RABBITMQ_MNESIA_DIR
sudo service rabbitmq-server start
With RabbitMQ stopped, I tried to feed it the broken server’s data piecemeal. This time I copied only the
rabbit_*.[DCD,DCL]
files and restarted RabbitMQ.
Looking at the web management interface, all the queues we were missing were there, but they were “down” and clicking on one told you:
The object you clicked on was not found; it may have been deleted on the server.
Copying any more data didn’t solve the issue. So this was a dead end.
2nd Try
So I thought: why not have the RabbitMQ in the VM pretend to be the exact same node as the one on the broken server?
So I created a
/etc/rabbitmq/rabbitmq-env.conf
with
NODENAME=$BROKEN_NODENAME
in there.
I copied the backup to $RABBITMQ_MNESIA_DIR (now with the new node name) and fixed the permissions.
Now starting RabbitMQ failed with
ERROR: epmd error for host $BROKEN_HOST: nxdomain (non-existing domain)
I edited
/etc/hosts
to add $BROKEN_HOST to the list of names that resolve to 127.0.0.1.
Now restarting RabbitMQ failed with yet another error:
Error description:
   {could_not_start,rabbit,
       {{schema_integrity_check_failed,
            [{table_attributes_mismatch,rabbit_queue,
                 [name,durable,auto_delete,exclusive_owner,arguments,pid,
                  slave_pids,sync_slave_pids,recoverable_slaves,policy,
                  gm_pids,decorators,state],
                 [name,durable,auto_delete,exclusive_owner,arguments,pid,
                  slave_pids,sync_slave_pids,mirror_nodes,policy]}]},
        {rabbit,start,[normal,[]]}}}
Now what? Why don’t I try to give it the Mnesia files piece by piece again?
- Reset $RABBITMQ_MNESIA_DIR
- Stop RabbitMQ
- Copy the rabbit_* files in again and fix their permissions
- Start RabbitMQ
All our queues were back and all their configuration seemed OK as well. But we still didn’t have our messages back yet.
Solution
So I tried to copy more and more files over from the backup, repeating the above steps each time. I finally reached my goal after copying rabbit_*, msg_store_*, queues and recovery.dets. After fixing their permissions and starting RabbitMQ, it had all the queues restored with all the messages in them.
Now I could use ordinary methods to extract all the messages. Dumping all the messages and examining them, they looked OK. Publishing the recovered messages to the new server, I was pretty euphoric.
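The drain-and-republish step could look something like this (a sketch of those “ordinary methods”, not the actual script; hosts and the queue name are invented):

import pika

# Hypothetical sketch: read messages off the recovered node and publish them
# to the new server. Hosts and the queue name are placeholders.
src_conn = pika.BlockingConnection(pika.ConnectionParameters("recovered-vm"))
dst_conn = pika.BlockingConnection(pika.ConnectionParameters("new-server"))
src = src_conn.channel()
dst = dst_conn.channel()

while True:
    method, properties, body = src.basic_get(queue="failed_messages")
    if method is None:  # the queue is drained
        break
    dst.basic_publish(exchange="", routing_key="failed_messages",
                      body=body, properties=properties)
    src.basic_ack(method.delivery_tag)  # only ack after republishing

src_conn.close()
dst_conn.close()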
I Helped Find And Fix A Bug In RabbitMQ
Bottle Plugin Lifecycle
If you use Python’s Bottle micro-framework there’ll be a time when you’ll want to add custom plugins. To get a better feeling for what code gets executed when, I created a minimal Bottle app with a test plugin that logs what code gets executed. I used it to test both global and route-specific plugins.
When Python loads the module you’ll see that the plugins’ __init__() and setup() methods are called immediately when they are installed on the app or applied to a route. This happens in the order they appear in the code. Then the app is started.
The first time a route is called, Bottle executes the plugins’ apply() methods. This happens in “reversed order” of installation (which makes sense for a nested callback chain). This means the route-specific plugins get applied first, then the global ones. Their result is cached, i.e. only the inner/wrapped function is executed from here on out.
Then for every request the apply() method’s inner function is executed. This happens in the “original” order again.
Below you can see the code and example logs for two requests. You can also clone the Gist and do your own experiments.
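A minimal sketch of such a logging plugin (the exact code and logs are in the Gist; labels and routes here are just examples):

from bottle import Bottle

class LogPlugin(object):
    name = "log_plugin"
    api = 2  # use Bottle's newer plugin API

    def __init__(self, label):
        self.label = label
        print(self.label + ": __init__")

    def setup(self, app):
        # called by app.install()
        print(self.label + ": setup")

    def apply(self, callback, route):
        # called once per route, on its first request
        print(self.label + ": apply")

        def wrapper(*args, **kwargs):
            # runs on every request
            print(self.label + ": request")
            return callback(*args, **kwargs)

        return wrapper

app = Bottle()
app.install(LogPlugin("global"))

@app.route("/", apply=[LogPlugin("route")])
def index():
    return "hello"

app.run(host="localhost", port=8080)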
https://twitter.com/riyadpr/status/617681143538786304
Android Backup and Restore with ADB
Updating my OnePlus One recently to Cyanogen OS 12 I had to reset my phone a few times before everything ran smoothly … so I wrote a pair of scripts to help me copy things around.
It uses the Android SDK’s ADB tool to do the copying, since the Android File Transfer Tool for Mac is of laughably bad quality by Google’s standards.
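The basic idea is as simple as this (a hypothetical sketch, not the actual scripts; paths are examples):

import subprocess

# Hypothetical sketch of the idea behind the scripts: use adb pull/push to
# copy whole directories off the device and back again.
def backup(device_path, local_dir):
    subprocess.check_call(["adb", "pull", device_path, local_dir])

def restore(local_dir, device_path):
    subprocess.check_call(["adb", "push", local_dir, device_path])

backup("/sdcard/DCIM", "backup/DCIM")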
Update 2018-11-22:
Since the scripts became more sophisticated I moved them to a proper project on GitHub.
Synchronize directories between computers using rsync (and SSH)
MagicDict
If you write software in Python you’ll come to a point where you’re testing a piece of code that expects a more or less elaborate dictionary as an argument. As good software developers we want that code properly tested, but we want to use minimal fixtures to accomplish that.
So I was looking for something that behaves like a dictionary, that you can give explicit return values for specific keys, and that will give you some sort of “default” return value when you try to access an “unknown” item (I don’t care what, as long as no exception, e.g. a KeyError, is raised).
My first thought was “why not use MagicMock?” … it’s a useful tool in so many situations.
from mock import MagicMock

m = MagicMock(foo="bar")
But using MagicMock where dict is expected yields unexpected results.
>>> # this works as expected
>>> m.foo
'bar'
>>> # but this doesn't do what you'd expect
>>> m["foo"]
<MagicMock name='mock.__getitem__()' id='4396280016'>
First of all, attribute and item access are treated differently. You set up MagicMock using keyword arguments (i.e. “dict syntax”), but have to use attributes (i.e. “object syntax”) to access them.
Then I thought to myself “why not mess with the magic methods?” __getitem__ and __getattr__ expect the same arguments anyway. So this should work:
m = MagicMock(foo="bar")
m.__getitem__.side_effect = m.__getattr__
Well? …
>>> m.foo
'bar'
>>> m["foo"]
<MagicMock name='mock.foo' id='4554363920'>
… No!
By this time I thought “I can’t be the first one to need this” and started searching the docs, and sure enough they provide an example for this case.
d = dict(foo="bar")
m = MagicMock()
m.__getitem__.side_effect = d.__getitem__
Does it work? …
>>> m["foo"] 'bar' >>> m["bar"] Traceback (most recent call last): File "<stdin>", line 1, in <module> File ".../env/lib/python2.7/site-packages/mock.py", line 955, in __call__ return _mock_self._mock_call(*args, **kwargs) File ".../env/lib/python2.7/site-packages/mock.py", line 1018, in _mock_call ret_val = effect(*args, **kwargs) KeyError: 'bar'
Well, yes and no. It works as long as you only access those items that you have defined to be in the dictionary. If you try to access any “unknown” item you get a KeyError.
After trying out different things, the simplest way to accomplish what I set out to do seems to be sub-classing defaultdict.
from collections import defaultdict

class MagicDict(defaultdict):
    def __missing__(self, key):
        result = self[key] = MagicDict()
        return result
And? …
>>> m["foo"] 'bar' >>> m["bar"] defaultdict(None, {}) >>> m.foo Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'MagicDict' object has no attribute 'foo'
Indeed, it is. 😀
Well, not quite. There are still a few comfort features missing (e.g. a proper __repr__). The whole, improved and tested code can be found in this Gist:
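For example, the improved version might add an initializer and a proper __repr__ along these lines (a sketch of what those comfort features could look like; the Gist has the real, tested code):

from collections import defaultdict

class MagicDict(defaultdict):
    # Sketch only; the real, tested version lives in the Gist.
    def __init__(self, *args, **kwargs):
        # no default_factory needed, __missing__ does the work,
        # so you can write MagicDict(foo="bar") directly
        super(MagicDict, self).__init__(None, *args, **kwargs)

    def __missing__(self, key):
        result = self[key] = MagicDict()
        return result

    def __repr__(self):
        return "MagicDict(%r)" % dict(self)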
Spacegrey UI Theme for Sublime Text
Spacegrey – an awesome and really good-looking UI theme for Sublime Text.
Explaining Shell Commands
Let explainshell.com explain your shell commands and let it look up arguments and flags. 😀
I found this command-line magic gem some time ago and have been using it ever since.
I started using it for synchronizing directories between computers on the same network. But it felt kind of clunky and cumbersome to get the slashes right so that it wouldn’t nest the directories and copy everything. Since both the source and destination machines had the same basic directory layout, I thought “why not make it easier?” … e.g. like this:
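(A rough sketch of the idea; the actual script is in the Gist linked below.)

import os
import subprocess
import sys

# Sketch: given a path relative to $HOME and a remote host, sync the
# directory to the same place on the other machine. The trailing slashes
# make rsync copy the directory's *contents* instead of nesting it.
def sync(relative_path, remote_host):
    local = os.path.join(os.path.expanduser("~"), relative_path, "")
    remote = remote_host + ":" + relative_path + "/"
    subprocess.check_call(["rsync", "-avz", local, remote])

if __name__ == "__main__":
    sync(sys.argv[1], sys.argv[2])  # e.g.: sync.py Documents my-laptop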
It uses rsync for the heavy lifting but does the tedious source and destination mangling for you. 😀
You can find the code in this Gist.