Build Rsync for Android Yourself

To build rsync for Android you’ll need to have the Android NDK installed already.

Then clone the rsync for Android source (e.g. from LineageOS) …

git clone https://github.com/LineageOS/android_external_rsync.git
cd android_external_rsync
# checkout the most recent branch
git checkout cm-14.1

… create the missing jni/Application.mk build file (e.g. from this Gist) and adapt it to your case …
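For orientation, a minimal jni/Application.mk could look roughly like this (the ABI, platform level and build script path are assumptions; adapt them to your device and NDK version):

# jni/Application.mk - rough sketch, values are assumptions
APP_BUILD_SCRIPT := Android.mk      # the rsync sources ship an Android.mk in the repo root
APP_ABI          := armeabi-v7a
APP_PLATFORM     := android-21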

… and start the build with

export NDK_PROJECT_PATH=$(pwd)
ndk-build -d rsync

You’ll find your self-built rsync in obj/local/*/rsync.

Update 2017-10-06:

  • Updated sources from CyanogenMod to LineageOS.
  • Added links to the Gist and the Android NDK docs
  • Updated steps to work with up-to-date setups

If you get something like the following warnings and errors …

[...]
./flist.c:454:16: warning: implicit declaration of function 'major' is invalid in C99
      [-Wimplicit-function-declaration]
                        if ((uint32)major(rdev) == rdev_major)
                                    ^
./flist.c:458:41: warning: implicit declaration of function 'minor' is invalid in C99
      [-Wimplicit-function-declaration]
                        if (protocol_version < 30 && (uint32)minor(rdev) <= 0xFFu)
                                                             ^
./flist.c:467:11: warning: implicit declaration of function 'makedev' is invalid in C99
      [-Wimplicit-function-declaration]
                        rdev = MAKEDEV(major(rdev), 0);
                               ^
./rsync.h:446:36: note: expanded from macro 'MAKEDEV'
#define MAKEDEV(devmajor,devminor) makedev(devmajor,devminor)
                                   ^
3 warnings generated.
[...]
./flist.c:473: error: undefined reference to 'makedev'
./flist.c:454: error: undefined reference to 'major'
./flist.c:457: error: undefined reference to 'major'
./flist.c:458: error: undefined reference to 'minor'
./flist.c:467: error: undefined reference to 'major'
./flist.c:467: error: undefined reference to 'makedev'
./flist.c:617: error: undefined reference to 'major'
./flist.c:619: error: undefined reference to 'minor'
./flist.c:621: error: undefined reference to 'minor'
./flist.c:788: error: undefined reference to 'makedev'
./flist.c:869: error: undefined reference to 'makedev'
./flist.c:1027: error: undefined reference to 'minor'
clang++: error: linker command failed with exit code 1 (use -v to see invocation) 
make: *** [obj/local/armeabi-v7a/rsync] Error 1

… you probably need to update config.h and change

/* #undef MAJOR_IN_SYSMACROS */

to

#define MAJOR_IN_SYSMACROS 1

so that sys/sysmacros.h gets included and the major(), minor() and makedev() macros are defined.

CFSSL FTW

After reading how CloudFlare handles their PKI and that LetsEncrypt will use it, I wanted to give CFSSL a shot.

Reading the project’s documentation doesn’t really help with building your own CA, but searching the Internet I found Fernando Barillas’ blog explaining how to create your own root certificate and how to derive intermediate certificates from it.

I took it a step further and wrote a script that generates new certificates for several services with different intermediates and possibly different configurations (e.g. depending on your distro and services, certain ciphers (e.g. ones using ECC) may not be supported).
I also streamlined generating service-specific key, cert and chain files. 😀

Have a look at the full Gist or just the most interesting part:
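To give an idea of what renew-certs.sh automates, the basic CFSSL flow looks roughly like this (a sketch; the file names and profile names are placeholders, not the ones from the Gist):

# 1. generate the root CA
cfssl gencert -initca root-csr.json | cfssljson -bare root-ca

# 2. generate an intermediate CA signed by the root
#    (the chosen profile must allow issuing CA certificates)
cfssl gencert -ca root-ca.pem -ca-key root-ca-key.pem \
  -config ca-config.json -profile intermediate \
  intermediate-csr.json | cfssljson -bare intermediate-ca

# 3. generate a service certificate signed by the intermediate
cfssl gencert -ca intermediate-ca.pem -ca-key intermediate-ca-key.pem \
  -config ca-config.json -profile server \
  service-csr.json | cfssljson -bare my-service

# 4. build the chain file for the service
cat my-service.pem intermediate-ca.pem > my-service-chain.pem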

You’ll still have to deploy them yourself.

Update 2016-10-04:
Fixed some issues with this Gist.

  • Fixed a bug where intermediate CA certificates weren’t marked as CAs any more
  • Updated the example CSRs and the script so it can now be run without errors

Update 2017-10-08:

  • Cleaned up `renew-certs.sh` by extracting functions for generating root CA, intermediate CA and service keys.

A Service Monitor built with Polymer

I tried to build a service monitor having the following features:

  • showing the reachability of HTTP servers
  • plotting the number of messages in a specific RabbitMQ queue
  • plotting the number of queues with specific prefixes
  • showing the status of RabbitMQ queues, i.e. how many messages are in there? are there any consumers? are they hung?
  • showing the availability of certain Redis clients

Well, you can find the result on GitHub.
It uses two things I published before: polymer-flot and flot-sparklines. 😀

An example dashboard:

polymer-service-monitor screen shot

Bottle Plugin Lifecycle

If you use Python’s Bottle micro-framework there’ll come a time when you’ll want to add custom plugins. To get a better feeling for what code gets executed when, I created a minimal Bottle app with a test plugin that logs what code gets executed. I used it to test both global and route-specific plugins.

When Python loads the module you’ll see that the plugins’ __init__() and setup() methods will be called immediately when they are installed on the app or applied to the route. This happens in the order they appear in the code. Then the app is started.

The first time a route is called Bottle executes the plugins’ apply() methods. This happens in “reversed order” of installation (which makes sense for a nested callback chain). This means first the route-specific plugins get applied, then the global ones. Their result is cached, i.e. only the inner/wrapped function is executed from here on out.

Then for every request the apply() method’s inner function is executed. This happens in the “original” order again.

Below you can see the code and example logs for two requests. You can also clone the Gist and do your own experiments.
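To give an idea of the structure (class and route names here are made up for illustration, they are not the ones from the Gist), such a lifecycle-logging plugin could look roughly like this:

# rough sketch of a lifecycle-logging Bottle plugin
from bottle import Bottle, run

class LogPlugin(object):
    api = 2                               # use Bottle's plugin API version 2

    def __init__(self, name):
        self.name = name
        print(self.name + ": __init__()")

    def setup(self, app):                 # called when installed on the app
        print(self.name + ": setup()")

    def apply(self, callback, route):     # called lazily, once per route
        print(self.name + ": apply() for " + route.rule)

        def wrapper(*args, **kwargs):     # called on every request
            print(self.name + ": wrapped callback")
            return callback(*args, **kwargs)
        return wrapper

app = Bottle()
app.install(LogPlugin("global"))          # global plugin

@app.route("/", apply=[LogPlugin("route-specific")])
def index():
    return "Hello!"

run(app, host="localhost", port=8080)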

https://twitter.com/riyadpr/status/617681143538786304

Android Backup and Restore with ADB

Updating my OnePlus One to Cyanogen OS 12 recently, I had to reset my phone a few times before everything ran smoothly … so I wrote a pair of scripts to help me copy things around.

It uses the Android SDK’s ADB tool to do the copying, since the Android File Transfer tool for Mac is of laughably poor quality by Google’s standards.
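The copying itself essentially boils down to plain adb pull and adb push calls, e.g. (the paths are just examples):

# back up a directory from the phone ...
adb pull /sdcard/DCIM ./backup/DCIM
# ... and restore it after the reset
adb push ./backup/DCIM /sdcard/DCIM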

Update 2018-11-22:
Since the scripts became more sophisticated I moved them to a proper project on GitHub.

Synchronize directories between computers using rsync (and SSH)

https://twitter.com/climagic/status/363326283922419712

I found this command line magic gem some time ago and have been using it ever since.

I started using it for synchronizing directories between computers on the same network. But it felt kind of clunky, and it was cumbersome to get the slashes right so that it wouldn’t nest those directories and copy everything. Since both the source and destination machines had the same basic directory layout, I thought ‘why not make it easier?’ … e.g. like this:

sync-to other-pc ~/Documents
sync-to other-pc ~/Music --exclude '*.wav'
sync-from other-pc ~/Music --dry-run --delete

It uses rsync for the heavy lifting but does the tedious source and destination mangling for you. 😀
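Conceptually the wrapper only has to strip the local home prefix and re-add it on the remote side; a rough sketch of sync-to (simplified, not the actual Gist code):

#!/bin/bash
# sync-to: simplified sketch (assumptions, not the Gist's exact code)
host="$1"; dir="$2"; shift 2
# make the path relative to $HOME so it maps to the same place on the other machine
rel="${dir#$HOME/}"
# trailing slashes make rsync copy the directory's contents instead of nesting it
rsync -avP "$@" "$HOME/$rel/" "$host:$rel/"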

You can find the code in this Gist.

MagicDict

If you write software in Python you’ll come to a point where you’re testing a piece of code that expects a more or less elaborate dictionary as an argument. As good software developers we want that code properly tested, but we want to use minimal fixtures to accomplish that.

So, I was looking for something that behaves like a dictionary, that you can give explicit return values for specific keys, and that will give you some sort of “default” return value when you try to access an “unknown” item (I don’t care what, as long as no exception (e.g. KeyError) is raised).

My first thought was “why not use MagicMock?” … it’s a useful tool in so many situations.

from mock import MagicMock
m = MagicMock(foo="bar")

But using MagicMock where dict is expected yields unexpected results.

>>> # this works as expected
>>> m.foo
'bar'
>>> # but this doesn't do what you'd expect
>>> m["foo"]
<MagicMock name='mock.__getitem__()' id='4396280016'>

First of all, attribute and item access are treated differently. You set up MagicMock using keyword arguments (i.e. “dict syntax”), but have to use attributes (i.e. “object syntax”) to access them.

Then I thought to myself “why not mess with the magic methods?” __getitem__ and __getattr__ expect the same arguments anyway. So this should work:

m = MagicMock(foo="bar")
m.__getitem__.side_effect = m.__getattr__

Well? …

>>> m.foo
'bar'
>>> m["foo"]
<MagicMock name='mock.foo' id='4554363920'>

… No!

By this time I thought “I can’t be the first to need this” and started searching the docs, and sure enough they provide an example for this case.

d = dict(foo="bar")

m = MagicMock()
m.__getitem__.side_effect = d.__getitem__

Does it work? …

>>> m["foo"]
'bar'
>>> m["bar"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../env/lib/python2.7/site-packages/mock.py", line 955, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File ".../env/lib/python2.7/site-packages/mock.py", line 1018, in _mock_call
    ret_val = effect(*args, **kwargs)
KeyError: 'bar'

Well, yes and no. It works as long as you only access those items that you have defined to be in the dictionary. If you try to access any “unknown” item you get a KeyError.

After trying out different things, the simplest way to accomplish what I set out to do seems to be subclassing defaultdict.

from collections import defaultdict

class MagicDict(defaultdict):
    def __missing__(self, key):
        # unknown keys return (and cache) an empty MagicDict instead of raising a KeyError
        result = self[key] = MagicDict()
        return result

m = MagicDict(foo="bar")

And? …

>>> m["foo"]
'bar'
>>> m["bar"]
defaultdict(None, {})
>>> m.foo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'MagicDict' object has no attribute 'foo'

Indeed, it is. 😀

Well, not quite. There are still a few comfort features missing (e.g. a proper __repr__). The whole, improved and tested code can be found in this Gist:
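Just to sketch one of those comfort features (an illustration, not necessarily the Gist’s implementation), a __repr__ could look like this:

from collections import defaultdict

class MagicDict(defaultdict):
    def __missing__(self, key):
        result = self[key] = MagicDict()
        return result

    def __repr__(self):
        # show the class name and only the actual items
        return "%s(%s)" % (self.__class__.__name__, dict.__repr__(self))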