Build Rsync for Android Yourself

To build rsync for Android you’ll need to have the Android NDK installed already.
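
If you don't have it yet, one way to get it is via the SDK's sdkmanager (a sketch; the package name and paths vary by SDK version and OS):

# install the NDK bundle via Android Studio's command line tools
sdkmanager ndk-bundle
# make sure ndk-build is on your PATH afterwards
export PATH="$HOME/Android/Sdk/ndk-bundle:$PATH"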

Then clone the rsync for Android source (e.g. from LineageOS) …

git clone https://github.com/LineageOS/android_external_rsync.git
cd android_external_rsync
# checkout the most recent branch
git checkout cm-14.1

… create the missing `jni/Application.mk` build file (e.g. from this Gist) and adapt it to your case …

APP_ABI := armeabi-v7a arm64-v8a
APP_OPTIM := release
APP_BUILD_SCRIPT := $(NDK_PROJECT_PATH)/Android.mk
APP_PLATFORM := android-29

… and start the build with

export NDK_PROJECT_PATH=$(pwd)
ndk-build -d rsync

You’ll find your self-built rsync in `obj/local/*/rsync`.
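
To give the binary a quick try on a device, you can push it somewhere executable (`/data/local/tmp` is the usual place on unrooted devices; the ABI directory depends on your `APP_ABI` setting):

adb push obj/local/arm64-v8a/rsync /data/local/tmp/
adb shell chmod 755 /data/local/tmp/rsync
adb shell /data/local/tmp/rsync --version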

Update 2017-10-06:

  • Updated sources from CyanogenMod to LineageOS.
  • Added links to the Gist and Android NDK docs
  • Updated steps to work with up-to-date setups

If you get something like the following warnings and errors …

[...]
./flist.c:454:16: warning: implicit declaration of function 'major' is invalid in C99
      [-Wimplicit-function-declaration]
                        if ((uint32)major(rdev) == rdev_major)
                                    ^
./flist.c:458:41: warning: implicit declaration of function 'minor' is invalid in C99
      [-Wimplicit-function-declaration]
                        if (protocol_version < 30 && (uint32)minor(rdev) <= 0xFFu)
                                                             ^
./flist.c:467:11: warning: implicit declaration of function 'makedev' is invalid in C99
      [-Wimplicit-function-declaration]
                        rdev = MAKEDEV(major(rdev), 0);
                               ^
./rsync.h:446:36: note: expanded from macro 'MAKEDEV'
#define MAKEDEV(devmajor,devminor) makedev(devmajor,devminor)
                                   ^
3 warnings generated.
[...]
./flist.c:473: error: undefined reference to 'makedev'
./flist.c:454: error: undefined reference to 'major'
./flist.c:457: error: undefined reference to 'major'
./flist.c:458: error: undefined reference to 'minor'
./flist.c:467: error: undefined reference to 'major'
./flist.c:467: error: undefined reference to 'makedev'
./flist.c:617: error: undefined reference to 'major'
./flist.c:619: error: undefined reference to 'minor'
./flist.c:621: error: undefined reference to 'minor'
./flist.c:788: error: undefined reference to 'makedev'
./flist.c:869: error: undefined reference to 'makedev'
./flist.c:1027: error: undefined reference to 'minor'
clang++: error: linker command failed with exit code 1 (use -v to see invocation) 
make: *** [obj/local/armeabi-v7a/rsync] Error 1

… you probably need to update `config.h` and change

/* #undef MAJOR_IN_SYSMACROS */

to

#define MAJOR_IN_SYSMACROS 1

CFSSL FTW

After reading how CloudFlare handles their PKI and that Let’s Encrypt will use it, I wanted to give CFSSL a shot.

Reading the project’s documentation doesn’t really help with building your own CA, but searching the Internet I found Fernando Barillas’ blog explaining how to create your own root certificate and how to create intermediate certificates from it.

I took it a step further and wrote a script that generates new certificates for several services with different intermediates and possibly different configurations (e.g. depending on your distro and services, certain ciphers (e.g. using ECC) may not be supported).
I also streamlined generating service-specific key, cert and chain files. 😀

Have a look at the full Gist or just the most interesting part:

#!/bin/bash
#
# Author: Riyad Preukschas <riyad@informatik.uni-bremen.de>
# License: MIT
#
# Generates root CA, intermediate CA and service keys.

function generate_root_ca() {
  local root_ca="$1"

  echo "Generating root CA into ${root_ca}/"
  [[ ! -d "${root_ca}" ]] && mkdir "${root_ca}"
  cfssl genkey -initca "${root_ca}_csr.json" | cfssljson -bare "${root_ca}/root"
}

function generate_intermediate_ca() {
  local root_ca="$1"
  local intermediate_ca="$2"

  echo "Generating intermediate CA into ${intermediate_ca}/ (for root CA ${root_ca}/)"
  [[ ! -d "${intermediate_ca}" ]] && mkdir "${intermediate_ca}"
  cfssl gencert -ca "${root_ca}/root.pem" -ca-key "${root_ca}/root-key.pem" -config="config.json" -profile="intermediate" "${intermediate_ca}_csr.json" | cfssljson -bare "${intermediate_ca}/intermediate"
}

function generate_service_keys() {
  local root_ca="$1"
  local intermediate_ca="$2"
  local service_type="$3"
  local key_name="$4"
  local dist_dir="${intermediate_ca}/dist"

  echo "Generating ${service_type} key pair into ${intermediate_ca}/"
  [[ ! -d "${dist_dir}" ]] && mkdir "${dist_dir}"
  cfssl gencert -ca "${intermediate_ca}/intermediate.pem" -ca-key "${intermediate_ca}/intermediate-key.pem" -config="config.json" -profile="server" "${key_name}_csr.json" | cfssljson -bare "${intermediate_ca}/${key_name}"
  cp "${intermediate_ca}/${key_name}.pem" "${dist_dir}/"
  cp "${intermediate_ca}/${key_name}-key.pem" "${dist_dir}/"

  case "${service_type}" in
    dovecot)
      cat "${intermediate_ca}/intermediate.pem" "${root_ca}/root.pem" > "${dist_dir}/${key_name}-cachain.pem"
      ;;
    nginx)
      cat "${intermediate_ca}/${key_name}.pem" "${intermediate_ca}/intermediate.pem" "${root_ca}/root.pem" > "${dist_dir}/${key_name}.pem"
      ;;
    postfix)
      cat "${intermediate_ca}/intermediate.pem" "${root_ca}/root.pem" > "${dist_dir}/${key_name}-cachain.pem"
      ;;
  esac
}

# Foo root CA
ROOT_CA="foo-root-ca"
#generate_root_ca "${ROOT_CA}"

# Some intermediate CA
generate_intermediate_ca "${ROOT_CA}" "some-intermediate-ca"
generate_service_keys "${ROOT_CA}" "some-intermediate-ca" "nginx" "foo-some-web"

# Another intermediate CA
generate_intermediate_ca "${ROOT_CA}" "another-intermediate-ca"
generate_service_keys "${ROOT_CA}" "another-intermediate-ca" "dovecot" "foo-another-imap"
generate_service_keys "${ROOT_CA}" "another-intermediate-ca" "postfix" "foo-another-smtp"

You’ll still have to deploy them yourself.
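
For reference: the script expects a `config.json` (for the `-config`/`-profile` flags) and one `*_csr.json` per CA or service key, which aren't shown above. Here's a minimal sketch of what they might look like; the expiry times, usages and names are assumptions you'd adapt to your setup:

# sketch of a config.json providing the "intermediate" and "server" profiles
cat > config.json <<'EOF'
{
  "signing": {
    "default": { "expiry": "8760h" },
    "profiles": {
      "intermediate": {
        "expiry": "43800h",
        "usages": ["cert sign", "crl sign"],
        "ca_constraint": { "is_ca": true, "max_path_len": 0 }
      },
      "server": {
        "expiry": "8760h",
        "usages": ["signing", "key encipherment", "server auth"]
      }
    }
  }
}
EOF

# sketch of a CSR description, e.g. foo-root-ca_csr.json
cat > foo-root-ca_csr.json <<'EOF'
{
  "CN": "Foo Root CA",
  "key": { "algo": "ecdsa", "size": 256 },
  "names": [{ "O": "Foo" }]
}
EOF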

Update 2016-10-04:
Fixed some issues with this Gist.

  • Fixed a bug where intermediate CA certificates weren’t marked as CAs any more
  • Updated the example CSRs and the script so it can now be run without errors

Update 2017-10-08:

  • Cleaned up `renew-certs.sh` by extracting functions for generating root CA, intermediate CA and service keys.

A Service Monitor built with Polymer

I tried to build a service monitor with the following features:

  • showing the reachability of HTTP servers
  • plotting the number of messages in a specific RabbitMQ queue
  • plotting the number of queues with specific prefixes
  • showing the status of RabbitMQ queues, i.e. how many messages are in there? Are there any consumers? Are they hung?
  • showing the availability of certain Redis clients

Well, you can find the result on GitHub.
It uses two things I published before: polymer-flot and flot-sparklines. 😀

An example dashboard:

polymer-service-monitor screen shot

Bottle Plugin Lifecycle

If you use Python’s Bottle micro-framework there’ll come a time when you’ll want to add custom plugins. To get a better feeling for what code gets executed when, I created a minimal Bottle app with a test plugin that logs what code gets executed. I used it to test both global and route-specific plugins.

When Python loads the module you’ll see that the plugins’ `__init__()` and `setup()` methods will be called immediately when they are installed on the app or applied to the route. This happens in the order they appear in the code. Then the app is started.

The first time a route is called Bottle executes the plugins’ `apply()` methods. This happens in “reversed order” of installation (which makes sense for a nested callback chain). This means first the route-specific plugins get applied, then the global ones, so the resulting callback is effectively `app_plugin1(app_plugin2(route_plugin1(route_plugin2(ping))))`. Their result is cached, i.e. only the inner/wrapped function is executed from here on out.

Then for every request the `apply()` method’s inner function is executed. This happens in the “original” order again.

Below you can see the code and example logs for two requests. You can also clone the Gist and do your own experiments.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import bottle
import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)

logger.info("module load")


class LifecycleTestPlugin(object):
    name = "lifecycle_test"
    api = 2

    def __init__(self, name):
        self.name = name
        logger.info("%s: plugin __init__", self.name)

    def setup(self, app):
        logger.info("%s: plugin setup", self.name)

    def apply(self, callback, context):
        logger.info("%s: plugin apply", self.name)

        def _wrapper(*args, **kwargs):
            logger.info("%s: plugin apply wrapper", self.name)
            return callback(*args, **kwargs)

        return _wrapper

    def close(self):
        logger.info("plugin close %s", self.name)


app = bottle.Bottle()

logger.info("installing plugins ...")
app.install(LifecycleTestPlugin("app_plugin1"))
app.install(LifecycleTestPlugin("app_plugin2"))


@app.get('/ping',
    apply=[
        LifecycleTestPlugin("route_plugin1"),
        LifecycleTestPlugin("route_plugin2"),
    ]
)
def ping():
    return 'pong'


if __name__ == "__main__":
    logger.info("start app ...")
    bottle.run(app, host="0.0.0.0", port="9000", reloader=True)
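
Running it and requesting the route twice (e.g. with `curl http://localhost:9000/ping`) produces the log below. The startup block appears twice because `reloader=True` makes Bottle spawn a child process, so the module is loaded once per process:
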
module load
installing plugins ...
app_plugin1: plugin __init__
app_plugin1: plugin setup
app_plugin2: plugin __init__
app_plugin2: plugin setup
route_plugin1: plugin __init__
route_plugin2: plugin __init__
start app ...
module load
installing plugins ...
app_plugin1: plugin __init__
app_plugin1: plugin setup
app_plugin2: plugin __init__
app_plugin2: plugin setup
route_plugin1: plugin __init__
route_plugin2: plugin __init__
start app ...
Bottle v0.12.8 server starting up (using WSGIRefServer())...
Listening on http://0.0.0.0:9000/
Hit Ctrl-C to quit.
route_plugin2: plugin apply
route_plugin1: plugin apply
app_plugin2: plugin apply
app_plugin1: plugin apply
app_plugin1: plugin apply wrapper
app_plugin2: plugin apply wrapper
route_plugin1: plugin apply wrapper
route_plugin2: plugin apply wrapper
127.0.0.1 - - [05/Jul/2015 14:07:25] "GET /ping HTTP/1.1" 200 4
app_plugin1: plugin apply wrapper
app_plugin2: plugin apply wrapper
route_plugin1: plugin apply wrapper
route_plugin2: plugin apply wrapper
127.0.0.1 - - [05/Jul/2015 14:07:28] "GET /ping HTTP/1.1" 200 4

https://twitter.com/riyadpr/status/617681143538786304

Android Backup and Restore with ADB

Updating my OnePlus One to Cyanogen OS 12 recently, I had to reset my phone a few times before everything ran smoothly … so I wrote a pair of scripts to help me copy things around.

It uses the Android SDK’s ADB tool to do the copying, since the Android File Transfer Tool for Mac is of laughable quality for Google’s standards.
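
The core idea is just recursive `adb pull`/`adb push` (heavily simplified here; paths are placeholders and the real scripts on GitHub do much more):

adb pull /sdcard/ ./phone-backup/    # backup: device -> computer
adb push ./phone-backup/ /sdcard/    # restore: computer -> device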

Update 2018-11-22:
Since the scripts became more sophisticated I moved them to a proper project on GitHub.

Synchronize directories between computers using rsync (and SSH)

https://twitter.com/climagic/status/363326283922419712

I found this command line magic gem some time ago and have been using it ever since.

I started using it for synchronizing directories between computers on the same network. But it felt kind of clunky, and it was cumbersome to get the slashes right so that it wouldn’t nest those directories and copy everything. Since both source and destination machines had the same basic directory layout, I thought ‘why not make it easier?’ … e.g. like this:

sync-to other-pc ~/Documents
sync-to other-pc ~/Music --exclude '*.wav'
sync-from other-pc ~/Music --dry-run --delete

It uses rsync for the heavy lifting but does the tedious source and destination mangling for you. 😀
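
For example, `sync-to other-pc ~/Documents` ends up running roughly this (with `~` expanded; the options and excludes are taken straight from the script below):

rsync --rsh="ssh" --partial --progress --archive --human-readable \
  --exclude='.DS_Store' --exclude='.localized' \
  /home/you/Documents/ other-pc:/home/you/Documents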

You can find the code in this Gist.

#!/usr/bin/env python3
#
# Author: Riyad Preukschas <riyad@informatik.uni-bremen.de>
# License: Mozilla Public License 2.0
#
# Synchronize directories between computers using rsync (and SSH).
#
# INSTALLATION:
# Save this script as something like `sync-to` somewhere in $PATH.
# Link it to `sync-from` in the same location. (i.e. `ln sync-to sync-from`)

import os
import re
import shlex
import subprocess
import sys

PROGRAM_NAME = os.path.basename(sys.argv[0])
RSYNC = 'rsync'
RSYNC_BASIC_OPTIONS = [
    '--rsh="ssh"', '--partial', '--progress', '--archive', '--human-readable']
RSYNC_EXCLUDE_PATTERNS = ['.DS_Store', '.localized']


#
# helpers
#

def print_usage_and_die():
    print(re.sub(r'^[ ]{8}', '',
        f"""
        Synchronize directories between computers using rsync (and SSH).

        Usage: {PROGRAM_NAME} HOST DIR [options]

          HOST  any host you'd use with SSH
          DIR   must be available on both the local and the remote machine

        Options:
          You can pass any options rsync accepts.
          -v, --verbose  will also print the command that'll be used to sync

        Examples:
          sync-to other-pc ~/Documents
          sync-to other-pc ~/Music --exclude '*.wav'
          sync-from other-pc ~/Music --dry-run --delete
        """.strip(),
        flags=re.MULTILINE
    ))
    exit(1)


def is_verbose():
    return any(
        re.search(r'^--verbose|-\w*v\w*$', arg) is not None
        for arg in sys.argv
    )


#
# main
#

def main():
    # parse options
    if len(sys.argv) < 3:
        print_usage_and_die()
    host = sys.argv[1]
    dir = sys.argv[2]
    rsync_excludes = [f"--exclude='{pattern}'" for pattern in RSYNC_EXCLUDE_PATTERNS]
    rsync_user_options = sys.argv[3:]

    if re.search(r'from$', PROGRAM_NAME):
        rsync_src_dest = [f"{host}:{dir}/", dir]
    elif re.search(r'to$', PROGRAM_NAME):
        rsync_src_dest = [f"{dir}/", f"{host}:{dir}"]
    else:
        print('Error: unknown command')
        print_usage_and_die()

    # copy
    exec_args = RSYNC_BASIC_OPTIONS + rsync_excludes + rsync_user_options + rsync_src_dest
    if is_verbose():
        print(f"{RSYNC} {' '.join(RSYNC_BASIC_OPTIONS + rsync_excludes)} {shlex.join(rsync_user_options + rsync_src_dest)}")
    # by convention argv[0] is the program name, so prepend RSYNC itself
    os.execvp(RSYNC, [RSYNC] + exec_args)
    #subprocess.run([RSYNC] + exec_args)


if __name__ == '__main__':
    main()

MagicDict

If you write software in Python you’ll come to a point where you’re testing a piece of code that expects a more or less elaborate dictionary as an argument to a function. As good software developers we want that code properly tested, but we want to use minimal fixtures to accomplish that.

So, I was looking for something that behaves like a dictionary, that you can give explicit return values for specific keys, and that will give you some sort of “default” return value when you try to access an “unknown” item (I don’t care what, as long as no exception (e.g. `KeyError`) is raised).

My first thought was “why not use MagicMock?” … it’s a useful tool in so many situations.

from mock import MagicMock
m = MagicMock(foo="bar")

But using MagicMock where a dict is expected yields unexpected results.

>>> # this works as expected
>>> m.foo
'bar'
>>> # but this doesn't do what you'd expect
>>> m["foo"]
<MagicMock name='mock.__getitem__()' id='4396280016'>

First of all, attribute and item access are treated differently. You set up MagicMock using keyword arguments (i.e. “dict syntax”), but have to use attributes (i.e. “object syntax”) to access them.

Then I thought to myself “why not mess with the magic methods?” `__getitem__` and `__getattr__` expect the same arguments anyway. So this should work:

m = MagicMock(foo="bar")
m.__getitem__.side_effect = m.__getattr__

Well? …

>>> m.foo
'bar'
>>> m["foo"]
<MagicMock name='mock.foo' id='4554363920'>

… No!

By this time I thought “I can’t be the first to need this” and started searching the docs, and sure enough they provide an example for this case.

d = dict(foo="bar")

m = MagicMock()
m.__getitem__.side_effect = d.__getitem__

Does it work? …

>>> m["foo"]
'bar'
>>> m["bar"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../env/lib/python2.7/site-packages/mock.py", line 955, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File ".../env/lib/python2.7/site-packages/mock.py", line 1018, in _mock_call
    ret_val = effect(*args, **kwargs)
KeyError: 'bar'

Well, yes and no. It works as long as you only access those items that you have defined to be in the dictionary. If you try to access any “unknown” item you get a `KeyError`.

After trying out different things, the simplest way to accomplish what I set out to do seems to be subclassing `defaultdict`.

from collections import defaultdict

class MagicDict(defaultdict):
    def __missing__(self, key):
        result = self[key] = MagicDict()
        return result

And? …

>>> m["foo"]
'bar'
>>> m["bar"]
defaultdict(None, {})
>>> m.foo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'MagicDict' object has no attribute 'foo'

Indeed, it is. 😀

Well, not quite. There are still a few comfort features missing (e.g. a proper `__repr__`). The whole, improved and tested code can be found in this Gist:

# -*- coding: utf-8 -*-
#
# Author: Riyad Preukschas <riyad@informatik.uni-bremen.de>
# License: Mozilla Public License 2.0
#

from collections import defaultdict


class MagicDict(defaultdict):
    def __init__(self, _name=None, **kwargs):
        super(MagicDict, self).__init__(**kwargs)
        self._name = _name

    def __missing__(self, key):
        name = "%s[\"%s\"]" % (
            (self._name if self._name is not None else "mock"),
            key
        )
        result = self[key] = MagicDict(_name=name)
        return result

    def __eq__(self, other):
        return self is other

    def __ne__(self, other):
        return not self == other

    def __repr__(self):
        """Overridden to mimic the output of mock.MagicMock"""
        if self._name is not None:
            name_string = " name='%s'" % self._name
        else:
            name_string = ""
        return "<%s%s id='%s'>" % (
            type(self).__name__,
            name_string,
            id(self)
        )