Usefulness of Swap Explained

Chris Down explains how swap’s main role is to be the missing backing store for anonymous pages (i.e. memory allocated via malloc). All other kinds of data (e.g. pages backed by files) can be reclaimed easily and reloaded later, because their “source of truth” lives elsewhere. There is no such source for anonymous pages, so they can “never” be reclaimed unless swap space is available, even if those pages aren’t “hot”.

Linux has historically had poor swap (and by extension OOM) handling with few and imprecise means for configuration. Chris describes the behavior of a machine with and without swap in different scenarios of memory contention. He thinks that poor swap performance is caused by having a poor measure of “memory pressure.” He explains how work on cgroup v2 might give the kernel (and thus admins) better measures of memory pressure and better knobs for dealing with it.

Moving LXD Containers From One Pool to Another

When I started playing with LXD I just accepted the default storage configuration which creates an image file and uses that to initialize a ZFS pool. Since I’m using ZFS as my main file system this seemed silly as LXD can use an existing dataset as a source for a storage pool. So I wanted to migrate my existing containers to the new storage pool.

Although others seemed to have the same problem, there was no ready answer. Digging through the documentation I finally found out that the lxc move command has a -s option … and I had an idea. Here’s what I came up with …

Preparation

First we create the dataset on the existing ZFS pool and add it to LXC.
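Roughly, assuming the existing ZFS pool is called mypool (matching the zfs list command used later) and the new LXD storage pool is named pool2, this boils down to:

    # create a dedicated dataset on the existing pool …
    zfs create mypool/lxd
    # … and register it with LXD as a new storage pool backed by that dataset
    lxc storage create pool2 zfs source=mypool/lxd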

lxc storage list should show something like this now:
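Roughly like the following, with columns abridged and the sources and “Used By” counts purely illustrative:

    +-------+--------+------------------------------+---------+
    | NAME  | DRIVER | SOURCE                       | USED BY |
    +-------+--------+------------------------------+---------+
    | pool1 | zfs    | /var/lib/lxd/disks/pool1.img | 3       |
    | pool2 | zfs    | mypool/lxd                   | 0       |
    +-------+--------+------------------------------+---------+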

pool1 is the old pool backed by the image file and, as the “Used By” column shows, is currently used by some containers. pool2 has been added but isn’t used by any containers yet.

Moving

We now try to move our containers to pool2.
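Based on the -s option found above, the move for a single container might look roughly like this (the container name is a placeholder, and the temporary rename is needed because a container can’t be moved onto its own name):

    # stop the container, move it to pool2 under a temporary name,
    # rename it back and start it again
    lxc stop mycontainer
    lxc move mycontainer mycontainer-tmp -s pool2
    lxc move mycontainer-tmp mycontainer
    lxc start mycontainer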

We can check with  lxc storage list whether we succeeded.

Indeed, pool2 is being used now. Just to be sure we check that zfs list -r mypool/lxd also reflects this.

Awesome!

⚠ Note that this only moves the container, not the LXC image it was cloned from.

We can repeat this until all containers we care about are moved over to pool2.

Cleanup

To prevent new containers from using pool1 we have to edit the default profile.
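One way to do that, assuming the goal is to point the default profile’s root disk device at pool2, is:

    # either edit the profile interactively …
    lxc profile edit default
    # … or change the root disk's pool directly
    lxc profile device set default root pool pool2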

Finally … when we’re happy with the migration and have verified that everything works as expected, we can remove pool1.
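Assuming the “Used By” column for pool1 is empty by now, that should be just:

    # fails if anything still references pool1
    lxc storage delete pool1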

 

Backup And Restore Your Android Phone With ADB (And rsync)

Based on my previous scripts and inspired by two blog posts that I stumbled upon, I tackled the “backup all my apps, settings and data” problem for my Android devices again. The “new” solutions both use rsync instead of adb pull for file transfers. They both use ADB to start an rsync daemon on the device, forward its port to localhost and run rsync against it from the host.
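In rough terms the pattern looks like this (the port numbers, module name and config path are my own illustrative choices, not taken from either post):

    # start an rsync daemon on the phone; rsyncd.conf defines a module
    # (here assumed to be called "root") pointing at the paths to back up
    adb shell rsync --daemon --no-detach --config=/data/local/tmp/rsyncd.conf --port=1873
    # in a second terminal: tunnel the daemon port to localhost and sync against it
    adb forward tcp:6010 tcp:1873
    rsync -av rsync://localhost:6010/root/sdcard/ ./backup/sdcard/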

Simon’s solution assumes your phone already has rsync (e.g. because you run CyanogenMod) and that it can become root via adb root. It clones all files from the phone (minus /dev, /sys, /proc etc.). He also configures udev to start the backup automatically when the phone is plugged in.

pts solves the setup without necessarily becoming root. He also has a way of providing an rsync binary to phones that don’t ship one (e.g. when running OxygenOS), and a few tricks for debugging the rsync daemon setup on the phone.

I’ve tried to combine both methods. My approach doesn’t require adb or rsync to be run as root. It’ll use the system’s rsync when available or temporarily upload and use a backup one extracted from Cyanogen OS (for my OnePlus One). Android won’t allow you to chmod +x a file uploaded to /sdcard, but in /data/local/tmp it works.
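So when no usable rsync is present, the fallback upload boils down to something like this (the binary name is a placeholder):

    # /sdcard is mounted noexec, but /data/local/tmp allows executables
    adb push rsync.bin /data/local/tmp/rsync
    adb shell chmod 755 /data/local/tmp/rsync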

The scripts currently only back up and restore your /sdcard directory. Assuming you’re also using something like Titanium Backup you’ll be able to back up and restore all your apps, settings and data. To reduce the amount of data to copy they use rsync filters to exclude caches and other files that you definitely don’t want synced (.DS_Store files, anyone?).
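A hypothetical excerpt of such a filter file (the concrete patterns in the scripts may differ) could look like this, passed to rsync via something like --filter='merge backup.filter':

    # rsync filter rules: a leading "- " excludes matching files/directories
    - .DS_Store
    - Thumbs.db
    - /Android/data/*/cache/
    - .thumbnails/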

At the moment there’s one caveat: I had to disable restoring modification times (i.e. use --no-times) because of an obnoxious error (they are backed up fine; only restoring them is the problem):

mkstemp “…” (in root) failed: Operation not permitted (1)

Additionally if you’re on the paranoid side you can also build your own rsync for Android to use as the backup binary.

The code and a ton of documentation can be found on GitHub. Comments and suggestions are welcome.

Build Rsync for Android Yourself

To build rsync for Android you’ll need to have the Android NDK installed already.

Then clone the rsync for Android source (e.g. from LineageOS) …
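Assuming the LineageOS mirror on GitHub, that would be something like:

    git clone https://github.com/LineageOS/android_external_rsync.git
    cd android_external_rsync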

… create the missing jni/Application.mk build file (e.g. from this Gist) and adapt it to your case …
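For reference, a minimal Application.mk needs little more than the target ABI and platform; the linked Gist is the authoritative version, this is just a sketch:

    # jni/Application.mk
    APP_ABI := armeabi-v7a
    APP_PLATFORM := android-21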

… and start the build with
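With a jni/Application.mk layout like this, that should simply be the NDK’s standard build driver (assuming the NDK tools are on your PATH):

    # run from the repository root
    ndk-build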

You’ll find your self-built rsync in obj/local/*/rsync.

Update 2017-10-06:

  • Updated sources from CyanogenMod to LineageOS.
  • Added links to the Gist and the Android NDK docs
  • Updated steps to work with up-to-date setups

If you get something like the following warnings and errors …

… you probably need to update config.h and change /* #undef MAJOR_IN_SYSMACROS */ to #define MAJOR_IN_SYSMACROS 1.

CFSSL FTW

After reading how CloudFlare handles their PKI and that Let’s Encrypt will use it, I wanted to give CFSSL a shot.

Reading the project’s documentation doesn’t really help with building your own CA, but searching the Internet I found Fernando Barillas’ blog explaining how to create your own root certificate and how to create intermediate certificates from it.
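The gist of that approach is a chain of cfssl gencert calls piped into cfssljson; the file and profile names below are made up for illustration:

    # self-signed root CA
    cfssl gencert -initca root-ca-csr.json | cfssljson -bare root-ca
    # intermediate CA signed by the root
    cfssl gencert -ca root-ca.pem -ca-key root-ca-key.pem \
      -config ca-config.json -profile intermediate \
      intermediate-ca-csr.json | cfssljson -bare intermediate-ca
    # service certificate signed by the intermediate
    cfssl gencert -ca intermediate-ca.pem -ca-key intermediate-ca-key.pem \
      -config ca-config.json -profile server \
      service-csr.json | cfssljson -bare service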

I took it a step further and wrote a script that generates new certificates for several services with different intermediates and possibly different configurations (e.g. depending on your distro and services, certain ciphers (e.g. ones using ECC) may not be supported).
I also streamlined generating service-specific key, cert and chain files. 😀

Have a look at the full Gist or just the most interesting part:

You’ll still have to deploy them yourself.

Update 2016-10-04:
Fixed some issues with this Gist.

  • Fixed a bug where intermediate CA certificates weren’t marked as CAs any more
  • Updated the example CSRs and the script so it can now be run without errors

Update 2017-10-08:

  • Cleaned up renew-certs.sh by extracting functions for generating root CA, intermediate CA and service keys.

A Service Monitor built with Polymer

I tried to build a service monitor having the following features:

  • showing the reachability of HTTP servers
  • plotting the number of messages in a specific RabbitMQ queue
  • plotting the number of queues with specific prefixes
  • showing the status of RabbitMQ queues, i.e. how many messages are in them, whether there are any consumers and whether they are hung
  • showing the availability of certain Redis clients

Well, you can find the result on GitHub.
It uses two things I published before: polymer-flot and flot-sparklines. 😀

An example dashboard:

polymer-service-monitor screen shot

too long for Unix domain socket

If you’re an Ansible user and encounter the following error:

you need to set the control_path option in your ansible.cfg file to tell SSH to use shorter path names for the control socket. You should have a look at the ssh_config(5) man page (under ControlPath) for a list of possible substitutions.
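For example, a common shorter pattern looks like this in ansible.cfg (just an illustration, not necessarily the value used here):

    [ssh_connection]
    # %(directory)s is Ansible's control path directory; %%h, %%p and %%r
    # expand to the target host, port and remote user
    control_path = %(directory)s/%%h-%%p-%%r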

I chose: