Thursday, July 27, 2017

Accessing data on a server with a self-signed certificate

Accessing data on the web is relatively simple. Even the somewhat more complicated API using HttpURLConnection is straightforward:

(Examples in Kotlin)

val url = URL(urlString) 
val conn = url.openConnection() as HttpURLConnection 
conn.requestMethod = "GET" 
val data = conn.inputStream.bufferedReader().use { it.readText() }

This is even true for encrypted communications via https. The system simply adds another layer that does the encryption and decryption, and you can use it the same way you would use unencrypted traffic.

But there is more happening in the background that can go wrong.

When the encrypted channel is being established, the server sends a certificate. This certificate contains, among other things:

  • the host name of the server
  • the public key of the server
  • the period for which the key is valid
  • and usually some signatures from well-known certificate authorities (CAs)

If the host name in the certificate does not match the one in the URL, or the key has expired or is not yet valid, Java/Kotlin will raise an exception and refuse to connect.

The problematic part with self-signed certificates is that they don't carry a signature from a well-known CA because they are “self-signed”. Well-known in this context means that the certificate of the CA is present in the Java keystore. A standard connect request will fail.

The way around this is to create a keystore with the self-signed certificate in it and tell Kotlin/Java to use it.

How do you get this certificate?

If you use a server with a self-signed certificate, chances are you installed it yourself; the certificate can be found in that installation.
Alternatively, open the URL in Firefox. The certificate info page lets you export the cert. A certificate looks similar to this:

cm5ld ... 

This is essentially a base64-encoded version of the certificate with a special header and footer (the PEM format).
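For illustration, here is how such a PEM file is put together, sketched in Python (the input bytes below are dummy data, not a real certificate):

```python
import base64
import textwrap

def to_pem(der_bytes):
    """Wrap raw (DER) certificate bytes in the PEM header/footer:
    base64-encode them and split the result into 64-character lines."""
    body = base64.b64encode(der_bytes).decode("ascii")
    lines = textwrap.wrap(body, 64)
    return "\n".join(["-----BEGIN CERTIFICATE-----",
                      *lines,
                      "-----END CERTIFICATE-----"])

print(to_pem(b"dummy certificate bytes"))
```

Decoding the base64 body between the header and footer gives back the raw certificate bytes.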
You could then store that file on your filesystem, or in a resource file on Android, and tell Kotlin to use it instead of the standard keystore (that's what TlsTest.setCertSocketFactory(cert) will do for you).

But there is a nicer way. Since the certificate is sent by the server when the connection is established, why can't we simply use that one?

The answer is: we can. However, with the standard functions there is a chicken-and-egg problem: you can access the certificate once the connection is established, but you need the certificate to establish the connection in the first place...

To solve this, the class shown below implements a “self-signed certificate friendly” TrustManager. The TrustManager is responsible for checking certificate validity (expiration) and the chain of trust (back to the certificate authorities). This special TrustManager calls the functions of the original TrustManager, with one exception: if the chain of trust has a length of 1 (which is the case for self-signed certificates), it forgoes checking the chain of trust.

The class uses such a connection to obtain the certificate. It also provides a method that returns some user-readable information, so that the user can decide whether to trust it or not.

A third method can be used to install a keystore with the obtained self-signed certificate before opening the data connection.
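The decision logic of this lenient Trust Manager can be sketched in a few lines (shown in Python for brevity; the real class wraps the platform's X509TrustManager, and the function below is only an illustration):

```python
def check_server_trusted(chain, check_validity, check_chain_of_trust):
    """Sketch of the lenient trust check described above.

    chain: the certificate chain sent by the server, leaf first.
    check_validity / check_chain_of_trust: stand-ins for the original
    TrustManager's checks; they raise an exception on failure.
    """
    for cert in chain:
        check_validity(cert)            # expiry is always enforced
    if len(chain) > 1:
        check_chain_of_trust(chain)     # skipped for self-signed certs (chain length 1)
```

A self-signed certificate is thus accepted as long as it is within its validity period; everything else still goes through the full chain-of-trust check.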

Here is a typical use case:

val (cert, except) = TlsTest.testConnection("https://xxxxxxx")
if (except != null) { /* abort, there was a fatal error */ }

print(TlsTest.certInfo(cert))   // possibly asking the user for confirmation
val conn = URL("https://xxxxxx/foo").openConnection() as HttpURLConnection
TlsTest.setCertSocketFactory(conn, cert)
val data = conn.inputStream.bufferedReader().use { it.readText() }

There is one edge case this class does not cover. During its execution no “real” data is transferred; it is all done during the establishment phase. If you try to securely connect to a server that only “speaks” plain http, the mismatch only becomes apparent when data is sent, which then causes an “Unrecognized SSL message, plaintext connection?” error.

Here is the class – available as a Gist on Github

P.S. I'm neither a security expert nor a proficient Kotlin programmer. If you spot errors or can suggest improvements, let me know.

Wednesday, February 15, 2017

Playing HLS streams with mpd

If you're playing with the idea of turning your Raspberry Pi into an internet radio, you will sooner or later come across the Music Player Daemon (mpd).  It's a server daemon without a user interface whose sole objective is to play music and manage playlists.  It does, however, expose a control port over which command line utilities, apps on mobile phones, or even desktop applications can talk to it using a common “command language”.  This way they can make mpd create playlists (which can be stored on the server), play songs from a playlist, stop the playback, skip titles, etc.

Even though the content usually resides on your hard disc, mpd can also fetch it from other devices, e.g. a NAS on your LAN, or even from (web radio stations on) the internet.  The list of your favourite stations boils down to a simple playlist containing their URLs; switching stations is the same as skipping to the next song in a playlist of your favourite artists.

Most stations use MP3 or AAC streams which are easily handled by the version of mpd available in the repository of your Linux distribution, even though that version might be a little dated.

HLS streams - as deployed by the BBC, for example - are a little trickier.  In order to get to the music, various playlists have to be downloaded and parsed, and every 10 seconds or so the next chunk has to be downloaded from another URL.  ffmpeg (or its fork avconv) can handle this overhead, but even though ffmpeg is compiled into most versions of mpd installed from repositories, trying to open an HLS stream will not work.
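To illustrate the extra bookkeeping: an HLS playlist is a plain text file whose non-comment lines point at further playlists or media segments, which the player has to keep fetching. A minimal Python sketch of the parsing involved (the playlist content below is made up):

```python
def playlist_entries(m3u8_text):
    """Return the URI lines of an (extended) M3U playlist.
    Lines starting with '#' are tags or comments; the remaining
    lines are URIs of sub-playlists or media segments."""
    return [line.strip() for line in m3u8_text.splitlines()
            if line.strip() and not line.lstrip().startswith("#")]

example = """#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:10.0,
seg0000.ts
#EXTINF:10.0,
seg0001.ts
"""
print(playlist_entries(example))   # -> ['seg0000.ts', 'seg0001.ts']
```

A real HLS client additionally re-fetches the playlist periodically and joins the segments into a continuous stream, which is the part ffmpeg handles.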

The current (Feb. 2017) repository version of mpd is 0.19.1.  You can check the decoder plugins:

$ ./mpd --version
Music Player Daemon 0.19.1
Decoders plugins:
[ffmpeg] 16sv 3g2 3gp 4xm …

The remedy for this situation is already in the source code - perhaps a little hidden. You will have to compile mpd as described below.

However, I did install the version from the repository first. It comes with some system integration like start and stop scripts for the boot process, setting up of an mpd user account on the Raspberry, etc. I've modified the scripts in a few places to point them to the freshly compiled mpd version.

As we have to compile it anyway, let's use the newest version from the mpd homepage.

(The following steps have been tested with mpd 0.20.4 on a Raspberry Pi 3.
If you use a different release, change the version number in the description below accordingly.)

Start by installing the required libraries:

sudo apt-get install g++ \
  libmad0-dev libmpg123-dev libid3tag0-dev \
  libflac-dev libvorbis-dev libopus-dev \
  libadplug-dev libaudiofile-dev libsndfile1-dev libfaad-dev \
  libfluidsynth-dev libgme-dev libmikmod2-dev libmodplug-dev \
  libmpcdec-dev libwavpack-dev libwildmidi-dev \
  libsidplay2-dev libsidutils-dev libresid-builder-dev \
  libavcodec-dev libavformat-dev \
  libmp3lame-dev \
  libsamplerate0-dev libsoxr-dev \
  libbz2-dev libcdio-paranoia-dev libiso9660-dev libmms-dev \
  libzzip-dev \
  libcurl4-gnutls-dev libyajl-dev libexpat-dev \
  libasound2-dev libao-dev libjack-jackd2-dev libopenal-dev \
  libpulse-dev libroar-dev libshout3-dev \
  libmpdclient-dev \
  libnfs-dev libsmbclient-dev \
  libupnp-dev \
  libavahi-client-dev \
  libsqlite3-dev \
  libsystemd-daemon-dev libwrap0-dev \
  libcppunit-dev xmlto \
  libboost-dev

Download the source code:


You might want to check the GPG signature:

gpg --verify mpd-0.20.4.tar.xz.sig

Unpack the archive:

tar xvfJ mpd-0.20.4.tar.xz
cd mpd-0.20.4

Now to the special magic. Run ./configure first; it scans the system environment and writes its findings into config.h. Open that file with an editor. You should be able to find the following line:


This means that the ffmpeg libraries have been detected and will be used.
Now add the following line and save the file:

#define HAVE_FFMPEG 1

This line changes the fallback decoder in src/decoder/DecoderThread.cxx from “mad” to “ffmpeg”. ffmpeg can handle the m3u8 playlists typically used by HLS, while mad cannot.

Now start the build process with

make

and if there are no errors install mpd:

sudo make install

On the Raspberry this new version is stored in /usr/local/bin while the original version still remains in /usr/bin.

Confirm the version of the new file:

$ /usr/local/bin/mpd --version
Music Player Daemon 0.20.4

Additional changes

The following changes of the initial mpd install are necessary to get the new version running on the Raspberry Pi.

Raspbian uses systemd.  There is a unit file for the mpd service that needs to be changed:

sudo nano /lib/systemd/system/mpd.service

Change the line

ExecStart=/usr/bin/mpd --no-daemon $MPDCONF

to point to the new location:

ExecStart=/usr/local/bin/mpd --no-daemon $MPDCONF

Keep in mind that this change might be overwritten if the repository version of mpd is being updated later on... which doesn't happen that often.

Uncomment the following line in /etc/default/mpd. This will define the variable MPDCONF:

MPDCONF=/etc/mpd.conf

Next, let us change some settings in the mpd configuration file /etc/mpd.conf:

sudo nano /etc/mpd.conf

In the default configuration, mpd and its clients must run on the same machine. In order to allow access via the network, change

bind_to_address         "localhost"

to

bind_to_address         "any"

For convenience I've changed my music_directory to a place where I can more easily add music files. Keep in mind that this folder needs to be world readable so that mpd, running as user “mpd”, can access it.

music_directory         "/home/pi/Music"

Now we have to tell the system to read the new configuration and restart the mpd service.

sudo systemctl daemon-reload
sudo service mpd restart

Check the status of the service:

sudo service mpd status

Unrelated problem

In my first attempts mpd froze after playing the first title.  Someone suggested removing pulseaudio... and it worked.

sudo apt-get remove pulseaudio
sudo reboot



Sunday, May 8, 2016

Accessing servers with self-signed certificates in Python

For as long as I can remember, Python has been capable of retrieving web pages from encrypted servers.  In the early days it didn't bother verifying the SSL certificate.  Newer versions do so by default - which is good - and you usually don't have any problems. And if you do, it should merit your attention.

However, there are situations where this verification breaks things: self-signed certificates, e.g. the ones you use in your local network or, as in my case, a web cam which actually offers https.  It uses a self-signed certificate - probably the same in all cameras of this type - but hey... beggars can't be choosers.

To access the cam in Firefox you would simply create a security exception; in Python, life is not that simple.

The following post shows:
  • how to disable the verification
  • how to pull the server certificate
  • how to use it in Python3
  • how to install it in the system

Please note: The following description works on Ubuntu 16.04 LTS. On your distro the directory paths may vary. Change IP addresses, hostnames, filenames, etc. to your needs.

I'm using a small script pulling images from the above mentioned web cam:

import urllib.request
hp = urllib.request.urlopen("")
pic = hp.read()

which now results in

urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed>

The following "context" disables the certificate verification

import ssl
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

hp = urllib.request.urlopen("", context=ctx)
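As an aside, the order of the last two lines matters: a default context has verification switched on, and Python refuses to set CERT_NONE while check_hostname is still enabled. You can confirm this without any network access:

```python
import ssl

ctx = ssl.create_default_context()
# The default context is strict: verification and hostname check are on.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

try:
    ctx.verify_mode = ssl.CERT_NONE     # rejected while check_hostname is True
except ValueError as e:
    print(e)

ctx.check_hostname = False              # correct order: hostname check off first
ctx.verify_mode = ssl.CERT_NONE
```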

That works, but having certificate verification would be nice. For that we need the server certificate:

cert = ssl.get_server_certificate( ('', 443) )

The cert is returned as a PEM-encoded string: a base64 block framed by -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines.

Now you can create a context which uses that cert:

ctx2 = ssl.create_default_context(cadata=cert)

hp = urllib.request.urlopen("", context=ctx2)

Which results in:
ssl.CertificateError: hostname '' doesn't match 'IPC'

Well, that didn't work, at least the error message has changed.

Let's have a look at the cert:

>>> ctx2.get_ca_certs()
[{'issuer': ((('countryName', 'ch'),), (('stateOrProvinceName', 'guangdong'),), (('localityName', 'zhenzhen'),), (('organizationName', 'IPCam'),), (('organizationalUnitName', 'IPCam'),), (('commonName', 'IPC'),)), 'notBefore': 'Mar  7 01:24:16 2013 GMT', 'subject': ((('countryName', 'ch'),), (('stateOrProvinceName', 'guangdong'),), (('localityName', 'zhenzhen'),), (('organizationName', 'IPCam'),), (('organizationalUnitName', 'IPCam'),), (('commonName', 'IPC'),)), 'notAfter': 'Feb 23 01:24:16 2063 GMT', 'serialNumber': '831067E51AFDB917', 'version': 3}]

As you can see, the commonName in this cert is IPC, while we're accessing the server under its IP address. They don't match.
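For completeness, here is how to pull the commonName out of that nested tuple structure programmatically (the helper is mine; the dict layout is the one shown above):

```python
def common_name(cert):
    """Extract the commonName from a certificate dict as returned by
    SSLContext.get_ca_certs() or SSLSocket.getpeercert()."""
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

cert = {"subject": ((("countryName", "ch"),),
                    (("organizationName", "IPCam"),),
                    (("commonName", "IPC"),))}
print(common_name(cert))   # -> IPC
```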

You can fix this in two ways. Either tell Python to ignore the hostname:

ctx3 = ssl.create_default_context(cadata=cert)
ctx3.check_hostname = False

hp = urllib.request.urlopen("", context=ctx3)

or put an entry into /etc/hosts that maps the camera's IP address to the hostname IPC (you need root privileges for that).

System wide integration
Using contexts is fine, but then I have to change every piece of code to create and use the context. It would be nice to have it "simply" work.
For this you need root access. You can then put the cert into the system-wide certificate store, and Python will use it like any normal cert - including the one from the Hongkong Post Office :-)

First create the above mentioned entry in /etc/hosts to get the hostname check right.

Then create the directory /etc/ssl/mycerts and copy ipcamera.crt into it.

The system-wide certs are stored in /etc/ssl/certs. In order for your certificate to be found there, it must be linked under a name derived from its hash. Calculate the hash using openssl:

$ openssl x509 -noout -hash -in /etc/ssl/mycerts/ipcamera.crt

Now go to /etc/ssl/certs and create the appropriately named link. You must append .0 to the hash; the hash below is an example - use the one openssl printed:

sudo ln -s ../mycerts/ipcamera.crt ab0cd04d.0

Now it simply works:

w = urllib.request.urlopen("https://IPC")

If there are easier ways to do it, please let me know.


Wednesday, February 25, 2015

TP-LINK TL-WN725N v2 working on Raspberry Pi (Raspbian)

A few days back I got my first Raspberry Pi. To make things easier, I bought a starter pack which contained - among other things - a small Wifi adapter, the TL-WN725N from TP-Link.

As it turned out, this USB device does not work out-of-the-box with the current version of Raspbian. According to the sources listed below, the TL-WN725N once worked with the Raspberry Pi without problems, until the new and shiny version 2 of the adapter was released. It seems that this fact slipped by the vendor of this starter pack.

The USB ID of this particular Wifi adapter is: 0bda:8179

The current version (Feb. 2015) of the kernel, as returned by uname -a is:
Linux raspberrypi 3.18.7+ #758 PREEMPT Mon Feb 23 19:27:03 GMT 2015 armv6l GNU/Linux

dmesg showed this message:

[   23.690020] r8188eu 1-1.4:1.0: Direct firmware load for rtlwifi/rtl8188eufw.bin failed with error -2
[   23.690072] r8188eu 1-1.4:1.0: Firmware rtlwifi/rtl8188eufw.bin not available

The most helpful posting on this topic was this one. It contains a link to an archive and instructions to install it.

The archive contains firmware for the Wifi adapter and a kernel module.

Contrary to the claims of the author of that blog post, the firmware was missing from my version of Raspbian. On the other hand, the kernel module is not needed. This is a good thing, because kernel modules depend on a specific kernel version, and the one in the archive is outdated and must not be installed: it would replace the existing module, which is only complaining about the missing firmware.

Which means, I only had to copy the firmware (rtl8188eufw.bin):

tar -zxvf 8188eu-20140307.tar.gz
sudo cp rtl8188eufw.bin /lib/firmware/rtlwifi
sudo reboot

The Dropbox link may vanish at any time. A better solution would be appreciated.

As an alternative that should run out of the box, this blog post suggests the EDIMAX EW-7811U.

  • (german)
  • (german)

Sunday, November 30, 2014

Pitfalls installing a GTX 970 on an Ubuntu 14.04 LTS system

If you are going to put a GTX 970 into an Ubuntu box running 14.04 LTS, you should update to nvidia driver 343 (if you don't mind running the proprietary driver) to take advantage of its features.

This driver is available via the ppa repository xorg/edgers.

However, skip that part if you want to use the card's GPU in Blender.

For this you need CUDA 6.5, the new shiny package from nvidia - only 1 GB!

You can get it here. But be aware that there also is a CUDA 6.5 package without GTX 9xx support. So make sure that it says "with Support for GeForce GTX9xx GPUs" in the title.

Grab the DEB file and install it using:

sudo dpkg -i cuda-repo-ubuntu1404-6-5-prod_6.5-19_amd64.deb

This will copy a handful of other DEB files to /var/cuda-repo-6-5-prod.

Import them into the package system with

sudo apt-get update

and install them in one go with

sudo apt-get install cuda

It contains the nvidia driver (343.19), the CUDA files and various other stuff.

After a reboot, check the running version of the NVIDIA driver using the nvidia-settings utility. If the version is not 343.19, the nvidia driver hasn't been replaced (most likely because it was still in use). In this case you have to bring the system into a terminal-only mode.

The usual procedure is:
  • log-out
  • switch to a terminal (Ctrl-Alt-F1)
  • shut down the graphical login using: sudo service lightdm stop
    (depends on the Ubuntu flavour: lightdm for the vanilla version)
  • and proceed from there.

Disclaimer: Replacing a video driver is no fun if it fails and you end up without any GUI. Don't blame me.

The install will also take care of the initramfs (needed during boot time).

In order to use the GTX 9xx in Blender, you currently have to use a development build, as v2.72 will fail, reporting an Unknown or unsupported CUDA architecture in the terminal.

All versions as of Nov. 28, 2014.

Thursday, October 23, 2014

Accessing library materials, groups and other stuff via the Blender Python API

Information valid for Blender 2.72

The API documentation for the equivalent of the Link and Append command is a bit cryptic regarding what values to use for the parameters of this operator.  After some Googling I found the solution in a few snippets of code here.

The main parameters of the Blender Python append command are:
  • filepath
  • filename
  • directory

The values of these parameters combine the file system path of the blend file and its "inner" structure.

You can see this structure if you use the Link/Append command of the Blender GUI.  Once you click on the blend file its inner structure with materials, groups, objects etc. becomes visible.

In order to append, for example, the material with the name white55 from a file named test.blend located in my home directory /home/mike, I have to use:

   filepath = "//test.blend\\Material\\"              # // + name of blend file + \\Material\\
   filename = "white55"                               # name of the material
   directory = "/home/mike/test.blend\\Material\\"    # file path of the blend file + name of the blend file + \\Material\\
   link = False                                       # append, don't link

Check the API entry for the other parameters.

This is an example from a Unix system, where the file path separator is a normal slash; on Windows systems you have to use backslashes.

However, the backslash is also the Python escape character, i.e. for each \ in the path name you have to type \\.

You can modify this example using \\Object\\ or \\Group\\ ...
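Since all three parameters follow the same pattern, their construction can be wrapped in a small helper (the helper is my own, not part of the bpy API; it mirrors the Material example above):

```python
import os

def append_params(blend_path, block_type, name):
    """Build the filepath/filename/directory arguments for
    bpy.ops.wm.append() from the blend file path, the datablock
    type ('Material', 'Object', 'Group', ...) and the datablock name."""
    blend_name = os.path.basename(blend_path)
    return {
        "filepath": "//" + blend_name + "\\" + block_type + "\\",
        "filename": name,
        "directory": blend_path + "\\" + block_type + "\\",
        "link": False,
    }

params = append_params("/home/mike/test.blend", "Material", "white55")
# in Blender: bpy.ops.wm.append(**params)
```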

Note: The operator append was renamed recently (in v2.71); its earlier name was link_append.

This operator doesn't return an error code if the material, object, etc. couldn't be loaded.  You have to make sure that it is present in the blend file.

You can iterate over the material names in a blend library file using the following snippet:

with bpy.data.libraries.load("/home/mike/test.blend") as (data_from, data_to):
    print(data_from.materials)

data_from.materials is a list of strings with the material names. The list is empty, if there aren't any materials available.

The command dir(data_from) returns
['actions', 'armatures', 'brushes', 'cameras', 'curves', 'fonts', 'grease_pencil', 'groups', 'images', 'ipos', 'lamps', 'lattices', 'linestyles', 'masks', 'materials', 'meshes', 'metaballs', 'movieclips', 'node_groups', 'objects', 'paint_curves', 'palettes', 'scenes', 'sounds', 'speakers', 'texts', 'textures', 'worlds']

Three guesses what data_from.groups or data_from.textures will return and how to modify the append command...

Friday, October 10, 2014

Updating Syncthing on a Synology DM1812+ (Part 2)

In a previous post I described a way to keep your Syncthing installation on a Synology DM1812+ up-to-date as the package from the Cytec repository lags significantly behind the current version.

This method is no longer feasible. The main reason is that the current builds of Syncthing have become dynamically linked, i.e. Syncthing uses certain libraries located on the host machine. If these are incompatible (like the ones on the Diskstation) strange things happen.

Until recently these libraries were part of the Syncthing binary. This so called statically linked binary had no external references and incompatible libraries on the host machine weren't a problem.

If you use the upgrade mechanism built-into Syncthing you get the dynamically linked version. This works with most machines, but not with the DM1812+.

You have to build a statically linked version yourself and replace the binary on Diskstation manually. Fortunately this isn't so hard.

If you haven't done so already, install Syncthing from the Cytec repository. This will copy all the other necessary scripts for the Syncthing integration onto the Diskstation, e.g. scripts to start and stop Syncthing via the Synology web GUI.

The following paragraphs describe how to compile Syncthing amd64 on a fresh Ubuntu LTS 14.04 amd64 machine. It mostly repeats the steps found here on the Syncthing site, but with additional information on how to get a statically linked binary.

The CPU running in the Diskstation is amd64 compatible, but it can also run 32 bit binaries. By tweaking the environment variable GOARCH, it should be possible to cross-compile it for other architectures. I haven't tried this.

Syncthing is written in Go. In order to compile Syncthing you first have to compile Go (also as statically linked version).


You only need (apart from the source code) the Ubuntu packages for git and mercurial.

I suggest installing one additional package: libc-bin. It contains the utility ldd, which can be used to check whether a binary is indeed not a dynamic executable.

$ sudo apt-get install git mercurial libc-bin

Compiling Go

First get the source code for the go language. Currently Syncthing requires Go 1.3, which can be found on the golang website. There you also find the source code for other versions and architectures, if needed.

Download it:

$ wget

The Go source code wants to live in /usr/local/go and Syncthing expects it there. You need root privileges to create this folder; then change permissions so that you have access to it as a normal user (in this example “mike” - change as required).

$ sudo mkdir -p /usr/local/go
$ sudo chown mike:mike /usr/local/go

Unpack the source into /usr/local/go

$ tar -C /usr/local -xzf go1.3.linux-amd64.tar.gz

Set the following environment variables

$ export PATH=$PATH:/usr/local/go/bin
$ export GOOS=linux
$ export GOARCH=amd64
$ export CGO_ENABLED=0

CGO_ENABLED=0 is what makes the resulting binaries statically linked.

Start compiling

$ cd /usr/local/go/src
$ ./all.bash

This can take a while.

Check if the resulting binary is indeed statically linked

$ ldd /usr/local/go/bin/go
    not a dynamic executable

(German message:  Das Programm ist nicht dynamisch gelinkt )

If you get something that looks like the following, the binary is dynamically linked

$ ldd /usr/local/go/bin/go
    linux-vdso.so.1 =>  (0x00007fff0ed31000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f4cf9726000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4cf9360000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f4cf9966000)

Only continue if you have the statically linked version.

Check Go

$ go version
go version go1.3 linux/amd64

Compiling Syncthing

Create a directory for the syncthing source (the example shows its preferred location) and get the source:

$ mkdir -p ~/src/
$ cd ~/src/
$ git clone

Compile it

$ cd syncthing
$ go run build.go

This doesn't take long.

Check it

$ cd bin
$ ldd syncthing
    not a dynamic executable

This is the statically linked binary we need.

This file has to be copied to the Diskstation. If you have installed it in its default location (volume1), the Syncthing binary is located at:


Stop the Syncthing process via the Synology web GUI, rename the old binary (just in case), and copy the new one to that location. Then restart the process in the GUI.

You're good to sync.