Sunday, May 8, 2016

Accessing servers with self-signed certificates in Python

For as long as I can remember, Python has been capable of retrieving web pages from encrypted servers. In the early days it didn't bother verifying the SSL certificate. Newer versions do so by default - which is good - and you usually don't have any problems. And if you do, it should merit your attention.

However, there are situations where this verification breaks things: self-signed certificates, e.g. the ones you use in your local network or, as in my case, a web cam which actually offers https. It uses a self-signed certificate - probably the same in all cameras of this type - but hey... beggars can't be choosers.

To access the cam in Firefox you would simply create a security exception; in Python life is not that simple.

The following post shows:
  • how to disable the verification
  • how to pull the server certificate
  • how to use it in Python3
  • how to install it in the system

Please note: The following description works on Ubuntu 16.04 LTS. On your distro the directory paths may vary. Change IP addresses, hostnames, filenames, etc. to match your setup.

I'm using a small script that pulls images from the above-mentioned web cam:

import urllib.request
...
hp = urllib.request.urlopen("https://192.168.0.100/pic.jpg")
pic = hp.read()
...


which now results in:

urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed

The following "context" disables the certificate verification

import ssl
...
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

hp = urllib.request.urlopen("https://192.168.0.100/pic.jpg", context=ctx)


That works, but having certificate verification would be nice. For that we need the server certificate:

cert = ssl.get_server_certificate(('192.168.0.100', 443))
with open('/tmp/ipcamera.crt', 'w') as f:
    f.write(cert)


The cert looks like this:

-----BEGIN CERTIFICATE-----
MIIClDCCAf2gAwIBAgIJAIMQZ+Ua/bkXMA0GCSqGSIb3DQEBBQUAMGIxCzAJBgNV
...
4XAVFCBZOPwflj9Ug0YNSIgcSfDOxha06C9hwZ0+ZuafkjXv16sGEA==
-----END CERTIFICATE-----



Now you can create a context which uses that cert:

ctx2 = ssl.create_default_context()
ctx2.load_verify_locations("/tmp/ipcamera.crt")

hp = urllib.request.urlopen("https://192.168.0.100", context=ctx2)
...


Which results in:
 
ssl.CertificateError: hostname '192.168.0.100' doesn't match 'IPC'


Well, that didn't work, but at least the error message has changed.

Let's have a look at the cert:

>>> ctx2.get_ca_certs()
[{'issuer': ((('countryName', 'ch'),), (('stateOrProvinceName', 'guangdong'),), (('localityName', 'zhenzhen'),), (('organizationName', 'IPCam'),), (('organizationalUnitName', 'IPCam'),), (('commonName', 'IPC'),)), 'notBefore': 'Mar  7 01:24:16 2013 GMT', 'subject': ((('countryName', 'ch'),), (('stateOrProvinceName', 'guangdong'),), (('localityName', 'zhenzhen'),), (('organizationName', 'IPCam'),), (('organizationalUnitName', 'IPCam'),), (('commonName', 'IPC'),)), 'notAfter': 'Feb 23 01:24:16 2063 GMT', 'serialNumber': '831067E51AFDB917', 'version': 3}]


As you can see, the commonName for this cert is IPC, but we're trying to access the server using the hostname 192.168.0.100. They don't match.

You can fix this in two ways. Either tell Python to ignore the hostname:

ctx3 = ssl.create_default_context()
ctx3.load_verify_locations("/tmp/ipcamera.crt")
ctx3.check_hostname = False

hp = urllib.request.urlopen("https://192.168.0.100", context=ctx3)


or put an entry into /etc/hosts (you need root privileges for that):

192.168.0.100   IPC
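
With that entry in place you can keep the hostname check enabled and open the connection using the name from the certificate. A minimal sketch (same cert file as above; /pic.jpg is the path from my script):

ctx4 = ssl.create_default_context()
ctx4.load_verify_locations("/tmp/ipcamera.crt")

# the hostname "IPC" now matches the commonName of the cert
hp = urllib.request.urlopen("https://IPC/pic.jpg", context=ctx4)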

System-wide integration
Using contexts is fine, but that way I have to change every piece of code to create the context and use it. It would be nice to have it "simply" work.
For this you need root access. Then you can put the cert into the system-wide certificate store and Python will use it like any normal cert - including the one from the Hongkong Post Office :-)

First create the above-mentioned entry in /etc/hosts to get the hostname check right.

Then create the directory /etc/ssl/mycerts and copy ipcamera.crt into it.

The system-wide certs are stored in /etc/ssl/certs. In order for your certificate to be found, it must be reachable under a name derived from its hash. Calculate the hash using openssl:

$ openssl x509 -noout -hash -in /etc/ssl/mycerts/ipcamera.crt
ab0cd04d


Now go to /etc/ssl/certs and create the appropriately named link (you must append .0 to the hash).

sudo ln -s ../mycerts/ipcamera.crt ab0cd04d.0

Now it simply works:

w = urllib.request.urlopen("https://IPC")

If there are easier ways to do it, please let me know.



Links:
  • https://docs.python.org/2/library/ssl.html
  • http://gagravarr.org/writing/openssl-certs/others.shtml#ca-openssl

Wednesday, February 25, 2015

TP-LINK TL-WN725N v2 working on Raspberry Pi (Raspbian)

A few days back I got my first Raspberry Pi. To make things easier, I bought a starter pack which contained – among others – a small Wifi adapter TL-WN725N from TP-Link.

As it turned out, this USB device does not work out-of-the-box with the current version of Raspbian. According to the sources listed below, the TL-WN725N once worked without problems with the Raspberry Pi until the new and shiny version 2 of the TL-WN725N was released. It seems that this fact slipped by the vendor of this starter pack.

The USB ID of this particular Wifi adapter is: 0bda:8179

The current version (Feb. 2015) of the kernel, as returned by uname -a is:
Linux raspberrypi 3.18.7+ #758 PREEMPT Mon Feb 23 19:27:03 GMT 2015 armv6l GNU/Linux

dmesg showed this message:

[   23.690020] r8188eu 1-1.4:1.0: Direct firmware load for rtlwifi/rtl8188eufw.bin failed with error -2
[   23.690072] r8188eu 1-1.4:1.0: Firmware rtlwifi/rtl8188eufw.bin not available


The most helpful posting on this topic was this one. It contains a link to a ZIP file and instructions to install it.

The ZIP file contains firmware for the Wifi adapter and a kernel module.

Contrary to the claims of the author of the blog post, the firmware was missing in my version of Raspbian. On the other hand, the kernel module is not needed. This is a good thing, because kernel modules depend on a specific kernel version; the one in the ZIP file is outdated and must not be installed, as it would replace the existing module - the very one complaining about the missing firmware.

Which means I only had to copy the firmware (rtl8188eufw.bin):

wget https://dl.dropboxusercontent.com/u/80256631/8188eu-20140307.tar.gz
tar -zxvf 8188eu-20140307.tar.gz
sudo cp rtl8188eufw.bin /lib/firmware/rtlwifi
sudo reboot


The dropbox link may vanish any time. A better solution would be appreciated.

As an alternative that should work out of the box, this blog post suggests the EDIMAX EW-7811U.


Links
  • http://laurenthinoul.com/how-to-install-tp-link-tl-wn725n-on-raspberry-pi/
  • http://www.mendrugox.net/2013/08/tp-link-tl-wn725n-v2-working-on-raspberry-raspbian/
  • http://blog.pi3g.com/2013/05/tp-link-150mbps-wireless-n-nano-usb-adapter-tl-wn725n-und-raspberry-pi/ (german)
  • http://www.amazingcode.de/tp-link-tl-wn725n-auf-raspbian/ (german)

Sunday, November 30, 2014

Pitfalls installing a GTX 970 on an Ubuntu 14.04 LTS system

If you are going to put a GTX 970 into a Ubuntu box running 14.04 LTS, you should update to nvidia driver 323 (if you don't mind running the proprietary driver) to take advantage of its features.

This driver is available via the PPA repository xorg-edgers.

However, skip that part if you want to use the GPU in Blender.

For this you need CUDA 6.5, the new shiny package from nvidia - only 1 GB!

You can get it here. But be aware that there also is a CUDA 6.5 package without GTX 9xx support. So make sure that it says "with Support for GeForce GTX9xx GPUs" in the title.

Grab the DEB file and install it using:

sudo dpkg -i cuda-repo-ubuntu1404-6-5-prod_6.5-19_amd64.deb

This will copy a handful of other DEB files to /var/cuda-repo-6-5-prod.

Import them into the package system with

sudo apt-get update

and install them in one go with

sudo apt-get install cuda

It contains the nvidia driver (343.19), the CUDA files and various other stuff.

After a reboot, check the running version of the NVIDIA driver using the nvidia-settings utility. If the version is not 343.19, the nvidia driver hasn't been replaced (most likely because it was still in use). In this case you have to bring the system into a terminal-only mode.

The usual procedure is:
  • log-out
  • switch to a terminal (Ctrl-Alt-F1)
  • shut down the graphical login using: sudo service lightdm stop
    (depends on the Ubuntu flavour: lightdm for the vanilla version)
  • and proceed from there.

Disclaimer: Replacing a video driver is no fun if it fails and you end up without any GUI. Don't blame me.

The install will also take care of the initramfs (needed at boot time).

In order to use the GTX 9xx in Blender, you currently have to use a development build from https://builder.blender.org/download/ as v2.72 will fail, reporting an Unknown or unsupported CUDA architecture in the terminal.


All versions as of Nov. 28, 2014.

Thursday, October 23, 2014

Accessing library materials, groups and other stuff via the Blender Python API

Information valid for Blender 2.72

The API documentation for the equivalent of the Link and Append command is a bit cryptic regarding what values to use for the parameters of this operator. After some Googling I found the solution in a few snippets of code here.

The main parameters of the Blender Python append command are:
  • filepath
  • filename
  • directory

The values of these parameters combine the file system path of the blend file and its "inner" structure.

You can see this structure if you use the Link/Append command of the Blender GUI. Once you click on the blend file, its inner structure with materials, groups, objects etc. becomes visible.


In order to append, for example, the material named white55 from a file named test.blend located in my home directory /home/mike, I have to use:

bpy.ops.wm.append(
    # // + name of blend file + \\Material\\
    filepath="//test.blend\\Material\\",

    # name of the material
    filename="white55",

    # file path of the blend file + name of the blend file + \\Material\\
    directory="/home/mike/test.blend\\Material\\",

    # append, don't link
    link=False
)


Check the API entry for the other parameters.

This is an example from a Unix system, where the file path separator is a normal slash; on Windows systems you have to use a backslash.

However, the backslash is also the Python escape character, i.e. for one \ in the path name, you have to type \\
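
On Windows the directory parameter would then look something like this (hypothetical path, just to illustrate the escaping):

directory="C:\\Users\\mike\\test.blend\\Material\\"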

You can modify this example using \\Object\\ or \\Group\\ ...

Note: The operator append has recently (v2.71) been renamed. Its earlier name was link_append.

This operator doesn't return an error code if the material, object etc. couldn't be loaded. You have to make sure that it is present in the blend file.

You can iterate over the material names in a blend library file using the following snippet:

with bpy.data.libraries.load("/home/mike/test.blend") as (data_from, data_to):
   print(data_from.materials)


data_from.materials is a list of strings with the material names. The list is empty if there aren't any materials available.
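
Putting both snippets together, a minimal sketch (using the file and material names from above) that only appends the material if the library actually contains it:

import bpy

# check the library first, then append
with bpy.data.libraries.load("/home/mike/test.blend") as (data_from, data_to):
    have_material = "white55" in data_from.materials

if have_material:
    bpy.ops.wm.append(
        filepath="//test.blend\\Material\\",
        filename="white55",
        directory="/home/mike/test.blend\\Material\\",
        link=False
    )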

The command dir(data_from) returns
['actions', 'armatures', 'brushes', 'cameras', 'curves', 'fonts', 'grease_pencil', 'groups', 'images', 'ipos', 'lamps', 'lattices', 'linestyles', 'masks', 'materials', 'meshes', 'metaballs', 'movieclips', 'node_groups', 'objects', 'paint_curves', 'palettes', 'scenes', 'sounds', 'speakers', 'texts', 'textures', 'worlds']

Three guesses what data_from.groups or data_from.textures will return and how to modify the append command...

Friday, October 10, 2014

Updating Syncthing on a Synology DM1812+ (Part 2)

In a previous post I described a way to keep your Syncthing installation on a Synology DM1812+ up-to-date as the package from the Cytec repository lags significantly behind the current version.

This method is no longer feasible. The main reason is that current builds of Syncthing are dynamically linked, i.e. Syncthing uses certain libraries located on the host machine. If these are incompatible (like the ones on the Diskstation), strange things happen.

Until recently these libraries were part of the Syncthing binary. This so-called statically linked binary had no external references, so incompatible libraries on the host machine weren't a problem.

If you use the upgrade mechanism built into Syncthing, you get the dynamically linked version. This works on most machines, but not on the DM1812+.

You have to build a statically linked version yourself and replace the binary on the Diskstation manually. Fortunately this isn't that hard.

If you haven't done so already, install Syncthing from the Cytec repository. This will copy all the other necessary scripts for the Syncthing integration onto the Diskstation, e.g. scripts to start and stop Syncthing via the Synology web GUI.

The following paragraphs describe how to compile Syncthing amd64 on a fresh Ubuntu LTS 14.04 amd64 machine. It mostly repeats the steps found here on the Syncthing site, but with additional information on how to get a statically linked binary.

The CPU running in the Diskstation is amd64 compatible, but it can also run 32 bit binaries. By tweaking the environment variable GOARCH, it should be possible to cross-compile it for other architectures. I haven't tried this.


Syncthing is written in Go. In order to compile Syncthing you first have to compile Go (also as a statically linked version).

Prerequisites


Apart from the source code, you only need the Ubuntu packages for git and mercurial.

I suggest installing one additional package: libc-bin. It contains the utility ldd, which can be used to check whether a binary is indeed not a dynamic executable.

$ sudo apt-get install git mercurial libc-bin

Compiling Go

First get the source code for the Go language. Currently Syncthing requires Go 1.3, which can be found on the golang website. There you also find the source code for other versions and architectures, if needed.

Download it:

$ wget https://storage.googleapis.com/golang/go1.3.linux-amd64.tar.gz

The Go source code wants to live in /usr/local/go and Syncthing expects it there. You need root privileges to create this folder; then change its ownership so that you have access to it as a normal user (in this example "mike" - change as required).

$ sudo mkdir -p /usr/local/go
$ sudo chown mike:mike /usr/local/go


Unpack the source into /usr/local/go:

$ tar -C /usr/local -xzf go1.3.linux-amd64.tar.gz

Set the following environment variables:

$ export PATH=$PATH:/usr/local/go/bin
$ export GOOS=linux
$ export GOARCH=amd64
$ export CGO_ENABLED=0


CGO_ENABLED=0 is what makes the resulting binaries statically linked.

Start compiling

$ cd /usr/local/go/src
$ ./all.bash



This can take a while.

Check if the resulting binary is indeed statically linked:

$ ldd /usr/local/go/bin/go
    not a dynamic executable


(German message: Das Programm ist nicht dynamisch gelinkt)

If you get something that looks like the following, the binary is dynamically linked:

$ ldd /usr/local/go/bin/go
    linux-vdso.so.1 =>  (0x00007fff0ed31000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f4cf9726000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4cf9360000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f4cf9966000)


Only continue if you have the statically linked version.

Check Go

$ go version
go version go1.3 linux/amd64


Compiling Syncthing


Create a directory for the syncthing source (the example shows its preferred location) and get the source:

$ mkdir -p ~/src/github.com/syncthing
$ cd ~/src/github.com/syncthing
$ git clone https://github.com/syncthing/syncthing


Compile it

$ cd syncthing
$ go run build.go





This doesn't take long.

Check it

$ cd bin
$ ldd syncthing
    not a dynamic executable


This is the statically linked binary we need.

This file has to be copied to the Diskstation. If you have installed it in its default location (volume1), the Syncthing binary is located at:

/volume1/@appstore/syncthing/bin/syncthing

Stop the Syncthing process via the Synology web GUI, rename the old one (just in case), and copy the new one to that location. Restart the process in the GUI.

You're good to sync.

Sunday, September 14, 2014

Blender: Change Cycles render seed automatically

Here is a small Blender add-on to change the seed value for the Cycles render engine automatically before rendering a frame.

The seed value determines the noise pattern of a Cycles render.

If you only render one image, this is of no concern for you.

If you render an animation, you get the same pattern for each frame which is very obvious for the viewer.

To counter this, you can enter #frame as seed value (creating a so-called driver), which returns the frame number as the seed. This gives you a different noise pattern per frame, which makes it look like film grain.

This obviously only works if you have an animation to render, but not with a single frame.

After installing the add-on you see an additional checkbox in the render sampling panel where you can switch this feature on or off.
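
The add-on itself is not listed here, but its core is tiny. A minimal sketch (not the actual add-on code, which also registers the checkbox UI): a render_pre handler that randomizes the Cycles seed before each frame is rendered:

import random

import bpy
from bpy.app.handlers import persistent

@persistent
def randomize_cycles_seed(scene):
    # give every rendered frame a fresh noise pattern
    scene.cycles.seed = random.randint(0, 2**31 - 1)

bpy.app.handlers.render_pre.append(randomize_cycles_seed)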



Why go to the trouble of writing an add-on for this?

Lately I have used image stacking a lot.

This technique allows you to reduce noise in pictures created by the Cycles rendering engine by rendering the same frame multiple times - provided you change the render seed. You can then calculate the "average" of these pictures (e.g. using ImageMagick), cancelling out some of the noise.

If you want a clearer version of an image that has just rendered for over an hour, you save it, render it again, and then stack the images. This is much faster than scrapping the first image and re-rendering it with a larger sample count.

After forgetting to change the seed value a couple of times, the level of suffering was high enough to make an add-on. :-)


Tuesday, September 9, 2014

How to "average" PNG files with ImageMagick

In a recent post on CgCookie, Kent Trammell explained that you can improve the quality of a Cycles render by rendering the image multiple times, each time with a different value for the sampling seed.


In his case those images were calculated by different computers of his render farm.

But the same is true if you have waited an hour for a render to finish and are pleased with the image except for the noise. In this case save the image, change the seed value and render it again. This way the first hour was not in vain.

In any case you will end up with two or more PNG files which look similar except for the noise pattern, which changes with the seed value.

If you calculate the mean of these images, the noise level will be reduced (for independent noise, averaging N images reduces it roughly by a factor of √N).

Kent Trammell showed how this calculation can be achieved with the Blender node editor (his math was a bit off, but the principle is sound).

The same can be accomplished with other programs like Photoshop or The Gimp, or with command line tools like ImageMagick.

The ImageMagick command for doing this is:

convert image1 image2 image3 -evaluate-sequence mean result

However, if you try this with PNG files that contain an alpha channel (RGBA), the result is a totally transparent image.

In this case use the option "-alpha off", e.g. like this:

convert render*.png -alpha off -evaluate-sequence mean result.png

A final note: Keep in mind that all images will be loaded into memory during this process - i.e. you want to limit the number of images.
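
If you need to stack more images than fit into memory, the mean can also be computed incrementally, one image at a time. A sketch in Python (assuming Pillow and NumPy are installed; render*.png and result.png as above):

import glob

import numpy as np
from PIL import Image

# running mean over all renders, holding only one image in memory at a time
mean = None
for i, name in enumerate(sorted(glob.glob("render*.png")), start=1):
    # convert("RGB") drops the alpha channel, like -alpha off above
    img = np.asarray(Image.open(name).convert("RGB"), dtype=np.float64)
    mean = img if mean is None else mean + (img - mean) / i

Image.fromarray(mean.astype(np.uint8)).save("result.png")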