After the automatic kernel update to version 6.14.0.24, VirtualBox aborts the launch of a virtual machine with the following error message:
VirtualBox can't enable the AMD-V extension. Please disable the KVM kernel extension, recompile your kernel and reboot (VERR_SVM_IN_USE).
The reason - according to this posting - is a change in the initialization of KVM during the boot process.
In the new kernel, KVM is initialized at boot time and thus prevents VirtualBox machines from starting.
The posting suggests the following fix:
Setting the following kernel parameter suppresses this initialization:
kvm.enable_virt_at_load=0
The easiest way to achieve this is to edit the configuration file /etc/default/grub and append this entry to the GRUB_CMDLINE_LINUX_DEFAULT parameter, e.g. like this:
GRUB_CMDLINE_LINUX_DEFAULT="kvm.enable_virt_at_load=0"
Running
sudo update-grub
regenerates /boot/grub/grub.cfg.
On the next boot the parameter is active and VirtualBox can be started again.
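If GRUB_CMDLINE_LINUX_DEFAULT already contains options, you will want to append the parameter rather than replace the whole value. A sketch of that edit, run against a temporary copy ("quiet splash" is just an assumed existing value; point the sed at /etc/default/grub for real use):

```shell
# Sketch: append kvm.enable_virt_at_load=0 to an existing
# GRUB_CMDLINE_LINUX_DEFAULT line instead of replacing it.
f=$(mktemp)
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > "$f"
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 kvm.enable_virt_at_load=0"/' "$f"
cat "$f"   # -> GRUB_CMDLINE_LINUX_DEFAULT="quiet splash kvm.enable_virt_at_load=0"
```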
Monday, July 28, 2025
Ubuntu 24.04: kernel update prevents VirtualBox from starting
Saturday, April 12, 2025
Double-sided printing on Brother MFC-L2710DW now working again on Ubuntu
I'm using a Brother MFC L2710DW with my Ubuntu machines.
I chose this printer (which supports duplex printing) years ago because it was supported by Ubuntu out of the box.
However, after updating to Ubuntu 24.04 (Noble Numbat) the duplex printing capability was gone... The "front" page was printed OK, but the "back" page looked like a memory dump: parts of the intended output were visible, but interleaved with random geometric pixel patterns.
It turned out that the printer driver for the MFC-L2710DW had changed to "brlaser" between Ubuntu versions. This driver now serves various Brother laser printers.
I opened a bug report on the brlaser GitHub repo. It turned out that this was a known problem that had meanwhile been fixed.
However, the fix hadn't made it into the branch that Ubuntu is using.
I created a bug report on Launchpad with a link to the GitHub thread.
The maintainer promptly created a new version which has now (mid-April 2025) "pre-release" status.
If you can't wait: the binaries are available if you follow the links on Launchpad.
After downloading and installing it (sudo dpkg -i printer-driver-brlaser_6.2.7-0ubuntu1_amd64.deb), double-sided printing works again.
Sunday, January 9, 2022
Docker: temporary error ... try again later
In case you encounter this error message while building Docker images, I want to draw your attention to a more unusual cause (DNS), how to fix it, and the somewhat embarrassing root cause in my case.
Have fun.
When setting up a Docker installation on an Ubuntu Server 20.04 LTS system, the build process for a Docker image that had worked fine on my desktop computer failed with this misleading error message:
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.4/main: temporary error (try again later)
Strangely enough, I could wget this file from the command line.
If you google this error, the standard advice is to update Docker. I did (to version 20.10.12)... but it didn't help.
With problems like this it's always good advice to try to replicate them with the simplest setup possible.
In this case:
- Get a small Linux system image from the repo (alpine)
- and start a command line inside the container: sh
- try to install a package (curl) within the container
$ docker run -it alpine:3.4 sh
/ # apk --update add curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.4/main: temporary error (try again later)
WARNING: Ignoring APKINDEX.167438ca.tar.gz: No such file or directory
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.4/community: temporary error (try again later)
WARNING: Ignoring APKINDEX.a2e6dac0.tar.gz: No such file or directory
ERROR: unsatisfiable constraints:
curl (missing):
required by: world[curl]
Docker did download the alpine image, but inside the container downloading the APKINDEX failed. And yeah - I did wait and try again later... no luck.
Back inside the container I tried:
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=117 time=14.639 ms
64 bytes from 8.8.8.8: seq=1 ttl=117 time=13.921 ms
64 bytes from 8.8.8.8: seq=2 ttl=117 time=13.956 ms
^C
/ # ping google.com
ping: bad address 'google.com'
Which means: I can reach the internet from inside the container, but DNS resolution obviously isn't working. Let's see who is resolving:
/ # cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 129.168.0.1
search local
Who is 129.168.0.1 and how did it become the nameserver? To be honest - it was my fault (more on that later).
Using the only text editor available in a base alpine install I changed it to
/ # vi /etc/resolv.conf
nameserver 8.8.8.8
And yes, when using vi I always have to think very hard about how to get the changes written back to disk.
Trying to add the package now works like a charm:
/ # apk --update add curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ca-certificates (20161130-r0)
(2/4) Installing libssh2 (1.7.0-r0)
(3/4) Installing libcurl (7.60.0-r1)
(4/4) Installing curl (7.60.0-r1)
Executing busybox-1.24.2-r14.trigger
Executing ca-certificates
So the problem is definitely a faulty DNS nameserver... How can this be fixed?
If I start the container with the --dns option ...
docker run --dns 8.8.8.8 -it alpine:3.4 sh
apk add runs without a problem. And if I check /etc/resolv.conf it says 8.8.8.8
Slight problem: --dns works with docker run but not with docker build.
You have to tell Docker itself to use a different DNS server.
Google's first advice is to modify /etc/default/docker like this:
DOCKER_OPTS="--dns=my-private-dns-server-ip --dns=8.8.8.8"
But Ubuntu 20.04 LTS uses systemd, and in this case these settings are ignored.
You have to create an override file for this systemd service using
sudo systemctl edit docker.service
with the following content
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --dns 8.8.8.8 -H fd:// --containerd=/run/containerd/containerd.sock
I first located the original docker.service file, copied the ExecStart line, and added the dns option.
The first empty ExecStart is needed to clear its original value before setting the new one... good to know (thanks - herbert.cc).
Everything worked.
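An alternative to the systemd override, not used in the post, is Docker's daemon-wide configuration file /etc/docker/daemon.json, which supports a "dns" key. A sketch, written to a temporary path for illustration (on a real system you would write /etc/docker/daemon.json and restart the service):

```shell
# Sketch: daemon-wide DNS via daemon.json instead of an ExecStart override.
# Written to a temp file here; the real path is /etc/docker/daemon.json.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "dns": ["192.168.0.1", "8.8.8.8"]
}
EOF
python3 -m json.tool "$cfg"   # sanity-check the JSON before restarting docker
# then: sudo systemctl restart docker
```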
So - who is 129.168.0.1? Well, it's a typo. It should read 192.168.0.1 - my cable modem.
I later found it in /etc/netplan/00-installer-config.yaml which sets up the machine's IP address, gateway, DNS resolver, etc.
I must have made this typo while installing the system onto the hard drive using the Ubuntu installer.
But why did the internet connection work at all? I could download files... the docker image for example.
My specific setup made the system use a fixed IP address (as servers usually need one) but it did NOT disable DHCP.
So eventually the DHCP server updated the faulty DNS resolver setting with the correct value and all worked fine.
It seems that Docker samples the DNS nameserver during boot-up, after the YAML file had set the wrong value and before the DHCP server could replace it with the correct one. It then hands this value to the docker build and docker run processes instead of the nameserver currently in use. As those values are usually identical, nobody notices.
I don't know if I would call this a bug but it is unexpected behaviour.
Now you know.
Useful links
- https://serverfault.com/questions/612075/how-to-configure-custom-dns-server-with-docker
- https://serverfault.com/questions/1020732/docker-settings-in-ubuntu-20-04
- https://docs.docker.com/config/daemon/systemd/
- https://www.herbert.cc/blog/systemctl-docker-settings/
Sunday, May 8, 2016
Accessing servers with self-signed certificates in Python
However, there are situations where this verification breaks things: self-signed certificates, e.g. the ones you use in your local network or, as in my case, a web cam which actually offers HTTPS. It uses a self-signed certificate - probably the same in all cameras of this type - but hey... beggars can't be choosers.
To access the cam in Firefox you would simply create a security exception; in Python, life is not that simple.
The following post shows:
- how to disable the verification
- how to pull the server certificate
- how to use it in Python3
- how to install it in the system
Please note: The following description works on Ubuntu 16.04 LTS. On your distro the directory paths may vary. Change IP addresses, hostnames, filenames, etc. to your needs.
I'm using a small script pulling images from the above mentioned web cam:
import urllib.request
...
hp = urllib.request.urlopen("https://192.168.0.100/pic.jpg")
pic = hp.read()
...
which now results in
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
The following "context" disables the certificate verification
import ssl
...
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
hp = urllib.request.urlopen("https://192.168.0.100/pic.jpg", context=ctx)
That works, but having a certificate verification would be nice. To do that we need the server certificate:
cert = ssl.get_server_certificate( ('192.168.0.100', 443) )
open('/tmp/ipcamera.crt','w').write(cert)
The cert looks like this
-----BEGIN CERTIFICATE-----
MIIClDCCAf2gAwIBAgIJAIMQZ+Ua/bkXMA0GCSqGSIb3DQEBBQUAMGIxCzAJBgNV
...
4XAVFCBZOPwflj9Ug0YNSIgcSfDOxha06C9hwZ0+ZuafkjXv16sGEA==
-----END CERTIFICATE-----
Now you can create a context which uses that cert:
ctx2 = ssl.create_default_context()
ctx2.load_verify_locations("/tmp/ipcamera.crt")
hp = urllib.request.urlopen("https://192.168.0.100", context=ctx2)
...
Which results in:
ssl.CertificateError: hostname '192.168.0.100' doesn't match 'IPC'
Well, that didn't work, but at least the error message has changed.
Let's have a look at the cert:
>>> ctx2.get_ca_certs()
[{'issuer': ((('countryName', 'ch'),), (('stateOrProvinceName', 'guangdong'),), (('localityName', 'zhenzhen'),), (('organizationName', 'IPCam'),), (('organizationalUnitName', 'IPCam'),), (('commonName', 'IPC'),)), 'notBefore': 'Mar 7 01:24:16 2013 GMT', 'subject': ((('countryName', 'ch'),), (('stateOrProvinceName', 'guangdong'),), (('localityName', 'zhenzhen'),), (('organizationName', 'IPCam'),), (('organizationalUnitName', 'IPCam'),), (('commonName', 'IPC'),)), 'notAfter': 'Feb 23 01:24:16 2063 GMT', 'serialNumber': '831067E51AFDB917', 'version': 3}]
As you can see the commonName for this cert is IPC and we're trying to access the server using the hostname 192.168.0.100. They don't match.
You can fix this in two ways. Either tell Python to ignore the hostname:
ctx3 = ssl.create_default_context()
ctx3.load_verify_locations("/tmp/ipcamera.crt")
ctx3.check_hostname = False
hp = urllib.request.urlopen("https://192.168.0.100", context=ctx3)
or put an entry into /etc/hosts (you need root privileges for that)
192.168.0.100 IPC
System wide integration
Using contexts is fine, but I have to change every piece of code: create the context and use it. It would be nice to have it "simply" work.
For this you need root access. Then you can put the cert into the system wide certificate store and Python will use it like any normal cert - including the one from the Hongkong Post Office :-)
First create the above mentioned entry in /etc/hosts to get the hostname check right.
Then create the directory /etc/ssl/mycerts and copy ipcamera.crt into it.
The system-wide certs are stored in /etc/ssl/certs. In order for your certificate to be found there, it must be renamed. Calculate its hash using openssl:
$ openssl x509 -noout -hash -in /etc/ssl/mycerts/ipcamera.crt
ab0cd04d
Now go to /etc/ssl/certs and create the appropriately named link (you must append .0 to the hash):
sudo ln -s ../mycerts/ipcamera.crt ab0cd04d.0
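The hash-and-link procedure can be tried end-to-end with a throwaway self-signed certificate (a sketch: the CN "IPC" and the temp paths are stand-ins, not the real camera cert):

```shell
# Create a throwaway self-signed cert, compute its subject hash and
# create the symlink a verifier looks up - all in a temp directory.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=IPC" \
    -keyout "$d/key.pem" -out "$d/ipcamera.crt" -days 1 2>/dev/null
h=$(openssl x509 -noout -hash -in "$d/ipcamera.crt")
ln -s "$d/ipcamera.crt" "$d/$h.0"
ls -l "$d/$h.0"   # the hash-named link, as in /etc/ssl/certs
```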
Now it simply works:
w = urllib.request.urlopen("https://IPC")
If there are easier ways to do it, please let me know.
Links:
- https://docs.python.org/2/library/ssl.html
- http://gagravarr.org/writing/openssl-certs/others.shtml#ca-openssl
Sunday, November 30, 2014
Pitfalls installing a GTX 970 on an Ubuntu 14.04 LTS system
This driver is available via the ppa repository xorg/edgers.
However, skip that part if you want to use its GPU in Blender.
For this you need CUDA 6.5, the new shiny package from NVIDIA - only 1 GB!
You can get it here. But be aware that there is also a CUDA 6.5 package without GTX 9xx support. So make sure that it says "with Support for GeForce GTX9xx GPUs" in the title.
Grab the DEB file and install it using:
sudo dpkg -i cuda-repo-ubuntu1404-6-5-prod_6.5-19_amd64.deb
This will copy a handful of other DEB files to /var/cuda-repo-6-5-prod.
Import them into the package system with
sudo apt-get update
and install them in one go with
sudo apt-get install cuda
It contains the nvidia driver (343.19), the CUDA files and various other stuff.
After a reboot, check the running version of the NVIDIA driver using the nvidia-settings utility. If the version is not 343.19, the NVIDIA driver hasn't been replaced (most likely because it was still in use). In this case you have to bring the system into a terminal-only mode.
The usual procedure is:
- log out
- switch to a terminal (Ctrl-Alt-F1)
- shut down the graphical login: sudo service lightdm stop (the display manager depends on the Ubuntu flavour; lightdm is the vanilla one)
- and proceed from there.
Disclaimer: Replacing a video driver is no fun if it fails and you end up without any GUI. Don't blame me.
The install will also take care of the initramfs (needed at boot time).
In order to use the GTX 9xx in Blender, you currently have to use a development build from https://builder.blender.org/download/, as v2.72 will fail, reporting an Unknown or unsupported CUDA architecture in the terminal.
All versions as of Nov. 28, 2014.
Thursday, May 17, 2012
AVCHD and avidemux
Many current camcorders store video according to the AVCHD specification. This is an MPEG-2 transport stream with video encoded in H.264 and audio in Dolby AC-3 format.
avidemux, usually my Swiss army knife for video conversion, could not handle the .MTS files produced by the camcorder - at least not the versions currently available in the Ubuntu repositories (avidemux 2.5.x).
After visiting the avidemux homepage I was pleased to find out that version 2.6 can handle that format.
This post describes how to compile avidemux 2.6. It mostly reflects the process laid out in the avidemux wiki, with some additional information to avoid some pitfalls.
I tested it on vanilla installs of Ubuntu Natty and Precise and the compilation works like a charm. Please keep in mind that you compile from nightly builds and not all functions are implemented yet (May 2012, git revision 7949).
Requirements:
First we need git to pull the source code:
For the core application
For the GUI (QT4)
For the common plugins
For the PulseAudio plugin
Download the source
Compile it
This will produce four .deb files in the ./debs folder.
Install it
Run it
Configure it
Sometimes you have to select the correct audio device in Edit - Preferences - Audio - AudioDevice:
Links:
avidemux homepage + wiki
Wednesday, December 15, 2010
itrackutil 0.1.3 available
This version fixes the problem.
Developer information:
A previously unrelated kernel module (cdc_acm) suddenly started to grab the USB device. cdc_acm, which usually supports USB modems, now creates the device /dev/ttyACM0 as soon as the GPS mouse is plugged in.
If you read /dev/ttyACM0 you get the real-time GPS data in NMEA format. This is an unusual use case; the normal data connection is over Bluetooth.
However, the creation of this device file blocks all other USB access to the GPS mouse - in this case the direct USB communication which itrackutil.py uses to access the data stored in the unit's memory.
Fortunately there is a USB API call for this situation: detachKernelDriver.
This function does what it says it does. You have to call it after opening the device. Root privileges are not necessary.
The call will fail if there is no kernel driver to detach. You have to catch this exception:
... determine device, interface, and configuration ...
try:
    handle = device.open()
    handle.detachKernelDriver(interface)
except usb.USBError, err:
    if str(err).find('could not detach kernel driver from interface') >= 0:
        pass
    else:
        raise usb.USBError, err  # any other USB error
handle.setConfiguration(configuration)
The companion API call attachKernelDriver is not available via PyUSB 0.1. But this is only a minor problem: as soon as you unplug the unit and reconnect it, a new USB device file is created (with the kernel driver attached).
Saturday, November 13, 2010
WinTV-HVR-1900 under Ubuntu 10.04 and 10.10
A few weeks ago I bought a Hauppauge WinTV-HVR-1900 (USB id 2040:7300) which I wanted to use with a (32 bit) Ubuntu 10.04 and 10.10 system.
Quick installation summary:
- Does it work out of the box: No
- Does it work at all: Yes
- If you are able to update to Ubuntu 10.10, do it.
The HVR-1900 is a small box connected via a USB 2.0 port (USB 1.1 is not supported for bandwidth reasons). There are inputs for composite video (e.g. VCR or set-top box), an antenna input and an S-VHS input. The device comes with a UK power supply (with a UK mains plug) and a bulky mains plug adapter for continental Europe.
In order to get the digitizer running, this device needs firmware. Ubuntu comes with a selection of firmware images for various Hauppauge systems, but none was suitable for this device.
The firmware is included on the Windows driver disk you get with the device, but which files do you need? Fortunately, there is a Perl script available that scans the CD and extracts the files you need, based on a bunch of MD5 sums stored in that script. Perl is installed by default on an Ubuntu system, so slide in the CD, open a terminal and enter
perl fwextract.pl /media/XXXXXXXXXX
(where XXXXX is the CD title)
In my case the following files were found:
- v4l-cx2341x-enc.fw
- v4l-cx25840.fw
- v4l-pvrusb2-29xxx-01.fw
- v4l-pvrusb2-73xxx-01.fw
For the next steps you need root privileges:
- Change the ownership of those files to root (for security reasons):
sudo chown root:root *fw
- Copy the extracted files to /lib/firmware:
sudo cp *fw /lib/firmware
Keep an eye on the system log when you now plug the digitizer into the USB port.
tail -f /var/log/syslog
Among other messages, it should confirm that the firmware was uploaded successfully.
...
cx25840 5-0044: firmware: requesting v4l-cx25840.fw
cx25840 5-0044: loaded v4l-cx25840.fw firmware (16382 bytes)
...
Utilities for controlling the device from the command line can be found in the package ivtv-utils (from the Ubuntu repo).
v4l2-ctl --set-input 1
switches to the composite video input
v4l2-ctl --set-ctrl=brightness=128,hue=0,contrast=68,volume=54000
sets basic parameters for the digitizer.
Call v4l2-ctl without parameters for more help, and you will definitely want to try the switch -L.
Recording is as simple as:
cp /dev/video0 video.mp2
This works most of the time. Approx. 5% of the recordings contain a distorted audio stream. This distortion is present for the whole length of the recording and usually ends the next time the device /dev/video0 is opened.
If the audio is OK at the beginning, it stays that way. This looks like an initialization problem when the device is opened. I haven't found a fix yet.
Now to the more difficult part:
Getting the device to work under Ubuntu 10.04.
As mentioned before, many Hauppauge devices need firmware, which is uploaded to the unit when you plug it into the USB port. Older hardware only needed 8192 bytes of firmware. The firmware for this device, however, is 16382 bytes long (see the firmware upload message from the log above). The device driver controlling the HVR-1900 (pvrusb2) that comes with kernel 2.6.32 and earlier is only capable of transferring 8192 bytes. And Ubuntu 10.04 LTS uses... 2.6.32.
Newer versions of the pvrusb2 driver can also upload the larger firmware. For older kernels (like the one used in Ubuntu 10.04), you have to compile the updated driver yourself.
Compiling a kernel is usually a simple task, because the kernel source code already contains all dependencies. But this time, there were complications.
You need:
- the kernel source
- the tools to compile the kernel
- and the source of the updated driver
You have to unpack it - make sure that you have plenty of disk space available:
cd /usr/src
tar xvfj linux-source-2.6.32.tar.bz2
Then run the following commands
cd linux-source-2.6.32
make oldconfig
make prepare
This will "regenerate" the .config file used by your distribution's kernel. This file is needed during the kernel compilation.
Now we have to download the source code of the current pvrusb2 driver, which can be found here. Unpack it and copy the content of the directory driver to /usr/src/linux-source-2.6.32/drivers/media/video/pvrusb2/, overwriting the current files with the same name.
(Please note: the pvrusb2 documentation describes a different approach, which did not work for me - modprobe rejected the module.)
The next step would be:
make modules
But due to a totally unrelated bug the compilation will fail while trying to compile the "omnibook" files.
Download the patch for this bug from here and apply it:
cd /usr/src
patch -p0 < _name_of_the_patch_file_
Now it's time to compile the modules:
cd /usr/src/linux-source-2.6.32
make modules
This step is very time-consuming. If you have a multi-core processor, use the -j# option (where # is the number of cores you have).
Copy the new module from
/usr/src/linux-source-2.6.32/drivers/media/video/pvrusb2/pvrusb2.ko
to
/lib/modules/`uname -r`/kernel/drivers/media/video/pvrusb2/pvrusb2.ko
(where `uname -r` (backticks!) will be replaced by the name of your current kernel)
Keep in mind that you have to repeat that process after each kernel update.
After the next reboot the new module should be active. If you can't wait, unload the old one and load the new module manually:
rmmod pvrusb2
modprobe pvrusb2
Again, check /var/log/syslog for any problems.
Links:
- http://www.isely.net/pvrusb2
- http://www.isely.net/pipermail/pvrusb2/
- https://help.ubuntu.com/community/Kernel/Compile
- http://www.isely.net/downloads/fwextract.pl
Saturday, June 26, 2010
Integrating a re-branded Huawei UMTS modem into Ubuntu 10.04
It is also commonly used by European phone companies in their mobile internet starter packs. Unfortunately these sticks are usually SIM-locked to the phone company, and some identify themselves with a USB ID unknown to the current Ubuntu setup.
When you plug the stick into the computer, all you see is a small flash-drive icon containing the Windows (and sometimes Macintosh) drivers. On those systems, the driver switches the stick from mass storage mode into modem mode after this initial phase.
The Linux utility for this task is called modem-modeswitch and can be found in /lib/udev. The actual task is to automate the execution of this utility when the UMTS stick is plugged in.
The UMTS stick in this example is/was sold by the Belgian phone company Mobistar. When you query the USB IDs:
sudo lsusb
it is identified as
12d1:1446 Huawei Technologies Co., Ltd.
The two numbers at the beginning of the line are the vendor ID and product ID of the stick in the current mode.
To check if the modem switch works as expected, call the utility manually:
sudo /lib/udev/modem-modeswitch -v 0x12d1 -p 0x1446 -t option-zerocd
Wait a few seconds and check the USB IDs again. The entry should now read:
12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem
If nothing happens, David Santinoli suggests in a related document to unmount the device containing the drivers before attempting the switch.
You can automate the process by creating the file 62-huawai.rules in /lib/udev/rules.d:
ATTRS{idVendor}=="12d1", ATTRS{idProduct}=="1446", RUN+="modem-modeswitch -v 0x%s{idVendor} -p 0x%s{idProduct} -t option-zerocd"
(The file name is arbitrary, as long as it starts with “62-” and ends with “.rules”.) The rule itself is based on similar entries in 61-option-modem-modeswitch.rules.
Next time you boot your computer, the rule becomes active, and the modem should be recognized automatically.
Monday, June 14, 2010
Firefox - right mouse button failure
The actual cause of the problem remains unclear. With the following trick, however, Firefox can be reset to its normal state:
Quit Firefox completely and start it from the command line with
firefox -safe-mode
The following screen then appears:
In safe mode Firefox starts - much like Windows' safe mode - without extensions.
As you can see, you can reset almost everything that you have painstakingly put together over the years.
"Reset toolbars and controls" appears to be the option with the smallest "deletion potential", and in my case it was sufficient. After "Make Changes and Restart", Firefox responded to the right mouse button again.
Saturday, April 10, 2010
A day without e-mail
After a security update, all SSL-secured connections to POP and SMTP servers failed... and for me, that's all of them. The error message (translated from the German original):
The security component of the application could not be initialized. The most likely cause are problems with files in your application's profile folder. Please make sure that this folder has no read or write restrictions and that your hard disk is not full or almost full. It is recommended that you quit the application now and fix the problem. If you continue to use this session, you might see incorrect application behaviour when accessing security functions.
I hate it when this happens.
The countermeasures that a first Google search for this error message turned up (reinstalling Thunderbird, deleting cert8.db, creating a new profile) had no effect.
The security update had, among other things, updated the libnss library, which handles encryption.
A post in the Ubuntu forum confirmed this suspicion. In my case it was sufficient to uninstall the outdated library libnss3-0d in Synaptic.
Saturday, March 13, 2010
T60: the keyboard, the chicken and the egg
Well, a used machine - a Lenovo T60 - bought from my trusted used-laptop dealer, from whom I have bought quite a few machines of the second-latest generation over the years.
As with earlier laptops, this one too came with a foreign keyboard. The laptop came with stickers to put over the keys. It had been the same with earlier purchases, and those stickers still hold.
Meanwhile, Ubuntu Karmic with GNOME is running on the machine. The various function keys are supported (louder, quieter, brighter, darker), and even Blender is reasonably fast thanks to the power of the two cores.
I was somewhat puzzled yesterday when I tried to page through some longer output on the command line with | less. The vertical bar was nowhere to be found.
Could the stickers be incomplete after all? After trying every key with AltGr held down, one thing was certain: no vertical bar. Fine, then a redirection into a file instead - but the greater-than sign was nowhere to be found either.
So is a key missing?
The answer is: yes. After a bit of googling I found a picture of the laptop with its keyboard flipped open on Notebookcheck. The keyboard shown there (top right) has, in contrast to mine, a narrower left Shift key. The gap this frees up normally holds: < > |
Presumably the language this keyboard was actually designed for has fewer special characters and needs fewer keys, and < > sit somewhere under the Ä, Ö or Ü keys.
So what to do? Without "greater than" and "less than", programming is rather awkward. And you only notice how often you use the pipe on the command line when it's gone...
Redefining the keyboard with Xmodmap
There is a detailed entry on this in the Ubuntu wiki.
First step: dump the current mapping.
To do this, a command has to be entered in the home directory
- which, with the ">" missing, is a real chicken-and-egg problem. (In the end I wrote a one-line script on another computer (with a full keyboard) and copied it to the laptop.)
If you search .Xmodmap for "greater", you do find an entry:
But if you try - after starting the program xev - to find the key with keycode 94, you won't find it. I finally settled on the "page" keys (left and right above "cursor left" and "cursor right", keycodes 166 and 167) and changed their mapping as follows:
After the call, the missing characters are now accessible.
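The commands and mapping lines lost from this post might have looked roughly like this - a hypothetical reconstruction, not the author's exact lines: the dump is usually taken with xmodmap -pke > ~/.Xmodmap, the changed mapping is activated with xmodmap ~/.Xmodmap, and the remapped entries for the two keys could read:

```
! hypothetical ~/.Xmodmap lines (the keysym choices are an assumption)
keycode 166 = less greater
keycode 167 = bar brokenbar
```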
Special treatment of ~/.Xmodmap
After a one-time confirmation at the next login, the changed keyboard mapping becomes permanently active under X.
Saturday, February 20, 2010
Blender 2.5 alpha 1 available
As of today, Blender 2.5 alpha 1 is available.
This official release also runs on Ubuntu Hardy - the (still) current LTS version of Ubuntu.
For the more recent Blender builds on GraphicAll.org, that has unfortunately no longer been the case since the transition from alpha 0 to alpha 1.
Tuesday, December 1, 2009
Bisecting mesa - bug 446632

Bug 446632 is responsible for the segfaulting blender on start-up on machines with ATI graphics cards running Ubuntu Karmic.
Analysis showed that the segfault originated in the mesa library. The mesa code contains the OpenGL implementation for Linux and is used by Blender and various other programs.
During the pre-release process of Karmic, various builds of the mesa package have been made and are still available online. Tests showed that the last build without the bug was 7.6.0~git20090817.7c422387-0ubuntu8. The next one was 7.6.0-1ubuntu1.
To determine which patch between those two releases is responsible for the bug, a process called git bisecting is used. You give git the id of a version with the bug and the id of one without. Git checks out a version in the middle; you compile and test it, then tell git whether this version was good or bad. Git then picks another version halfway between the last good and the first bad one. This is repeated until the first bad commit is found.
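The bisect loop just described can be tried in miniature on a throwaway repository (a hypothetical demo, not the mesa run itself; here the "bug" is planted in the fourth of five commits):

```shell
# Toy git-bisect run: five commits, the "bug" appears in commit c4.
set -e
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.email dev@example.com; git config user.name dev
for i in 1 2 3 4 5; do echo "$i" > state; git add state; git commit -qm "c$i"; done
git bisect start HEAD "$(git rev-list HEAD | tail -1)"  # bad = c5, good = c1
git bisect good   # git had checked out the midpoint c3; it "works"
git bisect bad    # next candidate c4 is "broken" -> git names the first bad commit
first_bad=$(git show -s --format=%s refs/bisect/bad)
echo "first bad commit: $first_bad"   # -> first bad commit: c4
git bisect reset
```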
Sounds simple enough.... but it raises the following questions:
- What Git repository does Ubuntu use for the mesa package?
- Which commits correspond to the above mentioned builds?
- And once you have the source, how do you compile, package and use it?
7c422387 is the commit id within the freedesktop repository (well, not quite: the full commit id is 7c4223876b4f8a78335687c7fcd7448b5a83ad10, but the first few digits are usually sufficient to identify it).
The last commit of the 7.6.0 branch in this repository has the label mesa_7_6
The way to compile the source is described later in this post. As you will see, packaging is not necessary. The compiled drivers can be used directly.
"Git"ting started
You need git-core - and also gitk (not strictly necessary, but it provides a nice graphical representation of the history).
sudo apt-get install git-core gitk
Choose a directory and download the entire repository (in this tutorial I use my home directory).
cd ~
git clone git://anongit.freedesktop.org/mesa/mesa
This will create the subdirectory mesa, which contains a subdirectory .git with the content of the cloned repository.
Be patient. After counting the elements to be transferred it takes some time before the actual download begins. All in all around 100 MB.
The code that we are going to compile needs some additional source files:
sudo apt-get build-dep mesa
sudo apt-get install libx11-dev libxt-dev libxmu-dev libxi-dev
Further preparations
make clean
make clean removes the "debris" from previous compilations. But we haven't created any yet... Do it anyway - it's good practice :-)
./autogen.sh
./configure --prefix=/usr --mandir=\${prefix}/share/man \
--infodir=\${prefix}/share/info --sysconfdir=/etc \
--localstatedir=/var --build=i486-linux-gnu --disable-gallium --with-driver=dri \
--with-dri-drivers="r200 r300 radeon" --with-demos=xdemos --libdir=/usr/lib/glx \
--with-dri-driverdir=/usr/lib/dri --enable-glx-tls --enable-driglx-direct --disable-egl \
--disable-glu --disable-glut --disable-glw CFLAGS="-Wall -g -O2"
./autogen.sh verifies that all prerequisites are met. If anything is missing, it will complain.
./configure sets up what is compiled, where and how.
For the test builds, remove -O2 from CFLAGS. This disables compiler optimisations. The resulting code is a bit larger and a little slower, but much easier to work with during debugging.
--with-dri-drivers="..." determines which drivers are compiled. As the original bug only affects ATI machines, we only need the drivers we use. That saves a lot of compile time. If yours is not among them, check out ~/mesa/src/mesa/drivers/dri/ and add it.
You can find out which driver you are using with:
xdriinfo driver 0
Verify the good build
We know that build 7c4223876b4f8a78335687c7fcd7448b5a83ad10 still works with Blender. So let's check it out, compile it and test it. If Blender does not crash, we know that the process so far is working correctly.
git checkout 7c422387
make
We could enter the entire ID, but the first few digits are usually sufficient.
make should finish without errors. Now we start Blender using:
LD_LIBRARY_PATH="$HOME/mesa/glx" LIBGL_DRIVERS_PATH="$HOME/mesa/src/mesa/drivers/dri/radeon" blender
LD_LIBRARY_PATH and LIBGL_DRIVERS_PATH make Blender (and only Blender, or whatever other program you specify) use the just-compiled libraries. No need to reboot or to restart X, and no effect on any other program.
Please note that you may need to replace the radeon part of the driver path with r200 or r300, depending on the driver you use.
Blender should run correctly.
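That the override really is per-process is easy to check: the VAR=value prefix only ends up in the environment of that one command (the path below is just the illustrative value from above).

```shell
# The VAR=value prefix applies to the child process only:
child=$(LD_LIBRARY_PATH="$HOME/mesa/glx" sh -c 'echo "$LD_LIBRARY_PATH"')
echo "child saw: $child"
# The surrounding shell keeps its own (usually empty) value:
echo "shell has: '$LD_LIBRARY_PATH'"
```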
Bisecting
We now officially start the bisecting process:
git bisect start
... and tell git that this was a "good" build.
git bisect good
Checking out the bad build
As we cannot pinpoint which git commit corresponds to the first bad Ubuntu build (7.6.0-1ubuntu1), we simply start at the newest commit of the mesa_7_6 branch:
git checkout mesa_7_6
This replaces the files in the mesa directory and its subdirectories (except .git) with the new ones.
We compile it:
make
and test it:
LD_LIBRARY_PATH="$HOME/mesa/glx" LIBGL_DRIVERS_PATH="$HOME/mesa/src/mesa/drivers/dri/radeon" blender
This time Blender should crash. We notify git:
git bisect bad
With this command, git chooses a commit approx. in the middle:
Bisecting: 482 revisions left to test after this (roughly 9 steps)
[ee066eaf6d0dd3c771dc3e37390f3665e747af2a] llvmpipe: Allow to dump the disassembly byte code.
The make, test, bisect process is repeated until git displays the first bad commit.
bfbad4fbb7420d3b5e8761c08d197574bfcd44b2 is first bad commit
commit bfbad4fbb7420d3b5e8761c08d197574bfcd44b2
Author: Pauli Nieminen
Date: Fri Aug 28 04:58:50 2009 +0300
r100/r200: Share PolygonStripple code.
:040000 040000 1b1f09ef26e217307a5768bb9806072dc50f2a14 eb20bf89c37b2f59ce2c243b361587918d3c9021 M src
As an interesting side note, the driver from this commit does crash Blender, but not with a segfault. There is even an output on the console: "drmRadeonCmdBuffer: -22".
The next commit in this branch, 4322181e6a07ecb8891c2d1ada74fd48c996a8fc, makes Blender crash in the way we have come to know.
The previous commit (e541845959761e9f47d14ade6b58a32db04ef7e4) would be a good candidate to keep Blender running until mesa is fixed:
git checkout e541845959761e9f47d14ade6b58a32db04ef7e4
make
LD_LIBRARY_PATH="$HOME/mesa/glx" LIBGL_DRIVERS_PATH="$HOME/mesa/src/mesa/drivers/dri/radeon" blender
Acknowledgements
Tormod Volden for creating and updating https:/
References
https://bugs.launchpad.net/ubuntu/+source/mesa/+bug/446632
http://bugs.freedesktop.org/show_bug.cgi?id=25354
https://wiki.ubuntu.com/X/Bisecting
https://wiki.ubuntu.com/X/BisectingMesa
https://launchpad.net/~xorg-edgers/+archive/ppa
http://www.kernel.org/pub/software/scm/git/docs/user-manual.html
http://cgit.freedesktop.org/mesa/mesa
Monday, 30 November 2009
Update on the "Blender" bug
Friday, 27 November 2009
Blender crash in Karmic

That the older Blender version (2.49a) from the Ubuntu repositories does not work either is rather more annoying, especially since this will probably not change for the next six months.
Blender crashes with a segfault right after launch.
Once again, only the ATI cards seem to be affected.
Bug report in Ubuntu's Launchpad
Sunday, 26 July 2009
Webdav bug in Ubuntu Jaunty
On the one hand the classic way via the web interface with Javascript, or - much simpler - via Webdav.
With Webdav the web space can be mounted as if it were a normal folder, which can then be accessed with the usual functions of the operating system. In the Windows world this feature is known as "web folders".
The following information is needed for mounting:
- Address: mediacenter.gmx.net
- Username: the GMX customer number
- Password: the GMX e-mail password
Under Gnome there are several ways to connect to a Webdav server. The most commonly used is probably "Connect to Server" in the "Places" menu.
This seems to be a bug in the current Gnome version; under Ubuntu Hardy and Intrepid this menu item works correctly.
The work-around
Simply leave the username blank in the first window.

In this case the second dialog asks for both username and password.

After entering them correctly, the desired Nautilus window opens.

The second way of addressing Webdav servers behaves similarly:
In the address bar of a Nautilus window you enter
davs://mediacenter.gmx.net
instead of a URI with username:
davs://username@mediacenter.gmx.net

The results and error messages are the same.
And why?
If you watch the communication between Gnome and GMX with a packet sniffer, the exchange without a username given proceeds as follows:
- Gnome sends an OPTIONS request to GMX, without username and password.
- GMX answers with error 401 (Unauthorized) and a so-called realm.
- Gnome asks the user for username and password and
- sends another OPTIONS request, this time with username + password.
- GMX sends an OK (200) - and the normal data exchange follows.
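This handshake can be reproduced locally. The sketch below is a hypothetical stand-in: a small Python HTTP server on 127.0.0.1:8801 plays the role of mediacenter.gmx.net, and user/secret are made-up credentials (curl and python3 assumed to be installed).

```shell
# Stand-in WebDAV endpoint: answers OPTIONS with 401 until Basic-Auth
# credentials arrive, then with 200 - mirroring steps 1-5 above.
python3 - <<'EOF' &
from http.server import BaseHTTPRequestHandler, HTTPServer
import base64

class Handler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        expected = 'Basic ' + base64.b64encode(b'user:secret').decode()
        if self.headers.get('Authorization') == expected:
            self.send_response(200)   # step 5: authenticated
        else:
            self.send_response(401)   # step 2: reject, announce a realm
            self.send_header('WWW-Authenticate', 'Basic realm="mediacenter"')
        self.send_header('Content-Length', '0')
        self.end_headers()

    def log_message(self, *args):     # keep the demo output clean
        pass

HTTPServer(('127.0.0.1', 8801), Handler).serve_forever()
EOF
server=$!
sleep 1
# Steps 1+2: the anonymous OPTIONS request is answered with 401 ...
first=$(curl -s -o /dev/null -w '%{http_code}' -X OPTIONS http://127.0.0.1:8801/)
# Steps 4+5: ... the retry with credentials gets 200.
second=$(curl -s -o /dev/null -w '%{http_code}' -X OPTIONS -u user:secret http://127.0.0.1:8801/)
echo "$first $second"
kill $server
```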
If the username is entered up front, step 3 asks only for the password.
Step 4 is missing, and therefore there is no step 5.
Just an error message from Gnome that it didn't work.
The bug has been reported in the meantime. Until it is fixed, simply don't enter the username up front.
Sunday, 12 July 2009
Firefox 3.5 on Ubuntu (AMD64)
And since, as recently reported, no update can be expected for the current Ubuntu version (Jaunty)... off to mozilla.org, downloaded Firefox, it runs...
... if only there weren't the Flash integration. On AMD64 systems that is not so easy to get without tricks.
Adobe's Flash plugin only exists in 32 bit and does not simply run in 64-bit builds of Firefox. For that there is npwrapper, which lets 32-bit plugins run in the 64-bit Firefox as well.
And even though the Firefox from mozilla.org was a 32-bit build, the Flash installer downloaded from Adobe detected a 64-bit environment and aborted.
But there is an easier way!
Christoph Langner sums up the options in an FAQ on his blog. In the case of Jaunty it is particularly easy - there is already a package named "firefox-3.5". It also takes over the existing settings (including the plugins).
By the way, a detailed list of the installed plugins is available by entering about:plugins in the address bar.

Tuesday, 16 June 2009
The downside of the frequent Ubuntu releases

While in the release before last (Intrepid) it was the Bluetooth dongle that stopped working - due to changes in the Bluetooth stack - the two bugs in Jaunty really do hurt:
In OpenOffice, tables suddenly disappear from Word documents, and after the update Inkscape refuses to start because of a mix-up in the libraries to be loaded.
For me these two errors only occur in the probably less widespread 64-bit version, and are said to be partly fixed already.
And even if a visit to Launchpad, Ubuntu's bug-tracker portal, gives you the warm feeling of not being alone with the problem, the average Ubuntuan (or are they called Ubuntuniks...), that is, the average Ubuntu user, will probably not get these updates before Karmic in October at the earliest.
Monday, 8 June 2009
Firefox - online even without the internet
When there is no network connection, Firefox switches to offline mode. An Apache server running on the laptop itself cannot be accessed this way.
Of course you can switch this mode off again via "File - Work Offline". In the long run this extra step gets annoying; so here is the option that avoids it:
Entering about:config in the address bar makes Firefox show its numerous configuration options - the ones needed so rarely that a dedicated settings panel is not worth it. There, search for
toolkit.networkmanager.disable
and set the option to True.
You can make the search easier by entering e.g. toolkit as a filter expression.
This tells Firefox not to ask NetworkManager for the network status, so it always stays in online mode. The wiki configured as the start page is now displayed directly.
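For anyone who prefers configuration files over clicking through about:config: the same preference can also be set persistently via a user.js file in the Firefox profile directory (a standard Mozilla mechanism, not specific to this bug; the profile path varies per installation, so the path in the comment is only an example).

```
// ~/.mozilla/firefox/<profile>/user.js - applied at every Firefox start
user_pref("toolkit.networkmanager.disable", true);
```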