Recovering Windows 7 Backup on Windows 8.1

My wife’s laptop recently died and was replaced. Unlike last time, there were regular backups with the last performed only a day or two before the failure.

So all is fine… Right?

Turns out not so much. The new laptop runs Windows 8.1. I plugged in the backup drive and opened the settings to look for the Backup and Restore link… and nothing. File History sounded promising, but different.

It turns out Windows 8 introduced a Time Machine-like backup tool, so the old Windows 7-style backup was phased out. An option was provided in Windows 8 to restore “old” Windows 7 backups, but that was removed in Windows 8.1. That means people had a one-year window to update from Windows 7 to Windows 8 and restore their files before Windows 8.1 was released. Not helpful!

The only method I could find that worked to restore this backup was:

  1. “Obtain” a copy of Windows 7 and make a VM
  2. Restore the files, selecting to restore the files to a different location (onto your backup USB drive if there is room)
  3. Copy the restored files across to your Windows 8.1 install (there will be a couple of files you can not overwrite – ignoring them seemed to work)

It is almost as if I should have never used the Windows 7 backup feature at all and just copied the files onto the backup drive… Now I have to wonder how long the File History tool in Windows 8 will be supported, or whether I need to find another backup solution.

Routing Traffic With OpenVPN

Don’t you love messages like “This video is not available in your country”? That generally means it is not available outside the USA (unless you are in Germany, in which case it means it is available anywhere but your country). They have finally annoyed me enough to do something about it, and my web server is based in the USA, so it should be easy… Right?

It turns out that while it should be easy, it took me far longer than expected, mainly because most tutorials assume you know things. Also, I found the Arch wiki not to be great in this area (and, as you may notice below, I am not the person to fix it). So here is yet another “guide” on how to set up OpenVPN on your server and get both Linux and Windows clients to access it (mainly posted as notes in case I need to set this up again in the future).

Step #1: OpenVPN

The first thing that needs to be done is the creation of a personal Certificate Authority and generating the needed keys for your server. Fortunately, OpenVPN comes with a set of scripts called easy-rsa. I am not going to cover this in detail as these scripts are well documented and do all the work for you.
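For completeness, the sequence with the easy-rsa 2.x scripts bundled with OpenVPN at the time looked roughly like this (the install path varies by distribution, so treat it as a sketch rather than something to paste verbatim):

```shell
cd /etc/easy-rsa            # or wherever your distro installs easy-rsa
source ./vars               # edit the KEY_* defaults in ./vars first
./clean-all                 # wipe ./keys and start fresh
./build-ca                  # creates keys/ca.crt and keys/ca.key
./build-key-server server   # creates keys/server.crt and keys/server.key
./build-dh                  # creates keys/dh1024.pem (size from KEY_SIZE)
```

The generated files are then copied into /etc/openvpn to match the ca/cert/key/dh lines in the server configuration below.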

I am also not going into all the configuration of OpenVPN. But here is an /etc/openvpn/server.conf file I found to work:

dev tun
proto udp
port 1194
 
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
 
user nobody
group nobody
 
server 10.8.0.0 255.255.255.0
 
persist-key
persist-tun
 
client-to-client
 
push "redirect-gateway def1"
push "dhcp-option DNS 8.8.8.8"
 
log-append /var/log/openvpn

Note that the configuration file can be called anything you want. OpenVPN is launched using “systemctl start openvpn@server.service”, where “server” in this case is used because my configuration file is “server.conf”.

The only bit of configuration I will directly mention is setting up users to be able to access the VPN using a username/password approach rather than generating individual keys for each client. This is purely because I am a lazy admin and everyone I want to use my VPN has an SFTP shell on my server already. Just add this to your server configuration file:

plugin /usr/lib/openvpn/plugins/openvpn-plugin-auth-pam.so login
client-cert-not-required
username-as-common-name

OpenVPN does give warnings about this configuration, so consider the security implications if you use it.

Step #2: Enable forwarding

By default, packet forwarding is disabled. You can see this by running:

$ cat /proc/sys/net/ipv4/ip_forward
0

We can enable this by adding these lines to /etc/sysctl.conf:

# Packet forwarding
net.ipv4.ip_forward = 1
net.inet.ip.fastforwarding = 1

Note that the file /etc/sysctl.conf is not going to be used from systemd-207 onwards, so this will need to be moved to the appropriate place. Also, the “fastforwarding” line is purely based on anecdotes I found on the internet and may not do anything at all! In fact, I think it is a BSD thing, so I have no idea why I am mentioning it. Moving on…
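For what it is worth, the setting can also be applied immediately without a reboot, and from systemd-207 onwards a drop-in file under /etc/sysctl.d/ is the place to persist it (the file name below is just an example):

```shell
# Apply now (as root); takes effect immediately
sysctl -w net.ipv4.ip_forward=1

# Persist across reboots: systemd-207+ reads /etc/sysctl.d/
# instead of /etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/30-ip-forward.conf
```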

Step #3: iptables

If you have iptables running, you will need to open up access to the VPN. You also have to forward the VPN client traffic through to the internet. This is the bit I found least documented anywhere. Also, I am not an iptables expert, so while this works, it might not be the best approach:

# OpenVPN
iptables -A INPUT -i eth0 -m state --state NEW -p udp --dport 1194 -j ACCEPT
 
# Allow TUN interface connections to OpenVPN server
iptables -A INPUT -i tun+ -j ACCEPT
 
# Allow TUN interface connections to be forwarded through other interfaces
iptables -A FORWARD -i tun+ -j ACCEPT
iptables -A FORWARD -i tun+ -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth0 -o tun+ -m state --state RELATED,ESTABLISHED -j ACCEPT
 
# NAT the VPN client traffic to the internet
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

If your default iptables OUTPUT value is not ACCEPT, you will also need a line like:

iptables -A OUTPUT -o tun+ -j ACCEPT

I have not tested that set-up, so you may need more.
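Either way, it helps to confirm the rules actually took effect and to persist them across reboots. Something like the following should work (the save path is what Arch's iptables service reads; other distributions differ):

```shell
# Inspect the filter and NAT tables with packet counters (as root)
iptables -L -v -n
iptables -t nat -L -v -n

# Persist the current rules; path is distro-specific
iptables-save > /etc/iptables/iptables.rules
```

Watching the counters on the tun+ and MASQUERADE rules increase while a client is connected is a quick sanity check that traffic is really being forwarded.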

Step #4: Clients

I went with NetworkManager on Linux and the OpenVPN client on Windows. Note that the two different Windows clients on that site are exactly the same, so there is no need to figure out the difference.

The Windows client requires a “.ovpn” configuration file. Here is an example:

client
dev tun
proto udp
remote foobar.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca foobar.crt
verb 3
auth-user-pass

The only thing to note here is that the “ca” line refers to the public certificate you generated in Step #1. Even with password authentication, you still need to provide that certificate. Also, it is important that the compression setting is the same in both the server and client configurations. Given I mostly want to use the VPN for video, I have not enabled compression.

And there you have it. Configuring that took me much too much time, but now I can pretend to be in the USA and watch all the content that is geolocked to there that I want. Which as it turns out, is not often. I have only watched one video in the month since I set this up. Good use of my time…

Poor Man’s Pastebin

I was going to add a pastebin to my site (for reasons I can not determine other than “because”), but that ended up being too much effort. Then I remembered that GNU Source Highlight can output HTML… So here is my pastebin client in less than ten lines of bash!

#!/bin/bash
 
file=$(mktemp ~/web/paste/XXXXXX)
 
[[ ! -z "$1" ]] && lang="-s $1"
 
cat - > ${file}.in
source-highlight -i ${file}.in -o ${file} ${lang}
rm ${file}.in
 
lftp -e "put -O /dir/to/paste ${file}; bye" \
     -u user,password sftp://example.com
 
echo "http://example.com/paste/${file##*/}"

Done. Just use “foo | pastebin” to send the output of foo to the web. Or for files, use “pastebin < file”. If auto-detection of the highlighting style is bad, just use “pastebin <lang>” with one of the supported languages. The seemingly wasteful cat is because GNU Source Highlight can not autodetect the style from a pipe if it is not provided.

Converting Video On The Command Line

I recently acquired an iPad for work purposes… so the most important thing to know is how to convert video to play on it. Use handbrake, done – short blog post.

But, that would be all too simple. Often I want to watch an entire season of a show that I have collected from various sources over the years and these often have widely varying sound levels. That is quite annoying if you set the season to play and then have to adjust the volume for every episode.

Here is a simple guide to convert your videos into a format suitable for the iPad with equalized volumes. I somewhat deliberately used a variety of tools for illustration purposes, but I think the tool I selected for each step tended to be the quickest. The following code snippets assume that your videos have the extension “.avi”. They also destroy the source files, so make a backup.

Step 1. Extract the audio track using mplayer:
for i in *.avi; do
  mplayer -dumpaudio "$i" -dumpfile "${i%avi}mp3"
done

Step 2. Make sure all audio is mp3 and convert using ffmpeg if not:
for i in *.mp3; do
  if ! file "$i" | grep -q "layer III"; then
    mv "$i" "$i.orig"
    ffmpeg -i "$i.orig" "$i"
    rm "$i.orig"
  fi
done

Step 3. Normalize the audio levels using mp3gain:
mp3gain -r *.mp3

Step 4. Stick the adjusted audio back into the video file using mencoder (part of mplayer):
for i in *.avi; do
  mv "$i" "$i.orig"
  mencoder -audiofile "${i%avi}mp3" "$i.orig" -o "$i" -ovc copy -oac copy
  rm "$i.orig" "${i%avi}mp3"
done

Step 5. Convert to iPad format using the command line version of handbrake:
for i in *.avi; do
  HandBrakeCLI -i "$i" -o "${i%avi}m4v" --preset="iPad"
  rm "$i"
done

This is probably not the most efficient way of doing this and will become less so once handbrake can normalize volume levels by itself (which its developers appear to be working on…). But when you have several seasons of a show, each with more than 50 episodes (only a few minutes each), you quickly become glad to be able to make a simple script to do the conversion automatically.
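For the record, the five steps above can be rolled into a single loop per file. This is a sketch under the same assumptions as above (mplayer, ffmpeg, mp3gain, mencoder and HandBrakeCLI installed; source files destroyed, so work on a copy):

```shell
#!/bin/bash
for i in *.avi; do
  audio=${i%avi}mp3
  mplayer -dumpaudio "$i" -dumpfile "$audio"          # step 1: extract audio
  if ! file "$audio" | grep -q "layer III"; then      # step 2: ensure mp3
    mv "$audio" "$audio.orig"
    ffmpeg -i "$audio.orig" "$audio"
    rm "$audio.orig"
  fi
  mp3gain -r "$audio"                                 # step 3: normalize
  mv "$i" "$i.orig"                                   # step 4: remux audio
  mencoder -audiofile "$audio" "$i.orig" -o "$i" -ovc copy -oac copy
  rm "$i.orig" "$audio"
  HandBrakeCLI -i "$i" -o "${i%avi}m4v" --preset="iPad"  # step 5: convert
  rm "$i"
done
```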

Getting My Touchpad Back To A Usable State

I was happy to note the following in the release announcement of xf86-input-synaptics-1.5.99.901:

“… the most notable features are the addition of multitouch support and support for ClickPads.”

As a MacBook Pro user, that sounded like just what I needed. No more patching the bcm5974 module to have some basic drag-and-drop support. I upgraded and everything seemed to work… for a while. I began noticing weird things occurring when I tried to right and middle click (a two- and three-finger click, respectively). Specifically, my fingers would sometimes be registered in a click and sometimes not.

The two finger “right” click was easy enough to figure out. If my fingers were too far apart, the two finger click was not registered. It turns out I am what I have decided to call a “finger-spreader”, as my fingers can be quite relaxed across the touchpad when I click. Fair enough, I thought… I just have to train myself to click with my fingers closer together. Then came the three finger click. All three fingers close together did not seem to register as anything. A bit of spreading and a three finger click got registered, but too much spreading and it was down to two fingers. Also, it was not actual distance between fingers that mattered as rotating my hand on the touchpad with my fingers the same distance apart could result in different types of clicks registered.

A bit of looking in the xf86-input-synaptics git shortlog led me to this commit as a likely candidate for my issues. The commit summary starting with “Guess” and a comment labelled “FIXME” were the give-aways… The first thing I noticed was that the calculation of the maximum distance between fingers to register as a multitouch click was done in terms of percentage of touchpad size. That means the acceptable region where your two fingers need to be for a two finger click is an ellipse, which at least explains why physical distance appeared not to matter.

Attempted fix #1 was to increase the maximum allowed separation between fingers from 30% to 50%. That worked brilliantly for two finger clicking, but made three finger clicking even worse, which led me to another interesting discovery… The number of fingers being used is calculated as the number of pairs of fingers within the maximum separation, plus one. For two fingers, fingers 1 and 2 form one pair, plus one is “two fingers”. However, for three fingers, there are three possible pairs: (1,2), (2,3) and (1,3). This explains the weirdness in three finger clicking; finger pairs (1,2) and (2,3) must be within the maximum allowed separation while finger pair (1,3) must be outside it. That explains why having fingers too close together did not register as a three finger click (as it was being reported as four fingers – three pairs plus one) and why things became worse when I increased the maximum allowed separation. I filed a bug report upstream and a patch to fix that quickly appeared.

After applying that patch, multifinger clicks all work fine provided your fingers are close enough together. I do not find the required finger closeness natural, so I got rid of the finger distance restrictions altogether using this patch. I am not entirely sure what I break by removing that, but it appears to be something I do not use, as I have not noticed any issues so far. As always, use at your own risk…

Edit: 2012-05-21 – Updated patch for xf86-input-synaptics-1.6.1

How Secure Is The Source Code?

With the addition of source code PGP signature checking to makepkg, I have begun noticing just how many projects release their source code without any form of verification available. Or even if some form of verification is provided, it is done in a way that absolutely fails (e.g. llvm, which was signed by a key that was not even on any keyservers, meaning it could not be verified). If code security fails at this point, actually signing packages and databases at a distribution end-point instills a bit of a false sense of security.

To assess how readily validated upstream source code is, I did a survey of what I would consider the “core” part of any Linux distribution. For me, that basically means the packages required to build a fairly minimal booting system. This is essentially the package list from Linux From Scratch with a few additions that I see as needed…

For each source tarball I asked the following questions: 1) Is a PGP signature available and is the key used for signing readily verified? 2) Are checksum(s) for the source available and if they are only found on the same server as the source tarball, are they PGP signed? The packages are rated as: green – good source verification; yellow – verification available but with concerns; red – no verification. Apologies to any colour-blind readers, but the text should make it clear which category each package is in…

Package Verification
autoconf-2.68 PGP signature, key ID in release announcement, key readily verifiable.
automake-1.11.3 PGP signature, key used to sign release announcement.
bash-4.2.020 PGP signature for release and all patches, link to externally hosted key from software website.
binutils-2.22 PGP signature, key used to sign release announcement (containing md5sums).
bison-2.5 PGP signature, key ID in release announcement, externally hosted keyring provided to verify key.
bzip2-1.0.6 MD5 checksum provided on same site as download.
cloog-0.17.0 MD5 and SHA1 checksums in release announcement posted on off-site list.
coreutils-8.15 PGP signature, key used to sign release announcement.
diffutils-3.2 PGP signature, key used to sign release announcement.
e2fsprogs-1.42.1 PGP signature, key readily verifiable.
fakeroot-1.18.2 MD5, SHA1 and SHA256 checksums provided in PGP signed file, key readily verifiable.
file-5.11 No verification available.
findutils-4.4.2 PGP signature, link to externally hosted key in release announcement.
flex-2.5.35 No verification available.
gawk-4.0.0 PGP signature, key difficult to verify.
gcc-4.6.3 MD5 and SHA1 checksums provided in release email. MD5 checksum provided on same site as download.
gdbm-1.10 PGP signature, key ID in release announcement (with MD5 and SHA1 checksums), key readily verifiable.
gettext-0.18.1.1 PGP signature, key readily verifiable.
glibc-2.15 No release tarball, download from git (PGP signature available when release tarball is made).
gmp-5.0.4 PGP signature, key ID and SHA1 and SHA256 checksums on same site as source, key difficult to verify otherwise.
grep-2.11 PGP signature, key used to sign release announcement.
groff-1.21 PGP signature, key difficult to verify.
grub-1.99 PGP signature, key used to sign release announcement.
gzip-1.4 PGP signature, key used to sign release announcement.
iana-etc-2.30 No verification available.
inetutils-1.9.1 PGP signature, key readily verifiable.
iproute-3.2.0 PGP signature, key readily verifiable.
isl-0.09 No verification available.
kbd-1.15.3 File size available in file in same folder as source.
kmod-0.05 PGP signature, key readily verifiable.
less-444 PGP signature, key posted on same site as download, key difficult to verify otherwise.
libarchive-3.0.3 No verification available.
libtool-2.4.2 PGP signature, key readily verifiable, MD5 and SHA1 checksums in release email.
linux-3.2.8 PGP signature, key readily verifiable.
m4-1.4.16 PGP signature, key used to sign release announcement.
make-3.82 PGP signature, key used to sign release announcement.
man-db-2.6.1 PGP signature, key used to sign release announcement.
man-pages-3.35 PGP signature, key readily verifiable.
mpc-0.9 (libmpc) PGP signature, key readily verifiable.
mpfr-3.1.0 PGP signature, key readily verifiable.
ncurses-5.9 PGP signature, key used to sign release announcement.
openssl-1.0.0g PGP signature, key readily verifiable.
pacman-4.0.2 PGP signature, key readily verifiable.
patch-2.6.1 PGP signature, key difficult to verify.
pcre-8.30 PGP signature, key readily verifiable.
perl-5.14.2 MD5, SHA1, SHA256 checksums provided on same site as download.
pkg-config-0.26 No verification available.
ppl-0.12 PGP signature, key readily verifiable.
procps-3.2.8 No verification available.
psmisc-22.16 No verification available.
readline-6.2.002 PGP signature for release and all patches, link to externally hosted key from software website.
sed-4.2.1 PGP signature, key difficult to verify.
shadow-4.1.5 PGP signature, key readily verifiable.
sudo-1.8.4p4 PGP signature, key difficult to verify.
sysvinit-2.88 PGP signature, key difficult to verify.
tar-1.26 PGP signature, key used to sign release announcement.
texinfo-4.13a PGP signature, key difficult to verify.
tzdata-2012b Many checksums provided in release announcement.
udev-181 PGP signature, key readily verifiable.
util-linux-2.21 PGP signature, key readily verifiable.
which-2.20 No verification available.
xz-5.0.3 PGP signature, key difficult to verify.
zlib-1.2.6 MD5 checksum provided on same site as download (although download mirrors available).

Note that some of these packages have additional methods of verification available (e.g. those that are PGP signed may also provide checksums and file sizes), but I stopped looking once I found suitable verification. When I label a key as “readily verifiable”, that means it is either signed by keys I trust, used to sign emails that I can find, or posted on the developer’s personal website (which must be different from where the source code is hosted). I personally found my preferred method of verification was packages whose release announcements were signed by the same key as the source.
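As a concrete illustration of what this verification looks like by hand, here is roughly the routine (file names and the key ID are hypothetical; makepkg automates the signature check when a .sig file is listed alongside the source):

```shell
# Fetch the key the signature claims to be from, then verify the tarball
gpg --recv-keys DEADBEEF                 # hypothetical key ID
gpg --verify foo-1.0.tar.gz.sig foo-1.0.tar.gz

# Checksum-only verification, for projects that publish one
sha1sum -c foo-1.0.tar.gz.sha1
```

Of course, the whole point of the survey above is that fetching and verifying the key is the hard part; the gpg commands only tell you the tarball matches some key, not that the key belongs to the developer.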

While you might look at that table, see a lot of green (and yellow), and think everything is in reasonable shape, it is important to note that the majority of these packages are GNU software, and all GNU software is signed. Also, 15% of the packages in that list have no source verification at all. From some limited checking, it appears the situation quickly becomes worse as you move further away from this core subset of packages needed for a fairly standard Linux system, but I have not collected actual numbers to back that up yet.

MBP Fan Daemon Update

For those using my simple MacBook Pro fan daemon, you probably want to check that it still works… At least on my system, the location of the core temperature measurements has changed from /sys/devices/platform/coretemp.{0,1}/temp1_input to /sys/devices/platform/coretemp.0/temp{2,3}_input. I think this occurred with the update to Linux 3.0 (but I am too lazy to confirm that is the actual update to blame…).
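A small sketch of how a script could cope with both layouts rather than hard-coding one; find_temp_inputs is a hypothetical helper, and the base directory is a parameter purely so it can be exercised away from the real /sys:

```shell
#!/bin/bash
# List whichever coretemp sensor files exist under the given base
# directory (defaults to the real sysfs location).
find_temp_inputs() {
  local base=${1:-/sys/devices/platform}
  local f found=()
  # Try the Linux 3.0+ layout first, then the older one
  for f in "$base"/coretemp.0/temp[23]_input \
           "$base"/coretemp.{0,1}/temp1_input; do
    [[ -r $f ]] && found+=("$f")
  done
  printf '%s\n' "${found[@]}"
}
```

The daemon can then read whatever paths this prints instead of breaking the next time the kernel shuffles sysfs around.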

If you also have this change, you can grab an updated version of the daemon here. As always, it is only tested on my machine (MBP 5.5 13″), so it may not work anywhere else without adjustment…

Syncing Files Across SFTP With LFTP

My webhost only provides SFTP access (which is not surprising given what I pay…). But this can become annoying for maintaining things like a package repository, where I would like to keep the remote files in sync with my local copy. My first thought was to go with a FUSE-based solution in combination with rsync. Looking into the current best options to mount the remote directory (probably sshfs), I was eventually led to LftpFS and on to its underlying software, LFTP.

LFTP is a sophisticated command-line file transfer program with its own shell-like command syntax. This allows syncing from my local repo copy to the remote server in a single command:

lftp -c "open -u <user>,<password> <host url>; mirror -c -e -R -L <path from> <path to>"

The -c flag tells LFTP to run the following commands (separated by a semicolon). I use two commands; the open command (it should be obvious what it does…) and a mirror command. The only real “trick” there is to add -L to the mirror command, which makes symlinks be uploaded as the files they point to. This is required as the FTP protocol does not support symlinks and repo-add generates some.

That was exactly what I needed and, being a single command, it makes a nice bash alias.
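Since a plain alias can not take arguments cleanly, a small bash function that builds the command string is a bit more flexible; this is just a sketch, with the user, host and paths as placeholders:

```shell
# Build the lftp command string for a given user, host and path pair.
lftp_mirror_cmd() {
  local user=$1 host=$2 from=$3 to=$4
  printf 'open -u %s %s; mirror -c -e -R -L %s %s' \
         "$user" "$host" "$from" "$to"
}

# Usage:
#   lftp -c "$(lftp_mirror_cmd me,secret sftp://example.com ~/repo /srv/repo)"
```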

Local WordPress Install On Arch Linux

After the WordPress update from 3.1.3 to 3.1.4 unexpectedly broke one of the plugins I use (My Link Order – why this was removed as a native feature in WordPress is beyond me…), I decided it was time to actually test updates locally before pushing them to my site. That would also allow me to locally test theme changes and new plugins rather than doing it all live and attempting to quickly revert any breakage I made. It is still not the world’s best testing set-up, as it does not use the same web server, PHP or MySQL version as my host, but I am fairly happy assuming the basics of WordPress will be compatible with what my host provides, and so only really need to test functionality that should not be affected by such differences.

Note I decided to go with Nginx as the web server as it seemed an easy way to go. I also did not use the WordPress package provided in the Arch Linux repos as it kind of defeats the whole purpose of testing the upgrade, requires slightly more set-up in nginx.conf and I think files in /srv/http should not be managed by the package manager (but that is another rant…).

So here is a super-quick ten-step guide to getting a local WordPress install up and running.

  • pacman -S nginx php-fpm mysql
  • Adjust /etc/nginx/conf/nginx.conf to enable PHP as described here
  • Enable the mysql.so and mysqli.so extensions in /etc/php/php.ini
  • sudo rc.d start mysqld php-fpm nginx
  • If this is your first MySQL install, run sudo mysql_secure_installation
  • Give yourself permission to write to /srv/http/nginx
  • Download and extract the WordPress tarball into /srv/http/nginx
  • Create the MySQL database and user as described here
  • Adjust the wp-config.php file as needed (see here)
  • Point your browser at http://127.0.0.1/wp-admin/install.php
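The database-creation bullet above boils down to something like this (wordpress, wpuser and secret are placeholder names, as in the linked guide):

```shell
mysql -u root -p <<'SQL'
CREATE DATABASE wordpress;
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
SQL
```

Whatever names you pick here are the ones that go into wp-config.php in the following bullet.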

And it is done! I have not attempted to set up the auto-update features in WordPress as that involves setting up either an FTP or an SSH server, and I have no need for either on my laptop.

As a bonus, I can now draft blog posts while offline and preview them with all their formatting. So you can all look forward to more rambling posts here from me…

The “python2” PEP

When Arch Linux switched its /usr/bin/python from python-2.x to python-3.x, it caused a little controversy… There were rumours that it had been decided upstream that /usr/bin/python would always point at a python-2.x install (although what version that should be was unclear). Although these rumours were abundant, and so more than likely such a discussion did occur (probably offline at PyCon 2009), the decision was never documented. Also, whether such a decision can formally be made off the main development list is debatable.

Enter PEP 394. Depending on how I am feeling, I call this the “justify Arch’s treatment of python” PEP or the “make Debian include a python2 symlink” PEP. Either way, the basic outcome is:

  • python2 will refer to some version of Python 2.x
  • python3 will refer to some version of Python 3.x
  • python should refer to the same target as python2 but may refer to python3 on some bleeding edge distributions

The PEP is still labeled as a draft, but all discussion is over as far as I can tell and I think it will probably be accepted without much further modification. The upshot is that using “#!/usr/bin/env python2” and “#!/usr/bin/env python3” in your code will become the gold standard (unless, of course, your code can run on both python-2.x and python-3.x). There is still no guarantee what version of python-2.x or python-3.x you will get, but it is better than nothing…

One recommendation made by the PEP is that all distribution packages use the python2/python3 convention. That means the packages containing python-3.x code in Arch should have their shebangs changed to point at python3 rather than python. Given our experience doing the same thing with python2, this should not be too hard to achieve and is something that we should do once the PEP is out of draft stage. This has a couple of advantages. Firstly, we will likely get more success with upstream developers preparing their software to have a versioned python in their shebangs (or at least change all of them when installing with PYTHON=python2 ...). That would remove many sed lines from our PKGBUILDs. Secondly, if all packages only use python2 or python3, then the only use of the /usr/bin/python symlink would be interactively. That would mean that a system administrator could potentially change that symlink to point at any version of python that they wished.