Distri – Comparing Apples and Oranges?

Last weekend we had what I consider to be the very successful Arch Conf 2020. This included a talk by Michael Stapelberg about distri, his Linux distribution to research fast package management.

Michael showed an example of installing QEMU in Arch vs distri, very similar to the results given in his post about Linux package managers being slow. There are a couple of examples of installing packages in that blog post – in one, pacman comes out looking quite good; in the other, it comes out looking OK (at least it beats apt on Debian and dnf on Fedora…).

Before I get into the teardown of these comparisons, I will point out that I am not disagreeing that distri is a very fast package manager. It is designed for pure speed, so it should definitely be the fastest! Also, I am not disagreeing that pacman could be faster in many areas. What I am concerned about is the conclusion that “The difference between the slowest and fastest package managers is 30x!”. This conclusion assumes that the package manager is the difference, and not how the distribution chooses to use it.

Let’s look at the first comparison – ack. That is not me agreeing with myself, but rather a small perl program that depends on a single perl library (perl-file-next). Assuming perl is installed in the base image for the distros being compared, this is a reasonable comparison, if a somewhat small one. However, looking at the output from the installs (which is not provided in full for all package manager + distro combinations), we can see that is not the case. Note that Fedora and dnf require 114MB of downloads compared to Arch and pacman with 6.5MB. It appears that Arch is the only distro to have perl installed in the base image, so this makes pacman look better. The NixOS image does not even have coreutils installed (and why is that needed for ack?)!

This gets worse for the qemu comparison. Given the video comparison in Michael’s talk, I will just focus on Arch + pacman vs distri. Let’s look at the dependency lists for the two packages. For distri we get:

$ wget https://repo.distr1.org/distri/supersilverhaze/pkg/qemu-amd64.meta.textproto
$ grep runtime_dep qemu-amd64.meta.textproto
runtime_dep: "gcc-libs-amd64-9.3.0-4"
runtime_dep: "glibc-amd64-2.31-4"
runtime_dep: "gmp-amd64-6.2.0-4"
runtime_dep: "mpc-amd64-1.1.0-4"
runtime_dep: "mpfr-amd64-4.0.2-4"
runtime_dep: "glib-amd64-2.64.2-5"
runtime_dep: "libffi-amd64-3.3-4"
runtime_dep: "util-linux-amd64-2.32.1-7"
runtime_dep: "ncurses-amd64-6.2-9"
runtime_dep: "pam-amd64-1.3.1-11"
runtime_dep: "zlib-amd64-1.2.11-4"
runtime_dep: "libcap-amd64-2.33-4"
runtime_dep: "pixman-amd64-0.38.4-7"

On a fresh Arch install, the only package in that list that would not be installed is pixman, which has no further dependencies (beyond glibc), so that would require two packages to be downloaded. However, compare that to the Arch package:

$ pacman -Si qemu
...
Depends On : virglrenderer sdl2 vte3 libpulse brltty seabios gnutls
libpng libaio numactl libnfs lzo snappy curl vde2
libcap-ng spice libcacard usbredir libslirp libssh zstd
liburing

That is a lot more dependencies! Let’s quantify this with a number:

$ pactree -s -u base | sort > base.deps
$ pactree -s -u qemu | sort > qemu.deps
$ comm -23 qemu.deps base.deps | wc -l
123

That is very different! Why? Because Arch’s qemu package provides the graphical frontend. However, Arch does have a qemu-headless package, and …

$ pactree -s -u qemu-headless | sort > qemu-headless.deps
$ comm -23 qemu-headless.deps base.deps | wc -l
20

That is getting to be a fairer comparison. But we are still not comparing only package manager differences. In fact, this comparison is heavily skewed towards how a distribution chooses to package the software. Pacman appears at a disadvantage because Arch packages all files for the software in a single package, and does not split binaries, libraries, docs, include files, etc. into separate packages. That is a distribution decision, and not a package manager decision – all package managers in the comparison list are capable of splitting packages into smaller units. So maybe not apples to oranges, but rather oranges to orange segments? I don’t think I am good at analogies!

Finally, I’ll note the comparisons “include metadata updates such as transferring an up-to-date package list”. This seems reasonable at first glance. However, consider two distributions, where one has a small package set, and the second includes all the packages from the first plus many more. The second distro would take longer to refresh the package metadata, but that does not mean its package manager is slower! I need to look at distri more closely, but my guess from looking at its repo layout is that it downloads the metadata for a package on demand and includes a full dependency list, so it does not rely on any transitivity. This appears to be a nice trick, but a distro using pacman could be set up such that package database refreshes were only needed when performing a system update and not when installing packages. Again, Arch makes the choice not to support older versions of packages and removes them from download immediately on updates, but that is not a pacman limitation.
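
As a rough sketch of what that could look like with pacman (the package name is purely an example), the metadata refresh can be tied to the system update rather than to each install:

# refresh the sync databases and update the system – the metadata transfer happens here
pacman -Syu

# install against the databases already on disk – no refresh needed
pacman -S qemu-headless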

In summary, package managers are slow… But some distributions decide to make them slower! So we need to be careful when drawing conclusions about the relative speed of package managers when more variables are at play.

The Case of GCC-5.1 and the Two C++ ABIs

Recently, Arch Linux updated to gcc-5.1. This brought a lot of new features, but the one I am focusing on today is the new C++ ABI that appears when the compiler is built using the default options.

Supporting the C++11 standard required incompatible changes to the libstdc++ ABI. Instead of bumping the library soname (I still do not understand why that was not done…), the GCC developers decided to have a dual ABI. This is achieved using C++11 inline namespaces, which give different mangled names, allowing types that differ across the two ABIs to coexist. When that is insufficient, a new abi_tag attribute is used, which adds [abi:cxx11] to the mangled name.
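
As a quick illustration (the file and function names here are made up, this assumes a GCC 5 build defaulting to the new ABI, and the exact nm output will differ), the tag is visible directly in the mangled symbols:

$ cat > abi-test.cpp << 'EOF'
#include <string>
std::string get_name() { return "example"; }
EOF
$ g++ -c abi-test.cpp && nm abi-test.o | grep get_name
0000000000000000 T _Z8get_nameB5cxx11v
$ g++ -D_GLIBCXX_USE_CXX11_ABI=0 -c abi-test.cpp && nm abi-test.o | grep get_name
0000000000000000 T _Z8get_namev

Running the first symbol through c++filt gives get_name[abi:cxx11]() – that tag is where the trouble described below comes from.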

With the initial upload of gcc-5.1 to the Arch repos, we configured GCC to use the old ABI by default. Now that the initial slew of bugs has been dealt with (or mostly dealt with…), we want to switch to the new ABI. Because this was going to be a much larger rebuild than usual, one of the Arch developers (Evangelos Foutras) developed a server that automatically orders the rebuilds and provides the next rebuild when a client requests it (this may be the future of rebuild automation in Arch).

This uncovered an issue when building software against the new C++ ABI with clang, which uses the new ABI (as instructed by the GCC header files) but does not know about the abi_tag attribute. For example, any function in a library with a std::string return type will be mangled by GCC with an [abi:cxx11] tag. Clang does not handle these ABI tags, so it will not add the tag to the mangled name, and linking will then fail.

This issue was pointed out on a GCC mailing list in April, and a bug was filed independently for LLVM/clang. Until it is fixed, Arch Linux will not be able to switch to the new ABI. Note that this also has consequences for all older versions of GCC wanting to compile C++ software against system libraries…

Who You Gonna Call?

GHOSTbusters

Nobody… Arch Linux has the latest glibc release (2.20), so it is not affected by the GHOST bug, which was fixed upstream in glibc-2.18 (without realising the security implications).

For those that want to know more about security issues and updates in Arch Linux, check out the arch-security mailing list.

Shellshock and Arch Linux

I’m guessing most people have heard about the security issue that was discovered in bash earlier in the week, which has been nicknamed Shellshock. Most of the details are covered elsewhere, so I thought I would post a little about the handling of the issue in Arch.

I am the Arch Linux contact on the restricted oss-security mailing list. On Monday (at least in my timezone…), there was a message saying that a significant security issue in bash would be announced on Wednesday. I let the Arch bash maintainer know, and he got some details.

This bug was CVE-2014-6271. You can test if you are vulnerable by running

x="() { :; }; echo x" bash -c :

If your terminal prints “x”, then you are vulnerable. This is actually simpler to understand than it appears… First we define a function x() which just runs “:”, which does nothing. After the function is a semicolon, followed by a simple echo statement – this is a bit strange, but there is nothing stopping us from doing that. Then this whole function/echo bit is exported as an environment variable to the bash shell call. When bash loads, it notices a function in the environment and evaluates it. But we have “hidden” the echo statement in that environment variable and it gets evaluated too… Oops!
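
For comparison, here is the legitimate function-exporting feature that the bug piggybacks on (just a quick demonstration, nothing malicious):

$ foo() { echo hello; }
$ export -f foo
$ bash -c foo
hello

The problem was that bash did not stop parsing at the end of the function definition, so anything appended after it got executed as well.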

The announcement of CVE-2014-6271 was made at 2014-09-24 14:00 UTC. Two minutes and five seconds later, the fix was committed to the Arch Linux [testing] repository, where it was tested for a solid 25 minutes before being released into our main repositories.

About seven hours later, it was noticed that the fix was incomplete. The simplified version of this breakage is

X='() { function a a>\' bash -c echo

This creates a file named “echo” in the directory where it was run. To track the incomplete fix, CVE-2014-7169 was assigned. A patch to fix this issue was posted by the upstream bash developer on the Wednesday, but it was not released on the bash FTP site for over 24 hours.

With a second issue discovered so quickly, it is important not to take an overly reactive approach to updating, as you run the risk of introducing even worse issues (despite repeated bug reports and panic in the forums and IRC channels). While waiting for the dust to settle, another patch was posted by Florian Weimer (from Red Hat). This is not a direct fix for any vulnerability (however, see below), but rather a hardening patch that attempts to limit potential issues when importing functions from the environment. During this time there were also patches posted that disabled importing functions from the environment altogether, but that is probably an overreaction.

About 36 hours after the first bug fix, packages were released for Arch Linux that fixed the second CVE and included the hardening patch (which upstream appears to be adopting with minor changes). There were also two other more minor issues found during all of this that were fixed as well – CVE-2014-7186 and CVE-2014-7187.

And that is the end of the story… Even the mystery CVE-2014-6277 is apparently covered by the unofficial hardening patch that has been applied. You can test your bash install using the bashcheck script. On Arch Linux, make sure you have bash>=4.3.026-1 installed.

And because this has been a fun week, upgrade NSS too!

Recovering Windows 7 Backup on Windows 8.1

My wife’s laptop recently died and was replaced. Unlike last time, there were regular backups with the last performed only a day or two before the failure.

So all is fine… Right?

Turns out not so much. The new laptop runs Windows 8.1. I plugged in the backup drive and opened the settings to look for the Backup and Restore link… and nothing. File History sounded promising, but different.

It turns out Windows 8 introduced a Time Machine like backup tool, so they phased out the old Windows 7 style backup. An option was provided in Windows 8 to restore “old” Windows 7 backups, but that was removed in Windows 8.1. That means people had a one year window to update from Windows 7 to Windows 8 and restore their files before Windows 8.1 was released. Not helpful!

The only method I could find that worked to restore this backup was:

  1. “Obtain” a copy of Windows 7 and make a VM
  2. Restore the files, selecting to restore the files to a different location (onto your backup USB drive if there is room)
  3. Copy the restored files across to your Windows 8.1 install (there will be a couple of files you can not overwrite – ignoring them seemed to work)

It is almost as if I should have never used the Windows 7 backup feature at all and just copied the files onto the backup drive… Now I have to decide how long the File History tool in Windows 8 will be supported or if I need to find another backup solution.

Routing Traffic With OpenVPN

Don’t you love messages like “This video is not available in your country”. That generally means it is not available outside the USA (unless you are in Germany in which case it means it is available anywhere but your country). They have finally annoyed me enough to do something about it and my web server is based in the USA so it should be easy… Right?

It turns out that while it should be easy, it took me far longer than expected, mainly because I found most tutorials assume you know things. Also, I found the Arch wiki not to be great in this area (and as you may notice below, I am not the person to fix it). So here is yet another “guide” on how to set up OpenVPN on your server and get both Linux and Windows clients to access it (mainly posted as notes in case I need to set this up again in the future).

Step #1: OpenVPN

The first thing that needs to be done is the creation of a personal Certificate Authority and the generation of the keys needed for your server. Fortunately, OpenVPN comes with a set of scripts called easy-rsa. I am not going to cover this in detail as these scripts are well documented and do all the work for you.
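
For reference, the general shape of that process with the easy-rsa 2.x scripts looks something like the following – the exact path the scripts are installed to varies by distribution, so treat this as a sketch rather than a recipe:

cp -r /usr/share/openvpn/easy-rsa ~/easy-rsa && cd ~/easy-rsa
vim vars                    # set KEY_COUNTRY, KEY_ORG, KEY_EMAIL, ...
source vars
./clean-all                 # wipe the keys directory
./build-ca                  # creates ca.crt and ca.key
./build-key-server server   # creates server.crt and server.key
./build-dh                  # creates the Diffie-Hellman parameters (dh1024.pem here)

The resulting ca.crt, server.crt, server.key and dh1024.pem are the files referenced in the configuration below.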

I am also not going into all the configuration of OpenVPN. But here is an /etc/openvpn/server.conf file I found to work:

dev tun
proto udp
port 1194
 
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
 
user nobody
group nobody
 
server 10.8.0.0 255.255.255.0
 
persist-key
persist-tun
 
client-to-client
 
push "redirect-gateway def1"
push "dhcp-option DNS 8.8.8.8"
 
log-append /var/log/openvpn

Note that the configuration file can be called anything you want. OpenVPN is launched using “systemctl start openvpn@server.service”, where “server” in this case is because my configuration file is “server.conf”.

The only bit of configuration I will directly mention is setting up users to be able to access the VPN using a username/password approach rather than generating individual keys for each client. This is purely because I am a lazy admin and everyone I want to use my VPN has an SFTP shell on my server already. Just add this to your server configuration file:

plugin /usr/lib/openvpn/plugins/openvpn-plugin-auth-pam.so login
client-cert-not-required
username-as-common-name

OpenVPN does give warnings about this configuration, so consider the security implications if you use it.

Step #2: Enable forwarding

By default, packet forwarding is disabled. You can see this by running:

$ cat /proc/sys/net/ipv4/ip_forward
0

We can enable this by adding these lines to /etc/sysctl.conf:

# Packet forwarding
net.ipv4.ip_forward = 1
net.inet.ip.fastforwarding = 1

Note that the file /etc/sysctl.conf is not going to be used from systemd-207 onwards, so this will need to be moved to the appropriate place. Also, the “fastforwarding” line is purely based on anecdotes I found on the internet, and may not do anything at all! In fact, I think it is a BSD thing, so I have no idea why I am mentioning it. Moving on…
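
For what it is worth, a drop-in file under /etc/sysctl.d does the same job on newer systems (the file name is arbitrary), and sysctl -w applies the setting immediately without a reboot:

# apply now
sysctl -w net.ipv4.ip_forward=1

# persist across reboots
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/30-ip-forward.conf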

Step #3: iptables

If you have iptables running, you will need to open up access to the VPN. You also have to forward the VPN client traffic through to the internet. This is the bit I found least documented anywhere. Also, I am not an iptables expert, so while this works, it might not be the best approach:

# OpenVPN
iptables -A INPUT -i eth0 -m state --state NEW -p udp --dport 1194 -j ACCEPT
 
# Allow TUN interface connections to OpenVPN server
iptables -A INPUT -i tun+ -j ACCEPT
 
# Allow TUN interface connections to be forwarded through other interfaces
iptables -A FORWARD -i tun+ -j ACCEPT
iptables -A FORWARD -i tun+ -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth0 -o tun+ -m state --state RELATED,ESTABLISHED -j ACCEPT
 
# NAT the VPN client traffic to the internet
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

If your default iptables OUTPUT value is not ACCEPT, you will also need a line like:

iptables -A OUTPUT -o tun+ -j ACCEPT

I have not tested that set-up, so you may need more.
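
Also, remember that these rules are not persistent by default. On Arch, roughly, you would save them and enable the service that restores them at boot:

iptables-save > /etc/iptables/iptables.rules
systemctl enable iptables.service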

Step #4: Clients

I went with NetworkManager on Linux and the OpenVPN client on Windows. Note that the two different Windows clients on that site are exactly the same, so there is no need to figure out the difference.

The Windows client requires a “.ovpn” configuration file. Here is an example:

client
dev tun
proto udp
remote foobar.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca foobar.crt
verb 3
auth-user-pass

The only thing to note here is that the “ca” line is the public certificate you generated in Step #1. Even with password authentication, you still need to provide that certificate. Also, it is important to have the compression setting the same in both the server and client configurations. Given I mostly want to use the VPN for video, I have not enabled compression.

And there you have it. Configuring that took me much too much time, but now I can pretend to be in the USA and watch all the geolocked content I want. Which, as it turns out, is not often. I have only watched one video in the month since I set this up. Good use of my time…

Poor Man’s Pastebin

I was going to add a pastebin to my site (for reasons I can not determine other than “because”), but that ended up being too much effort. Then I remembered that GNU Source Highlight can output HTML… So here is my pastebin client in less than ten lines of bash!

#!/bin/bash
 
file=$(mktemp ~/web/paste/XXXXXX)
 
[[ ! -z "$1" ]] && lang="-s $1"
 
cat - > ${file}.in
source-highlight -i ${file}.in -o ${file} ${lang}
rm ${file}.in
 
lftp -e "put -O /dir/to/paste ${file}; bye" \
     -u user,password sftp://example.com
 
echo "http://example.com/paste/${file##*/}"

Done. Just use “foo | pastebin” to send the output of foo to the web. Or for files, use “pastebin < file”. If auto-detection of the highlighting style is bad, just use “pastebin <lang>” with one of the supported languages. The seemingly wasteful cat is there because GNU Source Highlight cannot autodetect the style from a pipe when the language is not specified.

Converting Video On The Command Line

I recently acquired an iPad for work purposes… so the most important thing to know is how to convert video to play on it. Use handbrake, done – short blog post.

But, that would be all too simple. Often I want to watch an entire season of a show that I have collected from various sources over the years and these often have widely varying sound levels. That is quite annoying if you set the season to play and then have to adjust the volume for every episode.

Here is a simple guide to convert your videos into a format suitable for the iPad with equalized volumes. I somewhat deliberately used a variety of tools for illustration purposes, but I think the ones I selected tended to be the quickest for each step. The following code snippets assume that your videos have the extension “.avi”. They also destroy the source files, so make a backup.

Step 1. Extract the audio track using mplayer:
for i in *.avi; do
  mplayer -dumpaudio "$i" -dumpfile "${i%.avi}.mp3"
done

Step 2. Make sure all audio is mp3 and convert using ffmpeg if not:
for i in *.mp3; do
  if ! file "$i" | grep -q "layer III"; then
    mv "$i" "$i.orig"
    ffmpeg -i "$i.orig" "$i"
    rm "$i.orig"
  fi
done

Step 3. Normalize the audio levels using mp3gain:
mp3gain -r *.mp3

Step 4. Stick the adjusted audio back into the video file using mencoder (part of mplayer):
for i in *.avi; do
  mv "$i" "$i.orig"
  mencoder -audiofile "${i%.avi}.mp3" "$i.orig" -o "$i" -ovc copy -oac copy
  rm "$i.orig" "${i%.avi}.mp3"
done

Step 5. Convert to iPad format using the command line version of handbrake:
for i in *.avi; do
  HandBrakeCLI -i "$i" -o "${i%.avi}.m4v" --preset="iPad"
  rm "$i"
done

This is probably not the most efficient way of doing this, and it will become less so once handbrake can normalize volume levels by itself (which its developers appear to be somewhat working on…). But when you have several seasons of a show, each with more than 50 episodes (only a few minutes each), you quickly become glad to be able to make a simple script to do the conversion automatically.

Getting My Touchpad Back To A Usable State

I was happy to note the following in the release announcement of xf86-input-synaptics-1.5.99.901:

“… the most notable features are the addition of multitouch support and support for ClickPads.”

As a MacBook Pro user, that sounded like just what I needed. No more patching the bcm5974 module to have some basic drag-and-drop support. I upgraded and everything seemed to work… for a while. I began noticing weird things occurring when I was trying to right and middle click (a two and three finger click, respectively). Specifically, my fingers would sometimes be registered in a click and sometimes not.

The two finger “right” click was easy enough to figure out. If my fingers were too far apart, the two finger click was not registered. It turns out I am what I have decided to call a “finger-spreader”, as my fingers can be quite relaxed across the touchpad when I click. Fair enough, I thought… I just have to train myself to click with my fingers closer together. Then came the three finger click. All three fingers close together did not seem to register as anything. A bit of spreading and a three finger click got registered, but too much spreading and it was down to two fingers. Also, it was not the actual distance between fingers that mattered, as rotating my hand on the touchpad with my fingers the same distance apart could result in different types of clicks being registered.

A bit of looking in the xf86-input-synaptics git shortlog led me to this commit as a likely candidate for my issues. The commit summary starting with “Guess” and it having a comment labelled “FIXME” were the give-aways… The first thing I noticed was that the calculation of the maximum distance between fingers to register as a multitouch click was done in terms of a percentage of the touchpad size. That means that the acceptable region where your two fingers need to be for a two finger click is an ellipse, which at least explains why physical distance appeared not to matter.

Attempted fix #1 was to increase the maximum allowed separation between fingers from 30% to 50%. That worked brilliantly for two finger clicking, but made three finger clicking even worse, which led me to another interesting discovery… The number of fingers being used is calculated as the number of pairs of fingers within the maximum separation, plus one. For two fingers, fingers 1 and 2 form one pair, plus one is “two fingers”. However, for three fingers, there are three possible pairs: (1,2), (2,3) and (1,3). This explains the weirdness in three finger clicking; finger pairs (1,2) and (2,3) must be within the maximum allowed separation while finger pair (1,3) must be outside it. That explains why having fingers too close together did not register as a three finger click (as it was being reported as four fingers – three pairs plus one) and why things became worse when I increased the maximum allowed separation. I filed a bug report upstream and a patch to fix that quickly appeared.

After applying that patch, multifinger clicks all work fine provided your fingers are close enough together. I do not find the required finger closeness natural, so I got rid of the finger distance restrictions altogether using this patch. I am not entirely sure what I break by removing that, but it appears to be something I do not use, as I have not noticed any issues so far. As always, use at your own risk…

Edit: 2012-05-21 – Updated patch for xf86-input-synaptics-1.6.1

How Secure Is The Source Code?

With the addition of source code PGP signature checking to makepkg, I have begun noticing just how many projects release their source code without any form of verification available. Or even if some form of verification is provided, it is done in a way that absolutely fails (e.g. llvm, which was signed by a key that was not even on any keyservers, meaning it could not be verified). If code security fails at this point, actually signing packages and databases at the distribution end-point instills a bit of a false sense of security.

To assess how readily validated upstream source code is, I did a survey of what I would consider the “core” part of any Linux distribution. For me, that basically means the packages required to build a fairly minimal booting system. This is essentially the package list from Linux From Scratch with a few additions that I see as needed…

For each source tarball I asked the following questions: 1) Is a PGP signature available and is the key used for signing readily verified? 2) Are checksum(s) for the source available and if they are only found on the same server as the source tarball, are they PGP signed? The packages are rated as: green – good source verification; yellow – verification available but with concerns; red – no verification. Apologies to any colour-blind readers, but the text should make it clear which category each package is in…
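
For reference, the sort of check being performed here looks like the following (coreutils is used purely as an example – substitute the actual tarball, signature and announced checksum):

# verify a detached PGP signature against the tarball
gpg --verify coreutils-8.15.tar.xz.sig coreutils-8.15.tar.xz

# or, where only checksums are published, compare against the announced value
sha256sum coreutils-8.15.tar.xz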

Package Verification
autoconf-2.68 PGP signature, key ID in release announcement, key readily verifiable.
automake-1.11.3 PGP signature, key used to sign release announcement.
bash-4.2.020 PGP signature for release and all patches, link to externally hosted key from software website.
binutils-2.22 PGP signature, key used to sign release announcement (containing md5sums).
bison-2.5 PGP signature, key ID in release announcement, externally hosted keyring provided to verify key.
bzip2-1.0.6 MD5 checksum provided on same site as download.
cloog-0.17.0 MD5 and SHA1 checksums in release announcement posted on off-site list.
coreutils-8.15 PGP signature, key used to sign release announcement.
diffutils-3.2 PGP signature, key used to sign release announcement.
e2fsprogs-1.42.1 PGP signature, key readily verifiable.
fakeroot-1.18.2 MD5, SHA1 and SHA256 checksums provided in PGP signed file, key readily verifiable.
file-5.11 No verification available.
findutils-4.4.2 PGP signature, link to externally hosted key in release announcement.
flex-2.5.35 No verification available.
gawk-4.0.0 PGP signature, key difficult to verify.
gcc-4.6.3 MD5 and SHA1 checksums provided in release email. MD5 checksum provided on same site as download.
gdbm-1.10 PGP signature, key ID in release announcement (with MD5 and SHA1 checksums), key readily verifiable.
gettext-0.18.1.1 PGP signature, key readily verifiable.
glibc-2.15 No release tarball, download from git (PGP signature available when release tarball is made).
gmp-5.0.4 PGP signature, key ID and SHA1 and SHA256 checksums on same site as source, key difficult to verify otherwise.
grep-2.11 PGP signature, key used to sign release announcement.
groff-1.21 PGP signature, key difficult to verify.
grub-1.99 PGP signature, key used to sign release announcement.
gzip-1.4 PGP signature, key used to sign release announcement.
iana-etc-2.30 No verification available.
inetutils-1.9.1 PGP signature, key readily verifiable.
iproute-3.2.0 PGP signature, key readily verifiable.
isl-0.09 No verification available.
kbd-1.15.3 File size available in file in same folder as source.
kmod-5 PGP signature, key readily verifiable.
less-444 PGP signature, key posted on same site as download, key difficult to verify otherwise.
libarchive-3.0.3 No verification available.
libtool-2.4.2 PGP signature, key readily verifiable, MD5 and SHA1 checksums in release email.
linux-3.2.8 PGP signature, key readily verifiable.
m4-1.4.16 PGP signature, key used to sign release announcement.
make-3.82 PGP signature, key used to sign release announcement.
man-db-2.6.1 PGP signature, key used to sign release announcement.
man-pages-3.35 PGP signature, key readily verifiable.
mpc-0.9 (libmpc) PGP signature, key readily verifiable.
mpfr-3.1.0 PGP signature, key readily verifiable.
ncurses-5.9 PGP signature, key used to sign release announcement.
openssl-1.0.0g PGP signature, key readily verifiable.
pacman-4.0.2 PGP signature, key readily verifiable.
patch-2.6.1 PGP signature, key difficult to verify.
pcre-8.30 PGP signature, key readily verifiable.
perl-5.14.2 MD5, SHA1, SHA256 checksums provided on same site as download.
pkg-config-0.26 No verification available.
ppl-0.12 PGP signature, key readily verifiable.
procps-3.2.8 No verification available.
psmisc-22.16 No verification available.
readline-6.2.002 PGP signature for release and all patches, link to externally hosted key from software website.
sed-4.2.1 PGP signature, key difficult to verify.
shadow-4.1.5 PGP signature, key readily verifiable.
sudo-1.8.4p4 PGP signature, key difficult to verify.
sysvinit-2.88 PGP signature, key difficult to verify.
tar-1.26 PGP signature, key used to sign release announcement.
texinfo-4.13a PGP signature, key difficult to verify.
tzdata-2012b Many checksums provided in release announcement.
udev-181 PGP signature, key readily verifiable.
util-linux-2.21 PGP signature, key readily verifiable.
which-2.20 No verification available.
xz-5.0.3 PGP signature, key difficult to verify.
zlib-1.2.6 MD5 checksum provided on same site as download (although download mirrors available).

Note that some of these packages have additional methods of verification available (e.g. those that are PGP signed may also provide checksums and file sizes), but I stopped looking once I found suitable verification. When I label a key as “readily verifiable”, that means it is either signed by keys I trust, it is used to sign emails that I can find, or it is posted on the developer’s personal website (which must be different from where the source code is hosted). I personally found my preferred method of verification was packages whose release announcements were signed by the same key as the source.

While you might look at that table and think there is a lot of green (and yellow) there so everything is in reasonable shape, it is important to note that the majority of these are GNU software and all GNU software is signed. Also, 15% of the packages in that list have no source verification at all. From some limited checking, it appears the situation quickly becomes worse as you move further away from this core subset of packages needed for a fairly standard Linux system, but I have not collected actual numbers to back that up yet.