Keeping Packages Vanilla – 2. Configuration

Normally when you write an article and label it part one, part two follows soon after. Well, more than two years later, I discovered this draft… So this is part two of me rambling about what I think it means to keep packages “vanilla”. See here for the first part, in which I discussed patching. Looking at configure options and dependencies is probably less clear-cut than patching, but let's see if I come to a conclusion in this wall of text!

As I said in the previous post, in an ideal world we could just do “./configure; make; make install” and all packages would build perfectly and interact with each other the way they are supposed to. I will attempt to categorize the various options that can be added to configure by how they change the package. This will mostly be done by looking at examples from the packages I maintain (or now, packages that I used to maintain) for Arch Linux.

The first type of configure option that will (almost) always be used is setting paths for where various files are located. Most Linux distributions will build their packages with “--prefix=/usr” and perhaps several other configuration options to set file paths. Arch Linux is not a fan of the /libexec directory so uses --libexecdir=/usr/lib where needed. (It appears that moving away from that directory is becoming widespread these days, right after the draft FHS added it…) There used to be a lot of moving of man and info pages to the “right place”, but that is automatically done in most packages these days. I doubt that anyone would consider these types of configuration options to make a package non-vanilla, unless they were set to very extreme values.
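As a concrete sketch, this is roughly what the build function of an Arch PKGBUILD looks like with these path options set. The package itself is hypothetical and the exact set of flags varies per package; this is just the common shape, not a real recipe:

```shell
build() {
  cd "$pkgname-$pkgver"
  # Standard Arch path choices: everything under /usr, no /usr/libexec
  ./configure \
      --prefix=/usr \
      --libexecdir=/usr/lib \
      --sysconfdir=/etc \
      --localstatedir=/var
  make
}
```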

The second set of configure flags are those that enable additional features. For example, GMP can be built using the configure flag --enable-cxx to enable C++ support. That builds an additional library and adds an extra header to the package, but does not alter the primary library. Similarly, using --enable-pcre16 and --enable-pcre32 when configuring PCRE adds 16 and 32 bit character support libraries. Given these options have no effect on the primary part of the software and just add completely separate parts to it, it would be hard to argue that such configuration options are not vanilla.

So let's move on to configuration options that actually alter the software. Let's start with glibc, binutils and gcc, which all have

  --with-bugurl=https://bugs.archlinux.org/

set in the Arch Linux packages. Is that vanilla? It can be argued that I am setting a value not considered by upstream, but clearly upstream thought allowing packagers to set such a value was a good idea, given there is a configuration option for it. So I’d say that is still vanilla.

How about Less, which is configured with --with-regex=pcre, adding a dependency to the software? There are actually quite a number of possible values for this configuration option:

  --with-regex={auto,gnu,pcre,posix,regcmp,re_comp,regcomp,regcomp-local,none}

Looking at that list, I would suppose auto is the most vanilla, as that is what happens if you do not specify the option. But it is also the least deterministic, in that it will pick a different option based on what is installed on your system. In fact, on my system it picks “posix” by default, which was a moderate surprise to me. Given that these options are all provided by the software developer (and so should all be supported), I think you could make the case they all are vanilla. But I would be surprised to see (at least) the last five options used on any Linux system, so calling them vanilla is a stretch.
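To illustrate why “auto” is the least deterministic, here is a toy version of what a --with-regex=auto style check does: probe the candidate implementations in order and settle on the first one available. A real configure script compiles and runs a test program; here the probe is just a header-file check, and the header paths are my assumptions about a typical Linux layout, not what Less actually tests:

```shell
# Mimic configure-style auto-detection: take the first regex
# implementation whose (assumed) header is present on this system.
pick_regex() {
    for choice in gnu pcre posix; do
        case $choice in
            gnu)   header=/usr/include/gnu-regex.h ;;
            pcre)  header=/usr/include/pcre.h ;;
            posix) header=/usr/include/regex.h ;;
        esac
        if [ -e "$header" ]; then
            printf '%s\n' "$choice"
            return 0
        fi
    done
    printf 'none\n'
}

pick_regex
```

On a system with no standalone GNU regex library and no PCRE headers installed, this falls through to posix – which is presumably how my system ended up with “posix” by default.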

What is the conclusion? I suppose that configuration options are put there by the upstream developers to be used, so any use of them would be considered vanilla. But keep in mind what a software developer would expect: picking strange configurations compared to what is usual for your operating system is likely to get you (virtual) weird looks from upstream, so that cannot be considered a vanilla configuration.

Perhaps this part is more boring than the discussion of patching. But the second part of a trilogy is rarely the greatest. Part 3 should appear soon…

Improvements on Manjaro Security Updates

I’ll give credit where it is due. I had previously criticized Manjaro for holding back all package updates as this ignored security issues. But it appears that Manjaro has a new security policy, which means that packages that are rated as “Critical” or “High” in the Arch Security Advisories get pushed through their “quality assurance” process more quickly.

Comparison of Security Issue Handling

More follow-up from the aforementioned Frostcast episode featuring Manjaro developer Philip Müller, just past the 16 minute mark.

We learn and everyone makes mistakes. And the new server change every package is new synced from Arch Linux so there is no security issues. … We sync daily so if there is any problems with our system it’s ninety percent from Arch itself, so I don’t know why they bash us.

I am not going to claim Arch is the bastion of all things security – in fact I know Arch is far from perfect here – but Manjaro claiming that they are on par with Arch is wrong. Saying “we sync daily” is frankly deceptive. The daily syncs are to the Manjaro unstable branch, so packages can take a while to reach the stable branch where the vast majority of users get them. As I have pointed out previously, Arch does not separate security updates from plain upstream updates, so when Manjaro holds back updates from the unstable branch in the name of stability, they are also holding back security fixes. The updates need to be monitored for security fixes, which then either 1) get pushed more quickly to the users, or 2) get backported to the “stable” packages.

But let's use an example, because facts are good. Recently there was a privilege escalation issue found in polkit. This was made public on 2013-09-18, and over the next couple of days there were a lot of distribution updates to fix it. So I have not picked an obscure bug, given the number of distros dealing with the issue, and it is a privilege escalation one (potentially with a proof-of-concept available, although I have not checked that out). Let's compare the Arch and Manjaro response to this issue by monitoring the location of the polkit-0.112 package:

Date        Arch     Manjaro
2013-09-18  Testing  -
2013-09-19  Stable   -
2013-09-20  Stable   Unstable
2013-09-21  Stable   Unstable
2013-09-22  Stable   Unstable
2013-09-23  Stable   Unstable
2013-09-24  Stable   Unstable
2013-09-25  Stable   Unstable
2013-09-26  Stable   Unstable
2013-09-27  Stable   Unstable
2013-09-28  Stable   Testing
2013-09-29  Stable   Testing
2013-09-30  Stable   Testing
2013-10-01  Stable   Stable

I will admit that this is actually better than I thought it would be… I thought packages stayed longer in Manjaro’s testing repositories to catch bugs. Then again, I noticed that there are packages that were pulled into Manjaro from Arch and put into their stable repos within ten minutes, including packages in the [core] repo, so I’ll assume that the testing that occurs in the Unstable and Testing branches is rather limited. (Evidence: pool/ directory with timestamp file was synced from Arch, stable/extra/x86_64/ directory with repo database timestamp.)

In summary, the indiscriminate holding back of all updates in the name of testing(?) is why I “bash” Manjaro security. With this system, Manjaro is always running behind Arch, so claiming the Manjaro security issues are “ninety percent from Arch itself” is full of… optimism. And before the “leave Manjaro alone” comments, I will stop posting about it when I have no need to correct such false statements.

Keeping Packages Vanilla – 1. Patching

I have recently been thinking about a Linux distribution providing “vanilla packages” and what this really means. The basic idea is simple – provide packages as upstream released them. In practice, this is an impossibility (for reasons I will cover below).

Firstly, let's cover the reason for providing vanilla packages. And I use “reason” singular deliberately, because I think it comes down to a single point: the software developer knows their software better than you do. Your patch could unintentionally introduce a security issue – it has happened before… Or you could introduce a new feature that is then implemented by the developer in a different and incompatible way in the next official release. Also, any bug found in your modified piece of software will need to be triaged against an unpatched version in order to report the issue upstream.

So, in an ideal world, we would just run the equivalent of “./configure; make; make install” and all software would install perfectly. But the world is far from ideal… I will cover two points across two posts: firstly patching, followed by configure options and dependencies.

Patching software is a necessity in any Linux distribution. I am only considering rolling release distributions in the discussion below, which removes backporting fixes and features from newer versions of a software package. So, what patching is minimally required?

  1. Patches for build issues
  2. Patches for security issues
  3. Patches to fix major software features

I am completely ignoring patches released by upstream as part of their update process. For example, the Linux kernel provides a large patch that updates from their x.y.0 release to x.y.z, and Bash releases a patch for each minor update, so bash-4.2.042 requires applying 42 patches… This patching is obviously required and fully sanctioned by the developer, so does not deviate from vanilla.
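Applying such a series is mechanical. The sketch below simulates it with a throwaway file in a temporary directory rather than the real bash tarball (the real series is 42 patches fetched from ftp.gnu.org), but the replay loop at the end is essentially what a build script does:

```shell
# Simulate an upstream patch series in the style of the bash x.y.z
# patches, then replay it against the base release.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Generate three fake "upstream" patches, each bumping a version string.
printf 'version=4.2.0\n' > version.sh
for i in 1 2 3; do
    sed "s/4\.2\.$((i - 1))/4.2.$i/" version.sh > version.new
    diff -u version.sh version.new > "bash42-00$i" || true
    mv version.new version.sh
done

# Reset to the base release and apply the series in order, which is
# what a build recipe does before compiling.
printf 'version=4.2.0\n' > version.sh
for p in bash42-001 bash42-002 bash42-003; do
    patch -p0 < "$p"
done
cat version.sh   # version=4.2.3
```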

It should be fairly obvious what patches for build issues are… the software will not compile for some reason, so you need a patch to fix that. A piece of software is almost never tested in every single environment before release and, even if it were, updates to other pieces of software can cause build issues. This is particularly common with gcc updates, which have become progressively stricter about which headers need to be included for a function. Even worse, a software developer might release with the “-Werror” flag enabled by default, meaning any new warning will result in a build failure (I do not have kind words about software developers that do that…). Then there are more complicated issues involving a library update with API changes requiring much more extensive fixes. While adding a single extra include really does not require upstream approval before applying, even that should be forwarded upstream.
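For the common missing-header case, the fix a packager carries is usually a one-line patch. The source file and patch below are invented for illustration, but they are applied with patch(1) exactly the way a package build would:

```shell
# A newer gcc rejects code using string functions without their header;
# the carried fix is a one-line patch adding the include.
set -e
dir=$(mktemp -d)
cd "$dir"

cat > util.c <<'EOF'
#include <stdio.h>
int copy_len(char *dst, const char *src) {
    strcpy(dst, src);   /* implicit declaration error with newer gcc */
    return (int)strlen(dst);
}
EOF

# The fix, kept as a patch until upstream merges it.
cat > missing-include.patch <<'EOF'
--- util.c
+++ util.c
@@ -1,2 +1,3 @@
 #include <stdio.h>
+#include <string.h>
 int copy_len(char *dst, const char *src) {
EOF

patch -p0 < missing-include.patch
grep -c '^#include' util.c   # 2
```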

Security issues are an important part of patching for non-rolling release distributions. However, new versions of software are usually released whenever a security hole is found, so rolling release distributions only need to update. Again, just grabbing a patch from anywhere on the internet and applying it is not a good idea – I have seen this actually result in a larger security issue than the original.

The final category of patches are those that fix major software features. For example, if an IRC client has a bug preventing it from connecting to any channels, firstly shake your fist at the developer and tell them to do some testing before release, and then patch it. If there is a typo in the help output, file a bug or submit a patch upstream, but there is no need to patch the package. The guiding principle should be something like “if this were the only issue found in the software, would the developer consider making a new release?”. If the answer to that question is “yes”, then it will be “yes” to providing the patch.

One guiding factor that can not be stressed enough here is that all patches should be approved by upstream. The best situation is if upstream have committed the patch into the version control repository – preferably on the branch for the version you are using so no mistakes can be made back-porting. Failing that, a post on a mailing list or bug tracker by one of the main developers of the software approving a patch is acceptable.

Of course, much of this is subjective. Is that broken feature big enough to patch? Does this bug constitute a security issue? If upstream is rather unresponsive at the moment, should I apply this fix for a security bug? Is this build fix minor enough that I do not need to wait on an upstream comment before applying? Given it is hard to formulate these ideas into precise rules, I think the answer becomes one of how strict the packager is. I was far more likely to include patches when I started packaging for Arch Linux than I am now. So maybe it is not how strict the packager is, but rather how grumpy…

Manjaro Linux: Ignoring Security For Stability

I feel like having a rant today… so nothing particularly unusual there. But after reading yet another post saying:

I used Arch for two years and it was perfectly fine until one day when I updated and it broke my system. Now I have been using Manjaro for a month and it is completely stable.

I have found what to rant about! Is it just me that notices the issue with that statement?

I have no issue ranting about Manjaro because every time I read their forums I see one of their Core Team being less than congenial regarding Arch – but I suppose they have to be given one of the main selling points of their distribution is they can fix the Arch Linux updating “mess”. Also, the defensiveness of their community to any criticism and the amount of self congratulations for being a Manjaro user astounds me – and I spend a lot of time in the Arch forums and IRC channel where the community are widely considered to be elitist pricks (with me being no exception, as this post will plainly show).

For those who do not know how Manjaro works, I will paraphrase this post. The Arch stable repos are synced into Manjaro Unstable on a roughly daily basis. Packages sit there for 1-2 weeks before moving to Manjaro Testing, and then their test squad declares them stable enough to move to Manjaro Stable – about 3-4 weeks after the packages arrived in Arch Linux.

And this is the issue. That is four weeks until Manjaro users get package updates. That is still a lot quicker than a non-rolling release distribution, I hear you say, but it ignores one of the fundamentals of a rolling release distribution: security fixes come with a new software release. On a fixed-point release distribution, security fixes are backported into your out-of-date software versions to maintain stability. On a rolling release distribution, you just release the newer version of the software, which carries the security fixes (some backporting from the upstream VCS is required if no release is made).

That means Manjaro users are vulnerable to security bugs for around a month after Arch users are safe, unless of course the Manjaro Core Team monitors every package and pushes those with security fixes. How many packages are in a distribution? Arch Linux has >6000 in its binary repositories. I suppose it is not impossible to monitor that many packages, unless of course your Core Team consists of three people. And given those three people provide five variants of their installation ISO (net install, XFCE, KDE, Cinnamon, MATE – with OpenBox and E17 on the way…) plus a series of kernel packages and systemd… Things are looking bleak.

And so, Manjaro users are stuck with packages having security issues for a while. I’d assume the big ones get through more quickly, although their firefox package has not been updated to version 18 yet, which fixes 21 security issues – 12 of which are marked critical. In fact, firefox version 18 has not even made their Unstable repo as I write this…

Let's say Manjaro had the man-power to monitor all updates in Arch Linux for security issues. Could they be brought to the Stable repositories more quickly? Maybe… But remember, Arch Linux rebuilds against new versions of libraries with soname bumps all the time, and our toolchain gets updated very quickly after any upstream release. So any security update built against new libraries or with a new toolchain version requires those components to be moved too. And those are exactly the types of updates that could introduce stability issues.

In the end, I think the idea behind Manjaro – rolling release at a more relaxed pace – can be achieved. I am not entirely familiar with these distributions, but I guess that is exactly what aptosid and LMDE achieve. And they start from Debian Unstable, which is reportedly far more of a minefield than Arch Linux.

Ubuntu is an Evil Dictatorship!

It seems that the Ubuntu community has found out that their opinion does not count as much as they thought and that Ubuntu “is not a democracy”. So the purple, I mean aubergine, theme is here to stay and the window control buttons will be placed on the top left (and in the opposite order from OSX for the moment). I guess this is a really big issue because you cannot change themes or configuration files in Ubuntu… wait… you can? Oh well then, move along, nothing to see here. I can almost guarantee that there will be a script released that moves the window controls to the right side, just as many scripts are available to install all the “restricted” multimedia codecs that are not installed by default.

We all have known for a long time that Arch is not a democracy. So Ubuntu users set to move to a new distro where they can contribute nothing but still have their opinion count should not look towards Arch.