Werner's own blurbs

Gpg4win and the feds

16 July 2013 8:29 PM (gnupg | gpg4win | trust)

The current issue 16/2013 of the German c't magazine runs a bunch of articles on PRISM et al. So far, so expected. On page 118 the article “Tarnkappen” mentions GnuPG and claims that only a self-compiled version is trustworthy:

Wenn man eine Verschlüsselungssoftware aussuchen kann, sollte man die bevorzugen, deren Quelltext offengelegt ist. Ein Beispiel dafür ist GnuPG. Es nützt aber nichts, wenn man ein fertig kompiliertes Paket wie Gpg4win installiert, das im Auftrag des BSI entwickelt wurde — einer Bundesbehörde. Um wirklich das zu nutzen, was geprüft wurde, muss man die Quelltexte schon selbst kompilieren. Wir haben das mit TrueCrypt versucht.

[If you can choose your encryption software, you should prefer one whose source code has been published. One example is GnuPG. However, it is of no use if you install a ready-compiled package like Gpg4win, which has been developed on behalf of the BSI, a federal office. To really use what has been reviewed, you have to compile the source code yourself. We tried this with TrueCrypt.]

Let me comment on this.

First, Gpg4win has indeed been developed on behalf of the BSI. In fact, the BSI has commissioned quite a lot of free software over the last decade and has helped to provide solutions for safer communication and for replacing proprietary PIM suites (e.g. by supporting the development of KDE's Kontact). It even migrated most of its workplaces from Windows to Debian. To help with that migration, several projects to port existing applications from Unix to Windows were carried out by external companies. Gpg4win is one of these projects. My company g10code joined up with Intevation and KDAB for this project, and our bid was accepted in 2006. The actual development happened in the open and could be followed by anyone in the Gpg4win repository. Compare that to the original SE-Linux code, which was developed in secret at the NSA, published in 2000, and merged into upstream Linux only three years later.

One of our goals was to avoid proprietary development tools entirely by cross-building the system. This required a lot of work to make the dozens of included software projects cross-buildable. To make this verifiable, the documentation clearly explains how to use a Debian system to build a Gpg4win installer from scratch. Of course, not everything worked as expected. In particular, the included KDE-based Kleopatra key manager took a long time to become ready for cross-building; we achieved this only recently. To keep build times in check we also use some pre-compiled packages of standard free software libraries, but these are by now in the minority.

The c't article may be read as if the BSI produces the binary version. This is definitely not the case. Almost all releases downloadable at gpg4win.org have been built on one of the machines located in my office. The included KDE and Kleopatra packages have been pre-compiled by KDAB in Berlin or by Intevation in their offices. Granted, I can't vouch for the KDE code, but I can't do that for the Pango code either, which we are currently using as a pre-compiled binary. And can I be sure that the Debian system I use for development has really been built from the published sources? I can only assume that there is no backdoor in any of the software used to bootstrap the installer build.

Second, and to continue the last argument: is it actually possible to check the source code? The number of source lines in Gpg4win is immense: more than 3 million lines of code are built, and this does not include the pre-compiled packages like Pango, Cairo, and the huge Qt and KDE libraries. How can malicious code be detected in that amount of text? It is far too easy to slip malicious code in (for example, somewhere in the 280,000 lines of shell code).
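To make the scale concrete, here is a minimal sketch of such a line count. The directory and file names are invented stand-ins; the real tally would run over the full multi-million-line Gpg4win source bundle:

```shell
# Mock source tree standing in for an unpacked Gpg4win source bundle
# (the real tree is vastly larger; all names here are made up).
mkdir -p loc-demo/a loc-demo/b
printf 'line1\nline2\n' > loc-demo/a/x.c
printf 'line1\n' > loc-demo/b/y.sh

# Tally lines over every file -- the same kind of count that yields
# the "more than 3 million lines" figure for the real source.
find loc-demo -type f -print0 | xargs -0 wc -l | tail -n 1
```

On the real tree the final `total` line is where the millions come from; note that the count says nothing about what those lines do, which is exactly the problem.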

For many years during GnuPG development I checked every line of the diff between releases to have a chance of noticing strange code. Eventually I gave up on this: it is no longer feasible and, worse, the OS and the toolchain would also need to be checked if one wants to substantially increase the trust in the software. It is simply not doable anymore. We need to trust our developers to do the Good Thing(tm). Thus we develop in public. This at least increases the probability that malicious code is kept out.
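The release-to-release diff review described above can be sketched in shell. Everything below is a toy stand-in: the two directories play the role of unpacked, signature-verified source tarballs of consecutive releases, and the version numbers are invented:

```shell
# Two mock source trees standing in for unpacked, signature-verified
# tarballs of consecutive releases (names are made up for illustration).
mkdir -p release-1.0/src release-1.1/src
printf 'int main(void) { return 0; }\n' > release-1.0/src/main.c
printf 'int main(void) { return 1; }\n' > release-1.1/src/main.c

# The unified diff between the trees is the text a reviewer would have
# to read line by line (diff exits non-zero when the trees differ).
diff -ru release-1.0 release-1.1 > review.diff || true

# A rough measure of the review burden: the changed-line count.
grep -c '^[+-][^+-]' review.diff
```

On this toy example the burden is two lines; on a real release the same count can easily run into tens of thousands, which is why this approach stopped scaling.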

Last, I have to ask why the authors suggest compiling the software yourself, only to then run it on a closed-source, non-verifiable OS, delivered by a company which has a secret spying partnership with the NSA.

The article goes on to describe the problems the authors experienced compiling TrueCrypt for Windows. This requires Visual Studio 2008, a separate assembler, and even an additional 20-year-old C compiler. All of them are proprietary and would thus be able to insert all kinds of spying code into the resulting executable. For 64-bit Windows the authors finally suggest that it is better to use the pre-compiled TrueCrypt drivers.

Isn't it like protecting the gate to your town with barbed wire and expensive locks but hiring the Daltons to guard the fence?

One response

  1. Walther says:

    If an expert like Werner gives up on reviewing updates via diffs, it clearly shows that we are moving much too fast: our gait becomes insecure and we will stumble.
    We are drowning in new releases: programs developed from scratch with functions the old ones handled well, yet another meta-surface layered over the functional code, or simply a new GUI to keep a user busy. Because of that, no ordinary user like me will ever have a chance to get his head out of the mud.
    Only by using a program for a long time will a user detect its bugs and notice abnormalities and suspicious slight changes. Knowing the next release comes next month, it makes no sense to check my current software in depth.


CC-BY-SA 3.0
Copyright 2012 Werner Koch. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.