bzr commit emails

When I started using bzr, I could not find an easy way to get commit emails. The following setup has been running for more than a year now without any problem.

Get the latest revision of bzr-hookless-email:
bzr branch lp:bzr-hookless-email

Upgrading to the latest revision is very simple: go into the directory containing your branch and run bzr pull.

Now, you need a cron job “inspecting” your repository. I would advise setting up an /etc/cron.d/bzr-hookless-mail entry like this:

*/5 * * * * root /srv/bzr/bzr-hookless-mail/ -e -r /srv/bzr

This cron job checks every 5 minutes whether there is a new revision in /srv/bzr. If there is one, the diff is sent by email.

You will receive a mail like this:

revno: 121
committer: Lionel Porcheron
branch nick: test
timestamp: Thu 2008-09-25 12:17:06 +0200
my commit message
[full diff]

Setup your Debian/Ubuntu repository with reprepro

For several years, I have had Ubuntu/Debian repositories on my servers for custom packages and/or local backports. I used to maintain hand-made repositories, until mrpouit introduced me to reprepro. It covers almost all the features you can expect from a package repository, and it is quite easy to set up. Here is a quick installation walkthrough on Ubuntu Hardy.

# apt-get install reprepro
# mkdir /srv/reprepro
# cd /srv/reprepro
# mkdir conf dists incoming indices logs pool project tmp

Let’s configure our repository:
# cd conf
We will have three files:
* “distributions”: the distributions the repository supports
* “incoming”: what to do with the incoming directory
* “uploaders”: uploader authorizations

Let’s begin with uploaders. We will make our life easy: all uploaders have SSH access to the server, so we simply restrict access with Unix file system permissions on the directory. You could instead restrict uploads by GPG key, but we will not do that here. Here is our uploaders file:

allow * by unsigned

Here is our incoming file:

Name: default
IncomingDir: incoming
TempDir: tmp
Allow: hardy hardy-backports
Cleanup: on_deny on_error

Our default queue is called “default”: it takes files from the incoming directory, uses “tmp” as its temporary directory, and accepts uploads for hardy and hardy-backports.

Let’s have a look at the distributions file. It has one section per distribution your repository supports. Here is a sample section for “hardy”:

Origin: Alveonet
Label: Alveonet
Suite: hardy
Codename: hardy
Version: 8.04
Architectures: i386 amd64 source
Components: main
Description: Alveonet specific (or backported) packages
SignWith: xxxxxxxxx
DebOverride: ../indices/override.hardy.main
UDebOverride: ../indices/override.hardy.main.debian-installer
DscOverride: ../indices/override.hardy.main.src
DebIndices: Packages Release . .gz .bz2
UDebIndices: Packages . .gz .bz2
DscIndices: Sources Release .gz .bz2
Contents: . .gz .bz2

For the full explanation, refer to the reprepro manual. Most of the parameters are self-explanatory, and this is the recommended configuration.

Let’s create our GPG key. Be warned that you have to do this as root, not via sudo, otherwise the key will end up in your own home directory instead of root’s.

# gpg --gen-key

Answer the questions with the entity that runs your package repository, then insert your key id (found with gpg --list-secret-keys) in the distributions file.

You have to create the override files (for now, we will leave them empty):

# cd /srv/reprepro/indices
# touch override.hardy.main
# touch override.hardy.main.debian-installer
# touch override.hardy.main.src

You can verify that everything works by exporting the (still empty) repository:

# reprepro -Vb /srv/reprepro export

You should see on your console something like the following:

Exporting hardy...
generating Contents-i386...
generating Contents-amd64...
Successfully created './dists/hardy/'
Exporting hardy...
Created directory "./dists/hardy"
Created directory "./dists/hardy/main"
Created directory "./dists/hardy/main/binary-i386"
Created directory "./dists/hardy/main/binary-amd64"
Created directory "./dists/hardy/main/source"

Uploading could not be easier. After building your package, copy into the incoming directory the “.changes” file and all the files it references (the .dsc, .diff.gz and .orig.tar.gz), then run:

# reprepro -Vb /srv/reprepro processincoming default
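Once packages have landed in the pool, clients can point apt at the repository with a sources.list entry along these lines (the hostname and path here are placeholders, and this assumes you serve the repository root over HTTP):

```
deb http://packages.example.com/reprepro hardy main
deb-src http://packages.example.com/reprepro hardy main
```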

munin: migration from a 32bit to a 64bit host

When you migrate your munin installation from a 32-bit to a 64-bit host, you have to dump and restore all your RRD files. That sounds like a pain, but it is actually easy to do ;-). Some months ago we migrated our munin/nagios server at work from an old 32-bit machine to a brand new rack server running 64-bit Ubuntu 8.04.1; we used the following scripts and kept all our history.

Here are the two scripts (you need some disk space to perform the dump/restore). I assume you have already copied the content of your “old” server to the new one. On Ubuntu/Debian servers, RRD files usually live in /var/lib/munin.

First, the dump script:

for f in `find /var/lib/munin -name '*.rrd' -print` ; do
    xml_file=`dirname $f`/`basename $f .rrd`.xml
    rrdtool dump "$f" > "${xml_file}"
    chown root:root "${xml_file}"
done

The import script:

for f in `find /var/lib/munin -name '*.xml' -print` ; do
    rrd_file=`dirname $f`/`basename $f .xml`.rrd
    mv -f "${rrd_file}" "${rrd_file}.bak"
    chown root:root "${rrd_file}.bak"
    rrdtool restore "$f" "${rrd_file}"
    chown munin:munin "${rrd_file}"
done
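As a side note, the dirname/basename juggling in these loops is just suffix replacement; plain shell parameter expansion does the same job (the path below is only an illustrative example):

```shell
# swap a .rrd suffix for .xml without dirname/basename
f=/var/lib/munin/localdomain/load.rrd
xml_file="${f%.rrd}.xml"
echo "$xml_file"
```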

Using listadmin to manage mailman

I’m managing a bunch of mailman lists for work and out-of-work activities. The mailman web interface is great, but sometimes it is just slow to have to go through it. It is unproductive when you have several pending messages and you have to click through all the queues. Fortunately, listadmin is there :)

Setup is easy on Ubuntu or Debian:

$ sudo apt-get install listadmin

Configuration is not difficult either: it lives in ~/.listadmin.ini. Here is a sample configuration:

password secret
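The line above only sets the admin password; a slightly fuller, purely hypothetical ~/.listadmin.ini could look like this (the list addresses and URL template are placeholders for your own mailman server):

```
# hypothetical example -- adjust addresses and URL to your server
password secret
default skip
adminurl https://{domain}/mailman/admindb/{list}
mylist@example.com
otherlist@lists.example.com
```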

Launching with the command line:

lionel@brehat:~$ listadmin
fetching data for ... nothing in queue
fetching data for ...
[1/1] ============== =====================
Subject:  Faire de l'argent en ligne - Plus de 500 euro/jour! ba
Reason:   Message avec destination implicite                   Spam? 0
Approve/Reject/Discard/Skip/view Body/Full/jump #/Undo/Help/Quit ? D
Submit changes? [yes] yes

By default, listadmin goes through all the lists in your configuration file. Note that you can have several lists on several hosts. You can limit the run to certain lists or hosts by appending a list name or a regexp to the command line. In my example, I just limited it to one host.

Ubuntu install party at Toulouse

Some of you may have noticed the announcement on the well-known linuxfr website: there will be an Ubuntu party near Toulouse (in Blagnac) next Saturday. During this party, you will be able to:

  • Install Ubuntu on your computer. People from Ubuntu-fr and Toulibre will be delighted to help you install our favourite OS.
  • Attend conferences about Ubuntu and FOSS in general.
  • Talk about Ubuntu and FOSS.
  • Buy goodies :)

I will give three presentations (damn, I have to finish my slides!): “What’s New in Ubuntu 8.10”, “Ubuntu at Home and at Work” and “Contributing to Ubuntu” with Christophe (Ubuntu-fr president). I will post the slides when they are ready, along with photos.

You can find on Toulibre website all the details about the event.

Hope to see you on saturday!

6 months with apt-cacher

At work we started using apt-cacher a few days before the Ubuntu 8.04 (Hardy) release; I explained the setup in this blog at the time. Now that Ubuntu 8.10 (Intrepid) is about to be released, it is time to review how things have gone during the last six months.

  • Figures: attached to this post is a screenshot of our report page (you can get a similar page at http://your_cacher:3142/report.html). As you can see, we have avoided around 140 MB of downloads. We have around 40 servers running Ubuntu and around 40 desktops/laptops running different Ubuntu flavours (Ubuntu/Kubuntu/Xubuntu).
  • Feeling: beyond the figures, apt-cacher is a nice tool to use: you no longer wait for your downloads, since packages are often already in the local cache. Applying a kernel security update no longer means waiting: you download your kernel at around 30 MB/s!

We have no regrets about the small time investment we made installing and configuring this tool. We may have a look at apt-cacher-ng in the future, but for now everything is running well.

Better jesred rules

A friend of mine gave me better rules for jesred. You can now browse the repository in a browser:

regex ^http://((.*)|pool)/.*(deb|bz2|Release|Release.gpg)$ http://apt-cacher:3142/\1
regex ^http://(|pool)/.*(deb|bz2|Release|Release.gpg)$ http://apt-cacher:3142/\1

Thanks Jerôme!

Bandwidth optimization: squid, apt-cacher and jesred

At work, I now have around 50 desktops running Ubuntu and around 40 servers (including customer machines) also running Ubuntu. As you can imagine, a security update of X represents a lot of bandwidth usage! Not to mention the Hardy upgrade! We started to look at different solutions to optimize our precious bandwidth.

Some search gave:

  • Local mirror: ouch… this is a bit too much for us :)
  • squid: good, but you need to tweak your squid installation too much to keep the .debs in the pool, and squid can expire .debs even when they are still valid.
  • apt-proxy/apt-cacher/apt-cacher-ng: all look good, but… you have to modify your client configuration. As I am lazy, I did not want to do that (and also because I have mobile users who should only use the cache when they are on the corporate network). Between the three, I chose apt-cacher, just based on some reading on the web… the others may be just as good!

We settled on the combination squid + apt-cacher + jesred. Let’s have a look at each component:

  • apt-cacher: caches .debs and Packages/Sources indices. You can also import data from another source (for example, from a CD-ROM).
  • squid: THE proxy; we use it as a transparent proxy in our case.
  • jesred: rewrites URLs in squid, redirecting Ubuntu archive accesses to apt-cacher.

The installation described below was done on Ubuntu 8.04. The machine is a Xen virtual machine (I’ll talk about Xen another time ;-)). All the software comes from the Ubuntu repositories: squid is in main, the other packages are in universe (make sure universe is enabled). Installation and configuration are really easy!

squid installation

# apt-get install squid

squid configuration

Edit /etc/squid.conf and add, in the ACL definitions section:
acl mylan src

Allow traffic from your network:
http_access allow mylan

You can now test your squid. It should be operational.
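Put together, the two squid.conf additions might look like this (the address range is only a placeholder for your own LAN):

```
# ACL definitions section
acl mylan src 192.168.0.0/24

# http_access rules, before the final "http_access deny all"
http_access allow mylan
```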

apt-cacher installation

# apt-get install apt-cacher

I just changed the admin_email value in /etc/apt-cacher/apt-cacher.conf.

As a quick test, set the http_proxy environment variable and try to use apt. Everything should go through the cache (check the logs).
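For example, something along these lines (apt-cacher-host is a placeholder for your cache machine; 3142 is apt-cacher's default port):

```shell
# point apt at the cache for this shell session only
export http_proxy=http://apt-cacher-host:3142/
echo "$http_proxy"
# then run e.g.: apt-get update   (and watch apt-cacher's logs for hits)
```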

jesred installation

# apt-get install jesred

jesred configuration

Edit /etc/jesred.acl to authorize your network (just add your LAN at the end of the file).

Edit /etc/jesred.rules and add:
regex ^http://((.*)|pool)/.*$    http://localhost:3142/\1
regex ^http://(|pool)/.*$    http://localhost:3142/\1

I have also added two abort rules in order to keep upgrade-manager working:
abort .gpg
abort ReleaseAnnouncement
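You can sanity-check such a pattern outside jesred with grep -E; the rough equivalent below uses a simplified pattern of my own and a made-up pool URL, just to show the idea:

```shell
# would this URL be caught by a pool-matching rewrite rule?
url="http://archive.example.com/ubuntu/pool/main/h/hello/hello_2.2-1_i386.deb"
echo "$url" | grep -Eq '^http://[^/]+/.*/pool/.*\.deb$' && echo "rewritten"
```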

Last but not least, the glue between all the elements:

Edit /etc/squid.conf and add:
redirect_program /usr/lib/squid/jesred

Finished! Now your squid transparently redirects all matching Ubuntu archive requests to apt-cacher. Happy installations and upgrades!

Mandatory Ubuntu 8.04 LTS release post

I guess everybody has already read it, but Ubuntu has published a new release: 8.04 (8 for 2008, 4 for April). Note that this is an LTS release (Ubuntu and Ubuntu Server only; Kubuntu and other derivatives are not long term support releases). As a result, you can upgrade from Ubuntu 7.10 (aka Gutsy) and 6.06.2 (aka Dapper).

I have been running 8.04 on my laptop and on my personal servers for several months now, and it runs quite well. This bug in the Ubuntu kernel made my life at work a bit more difficult, but it should hopefully be fixed in 8.04.1 (due July 3rd).

Now, it’s time to be intrepid!