Stories tagged PlanetDebian

Open Game Art did it right; tags=Debian, Web, Unknown-Horizons, FOSS

Open Game Art is a newly started site for exchanging free artwork. While one can easily get the impression that there are loads of such sites around, Open Game Art is one of the very few that is actually done right.

As a member of the Debian Games Team and the Unknown Horizons project I have way too often found myself searching the web for good artwork. I've also reported on my troubles once before.

There are quite a few sites like Free Sounds around offering free artwork -- but only free as in beer, as the saying goes, not as in speech, which of course is really unhelpful for FOSS projects. And even the sites that do have free content often only tell you the license on some special piece of art's details page.

Open Game Art is quite different. All the licenses you may choose as a contributor are free (both in Debian and in FSF terms), and the license is available as a search filter, so you can find artwork that fits your project's licensing policy. This list -- and that's another thing I really like about the site -- offers a choice among common licenses: next to the copyleft class of licenses there is a fair share of more liberal licenses like my personal favourite, the zlib license.

And because such a site is only as good as the amount and quality of its data, I've started sharing some recordings. I'm still really new to audio recording, so I guess it'll take some time for me to become really good. I'm considering writing up some of my experiences and lessons learned here.


-- Christoph Egger <christoph@christoph-egger.org> Fri, 19 Mar 2010 19:23:33 +0100

Another piece of well done software; tags=Debian, Spaceshooter, Programmieren, FOSS

As I really liked explaining why I think Open Game Art is a good project, I decided to start a small series on well done free projects (well, not only software ones). This time SFML gets to be the one.

SFML is, as the name already says, a Simple and Fast Multimedia Library, written in C++ but providing bindings for a whole bunch of other languages like Python, Ruby, C, D and others. Debian currently provides the original library as well as the C and Python bindings, maintained by the Games Team and myself. As a side remark, SFML also uses my favourite license, zlib.

What I really like about SFML is the readable code all through the project. Every time I was unsure what some function does, having a look at the actual implementation (plus some OpenGL and X11 knowledge) turned out to be quite satisfactory. This is, of course, aided by the fact that SFML's development is driven by a single developer, Laurent Gomila.

On the weaker side, I'm still hoping the to-be-released 2.0 version of SFML will introduce something like a stable API, which it currently lacks (although the API has settled, and recent updates no longer bring changes as huge as those from 1.2 to 1.3). SFML also uses hand-made Makefiles for building (now at least supporting DESTDIR -- in a somewhat non-standard way) and bundles the usual load of embedded libraries, which results in its current load of patches.

For a nice time burner, make sure you take a look at the Python binding's snake-like clone. It clearly misses some important aspects of a full game, but it's nice nonetheless. I have a (not-quite) small SFML-based project myself, a game forward-ported from my old DirectX days; unfortunately it's not yet playable again and rather stalled at the moment due to lack of time.

So much for SFML. If you feel like it, feel free to join me in writing about well done pieces of software -- or just about how you think it should™ be done, and tell us where you found it happening.


-- Christoph Egger <christoph@christoph-egger.org> Wed, 31 Mar 2010 21:45:53 +0200

[Review] AI Touchbook; tags=Debian, Linux, FOSS, Uni

With my primary working computer, a Lenovo Thinkpad, going into repair at the end of December, I finally got around to ordering one of those TouchBook ARM-based netbooks I had been looking at for some time. After some processing time it finally shipped in April and arrived here last Monday -- time to write up my first impressions.

Some words about the hardware. The TouchBook ships with a so-called "Beagle Board" featuring an OMAP3 processor (an ARM Cortex-A8 running at 600MHz), 512MiB of RAM and an 8GiB SD card for storage. It has an 8.9" touch screen and comes with USB sticks for wireless and Bluetooth connectivity. The display part contains all the needed hardware and is detachable from the bottom part, which is just a keyboard sitting on the secondary battery. You can open the top to get at 4 internal USB ports (3 USB-A and one Mini-USB), 2 of which are already occupied by wireless networking and Bluetooth.

First experience

The TouchBook comes with a US power adaptor only, so when I got the device I was running around for some tiny adaptor to get the plug into a normal EU power outlet (it's incredibly hard to get one for this direction, while it's easy to get travel adaptors to plug EU hardware into various different outlets!).

When you finally boot it, the first thing you'll notice is the touch interface for the bootloader. That's quite a difference from the all-text-based old grub! The shipped SD card offers 3 operating systems: a custom Linux that might well be interesting to the average user, an Ubuntu Karmic that is really OK for a Debianoid hacker -- both running an XFCE desktop -- and an Android that is really slow and doesn't seem to be good at anything. Needless to say I stuck with the Ubuntu for now.

What to not expect

Well, this is a 600MHz CPU with half a gig of RAM running off an SD card. So don't expect it to be good at anything that profits from today's high-end hardware.

The good Points

First of all, I have to admit that the touch screen is a neat interface, way superior to the touchpad area you'll normally find on a notebook -- at least if you use the stylus. It's quite different from the in-keyboard TrackPoint the Thinkpads have, of course.

The website claims 10h of battery life, and while I've emptied the battery much faster under certain workloads (e.g. playing cards), it does hold that promise with emacs fired up in org-mode, IRCing on a server over SSH and the mandatory wireless working. Same for an always-on on-campus day, which just works.

Putting the screen on the keyboard the wrong way round gives you a touchscreen tablet with the keyboard out of your way -- an ideal configuration for playing. And I have to admit playing games like gtkballs or Aisleriot is real fun. So much fun, actually, that I'm currently wondering whether it would be feasible to get OpenPandora stuff working on it.

What I'm really missing

There are two things really lacking from the device which would make it (in my personal opinion at least) a whole lot better: a simple Ethernet controller I could use to go online when sitting in the server room doing some maintenance without taking my WRT with me, and some easy-to-reach slot to store the stylus when not using it (currently I keep it in my wallet).

Then there's something that may be a kernel bug: the wireless is unable to find any new access point after disconnecting from one and walking out of its reach. Force-unloading the kernel module and waiting 30 minutes worked for me multiple times, but that's plainly unacceptable.

Finally, there are some minor glitches. The shiny red cover gets dirty every time you touch the thing, and the keyboard is really small (what a surprise on a 9" device) and has some of the special keys (like the Home key) located at unusual spots (Page-Up/Down only available through the Fn modifier). Shift and End on the right side are also labeled opposite to their actual function (at least on Ubuntu).

The last ugliness is that the top-part battery only charges while the device is running -- which means you'll have the TouchBook running all night to get the battery charged -- and that the battery monitor doesn't work at all (at least in the current version of the operating systems).

Where to go now

I've not yet gotten around to really playing with the operating system (apart from installing wicd, rxvt-unicode and awesome to get the most needed parts of my working environment). As I'm a Debian Developer I'll definitely need Debian running on it (although I was told it'll be slow with software compiled for armv4te), and, as it needs to be running all night anyway, I'll try out Gentoo once another SD card for experiments arrives.

Secondly, there's currently no usable conforming Common Lisp implementation in Debian for armel as far as I can tell. As arm was already working, it shouldn't be that hard -- let's see if I can change that, and feel free to join me!

Final Notes

I was looking for some mobile-ish note-taking device and remote ssh terminal for university -- which the device clearly can do, even for 10h away from any power plug -- while also being a non-standard, non-x86 device to toy with. (It's actually my second armel device, next to the Sheeva Plug mounted on my window board.)

As a final remark: this blogpost was written on the TouchBook, hacking some markdown into emacs while travelling by train to Erlangen (where I study) on Sunday night after having read some chapters of Cory Doctorow's Little Brother on my eSlick e-book reader, and finished later in my room.

Maybe I'll find some time to write a review for this device as well one day!


-- Christoph Egger <christoph@christoph-egger.org> Mon, 26 Apr 2010 10:32:35 +0200

[FAIL] Security; tags=Programmieren, Linux, FOSS, Rant, Fail

I'm all for security and really like encryption (my notebook's harddrive is encrypted, I've recently got a GPG smartcard, ...), but sometimes you see big fails where security is attempted yet doesn't actually secure anything and only hinders the legitimate user.

Today one of these candidates ate way too much of my time again. I'm getting more and more used to GNU Emacs and am currently experimenting with emacs-jabber, therefore copying my jabber accounts over from psi. As with all passwords you never type in, I couldn't remember some of my jabber passwords -- no problem, psi has to store them, so it should be easy to get them back, right?

Well, actually not. The configuration file (XML) has a password entry, but all that was in it was obviously hex-encoded numbers. These turned out to be 16-bit chunks of characters XOR-ed against the JID. So you have to read them in chunks of 16 bits, XOR them against the JID, and out comes the password.
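Reconstructing the scheme makes the point nicely. Here is a hedged sketch in Python -- the function names are mine, and the exact chunking and byte order psi uses may differ; the round trip below only demonstrates the general idea that XOR obfuscation is trivially reversible:

```python
def encode_psi_password(password, jid):
    # Hypothetical re-implementation of the described obfuscation:
    # each character becomes a 4-hex-digit (16-bit) chunk, XOR-ed
    # against the corresponding JID character (cycled).
    return ''.join('%04x' % (ord(ch) ^ ord(jid[i % len(jid)]))
                   for i, ch in enumerate(password))

def decode_psi_password(hexstring, jid):
    # Read the hex string back in 16-bit chunks and undo the XOR.
    chunks = [hexstring[i:i + 4] for i in range(0, len(hexstring), 4)]
    return ''.join(chr(int(c, 16) ^ ord(jid[i % len(jid)]))
                   for i, c in enumerate(chunks))
```

Since XOR is its own inverse, decoding is exactly as cheap as encoding -- which is why this buys no security.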

Time to recapitulate what this security bought. I've written a hacky 10-line C program that can reliably retrieve passwords from any config file I might come across. Apparently you can do the same in 2 lines of perl. Ergo: no security at all was added.

Next question: what did it cost? An hour or so of researching the obfuscation and trial-and-erroring my way to the right program fragment. For nothing gained at all. Fail.


-- Christoph Egger <christoph@christoph-egger.org> Wed, 02 Jun 2010 20:23:08 +0200

The erlang experience; tags=Programmieren, Linux, FOSS, Functional, HowTo, Erlang

This week I had to write a little tool that would collect input on different channels: via socket/netcat, via an HTTP server, ... That calls for a parallel design -- maybe a good occasion to write something real in Erlang. While Erlang itself was really nice to write -- I do like Prolog as well as the bunch of functional languages -- doing networking in Erlang seems a bit special; the interfaces just aren't thin wrappers around the libc stuff.

Getting a Socket Text interface

What sounds like an easy challenge was actually harder than expected. All I found was some way to pass binaries representing Erlang code over a socket and evaluate it remotely. While it's nice that such things are as easy to do as they are, it doesn't help me with my task of moving simple strings.

start() ->
    {ok, Listen} = gen_tcp:listen(51622, [binary, {packet, 0},
                                         {reuseaddr, true},
                                         {active, true}]),
    spawn(fun () -> accept_loop(Listen) end).

accept_loop(Listen) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    handle(Socket),
    accept_loop(Listen).

handle(Socket) ->
    receive
        {tcp, Socket, Bin} ->
            io:format("~p~n", [binary_to_list(Bin)]),
            handle(Socket);
        {tcp_closed, Socket} ->
            ok
    end.

So the socket is now up and receives text as wanted. However, as we are already writing a parallel program, it would be nice to be able to handle multiple socket connections in parallel, right? For that we just need to add a spawn() at the right place. The right place is not around handle(Socket) but around the recursive accept_loop(Listen) call, because the process that called accept will receive all the tcp messages.

This last part was quite obvious after finding the documentation of the {active, _} property for the socket: true means you'll receive all data from the socket as Erlang messages, once delivers one packet and then waits until the socket is activated again, and false requires explicitly calling a receive function -- the latter would also have made spawning off handle(Socket) possible.

The web server

OK, we also want a webserver. We do not want to run some web application inside Apache or the like, just handle some POST/GET requests and simple pages. Erlang provides a built-in httpd with a mod_esi that calls a function depending on the URL used. It doesn't do anything fancy like templating or DB backends, just takes strings and builds the HTTP answers.

Unfortunately there are no examples around and basically no one seems to be using this combination (apart from hacks like mine, probably). So when I needed to get some additional information into the handler function (a Pid to connect to some service), I, as a novice, just couldn't find a way. Asking on IRC, the solution was rather simple: just use Erlang's process registry. For more complex stuff, gproc might prove useful here.

Result

I guess I've gotten a huge step further in programming Erlang now. The manpages are easily found by your search engine -- for Python I have to either walk through the (well structured) online documentation or hunt for the right link in the search results; for Erlang they're typically the first hit. Joe Armstrong's books have also proven useful. The most difficult part probably is getting past all the nice extras you can do (transferring functions over sockets et al.) and finding out how to do the thing you actually need.


-- Christoph Egger <christoph@christoph-egger.org> Sun, 05 Dec 2010 12:58:23 +0100

Debian GNU/kFreeBSD; tags=Debian, Programmieren, FOSS, kFreeBSD

So when I was travelling to my parents' for Christmas, it looked like I'd have limited computer access. My netbook is quite OK for reading mail but not really usable for any real hacking. And my trusty Thinkpad (Z61m) was oopsing when X was running, so not much more usable either. But as some live CDs lying around here were working well, I decided this could be fixed by a reinstall. And as I was reinstalling anyway, I decided I could just choose kfreebsd-amd64. That turned out to be quite an entertaining decision, with lots of stuff to hack away on.

wireless

Bad news: there's no wireless support in Debian GNU/kFreeBSD at the moment. This problem is tracked as Bug #601803, so for wireless internet you will need a (plain) FreeBSD chroot. I haven't tried this myself yet -- busy figuring other stuff out.

SBCL

Having a FreeBSD chroot, I decided to give SBCL on GNU/kFreeBSD another try after having failed to get it working in a VM some time ago. With quite some help on SBCL's IRC channel I managed to build a patch that enables building (you additionally need to force an :os-provides-dlopen onto the feature list).

There's currently no multi-threading working, so I have a project for the rest of the holidays (well, lots of other stuff to do as well ;))

Audio

Some more user-related stuff now. As it is this time of the year, I wanted to listen to some 27c3 streams, so I needed working audio. However, there was no OSS device available. Turned out you just need to kldload the right module (here snd_hda) to get sound working.

Volume was rather low although the hardware controls of the soundcard were at max. As that's all OSS, there's no point looking for alsamixer. Turns out aumix can do that here.

IPv6 aiccu stuff

Installing aiccu, copying the config in and starting it did not work either. I had already tried to do that from within the FreeBSD chroot (which doesn't work for some reason) until I discovered that just loading the if_tun kernel module solves the aiccu-on-Debian issue quite well. To get a default route up, the last step was finding /lib/freebsd/route again -- /sbin/route is a wrapper around it, abstracting away differences in BSD route but not supporting IPv6.


-- Christoph Egger <christoph@christoph-egger.org> Wed, 29 Dec 2010 00:57:37 +0100

CSSH but without X; tags=Debian, Linux, FOSS, HowTo

There are many ways to run some commands simultaneously on multiple hosts like cssh or dsh. They come handy for example when you are installing software updates on a set of hosts.

dsh is a rather simple commandline tool allowing you to execute a command over ssh on multiple hosts. However, it doesn't allow any interactive input -- so you can't look at the packages about to be upgraded and press y to accept, and you can't go through debconf prompts or similar.

This is solved by cssh, which opens an xterm for every host plus an input area that is broadcast to all of them. This works really well -- you can execute your update on all hosts and still make individual adjustments as needed: switch focus from the broadcast input to one of the terminal windows, and anything you type goes just there.

Now cssh has a big disadvantage: it requires a running X server (and doesn't play too well with a fullscreen window manager). Requiring X is quite a blocker if you need to run that ssh multiplexer on a remote host, for example when the firewalling doesn't allow direct connections. Fortunately you can make tmux behave just as we want -- in a simple terminal:

First you need a script spawning the ssh sessions in separate tmux panes and directing input to all of them -- here called ssh-everywhere.sh (you could also write a tmux config, I guess):

#!/bin/sh
# ssh-everywhere.sh
# expects the target hosts in $HOSTS, e.g. HOSTS="alpha beta gamma"
for i in $HOSTS
do
  tmux splitw "ssh $i"
  tmux select-layout tiled
done
tmux set-window-option synchronize-panes on

Now start the whole thing:

tmux new 'exec sh ssh-everywhere.sh'

And be done.

Update

If you want to type into just one pane (on one host), you can do that as well: C-b : set-window-option synchronize-panes off, then move to the right pane (C-b + arrow keys).


-- Christoph Egger <christoph@christoph-egger.org> Sun, 20 Feb 2011 17:23:04 +0100

Thoughts on secure software archives; tags=Debian, Web, Linux, FOSS, Security

From the java point of view

Recently I had to get some Scala tool working correctly. Unfortunately there are basically no such packages in the Debian archive at all, so I had to use maven to install them (or download and install manually). For a highly paranoid person, downloading and executing code from the internet without any cryptographic verification, one piece after the other, practically drove me nuts. Looking a bit deeper I noticed that some of the software in maven's repository has signatures next to it -- signed by the author or release manager of that specific project.

Why secure sources matters

With my experience in mind I got some input from other people. One of the things I was told is that some Scala tools just aren't security critical -- they're only installed and used as the current user. In my opinion this is, for my desktop system, totally wrong. The important things on my private computers are my GPG and SSH keys as well as my private data. For messing with those, no superuser access is needed at all.

Comparing to the Common Lisp situation

Being a Common Lisp fan, I of course noticed basically the same problem when installing Common Lisp libraries. Here the situation in Debian is quite a bit better -- and I'm working in the pkg-common-lisp team to improve it even more. Common Lisp has a maven-alike tool for downloading and installing dependency trees called quicklisp -- likewise without any cryptographic verification. However, there's light at the end of this tunnel: there are plans to add GPG verification of the package lists really soon.

Comparing the maven and the quicklisp model

So there are basically two different approaches to be seen here. In maven the software author confirms the integrity of his software with his signature, while in quicklisp the distributor confirms that all users get the same software he downloaded. Now the quicklisp author can't and won't check all the software that is downloadable using quicklisp. That wouldn't be doable anyway, as there's way too much software for a single person to check.

Now, in some kind of perfect world the maven way would be vastly superior, as there's end-to-end verification covering the full way the software takes. However, there's a big problem: I don't know any of these authors personally, and there's no reason I should just trust any of them.

Now compare this to the distribution/quicklisp model. Here I would only have to trust one person or group -- the quicklisp team -- to benefit from the crypto, which might be workable based on karma inside the community using it. However, I don't gain any end-to-end guarantee of the software's integrity.

However, for a forgery to succeed, some piece of software would have to be tampered with between upstream and the quicklisp team, and the attacker would also have to intercept my download from the same address, so that I get a source matching the checksum from quicklisp -- assuming the quicklisp team does indeed know the correct website. Additionally, I get the confirmation that all other quicklisp users receive the same source (if the quicklisp people are honest, of course), so nobody in the community complaining is a good indication the software is fine. For this to work, the distributor (quicklisp) of course needs a relevant user base.
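The distributor model, at its core, is just comparing the bytes you fetched against a digest the distributor published over a trusted channel. A minimal sketch in Python -- the function name and the choice of SHA-256 are mine; quicklisp's actual mechanism may differ:

```python
import hashlib

def matches_published_digest(path, expected_sha256):
    # Hash the downloaded file in chunks and compare against the
    # distributor's published hex digest.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

The security of the whole scheme then rests on how the digest list itself is obtained -- which is exactly what the planned GPG signing of the package lists would protect.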

Relevance for Debian

So how do conventional Linux distributions like Debian fit in here? Ideally we would have maintainers understanding and checking the software and confirming its integrity with their private key -- or at least knowing their upstreams, having a secured way of getting the software from them and a trust relationship with them. Of course that's just wishful thinking for complex and important software (think libreoffice, gcc or firefox). Maintainers won't fully understand even a lot of simpler pieces of software. And loads of upstream projects don't provide a verified way of getting the correct source code, though that's a bit better for the real high-impact projects, where checksums signed by the release manager are more common than in small projects.

A misguided thought at the end

As I'm a heavy emacs user, I like to have snapshots of current emacs development available. Fortunately, binary packages are available from a Debian guy I tend to trust who is also involved upstream, so I added the key of his repository to the keyring apt trusts. Now my first thought was along the lines of "it would be really nice if I could pin that key to only the emacs snapshot packages", so he can't just put libc packages in his repository and have my apt trust them. Thinking about it again, though, a bogus upload of the emacs snapshot package could just as well put some binary or library in front of the real one in the system path, which would be about as bad.


-- Christoph Egger <christoph@christoph-egger.org> Thu, 12 May 2011 21:19:49 +0200

Quick notes about PostgreSQL; tags=Programmieren, DBs, PostgreSQL

Imagine you have an old PostgreSQL database. Further imagine its encoding is set to something like LATIN1, but some PHP web application has put UTF-8 strings into it. Now what do you do when some Python application actually respects the encoding and recodes the DB content from latin-1 to UTF-8, giving you garbage? It turns out you can easily trick PostgreSQL into believing it is UTF-8 already:

UPDATE pg_database SET encoding = 6 WHERE datname = 'foo';

For a summary of these magic encoding numbers, the PostgreSQL manual is useful.
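The garbling described above is a reversible mis-decode, which a few lines of Python can demonstrate (the sample string is of course made up):

```python
# -*- coding: utf-8 -*-
original = u"K\u00e4se"                # "Käse", what the PHP app meant to store
utf8_bytes = original.encode("utf-8")  # the raw bytes that ended up in the DB
# A client honouring the declared LATIN-1 encoding mis-decodes those bytes:
garbled = utf8_bytes.decode("latin-1")  # mojibake: u"K\u00c3\u00a4se"
# Because latin-1 maps every byte to a codepoint, the mis-decode is lossless
# and can be reversed:
repaired = garbled.encode("latin-1").decode("utf-8")
```

Flipping pg_database.encoding works for the same reason: the bytes on disk were valid UTF-8 all along, only the label was wrong.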


-- Christoph Egger <christoph@christoph-egger.org> Tue, 17 May 2011 10:30:22 +0200

Maintaining kFreeBSD buildds since one month; tags=Debian, FOSS, kFreeBSD, Porting

On April 30 I took over maintenance of Debian's kFreeBSD autobuilders. That means getting something like 4.5k e-mails this month (gladly no need to sign all those 4k successful builds any more!), filing nearly 30 RC bugs (quite a lot of which got fixed within hours of filing, wow!), investigating some of the stranger build failures, and such. In general it turned out to be quite some fun.

It's quite interesting which libraries turn out to be rather central to the archive. I wouldn't have guessed that an uninstallable libmtp would cause a reasonable fraction of builds to fail -- including packages like subversion.

Package builds failing because the disk space is exhausted may be something most of us have already witnessed, especially those who use one of these small notebook hard drives. Build failures caused by a lack of RAM are certainly imaginable as well, especially with highly parallel builds. But have you ever seen gcc fail because the virtual address space was exhausted on a 32-bit architecture?

There's also an interesting number of packages with misspelled build dependencies, which sbuild can't find and therefore can't build the package. Maybe having a lintian check for some of these problems would be a good idea?

I'm also regularly seeing build failures that look easy enough to fix -- like some glob in a *.install file for a python package matching lib.linux*. I try to fix some of these as I see them, but my time is unfortunately limited as well. Anyone interested in quick-and-easy notices about these kinds of issues? I could put URLs to build logs on identi.ca or somewhere on IRC.

There are also some really strange failures: llvm, for example, builds flawlessly on my local kFreeBSD boxes all the time, inside and outside schroot, but hangs in the same testcase every time when building on any of the autobuilders (any hints welcome!); or perl, failing on kfreebsd-amd64 selectively but consistently.


-- Christoph Egger <christoph@christoph-egger.org> Mon, 30 May 2011 15:54:23 +0200

Marking all closed bug reports "read" in a Maildir; tags=Debian, FOSS, HowTo

#!/usr/bin/python

from btsutils.debbugs import debbugs
import mailbox
import re
import sys

maildir = mailbox.Maildir(sys.argv[1], factory=False)
bts = debbugs()

for key in maildir.keys():
    message = maildir[key]
    if not 'S' in message.get_flags():
        # bug mails carry the bug number in the X-Debian-PR-Message header
        if message['X-Debian-PR-Message']:
            try:
                bugnr = message['X-Debian-PR-Message'].split()[1]
            except IndexError:
                continue
        else:
            # fall back to parsing the subject line
            test = re.search(r'Bug#(\d{6})', message['Subject'])
            if test:
                bugnr = test.group(1)
            else:
                continue

        try:
            bug = bts.get(bugnr)
        except AttributeError:
            print bugnr
            continue

        if bug.getStatus() == u'done':
            # mark the message as seen
            message.set_flags(message.get_flags() + 'S')
            maildir[key] = message


maildir.flush()

Run it like python cleanbugsmail.py ~/Maildir/.debian.bugs. Anyone aware of a better solution?


-- Christoph Egger <christoph@christoph-egger.org> Sun, 12 Jun 2011 16:53:11 +0200

Feeling young; tags=Hier, Web, Unknown-Horizons, Kurios, FOSS

Looking through old files I put online ages ago, I stumbled upon an Unknown Horizons Code Swarm video I created back in September 2009. I feel more than a bit sad that this piece of software died soon after being released. Searching the web for "Code Swarm" still finds lots of old videos created back then.


-- Christoph Egger <christoph@christoph-egger.org> Wed, 15 Jun 2011 21:50:17 +0200

Unknown Horizons; tags=Unknown-Horizons, Linux, FOSS, FIFE

What is Unknown Horizons?

Unknown Horizons is a free strategy game I have written about from time to time and contribute to now and then. I also build packages for Debianoids. Unknown Horizons is not a new thing; it has been around for some years now. It is currently getting a boost from 3 very active GSoC students and is evolving steadily.

Preparing for 2011.2

Unknown Horizons is preparing for another release, 2011.2. As part of the effort I have updated the official Unknown Horizons Debian and Ubuntu weekly archives with new rc builds, to get some more testing in before the release happens. Please test!

Why isn't it in Debian main yet?

Unknown Horizons hasn't been uploaded to Debian yet. This is mostly due to 2 factors. First, the game engine Unknown Horizons builds upon, FIFE, isn't in Debian; it evolves fast without providing much of a stable interface, and therefore Unknown Horizons has traditionally had a strict dependency on some VCS state of approximately the same age -- not good for a Debian package, obviously. And second, Unknown Horizons got rid of some content that was not free (as in "main") only recently.

Personally, I'm hoping we can have packages soon, as the freeness issues are sorted out now and I haven't seen any troubling breakage with regard to the engine recently.


-- Christoph Egger <christoph@christoph-egger.org> Mon, 27 Jun 2011 19:40:56 +0200

Debconf11; tags=Debian, Linux, FOSS, kFreeBSD

I'm coming as well! Really looking forward to meeting the people from Debconf9 again -- also people from the Games Team and the buildd and kfreebsd folks. And ideally there will be some more people interested in (Common) Lisp as well; we'll see.


-- Christoph Egger <christoph@christoph-egger.org> Wed, 29 Jun 2011 22:44:41 +0200

A week of Debian GNU/kFreeBSD; tags=Debian, Programmieren, FOSS, kFreeBSD

While other people are squashing RC bugs, I used this week to fix (or investigate) some more kFreeBSD issues -- mostly looking at failed build logs and trying to fix the problems, and, after some nice fish for dinner, writing things up.

  • First issue this week was #639178, a build failure in tar I had reported earlier and hadn't managed to follow up on. After sending some findings to the bug I noticed Petr was faster and had actually found out a lot more detail. Short story: success in that test suite requires Linux behavior, and the failure on kFreeBSD is covered by what POSIX allows
  • #640156 multiarch-related changes resulted in a nonfunctional ldd, breaking the clutter-gst build
  • #640012 postfix is hard-coding kFreeBSD versions … up to 7, and therefore won't build on an 8.2 kernel. It also doesn't handle the absence of NIS on Hurd and kFreeBSD (#545970)
  • #640159 iozone3 just needed a bit of massaging to combine the FreeBSD backend with the linker flags needed for kFreeBSD
  • Installing the build-depends for openjdk-* resulted in an installation failure for some time. Looking closer, it turned out a minimal testcase was installing menu and python2.6 together. dash's test builtin wasn't working (#640334) because it relied on the intuitive but not POSIX-mandated behavior of the faccessat syscall (#640325)
  • #640341 ed decided not to build on kfreebsd-i386 in the 40 minutes between the -2 and -3 uploads, without any actual source changes. Just trying again tricked it into building, but probably someone should look into what actually went wrong
  • #640378 leveldb needed the FREEBSD_OS kind of build enabled, combined with the Linux style of linker flags (an additional -lrt)
  • #640385 owfs was failing due to some symbol difference (but otherwise building, although being a *fs ;))
  • the gcc family of packages still has some heisenbug repeatedly failing when doing regular builds on the buildds. Independent which one. Multiple times in a row. Building on my test VM or my notebook doesn't show that problem (but takes ~10h). Building on the same buildd in the same chroot with the same sbuild flags and it's still building fine.

-- Christoph Egger <christoph@christoph-egger.org> Sun, 04 Sep 2011 23:07:12 +0200

PHP love; tags=Web, FOSS, Rant, Fail

Migrating a mediawiki instance from the old server to a new box. Of course it doesn't work (returns an empty 500 error page). Of course there is no entry in error.log. Of course a grep over the config files turns up no obvious verbose/debug switch. Lovin' it


-- Christoph Egger <christoph@christoph-egger.org> Sun, 15 Jan 2012 12:33:34 +0100

Android; tags=FOSS, Rant, Fail, Android

Hardware

To make things clear: I have an Android 4.0.$recent tablet with considerably more horse-power than my Nokia N900 smartphone, so don't tell me this is due to under-powered hardware -- the Android device is 3 years newer both in hardware and software.

Background Tasks

Being somewhere with my Android tablet. The network is kind of crappy and this site again takes minutes to load. So the most natural thing to do would be something else while the site continues loading in the background. This works really well on the N900. It might work with Android. But of course when you switch to another program the browser might be shut down while you're doing something else, and when you eventually switch back to your browser, not only hasn't the site loaded, the browser has also forgotten where you were heading. Now if you followed, say, a link in an email, you might have closed the mail program long ago (or the mail program has decided to stop), and you have to find the link again and wait again for the site to load. And remember not to background the browser, or you might have to start over once more.

With the N900 Maemo smartphone I was able to load several pages in the background with whatever application in the foreground (like playing Tux Racer), so don't tell me Android has to do this to give enough power to the foreground process. If a Maemo device can load 5 pages in the background while an OpenGL game is running in the foreground, there is no reason Android, with more CPU and RAM, can't load a single page in the background while I check email.

Software installation

Can you imagine a system where you are unable to install software from your standard repository without registering an account first? After nearly two decades of Linux distributions? Maemo had this for mobile devices -- more than five years ago. Plus, on Maemo you'll easily find tons of good, free (as in freedom) and banner-ad-free software -- try that in Android's "Play Store".


-- Christoph Egger <christoph@christoph-egger.org> Sat, 20 Oct 2012 05:14:16 +0200

Generating .wot files now; tags=Web, Security, GnuPG

As you might have noticed, the original source of web-of-trust graph information went offline and probably won't come back. As a result, pathfinders like the one by Henk P. Penning are also stuck in February 2012.

As I've always found this kind of statistics interesting, I've hacked the pks2wot python script that is part of the wotsap package to use plain HKP instead of the pks client, running it against my own SKS keyserver. This seems to work well enough to do a weekly dump of the current web of trust, which can be found at http://wot.christoph-egger.org/download/. I'd be happy to hear if this is useful to anyone besides myself.


-- Christoph Egger <christoph@christoph-egger.org> Tue, 04 Dec 2012 00:12:56 +0100

RuCTFe nsaless; tags=Uni, HowTo, Security

Greetings from the FAU Security Team (FAUST), the CTF group of the University of Erlangen. We participated in the RuCTFe competition and made it to 4th place. Following is my write-up on the nsaless service, the main crypto challenge of the competition. nsaless is a Node.js web service providing a short-message service: people can post messages, and their followers receive each message encrypted to their individual RSA keys.

About the gameserver protocol

The gameserver created groups of 8 users on the service: 7 were just following the first user (and authorized by the first user to do so) while the first user sent a tweet containing the flag. The service used 512-bit RSA with 7 as public exponent. While RSA-512 is certainly weak, it's strong enough to make it infeasible to break directly.

Attacking RSA

There are some known attacks against RSA with small exponents if no proper padding is done. The most straightforward one just takes the e-th root of the ciphertext and, if the clear message was small enough, outputs that root as the plaintext. As the flag was long enough to make this attack impossible, we need a somewhat improved attack.
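For illustration, here is the naive small-message attack with made-up toy numbers (nothing from the actual service): when the plaintext is so short that Aᵉ stays below N, the modular reduction never happens and "decryption" is a plain integer e-th root.

```python
def int_nth_root(x, n):
    """Largest integer r with r**n <= x, by binary search."""
    lo, hi = 0, 1 << (x.bit_length() // n + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3                              # small public exponent, no padding
N = 2 ** 512 + 1                   # stand-in for a 512-bit RSA modulus
m = int.from_bytes(b"hi", "big")   # very short plaintext
c = pow(m, e, N)                   # m**e < N, so nothing was reduced mod N

recovered = int_nth_root(c, e)
assert recovered == m
```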

Håstad's Broadcast Attack

Reminder:

  • In RSA, given a plaintext A, the sender computes B = Aᵉ mod N to build the ciphertext.
  • Given simultaneous congruences, we can efficiently compute an x ∈ ℤ satisfying all of them using the Chinese remainder theorem.

For nsaless we actually get several such B for different N (each belonging to a different user receiving the tweet because they follow the poster). This effectively means we get Aᵉ mod N for several different N. Using the Chinese remainder theorem we can now compute an x ∈ ℤ with x ≡ Aᵉ (mod Π Nᵢ). If we use at least e different B for this, we are guaranteed that x actually equals Aᵉ in ℤ: A needs to be smaller than every N used (otherwise we would lose information during encryption), therefore Aᵉ is smaller than the product of e such moduli.

Taking now the e-th root of x over the integers, we get the plaintext A -- the flag.
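The whole broadcast attack fits in a few lines. Here is a sketch with invented toy moduli and flag, and e = 3 instead of the contest's 7 to keep it short (the mechanics are identical); the modular inverse via `pow(m, -1, n)` needs Python 3.8 or later.

```python
from functools import reduce

def crt(pairs):
    """Chinese remainder theorem for (residue, modulus) pairs
    with pairwise coprime moduli; returns x mod prod(moduli)."""
    N = reduce(lambda a, b: a * b, (n for _, n in pairs))
    x = 0
    for a, n in pairs:
        m = N // n
        x += a * m * pow(m, -1, n)   # modular inverse, Python 3.8+
    return x % N

def int_nth_root(x, n):
    """Largest integer r with r**n <= x, by binary search."""
    lo, hi = 0, 1 << (x.bit_length() // n + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
moduli = [101 * 113, 103 * 127, 107 * 131]   # three followers' toy moduli
flag = 4242                                  # plaintext, smaller than every modulus
ciphertexts = [pow(flag, e, n) for n in moduli]

# e ciphertexts suffice: flag**e < N1*N2*N3, so CRT recovers flag**e exactly
x = crt(list(zip(ciphertexts, moduli)))
assert int_nth_root(x, e) == flag
```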

Fix

Fixing your service is easy enough: just increase e to a suitable number > 8. At the end of the contest, 5 teams had fixed this vulnerability by using either 17 or 65537.

EXPLOIT

The basic exploit is shown below. Unfortunately it needs to retrieve all tweets for all users to compute the flags, which just takes too long to be feasible (at least at the end of the competition, when tons of users already existed), so you would need some caching to make it actually work. It would have been a great idea to have users expire from the service after an hour or two!

#!/usr/bin/python

import httplib
import urllib
import re
import json
import pprint
import gmpy
import sys

userparse_re = re.compile('<a [^>]*>([^<]*)</a></div>\s*<div>([^<]*)</div>')
tweetparse_re = re.compile("<div id='last_tweet'>([0-9]+)</div>")
followingparse_re = re.compile('<div><a href="/[0-9]+">([0-9]+)</a></div>')

def my_parse_number(number):
    # Turn a 256-bit integer back into its 32-byte string representation.
    string = "%x" % number
    if len(string) != 64:
        return ""
    erg = []
    while string != '':
        erg = erg + [chr(int(string[:2], 16))]
        string = string[2:]
    return ''.join(erg)

def extended_gcd(a, b):
    x,y = 0, 1
    lastx, lasty = 1, 0

    while b:
        a, (q, b) = b, divmod(a,b)
        x, lastx = lastx-q*x, x
        y, lasty = lasty-q*y, y

    return (lastx, lasty, a)

def chinese_remainder_theorem(items):
    # items: list of (residue, modulus) pairs with pairwise coprime moduli
    N = 1
    for a, n in items:
        N *= n

    result = 0
    for a, n in items:
        m = N / n
        r, s, d = extended_gcd(n, m)
        if d != 1:
            raise ValueError("Input not pairwise co-prime")
        result += a * s * m

    return result % N, N

def get_tweet(uid):
    try:
        conn = httplib.HTTPConnection("%s:48879" % sys.argv[1], timeout=60)
        conn.request("GET", "/%s" % uid)
        r1 = conn.getresponse()
        data = r1.read()
        tweet = re.findall(tweetparse_re, data)
        if len(tweet) != 1:
            return None
        followers = re.findall(followingparse_re, data)
        return tweet[0], followers
    except:
        return None

def get_users():
    conn = httplib.HTTPConnection("%s:48879" % sys.argv[1], timeout=60)
    conn.request("GET", "/users")
    r1 = conn.getresponse()
    data1 = r1.read(1024 * 1024)
    data = dict()
    for i in re.findall(userparse_re, data1)[:100]:
        userinfo = get_tweet(i[0])
        if userinfo != None:
            data[i[0]] = (json.loads(i[1].replace('&quot;', '"'))['n'], userinfo)

    return data

users = get_users()
allusers = users.keys()
masters = [ user for user in allusers if len(users[user][1][1]) > 0 ]

for test in masters:
    try:
        followers = users[test][1][1]
        data = []

        for fol in followers:
            n = int(users[fol][0])
            tweet = int(users[fol][1][0])
            data = data + [(tweet, n)]

        x, n = chinese_remainder_theorem(data)

        realnum = gmpy.mpz(x).root(7)[0].digits()
        print my_parse_number(int(realnum))
    except:
        pass

-- Christoph Egger <christoph@christoph-egger.org> Fri, 20 Dec 2013 13:59:29 +0100

[HOWTO] unsubscribe from a google group; tags=Hier, Web, Kurios, Rant, Fail, HowTo

Writing this because there seems to be no correct documentation on the relevant google websites, and it turns out to be non-trivial. Our goal here is to unsubscribe from an ordinary google group.

Mails from the google group contain the quoted footer:

-- 
You received this message because you are subscribed to the Google
Groups "FOO" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to FOO+unsubscribe@googlegroups.com.
Visit this group at http://groups.google.com/group/FOO
For more options, visit https://groups.google.com/groups/opt_out.

Seems easy enough, so let's send a mail to this FOO+unsubscribe address. Back comes an email:

From: FOO <FOO+unsubconfirm@googlegroups.com>
Subject: Unsubscribe request for FOO [{EJzZjpgFhDHd9seTdRA0}]
To: Christoph Egger <christoph@example.com>
Date: Tue, 18 Feb 2014 18:55:24 +0000 (38 minutes, 53 seconds ago)

 [Leave This Group]

Visit Go 

[Start] your own group, [visit] the help center, or [report]
abuse.

So click on the [Leave This Group] link and be done? Unfortunately not. Looking at the link you notice it's called http://groups.google.com/group/FOO/subscribe -- no token, and "subscribe"? I actually want to unsubscribe! And indeed, clicking it gets you an interface that offers to "Enter the email address to subscribe:" plus a captcha. And whatever it does, it -- of course -- doesn't unsubscribe. (My guess is it would actually work if you had a real google account associated with that email address and were logged in to it, but there's no way of verifying this, as already the first condition is false in this case.)

Now if you disable HTML completely for the email, a totally different content emerges:

Hello christoph@example.com,

We have received your request to unsubscribe from FOO. In order for us to complete the request, please reply to this email or visit the following confirmation URL:

http://groups.google.com/group/FOO/subscribe

If you have questions related to this or any other Google Group, visit the Help Center at http://groups.google.com/support/.

Thanks,

Google Groups

Still the non-functional link; however, it also mentions a different solution: "please reply to this email", which was not present in the HTML mail at all. And it works.


-- Christoph Egger <christoph@christoph-egger.org> Tue, 18 Feb 2014 20:37:01 +0100

pass xdotool dmenu; tags=FOSS, GnuPG

I've written a small dmenu-based script which allows selecting passwords from one's pass password manager and having them typed in via xdotool. This completely bypasses the clipboard (which people distrust for a reason). As I've been asked about the script a few times in the past, here it is. Feel free to copy it; any suggestions welcome.

#!/bin/bash

shopt -s nullglob globstar

list_passwords() {
	basedir=~/.password-store/
	passwords=( ~/.password-store/**/*.gpg )
	for password in "${passwords[@]}"
	do
		filename="${password#$basedir}"
		filename="${filename%.gpg}"
		echo "$filename"
	done
}

xdotool_command() {
	echo -n "type "
	pass "$1"
}

selected_password="$(list_passwords 2>/dev/null| dmenu)"

echo "$selected_password"
if [ -n "$selected_password" ]
then
	xdotool_command "$selected_password" | xdotool -
fi

-- Christoph Egger <christoph@christoph-egger.org> Fri, 27 Jun 2014 22:20:03 +0200

Backup Strategy; tags=FOSS, kFreeBSD, Backup

I've been working on my backup strategy for my notebook recently. The idea is to have a full backup every month and then incremental backups in between, as fine-grained as possible. As it's a mobile device, there's no point in time where it is guaranteed to be up, connected and within reach of the backup server.

As I'm running Debian GNU/kFreeBSD on it, using ZFS and specifically zfs send comes quite naturally. I'm now generating a new file system snapshot every day (if the notebook happens to be online during that day) using cron.

@daily zfs snapshot base/root@`date -I`
@daily zfs snapshot base/home@`date -I`
@reboot zfs snapshot base/root@`date -I`
@reboot zfs snapshot base/home@`date -I`

When connected to the home network I synchronize all incrementals that are not yet on the backup server. This uses zfs send together with gpg to encrypt the data, which is then put off to some SFTP storage. For the first snapshot of every month a full backup is created. As there doesn't seem to be a way to merge zfs send streams without importing everything into a zfs pool, I additionally create incremental streams against the first snapshot of the previous month, so I'm able to delete older full backups and daily snapshots and still keep coarse-grained backups for a longer period of time.
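Stripped of all bookkeeping, each transfer boils down to a zfs send piped through gpg (snapshot names and key ID here are made up for the sketch):

```shell
KEYID=0xDEADBEEF    # made-up GnuPG key ID

# Full backup of the month's first snapshot:
zfs send base/home@2014-06-01 \
  | gpg --batch --compress-algo ZLIB --sign --encrypt --recipient "$KEYID" \
  > home@2014-06-01.full.zfs.gpg

# Daily incremental stream against the previous snapshot:
zfs send -i 2014-06-01 base/home@2014-06-02 \
  | gpg --batch --compress-algo ZLIB --sign --encrypt --recipient "$KEYID" \
  > home@2014-06-02.from.2014-06-01.zfs.gpg
```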

#!/usr/bin/python
# -*- coding: utf-8 -*-

####################
# Config
SFTP_HOST = 'botero.siccegge.de'
SFTP_DIR  = '/srv/backup/mitoraj'
SFTP_USER = 'root'
ZPOOL     = 'base'
GPGUSER   = '9FED5C6CE206B70A585770CA965522B9D49AE731'
#
####################

import subprocess
import os.path
import sys
import paramiko


term = {
    'green':  "\033[0;32m",
    'red':    "\033[0;31m",
    'yellow': "\033[0;33m",
    'purple': "\033[0;35m",
    'none':   "\033[0m",
    }

sftp = None

def print_colored(data, color):
    sys.stdout.write(term[color])
    sys.stdout.write(data)
    sys.stdout.write(term['none'])
    sys.stdout.write('\n')
    sys.stdout.flush()

def postprocess_datasets(datasets):
    devices = set([entry.split('@')[0] for entry in datasets])

    result = dict()
    for device in devices:
        result[device] = sorted([ entry.split('@')[1] for entry in datasets
                                    if entry.startswith(device) ])

    return result

def sftp_connect():
    global sftp

    host_keys = paramiko.util.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
    hostkeytype = host_keys[SFTP_HOST].keys()[0]
    hostkey = host_keys[SFTP_HOST][hostkeytype]

    agent = paramiko.Agent()
    transport = paramiko.Transport((SFTP_HOST, 22))
    transport.connect(hostkey=hostkey)

    for key in agent.get_keys():
        try:
            transport.auth_publickey(SFTP_USER, key)
            break
        except paramiko.SSHException:
            continue

    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.chdir(SFTP_DIR)

def sftp_send(dataset, reference=None):
    zfscommand = ['sudo', 'zfs', 'send', '%s/%s' % (ZPOOL, dataset)]
    if reference is not None:
        zfscommand = zfscommand + ['-i', reference]

    zfs = subprocess.Popen(zfscommand, stdout=subprocess.PIPE)

    gpgcommand = [ 'gpg', '--batch', '--compress-algo', 'ZLIB',
                   '--sign', '--encrypt', '--recipient', GPGUSER ]
    gpg = subprocess.Popen(gpgcommand, stdout=subprocess.PIPE,
                                       stdin=zfs.stdout,
                                       stderr=subprocess.PIPE)

    gpg.poll()
    if gpg.returncode not in [None, 0]:
        print_colored("Error:\n\n" + gpg.stderr.read(), 'red')
        return

    if reference is None:
        filename = '%s.full.zfs.gpg' % dataset
    else:
        filename = '%s.from.%s.zfs.gpg' % (dataset, reference)

    with sftp.open(filename, 'w') as remotefile:
        sys.stdout.write(term['purple'])
        while True:
            junk = gpg.stdout.read(1024*1024)
            if len(junk) == 0:
                break

            sys.stdout.write('#')
            sys.stdout.flush()
            remotefile.write(junk)
        print_colored(" DONE", 'green')

def syncronize(local_datasets, remote_datasets):
    for device in local_datasets.keys():
        current = ""
        for dataset in local_datasets[device]:
            last = current
            current = dataset

            if device in remote_datasets:
                if dataset in remote_datasets[device]:
                    print_colored("%s@%s -- found on remote server" % (device, dataset), 'yellow')
                    continue

            if last == '':
                print_colored("Initial syncronization for device %s" % device, 'green')
                sftp_send("%s@%s" % (device, dataset))
                lastmonth = dataset
                continue

            if last[:7] == dataset[:7]:
                print_colored("%s@%s -- incremental backup (reference: %s)" %
                              (device, dataset, last), 'green')
                sftp_send("%s@%s" % (device, dataset), last)
            else:
                print_colored("%s@%s -- full backup" % (device, dataset), 'green')
                sftp_send("%s@%s" % (device, dataset))
                print_colored("%s@%s -- doing incremental backup" % (device, dataset), 'green')
                sftp_send("%s@%s" % (device, dataset), lastmonth)
                lastmonth = dataset

def get_remote_datasets():
    datasets = sftp.listdir()
    datasets = filter(lambda x: '@' in x, datasets)

    datasets = [ entry.split('.')[0] for entry in datasets ]

    return postprocess_datasets(datasets)

def get_local_datasets():
    datasets = subprocess.check_output(['sudo', 'zfs', 'list', '-t', 'snapshot', '-H', '-o', 'name'])
    datasets = datasets.strip().split('\n')

    datasets = [ entry[5:] for entry in datasets ]

    return postprocess_datasets(datasets)

def main():
    sftp_connect()
    syncronize(get_local_datasets(), get_remote_datasets())

if __name__ == '__main__':
    main()

Rumor has it that btrfs has gained functionality similar to zfs send, so maybe some day I'll be able to extend this code and use it on my Linux nodes as well (after migrating them to btrfs for a start).


-- Christoph Egger <christoph@christoph-egger.org> Fri, 27 Jun 2014 22:54:24 +0200


valid XHTML, CSS -- Django based -- ©2008 Christoph Egger