
What's wrong with the web? -- authentication; tags=Web, OpenID, OAuth

The problem

One problem to solve when doing web authentication has always been having a single identity provider, so you don't have to remember which username (or email address) you used for that bugtracker or website three years ago -- and so you can ideally tie everything to one login. Five years ago this problem seemed to be basically solved: there was OpenID, and while it may not have been great, it worked. You could run your own provider, your institution (university, company, FOSS project, ...) could run one, and you could use your university-provided ID for all university stuff.

Today's state

Looking at the problem again today, the situation seems to have changed. For the worse. A lot. People are actively removing OpenID support. There seemed to be a replacement with, at least, proper design goals: Mozilla Persona. However, this appears to be a dead end; almost no-one actually supports it.

Then there is what people call OAuth2. However, there does not seem to be any single thing called OAuth2 at all, at least not for logging into websites. Phabricator, for example, supports 12 different OAuth2 systems, including Google, Facebook, Twitter, Amazon, GitHub and a whole bunch of other services -- each with a separate implementation in the webapp, of course. And of course you cannot just have your university/company/... provide an OAuth2 service for you to use: someone would have to write yet another adapter on the (foreign) website to talk to your implementation and your provider.

And the strange thing is that people still seem to consider OAuth2 a replacement for OpenID, even though it does not provide the core functionality of the older system -- and there does not seem to be any awareness of that at all.

Other features

Now of course OpenID is not (and never was) the ultimate answer to the web authentication problem, the most obvious issue being user tracking: your identity provider sees every website you log into, sees when you log into it, and is even able to log into that website with your credentials.

Of course this problem is fully inherited by OAuth2. But in contrast to OpenID you can no longer run your own provider -- one you can fully trust and that already knows about your surfing habits (because it is you). Mozilla Persona might have solved that, or at least intended to. But, again, Persona seems quite dead.


-- Christoph Egger <christoph@christoph-egger.org> Sat, 08 Aug 2015 19:30:02 +0200

Systemd pitfalls; tags=Debian, Linux, FOSS

If you have just updated systemd and ssh to that host seems to hang, that's just a known bug (Debian Bug #770135). Don't panic. Wait for the logind timeout and restart logind.
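
For reference, once the delayed login finally gets through, restarting logind on the affected host amounts to:

systemctl restart systemd-logind.service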

restart and stop;start

One thing that has confused me several times and still confuses people is that systemctl restart does more than systemctl stop; systemctl start. You will notice the difference once you have a failed service: a restart will try to start the service again, while stop followed by start will simply ignore it. Rumors have it this has changed post-jessie, however.
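
A minimal way to observe this, assuming a hypothetical unit foo.service that is currently in the failed state:

systemctl stop foo.service ; systemctl start foo.service
systemctl is-failed foo.service    # (at least on jessie) still reports "failed"
systemctl restart foo.service      # actually attempts to start the service again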

sysvinit-wrapped services and stop

While there are certainly bugs with sysvinit services in general (I found myself several times without a local resolver because unbound failed to be started; I haven't managed to debug that further), the stop behavior of wrapped services is just broken: systemctl stop will block until the sysv initscript has finished and will even note the result of the action in the unit's state. However, systemctl itself will return with exit code 0 and not output anything on stdout/stderr, even if the stop failed. This has been reported as Debian Bug #792045.
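
So the only way to find out whether such a stop actually worked is to ask for the unit's state afterwards, e.g. (unit name made up):

systemctl stop something.service        # exits 0 and stays silent even on failure
systemctl is-failed something.service   # but the recorded result can be queried
systemctl status something.service      # shows what the initscript actually did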

zsh helper

I found the following zshrc snippet quite helpful in dealing with unreported systemctl failures. On root shells it displays the list of failed services as part of the prompt. This gives proper feedback on whether your systemctl stop failed, tells you whether you still have type=simple services around, and shows when a sysv initscript or its wrapper is broken.

precmd () {
    if [[ $UID == 0 && $+commands[systemctl] != 0 ]]
    then
      use_systemd=true
      systemd_failed="`systemctl --state=failed | grep failed | cut -d \  -f 2 | tr '\n' ' '`"
    fi
}

if [[ $UID == 0 && $+commands[systemctl] != 0 ]]
then
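  # note: $'...' does not expand parameters, so $fg[red], $systemd_failed and
  # $reset_color are substituted at display time -- this relies on
  # setopt prompt_subst and the colors module (autoload -U colors && colors)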
  PROMPT=$'%{$fg[red]>>  $systemd_failed$reset_color%}\n'
else
  PROMPT=""
fi

PROMPT+=whateveryourpromptis

zsh completion

Speaking of zsh, there's one problem that bothers me a lot and that I don't have any solution for: tab-completing the service name for service is blazingly fast, while tab-completing the service name for systemctl restart takes ages. People have traced this down to truckloads of dbus communication during completion, but no further fix is known (to me).

type=simple services

As described at length by Lucas Nussbaum, type=simple services are actively harmful. Proper type=forking daemons are strictly superior (they provide feedback on whether initialization has finished and whether it succeeded), and type=notify services are so simple that there's no excuse for not using them, even for private one-off hacks. Even if your language doesn't provide libsystemd-daemon bindings:

(defun sd-notify (event-string)
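  ;; SBCL, using only sb-bsd-sockets: send the sd_notify datagram to the socket
  ;; named in $NOTIFY_SOCKET; does nothing when the variable is unset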
  (let ((socket (make-instance 'sb-bsd-sockets:local-socket :type :datagram))
        (name (posix-getenv "NOTIFY_SOCKET"))
        (bufferlen (length event-string)))
    (when name
      (sb-bsd-sockets:socket-connect socket name)
      (sb-bsd-sockets:socket-send socket event-string bufferlen))))

This is a stable API, guaranteed not to break in the future, and implemented in less than ten lines of code with just basic socket functions. And if your language has support, it actually becomes trivial:

    try:
        import systemd.daemon
        systemd.daemon.notify("READY=1")
    except ImportError:
        pass

Note that in both cases there is no drawback at all on systemd-free setups: the only overhead is checking the process' environment for NOTIFY_SOCKET (or checking for the systemd Python package), and otherwise the daemon behaves like a plain simple service.

Actually, separating the technical aspect (daemonizing) from the semantic aspect of signalling "initialization finished, everything's fine" is a pretty good idea, and it hopefully has the potential to reduce the number of services that signal "everything's fine" too early. Given the API it could even be ported to non-systemd init systems easily.
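
To illustrate how small that API is: from a shell script the same readiness notification is a single datagram. A minimal sketch (the second variant assumes socat is available and ignores abstract-namespace sockets):

# using systemd's own helper
[ -n "$NOTIFY_SOCKET" ] && systemd-notify --ready
# or with a plain datagram and no systemd tooling at all
[ -n "$NOTIFY_SOCKET" ] && printf 'READY=1' | socat - "UNIX-SENDTO:$NOTIFY_SOCKET"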


-- Christoph Egger <christoph@christoph-egger.org> Fri, 07 Aug 2015 22:17:21 +0200

unbreaking tt-rss; tags=Hier, Web, Kurios, FOSS, Android

TinyTiny-RSS has some nice failure modes, and the upstream support forums aren't really helpful: when you search for your current problem, chances are there is exactly one mention of it on the web, in the forum, and the only thing happening there is people making fun of the reporter.

Anyway. This installation has seen lots of error messages from the updater in the last several months:

Warning: Fatal error, unknown preferences key: ALLOW_DUPLICATE_POSTS (owner: 3) in /srv/www/tt-rss.faui2k9.de/root/classes/db/prefs.php on line 108

Warning: Fatal error, unknown preferences key: ALLOW_DUPLICATE_POSTS (owner: 3) in /srv/www/tt-rss.faui2k9.de/root/classes/db/prefs.php on line 108

Warning: Fatal error, unknown preferences key: ALLOW_DUPLICATE_POSTS (owner: 3) in /srv/www/tt-rss.faui2k9.de/root/classes/db/prefs.php on line 108

Warning: Fatal error, unknown preferences key: ALLOW_DUPLICATE_POSTS (owner: 3) in /srv/www/tt-rss.faui2k9.de/root/classes/db/prefs.php on line 108
  
And, more recently, the Android app stopped working with ERROR: JSON Parse failed. Turns out both things are related.

The first thing I noticed was that changing preferences in the web panel stopped working until you used the Reset to defaults option and then changed the settings again. Plugging wireshark in between showed what was going on (note: the API was displayed as enabled in Preferences/Preferences):

HTTP/1.1 200 OK
Server: nginx/1.8.0
Date: Thu, 06 Aug 2015 11:00:31 GMT
Content-Type: text/json
Transfer-Encoding: chunked
Connection: keep-alive
X-Powered-By: PHP/5.4.43
Content-Language: auto
Set-Cookie: [...]
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Api-Content-Length: 234

ea

Warning: Fatal error, unknown preferences key: ENABLE_API_ACCESS (owner: 2) in /srv/www/tt-rss.faui2k9.de/root/classes/db/prefs.php on line 108
{"seq":0,"status":1,"content":{"error":"API_DISABLED"}} 0

The solution for fixing the Android app (and the logspam along the way) seems to be to reset the preferences and then configure tt-rss again (in the webapp, not in the Android app!). That silences the tt-rss update_daemon as well, yay! One last thing: is there someone out there who wants to explain to me how to fix this one?

Fatal error: Query INSERT INTO ttrss_enclosures
                                                        (content_url, content_type, title, duration, post_id, width, height) VALUES
                                                        ('https://2.gravatar.com/avatar/e6d6ceb7764252af8da058e30cd8cb5f?s=96&d=identicon&r=G', '', '', '', '0', 0, 0) failed: ERROR:  insert or update on table "ttrss_enclosures" violates foreign key constraint "ttrss_enclosures_post_id_fkey"
DETAIL:  Key (post_id)=(0) is not present in table "ttrss_entries". in /srv/www/tt-rss.faui2k9.de/root/classes/db/pgsql.php on line 46

  


-- Christoph Egger <christoph@christoph-egger.org> Thu, 06 Aug 2015 13:33:36 +0200

Export org-mode snippets to HTML; tags=Web

Mostly a mental note, as I've now reinvented this for the second time. If you just quickly want to share some org-mode notes with non-Emacs-users, the built-in HTML export comes in handy. However it has one problem: all source-code syntax highlighting is derived from your current theme, which of course is a bad idea if your editor has a dark background (say, Emacs.reverseVideo: on). The same applies if your terminal's background color is dark.

Running Emacs in batch mode and still getting colorful code formatting seems to be an unsolved problem. Everything that can be found on the Internet suggests adding a dark background to your HTML export (at least to the code blocks), or maybe using an external style sheet. Neither is exactly what you want if you just need to scp a snippet of HTML somewhere to share it. However, there's a working hack:

#!/usr/bin/make -f

%.html: %.org
	xvfb-run urxvt +rv -e emacs -nw --visit $< --funcall org-html-export-to-html --kill >/dev/null

So use a terminal you can easily force into light-background mode (like urxvt +rv) so that the emacs -nw inside runs with a light background, and wrap the whole thing in xvfb-run so you can properly run it over ssh (and don't get annoying windows popping up and disappearing again while typing make).
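
Saved as, say, org2html (name made up) and made executable, the pattern rule then renders any org file on demand:

chmod +x org2html
./org2html notes.html      # builds notes.html from notes.org via the rule above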


-- Christoph Egger <christoph@christoph-egger.org> Thu, 16 Jul 2015 16:40:03 +0200

Backup Strategy; tags=FOSS, kFreeBSD, Backup

I've been working on my backup strategy for the notebook recently. The idea is to have full backups every month and incremental backups in between, as fine-grained as possible. As it's a mobile device, there's no point in time where it is guaranteed to be up, connected and within reach of the backup server.

As I'm running Debian GNU/kFreeBSD on it, using ZFS and specifically zfs send comes quite naturally. I'm now generating a new file system snapshot every day (if the notebook happens to be online during that day) using cron.

@daily zfs snapshot base/root@`date -I`
@daily zfs snapshot base/home@`date -I`
@reboot zfs snapshot base/root@`date -I`
@reboot zfs snapshot base/home@`date -I`

When connected to the home network, I synchronize all incrementals that are not yet on the backup server. This uses zfs send together with gpg to encrypt (and sign) the data, which is then pushed to some sftp storage. For the first snapshot of every month a full backup is created. As there doesn't seem to be a way to merge zfs send streams without importing everything into a zfs pool, I additionally create incremental streams against the first snapshot of the previous month, so I can delete older full backups and daily snapshots and still keep coarse-grained backups for a longer period of time.
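
In shell terms a single incremental transfer boils down to something like this (snapshot names made up; a full backup simply drops the -i reference):

zfs send -i base/home@2014-06-01 base/home@2014-06-27 \
    | gpg --batch --compress-algo ZLIB --sign --encrypt --recipient $GPGUSER \
    > home@2014-06-27.from.2014-06-01.zfs.gpg

The following script automates that, including figuring out which snapshots are still missing on the remote side: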

#!/usr/bin/python
# -*- coding: utf-8 -*-

####################
# Config
SFTP_HOST = 'botero.siccegge.de'
SFTP_DIR  = '/srv/backup/mitoraj'
SFTP_USER = 'root'
ZPOOL     = 'base'
GPGUSER   = '9FED5C6CE206B70A585770CA965522B9D49AE731'
#
####################

import subprocess
import os.path
import sys
import paramiko


term = {
    'green':  "\033[0;32m",
    'red':    "\033[0;31m",
    'yellow': "\033[0;33m",
    'purple': "\033[0;35m",
    'none':   "\033[0m",
    }

sftp = None

def print_colored(data, color):
    sys.stdout.write(term[color])
    sys.stdout.write(data)
    sys.stdout.write(term['none'])
    sys.stdout.write('\n')
    sys.stdout.flush()

def postprocess_datasets(datasets):
    devices = set([entry.split('@')[0] for entry in datasets])

    result = dict()
    for device in devices:
        result[device] = sorted([ entry.split('@')[1] for entry in datasets
                                    if entry.startswith(device) ])

    return result

def sftp_connect():
    global sftp

    host_keys = paramiko.util.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
    hostkeytype = host_keys[SFTP_HOST].keys()[0]
    hostkey = host_keys[SFTP_HOST][hostkeytype]

    agent = paramiko.Agent()
    transport = paramiko.Transport((SFTP_HOST, 22))
    transport.connect(hostkey=hostkey)

    for key in agent.get_keys():
        try:
            transport.auth_publickey(SFTP_USER, key)
            break
        except paramiko.SSHException:
            continue

    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.chdir(SFTP_DIR)

def sftp_send(dataset, reference=None):
    zfscommand = ['sudo', 'zfs', 'send', '%s/%s' % (ZPOOL, dataset)]
    if reference is not None:
        zfscommand = zfscommand + ['-i', reference]

    zfs = subprocess.Popen(zfscommand, stdout=subprocess.PIPE)

    gpgcommand = [ 'gpg', '--batch', '--compress-algo', 'ZLIB',
                   '--sign', '--encrypt', '--recipient', GPGUSER ]
    gpg = subprocess.Popen(gpgcommand, stdout=subprocess.PIPE,
                                       stdin=zfs.stdout,
                                       stderr=subprocess.PIPE)

    gpg.poll()
    if gpg.returncode not in [None, 0]:
        print_colored("Error:\n\n" + gpg.stderr, 'red')
        return

    if reference is None:
        filename = '%s.full.zfs.gpg' % dataset
    else:
        filename = '%s.from.%s.zfs.gpg' % (dataset, reference)

    with sftp.open(filename, 'w') as remotefile:
        sys.stdout.write(term['purple'])
        while True:
            junk = gpg.stdout.read(1024*1024)
            if len(junk) == 0:
                break

            sys.stdout.write('#')
            sys.stdout.flush()
            remotefile.write(junk)
        print_colored(" DONE", 'green')

def syncronize(local_datasets, remote_datasets):
    for device in local_datasets.keys():
        current = ""
        for dataset in local_datasets[device]:
            last = current
            current = dataset

            if device in remote_datasets:
                if dataset in remote_datasets[device]:
                    print_colored("%s@%s -- found on remote server" % (device, dataset), 'yellow')
                    continue

            if last == '':
                print_colored("Initial syncronization for device %s" % device, 'green')
                sftp_send("%s@%s" % (device, dataset))
                lastmonth = dataset
                continue

            if last[:7] == dataset[:7]:
                print_colored("%s@%s -- incremental backup (reference: %s)" %
                              (device, dataset, last), 'green')
                sftp_send("%s@%s" % (device, dataset), last)
            else:
                print_colored("%s@%s -- full backup" % (device, dataset), 'green')
                sftp_send("%s@%s" % (device, dataset))
                print_colored("%s@%s -- doing incremental backup" % (device, dataset), 'green')
                sftp_send("%s@%s" % (device, dataset), lastmonth)
                lastmonth = dataset

def get_remote_datasets():
    datasets = sftp.listdir()
    datasets = filter(lambda x: '@' in x, datasets)

    datasets = [ entry.split('.')[0] for entry in datasets ]

    return postprocess_datasets(datasets)

def get_local_datasets():
    datasets = subprocess.check_output(['sudo', 'zfs', 'list', '-t', 'snapshot', '-H', '-o', 'name'])
    datasets = datasets.strip().split('\n')

    datasets = [ entry[5:] for entry in datasets ]

    return postprocess_datasets(datasets)

def main():
    sftp_connect()
    syncronize(get_local_datasets(), get_remote_datasets())

if __name__ == '__main__':
    main()

Rumors have it that btrfs has gained functionality similar to zfs send, so maybe I'll be able to extend that code and use it on my Linux nodes some future day (after migrating them to btrfs for a start).
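
If those rumors hold, the building blocks on the btrfs side would presumably look something like this (paths and snapshot names made up; btrfs send requires read-only snapshots):

btrfs subvolume snapshot -r /home /home/.snapshots/`date -I`
btrfs send -p /home/.snapshots/2014-06-01 /home/.snapshots/2014-06-27 \
    | gpg --batch --compress-algo ZLIB --sign --encrypt --recipient $GPGUSER \
    > home@2014-06-27.from.2014-06-01.btrfs.gpg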


-- Christoph Egger <christoph@christoph-egger.org> Fri, 27 Jun 2014 22:54:24 +0200
