Category Archives: Security

640000 rounds shadow benchmarking

So the requirement here is: “use SHA512 for /etc/shadow, but with 640000 rounds instead of the default 5000, to slow down brute-force attacks”. (Not sure why exactly 640000, though.)
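For reference, on a Debian-style system this could be wired up roughly as follows; the file paths and the exact pam_unix control field are assumptions that vary by distribution:

# /etc/pam.d/common-password
password [success=1 default=ignore] pam_unix.so obscure sha512 rounds=640000

# /etc/login.defs, so that chpasswd and friends use the same value
ENCRYPT_METHOD SHA512
SHA_CRYPT_MIN_ROUNDS 640000
SHA_CRYPT_MAX_ROUNDS 640000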

Let’s confirm that this slows down brute-force attacks. First, create one pure 5000-round hash file and one pure 640000-round hash file. Note how hashing with 640000 rounds already takes much longer at this stage:

$ openssl rand -hex 2 | (time mkpasswd --method=sha512crypt --stdin) | tee shadow-sha512
$6$ZcZ6RoMB5pSad9Ca$alLttTrpP1BezuOued3JrVgv/0tq7mkI5jypP4cZ/smgWF30HuLmtAl.DExd23j3xPLCWc6zWF4eLNLGKLr77.

real    0m0.006s <--
user    0m0.000s
sys     0m0.003s

$ openssl rand -hex 2 | (time mkpasswd --rounds=640000 --method=sha512crypt --stdin) | tee shadow-sha512-640000rounds
$6$rounds=640000$ZBpVIbg3SKT.KerX$hTLaX/OVOWQol5UeVMq2pO1EI2L4nG4WWOIXPhmujq7EqxohLu/dQn3f.TSE8upaPmw/5y1nHrA24Kx2OfCzE/

real    0m0.284s <--
user    0m0.281s
sys     0m0.000s

In hashcat’s nomenclature, SHA512 with its $6$ prefix is hash type 1800:

1800 | sha512crypt $6$, SHA512 (Unix)

Start cracking the 5000-round hash. --attack-mode 3 means “brute force”:

$ hashcat --status --attack-mode 3 --hash-type 1800 --increment shadow-sha512

The hash rate on this system’s GPU turns out to be about 90000 hashes per second, and finding the 4-character password generated by openssl rand -hex 2 succeeds in about 30 seconds.

Speed.#1.........:    90438 H/s (4.43ms) @ Accel:64 Loops:512 Thr:64 Vec:1

On to the 640000-rounds hash:

$ hashcat --status --attack-mode 3 --hash-type 1800 --increment shadow-sha512-640000rounds

After a long time grinding through the really short password increments, which it obviously isn’t optimized for, hashcat eventually ramps up to around 500 hashes per second.
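That is about the expected order of magnitude: 640000 rounds is 128 times the default 5000, so the rate should drop from about 90438 H/s to roughly 90438 / 128 ≈ 707 H/s. The observed ~500 H/s is in the same ballpark.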

I stopped the attempt after an hour, when the system was approaching 50 °C on the outer case.

FTPS vs. SFTP, once and for all.

I had to provide an explanation of the differences between FTPS and SFTP today. The two sound so similar, but are in reality very different and easily confused by those who don’t spend lots of quality time with them.

SFTP (“SSH File Transfer Protocol”) is based on SSH (Secure Shell) version 2. It uses the same communication channels and encryption mechanisms as SSH.
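In fact, on the server side SFTP is just a one-line subsystem declaration in sshd_config; the path to the sftp-server binary is distribution-specific:

Subsystem sftp /usr/lib/openssh/sftp-server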

FTPS (“FTP over SSL”) is based on the legacy FTP protocol, with an additional SSL/TLS encryption layer. There are several flavors of FTPS, including “implicit SSL”, where a distinct service listens for encrypted connections, and “explicit SSL”, where the connection runs over the same service and is switched to an encrypted connection by a protocol option. In addition, there are several possible combinations of which parts of an FTPS connection are actually encrypted, such as “only encrypted login” or “encrypted login and data transfer”.
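Both variants can be exercised with curl, for example (ftp.example.com is a placeholder):

# Explicit SSL: connect as plain FTP on port 21, then upgrade via AUTH TLS
$ curl --ssl-reqd -u user ftp://ftp.example.com/

# Implicit SSL: TLS from the first byte, traditionally on port 990
$ curl -u user ftps://ftp.example.com/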

FTPS uses the same communication channels as legacy unencrypted FTP, including dynamically negotiated side-band connections for the actual data transfers. Due to these side-band connections, FTP has always been problematic with firewalls. The encryption layer exacerbates these issues further, because a firewall can no longer inspect the control channel to learn which data ports it needs to open.
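The usual workaround is to pin the passive data connections to a fixed port range and open that range on the firewall. In vsftpd, to pick one server, that would look something like this:

# /etc/vsftpd.conf
ssl_enable=YES
pasv_min_port=50000
pasv_max_port=50100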

Due to this rather long list of ins and outs, FTPS can be considered an exotic protocol, while SFTP enjoys widespread acceptance due to the omnipresence of SSH on Linux and UNIX servers.

The only objective advantage of FTPS is that it uses an SSL certificate signed by a trusted third party, so it can be used in an opportunistic way, similar to HTTPS encryption in web browsers. However, if password authentication is not enough and mutual authentication using X.509 client certificates comes into play, this advantage loses much of its validity, because mutual authentication nearly always requires manual intervention on both sides.

OpenSSH connection multiplexing

The Challenge
I was in touch with a developer the other day who used SSH to programmatically connect to a remote machine, where he would start some kind of processing job. Unfortunately, he was in trouble when he wanted to kill the remote process: killing the local SSH client would leave his job running. He claimed that OpenSSH used to have some sort of signal-forwarding feature back in the 3.x days, when he developed his application, but that feature seems to have been removed since.
I wasn’t able to confirm anything of this, but this gentleman’s problem got me curious. I started to wonder: Is there some kind of sideband connection that I might use in SSH to interact with a program that is running on a remote machine?
The first thing I thought of were port forwards. These might actually be used to maintain a control channel to a running process on the other side. On the other hand, sockets aren’t trivial to implement for a /bin/ksh type of guy, such as the one I was dealing with. Also, this approach just won’t scale. Coordination of local and remote ports is bound to turn into a bureaucratic nightmare.
I then started to skim the SSH man pages for anything that looked like a “sideband”, “session control” or “signaling” feature. What I found were the options ControlMaster and ControlPath. These configure connection multiplexing in SSH.
Proof Of Concept
Manual one-shot multiplexing can be demonstrated using the -M and -S options:
1) The first connection to the remote machine is opened in Master mode (-M). A UNIX socket is specified using the -S option. This socket enables the connection to be shared with other SSH clients:

localhost$ ssh -M -S ~/.ssh/controlmaster.test.socket remotehost


2) A second SSH session is attached to the running session. The socket that was opened before is specified with the -S option. The remote shell opens without further authentication:

localhost$ ssh -S ~/.ssh/controlmaster.test.socket remotehost


The interesting thing about this is that we now have two login sessions running on the remote machine which are children of the same sshd process:

remotehost$ pstree -p $PPID
sshd(4228)─┬─bash(4229)
           └─bash(4252)───pstree(4280)


What About The Original Challenge?
Well, he can start his transaction by connecting to the remote machine in Master mode. For simplicity’s sake, let’s say he starts top in one session and wants to be able to kill it from another session:

localhost$ ssh -t -M -S ~/.ssh/controlmaster.mytopsession.socket remotehost top


Now he can pick up the socket and find out the PIDs of all other processes running behind the same SSH connection:

localhost$ ssh -S ~/.ssh/controlmaster.mytopsession.socket remotehost 'ps --ppid=$PPID | grep -v $$'
  PID TTY          TIME CMD
 4390 pts/0    00:00:00 top


This, of course, leads to:

localhost$ ssh -S ~/.ssh/controlmaster.mytopsession.socket remotehost 'ps --no-headers -o pid --ppid=$PPID | grep -v $$ | xargs kill'


Then again, our shell jockey could just use PID files or touch files. I think this is what he’s doing now anyway.
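A minimal sketch of that approach, with /tmp/myjob.pid as a made-up, agreed-upon location: the remote shell writes its own PID and then replaces itself with the job via exec, so the recorded PID is the job’s PID.

localhost$ ssh -t remotehost 'echo $$ > /tmp/myjob.pid && exec top'
localhost$ ssh remotehost 'kill $(cat /tmp/myjob.pid)'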
Going Fast And Flexible With Multiplexed Connections
With my new developer friend’s troubles out of the way, what else could be done with multiplexed connections? The SSH docs introduce “opportunistic session sharing”, which I believe might actually be quite useful for me.
It is possible to prime all SSH connections with a socket in ~/.ssh/config. If the socket is available, the actual connection attempt is bypassed and the ssh client hitches a ride on a multiplexed connection. In order for the socket to be unique per multiplexed connection, it should be assigned a unique name through the tokens %r (remote user), %h (remote host) and %p (destination port):

ControlPath ~/.ssh/controlmaster.socket.%r.%h.%p
# Will create socket as e.g.: ~/.ssh/controlmaster.socket.root.remotehost.example.com.22


If there is no socket available, SSH connects directly to the remote host. In this case, it is possible to automatically pull up a socket for subsequent connections using the following option in ~/.ssh/config:

ControlMaster auto
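Putting both options together, a complete stanza in ~/.ssh/config might look like this:

Host *
        ControlMaster auto
        ControlPath ~/.ssh/controlmaster.socket.%r.%h.%p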


So Where’s The Actual Benefit?
I use a lot of complex proxied SSH connections which take ages to come up. However, connecting through an already established connection is amazingly fast:

# Without multiplexing:
localhost$ time ssh remotehost /bin/true
real    0m1.376s
...
# With an already established shared connection:
localhost$ time ssh remotehost /bin/true
real    0m0.129s
...


I will definitely give this a try for a while, to see if it is usable for my daily tasks.
Update, 2009/05/04: No, it isn’t. Slave sessions being disconnected upon logout of the master session is too much of a nuisance for me.

Mass generation of PGP keys

If you have written into someone’s requirements specification that their software must not falter under a load of 10,000 PGP keys, at some point you have to take the plunge and actually generate 10,000 keys:

#!/bin/sh -e
export LANG=C
for I in `seq 1 10000`
do
        NUMBER=`printf "%5.5d\n" $I`
        USERNAME="Testuser #$NUMBER"
        COMMENT="auto-generated key"
        EMAIL="testuser-$NUMBER@pgp-loadtest.example.com"
        (
        cat <<EOF
        %echo Generating Key for $EMAIL
        Key-Type: DSA
        Key-Length: 1024
        Subkey-Type: ELG-E
        Subkey-Length: 1024
        Name-Real: $USERNAME
        Name-Comment: $COMMENT
        Name-Email: $EMAIL
        Expire-Date: 2009-01-01
        Passphrase: foo
        %commit
EOF
        ) | gpg --gen-key --batch --no-default-keyring \
            --secret-keyring /var/tmp/gpg-test.sec \
            --keyring /var/tmp/gpg-test.pub
done


The bad news: after the third key at the latest, GnuPG will take a break, because the entropy pool is exhausted. I found my solution to this problem by rewiring /dev/urandom into /dev/random (which is really very, very evil; please do not try this at home under any circumstances). Creative alternative suggestions for this dirty little entropy problem are more than welcome.
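For the record, the “rewiring” presumably boils down to replacing the /dev/random device node with one that carries /dev/urandom’s device numbers, roughly like this (again: evil):

# mv /dev/random /dev/random.orig
# mknod /dev/random c 1 9    # char device 1,9 is /dev/urandom; the real /dev/random is 1,8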