#!/bin/blog

June 8, 2010

Reset and disable password aging per-user

Filed under: UNIX & Linux — Tags: — martin @ 11:10 am

Long story short: a high-security system with strict password aging rules, where SSH users authenticate with keys, have no valid password entries, and are nevertheless prompted to change their expired passwords. The following resets and disables password aging for a single user:

chage -E -1 -I -1 -M -1 foo

-E -1 removes the account expiration date.
-I -1 removes the inactivity lock that would follow password expiration.
-M -1 removes the maximum password age, so the password itself never expires.
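To roll the same reset out to several accounts, the user list can be taken from a group entry. A sketch with a hypothetical group line (in practice it would come from `getent group sftponly`); the echo keeps it a dry run:

```shell
# Hypothetical sample entry; replace with: getent group sftponly
line='sftponly:x:1001:alice,bob'

# Field 4 of a group entry is the comma-separated member list.
for u in $(echo "$line" | cut -d: -f4 | tr ',' ' '); do
    echo chage -E -1 -I -1 -M -1 "$u"   # drop the echo to actually apply it
done
```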


January 28, 2010

“Why is SVN so slow?”

Filed under: UNIX & Linux — Tags: , , — martin @ 11:25 am

I recently migrated a client from CVS to Subversion for his documentation repository. After hearing a few complaints about speed (the repo is >1GB in size), I considered mailing this out, but then decided not to send it.

All,

If you suffer from slow updates on Windoze, the following TortoiseSVN FAQ articles may offer a few insights:

http://tortoisesvn.net/node/41 “Why is SVN so much slower than CVS”
http://tortoisesvn.net/node/14 “Why is SVN slow on huge directories”

The key argument is that SVN, due to its support for atomic commits, needs to perform many more file operations than CVS in order to ensure integrity. This makes it slower than CVS, especially if an on-access virus scanner is involved.

I’m keeping this on file here, just in case. 😉

P.S.: “Why not git?” Because Git has a three-step commit process (add, commit, push) that is not justifiable for this application. The cool people (like me) use git-svn anyway.

August 21, 2009

Measuring Ethernet bandwidth

Filed under: UNIX & Linux — Tags: , , , , , , , — martin @ 11:22 am

Throughput on an Ethernet link can be measured with tools such as NetPIPE or iperf. On Debian, they are available as the packages netpipe-tcp and iperf, respectively.

For a bandwidth measurement with either tool, a server component first has to be started on one of the two machines involved:

For NetPIPE: NPtcp
For iperf: iperf -s

A client that performs the actual measurement is then started on the other machine:

For NetPIPE: NPtcp -h <remotehost>
For iperf: iperf -c <remotehost>

For me, iperf generally produces somewhat higher readings than NetPIPE. Where NetPIPE stops just short of 700 Mbit/s, iperf goes one better and lands at just over 800 Mbit/s.

If you want to see even higher numbers, you can try your luck with iperf and the -u option (on both ends). This runs the measurement over UDP instead of TCP. On the link measured here, which still has a high-quality Cat 5 component in the path, iperf reports exactly 1 Gbit/s, which strikes me as a bit too sporty.
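One thing worth knowing when interpreting the UDP numbers: iperf's UDP mode offers only a 1 Mbit/s stream by default, so a target rate is normally specified with -b. A sketch (remotehost is a placeholder, and the 1 Gbit/s target is an assumption, not the original measurement):

```shell
# Server end:
iperf -s -u
# Client end; -b sets the offered UDP load, here a 1 Gbit/s target:
iperf -c remotehost -u -b 1000M
```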

It can also be worthwhile to play with the -d option (again on both ends) to test performance in full-duplex mode. Here my cabling performs miserably in TCP mode, while in UDP mode the fairy tale of the full gigabit is kept alive. Anyone with a lot of patience is welcome to dig into the root causes in situations like this. 🙂

May 16, 2009

Debian VMware woes

Filed under: UNIX & Linux — Tags: , , , , — martin @ 8:28 am

Had some trouble with Debian in VMware Server 2.0: I/O was horribly slow, 100% IOwait when doing the simplest things, hdparm showing 11MB/s throughput.

This was fixed by shutting down the VM, changing the SCSI adapter from Buslogic to LSI logic and booting up again. hdparm is at 63MB/s now. The machine, which will be an SMTP mail exchanger, now scans 8 mails per second (Postfix before-queue filter via amavisd-new+ClamAV) in 10 concurrent sessions. Not bad at all.
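The throughput figures above come from hdparm's buffered read timing; a minimal sketch, assuming the virtual disk appears as /dev/sda:

```shell
# -t times sequential buffered reads from the raw device (run as root)
hdparm -t /dev/sda
```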

Debian is by far the simplest choice for an SMTP content filter because it’s not necessary to bring in any dependencies by hand. Cool. 🙂

April 24, 2009

OpenSSH connection multiplexing

Filed under: Security, UNIX & Linux — Tags: , , — martin @ 6:44 am

The Challenge

I was in touch with a developer the other day who used SSH to programmatically connect to a remote machine where he would start some kind of processing job. Unfortunately, he ran into trouble when he wanted to kill the remote process: killing the local SSH client would leave his job active on the remote side. He claimed that there used to be some sort of signal forwarding feature in OpenSSH back in the 3.x days, when he had developed his application, but this feature seems to have been removed since.

I wasn’t able to confirm anything of this, but this gentleman’s problem got me curious. I started to wonder: Is there some kind of sideband connection that I might use in SSH to interact with a program that is running on a remote machine?

The first thing I thought of were port forwards. These might actually be used to maintain a control channel to a running process on the other side. On the other hand, sockets aren’t trivial to implement for a /bin/ksh type of guy, such as the one I was dealing with. Also, this approach just won’t scale. Coordination of local and remote ports is bound to turn into a bureaucratic nightmare.

I then started to skim the SSH man pages for anything that looked like a “sideband”, “session control” or “signaling” feature. What I found were the ControlMaster and ControlPath options, which configure connection multiplexing in SSH.

Proof Of Concept

Manual one-shot multiplexing can be demonstrated using the -M and -S options:

1) The first connection to the remote machine is opened in Master mode (-M). A UNIX socket is specified using the -S option. This socket enables the connection to be shared with other SSH clients:

localhost$ ssh -M -S ~/.ssh/controlmaster.test.socket remotehost

2) A second SSH session is attached to the running session. The socket that was opened before is specified with the -S option. The remote shell opens without further authentication:

localhost$ ssh -S ~/.ssh/controlmaster.test.socket remotehost

The interesting thing about this is that we now have two login sessions running on the remote machine, both of which are children of the same sshd process:

remotehost$ pstree -p $PPID
sshd(4228)─┬─bash(4229)
           └─bash(4252)───pstree(4280)

What About The Original Challenge?

Well, he can start his transaction by connecting to the remote machine in Master mode. For simplicity’s sake, let’s say he starts top in one session and wants to be able to kill it from another session:

localhost$ ssh -t -M -S ~/.ssh/controlmaster.mytopsession.socket remotehost top

Now he can pick up the socket and find out the PIDs of all other processes running behind the same SSH connection:

localhost$ ssh -S ~/.ssh/controlmaster.mytopsession.socket remotehost 'ps --ppid=$PPID | grep -v $$'
  PID TTY          TIME CMD
 4390 pts/0    00:00:00 top

This, of course, leads to:

localhost$ ssh -S ~/.ssh/controlmaster.mytopsession.socket remotehost 'ps --no-headers -o pid --ppid=$PPID | grep -v $$ | xargs kill'

Then again, our shell jockey could just use PID files or touch files. I think that’s what he’s doing now anyway.

Going Fast And Flexible With Multiplexed Connections

With my new developer friend’s troubles out of the way, what else could be done with multiplexed connections? The SSH docs introduce “opportunistic session sharing”, which I believe might actually be quite useful for me.

It is possible to prime all SSH connections with a socket in ~/.ssh/config. If the socket is available, the actual connection attempt is bypassed and the ssh client hitches a ride on a multiplexed connection. In order for the socket to be unique per multiplexed connection, it should be assigned a unique name through the tokens %r (remote user), %h (remote host) and %p (destination port):

ControlPath ~/.ssh/controlmaster.socket.%r.%h.%p
# Will create socket as e.g.: ~/.ssh/controlmaster.socket.root.remotehost.example.com.22

If there is no socket available, SSH connects directly to the remote host. In this case, it is possible to automatically pull up a socket for subsequent connections using the following option in ~/.ssh/config:

ControlMaster auto
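Putting both options together, a minimal ~/.ssh/config fragment for opportunistic session sharing might look like this (the Host pattern is an example, not from the original setup):

```
Host *.example.com
    ControlMaster auto
    ControlPath ~/.ssh/controlmaster.socket.%r.%h.%p
```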

So Where’s The Actual Benefit?

I use a lot of complex proxied SSH connections that take ages to come up. Connecting through an already established connection, however, is amazingly fast:

# Without multiplexing:
localhost$ time ssh remotehost /bin/true
real    0m1.376s
...
# With an already established shared connection:
localhost$ time ssh remotehost /bin/true
real    0m0.129s
...

I will definitely give this a try for a while, to see if it is usable for my daily tasks.

Update, 2009/05/04: No, it isn’t. Slave sessions being disconnected when I log out of the master session is too much of a nuisance for me.

March 11, 2009

Secondhand software

Filed under: Software, UNIX & Linux — Tags: — martin @ 1:10 am

Let’s assume we had a Linux user and a “Windows” user. Both of them would be using the same open-source software.

Let’s further assume that this open-source software (it might, say, be a multi-protocol messaging client) lost a considerable part of its functionality overnight because one of the messaging providers had made a small modification to its protocol.

The Linux user would then spend the whole day until quitting time following amusing discussions about fixes, patches, diffs, backports and unofficial package sources on his distribution’s bug tracker, waiting for his update. Meanwhile, the “Windows” user, entirely unfazed by the situation, would have simply downloaded a new “setup.exe” before breakfast and installed his update with a double-click.

That would be stupid.

Too bad it really is like that.

February 27, 2009

Packaging OpenSSH on CentOS

Filed under: Security, UNIX & Linux — Tags: , , , , — martin @ 8:29 am

March 30, 2010: It was pointed out to me that Red Hat has backported chroot functionality into its OpenSSH 4.3 packages, so these directions may no longer be necessary.

My article on chrooted SFTP has turned out to be the most popular article on this blog. What a pity that its “companion article” on building current OpenSSH on CentOS 5 is such a bloody hell of a mess.

Fortunately, reader Simon pointed out a really simple method for building RPMs from current OpenSSH sources in a comment. We had the chance to try this out in a production deployment of chrooted SFTP the other day, and what can I say? It just works(tm)! Thanks a lot, dude! 🙂

# yum install gcc
# yum install openssl-devel
# yum install pam-devel
# yum install rpm-build

It certainly doesn’t hurt to make the GPG check a habit:

# wget http://ftp.bit.nl/mirror/openssh/openssh-5.2p1.tar.gz
# wget http://ftp.bit.nl/mirror/openssh/openssh-5.2p1.tar.gz.asc
# wget -O- http://ftp.bit.nl/mirror/openssh/DJM-GPG-KEY.asc | gpg --import
# gpg openssh-5.2p1.tar.gz.asc
gpg: Signature made Mon 23 Feb 2009 01:18:28 AM CET using DSA key ID 86FF9C48
gpg: Good signature from "Damien Miller (Personal Key) "
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 3981 992A 1523 ABA0 79DB FC66 CE8E CB03 86FF 9C48

Prepare, build and install the RPM. Disable the building of GUI components in the spec file. We don’t need these on a server:

# tar zxvf openssh-5.2p1.tar.gz
# cp openssh-5.2p1/contrib/redhat/openssh.spec /usr/src/redhat/SPECS/
# cp openssh-5.2p1.tar.gz /usr/src/redhat/SOURCES/
# cd /usr/src/redhat/SPECS
# perl -i.bak -pe 's/^(%define no_(gnome|x11)_askpass)\s+0$/$1 1/' openssh.spec
# rpmbuild -bb openssh.spec
# cd /usr/src/redhat/RPMS/`uname -i`
# ls -l
-rw-r--r-- 1 root root 275808 Feb 27 08:08 openssh-5.2p1-1.x86_64.rpm
-rw-r--r-- 1 root root 439875 Feb 27 08:08 openssh-clients-5.2p1-1.x86_64.rpm
-rw-r--r-- 1 root root 277714 Feb 27 08:08 openssh-server-5.2p1-1.x86_64.rpm
# rpm -Uvh openssh*rpm
Preparing... ########################################### [100%]
1:openssh ########################################### [ 33%]
2:openssh-clients ########################################### [ 67%]
3:openssh-server ########################################### [100%]
# service sshd restart

The RPM should install cleanly on CentOS 4. On CentOS 5, after installation, service sshd restart throws a warning that initlog is obsolete. I work around this by keeping a copy of the old /etc/init.d/sshd and restoring it after the RPM installation.

February 22, 2009

Unity

Filed under: UNIX & Linux — Tags: — martin @ 11:56 am

[Screenshot: unity]

VMware Workstation 6.5 on Debian Testing. I didn’t know they also have the Unity feature on Linux.

February 15, 2009

The tinkerers’ operating system

Filed under: UNIX & Linux — Tags: , — martin @ 8:44 am

For 12 years now, I have had at least one Linux workstation up and running somewhere, virtually without interruption.

Life has crawled out of the primordial soup and onto land, man has walked on the moon, the Iron Curtain has fallen. But sound on Linux will never learn to walk upright. That much is certain.

February 14, 2009

Re-Layering LVM encryption

Filed under: Security, UNIX & Linux — Tags: , , — martin @ 11:48 pm

In an earlier article, I had promised live migration of LVM data to encrypted storage. I was able to acquire an external SATA disk for my backup server today, so here we go. 🙂

[Diagram: crypt-lvm1]

The backup server is running headless, so I opted to store the key locally for now. Yes, I’m a moron. But hey, at least it’s not on the same medium.

# dd if=/dev/urandom of=/etc/luks.key count=256 ; chmod 600 /etc/luks.key
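With dd's default block size of 512 bytes, count=256 yields 131072 bytes (128 KiB) of key material, far more than LUKS needs. A sketch that confirms the size, using a throwaway path instead of /etc/luks.key:

```shell
# Same command as above, written to a temporary file:
# 256 blocks x dd's default 512-byte block size = 131072 bytes (128 KiB)
dd if=/dev/urandom of=/tmp/luks.key.demo count=256 2>/dev/null
chmod 600 /tmp/luks.key.demo
wc -c < /tmp/luks.key.demo   # -> 131072
rm /tmp/luks.key.demo
```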

As long as the disk isn’t the only one, I can’t predict the device name it will come up as. Thus, it is referenced by its udev ID when formatting it with LUKS:

# cryptsetup luksFormat /dev/disk/by-id/scsi-SATA_WD_My_Book_WD-WCAU123-part1 /etc/luks.key

Open the new LUKS device:

# cryptsetup luksOpen -d /etc/luks.key /dev/disk/by-id/scsi-SATA_WD_My_Book_WD-WCAU123-part1 pv_crypt_1

The entry in /etc/crypttab makes the encrypted device come up on boot:

/etc/crypttab:

pv_crypt_1 /dev/disk/by-id/scsi-SATA_WD_My_Book_WD-WCAU123-part1 /etc/luks.key luks

Create a new Physical Volume on the crypted device:

# pvcreate /dev/mapper/pv_crypt_1

Now the Volume Group can be extended with the new PV:

# vgextend datavg /dev/mapper/pv_crypt_1

I rebooted at this point, in order to see if everything would come up as expected.

The new PV is now visible:

# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/dm-0  datavg lvm2 a-   931.51G 931.51G
  /dev/sdb1  datavg lvm2 a-   465.76G      0

The next step is to migrate the VG content to the new PV. Migration will take a very long time if the disk is full, so you may want to use a screen session for this.

# pvmove -v /dev/sdb1

This is a classical LVM operation that may be cancelled at any time and picked up later. In fact, my Promise SATA driver crashed hard in the middle of the operation, and everything went along fine after a kernel upgrade.

When pvmove is done, throw out the original PV from the volume group:

# vgreduce datavg /dev/sdb1

The Volume Group is now on encrypted storage.
