#!/bin/blog

January 5, 2016

SSH firewall bypass roundup

Filed under: UNIX & Linux — martin @ 8:35 pm

So my SSH workflow has reached a turning point, where I’m going to clean up my ~/.ssh/config. Some entries had been used to leverage corporate firewall and proxy setups for accessing external SSH servers from internal networks. These are being archived here for the inevitable future reference.

I never use “trivial” chained SSH commands; instead, I always want to bring up a ProxyCommand, so I get a transparent SSH session with full support for port, X11, dynamic and agent forwarding.

The ProxyCommand lines below have been broken up for readability. I don’t think this is supported in ~/.ssh/config, so they will need to be joined back into a single line to work.

Scenario 1: The client has access to a server in a DMZ

The client has access to a server in an internet DMZ, which in turn can access the external server on the internet. Most Linux servers nowadays have Netcat installed, so this fairly trivial constellation works 95.4% of the time.

# ~/.ssh/config
Host host.external
ServerAliveInterval 10
ProxyCommand ssh host.dmz /usr/bin/nc -w 60 host.external 22

Scenario 2: As scenario 1, but the server in the DMZ doesn’t have Netcat

It may not have Netcat, but it surely has an ssh client, which we use to run an instance of sshd in inetd mode on the destination server. This will be our ProxyCommand.

# ~/.ssh/config
Host host.external
ServerAliveInterval 10
ProxyCommand ssh -A host.dmz ssh host.external /usr/sbin/sshd -i

Scenario 2½: Modern version of the Netcat scenario (Update)

Since OpenSSH 5.4, the ssh client has its own way of reproducing the Netcat behavior from scenario 1:

# ~/.ssh/config
Host host.external
ServerAliveInterval 10
ProxyCommand ssh -W host.external:22 host.dmz

Scenario 3: The client has access to a proxy server

The client has access to a proxy server, through which it will connect to an external SSH service running on port 443 (because proxies usually won’t allow connections to port 22).

# ~/.ssh/config
Host host.external
ServerAliveInterval 10
ProxyCommand /usr/local/bin/corkscrew 
   proxy.server 3128 
   host.external 443 
   ~/.corkscrew/authfile
# ~/.corkscrew/authfile
username:password

(Omit the authfile argument if the proxy does not require authentication.)
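
Without proxy authentication, the whole thing collapses to a one-liner (same host names as above):

# ~/.ssh/config
Host host.external
ServerAliveInterval 10
ProxyCommand /usr/local/bin/corkscrew proxy.server 3128 host.external 443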

Scenario 4: The client has access to a very restrictive proxy server

This proxy server has authentication, knows it all, intercepts SSL sessions and checks for a minimum client version.

# ~/.ssh/config
Host host.external
ServerAliveInterval 10
ProxyCommand /usr/local/bin/proxytunnel 
   -p proxy.server:3128 
   -F ~/.proxytunnel.auth 
   -r host.external:80 
   -d 127.0.0.1:22 
   -H "User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0\nContent-Length: 0\nPragma: no-cache"
# ~/.proxytunnel.auth
proxy_user=username
proxy_passwd=password

What happens here:

  1. host.external runs an Apache web server with forward proxying enabled (a minimal sketch of such a configuration follows this list).
  2. proxytunnel connects, via the corporate proxy specified with -p, to the forward-proxying Apache specified with -r, and asks it to connect onward to 127.0.0.1:22.
  3. It sends a hand-crafted request header to the intrusive proxy, which mimics the expected client version.
  4. Mind you: although the connection goes to a non-SSL service, it is still secure, because the encryption is provided by SSH itself.
  5. What we have here is a hand-crafted exploit against the know-it-all proxy’s configuration. Your mileage may vary.
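
To make item 1 a bit more concrete: I am not reproducing my exact setup here, but a minimal forward-proxy configuration on host.external might look roughly like this (Apache 2.2-era syntax; the module paths and the client restriction are assumptions, adjust to taste):

# Sketch: Apache on host.external, port 80, acting as a forward proxy for SSH
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so

ProxyRequests On
# Allow CONNECT tunnels to port 22 only (the default is 443 and 563):
AllowCONNECT 22

<Proxy *>
    Order deny,allow
    Deny from all
    # Hypothetical: only accept requests coming from the corporate proxy's address
    Allow from 203.0.113.10
</Proxy>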

Super sensible discretion regarding the security of your internal network is advised. Don’t fuck up, don’t use this to bring in anything that will spoil the fun. Bypass all teh firewalls responsibly.

April 24, 2009

OpenSSH connection multiplexing

Filed under: Security, UNIX & Linux — martin @ 6:44 am

The Challenge

I was in touch with a developer the other day who used SSH to programmatically connect to a remote machine where he would start some kind of processing job. Unfortunately, he was in trouble when he wanted to kill the remote process: killing the local SSH client would leave his job active. He claimed that OpenSSH used to have some sort of signal forwarding feature back in the 3.x days, when he developed his application, but that this feature seems to have been removed since.

I wasn’t able to confirm anything of this, but this gentleman’s problem got me curious. I started to wonder: Is there some kind of sideband connection that I might use in SSH to interact with a program that is running on a remote machine?

The first thing I thought of was port forwards. These might actually be used to maintain a control channel to a running process on the other side. On the other hand, sockets aren’t trivial to implement for a /bin/ksh type of guy, such as the one I was dealing with. Also, this approach just won’t scale: coordination of local and remote ports is bound to turn into a bureaucratic nightmare.

I then started to skim the SSH man pages for anything that looked like a “sideband”, “session control” or “signaling” feature. What I found were the options ControlMaster and ControlPath. These configure connection multiplexing in SSH.

Proof Of Concept

Manual one-shot multiplexing can be demonstrated using the -M and -S options:

1) The first connection to the remote machine is opened in Master mode (-M). A UNIX socket is specified using the -S option. This socket enables the connection to be shared with other SSH clients:

localhost$ ssh -M -S ~/.ssh/controlmaster.test.socket remotehost

2) A second SSH session is attached to the running session. The socket that was opened before is specified with the -S option. The remote shell opens without further authentication:

localhost$ ssh -S ~/.ssh/controlmaster.test.socket remotehost

The interesting thing about this is that we now have two login sessions running on the remote machine which are children of the same sshd process:

remotehost$ pstree -p $PPID
sshd(4228)─┬─bash(4229)
           └─bash(4252)───pstree(4280)
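
As an aside, the control socket can also be used to query the master connection or ask it to shut down, assuming an OpenSSH version that understands the -O option (the PID in the output is illustrative):

localhost$ ssh -S ~/.ssh/controlmaster.test.socket -O check remotehost
Master running (pid=12345)
localhost$ ssh -S ~/.ssh/controlmaster.test.socket -O exit remotehost
Exit request sent.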

What About The Original Challenge?

Well, he can start his transaction by connecting to the remote machine in Master mode. For simplicity’s sake, let’s say he starts top in one session and wants to be able to kill it from another session:

localhost$ ssh -t -M -S ~/.ssh/controlmaster.mytopsession.socket remotehost top

Now he can pick up the socket and find out the PIDs of all other processes running behind the same SSH connection:

localhost$ ssh -S ~/.ssh/controlmaster.mytopsession.socket remotehost 'ps --ppid=$PPID | grep -v $$'
  PID TTY          TIME CMD
 4390 pts/0    00:00:00 top

This, of course, leads to:

localhost$ ssh -S ~/.ssh/controlmaster.mytopsession.socket remotehost 'ps --no-headers -o pid --ppid=$PPID | grep -v $$ | xargs kill'

Then again, our shell jockey could just use PID files or touch files. I think this is what he’s doing now anyway.

Going Fast And Flexible With Multiplexed Connections

With my new developer friend’s troubles out of the way, what else could be done with multiplexed connections? The SSH docs introduce “opportunistic session sharing”, which I believe might actually be quite useful for me.

It is possible to prime all SSH connections with a socket in ~/.ssh/config. If the socket is available, the actual connection attempt is bypassed and the ssh client hitches a ride on a multiplexed connection. In order for the socket to be unique per multiplexed connection, it should be assigned a unique name through the tokens %r (remote user), %h (remote host) and %p (destination port):

ControlPath ~/.ssh/controlmaster.socket.%r.%h.%p
# Will create socket as e.g.: ~/.ssh/controlmaster.socket.root.remotehost.example.com.22

If there is no socket available, SSH connects directly to the remote host. In this case, it is possible to automatically pull up a socket for subsequent connections using the following option in ~/.ssh/config:

ControlMaster auto
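
Putting the two together, the relevant snippet in ~/.ssh/config looks something like this (the catch-all Host pattern is my choice; narrow it down as needed):

# ~/.ssh/config
Host *
ControlMaster auto
ControlPath ~/.ssh/controlmaster.socket.%r.%h.%p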

So Where’s The Actual Benefit?

I use a lot of complex proxied SSH connections which take ages to come up. However, connecting through an already established connection is amazingly fast:

# Without multiplexing:
localhost$ time ssh remotehost /bin/true
real    0m1.376s
...
# With an already established shared connection:
localhost$ time ssh remotehost /bin/true
real    0m0.129s
...

I will definitely give this a try for a while, to see if it is usable for my daily tasks.

Update, 2009/05/04: No, it isn’t. Having slave sessions disconnected upon logout of the master session is too much of a nuisance for me.

February 27, 2009

Packaging OpenSSH on CentOS

Filed under: Security, UNIX & Linux — martin @ 8:29 am

March 30, 2010: It was pointed out to me that Red Hat has backported chroot functionality into its OpenSSH 4.3 packages, so these directions may not be necessary anymore.

My article on chrooted SFTP has turned out to be the most popular article on this blog. What a pity that its “companion article” on building current OpenSSH on CentOS 5 is such a bloody hell of a mess.

Fortunately, reader Simon pointed out a really simple method for building RPMs from current OpenSSH sources in a comment. We had the chance to try this out in a production deployment of chrooted SFTP the other day, and what can I say? It just works(tm)! Thanks a lot, dude! 🙂

# yum install gcc
# yum install openssl-devel
# yum install pam-devel
# yum install rpm-build

It certainly doesn’t hurt to make the GPG check a habit:

# wget http://ftp.bit.nl/mirror/openssh/openssh-5.2p1.tar.gz
# wget http://ftp.bit.nl/mirror/openssh/openssh-5.2p1.tar.gz.asc
# wget -O- http://ftp.bit.nl/mirror/openssh/DJM-GPG-KEY.asc | gpg --import
# gpg openssh-5.2p1.tar.gz.asc
gpg: Signature made Mon 23 Feb 2009 01:18:28 AM CET using DSA key ID 86FF9C48
gpg: Good signature from "Damien Miller (Personal Key) "
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 3981 992A 1523 ABA0 79DB FC66 CE8E CB03 86FF 9C48

Prepare, build and install the RPM. Disable the building of GUI components in the spec file. We don’t need these on a server:

# tar zxvf openssh-5.2p1.tar.gz
# cp openssh-5.2p1/contrib/redhat/openssh.spec /usr/src/redhat/SPECS/
# cp openssh-5.2p1.tar.gz /usr/src/redhat/SOURCES/
# cd /usr/src/redhat/SPECS
# perl -i.bak -pe 's/^(%define no_(gnome|x11)_askpass)\s+0$/$1 1/' openssh.spec
# rpmbuild -bb openssh.spec
# cd /usr/src/redhat/RPMS/`uname -i`
# ls -l
-rw-r--r-- 1 root root 275808 Feb 27 08:08 openssh-5.2p1-1.x86_64.rpm
-rw-r--r-- 1 root root 439875 Feb 27 08:08 openssh-clients-5.2p1-1.x86_64.rpm
-rw-r--r-- 1 root root 277714 Feb 27 08:08 openssh-server-5.2p1-1.x86_64.rpm
# rpm -Uvh openssh*rpm
Preparing... ########################################### [100%]
1:openssh ########################################### [ 33%]
2:openssh-clients ########################################### [ 67%]
3:openssh-server ########################################### [100%]
# service sshd restart

The RPM should install cleanly on CentOS 4. On CentOS 5, after installation, service sshd restart throws a warning that initlog is obsolete. I work around this by keeping a copy of the old /etc/init.d/sshd and restoring it after RPM installation.
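
Nothing fancy, in other words; a sketch of that workaround (the backup path is arbitrary):

# cp -p /etc/init.d/sshd /root/sshd.init.centos5
# rpm -Uvh openssh*rpm
# cp -p /root/sshd.init.centos5 /etc/init.d/sshd
# service sshd restart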

December 31, 2008

Using the SSH agent from daemon processes

Filed under: UNIX & Linux — martin @ 1:04 am

One of my more recent installations, the BackupPC server I wrote about earlier, needs full root access to its clients in order to retrieve the backups. Here’s how I implemented authentication on this machine.

BackupPC runs as its own designated user, backuppc. All authentication procedures therefore happen in the context of this user.

The key component in ssh-agent operation is a Unix domain socket that the ssh client uses to communicate with the agent. The default naming scheme for this socket is /tmp/ssh-XXXXXXXXXX/agent.<ppid>. The name of the socket is stored in the environment variable SSH_AUTH_SOCK. The windowing environments on our local workstations usually run as child processes of ssh-agent. They inherit this environment variable from their parent process (the agent) and therefore the shells running inside our Xterms know how to communicate with it.
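
(To check which socket a given shell is talking to, and whether the agent behind it currently holds any identities, something like this will do; the values shown are made up:)

$ echo $SSH_AUTH_SOCK
/tmp/ssh-abc12345/agent.4711
$ ssh-add -l
2048 a1:b2:c3:d4:e5:f6:a7:b8:c9:d0:e1:f2:a3:b4:c5:d6 /home/martin/.ssh/id_rsa (RSA)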

In the case of a background server using the agent, however, things happen in parallel: on the one hand, we have the daemon, which is started on bootup; on the other hand, we have the user the daemon runs as, who needs to interactively add his SSH identity to the agent. Therefore, the concept of an automatically generated socket path is not applicable, and it would be preferable to harmonize everything to a common path, such as ~/.ssh/agent.socket.

Fortunately, all components in the SSH authentication system allow for this kind of harmonization.

The -a option to ssh-agent allows us to set the path of the UNIX domain socket. This is what the small script /usr/local/bin/ssh-agent-wrapper.sh does on my backup server:

#!/bin/bash
SOCKET=~/.ssh/agent.socket
ENV=~/.ssh/agent.env
ssh-agent -a $SOCKET > $ENV

When started in stand-alone mode (without a child process to control), ssh-agent outputs some information that can be sourced from other scripts:

SSH_AUTH_SOCK=/var/lib/backuppc/.ssh/agent.socket; export SSH_AUTH_SOCK;
SSH_AGENT_PID=1234; export SSH_AGENT_PID;
echo Agent pid 1234;

This file may be sourced from the daemon user’s ~/.bash_profile:

test -s .ssh/agent.env && . .ssh/agent.env

However, this creates a bootstrapping problem on the very first run, when the file does not exist yet. So it might be somewhat cleaner to just set SSH_AUTH_SOCK to a fixed value:

export SSH_AUTH_SOCK=~/.ssh/agent.socket

Here’s the workflow for initializing the SSH agent for my backuppc user after bootup:

root@foo:~ # su - backuppc
backuppc@foo:~ $ ssh-agent-wrapper.sh
backuppc@foo:~ $ ssh-add

In the meantime, what is happening to the backuppc daemon?

In /etc/init.d/backuppc, I have added the following line somewhere near the top of the script:

export SSH_AUTH_SOCK=~backuppc/.ssh/agent.socket

This means that immediately after boot-up, the daemon will be unable to log on to other systems, as long as ssh-agent has not been initialized using ssh-agent-wrapper.sh. After starting ssh-agent and adding the identity, the daemon will be able to authenticate. This also means that tasks in the daemon that do not rely on SSH access (in the case of BackupPC, things like housekeeping and smbclient backups of “Windows” systems) will already be in full operation.
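
A quick way to check whether the daemon user can already authenticate non-interactively (the client host name is just an example):

backuppc@foo:~ $ ssh -o BatchMode=yes root@client.example.com /bin/true && echo "agent is primed"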

April 6, 2008

OpenSSH chrooted SFTP (e.g. for Webhosting)

Filed under: Security — martin @ 8:53 am

Over at Denny, I discovered that OpenSSH, since version 4.8, supports chrooted SFTP operation. This is, of course, a feature that many of us have been waiting for, so I immediately gave it a try, using a somewhat adventurous manually compiled OpenSSH on CentOS 5. (Update, February 27, 2009: See Packaging OpenSSH on CentOS for a more coherent installation method.) I also had a little help from the Debian Administration Blog.

In order to enable chrooted SFTP for some users, we’ll first create a separate group for users that will get the chroot treatment. I named this group chrooted, for no obvious reason. This group will be assigned as a supplementary group for chroot users.

The common application for this will be virtual WWW hosting, so I started with the assumption that a website called www.example.com will be accessed via SFTP. The directory /vhost/www.example.com will therefore serve as the user’s home directory. In order to make chrooting work along with key-based authentication, I found it necessary to make the user name and the name of his home directory identical, so the user was named www.example.com as well (login shell: /bin/false), along with a matching user private group www.example.com. This looks as if it may have a tendency to get awkward, but it really is only a first test. I’ll have to invest a bit more thought before this goes into production.

The directory /vhost/www.example.com was created and populated like this:

drwxr-xr-x 5 root            root            4096 Apr  5 22:01 .
drwxr-xr-x 3 root            root            4096 Apr  5 21:22 ..
drwxrwxr-x 2 www.example.com www.example.com 4096 Apr  6 08:45 htdocs
drwxr-xr-x 2 root            root            4096 Apr  5 21:22 logs
drwxr-xr-x 2 www.example.com www.example.com 4096 Apr  5 22:02 .ssh

With the user created and his directory populated, we’ll now edit sshd_config as follows:

#Replace the OpenSSH sftp-server backend with its internal SFTP engine:
#Subsystem      sftp    /opt/openssh/libexec/sftp-server
Subsystem       sftp    internal-sftp
# Configure special treatment of members of the group chrooted:
Match group chrooted
         # chroot members into this directory
         # %u gets substituted with the user name:
         ChrootDirectory /vhost/%u
         X11Forwarding no
         AllowTcpForwarding no
         # Force the internal SFTP engine upon them:
         ForceCommand internal-sftp

I actually had quite a hard time figuring out the proper constellation of user name, user home directory and ChrootDirectory. ChrootDirectory applies after the user has been authenticated. Before that, his home directory from /etc/passwd still applies. In order to enable the user to maintain his SSH key and to enable sshd to find the key, both environments must be congruent. However, the chroot destination must not be owned by the user, for security reasons; the user’s home directory therefore belongs to root. Tricky, isn’t it? I must admit, though, that this would have been a lot more intuitive if I hadn’t strayed away from /home on the very first test. Doh! 😮
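
To make the relationship concrete, the /etc/passwd entry for the test user would look something like this (a sketch; the UID and GID are the ones visible in the SFTP session below):

www.example.com:x:59984:59984::/vhost/www.example.com:/bin/false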

Here’s a sample session with the user “www.example.com”, authenticated by public key:

$ sftp www.example.com@192.168.1.24
Connecting to 192.168.1.24...
sftp> ls -la
drwxr-xr-x    5 0        0            4096 Apr  5 20:01 .
drwxr-xr-x    5 0        0            4096 Apr  5 20:01 ..
drwxr-xr-x    2 59984    59984        4096 Apr  5 20:02 .ssh
drwxrwxr-x    2 59984    59984        4096 Apr  6 07:32 htdocs
drwxr-xr-x    2 0        0            4096 Apr  5 19:22 logs
sftp> pwd
Remote working directory: /
sftp> cd ..
sftp> ls -la
drwxr-xr-x    5 0        0            4096 Apr  5 20:01 .
drwxr-xr-x    5 0        0            4096 Apr  5 20:01 ..
drwxr-xr-x    2 59984    59984        4096 Apr  5 20:02 .ssh
drwxrwxr-x    2 59984    59984        4096 Apr  6 07:32 htdocs
drwxr-xr-x    2 0        0            4096 Apr  5 19:22 logs
sftp> pwd
Remote working directory: /
sftp> ls -la .ssh
drwxr-xr-x    2 59984    59984        4096 Apr  5 20:02 .
drwxr-xr-x    5 0        0            4096 Apr  5 20:01 ..
-r--------    1 59984    59984         601 Apr  5 20:02 authorized_keys
sftp> bye

Pay attention to the UIDs, 0 and 59984: The SFTP subsystem, running under chroot, doesn't have access to /etc/passwd from the user's environment.

I am convinced that this is the most important update to OpenSSH in at least the past five years. It has the potential to entirely eradicate authenticated FTP from the internet, just as has already happened with Telnet.

Thanks a lot to the OpenSSH developers for making this happen!

Quick and dirty manual compile of OpenSSH on CentOS 5

Filed under: UNIX & Linux — martin @ 7:58 am

(Update, February 27, 2009 – Please click here, for goodness’ sake: Packaging OpenSSH on CentOS)

I wanted to try the new chroot feature of OpenSSH (see the companion post) but didn’t want to invest in building an OpenSSH RPM. Here are my notes on how I did a quick replacement of the stock SSH packages with a hand-rolled installation:

# yum install gcc
# yum install openssl-devel
# yum install pam-devel
# wget http://ftp.bit.nl/mirror/openssh/openssh-5.0p1.tar.gz
# wget http://ftp.bit.nl/mirror/openssh/openssh-5.0p1.tar.gz.asc
# wget -O- http://ftp.bit.nl/mirror/openssh/DJM-GPG-KEY.asc | gpg --import
# gpg openssh-5.0p1.tar.gz.asc
gpg: Signature made Thu 03 Apr 2008 12:02:00 PM CEST using DSA key ID 86FF9C48
gpg: Good signature from "Damien Miller (Personal Key) <djm@****.org>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 3981 992A 1523 ABA0 79DB  FC66 CE8E CB03 86FF 9C48
# tar zxvf openssh-5.0p1.tar.gz
# cd openssh-5.0p1
# ./configure --prefix=/usr/local --sysconfdir=/etc/openssh --with-md5-passwords --with-pam
# make
# make install
# cp /etc/ssh/* /etc/openssh/
# sed 's/^\(GSSAPI.*\)$/#\1/' < /etc/ssh/sshd_config > /etc/openssh/sshd_config
# sed 's/^ *\(GSSAPI.*\)$/#\1/' < /etc/ssh/ssh_config > /etc/openssh/ssh_config
# cp /etc/pam.d/sshd /etc/pam.d/openssh
# service sshd stop
# yum remove openssh
# ln -s openssh /etc/pam.d/sshd
# /usr/local/sbin/sshd
# echo "echo Starting ssh daemon." >> /etc/rc.local
# echo "/usr/local/sbin/sshd" >> /etc/rc.local

No: I’m not quite convinced that this should go anywhere beyond a test system. 😉 If you have a quick way for building proper OpenSSH replacement RPMs, you’re welcome to share it.

November 27, 2004

Howto: Public Keys with Putty

Filed under: Security — martin @ 3:36 pm

Written down quickly, prompted by recent events…

1. Start Puttygen.
2. Click “Generate” and wiggle the mouse until the progress bar has reached the right-hand end and the key pair has been generated.
3. Under “Comment”, enter your e-mail address, your login name or some other meaningful label.
4. Assign a passphrase for the private key (enter it twice) and remember it!
5. The upper field shows the public key for use with OpenSSH. Select it completely and copy it to the clipboard.
6. Save the private key somewhere where you will find it again within a minute.

7. Start Putty and log in to the server with the assigned UNIX password.
8. Create the user’s SSH configuration directory: mkdir .ssh
9. Start the transfer of the public key to the server: cat >> .ssh/authorized_keys
10. Paste the public key selected in step 5 by pressing the right mouse button (it appears on the screen) and press Return once more to finish the line.
11. Press Ctrl+D to complete the transfer of the public key.
12. Quit Putty.

(On Linux, steps 1-12 can simply be taken care of by running “ssh-uploadkeys”, but of course things aren’t that easy on *cough* user-friendly *cough* systems. 😉)

13. In Explorer, click the private key saved in step 6.
14. Pageant starts and asks for the passphrase set in step 4. Enter it.
15. Close Pageant (“Close”); the Pageant icon (a computer wearing a slouch hat) appears in the taskbar.

16. Start Putty again; the login now works without a password.
