Discussion:
Clusters, known_hosts, host keys, and "REMOTE HOST IDENTIFICATION HAS CHANGED"
H. Kurth Bemis
2009-09-18 17:16:33 UTC
I'm using OpenSSH in an environment with lots of clusters. These
clusters have IP addresses which are associated with a particular
application rather than with a particular host. Oftentimes
(especially for file transfers) it's helpful to ssh/scp to the IP
address associated with the application rather than the one associated
with the host. However, given that each host has its own host key, we
frequently get:
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
Which of course panics the user the first time they see it, and causes
them to ignore it from the second time onward -- neither of which is a
desired behavior...
I've thought about several solutions to this, including:
1) Make all the host keys the same (hundreds of hosts, kind of
diminishes the value of a host key...)
2) Configure ssh to ignore host key changes (harder than you might
think since often new ssh clients are brought in)
3) Give each application its own dedicated ssh and host key (tricky to
set up and monitor, fairly high effort)
4) Tweak OpenSSH so that it will accept any host key from a list
(requires some programming effort, might not be a good idea)
5) Other?
What do you all think of option 4? In particular, I was thinking that
there might be a way to allow hosts on the same subnet to simply
prompt to add the additional key for the same DNS name rather than
popping up the man-in-the-middle warning. If there were multiple keys
present in known_hosts for a given hostname, any of them would be
accepted.
Could this be done without weakening the host security of OpenSSH?
Should I instead just hold The Great Re-Keying and go with option 1?
I appreciate any advice.
Thanks,
-- Steve Bonds
Maybe the issue doesn't really involve modifying OpenSSH at all. If you
have access to the hosts, wouldn't it be possible to pre-generate
known_hosts with all the host keys in your cluster? Then each client
would have every key in its known_hosts, so it wouldn't matter which
host the client was connecting to.

Then if one of the keys changes, you can generate a new known_hosts.
Users are still alerted if a key changes on its own.
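A minimal sketch of that pre-generation step (the function name and input
are illustrative, not from the thread): take `ssh-keyscan`-style lines of
the form "hostname keytype base64key" for every cluster member and merge
them into one known_hosts, collapsing hosts that share a key onto a single
comma-separated entry -- a layout the known_hosts format already permits.

```python
# Sketch: build a shared known_hosts from ssh-keyscan-style output.
# Hosts presenting the same key are collapsed onto one comma-separated
# entry, which the known_hosts file format allows.
def merge_known_hosts(scan_lines):
    by_key = {}  # (keytype, key) -> list of hosts, insertion-ordered
    for line in scan_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and ssh-keyscan comment lines
        host, keytype, key = line.split(None, 2)
        hosts = by_key.setdefault((keytype, key), [])
        if host not in hosts:
            hosts.append(host)
    return ["%s %s %s" % (",".join(hosts), keytype, key)
            for (keytype, key), hosts in by_key.items()]
```

Feeding this the concatenated scan output for every host would yield one
file to distribute to each client (e.g. as /etc/ssh/ssh_known_hosts).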

Whatever your final solution, please remember to share with the
class. :]

~k
Males, Jess
2009-09-18 18:59:03 UTC
I didn't appreciate the flexibility of known_hosts or ssh_known_hosts
until following up on Kurth's response; so thanks for that.

However, personally, I lean toward #1. When reading #1, it seems to
suggest having one key for the whole site. This I wouldn't do. Rather,
whenever adding a second node and creating a cluster, just copy the keys
from node one to node two. Each cluster will have unique keys. All
applications on a cluster will have the same key. Doing this preserves
notifications for possible MitM attacks, but doesn't require coordinated
updates across the entire infrastructure.
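The "monitor" half of this is easy to automate. A hypothetical
consistency check (cluster and host names are illustrative), given a map
of clusters to member nodes and each node's current key as reported by a
scan:

```python
# Hypothetical check for the copy-keys-within-a-cluster scheme: after the
# sync, every node in a cluster should present the same host key.
def cluster_key_mismatches(clusters, host_keys):
    """clusters: {cluster_name: [hosts]}; host_keys: {host: key string}.
    Returns the names of clusters whose nodes disagree on the key."""
    bad = []
    for name, hosts in clusters.items():
        keys = {host_keys[h] for h in hosts if h in host_keys}
        if len(keys) > 1:  # more than one distinct key in this cluster
            bad.append(name)
    return bad
```

Run periodically, this would flag a cluster where a rebuild or reinstall
silently regenerated a node's key.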

Implementing this solution does involve a great wipe, when you sync all
existing clusters, but after that it becomes merely procedural when you
build new clusters or update existing ones. Conceivably, you could
update the clusters one at a time, or in small batches, but I would plan
this so customers only see one broadcast announcement regarding key
changes. One great wipe doesn't inure users; but three mini wipes on
consecutive weekends could lull some.

All that said, some follow-up considerations:

If all your users are on relatively few systems, then implementing a
client-side ssh_known_hosts is more straightforward. It would not
require users to understand or mess with known_hosts themselves.
Similarly, if you have NFS /home directories or CIFS Windows profiles,
where the network homogenizes the known_hosts experience, then this
solution gains favor.
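For reference, the system-wide client file is /etc/ssh/ssh_known_hosts,
and the stock client consults it by default; a sketch of making that
explicit in the client configuration, assuming the merged file is pushed
to that path:

```
# /etc/ssh/ssh_config (client side)
GlobalKnownHostsFile /etc/ssh/ssh_known_hosts
```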

Speaking against updating known_hosts are differing clients on differing
platforms. How does PuTTY handle known_hosts?

What are your current procedures for migrating applications? If an
application move requires a server change, even if the IP is moved, what
do customers do when the key changes?


-- Jess Males


-----Original Message-----
From: ***@securityfocus.com [mailto:***@securityfocus.com] On Behalf Of Steve Bonds
Sent: Thursday, September 17, 2009 7:53 PM
To: ***@securityfocus.com
Subject: Clusters, known_hosts, host keys, and "REMOTE HOST IDENTIFICATION HAS CHANGED"

SSH List-dwellers:

I'm using OpenSSH in an environment with lots of clusters. These
clusters have IP addresses which are associated with a particular
application rather than with a particular host. Oftentimes
(especially for file transfers) it's helpful to ssh/scp to the IP
address associated with the application rather than the one associated
with the host. However, given that each host has its own host key, we
frequently get:

WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!

Which of course panics the user the first time they see it, and causes
them to ignore it from the second time onward -- neither of which is a
desired behavior...

I've thought about several solutions to this including:

1) Make all the host keys the same (hundreds of hosts, kind of
diminishes the value of a host key...)
2) Configure ssh to ignore host key changes (harder than you might
think since often new ssh clients are brought in)
3) Give each application its own dedicated ssh and host key (tricky to
set up and monitor, fairly high effort)
4) Tweak OpenSSH so that it will accept any host key from a list
(requires some programming effort, might not be a good idea)
5) Other?

What do you all think of option 4? In particular, I was thinking that
there might be a way to allow hosts on the same subnet to simply
prompt to add the additional key for the same DNS name rather than
popping up the man-in-the-middle warning. If there were multiple keys
present in known_hosts for a given hostname, any of them would be
accepted.
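The acceptance rule described here can be sketched as a lookup that
succeeds when any known_hosts entry naming the host carries the offered
key. This is a deliberately simplified model (it ignores hashed entries,
wildcards, and key-type negotiation), not OpenSSH's actual matching code:

```python
# Simplified model of "any listed key is acceptable": a host key is OK if
# some known_hosts line naming this host (an entry may list several
# comma-separated names) records exactly that key type and key.
def key_acceptable(known_hosts_lines, host, keytype, key):
    for line in known_hosts_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        names, ktype, kdata = line.split(None, 2)
        if host in names.split(",") and (ktype, kdata) == (keytype, key):
            return True
    return False
```

Under this model, listing two different keys for the same hostname on two
separate lines would make either key acceptable, which is exactly the
multi-key behavior being asked about.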

Could this be done without weakening the host security of OpenSSH?
Should I instead just hold The Great Re-Keying and go with option 1?

I appreciate any advice.

Thanks,

-- Steve Bonds
