IPKI process question

Alright community, let’s say you’re going to have a cloud service where servers are spun up, used, and then destroyed on a fairly regular basis, where the lifetime of a server might only be a few hours. Now you need to distribute public/private keys to these servers. Could be for HTTPS or OpenVPN or whatever; the point is they need keys.

The typical way public key crypto works: there’s a CA, the server creates a CSR based on its private key and sends it off to the CA, the CA signs it and returns a certificate, and the server now has its private key and its certificate. Happiness.
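
For concreteness, here’s a rough sketch of that flow using Python’s `cryptography` package (the hostname, key size, and 12-hour lifetime are just placeholders I made up):

```python
# Minimal sketch of the CSR flow with the "cryptography" package.
# "app01.internal" and the 2048-bit RSA choice are illustrative only.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# --- On the server: the private key never leaves the box ---
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "app01.internal")]))
    .sign(server_key, hashes.SHA256())
)
csr_pem = csr.public_bytes(serialization.Encoding.PEM)  # this is what goes to the CA

# --- On the CA: check the request, then sign it ---
# ca_key / ca_cert are the internal CA's key and certificate, loaded elsewhere.
def sign_csr(csr, ca_key, ca_cert, lifetime_hours=12):
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(csr.subject)
        .issuer_name(ca_cert.subject)
        .public_key(csr.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(hours=lifetime_hours))
        .sign(ca_key, hashes.SHA256())
    )
# The signed cert goes back to the server; the server now holds key + cert.
```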

But if you control the CA and the server and the clients (like in the case of an internal public key infrastructure setup), does all of that really matter? Would it make just as much sense to simply give the server a public and private key, as long as the method of transport is secure? Or would it make sense to go through the CSR process to keep the transfer of private keys across the wire to a minimum?
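
To make the alternative concrete, this is what I mean by “just give the server a public and private key”: the CA (or whatever provisioning box) mints both halves and ships them over the already-secure channel. Again just a hedged sketch with the `cryptography` package; names and lifetimes are made up:

```python
# Sketch of central issuance with no CSR: the private key is generated
# on the CA/provisioning host and travels across the wire with the cert.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def issue_bundle(ca_key, ca_cert, hostname, lifetime_hours=12):
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
        .issuer_name(ca_cert.subject)
        .public_key(key.public_key())   # no CSR: we already hold the key
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(hours=lifetime_hours))
        .sign(ca_key, hashes.SHA256())
    )
    key_pem = key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),   # relies entirely on the transport being secure
    )
    cert_pem = cert.public_bytes(serialization.Encoding.PEM)
    return key_pem, cert_pem   # both of these cross the wire to the new server
```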

The only gain I can see from just issuing private and public keys to a server is that the logic for scripting the process is a lot easier. And to me, “just to make things easier” is generally a pretty lame reason to do things.

I dunno. Thoughts and opinions from the community?

1 Like

This is a heavily opinionated response, so take it with a grain of salt:

Wouldn’t this be the equivalent of SSH key authentication? If that’s what you’re talking about, I would think it would be less secure than using a CA, even if the CA is run on an internal network. The CA has the ability to reject CSRs (the requests). Just using a public/private key pair could be broken if someone were able to get hold of one side. That’s much harder if an attacker also has to create a CSR with a server’s information and get it approved by the CA, since the CA can (and should) be built with security checks that eliminate duplicate certificates for each host/user/whatever.
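
For example, a CA-side gate of the kind I’m imagining might look roughly like this (purely hypothetical; the `active_certs` store and `PolicyError` are names I made up for illustration):

```python
# Hypothetical CA-side policy check: refuse to sign a CSR whose hostname
# already has an unexpired certificate on record.
import datetime
from cryptography.x509.oid import NameOID

class PolicyError(Exception):
    pass

active_certs = {}  # hostname -> expiry time of the cert we already issued

def check_csr(csr):
    hostname = csr.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
    expiry = active_certs.get(hostname)
    if expiry and expiry > datetime.datetime.utcnow():
        raise PolicyError(f"{hostname} already has a live certificate")
    return hostname  # caller signs the CSR and records the new expiry
```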

That said, both approaches are still fairly secure, so while I don’t see any reason not to make the change, I also don’t think it would draw enough support to justify one.

EDIT:
Just to be clear, I’m not an expert on any of this, so if I’m wrong, please point it out and we can discuss.

1 Like

Think about CRLs and machine compromises: if you trust a single shared private key forever, you’re f-ed.

So each machine needs its own private key, with a lifetime that matches the lifetime of the machine.

The next question is whether you mint the keys on a central server, where you also turn them into signed certs. If it’s the cert you trust, it should be the same either way, right?

Well, not quite: if your cert machine is compromised, then all of your existing machines would be spoofable. If you generate the private key on the machine itself, then only machines created after the compromise are spoofable, and you could add those to a CRL.
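
As a rough sketch of that last bit, revoking a compromised machine’s cert could look something like this with the `cryptography` package (the 24-hour CRL refresh window is an arbitrary choice):

```python
# Build and sign a CRL listing the serial numbers of certs you no longer trust.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes

def build_crl(ca_key, ca_cert, revoked_serials):
    now = datetime.datetime.utcnow()
    builder = (
        x509.CertificateRevocationListBuilder()
        .issuer_name(ca_cert.subject)
        .last_update(now)
        .next_update(now + datetime.timedelta(hours=24))
    )
    for serial in revoked_serials:
        entry = (
            x509.RevokedCertificateBuilder()
            .serial_number(serial)
            .revocation_date(now)
            .build()
        )
        builder = builder.add_revoked_certificate(entry)
    return builder.sign(ca_key, hashes.SHA256())  # distribute this to your clients
```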

For this to be useful, you need to do a little bit of extra checking on top of TLS before you cache the machine keys of your peers. For example, you might augment your TLS negotiation to require a second signed cert from a secondary CA the first time two machines interact.
So it’s kind of like SSH, where your second factor is a human; here it would be a separate CA.
Because of compromise concerns, you could make this 2-of-3 CAs.
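
A very hand-wavy sketch of what that first-contact cross-check could look like, assuming RSA certs and the `cryptography` package (a 2-of-3 variant would just repeat the check against another secondary CA):

```python
# After the normal TLS handshake, the peer hands over an extra certificate and
# we check (a) it carries the same public key as the TLS cert and (b) it was
# signed by the secondary CA. RSA-only for simplicity.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

def cross_check(tls_cert, secondary_cert, secondary_ca_cert):
    # Same key material in both certs?
    def spki(cert):
        return cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
    if spki(tls_cert) != spki(secondary_cert):
        return False
    # Was the secondary cert really signed by the secondary CA? (raises on failure)
    secondary_ca_cert.public_key().verify(
        secondary_cert.signature,
        secondary_cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        secondary_cert.signature_hash_algorithm,
    )
    return True
```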

And you could go on like this, thinking of attack vectors and adding policies to protect against them.

But something tells me your cloud is probably not all that big, and adding a lot of security would be cost-prohibitive and hard to justify. If you’re the one making these decisions, it’s your call. If you’re not the only stakeholder, then it’s also your job to cover your ass: document and rationalize your decisions in a policy doc, and try to build consensus that what you’re doing strikes the right balance.

2 Likes