Managing secrets on a development machine: how do you do it?

API keys, OAuth private key, etc. How do you manage them?

Do you just use plain env variables in a virtual environment? That's what I've done so far, but I feel it's no longer sustainable. I also don't really like putting them inside a secret.env file.
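For context, the plain-env-variable approach I mean boils down to something like this (the variable name `MY_API_KEY` is just an example):

```python
import os

def get_secret(name):
    """Read a secret from the shell environment; fail loudly if it's missing."""
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"secret {name!r} not set in the environment")
    return value

# Usage (assumes you exported MY_API_KEY, e.g. in the venv's activate script):
# api_key = get_secret("MY_API_KEY")
```

It works, but every machine and every shell has to be primed with the right exports, which is where it stops scaling for me.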

What’s your workflow with containers? What are the advantages & drawbacks?

Do you already use HashiCorp's Vault server? I've been meaning to, but I'm really lazy sob

Share with us how you do it!


No one? damn

I just use a .env file.

Depending on the secret I might lock this down with 600 permissions.
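A loader can even refuse to read the file unless it's locked down. A sketch, assuming a simple KEY=VALUE format (POSIX permissions only):

```python
import os
import stat

def load_dotenv(path):
    """Parse KEY=VALUE lines; refuse files readable by group or others."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(f"{path} is mode {oct(mode)}; chmod 600 it first")
    secrets = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            key, _, value = line.partition("=")
            secrets[key.strip()] = value.strip()
    return secrets
```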

uhh, istio/spire… same way as prod but not prod.

Hmm… one thing I don't like about this is that it's not flexible, and thus fragile, and it's already nightmare-ish when working across a desktop & a laptop. It's the most straightforward option though.

I think I'll use GitLab, with a private GitLab repo for mounting a dockerized Vault's secret storage; it's easily swapped for a self-hosted GitLab instance as well.

https://docs.gitlab.com/ee/integration/vault.html

I don’t know what you mean by fragile?

A .env does not need to be flexible, as it is just a basic key:value store.

When I use this with gitlab it is a generated file.

Simplicity is best IMO.

fragile in terms of these:
  • it's a plain-text file that needs to be read/piped/passed to the OS shell environment or other apps,
  • it doesn't handle growth well (do you keep one master file and add every new secret to it, or one file per project? what happens when you change an existing password? etc.),
  • there's a risk of a leak if it's included in backups or a remote git repo,
  • it doesn't really cover SSH certificates.

I basically want a more secure, more automation-friendly solution that's also more portable across machines, whether that's between a desktop and a laptop or swapping either one for a new machine.

Your code only needs secrets once deployed, right?

For testing you could disable auth or check in some throwaway test secrets, or have your test harness just come up with some when it starts.
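The throwaway-secrets idea is nearly a one-liner with the stdlib `secrets` module (the function name and key names here are made up):

```python
import secrets

def make_test_credentials():
    """Generate disposable credentials at test-harness startup; nothing
    here ever needs to be checked in, rotated, or protected."""
    return {
        "api_key": secrets.token_urlsafe(32),
        "db_password": secrets.token_hex(16),
    }
```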

You could rely on whatever ssh/u2f you normally use to identify yourself once it’s time for you to check in code, or launch prod rollouts.

How to increase security:
Step one: Upload private keys to someone else’s server.

Maybe it's me, but those goals seem mutually exclusive…

IMO if you want something somewhat secure and flexible, then you roll out your own VPN on premises and use your own server to do whatever you need to do…


Someone else's server with MFA in place on top of an encrypted database. Besides, it can be self-hosted as well.

Yeah, that's why we have the XP source available, thank you Azure :wink:
Leaving stuff around for an unspecified time, no matter how strongly encrypted it is, is just a dead man's switch. Sooner or later it will be breakable.
Sorry for that jab, I have a bias. Usually I prevent people from getting to encrypted stuff, which is the last measure, to give people time to change that encrypted stuff. So giving it away freely kinda irks me… :wink:

Your solution probably will be fine.

I confess that part of my reason for not going with the simpler ideas you guys have been suggesting is that I want to use Vault for practice at the start & eventually make it an integral part of my workflow, as I think in the end it can be applicable to pretty much any prod/project. So my plan is to start with personal ones.

Obviously it's not going to be an enterprise-grade setup; even if I had an elaborate homelab it wouldn't be as secure as the big cloud service companies, and this is for a dev environment after all. My goal is to have a robust enough system/workflow in place that I won't really worry about throwing away secrets or generating new ones, without taking on much security risk. And I don't really worry about work projects in this case, at least not at the beginning.

My thinking is that for prod/work there should already be a protocol in place that I just follow on their project, not imposing my own personal flow.
This gives me an excuse to explore the ins & outs of secret automation, from setting it up to planning/implementing a disaster plan.

What part of Azure caused the leak, btw? I can't find the chronology…

You and the other people here have good points, and in terms of broken encryption, yeah, absolutely, it's not going to be safe over a long enough term. However, I most likely won't ever use this for any commercial/production stuff, and I keep most dev accounts separate from personal accounts. I think I need to double-check to make sure, but I have done something along those lines in the past.
Since I don't think I'm going to gain elite skills soon & suddenly be able to create a killer million-dollar app, it's not that big of a risk at the start, and as long as I keep the components up to date (since Vault will be encrypting the data before it hits the storage DB), I can harden the system & revise the security practices as I gain more knowledge along the way.

I was joking with that first statement, hence the ";)" at the end.
Just wanted to illustrate that history is literally littered with data leaks where people thought their data was safe "in the cloud".

Sorry I misled you. I can't easily find the real source of the leak, just tabloids spewing clickbait.

You're running around in circles… let me elaborate on how this plays out.
Hypothetically, let's say you have a centralized service that's a key:value store (in your case secret_id:secret_value). You can have some kind of API (e.g. HTTP/JSON or plaintext) to fetch the secret on demand, so you don't have to store it alongside the project code.

How do you build an ACL enforcement system around it that will authenticate the thing that's fetching the secret? After all, you don't want your secrets leaking to anyone who asks.

  • You could give it another secret/password?
  • You could trust the network (ip based security)?
  • You could trust the kernel/os?
  • You could trust whoever is deploying an app to “bless” the deployment?
  • You could use some combination of the above?
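To make the first option concrete, here's a toy Python sketch of a token-guarded key:value store (all names hypothetical). It also makes the bootstrapping regress visible: the access token is itself a secret that has to be distributed somehow.

```python
import hmac
import secrets

class SecretStore:
    """Toy key:value secret store guarded by per-client bearer tokens."""

    def __init__(self):
        self._secrets = {}  # secret_id -> secret_value
        self._acl = {}      # token -> set of secret_ids it may read

    def put(self, secret_id, value):
        self._secrets[secret_id] = value

    def grant(self, secret_id):
        """Mint a token allowed to read one secret. Note the regress:
        the token now has to reach the client over some trusted channel."""
        token = secrets.token_urlsafe(32)
        self._acl[token] = {secret_id}
        return token

    def fetch(self, token, secret_id):
        # Constant-time comparison avoids leaking token bytes via timing.
        allowed = next((ids for t, ids in self._acl.items()
                        if hmac.compare_digest(t, token)), None)
        if not allowed or secret_id not in allowed:
            raise PermissionError("token not authorized for this secret")
        return self._secrets[secret_id]
```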

Both Kerberos (symmetric crypto / HMAC used to generate authenticatable, identifiable tickets) and SPIFFE (newer, asymmetric crypto used to sign authenticatable, identifiable certs) solve it similarly. In both you have this "attestation" step, where you use something you already trust to bootstrap trust in something new. In old Kerberos, you'd rely on admins to set up principals (identities) for users and services, and to do so "wisely". Additionally, for human principals, you'd bootstrap trust using a password or similar, and you'd trust the OS not to leak credentials across users.

In SPIFFE, you have a trusted agent on the host OS (SPIRE) that can sign certs based on the uid of the process requesting them over a local filesystem socket (e.g. when setting up a container, you can map the uid that container is using to a "service account": an identity in "your cloud", or your bunch of machines).
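That uid lookup is less magic than it sounds. On Linux, any server listening on a Unix-domain socket can ask the kernel who's on the other end; a minimal sketch (Linux-only, not actual SPIRE code):

```python
import socket
import struct

def peer_uid(conn):
    """Return the uid of the process at the other end of a connected
    Unix-domain socket, via Linux's SO_PEERCRED (pid, uid, gid)."""
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    pid, uid, gid = struct.unpack("3i", creds)
    return uid
```

The kernel fills in the credentials, so the client can't forge them; the agent then maps that uid to an identity and signs a cert for it.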

In both cases you trust the host OS on the machine to be set up not to leak credentials across uids (because it's the product of a controlled setup environment, blessed by an admin doing the setup manually, or blessed by an installation process… turtles all the way down).


When the time comes to do things in a prod environment, you shouldn't rely on a large number of usernames/passwords for access to services across a network in the first place. Rely on secure bootstrapping processes, mTLS, and certs… and Kerberos for old corporate workstation/user auth. (You can build a service to exchange krb tickets for certs in <100 lines of Python, no big deal.) And if the services you run don't look like they support it, practice using proxies that do.
At home, you can play with Kerberos or SSH tunnels or TLS proxies or fancy service-mesh proxies… bootstrapping something like SPIRE/Kerberos with only a handful of machines and building out that infra is also fun practice.
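That ticket-for-cert exchange really is small. Here's a stdlib-only skeleton; the ticket validation and cert signing are stubbed out and purely hypothetical (a real one would verify the ticket via GSSAPI and sign an X.509 cert with a CA key):

```python
import http.server

# Hypothetical hooks: stand-ins for GSSAPI validation and CA signing.
def validate_ticket(ticket):
    """Return the principal the ticket proves, or None if invalid."""
    return "alice@EXAMPLE.COM" if ticket == "demo-ticket" else None

def sign_cert(principal):
    """Stand-in for issuing a real certificate for the principal."""
    return f"CERT:{principal}".encode()

class ExchangeHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        principal = validate_ticket(self.headers.get("X-Krb-Ticket", ""))
        if principal is None:
            self.send_response(403)
            self.end_headers()
            return
        body = sign_cert(principal)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet
```

Serve it with `http.server.HTTPServer(("127.0.0.1", 8443), ExchangeHandler)` (behind TLS in anything real), and clients POST their ticket to get a cert back.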

I'm not sure if I failed to understand you or if you don't understand me; it sounds like you're thinking I want to write or build my own trust manager, CMIIW.

But I'm more keen on building a (robust) workflow based on an off-the-shelf product that's trusted enough & capable of handling multiple scenarios, including SSH, API keys, MFA, & OAuth2, and on automating this workflow to control almost every part of any interaction, whether it's local in the OS or Docker, or at the third-party level like GitLab, GitHub, Google. That's the HashiCorp Vault part. GitLab is the offsite infra part; it happens to support integration with Vault.

When the time comes to do things in a prod environment, you shouldn't rely on a large number of usernames/passwords for access to services across a network in the first place. Rely on secure bootstrapping processes, mTLS, and certs… and Kerberos for old corporate workstation/user auth. (You can build a service to exchange krb tickets for certs in <100 lines of Python, no big deal.) And if the services you run don't look like they support it, practice using proxies that do.

I agree 100% on this.

I re-read, and it's most likely I didn't understand the first time. It sounds to me now that you mostly perceived my plan/understanding as weak on ACLs, and that this caused me to overcomplicate things?

Yes. A network service to store/retrieve secrets needs to implement some auth and ACLs somehow. There are industry-standard APIs for programmatic access to secrets used for authentication (keys/certs and whatnot), and there are industry-standard implementations of those (e.g. look into Envoy or Citadel for containers).

Secrets that are not used for authentication, e.g. file/disk encryption keys, will usually have a custom API that aligns with business processes, to make it easier to guarantee e.g. "right to be forgotten" compliance and so on. (But basically it's a key:value service for the most part - nothing too exciting.)

It's true that I don't have much experience managing ACLs, so you're most probably correct that I'm overcomplicating things. But I'm still not sure what you're suggesting: are you suggesting that I avoid Vault because of how it implements ACL protocols, or because it's unnecessarily complicated? Or to put it simply, what's wrong with involving Vault? Or, if I'm guessing wrong, is it the GitLab part that sounds concerning?