Any primers on how you configured/created the certificate for Vault?
We used our company’s own CA, which we manage through Terraform
@Tom de Vries I was in a HashiCorp Vault bootcamp for partner company consultants, and they recommend using whichever CA works best for your organization, including just not using a CA and using a self-signed cert, provided you handle it securely.
Does anyone have multi-region Vault working in AWS? If so, can you use a single KMS key to auto unseal across both regions?
@David I guess that would work, since, if we’re going to be specific, you’re looking to deploy Vault in HA across multiple regions, and all that is required in terms of communication between the instances is port 8200 and 8201. Now, by default non-primary Vault instances forward requests to the primary instance over port 8201, so this inter-instance communication absolutely has to be in place. See https://www.vaultproject.io/docs/concepts/ha/
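For reference, those two ports map to settings in the Vault server config; a minimal sketch (hostnames and file paths here are illustrative, not from this thread):

```hcl
# Sketch of the address settings relevant to HA request forwarding.
listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/vault/tls/vault.crt"
  tls_key_file  = "/etc/vault/tls/vault.key"
}

# Address clients (and load balancers) use to reach this node — port 8200
api_addr = "https://vault-1.example.com:8200"

# Address other Vault nodes use for request forwarding — port 8201
cluster_addr = "https://vault-1.example.com:8201"
```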
Also keep in mind you’re basically working around HashiCorp’s Vault Enterprise feature, Performance Replication. But that’s understandable since Enterprise is quite expensive.
Now mind you, you’re going to have to use a storage backend that supports HA. Consul is one of them, and now you’re dealing with doing cross-region Consul deploys
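For context, pointing Vault at Consul is just a storage stanza in the server config; a minimal sketch (the address and KV path are illustrative):

```hcl
storage "consul" {
  address = "127.0.0.1:8500"  # local Consul agent on each Vault node
  path    = "vault/"          # Consul KV prefix Vault stores its data under
}
```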
Alternatively, you can wait until Vault 1.4.0 is out and integrated storage is out of beta https://www.vaultproject.io/docs/configuration/storage/raft/
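The integrated storage equivalent from the raft docs linked above is a stanza like this (values illustrative; `node_id` must be unique per node):

```hcl
storage "raft" {
  path    = "/opt/vault/data"  # local directory for the raft data
  node_id = "vault-1"          # unique identifier for this node
}
```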
If it were me though, i would definitely hold off until integrated storage is production ready before attempting this.
Or at least deploying it as production infrastructure
That makes a lot of sense. Thanks!
I think I have a plan to test this out with using a dynamodb global table as the backend, but I’m sure my plan has at least a few holes I don’t see yet. Thanks for chatting, and I’ll report back with how it goes
Hello, I’m trying consul-template with Vault, but when I run consul-template from the CLI I get an error:
Get http://127.0.0.1:8500/v1/kv/kv/xxx/staging?recurse=&stale=&wait=60000ms: dial tcp 127.0.0.1:8500: connect: connection refused (retry attempt 1 after "250ms")
but I don’t understand why consul-template wants to use localhost, because I’m using these arguments:
consul-template -template="env-test:env.txt" -vault-addr=https://vault.xx.com -vault-ssl-verify=false -vault-token=xxx -once
And if I use the same parameters with envconsul, everything works fine:
2020/03/10 15:32:16.625615 [WARN] (clients) disabling vault SSL verification
2020/03/10 15:32:16.625637 [INFO] (runner) creating watcher
2020/03/10 15:32:16.625654 [DEBUG] (watcher) adding vault.token
2020/03/10 15:32:16.625670 [INFO] looking at vault kv/xxx/staging
2020/03/10 15:32:16.626175 [INFO] (runner) starting
2020/03/10 15:32:16.626258 [DEBUG] (watcher) adding vault.read(kv/xxxx/staging)
2020/03/10 15:32:16.657553 [DEBUG] (runner) receiving dependency vault.read(kv/xxx/staging)
2020/03/10 15:32:16.657620 [INFO] (runner) quiescence timers starting
2020/03/10 15:32:20.657795 [INFO] (runner) quiescence minTimer fired
2020/03/10 15:32:20.657821 [INFO] (runner) running
2020/03/10 15:32:20.657833 [DEBUG] Found KV2 secret
2020/03/10 15:32:20.657888 [DEBUG] (runner) setting
@julien M. you didn’t add the -consul-addr flag, so it’s defaulting to localhost for the Consul port
There’s also a chance you don’t need Consul-Template, and instead want Vault Agent with the Consul-Template interpolation syntax.
These two tools have a terrible naming history.
@julien M. yes so that’s the older functionality, but regardless you need to specify both -consul-addr and -vault-addr, and you only did the latter
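Concretely, the fixed invocation would look something like this (the Consul address is a placeholder for your actual Consul endpoint; this obviously needs live Consul and Vault servers to run):

```shell
consul-template \
  -consul-addr=consul.xx.com:8500 \
  -vault-addr=https://vault.xx.com \
  -vault-ssl-verify=false \
  -vault-token=xxx \
  -template="env-test:env.txt" \
  -once
```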
I’d personally recommend using Vault Agent if you don’t need data from Consul. It has something called AutoAuth, which allows for automatic authentication via one of the Authentication methods, meaning you just keep the daemon running and forget about it. Then, it has a local HTTP listener, meaning you can just query it locally without worrying about tokens. There’s even a section on how to combine consul-template with Vault Agent by connecting consul-template to Vault Agent’s HTTP listener. But I’m not sure if you need that.
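As a rough sketch of what that Agent setup can look like (the AppRole method, file paths, and listener address are assumptions for illustration, not from this thread):

```hcl
# Vault Agent config sketch: AutoAuth via AppRole plus a local listener.
auto_auth {
  method "approle" {
    config = {
      role_id_file_path   = "/etc/vault/role-id"
      secret_id_file_path = "/etc/vault/secret-id"
    }
  }
}

# Local listener so processes on this host can query Vault without handling tokens
listener "tcp" {
  address     = "127.0.0.1:8100"
  tls_disable = true
}

cache {
  use_auto_auth_token = true  # attach the auto-auth token to proxied requests
}
```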
Hey folks! I’m looking for some advice about how people are tackling the ‘chicken and egg’ problem with secret management. I had the idea to use terraform to provision Vault. But with this comes the question: from where do I get the secrets needed within the terraform scripts (of course, I’d love to use Vault for that!)? One solution I have heard is to place the tf scripts in a ‘super secret’ Git repository along with these secrets and restrict access to only a select few. While I guess this works, something about it feels dodgy. But I guess these init secrets have to be stored somewhere. How are others tackling this?
It seems by default that KV secrets have the last 10 versions kept. Is there a way to raise this limit on all secrets at once?
Vault 0.10.0 introduced version 2 of the key-value secrets engine which supports versioning your secrets so that you can undo the accidental deletion of secrets or compare different versions of a secret.
Looks like that’s possible
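For the limit specifically: on a KV v2 mount you can raise the default for every secret in the mount via the engine’s config endpoint, or per secret via its metadata (the mount name `secret` here is an assumption; both commands need a live Vault):

```shell
# Raise the default version retention for the whole KV v2 mount
vault write secret/config max_versions=25

# Or override the limit for a single secret's metadata
vault kv metadata put -max-versions=25 secret/my-app
```

Note that versions beyond the limit are only pruned on the next write to each secret.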