#geodesic (2024-09)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2024-09-09
v3.3.0 Smarter abbreviation of EKS cluster name #956
Footnote In every release, we update all unpinned packages to their latest packaged versions. These changes are not detailed here.
Update the function that abbreviates the EKS cluster name for the command line prompt to just remove "-eks" and "-cluster" rather than everything after "-eks-" so that…
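The described change can be sketched in shell. This is a hypothetical illustration of the new behavior (the function name and the sample cluster name are made up, not Geodesic's actual prompt code): strip the "-eks" and "-cluster" tokens rather than truncating everything after "-eks-".

```shell
# Hypothetical sketch of the v3.3.0 abbreviation: remove the "-eks" and
# "-cluster" tokens instead of dropping everything after "-eks-".
abbreviate_eks_name() {
  echo "$1" | sed -e 's/-eks//g' -e 's/-cluster//g'
}

abbreviate_eks_name "acme-plat-ue2-prod-eks-cluster"   # → acme-plat-ue2-prod
```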
2024-09-24
Do you see any use cases or benefits for creating a devcontainer for Geodesic?
I do want to - like the one we have for Atmos
Just didn’t get around to it.
I think it would be great as a quick way to kick the tires on it to understand what it does
Are you signing up?
I can take a crack at it! Do you envision having it similar to the Atmos devcontainer with a quickstart?
Yes, though we don’t have a home for a quickstart yet. We want to launch a micro site for geodesic as well.
I would start with the dev container and a mention in the root README.md with a button
Similar to this:
Sweet, sounds good! I can take a crack at it this week
There is an atmos devcontainer?
How well does that work vs using geodesic?
And how does GitHub Codespaces fit into that?
Yep, there is!
If there is documentation or old office hours where this is covered, please link me to it
Sorry, brand new to GitHub. Will this launch the codespace under cloudposse org?
This simple tutorial works with the Codespace
The note on the linked page implies that it’s free to give it a try?
GitHub says they are free to get started
You might need to reduce the instance size
Sweet, I just wanted to see a quick demo without it being on a bill somewhere I have to answer for. Will let you know how it works out. YOLO, right?
Note, terraform is painfully slow on machines with lower memory / cores
Yeah, that's my concern. Anecdotally it feels like terraform is slower in geodesic as well on my M3.
At least with the part where it downloads resources from the registry. I have never quite figured out why the caching of the providers doesn’t work.
You would need to place the cache on the mounted filesystem in geodesic
I think it’s configurable via TF_DATA_DIR
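For what it's worth, Terraform's provider download cache is usually controlled by TF_PLUGIN_CACHE_DIR, while TF_DATA_DIR relocates the per-root .terraform working directory. A minimal sketch of pointing the cache at a bind-mounted path so it survives container restarts (the paths are illustrative, not Geodesic defaults):

```shell
# Sketch: persist the provider cache on a mounted filesystem so downloads
# from the registry are reused across container restarts.
# TF_PLUGIN_CACHE_DIR holds provider binaries; TF_DATA_DIR (default
# ".terraform") is the per-root working directory. Paths are assumptions.
export TF_PLUGIN_CACHE_DIR="${HOME}/.terraform.d/plugin-cache"
mkdir -p "$TF_PLUGIN_CACHE_DIR"
```

With ~/.terraform.d bind-mounted into the container (as in the devcontainer example later in this thread), the cache would persist on the host.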
So, is the Codespaces demo just to demo the devcontainer, or do you actually use it? (ie does it handle authentication into AWS, etc)
Looks great btw. Love it.
It’s just a demo. At this time, we predominantly use geodesic
However, I think the codespace is very comprehensive and could replace it. We just haven’t adopted it as a team.
I think a page on using Devcontainers could be added here https://docs.cloudposse.com/layers/project/
Like we have one for “Prepare the Toolbox”
@Jeremy G (Cloud Posse) I’ve created a basic devcontainer for Geodesic in a draft PR (https://github.com/cloudposse/geodesic/pull/957). You’re welcome to build on it or disregard it entirely. The Geodesic container starts up, but the prompt doesn’t display—I suspect there’s some initialization magic happening behind the scenes that isn’t working.
@Michael Here is the (slightly sanitized) devcontainer.json I use for your customized Geodesic. See if there's anything you'd like to take from it and add to yours, and please LMK what you think.
{
  "name": "Geodesic Dev Container",
  "image": "<your-GH-org>/infra:latest",
  "mounts": [
    "source=${localEnv:HOME}/.aws,target=${localEnv:HOME}/.aws,type=bind",
    "source=${localEnv:HOME}/.config,target=${localEnv:HOME}/.config,type=bind",
    "source=${localEnv:HOME}/.emacs.d,target=${localEnv:HOME}/.emacs.d,type=bind",
    "source=${localEnv:HOME}/.geodesic,target=${localEnv:HOME}/.geodesic,type=bind",
    "source=${localEnv:HOME}/.kube,target=${localEnv:HOME}/.kube,type=bind",
    "source=${localEnv:HOME}/.ssh,target=${localEnv:HOME}/.ssh,type=bind",
    "source=${localEnv:HOME}/.terraform.d,target=${localEnv:HOME}/.terraform.d,type=bind",
    "source=${localEnv:HOME}/.bashrc.d,target=${localEnv:HOME}/.bashrc.d,type=bind"
  ],
  "workspaceMount": "source=${localWorkspaceFolder},target=${localWorkspaceFolder},type=bind",
  "workspaceFolder": "${localWorkspaceFolder}",
  "appPort": ["12345:12345"],
  "containerEnv": {
    "SHELL": "/bin/bash",
    "GEODESIC_PORT": "12345",
    "GEODESIC_HOST_CWD": "${localWorkspaceFolder}",
    "GEODESIC_WORKDIR": "${containerWorkspaceFolder}",
    "LOCAL_HOME": "${localEnv:HOME}"
  },
  "remoteEnv": {
    "LOGIN_SHELL": "/bin/bash"
  },
  "settings": {
    "terminal.integrated.defaultProfile.linux": "/bin/bash"
  }
}
Note that appPort and GEODESIC_PORT have to match, but otherwise can be any non-privileged port.
@Jeremy G (Cloud Posse) I assume that localWorkspaceFolder is set to the directory on your host machine that contains infra?
IIRC, localWorkspaceFolder is a devcontainer standard that refers to the folder which the IDE has opened as the root of the project. So yes, it is the root of the Git repo, containing the components/ and stacks/ directories.
2024-09-25
Is there any migration path for geodesic now that direnv, use terraform x, and basename-pwd have been consigned to the bin?
So we used direnv in geodesic before we developed atmos.
Can you describe how you are/were using direnv in geodesic?
We use it to set vars mostly, but the folders were using use terraform 1.0 in the .envrc file so the right version of terraform would load for stuff that hadn't been updated; less important for the version selection now, as I think we're all on 1+.
The use terraform call also seemed to load in the settings for the s3 backend (generated key path / bucket / dynamo / tfvar cli), as all that functionality dropped off even though tfenv is still working.
We had the config separated in each aws org like
conf/eu-west-2/<vpc,etc>
conf/eu-central1/<vpc,etc>
The .envrc in the region folder was setting a wrkname, which was for the workspace, and the TF_VAR_region.
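The per-region layout described above could be reconstructed roughly like this. The values and folder names follow the messages; this is an illustrative sketch, and `use terraform` comes from Geodesic's old direnv stdlib, not plain bash.

```shell
# Illustrative reconstruction of the per-region .envrc described above.
# "use terraform" is a direnv stdlib function from older Geodesic images,
# wrapping tfenv; wrkname and TF_VAR_region are from the messages.
mkdir -p conf/eu-west-2
cat > conf/eu-west-2/.envrc <<'EOF'
use terraform 1.0             # pin terraform version via tfenv
export wrkname=eu-west-2      # workspace name
export TF_VAR_region=eu-west-2
EOF
```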
I tried adding direnv back in, copying the rootfs out of an older version along with the env vars from the container, but I've a feeling I'm missing something as it didn't play nicely.
Adding .envrc on its own is a pain with that damn security change, which makes it as irritating as git without a whitelist: "direnv allow ." everywhere. I could probably work around it if I got that to work properly, as I could script the env vars that have evaporated into an .envrc in each folder.
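For the repeated "direnv allow ." annoyance, direnv does support a whitelist in its config file, so every .envrc under a trusted path loads without prompting. A sketch, with the repo path being an assumption to adjust for your mount:

```shell
# Hypothetical workaround: whitelist the repo path in direnv's config so
# .envrc files under it are trusted without per-folder "direnv allow .".
mkdir -p "${HOME}/.config/direnv"
cat >> "${HOME}/.config/direnv/direnv.toml" <<'EOF'
[whitelist]
prefix = [ "/localhost/infra" ]   # illustrative path; adjust to your setup
EOF
```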
Not a huge issue at the moment as I can just drop back to the older container; just hoping I don't need to go through and change all 5 repos and a few hundred folders.
Admittedly I've only briefly looked at atmos; I still have burnt fingers from Gruntwork's abstraction of terraform.
If you are using direnv mostly for version management, then asdf might be the best option
Also, you could copy in the direnv stuff to your local dockerfile
Using direnv proved more challenging in a CI/CD context, as it was based on the act of changing directories. We had to jump through a lot of hoops to get that to work in things like GitHub Actions.
(Note the 10 stages before terraform bankruptcy that led to atmos as part of our chapter 11 "terraform reorg": https://atmos.tools/introduction/why-atmos/)
The CI/CD bit makes sense, tbh; we use TeamCity for CI, so our .envrc tends to be loaded into the env params of the build task, which pretty much shortcuts it.
How are you setting the s3 backend now then? In a new geodesic it doesn't seem to standardise it (I just get prompted to add the vars to the tf); or is this a side effect of moving to atmos (you don't really intend for people to be using terraform outside of the /components folder)? I have an app deployment (nlb/ec2/asg/r53/sg/iam etc) that is broken into modules; I'll have to give it a go.
The docs are looking great by the way. You should add your shiny new logo to https://gallery.ecr.aws/cloudposse/ and make it pretty
2024-09-26
This is awesome; thanks for sharing! Going to incorporate a few now @Jeremy G (Cloud Posse)