#build-harness (2024-03)

Help with the Cloud Posse build-harness https://github.com/cloudposse/build-harness

2024-03-06

Hans D

Want to see if we can get a more abstract version of

```makefile
## Run tests in docker container
docker/test:
	docker run --name terratest --rm -it -e GITHUB_TOKEN \
		-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -e AWS_ENDPOINT_URL \
		-e PATH="/usr/local/terraform/bin:/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" \
		-v $(CURDIR)/../../:/module/ cloudposse/test-harness:latest -C /module/test/src test
```

in the build (or test?) harness (and some other magic here as well). What would be the best place to add it, since it involves Docker, AWS, and Terraform?

David Schmidt

My personal solution to creating “custom” targets that I don’t necessarily think are general-purpose enough to share with the rest of the world is to add my own personal extensions: https://github.com/cloudposse/build-harness?tab=readme-ov-file#extending-build-harness-with-targets-from-another-repo
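The pattern could look roughly like this (the include path, variable, and the docker/aws-tf target below are hypothetical illustrations of the idea; see the linked README section for the actual extension mechanism):

```makefile
# Sketch: keep personal targets in a separate repo/directory and pull them
# in from your project Makefile. Path and target names are hypothetical.
-include $(HOME)/.build-harness-extensions/Makefile

## Example custom target living in the extensions repo, delegating to the
## stock docker/test target with extra `docker run` flags.
docker/aws-tf:
	$(MAKE) docker/test DOCKER_RUN_ARGS="--platform linux/amd64"
```

This keeps one-off targets out of build-harness itself while still letting every repo use them.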

Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse)

@Hans D I want to thank you again for all your contributions.

Regarding enhancing docker/test in build-harness, I am conflicted:

• Makefiles go in build-harness, but we run all our tests in test-harness
• We are trying to get away from both of those systems/images to a new paradigm that has multi-platform, multi-architecture support, and while that is still quite a ways off, I am reluctant to keep enhancing build-harness more than needed
• Without knowing the extent of your changes, it is hard to suggest where they might go or the best way to approach them

The current docker/test target is what we use to run all our open source Terraform module terratest tests. I would not want to change that at all. It is pretty flexible as is, except for the fact that it always runs make test in test/src. What do you want to change or add to it?

Hans D

The scope is about:

• adding the override to always pull the x86 Docker images, even when on Mac ARM
• adding some additional env vars to allow running with localstack in an easy way - this would allow me to do more testing locally without having to apply the fix to all 160+ repos
• making some observed variants more generally available if they seem reasonable

Quite OK to have a slightly different target name to keep the current structure in place, but keep the required local change to an absolute minimum (eg docker/aws-tf).

Jeremy G (Cloud Posse)

• Adding the --platform arg to docker/test is fine. We are more likely to scrap the whole setup before we get to support ARM. :smirk:
• Adding additional env vars via a Makefile variable, following the same pattern we use to add docker build args via ARGS, would be fine, allowing you to extend the command without having to hack the recipe. However, due to security concerns, it would have to be a different target. Perhaps docker/custom-test.

I don’t know what you mean by:
having some observed variants either more generally available if they seem reasonable.
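The env-var passthrough described above could be sketched like this (the TEST_DOCKER_ENV variable and docker/custom-test target are illustrative names, not existing build-harness API):

```makefile
# Extra `docker run` flags supplied by the caller, e.g.:
#   make docker/custom-test TEST_DOCKER_ENV="-e AWS_ENDPOINT_URL"
TEST_DOCKER_ENV ?=

## Run tests with caller-supplied environment passthrough (separate target,
## so the hardened docker/test recipe stays untouched)
docker/custom-test:
	docker run --name terratest --rm -it -e GITHUB_TOKEN \
		-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
		$(TEST_DOCKER_ENV) \
		-v $(CURDIR)/../../:/module/ cloudposse/test-harness:latest -C /module/test/src test
```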

Hans D

So far, all the changes I need are just make variables, with an additional target setting some of these (eg docker/custom-test/localstack; no explicit dependency needed).

having some observed variants
That’s about some minor deviations/variants I’ve noticed in the various Makefiles, eg to pick some Terraform version. Adding the --platform arg to docker/test will be done via an env var so that it causes less change - that will be part of the change.

Will do a first draft, picking the docker module in build-harness as first entry point.

Jeremy G (Cloud Posse)

We want the docker/test target to be relatively hard-coded, because it runs in a hostile environment. Go ahead and just hard-code --platform linux/amd64 into the recipe.
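Applied to the recipe quoted at the top of the thread, that is a one-flag addition:

```makefile
## Run tests in docker container, forced to x86 even on Apple Silicon
docker/test:
	docker run --name terratest --rm -it --platform linux/amd64 -e GITHUB_TOKEN \
		-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -e AWS_ENDPOINT_URL \
		-e PATH="/usr/local/terraform/bin:/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" \
		-v $(CURDIR)/../../:/module/ cloudposse/test-harness:latest -C /module/test/src test
```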

Hans D


We are trying to get away from both of those systems/images to a new paradigm that has multi-platform, multi-architecture support, and while that is still quite a ways off, I am reluctant to keep enhancing build-harness more than needed
<cheering> Yes please, but as we currently still have it, I will keep the change as small as possible while still being able to piggyback on the currently available structure

Hans D

ps: dev containers? (<duck/>)

Jeremy G (Cloud Posse)

Variations about picking Terraform version are due to evolution over time. These variations should not be propagated. Show me some examples and I will sort them out for you.

Hans D

As I will be touching most repos as well (other changes/reviews), I can pick them up, adding you explicitly as reviewer.

Jeremy G (Cloud Posse)

Regarding dev containers, we have support on the roadmap as part of a larger overhaul of Geodesic we will be releasing as Geodesic Version 3.

It’s still not entirely clear what a Geodesic dev container will offer that Debian Slim does not, but that is for a different thread.

Jeremy G (Cloud Posse)

If the variations you are talking about are test/Makefile or test/src/Makefile, those should all be standardized to what is in https://github.com/cloudposse/terraform-example-module.

Hans D
#45 add test/run/in-docker

what

• add test/run/in-docker target
• adds AWS_ENDPOINT_URL

why

• use a generic target in test-harness, vs a specific one per repository
• allows for bulk update of eg env vars to pass on
• still allows repo-specific setup
• AWS_ENDPOINT_URL allows for injection of eg localstack
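Based on that description, the delegating target could be sketched like this (this recipe is an illustration of the idea, not the actual diff from PR #45):

```makefile
## Generic target living in the test harness that per-repo Makefiles delegate
## to, so env-var changes (like AWS_ENDPOINT_URL) roll out centrally instead
## of being edited into 150+ repos.
test/run/in-docker:
	docker run --rm -it \
		-e GITHUB_TOKEN -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
		-e AWS_SESSION_TOKEN -e AWS_ENDPOINT_URL \
		-v $(CURDIR)/../../:/module/ cloudposse/test-harness:latest -C /module/test/src test
```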

Jeremy G (Cloud Posse)

@Hans D I appreciate the effort, I really do, but I don’t understand what you are trying to accomplish.

For example, make docker/test runs the tests in the docker image, so test/run/in-docker just seems redundant, and in the wrong place besides. Maybe something like make docker/local-test or docker/custom-test would be a better target.

What does setting AWS_ENDPOINT_URL enable? When would you use that?

I’m still somewhat suspicious that you have seen the variations in our repo testing and think that reflects a need for customization, when it more likely reflects a need for standardization where some repos have been left behind.

I fully understand that build-harness is very complex and difficult to understand, modify, or extend. Because of this, your PR is almost definitely the wrong way to do what you are trying to do, but if I can get clear on your goals, I can make the changes so you don’t have to get into the weeds of build-harness. (If we saw build-harness as part of our long-term plan, I might instead encourage you to dig in and learn it, but since we want to replace it as soon as is practical, I would say your time is better spent on other pursuits.)

Hans D

Hi Jeremy, AWS_ENDPOINT_URL is needed to be able to test against a localstack environment (so no need for an actual AWS account with elevated permissions). Some cases might even need more variables like that. (Only in a few cases are the required API calls not available in the pro version; otherwise it saves a ton of time.)

By moving this run bit up a level (into the test harness) instead of the local repo, those kinds of common structures can be rolled out to every repo without having to touch every repo. For me, having to make that change in all 150+ repos, potentially multiple times (changes upstream, cleaning out unstashed changes, etc) to do the local testing - vs having to wait for terratest to run in the pipeline, doing some more clickops to check out the results, and retesting - is... eh... not promoting active testing/contributing of code-level changes, nor investigating/resolving failing test runs.
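For context, pointing the AWS SDKs at localstack is purely a matter of environment variables. The endpoint and dummy credentials below are localstack's conventional defaults, not anything defined in build-harness:

```shell
# localstack listens on port 4566 by default and accepts any credentials,
# so dummy values are enough to satisfy the SDKs' credential-chain checks.
export AWS_ENDPOINT_URL="http://localhost:4566"
export AWS_ACCESS_KEY_ID="test"
export AWS_SECRET_ACCESS_KEY="test"
export AWS_DEFAULT_REGION="us-east-1"

# docker/test already forwards env vars with `-e`, so once AWS_ENDPOINT_URL
# is in that list the same target runs against localstack or real AWS.
echo "$AWS_ENDPOINT_URL"
```

Unsetting AWS_ENDPOINT_URL (or leaving it empty) falls back to the real AWS endpoints, which is why injecting it via the harness is a low-risk change.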

About the location in the repo and the actual repo: I thought this to be the best place as it is quite test-harness specific, and using this construct means the actual Makefile involved can be more or less the same, just delegating to the central target (there you invoke docker/test). Some of the variables are there because that is also a kind of habit I have for this kind of construct, as otherwise it would really tie into the specific repo layout.

Jeremy G (Cloud Posse)


having to do that change in all 150+ repos
This is where you lose me. As the lawyers would say, “assumes facts not in evidence”.
only a few cases the required api calls are not available in the pro version
The pro version of what? I always do testing against live AWS APIs with real resources. I don’t know anything about testing with stubs, mocks, or local API servers. Please fill me in.

When I want to run tests locally, I just run make test because I already have the API keys set up to a real AWS account and the necessary tools installed on my host, but that’s because I’m on staff at Cloud Posse. Before that, I used make docker/test because I didn’t want to install all those tools, but I still had my own AWS account to use for resource creation and testing, and stayed within the free tier doing so.

You are exposing me to another paradigm which I’m willing to support (or at least consider supporting), but I need to know more if I want a high likelihood of success.

Hans D


having to do that change in all 150+ repos
well, with the sweeping and stuff, encountering various issues doing it already for 10+, and more upcoming (https://github.com/orgs/cloudposse/projects/27)

Jeremy G (Cloud Posse)

Sorry, I’m out of the loop on your undertaking. When you say things like “that change”, “the sweeping and stuff”, “encountering various issues”, I’m missing all the referential predicates.

Jeremy G (Cloud Posse)

If my task were to clean up a bunch of repos, I’d still just be running make test in all of them.

Hans D

Sorry, my bad, might indeed be skipping some thinking/context steps.

Hans D

I do not yet have an AWS account for doing this kind of testing (I need to clear that internally; cost-wise it should not be a big problem, and I would like to keep it separate from the regular sandbox I have available for my normal work). I do have localstack pro, so most of the AWS stuff I can test in a confined environment - even allowing for testing things like bulk creation/deletion of accounts. That one is easy to wipe in an instant (one curl away).

Hans D

For the bulk repo sweeping, and investigating why terratest is failing in various cases (see the project link), I would prefer to stay as close as possible to using the Makefiles etc in the repos - and upstreaming the few minimal changes needed for running them locally in a contained way.

Hans D

Appreciating your help in keeping me sane re this.

Jeremy G (Cloud Posse)

Well, I think there are 3 different issues to deal with regarding our testing framework being out of date in repos.

  1. Updating test/Makefile and test/src/Makefile. These Makefiles should just be replaced by whatever is in terraform-example-module.
  2. Updating the go version and dependencies. Our current go tests should be using go v1.21. It was my intention that we keep the framework updated in terraform-example-module and then update it in other modules more-or-less via copy and paste, but this has several problems, not least of which is that we have not upstreamed the changes to the example module. Nevertheless, updating tests to go v1.21 and updating dependencies to current versions should not be that hard.
  3. We have enhanced the baseline of the test framework in go in a few ways. This is trickier to roll out.
    a. We migrated the random attribute (used to avoid resource name collisions) from go.random to terratest.random.UniqueId
    b. We migrated to using tempTestFolder := testStructure.CopyTerraformFolderToTemp so that we can truly run tests in parallel
    c. We added TestExamplesCompleteDisabled everywhere to ensure modules work properly with enabled = false
    d. We added a cleanup function to clean up after all this parallel stuff.

The changes in (3) are harder to copy and paste. Furthermore, the cleanup function may not be the right implementation. @Andriy Knysh (Cloud Posse) found a more robust way to ensure that cleanup happens even if the test crashes, but I haven’t looked into it deeply, and do not know if it should be used along with or instead of the cleanup function (which I wrote).

And then we have additional utilities that we should deploy as needed, specifically an aws client to check that certain resources were created correctly, and a Kubernetes client to determine that Kubernetes clusters were configured correctly. Those should be added only as needed.

I don’t know how much you want to get into all of that.


Hans D

Let me know how I can help.

The current harness is a bit of a black box - which I personally don’t like, as when stuff breaks it’s harder to find out what part is breaking. Moving more towards a central multi-arch testing container would help with all the dependencies needed (and personally I prefer such a container vs having to install locally). (The black box is partly because I didn’t take the time yet to deep-dive into the internals.)

And as indicated, if we can make it work with localstack (only at the level of setting the right env vars), that would allow others without an AWS account (but with localstack) to run the tests as well. The final test against AWS is the baseline (if it fails there, that trumps “but it works on localstack”). Quite OK that the support for localstack is hardly visible / not pointed out in the docs.

2024-03-08

2024-03-09
