#variant (2020-04)
Discuss variant (the “Universal CLI”) https://github.com/mumoshu/variant
Archive: https://archive.sweetops.com/variant/
2020-04-02
@mumoshu on variant2, is this where you’re currently headed? looks really nice.
absolutely!
we’re considering variant for the next phase of a project and wondering if we should invest in variant1 or variant 2
does variant2 support nesting of commands?
I would definitely recommend variant 2 for your next project.
I think it’s almost complete (I haven’t been able to come up with anything to add for almost a month), but please feel free to submit feature requests when you find something that’s still missing
@mumoshu I didn’t see an example of how to nest commands
…like in variant 1
E.g. “mycli command subcommand”
the last major thing I’d like to decide before releasing variant 2 is whether we add variant 1 compatibility https://archive.sweetops.com/variant/2020/01/#a3343add-6c11-4f17-8ed0-5b7e3a98e2f0 cc/ @tolstikov
SweetOps Slack archive of #variant for January, 2020.
I am okay with no backwards compatibility at this point. It’s a radical departure.
@Erik Osterman (Cloud Posse) You should use
job "bar" {
  import = "./path/to/dir"
}
to nest all imported commands under “bar”
Hrmm what about within a fully self-contained script?
Without importing other files
I don’t generally recommend it as it makes a single variant 2 script too big to read but
Turn your bash scripts into a modern, single-executable CLI app today - mumoshu/variant2
Ohhhhhhhhhh
Easy enough. I assumed it would be expressed with nesting somehow
@johncblandii has joined the channel
I didn’t like that we were forced to nest tasks to define nested commands in variant 1
I agree that this will look more readable in the long run
also - this syntax makes it more searchable. just grep for job "foo bar"
for any subcommand. this wasn’t possible with variant 1
True - that’s a plus
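For reference, a minimal sketch of the flat style being discussed here (labels with spaces become nested subcommands, so this block defines mycli foo bar; the echo body is just a placeholder):
job "foo bar" {
  description = "Invoked as `mycli foo bar`"

  exec {
    command = "echo"
    args    = ["hello from foo bar"]
  }
}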
@Igor Rodionov has joined the channel
@mumoshu this begs the question - have you considered HCL2 for Helmfile?
Yeah! Just hesitated to create a feature request for that in the helmfile repo. (I think) there’s so many people who use k8s but not tf and it seemed not appealing at all for non-tf users
Ya could be controversial
also i’m not yet sure on which project i’d build something for managing set of helm releases w/ hcl2
Btw have you seen the new kpt tool?
yep
i was a bit disappointed on it - i was expecting something similar to helm but for kustomize
apparently it is not
kpt live apply
seems very useful tho
That was my interpretation of it - only from reading the faq - that it was “helm for kustomize”; surprised it’s not
so a possible feature would be that kpt live apply
is integrated into helmfile apply
, which allows us to deploy helmfile managed apps without helm…
Hrmmm maybe not time for helmfile? I mean why not build libs for variant2?
yeah i think that’s a valid question
Could maybe reimplement a lot of this without some of the tech debt
so perhaps i’d enhance variant 2 to add hcl2 syntax on top of something similar to helmfile
and use kpt live apply
with that? just brain storming
Yep - but maybe not corner yourself by making it generalized
E.g. this is the tool to replace Terragrunt for terraform
Helmfile for helm
“Kptfile” for kpt
This is a way to declaratively express all of that in one tool
What we can do though is make certain things easier to express
With less boilerplate
So Helmfile like business logic expressed in a variant2 lib
that makes sense
The next thing to consider is the common characteristic behind all successful languages like this: a registry component (ideally just GitHub)
Does the import functionality just use go getter?
So we can import GitHub repos?
its local - but you can use https://github.com/variantdev/mod integrated into variant2 for dependency management
Missing package manager for any task runners and build tools e.g. make and variant - variantdev/mod
let’s say you’d want an external variant2 lib hosted under [github.com/cloudposse/variant2libs](http://github.com/cloudposse/variant2libs)
, you’ll write a yourapp.variantmod
like
module "yourapp" {
  dependency "github_release" "variant2libs" {
    source  = "cloudposse/variant2libs"
    version = "> 1.0.0"
  }

  directory "lib/variant2libs" {
    source = "github.com/cloudposse/variant2libs?ref=${dep.variant2libs.version}"
  }
}
running variant mod build
resolves the latest variant2libs release that matches the semver constraint > 1.0.0
and downloads the whole tree for the release under lib/variant2libs
then in yourcmd.variant
, you import the libs like
job "foo" {
  module = "./yourapp.variantmod"
  import = "./lib/variant2libs"
}
:point_up: module = "./yourapp.variantmod"
instructs variant2 to load the module from the variantmod file
can we just stick it in the main variant file?
No, not yet. I’m still unsure whether we should allow inlining it or not
Does the import functionality just use go getter?
back to your question - yes, it’s almost like that.
but its worth noting that, under the hood, variant2 delegates dependency management to another tool
so import = "path"
doesn’t need go-getter
Ok, that is pretty rad
Wow
You have like generalized terraform. This is magical.
Does it have to be in a separate file like that?
can we just stick it in the main variant file?
Aha, so for right now I need to let go of the drive to have it self-contained. I mean, it makes sense as you have it, and we obviously break out our terraform code. Will wait to use it in practice before making any changes.
For example being able to inline the module map :-)
Ahhh HCL probably doesn’t allow mixed types for a key
alright anything else you’d want variant2 to add before you actually use it?
yes. i’ll probably rename module
to import_module
or something similar and use module
for inlined modules
Lol haha
In that case, could import
then be optional? e.g. it defaults to something like .variant/modules/$sha
(just thinking “what would terraform do?”)
hm.. maybe? I do understand your motivation. just not sure how the current variant2 module system fits with that
the variant2 module also allows you to install executable binaries required by your command
perhaps we’d better have a dedicated syntax to import a remote variant2 lib, similar to what we had in variant1
how would you update your variant2 module?
hrm… good point. mycmd init -refresh
?
the good part of the variant2 module system is that you can run variant2 mod up
or mod up
to fetch all the dependencies and update respective <yourmodule>.variantmod.lock
files
to be pull-requested for reviews. this allows you to manage dependency updates like you would do with go mod
for go projects
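A rough sketch of that update workflow, using only the commands and flags that appear elsewhere in this thread:
# fetch dependencies and refresh <yourmodule>.variantmod.lock
mod up

# or let mod open the review PR itself (same flags as in the CircleCI example later in this thread)
mod up --base master --branch "dep-update" --build --pull-request --title "Dependency updates"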
ya, makes sense.
everything is technically possible.
it is just that i need some time to think about how all these things can fit together.
thx for your feedback anyway! please give me more as you come up. it’s necessary to finish variant2
@Erik Osterman (Cloud Posse) I’ve enhanced import
as we discussed above. Please see https://github.com/mumoshu/variant2/blob/master/examples/advanced/import-remote/import.variant#L2
Rock on! We will give this a shot
this is absolutely lovely
The dependency management is intriguing… One of the problems we always have is managing versions of tools used in a project pipeline… We pin anywhere we can use dependabot to automate the ref update through a pr, but there are a lot of release apis/targets and it just doesn’t work for everything. Any thought to a service or tool that would work with the lockfile and help open a pr for new versions?
right - maybe piggy back on the gopkg.toml
format for compatibility with dependabot
For example, terraform-docs just updated to 0.9.0, introducing the Requirements section. We run tfdocs in our ci to compare diffs and now every pr pulls in the new version and fails. Now, actually, all our tooling is in a container, and we versioned the container. So, as long as we use the earlier container, it keeps working, and it’s just the pr for the container update that fails. But it would be awesome to get the scope down to the specific app update
We pin anywhere we can use dependabot to automate the ref update through a pr, but there are a lot of release apis/targets and it just doesn’t work for everything
i hear you and yeah - that’s why i created variantdev/mod.
run mod up --pull-request
periodically from your ci (e.g. circleci scheduled jobs) and it becomes a dependabot-like bot that periodically checks for updates and sends prs
Soooo, what’s involved with that? Support for non-github remotes? E.g. codecommit, gitlab?
Gonna have to dig in…
version: 2.1

jobs:
  mod:
    steps:
      - checkout
      - install_mod
      - run:
          name: Update dependencies
          command: |
            DATE="$(date -u '+%Y-%m-%d')"
            mod up --base master --branch "dep-update" --build --pull-request --title "Dependency updates: ${DATE}"

workflows:
  version: 2
  update_dependencies:
    jobs:
      - mod
    triggers:
      - schedule:
          cron: "0,10,20,30,40,50 0-7 * * *"
          filters:
            branches:
              only:
                - master
Support for non-github remotes? E.g. codecommit, gitlab?
It supports github only for now. Perhaps it’s not that difficult to add support for more, as long as there’s an api?
Yes, there are APIs, we actually added the codecommit support to dependabot. Of course then they were bought by github and have basically stopped merging external contributions, so now we have to maintain a fork and build our own
for github prs, its one line in go
please feel free to give me a snippet of go code for codecommit and gitlab or just submit a pr
// btw, does codecommit support pull requests?
we actually added the codecommit support to dependabot
awesome
Of course then they were bought by github and have basically stopped merging external contributions
oh that’s too bad..
We’re way overcommitted right now, but this definitely sounds interesting… I’ll have to see if I can get someone some time to play here
// btw, does codecommit support pull requests?
Yes, yes it does!
2020-04-04
2020-04-06
@mumoshu shouldn’t positional parameters in variant2
show up in help
? I see the flags, but not the parameters.
it should! probably i just missed implementing it
Error: accepts between 1 and 1 arg(s), received 0
Usage:
mycli terraform init [flags]
Flags:
--cachedir string Module cache folder
-h, --help help for init
Global Flags:
--env string Environment to operate on
job "terraform init" {
  parameter "project" {
    type        = string
    description = "Project to provision"
  }
  ...
}
Also, it appears it’s not possible to use parameters in defaults for options?
parameter "project" {
  type        = string
  description = "Project to provision"
}

option "cachedir" {
  type        = string
  default     = try(".modules/${param.project}")
  description = "Module cache folder"
}
Yes it’s not possible. I couldn’t find out how this should work:
job "x" {
  parameter "a" {
    type    = string
    default = "foo"
  }

  parameter "b" {
    type = string
  }
}
what should ./mycmd x bar
do then? is bar
intended for specifying a
, or b
?
I believe in many cases it would be better to use option
for anything that can be default
ed.
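A small sketch of that suggestion, reusing the names from the example above: the value with a sensible default becomes an option (flag), so the single remaining positional argument stays unambiguous:
job "x" {
  # required positional argument: ./mycmd x bar sets b = "bar"
  parameter "b" {
    type = string
  }

  # defaultable value moved to a flag: ./mycmd x bar --a whatever
  option "a" {
    type    = string
    default = "foo"
  }
}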
I discovered variables
too - so that’s good.
Using this instead of parameters inside of options defaults
is there something like locals in terraform? (a more terse way to express a lot of variables)
what would you use locals
for?
i mean why variables
can’t be used instead?
a more terse way to express a lot of variables
ah so you want it just for a short-hand syntax for a lot of variables?
locals {
moduledir = "${opt.cachedir}"
id = "${opt.namespace}-${opt.stage}-${opt.name}"
....
}
more terse.
than
variable "moduledir" {
value = "${opt.cachedir}"
}
variable "id" {
value = "${opt.namespace}-${opt.stage}-${opt.name}"
}
seems true!
(not a deal breaker, just a nice to have)
i just prefer a TYPE RESOURCE { ATTRS }
syntax
is locals
widely used in terraform?
oh ya! all over the place.
oh really. then we should add it to variant2 as well
(because in terraform variable
default values cannot have interpolations)
so it’s a bit unfair to compare! haha
ah good point
perhaps we can deprecate variant2 variable
s in favor of locals
?
hrm… so variable
could be confusing for those coming from terraform where these are ways to pass settings like option
and parameter
in variant. But I think it’s pretty quick and easy to see that it’s different.
another idea:
variable "locals" {
  foo   = "bar"
  apple = "delicious"
}

variable "mysql" {
  username = "test"
  host     = "localhost"
}
so locals is just arbitrary
but this allows for a terse expression of a lot of settings and an easy way to group them.
looks nice
just curious but how would you define a single variable then?
or maybe introduce a new type?
i’m assuming you declared var.locals.foo
, var.locals.apple
, var.mysql.username
, var.mysql.host
in your above example, right?
settings "mysql" {
username = "test"
host = "localhost"
}
yeah, or just variables
would work
ohhh true
that’s better
variable
and variables
yeah. then i would get a little annoyed by the inconsistency between the two
variable "foo" {
value = "FOO"
}
variables "bar" {
baz = "BAZ"
}
I take it this is not possible:
variable "foo" {
value = {
baz = "BAZ"
}
}
i’m not sure either, but this one is more likely to work:
variable "foo" {
value = map({
baz = "BAZ"
})
}
I’m working on passing a list to another job. I get the following error:
Error: handler for type tuple not implemneted yet
Note, no line numbers (like variant2 usually outputs)
try list(["foo", "bar"])
in hcl2 tuples and lists are different.
can you use either of them in terraform? (then perhaps terraform has an automatic type conversion between list <-> tuple?)
so I think there’s a bug maybe? let me show you….
job "shell" {
  description = "Run a command in a shell"

  parameter "commands" {
    description = "List of commands to execute"
    type        = list(string)
  }

  exec {
    command = "bash"
    args    = ["-c", join("\n", param.commands)]
  }
}
so I have defined it as list(string)
not a tuple
i didn’t consider it as a bug
in hcl2 ["-c", join("\n", param.commands)]
is a tuple
so you need to convert it to a list: list(["-c", join("\n", param.commands)])
Hrmm…
ah well, give me a minute
no list in the example (e.g. list(["-e", opt.script])
)
i think i misread your example. yeah exec.args
can take either types
well, are you trying to call the job shell
from another job, or from the command-line?
So I took your example here: https://github.com/mumoshu/variant2/blob/4d9c0a5bb6a824b72f0b21a5bf81fac65b4d0763/docs/proposals/testmock/simple.variant
But I don’t like HEREDOC
syntax b/c the closing delimiter needs to be at character zero on a new line
Not to sidetrack, but does the <<-
heredoc syntax not work in variant2? Hard to tell what is a general hcl2 thing vs a terraform-specific thing…
To improve on this, Terraform also accepts an indented heredoc string variant that is introduced by the <<- sequence:
block {
  value = <<-EOT
  hello
    world
  EOT
}
In this case, Terraform analyses the lines in the sequence to find the one with the smallest number of leading spaces, and then trims that many spaces from the beginning of all of the lines, leading to the following result:
hello
  world
Thanks for pointing that out!
Haven’t tried it yet, but will if it comes up again.
so I thought I’d just convert it to use a list of commands instead
So your example is:
job "shell" {
  parameter "script" {
    type = string
  }

  parameter "path" {
    type = string
  }

  exec {
    command = "bash"
    args    = ["-c", param.script]
    env = {
      PATH = param.path
    }
  }
}
I thought I could just do:
job "shell" {
  parameter "commands" {
    type = list(string)
  }

  parameter "path" {
    type = string
  }

  exec {
    command = "bash"
    args    = ["-c", join("\n", param.commands)]
    env = {
      PATH = param.path
    }
  }
}
And call it like this:
job "app deploy" {
  option "path" {
    type    = string
    default = ".:${abspath("${context.sourcedir}/mocks/kubectl")}:/bin:/usr/bin"
  }

  run "shell" {
    commands = [
      "kubectl -n ${opt.namespace} apply -f ${context.sourcedir}/manifests/"
    ]
    path = opt.path
  }

  assert "path" {
    condition = opt.path != ""
  }
}
yeah but variant2 doesn’t support passing list(string)
from command-line args so i was just curious how you tried to call it
ah okay
try
commands = list([
"kubectl -n ${opt.namespace} apply -f ${context.sourcedir}/manifests/"
])
i think we don’t have a type conversion from tuple -> list there today.
Ok, so I changed it to this
…same error
ok i reproduced it on my machine, too
hmm
ahhhh
please try:
commands = list(
"kubectl", "-n", opt.namespace, "apply", "-f", "${context.sourcedir}/manifests/"
)
so what I’m trying to do is create a “script” by joining a list of commands.
not passing commands to exec
(where each arg should be passed like in your example)
ah okay
then
commands = list(
"kubectl -n ${opt.namespace} apply -f ${context.sourcedir}/manifests/"
)
ok, that was it!
so not: list(["a", "b", "c"])
, but list("a", "b", "c")
exactly. the former creates a single item list with the type of element being a tuple(string), which isn’t what you want
but hey - i just implemented the automatic type conversion so you can use either, like you would in terraform
… args So that this example should just work, without converting the tuple
["kubectl -n …] into a list(string) with
list("kubectl -n …").
option namespace { descript…
it’s available since v0.18.0
Does tolist
work? https://www.terraform.io/docs/configuration/functions/tolist.html
The tolist function converts a value to a list.
it would also work!
when you need an explicit type conversion
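For example, the earlier run "shell" call could presumably be written with an explicit conversion instead (a sketch reusing the snippet from above):
run "shell" {
  # explicit tuple -> list(string) conversion
  commands = tolist([
    "kubectl -n ${opt.namespace} apply -f ${context.sourcedir}/manifests/"
  ])
  path = opt.path
}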
fyi here’s the list of available functions for variant2
sweet!
I see some test for chdir
, but no example
I noticed that exec
is not valid in step "..." { ... }
blocks
there’s no chdir in variant2 dsl. how would you use it?
so not all terraform commands support a directory argument
e.g. terraform output
yes, exec
isn’t available under step
. use run
instead to delegate to a job
of course, I can wrap this in a script block, but was hoping not to.
not a blocker, just a nice to have
the general recommendation is to wrap any shell script within exec
called from within a job
, so that your variant command can be easily tested
I saw you had a nice example of writing tests for variants
haven’t looked closer yet though at that.
so you’d better create a terraform
job that cds into the directory
before executing terraform
if one job cd’s into a directory, will the next job be run there?
(doubt that)
job "terraform" {
  parameter "workdir" {
    type = string
  }

  exec {
    command = "bash"
    args    = ["-c", "cd ${param.workdir}; terraform ...."]
  }
}
note the problem with this is that now we don’t get free shell-escaping
It goes from:
exec {
command = "terraform"
args = concat(list(param.subcommand), opt.args)
}
(safe)
to…
(still working on it)
(the problem I’m working on is that when I’m running exec, I have a bug somewhere, but variant
just exits 1 with no output)
now we don’t get free shell-escaping
good point
should we add
exec {
dir = "workdir"
command = ...
}
?
(pretty please - that would make things a lot easier in the long run).
i believe i know how hard shell escaping is and that’s why i made exec
a combination of command
and args
we also use a lot of relative paths in helmfile
@Erik Osterman (Cloud Posse) when would you use relative paths in variant2?
fyi: exec dir
and hidden
jobs are available since v0.19.0
Thanks!! Works as intended.
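A quick sketch of what that enables, using terraform output (the earlier example of a subcommand with no directory argument); the dir attribute is the one added in v0.19.0:
job "terraform output" {
  parameter "workdir" {
    type = string
  }

  exec {
    # run the command from the given directory instead of cd-ing inside a shell wrapper
    dir     = param.workdir
    command = "terraform"
    args    = ["output"]
  }
}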
if one job cd’s into a directory, will the next job be run there?
no. and i believe it’s a great thing. previous jobs shouldn’t affect later jobs wherever possible.
bug: if type = "string"
(syntax error) variant exits non-zero, but prints no error messages. obviously, this was my user error.
thanks! i’ve fixed it so that it will emit a nicer message like this:
Error: Invalid type specification
on example/test.variant line 3, in option "namespace":
3: type = "string"
A type specification is either a primitive type keyword (bool, number, string) or a complex type
constructor call, like list(string).
thanks!!
is it deliberate that global parameters and options do not get passed by run
automatically? I need to explicitly pass them everywhere.
not really. would you prefer global parameters implicitly inherited to jobs?
that would also mean that job-specific parameters would hide global parameters by names
would you prefer global parameters implicitly inherited to jobs
Yes, from a user perspective that seems natural that globals are indeed global. But I admit I don’t know the implications of that.
job-specific parameters would hide global parameters by names
Not sure I grok the implications of this. Sounds not good.
2020-04-07
@mumoshu is there something like need
but for jobs rather than steps?
In a Makefile
it’s very easy to have a dynamic list of target dependencies. I want to do the same with job dependencies.
e.g.
# static deps (like steps in variant)
target: dep1 dep2 dep3
could be expressed
# dynamically compute deps
DEPS = $(shell echo dep1 dep2 dep3)
target: $(DEPS)
How to do this in variant2
?
E.g. define a job for “all” that runs a list of jobs (dynamically discovered from the yaml configuration file)
@Erik Osterman (Cloud Posse) variant2 equivalent of that would be concurrent steps:
job "dep1" {
}

job "dep2" {
}

job "terraform init all" {
  concurrency = 2

  step "run-dep1" {
    run "dep1" {
      ...
    }
  }

  step "run-dep2" {
    run "dep2" {
      ...
    }
  }

  step "do something with dep1 and dep2" {
    needs = ["run-dep1", "run-dep2"]

    run "something" {
    }
  }
}
would you prefer a shorthand syntax for this?
which has indeed limited usage but easier to write
job "terraform init all" {
needs = ["dep1", "dep2"]
}
hmm but how should we choose which parameters/options to be passed to dep1 and dep2 then?
So what I was trying to do was create a project configuration in yaml
then use variant to operate on that configuration
then create an all
job (e.g. mycli terraform init all
) that would run terraform init $project
on each of the projects in the config in order
with the step
notation, I have to know ahead of time each project
but i was hoping to create my own “schema” if you will, and then using the same job to operate on each one in an unattended fashion using an all
job
I have it working now being explicit
probably an ideal and better shorthand syntax for your specific example would be
job "terraform init all" {
  need "dep1" {
    param1 = "paramvalue"
    opt1   = "optvalue"
  }

  need "dep2" {
    param1 = "paramvalue"
  }

  run "foo" {
    ...
  }
}
which will be translated to todays
job "terraform init all" {
  concurrency = 2

  step "run-dep1" {
    run "dep1" {
      param1 = "paramvalue"
      opt1   = "optvalue"
    }
  }

  step "run-dep2" {
    run "dep2" {
      param1 = "paramvalue"
    }
  }

  step "run-foo" {
    needs = ["run-dep1", "run-dep2"]

    run "foo" {
      ...
    }
  }
}
this is what’s missing in the variant2 today
so we don’t have a kind of for
expression in variant2 dsl
yea, and I am not crazy about that convention for terraform.
i think it was a necessary evil, but maybe for variant there’s another syntax.
Maybe it doesn’t need to be an iterator.
i hope so.. but have no concrete idea yet
Let me write up how I thought it could work maybe…
how about adding items
to the step
syntax, so that it will “expanded” into a multiple concurrent steps?
variable "projects" {
  value = [
    map("dir", "web"),
    map("dir", "infra"),
  ]
}
step "all-deps" {
  items = var.projects

  run "terraform init" {
    dir = item.dir
  }
}
job "terraform init" {
  parameter "project" {
    ...
  }

  option "extra_args" {
    ...
  }

  exec {
    ...
    args = opt.extra_args
  }
}

job "terraform init all" {
  # note this list of "depends on" can be calculated dynamically
  depends_on = ["terraform init eks", "terraform init efs"]
  extra_args = ["-refresh=true"]
}
In this example, terraform init eks
resolves to the first job and passes eks
as the first positional parameter.
# note this list of “depends on” can be calculated dynamically
Yea, I haven’t yet figured out how to do that, but haven’t tried. I assume there’s some way I could extract that from the yaml configuration.
Worst case I exec out to jq
and globals would be passed automatically
the last space-separated item in each depends_on entry gets turned into the project
parameter value?
how about options?
sec
Updated example.
thanks!
The key is that when you depends_on
, you can only pass all the same parameters or options
okay i see how it works
depends_on
could maybe be steps
and it expands to a set of inline steps
how about this then?
job "terraform init all" {
  depends_on "terraform init" {
    project = "eks"
  }

  depends_on "terraform init" {
    project = "efs"
  }
}
Right, so that’s what it should expand to.
The thing is eks
and efs
come from the config.
ah no, i mean this is the short-hand syntax to be expanded.
For reference, here’s a sample config I’m working with
projects:
  eks:
    module: "git::https://github.com/cloudposse/terraform-root-modules.git//aws/eks?ref=tags/0.122.0"
    min_nodes: 3
    max_nodes: 10
  efs:
    module: "git::https://github.com/cloudposse/terraform-root-modules.git//aws/efs?ref=tags/0.122.0"
Think astro
by uber: https://github.com/uber/astro
Astro is a tool for managing multiple Terraform executions as a single command - uber/astro
but generalized with variant
so it will work with helm, helmfile, terraform, etc.
what i worry about is this requires me to write a command parser that should behave equivalent to shells
depends_on = [“terraform init eks”, “terraform init efs”]
extra_args = [“-refresh=true”]
hopefully i could avoid it. that’s the same reason why i’ve added exec { dir = "..." }
yesterday
but we definitely need a way to dynamically generate dependent job runs
write a command parser that should behave equivalent to shells
So, it’s not unheard of in make to have a target call make ; not crazy about it, but I guess that’s always a possibility to have variant job call $0
yeah true. but i won’t add a syntax sugar to just call variant run <whatever>
from within variant
so a few possible options in my mind now:
job "terraform init all" {
  exec {
    # exec will be expanded to multiple concurrent execs by replacing `item` in `args` to each item in this tuple:
    items = ["efs", "eks"]
    cmd   = "variant"
    args  = ["run", "terraform", "init", item, "--refresh=true"]
  }
}
job "terraform init all" {
  need {
    # this step will be expanded to multiple concurrent steps by replacing `item` in the run body to each item in this tuple:
    items = ["efs", "eks"]

    run "terraform init" {
      project = item
      refresh = true
    }
  }
}
job "terraform init all" {
  depends_on {
    # this step will be expanded to multiple concurrent steps by replacing `item` in the run body to each item in this tuple:
    items = ["efs", "eks"]

    run "terraform init" {
      project = item
      refresh = true
    }
  }
}
Should items
maybe be parameters
since that’s how it’s used?
Probably no. items
is a predefined name of the variable which is used for extracting each item
and it’s up to the user how they would use item
for
ok
i mean the item
can be passed as a parameter value, or an option value
Oh, now I see how you’re doing that
or it can even be used to dynamically compute a value, which is eventually passed to a param or option
yea, that could work.
Maybe depends_on
is more like for_each
now?
maybe?
but you can omit items
altogether
also, you can have two or more depends_on blocks
job "terraform init all" {
  depends_on {
    run "install terraform" {
    }
  }

  depends_on {
    # this step will be expanded to multiple concurrent steps by replacing `item` in the run body to each item in this tuple:
    items = ["efs", "eks"]

    run "terraform init" {
      project = item
      refresh = true
    }
  }
}
this would firstly install terraform, then concurrently terraform-init efs and eks
aha, makes sense
that’s nice
would you prefer this?
job "terraform init all" {
  depends_on "install terraform" {
  }

  depends_on "terraform init" {
    # this step will be expanded to multiple concurrent steps by replacing `item` in the args to each item in this tuple:
    items = ["efs", "eks"]
    args = {
      project = item
      refresh = true
    }
  }
}
this way, the dependent job names are known at parsing time
Oh, I see what you’re doing.
Yea, that’s better and less nesting
while the terraform init
targets and args
can be still computed at run time
great
which one would you prefer, need
or depends_on
?
job "terraform init all" {
  need "install terraform" {
  }

  need "terraform init" {
    # this step will be expanded to multiple concurrent steps by replacing `item` in the args to each item in this tuple:
    items = ["efs", "eks"]
    args = {
      project = item
      refresh = true
    }
  }
}
So currently, you don’t use args
like this and just pass everything
exec
does take arguments like that
Ah, true
(though thought of that more as args to the syscall function)
so in my initial variant2 implementation, it was
job "foo" {
  run {
    job = "anotherjob"
    args = {
      param1 = "param1"
    }
  }
}
(though thought of that more as args to the syscall function)
i think that’s valid. so maybe we’re mixing two conceptually different things into the name args
. not sure it’s good or bad though
if it made it easier to implement it this way, then we should keep it.
conceptually, this would make more sense to me:
options = {
...
}
parameters = {
...
}
vs args
where they are lumped together.
i understand
my idea was to explain it like:
Variant reads named args
, matches and sets options
and parameters
values by names
so that we don’t need to distinguish between params and options from the call-side, which makes it more readable (in my opinion) and easier to refactor later:
run "terraform init" {
  params  = ["efs"]
  options = {refresh = true}
}

vs

run "terraform init" {
  project = "efs"
  refresh = true
}
i believe it’s pretty close to that
i occasionally see people wrapping helmfile, helm, kubectl, and terraform with bash snippets/scripts.
perhaps that’s where variant/variant2 shines
Everyone is pretty much using some wrapper script for Terraform AFAIK, be it Terragrunt/scripts/geodesic
Yea, I think we’ve pushed make
to it’s limits
I’m really excited about variant2
and the new DSL
working on a prototype cli for our clients as a starting off point.
Also note, variant2 ships with a built-in testing framework.
and a slack bot
it’s make on beastly steroids.
Nice
My 2 cents:
I’m not sure if integrations with external services (e.g. Slack
) should be inside variant
core
As for me, all integrations should be pluggable on demand, and the core should be kept to the very minimal functionality (“Do one thing and do it well”).
Otherwise, it could end like variant1
: a lot of half-baked integrations inside the core (Docker runner, Github actions, etc), which are not maintained for the obvious reasons :slightly_smiling_face:
Anyway, that’s a great tool and I’m using variant1
heavily, but it still has some bugs, and I’m not sure if I should invest my time to fix them (as v1
is not supported anymore).
Still thinking about integrating variant2
into my workflows…
@mumoshu thank you for the great work again!
I thought there were no critical bugs in variant1; that’s why I switched to spending more time on writing variant2. If you find any, please file issue(s)! I’m not gonna abandon variant1 any time soon
I’m not sure if integrations with external services (e.g. Slack) should be inside variant core
I hear you. It’s just that I couldn’t come up with such a pluggable interface at the time of writing variant2.
For example, how would a universal slack bot engine communicate with variant2 to show a dialog asking the user to fill in missing options for the job? I had no idea.
Otherwise, it could end like variant1: a lot of half-baked integrations inside the core (Docker runner, Github actions, etc), which are not maintained for the obvious reasons
Just curious, but why did you find them half-baked?
I mean, I use most of them in production and no one reported critical issues or submitted PRs to fix them. So I was considering all of them just working as expected.
Anyways, thanks a lot for your feedback!
2020-04-08
@Erik Osterman (Cloud Posse) depends_on
under job
is available since v0.20.0
running variant run all
for this example
https://github.com/mumoshu/variant2/blob/master/examples/depends_on/depends_on.variant
would produce the exact output in
Very clean! I like it. Thanks for doing that so quickly! Going to have a demo probably next week’s #office-hours
whoot! I got it working now with depends_on
and dynamically getting the list of items by calling keys
on my configuration.
Seems like order is not preserved
@mumoshu will keys always be returned in the same order as in the configuration file?
i guess no - variant doesn’t do any fancy things to preserve key ordering and i thought go’s map doesn’t preserve the key ordering
ya, fair enough. probably just means I’m using the wrong data type for the job.
job "terraform init all" {
  description = "Init all projects"

  depends_on "terraform init" {
    items = keys(conf.terraform.projects)
    args = {
      project = item
      env     = opt.env
    }
  }
}
The keys are returned in lexicographical order, ensuring that the result will be identical as long as the keys in the map don’t change.
And in variant3 we’ll get named IDs like for_each to avoid annoying issues with order changes…
haha
i realized I should probably be using a yaml list instead.
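For example, a sketch of the config as an ordered list instead of a map (key names assumed from the job above; the depends_on items would then consume the list directly instead of calling keys()):
terraform:
  # a list preserves ordering, unlike map keys which come back sorted
  projects:
    - eks
    - efs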
2020-04-10
I’m not sure if integrations with external services (e.g. Slack
) should be inside variant
core
I get what you’re saying - but this is so rad. In the end, we all keep saying “I want to do chatops, just never get around to it.” Well, no more excuses.
a lot of half-baked integrations inside the core (Docker runner, Github actions, etc), which are not maintained for the obvious reasons
that’s true…
My guess is this is a peek into @mumoshu’s personal laboratory. What I love about it is he doesn’t hold back or get inhibited by figuring out where to put something.
as v1
is not supported anymore
@mumoshu is this true? @Igor Rodionov has a massive variant1 script he’s working on right now. It would be really hard to convert it to variant2.
understand if no new feature requests in v1. but maybe bugfixes, if they come up? (none right now)
2020-04-11
how can i properly type env
to allow it to be optional here:
job "shell" {
  description = "Run a command in a shell"
  private     = true

  option "dir" {
    default     = ""
    description = "Directory to run the command"
    type        = string
  }

  option "env" {
    default     = {}
    description = "Directory to run the command"
    type        = map(string)
  }

  parameter "commands" {
    description = "List of commands to execute"
    type        = list(string)
  }

  exec {
    dir     = opt.dir
    env     = opt.env
    command = "bash"
    args    = ["-c", join("\n", param.commands)]
  }
}
for context, this is a simple wrapper around shell commands so we can simply do things like this (note: this uses a similar wrapper used to set a dir
specific to tf runs):
job "tf clean" {
  run "tfshell" {
    project  = opt.project
    tenant   = opt.tenant
    commands = ["rm -rf .terraform *.planfile"]
  }
}
please let me recall how I might have designed this to work… anyways, i believe this is due to how HCL differentiates map
and object
.
i thought {}
and {k=v}
were the syntax for hcl objects
, not maps
.
I tried that and it didn’t work. This is for a dynamic map of different keys: map(string)
.
i think i managed to reproduce it. will try to fix it soon! thanks
@johncblandii this should be fixed since v0.21.1.
you should be able to just use {}
as seen in https://github.com/mumoshu/variant2/blob/master/examples/defaults/defaults.variant#L16
Sweet. Will check it out
Just add a default value for it
It’s a map I believe
Vs parameter is a list
I tried a default. I’ll post the errors when I get back to things.
2020-04-12
2020-04-13
Error: handler for type object not implemneted yet
@mumoshu happens with in 0.21.1
job "shell" {
  description = "Run a command in a shell"
  private     = true

  option "dir" {
    default     = ""
    description = "Directory to run the command"
    type        = string
  }

  option "env" {
    default     = {}
    description = "Directory to run the command"
    type        = map(string)
  }

  parameter "commands" {
    description = "List of commands to execute"
    type        = list(string)
  }

  exec {
    dir     = opt.dir
    env     = opt.env
    command = "bash"
    args    = ["-c", join("\n", param.commands)]
  }
}
quick spelling fix in the error: https://github.com/mumoshu/variant2/pull/14
@mumoshu any thoughts on the above exception?
@johncblandii hey! thx - im trying to reproduce it and fixing a few bugs along the way. you can expect me to cut a new release soon :)
i’m around for a bit so ping whenever
maybe you can reproduce it only when you call the shell
job from another job, right? i got different issues when I tried to run from variant run shell whatever
hrmm…i didn’t try it from run
btw, VARIANT_TRACE
fails for short options. have you seen that?
I suppose not. What do you mean by “short options”?
Error: unknown shorthand flag: 'p' in -p
Usage:
nbo [flags]
Flags:
-h, --help help for nbo
changed -p
to --project
and the command would run
Whoa! I have no idea how that could happen!
Just curious but you’re enabling the trace like VARIANT_TRACE=1 ./variant run shell 'echo foo' 'echo bar'
, right?
using a shim
VARIANT_TRACE=1 ../cli/nbo -p acme
seems ok
i’ll try to recreate. 1 sec
Are you sure you’ve set short = "p"
within option "project" { ... }
?
yeah
I’ll keep an eye on it if i can recreate it
i had a code issue i was trying to trace when i hit that
okay. anyways, i managed to reproduce the handler for type object not implemneted yet
error. i’ll shortly fix it and then try to pass run "shell"
on my own env
after that, there should be much less chance that you’d encounter any issues, i hope
awesome
Re: VARIANT_TRACE issue - I couldn’t reproduce it
$ VARIANT_TRACE=1 PATH=$(pwd):$PATH ./test1/test1 example -d
deploy switch tenant=mytenant item=foo
deploy switch tenant=mytenant item=bar
Done.
I’ve created the shim by modifying an example command contained in the variant2 repo examples/issues/cant-convert-go-str-to-bool
to have short = "d"
for the example’s dry-run option:
job "example" {
  config "file" {
    source file {
      path = "${context.sourcedir}/conf.yaml"
    }
  }

  option "dry-run" {
    type    = bool
    default = false
    short   = "d"
  }
  # snip
and ran export shim
to create the shim:
$ ./variant export shim examples/issues/cant-convert-go-str-to-bool ./test1
I can’t reproduce the issue any longer either, @mumoshu.
on my machine variant run shell 'echo foo' 'echo bar'
is working fine
it is the map
part that’s the problem
1 sec
here is a run
from a job
.
run "shell" {
  commands = param.commands
  dir      = "../projects/helmfiles/${opt.project}"
  project  = opt.project
  tenant   = opt.tenant
  env = {
    AWS_PROFILE = "nbo-${opt.tenant}-helm"
    TENANT      = opt.tenant
    PROJECT     = opt.project
    KUBECONFIG  = "~/.kube/kubecfg.${opt.tenant}-helm"
  }
}
here’s an interesting one; details to follow:
in ./helmfile.yaml: error during ../environments.yaml.part.0 parsing: template: stringTemplate:22:9: executing "stringTemplate" at <exec "yq" (list "read" (printf "../../../tenants/%v.yaml" (env "TENANT")) (printf "projects.helmfile.%v.values" (env "PROJECT")))>: error calling exec: exec: "yq": executable file not found in $PATH
COMMAND:
yq read, ../../../tenants/acme.yaml, projects.helmfile.reloader.values
ERROR:
exec: "yq": executable file not found in $PATH
Error: command "helmfile --environment tenant diff" in "../projects/helmfiles/reloader": exit status 1
yq
exists in my shell
running this formerly with the shell
command (bash -C …
) works perfectly fine
here’s the job:
job "helmfile shell" {
  description = "Run a command in a shell targeted specifically at a projects/helmfiles project for Helmfile commands"
  private     = true

  parameter "args" {
    description = "List of args to execute"
    type        = list(string)
  }

  exec {
    args    = param.args
    command = "helmfile"
    dir     = "../projects/helmfiles/${opt.project}"
    env = {
      AWS_PROFILE = "${opt.tenant}-helm"
      KUBECONFIG  = "~/.kube/kubecfg.${opt.tenant}-helm"
      PROJECT     = opt.project
      TENANT      = opt.tenant
    }
  }
}
I think you need to set PATH in exec { }
Perhaps variant2 should just inherit the process envvars when you use exec { env = ... }
but omitted PATH
in the env
attr?
i would expect it to
I couldn’t reproduce the PATH issue with variant2 alone
I ran this successfully:
job "example" {
exec {
command = "jq"
args = ["-h"]
}
}
Ooops, it does reproduce when env = { whatever }
is set
v0.22.2 fixes this.
got a sigsegv when using an empty list args = []
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1f9bafe]
goroutine 1 [running]:
github.com/mumoshu/variant2/pkg/app.ctyToGo(0x283bb40, 0xc0004b3da0, 0x21295e0, 0xc0004b3dc0, 0xc0004b3d80, 0x0, 0x0, 0x0)
/home/runner/work/variant2/variant2/pkg/app/app.go:1271 +0x2be
github.com/mumoshu/variant2/pkg/app.(*App).execRunInternal(0xc0000a38c0, 0xc000031280, 0xc0004b3c80, 0xc00000c3c0, 0x0, 0x0, 0x0)
/home/runner/work/variant2/variant2/pkg/app/app.go:1048 +0x1bf
github.com/mumoshu/variant2/pkg/app.(*App).execRun(0xc0000a38c0, 0xc000031280, 0xc0004b3c80, 0xc00000c3c0, 0x28268c0, 0xc00049dc70, 0x0, 0x0, 0x0)
/home/runner/work/variant2/variant2/pkg/app/app.go:1187 +0x81
github.com/mumoshu/variant2/pkg/app.(*App).execJob(0xc0000a38c0, 0xc000031280, 0xc0005e6fd0, 0x7, 0x0, 0x283b980, 0xc00045f740, 0x0, 0x0, 0x0, ...)
/home/runner/work/variant2/variant2/pkg/app/app.go:812 +0x5fa
github.com/mumoshu/variant2/pkg/app.(*App).Job.func1(0x0, 0x0, 0x0)
/home/runner/work/variant2/variant2/pkg/app/app.go:626 +0x9d2
github.com/mumoshu/variant2/pkg/app.(*App).Run(0xc0000a38c0, 0xc0005e6fd0, 0x7, 0xc00052c5d0, 0xc00052c600, 0xc00058b9e0, 0x1, 0x1, 0x0, 0x0, ...)
/home/runner/work/variant2/variant2/pkg/app/app.go:497 +0xc3
github.com/mumoshu/variant2.(*Runner).Cobra.func1(0xc0005e1180, 0xc000520740, 0x0, 0x4, 0x0, 0x0)
/home/runner/work/variant2/variant2/variant.go:624 +0x114
github.com/spf13/cobra.(*Command).execute(0xc0005e1180, 0xc000520700, 0x4, 0x4, 0xc0005e1180, 0xc000520700)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:826 +0x460
github.com/spf13/cobra.(*Command).ExecuteC(0xc000032780, 0x2812d00, 0xc000098010, 0x2812d00)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
github.com/mumoshu/variant2.(*Runner).Run(0xc000483090, 0xc00024e0f0, 0x6, 0x6, 0xc00058bd78, 0x1, 0x1, 0x0, 0x0)
/home/runner/work/variant2/variant2/variant.go:754 +0x2cc
github.com/mumoshu/variant2.Main.Run(0x7ffeefbff81f, 0x3, 0x0, 0x0, 0x0, 0x7ffeefbff818, 0xa, 0x2812d00, 0xc000098008, 0x2812d00, ...)
/home/runner/work/variant2/variant2/variant.go:359 +0x12d
main.main()
/home/runner/work/variant2/variant2/pkg/cmd/main.go:13 +0xb8
I think @Erik Osterman (Cloud Posse) mentioned this before. Using his workaround works:
arg = coalesce(list(""))
Thanks! This should be fixed in v0.21.2
great
2020-04-14
can we run dynamic depends_on
like this? depends_on "${var.cmd} init" {
or would this be a place for an exec?
Unfortunately it isn’t supported
Would you mind sharing your exact use-case? I’m considering how much we might need it
i’ll share code in a sec. let me give context
I mean, I’m eager to make the depends_on target dynamic if it’s necessary
THx
we have a file with a list like this:
order:
- terraform.proj1
- terraform.proj2
- helmfile.proj3
- terraform.proj4
Note that HCL2 doesn’t support expressions inside the block label (e.g. NAME
in depends_on "NAME" { })
So we need an alternative syntax if we end up implementing it
- read the file
- loop over order
- run internal job for ${var.command} plan or ${var.command} diff
ah, gotcha
so this code outputs a simple echo of each project with the command split:
➜ ../cli/nbo deploy all -t acme
terraform account
terraform cloudtrail
terraform vpc
config file looks like:
order:
- terraform.account
- terraform.cloudtrail
- terraform.vpc
code to output it is like this:
#!/usr/bin/env variant
# vim: filetype=hcl

option "tenant" {
  description = "Tenant to interact with"
  short       = "t"
  type        = string
}

option "project" {
  default     = ""
  description = "Terraform project to process"
  short       = "p"
  type        = string
}

config "file" {
  source file {
    path = "${opt.tenant}.yaml"
  }
}

job "deploy all" {
  description = "Init all projects"

  depends_on "e" {
    items = conf.file.order
    args = {
      a       = item
      project = opt.project
      tenant  = opt.tenant
    }
  }
}

job "e" {
  option "a" {
    default     = coalesce(list(""))
    description = "args to pass to subcommand"
    type        = string
  }

  variable "asplit" {
    type  = list(string)
    value = split(".", opt.a)
  }

  # variable "cmd" {
  #   type  = string
  #   value = var.asplit[0]
  # }

  # variable "project" {
  #   type  = string
  #   value = var.asplit[1]
  # }

  exec {
    command = "echo"
    args    = var.asplit
  }

  # run "${opt.a} init" {
  # }

  # depends_on "${var.cmd} init" {
  #   args = {
  #     project = var.project
  #     tenant  = opt.tenant
  #   }
  # }
}
some comments still in there
makes sense
and you wanna make e
in depends_on "e"
dependent on each item?
so that you can run helmfile apply
for helmfile.proj3
and terraform apply
for items like terraform.proj1
?
i technically want to run internal jobs within the variant cli
yes
that makes total sense
i could break it out to individual commands via exec, but plan
would need init
and workspace
firstly, i don’t think variant2 as of today has a good way to express that
ok
but i do want great support for your use-case
actually i was planning to create a command that looks very similar to yours
myself
nice!
let me dump my ideas
cool
job "deploy all" {
  description = "Init all projects"

  depends_on "deploy" {
    items = conf.file.order
    args = {
      dir = item
    }
  }
}

job "deploy" {
  description = "deploy the app defined in the directory"

  option "dir" {
    type = string
  }

  variable "cmd" {
    value = "${ match(opt.dir, "helmfile") ? "helmfile" : "terraform" }"
  }

  exec {
    command = var.cmd
    dir     = opt.dir
    args    = ["apply"]
  }
}
This is the first option. The idea is that you define a deploy
job that is able to call helmfile apply
or terraform apply
depending on the dir it is targeted at,
so that it can be used from depends_on
The second option is that you make each item of conf.file.order
an object which has two attributes “type” and “dir”
job "deploy all" {
  description = "Init all projects"

  depends_on "deploy" {
    items = conf.file.order
    args = {
      dir  = item.dir
      type = item.type
    }
  }
}

job "deploy" {
  description = "deploy the app defined in the directory"

  option "dir" {
    type = string
  }

  option "type" {
    type = string
  }

  variable "cmd" {
    value = opt.type
  }

  exec {
    command = var.cmd
    dir     = opt.dir
    args    = ["apply"]
  }
}
Does either of the two options look good to you? If so we don’t need to enhance variant2 in any way for this. This should just work with today’s variant2.
Otherwise we need something more. Not sure if that’s about making depends_on
dynamic. The downside of doing so would be that the deploy all
job can be more complex which makes unit-testing it harder
On the other hand, either of the above two options is easily testable. I mean, you can write test for deploy all
and deploy
separately and independently
the all
vs the individual is the plan. so this setup is where we have multiple internal jobs we want to call.
ex:
tf plan
calls tf init
followed by tf workspace
then the exec
for terraform plan
the deploy
will need to call apply
and that will consist of init -> workspace -> plan -> apply
Makes sense. As we don’t have any notion of multiple conditional exec
s within a single job, that’s not possible
perhaps the only way would be to exec
another variant command
mmm…so command
is the cli with the values passed there
that could be a good workaround
is there a concept of “self” in this case?
or would we need to bake in the cli name in the exec
’s command
?
would we need to bake in the cli name in the exec ’s command?
yes, that’s the way it works now.
but there’s a potential solution. we already have context
that is used for context.sourcedir
which basically translates to dirname $($0)
so it would be straight-forward to add context.self
or perhaps context.command
that represents $0
which one do you prefer? or i’m open to alternatives
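A purely hypothetical sketch of how the second form might read (context.command does not exist at this point; it would expose $0 so a job can re-invoke its own CLI):
job "tf plan" {
  parameter "project" {
    type = string
  }

  exec {
    command = context.command                      # hypothetical: path of the running CLI ($0)
    args    = ["terraform", "init", param.project]
  }
}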
The third possible option would be to enhance step
s.
Variant2 job
can have two or more step
s. Each step
is able to depend on other step
in the same job
, run concurrently, and call another job
.
It doesn’t support conditional execution(i.e. if
). But if we add support for if
, like
job "apply" {
  # params and opts here

  # instead of exec or run, use steps
  step "do terraform apply" {
    if = opt.type == "terraform"

    run "terraform apply" {
      # args here
    }
  }

  step "do helmfile apply" {
    if = opt.type == "helmfile"

    run "helmfile apply" {
      # args here
    }
  }
}
we neither need self
nor a dynamic depends_on
target, nor need to write/call a lengthy shell script.
that looks spot on, @mumoshu! i definitely love the simplicity of an if
statement to allow control of the run as well
that’d be SWEET.
thoughts, @Erik Osterman (Cloud Posse)?
ideally, i’d like each variant2 job to have only a single responsibility, so that we can enforce testability of each job.
that is, a job should only do one of the following:
- selectively run a job (new)
- run a static graph of jobs (steps; note that steps has no conditions or loops)
- run a job (a job with a run; note that you can’t have multiple runs within a single job; use steps if you need to do so)
- run a command (a job with an exec; note that you can’t have multiple execs within a job)
so i got to think i would prefer a dedicated block selectively { }
for it, like:
job "apply" {
  # params and opts here

  # instead of exec or run, use selectively
  selectively {
    run "helmfile deploy" {
      args = {}
      if   = opt.type == "helmfile"
    }

    run "terraform deploy" {
      args = {}
      if   = opt.type == "terraform"
    }
  }
}
or change the run
syntax to accept an expression for the job name, so that we can write
job "apply" {
  # params and opts here

  # instead of exec or run, use new run syntax
  variable "runs" {
    value = {
      helmfile = {
        name = "helmfile deploy"
        args = {...}
      }
      terraform = {
        name = "terraform deploy"
        args = {...}
      }
    }
  }

  variable "job" {
    value = var.runs[opt.type]
  }

  run {
    job  = var.job.name
    with = var.job.args
  }
}
coincidentally, the last solution requires https://github.com/mumoshu/variant2/issues/15
This specific use-case is to help DRY up code and make it a bit more readable inside of the job. job "myjob" { option "value" { default = coalesce(list("")) descriptio…
I think I like the latter. Being able to dynamically choose a job in this way could be beneficial.
I do think anyone coming from Terraform (hcl background) would look for a way to use or not use a job (see count
on resources) so maybe both?
Got it. run
without the job name in a label should now work as documented in https://github.com/mumoshu/variant2/#indirect-run
Perhaps we’d better deprecate the old run syntax because having both seems confusing?
For the consistency reason, I guess we’d better change the step
syntax as well
BEFORE

step "STEP_NAME" {
  run "JOB_NAME" {
    dir = "deploy/environments/${opt.env}/manifests"
  }
}

AFTER

step "STEP_NAME" {
  job = "JOB_NAME"
  with = {
    dir = "deploy/environments/${opt.env}/manifests"
  }
}
Oh, man. That looks solid. Going to dig into this later today and will report back my results
I’m continuously baffled by how fast you implement things @mumoshu
in the past, you used args
instead of with
; is there a subtle difference I’m not picking up?
I’ve long wanted a clear distinction between exec’s args of list(string)
and job run’s args of map(any)
. I thought that picking a different name for job run’s would achieve that.
But I’m open to suggestions
along the previous lines above, I’m passing in cmd.project
as a string, splitting it, then using it. can I have a variable depend on a variable?
variable "asplit" {
type = list(string)
value = split(".", opt.a)
}
variable "cmd" {
type = string
value = var.asplit[0]
}
variable "project" {
type = string
value = var.asplit[1]
}
I get an error when I do:
Error: Unknown variable
on ../cli/main.variant line 51:
(source code not available)
There is no variable named "var".
Error: Unknown variable
on ../cli/main.variant line 51:
(source code not available)
There is no variable named "var".
Error: ../cli/main.variant:51,13-16: Unknown variable; There is no variable named "var".
btw, I can create github issues for these things at any point. just let me know.
can I have a variable depend on a variable?
No, but seems valid and feasible to add support for it.
Would u mind creating a github issue for that?
will do
gonna crash, but i’ll check in tomorrow. variant2 is good stuff, @mumoshu. there is a lot of potential here. thx for all the work
John just showed me the working bones of what we wanted to get working!
2020-04-15
@mumoshu does this work?
option "dry-run" {
default = false
description = "simulate an install"
type = bool
}
I’m getting this error no matter what I do:
job "deploy switch": option "dry-run": can't convert Go string to boolError: job "deploy switch": option "dry-run": can't convert Go string to bool
it happens because of:
depends_on "deploy switch" {
items = conf.file.order
args = {
dry-run = opt.dry-run
item = item
tenant = param.tenant
}
}
I tried to run it with tobool(opt.dry-run)
, but I get that error every time I try to pass the option
or just plain ol’ false
to the run
Could you share me a fuller example? Is the option dry-run defined in the same job as the depends_on is defined in?
it is a global
full job:
#!/usr/bin/env variant
# vim: filetype=hcl

option "dry-run" {
  default     = false
  description = "simulate an install"
  type        = bool
}

# option "tenant" {
#   description = "Tenant to interact with"
#   short       = "t"
#   type        = string
# }

# option "project" {
#   default     = ""
#   description = "Terraform project to process"
#   short       = "p"
#   type        = string
# }

job "deploy" {
  description = "Init all projects"

  parameter "tenant" {
    type = string
  }

  config "file" {
    source file {
      path = "${param.tenant}.yaml"
    }
  }

  depends_on "deploy switch" {
    items = conf.file.order
    args = {
      item   = item
      tenant = param.tenant
    }
  }
}

job "deploy switch" {
  private = true

  parameter "item" {
    default     = []
    description = "a config param to deploy in the format: cli.project; e.g. terraform.eks; e.g. helmfile.reloader"
    type        = string
  }

  parameter "tenant" {
    type = string
  }

  # option "dry-run" {
  #   default     = false
  #   description = "simulate an install"
  #   type        = bool
  # }

  variable "item-split" {
    type  = list(string)
    value = split(".", param.item)
  }

  variable "type" {
    type  = string
    value = var.item-split[0]
  }

  variable "project" {
    type  = string
    value = var.item-split[1]
  }

  variable "subcommand" {
    type = string
    # value = opt.dry-run ? "echo ${var.type}" : var.type
    value = "echo ${var.type}"
  }

  run {
    job = "${var.subcommand} deploy"
    with = {
      project = var.project
      tenant  = param.tenant
    }
  }
}

job "echo terraform deploy" {
  parameter "project" {
    description = "args to pass to subcommand"
    type        = string
  }

  parameter "tenant" {
    type = string
  }

  exec {
    command = "echo"
    args    = [opt.dry-run, param.tenant, "terraform deploy", param.project]
  }
}

job "echo helmfile deploy" {
  parameter "project" {
    description = "args to pass to subcommand"
    type        = string
  }

  parameter "tenant" {
    type = string
  }

  exec {
    command = "echo"
    args    = [param.tenant, "helmfile apply", param.project]
  }
}
still refining a bit
so this one runs since i’m not passing it directly to the switch
job
it always outputs false
, though
but it seems depends_on
args are string conversions
but it fails
Thanks for the detailed report! Just spotted the cause and fixed locally.
I’ll publish the next patch release a few hours later
I believe your config is correct. Please just wait for the patch release
This one is working fine after the fix
job "deploy switch" {
  option "dry-run" {
    type = bool
  }

  option "item" {
    type = string
  }

  option "tenant" {
    type = string
  }

  exec {
    command = "bash"
    args    = ["-c", "echo deploy switch tenant=${opt.tenant} item=${opt.item}"]
  }
}
job "example" {
config "file" {
source file {
path = "${context.sourcedir}/conf.yaml"
}
}
option "dry-run" {
type = bool
default = false
}
parameter "tenant" {
type = string
default = "mytenant"
}
depends_on "deploy switch" {
items = conf.file.order
args = {
dry-run = opt.dry-run
item = item
tenant = param.tenant
}
}
exec {
command = "bash"
args = ["-c", "echo Done."]
}
}
variant run example
go build -o variant ./pkg/cmd
deploy switch tenant=mytenant item=foo
deploy switch tenant=mytenant item=bar
Done.
There was definitely a bug around type conversion. Having two or more types of values in args
was causing the issue.
ahhhh
@johncblandii Just released v0.22.1 with the fix
2020-04-16
@mumoshu it looks like indirect run works in a step
fails:
step "deploy run" {
run {
job = "${var.subcommand} deploy"
with = {
project = var.project
tenant = param.tenant
}
}
}
works:
# step "deploy run" {
run {
job = "${var.subcommand} deploy"
with = {
project = var.project
tenant = param.tenant
}
}
# }
Use case: I added a previous step to do a simple log message and thought that was the problem, but ran into this.
indirect run in step isn’t implemented yet. will make it work later!
would you be okay if we removed the older syntax run "NAME" { ... }
?
i think it is more verbose for normal runs
true
it’s just that direct/indirect run distinction makes it a bit harder to explain and maintain for me
it might make sense overall since the normal syntax is type name
this is run "name of another type"
understandable
also to me, any block with label(s) like someblock "LABEL1" "LABEL2" { }
makes me think that it can be referenced from within hcl expressions with someattr = someblock.LABEL1.LABEL2
right and you can’t with run
exactly.
btw, i have this all automated. the latest changes are indeed working
yes. but on the other hand i do think that the newer syntax is verbose. not sure what i should do
I like the foo "something" { ... }
syntax more than I like foo = "something"
… with = { ... }
That said, it’s definitely not worth keeping both if it makes it harder for you to implement things. This is just a stylistic preference.
great!
sometimes verbose is a must
that’s very insightful
thx. i’ll try to build a more complex example variant2 command myself and see if the verbosity is acceptable.
Also, we could do a screenshare if you want to see how we’re using it.
This is a “terragrunt
killer”
cool
@mumoshu what’s the relationship between a job’s run status code and the next step running?
@johncblandii are you talking about the case where you have two or more steps in a job?
job "example" {
step "one" {
run "x" {
}
}
step "two" {
run "y" {
}
}
}
the second step “two” runs only when the first step exits with 0
i.e. job example
exits on the first step with non-zero exit code
and example
inherits the exit status of the step if it returned a non-zero exit status
for example, x
exited with 1 results in variant run example
exits with 1
@johncblandii Thanks for all your feedback!
I think I’ve finished all the important bugs reported/features requested so far. But please feel free to poke me if I’m missing something
:boom: :boom: :boom: :boom: :boom:
Just verified the dry-run
option now works properly as a bool
.
I also confirmed the env
works perfectly fine with cli shell env
and merges internal env
with the system env
@mumoshu is there a reason you have to manually pass a global option
into tertiary runs?
Ex: run cli b --namespace hi
option "namespace"
default = "cp"
job "b"
echo opt.namespace // "hi"
run "c" // no namespace set here
run "d"
namespace = opt.namespace
job "c"
echo opt.namespace // "cp"
job "d"
echo opt.namespace // "hi"
just that i thought verbosity is important there
perhaps variant could just try to fill in missing option/parameter arguments from the global parameter/option?
well, for jobs calling jobs calling jobs, that means pushing a global down N levels
but you don’t have to define the option for the > secondary jobs
yeah probably
so i either have to define global options on every single job or hope devs understand i can set args on a run that don’t exist on the job
devs working on the same cli, i mean
I was mostly concerned about something like
option "namespace" {
type = string
}
job "deploy" {
option "dir" {
type = string
}
exec {
command = "kubectl"
args = ["-n", opt.namespace, "-f", opt.dir]
}
}
job "all" {
option "namespace" {
type = string
default = ""
}
step "app1" {
run "deploy" {
namespace = opt.namespace == "" ? "app1" : opt.namespace
dir = "app1"
}
step "app2" {
namespace = "app2"
dir = "app2"
}
}
as a way to override
i gotcha
not sure we should allow that though
it seems conflicts between a job-internal option and a global one might be confusing
perhaps just forbidding shadowing a global option with a local option, while
automatically filling in missing global args as suggested above, would be nice?
duplication-wise, yes. for example:
job "helmfile apply" {
description = "Apply the helmfile with the cluster"
parameter "tenant" {
description = "Tenant to interact with"
type = string
}
parameter "project" {
description = "Terraform project to process"
type = string
}
run "helmfile" {
command = "apply"
namespace = opt.namespace
region = opt.region
project = param.project
tenant = param.tenant
}
}
job "helmfile destroy" {
description = "Destroy the helmfile with the cluster"
parameter "tenant" {
description = "Tenant to interact with"
type = string
}
parameter "project" {
description = "Terraform project to process"
type = string
}
run "helmfile" {
command = "destroy"
namespace = opt.namespace
region = opt.region
project = param.project
tenant = param.tenant
}
}
job "helmfile diff" {
description = "Diff the helmfile with the cluster"
parameter "tenant" {
description = "Tenant to interact with"
type = string
}
parameter "project" {
description = "Terraform project to process"
type = string
}
run "helmfile" {
command = "diff"
namespace = opt.namespace
region = opt.region
project = param.project
tenant = param.tenant
}
}
job "helmfile lint" {
description = "Lint the helmfile with the cluster"
parameter "tenant" {
description = "Tenant to interact with"
type = string
}
parameter "project" {
description = "Terraform project to process"
type = string
}
step "cmd" {
run "helmfile" {
command = "lint"
namespace = opt.namespace
region = opt.region
project = param.project
tenant = param.tenant
}
}
}
job "helmfile sync" {
description = "Sync the helmfile with the cluster"
parameter "tenant" {
description = "Tenant to interact with"
type = string
}
parameter "project" {
description = "Terraform project to process"
type = string
}
run "helmfile" {
command = "sync"
namespace = opt.namespace
region = opt.region
project = param.project
tenant = param.tenant
}
}
namespace = opt.namespace
region = opt.region
^ those are all duplicated
add a new global, add a new line for every job
that’s indeed painful!
okay then, let’s disallow option/parameter shadowing AND do enhance run
to fill on missing args from global opt/param
sweet
job "deploy switch": shadowing global option "dry-run" with option "dry-run" is not allowedError: job "deploy switch": shadowing global option "dry-run" with option "dry-run" is not allowed
that’s great! ya, i think this is much better. a job wanting to redefine a global option seems like a bad inconsistency; instead the job should define a new option. this change you made addresses that! I like the exception msg.
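For illustration, a rough sketch of the combination decided on here — a global option declared once, a job that references it without redeclaring it, and a run that omits the arg so it would be filled in from the global automatically (job and option names are made up):
option "namespace" {
  type = string
  default = "cp"
}

job "deploy" {
  option "dir" {
    type = string
  }
  exec {
    command = "kubectl"
    args = ["-n", opt.namespace, "-f", opt.dir]
  }
}

job "all" {
  run "deploy" {
    # no namespace passed here; the missing arg comes from the global option
    dir = "app1"
  }
}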
2020-04-17
2020-04-20
@mumoshu is there a way to exec
a cli and allow the user to enter a response to the cli?
ex: mycli terraform apply
will prompt to apply unless I tell it to automatically say “yes” and mycli
just exits skipping over the prompt.
Yes he added support for this
Look for keyword interactive
@johncblandii https://github.com/mumoshu/variant2/issues/9
Running variant with interactive commands fails — I believe the original variant supported this mode. Use-case: Terraform apply can be run interactively on the console. While this is not the prima…
I saw that…
Auto-prompting via interactive messages
One of the cool features of the bot is that when you miss specifying values for certain options, it will automatically start an interactive session to let you select and input missing values within Slack. You don’t need to remember all the flags nor repeat lengthy commands anymore.
that’s where you can enter a variant job without required options and it’ll ask you about those options
Ya can’t wait to test the slack bot functionality later
this request was more about a tf destroy asking for yes
after you start the process
Aha, though in this case terraform supports that out of the box so we can get around it
yes, if we don’t want to allow a non -auto-approve
scenario
so i figured it’d be fine for our scenario, but i’ve hit it a couple times where i wanted to inspect a destroy
before running
Ya so maybe a new feature request for “prompt” - but I see this getting complicated with paths
yuppers
i was thinking of just respecting prompts from exec
commands
the second was support for our own prompts
can kinda do it now if we set an option as required but not provide it on the cmd
Respecting prompts is supported today, right?
unless i’m missing something, it will fly right through it
for an exec
Right so see the link above :-)
no. that’s not it
Need to add “interactive” flag to the job
that link is interactive for your variant cli
it is not interactive for an exec
command
Hrm so see the example in the associated commit?
That is allowing the user to interact with the command in exec and not fly through
^ that’s it
But also a command line arg to rm
rm -i makes it interactive
yes, but i was looking at the fix
exec { interactive = true }
enables the interactive mode. Resolves #9
good to know. thx for the ref. i should’ve read the code in the first place.
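For reference, a minimal sketch of that flag in use (the job and command here are illustrative):
job "terraform destroy" {
  exec {
    command = "terraform"
    args = ["destroy"]
    # lets terraform prompt for "yes" on the console instead of
    # variant skipping straight past the prompt
    interactive = true
  }
}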
So maybe we should dynamically pass auto-approve=false or true based on options to variant
Could probably be done using a ternary
prob can be automated
not going to sweat it now, but should be easy
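One possible shape for that ternary, sketched with a made-up auto-approve option (assuming HCL conditional expressions over lists behave here as they do elsewhere):
option "auto-approve" {
  type = bool
  default = false
}

job "terraform destroy" {
  exec {
    command = "terraform"
    # pass -auto-approve only when the option is set; otherwise keep
    # the run interactive so the prompt can be answered
    args = opt.auto-approve ? ["destroy", "-auto-approve"] : ["destroy"]
    interactive = !opt.auto-approve
  }
}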
@mumoshu another one; can we control the exit status in the event we’re looping over something?
ex (pseudo code):
loop item as [call1, call2, call3]
item()
Let’s say call1
passes, call2
fails, I would expect call3
to not run. Right now it continues to try and run 3 even though I need 2 to pass before 3 runs.
I think this is what “need” solves
Call3 needs call2
yeah
Right now it continues to try and run 3 even though I need 2 to pass before 3 runs.
What did you observe this behavior with? step
s ?
(also, was this using the need
keyword @johncblandii?)
FYI: The needs
attribute is documented here: https://github.com/mumoshu/variant2#concurrency
Turn your bash scripts into a modern, single-executable CLI app today - mumoshu/variant2
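Roughly, per that concurrency section, a step can declare the steps it needs before it runs; a sketch (the exact attribute shape here is an assumption taken from the README, so double-check it against the docs):
job "example" {
  concurrency = 2
  step "call1" {
    run "call1" {}
  }
  step "call2" {
    run "call2" {}
  }
  step "call3" {
    # call3 only starts after call2 has finished successfully
    needs = ["call2"]
    run "call3" {}
  }
}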
Will that work within a dynamic loop?
And yes, it was with a step/run
I’ll bbiab and can provide code
pls! an example code that doesn’t work would always be helpful
well do we have a dynamic loop thing? oh, is it depends_on "JOB" { items = ... }
?
depends_on is for multiple independent dependencies of the parent job
so trying to build a dependency graph across the items
of a depends_on
seems conceptually wrong
Yes, it is using that part for the looping. It isn’t intended for a dependency graph. I just need to exec jobs in a loop and stop when they fail
Is there a concept of looping outside of depends on?
exec jobs in a loop and stop when they fail
this seems like building a dependency graph that each next job depends on its previous job?
but anyways,
[10:35 AM] Is there a concept of looping outside of depends on?
no
and depends_on
items should be executed serially and stop on the first error
are you seeing different behavior?
yes, i saw different behavior. i’ll try to replicate
this seems like building a dependency graph that each next job depends on its previous job?
i guess you could call it one, but that’s not the intent. it isn’t a dependency as in the cli depends on job 1 to run for 2. it is a dependency in how we’re managing our config file for looping
Context:
order:
- terraform.account
- terraform.iam-tenant-roles
- terraform.cloudtrail
- terraform.vpc
deploy job:
depends_on "deploy switch" {
items = conf.file.order
args = {
item = item
tenant = param.tenant
}
}
switch
is the job that splits the left/right of the .
and calls an internal job
job "deploy switch" {
run {
job = "${var.subcommand} deploy"
with = {
project = var.project
tenant = param.tenant
}
}
}
so based on the order
above, you end up with:
job terraform deploy account
job terraform deploy iam-tenant-roles
job terraform deploy cloudtrail
job terraform deploy vpc
thx. the job ordering seems correct.
variable "file" {
value = {
order = [
"terraform.account",
"terraform.iam-tenant-roles",
"terraform.cloudtrail",
"terraform.vpc",
]
}
}
job "example" {
depends_on "deploy switch" {
items = var.file.order
args = {
item = item
}
}
}
job "deploy switch" {
option "item" {
type = string
}
variable "subcommand" {
value = split(".", opt.item)[0]
}
variable "project" {
value = split(".", opt.item)[1]
}
run {
job = "${var.subcommand} deploy"
with = {
project = var.project
}
}
}
job "terraform deploy" {
option "project" {
type = string
}
exec {
command = "bash"
args = ["-c", <<SCRIPT
echo job terraform deploy ${opt.project}; if [ ${opt.project} == "cloudtrail" ]; then echo simulated error 1>&2; exit 1; fi
SCRIPT
]
}
}
stops on the first (simulated) error, as expected, for me:
VARIANT_DIR=examples/issues/depends_on_stop_on_first_error ./variant run example
job terraform deploy account
job terraform deploy iam-tenant-roles
job terraform deploy cloudtrail
simulated error
Error: command "bash -c echo job terraform deploy cloudtrail; if [ cloudtrail == "cloudtrail" ]; then echo simulated error 1>&2; exit 1; fi
": exit status 1
2020-04-21
2020-04-22
Issue created for building with a .
as the path: https://github.com/mumoshu/variant2/issues/19
Problem ➜ variant export binary . mycli When exporting a binary, the path must be a directory name or an absolute path. Using the . throw an error. Error go: malformed import path ".": in…
thx!
@Zachary Loeber you have a pulse on everything. Question for you: https://github.com/mumoshu/variant2/issues/17#issuecomment-617835218
Importing from mumoshu/variant#33 Add a Login page and a project-selection page in front of #32. It should be deployed along with multiple instances of variant server #31 (+ perhaps #32). This may …
Where’s/What’s the silver bullet for building an enterprise-grade Web UI today?
@Zachary Loeber has joined the channel
I’ve been seeking such a bullet myself. I was going to look towards some of the fairwinds projects for inspiration (https://github.com/FairwindsOps/polaris for instance) as they seem to use pure Go based solutions but I haven’t gotten that far yet. Most solutions I’ve seen incorporate some java frameworks that instantly turn me off.
Validation of best practices in your Kubernetes clusters - FairwindsOps/polaris
I think they use buffalo framwork behind the scenes but, again, I’m barely scratching at this particular itch of mine yet
sorry
2020-04-23
@mumoshu I’m working with tests and it seems null
should be allowed for err
as opposed to requiring an empty string.
case "ok" {
tenant = "acme"
exitstatus = 0
err = ""
}
^ this works just fine.
take err
out and you get an error: This object does not have an attribute named "err".
set it to err = null
and you get:
panic: handler for type dynamic not implemented yet [recovered]
panic: handler for type dynamic not implemented yet
goroutine 10 [running]:
testing.tRunner.func1(0xc0000cb500)
/opt/hostedtoolcache/go/1.13.10/x64/src/testing/testing.go:874 +0x3a3
panic(0x21ebee0, 0xc0000968e0)
/opt/hostedtoolcache/go/1.13.10/x64/src/runtime/panic.go:679 +0x1b2
github.com/mumoshu/variant2/pkg/app.(*App).execAssert(0xc00020b080, 0xc0005cc980, 0xc0004dda0a, 0x5, 0x2840660, 0xc000598a80, 0x0, 0x0)
/home/runner/work/variant2/variant2/pkg/app/app.go:965 +0x9d0
github.com/mumoshu/variant2/pkg/app.(*App).execTestCase(0xc00020b080, 0xc000035740, 0x12, 0xc0005b5ec0, 0x1, 0x1, 0xc0005ab5e0, 0x1, 0x1, 0xc000035820, ...)
/home/runner/work/variant2/variant2/pkg/app/app.go:1091 +0x659
github.com/mumoshu/variant2/pkg/app.(*App).execTest.func1(0xc0000cb500)
/home/runner/work/variant2/variant2/pkg/app/app.go:1037 +0xc2
testing.tRunner(0xc0000cb500, 0xc0005be790)
/opt/hostedtoolcache/go/1.13.10/x64/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/opt/hostedtoolcache/go/1.13.10/x64/src/testing/testing.go:960 +0x350
@mumoshu I’d like to dynamically run different parameters to run
and reference them in my case
without manually typing them again. case.*
access seems only available in a run
or assert
and not within the case
itself or in variable
.
case "ng1" {
concurrency = 0
err = "delay of ${case.delayone} is less than ${case.delaytwo}"
stdout = ""
delayone = 0
delaytwo = 1
}
run "test" {
concurrency = case.concurrency
delayone = case.delayone
delaytwo = case.delaytwo
}
instead of having to type:
err = "delay of 0 is less than 1"
…which then requires me to change 0
in multiple places if I change the value of delayone
to anything else.
Turn your bash scripts into a modern, single-executable CLI app today - mumoshu/variant2
i think this should better be covered by variable
Turn your bash scripts into a modern, single-executable CLI app today - mumoshu/variant2
case "ng1" {
concurrency = 0
stdout = ""
delayone = 0
delaytwo = 1
}
variable "err" {
value = "delay of ${case.delayone} is less than ${case.delaytwo}"
}
run "test" {
concurrency = case.concurrency
delayone = case.delayone
delaytwo = case.delaytwo
}
assert "..." {
condition = var.err == ...
i tried a var from a case and it threw an error
not sure if i’ve already added support for variable
under test
blocks. let me check and add it if not exist yet
which error was it?
19: value = case.tenant
There is no variable named "case".
variable "tenant" {
type = string
value = case.tenant
}
case "ok" {
tenant = "client"
...
}
oo too bad
i think it should just work. pls expect me to fix it today :)
in typical mumoshu fashion.
much appreciated. i’ll be around to test
since v0.25.0, you should use this instead
case "ng1" {
concurrency = 0
stdout = ""
delayone = 0
delaytwo = 1
err = "delay of ${case.delayone} is less than ${case.delaytwo}"
}
@mumoshu can we get some output clean-up on test failures? separating the runs + their failures and maybe some color coding would really help with readability.
➜ variant test terraform init terraform plan -out=acme.planfile aws --profile company-blah2-helm eks update-kubeconfig --name=company-blah2-eks-cluster --region=us-east-2 --kubeconfig=/path/to/kube…
@mumoshu i think something is up with global options again. this bool
option I can echo
at the top level and it is true
. then in jobs run from that job it is false
.
deploy (dry-run = true) -> helmfile deploy (dry-run = false)
➜ ./nbo deploy acme --tenants-dir=../tenants --dry-run --kubeconfig-path=~/.kube
INTERNAL: dry? false. kc-path? /dev/shm
TOP-LEVEL: dry? true. kc-path? ~/.kube
the kc
opt is a string and dry
is a bool. those should both internally be different values.
deploy
uses depends_on
in a loop to call deploy switch
which calls other methods using run job/with
(the dynamic approach)
Thx for reporting! This should be fixed since v0.24.2
Sweet!!
I can’t pinpoint why this is failing. Any thoughts here, @mumoshu? I tried wrapping the run.res.exitstatus
in trimspace
since case.exitstatus
used it too.
(string)--- FAIL: deploy (0.06s)
--- FAIL: deploy/ok (0.05s)
app.go:1040: case "ok": assertion "out" failed: this expression must be true, but was false: run.res.stdout == case.out
, where run.res.stdout=------------------------------------------------------------------------
[CLIENT] Deploying terraform project: project1
------------------------------------------------------------------------
terraform deploy project1
------------------------------------------------------------------------
[CLIENT] Deploying terraform project: project2
------------------------------------------------------------------------
terraform deploy project2
------------------------------------------------------------------------
[CLIENT] Deploying helmfile project: project1
------------------------------------------------------------------------
helmfile apply project1
------------------------------------------------------------------------
[CLIENT] Deploying helmfile project: project2
------------------------------------------------------------------------
helmfile apply project2
------------------------------------------------------------------------
[CLIENT] Deploying terraform project: project3
------------------------------------------------------------------------
terraform deploy project3
(string) case.out=------------------------------------------------------------------------
[CLIENT] Deploying terraform project: project1
------------------------------------------------------------------------
terraform deploy project1
------------------------------------------------------------------------
[CLIENT] Deploying terraform project: project2
------------------------------------------------------------------------
terraform deploy project2
------------------------------------------------------------------------
[CLIENT] Deploying helmfile project: project1
------------------------------------------------------------------------
helmfile apply project1
------------------------------------------------------------------------
[CLIENT] Deploying helmfile project: project2
------------------------------------------------------------------------
helmfile apply project2
------------------------------------------------------------------------
[CLIENT] Deploying terraform project: project3
------------------------------------------------------------------------
terraform deploy project3
(string)
FAIL
Error: test exited with code 1
i often see this when i have
assert "out" {
condition = (run.res.set && run.res.stdout == case.out) || !run.res.set
}
and
case "ok1" {
exitstatus = 0
err = ""
out = trimspace(<<EOS
expected output
EOS
)
}
perhaps the actual output contains more newlines at the end, so avoiding trimspace would fix it
case "ok1" {
exitstatus = 0
err = ""
out = <<EOS
expected output
EOS
}
i tried without trimspace
test "deploy" {
variable "kubeconfig-path" {
type = string
value = "/path/to/kube"
}
variable "namespace" {
type = string
value = "nbo"
}
variable "region" {
type = string
value = "us-east-2"
}
variable "tenant" {
type = string
value = "client"
}
case "ok" {
exitstatus = 0
err = ""
out = <<EOS
------------------------------------------------------------------------
[CLIENT] Deploying terraform project: project1
------------------------------------------------------------------------
terraform deploy project1
------------------------------------------------------------------------
[CLIENT] Deploying terraform project: project2
------------------------------------------------------------------------
terraform deploy project2
------------------------------------------------------------------------
[CLIENT] Deploying helmfile project: project1
------------------------------------------------------------------------
helmfile apply project1
------------------------------------------------------------------------
[CLIENT] Deploying helmfile project: project2
------------------------------------------------------------------------
helmfile apply project2
------------------------------------------------------------------------
[CLIENT] Deploying terraform project: project3
------------------------------------------------------------------------
terraform deploy project3
EOS
}
run "deploy" {
dry-run = true # only echo the args
kubeconfig-path = var.kubeconfig-path
namespace = var.namespace
region = var.region
tenant = var.tenant
tenants-dir = "./fixtures"
}
assert "error" {
condition = run.err == case.err
}
assert "exitstatus" {
condition = run.res.exitstatus == case.exitstatus
}
# TODO: this doesn't work as expected in variant2 0.24.1
assert "out" {
condition = run.res.stdout == case.out
}
}
one thing i did notice was ` (string) case.out` is prefixed with a space. i’m unsure if that is variant injecting one or something about the general output
hrmm…i copied out the output and it seems to be a newline character or something. trimspace doesn’t seem to be doing the full job
yup. removed the forced \n
and it no longer errored. interesting
that’s even with condition = trimspace(run.res.stdout) == trimspace(case.out)
so it seems stdout
isn’t actually processing properly
…newline chars, specifically
so we may have extra space(s) prefixed in case.out
AND run.res.stdout
having more newlines than it should?
i don’t think the extra spaces is a problem on case.out
. it seems stdout newlines when an echo something\n
happens
i removed the \n
and it worked
so you mean you get too many newlines (not only the one added by \n) when you had echo something\n
,
but not when echo something
?
yeah, there is an extra character after something
that is not cleaned up by trimspace
so I’m thinking it isn’t stored w/ the newline or something
ah interesting!
it can be variant2 is doing something nasty after trimspace
is applied in case
but before it’s processed in assert
thx, i’ll investigate
coolio
(sorry about the flood today; digging into a new area with variant2)
btw, @mumoshu, I was able to completely write full CLI coverage with 0 knowledge of the test approach within a day of work for about 15+ commands
have u also tried mocking/successfully mocked dependent command like terraform
in tests?
https://github.com/mumoshu/variant2/blob/master/examples/simple/simple_test.variant#L26
Turn your bash scripts into a modern, single-executable CLI app today - mumoshu/variant2
i saw that, but copying around the path and setting env on it all seemed a bit much
i stuck in a dry-run
check and turned it into echo
vs the actual command
yeah i understand
makes sense
should we add a helper that works in 80% of cases
mocking as a first-class citizen would be sweet, though
definitely
like
add_to_path = "path/to/the/mock/executable"
run "job" {
...
}
assert "whatever"
...
}
also, testing in general wouldn’t really care about which cli is invoked or whether the CLI actually runs. i just care that exec
was called or run
or depends_on
i also considered adding an inline syntax for mock creation. but that seemed to bloat the test code
i just care that exec was called or run or depends_on
agree.
allow us to not actually execute the command but just check the args passed to exec
interesting. that might work
assert "exec args" {
condition = run.exec.command == "helmfile"
}
also, just checking to see that a run is called and not actually running it
example:
job "helmfile apply" {
description = "Apply the helmfile with the cluster"
parameter "tenant" {
description = "Tenant to operate on"
type = string
}
parameter "project" {
description = "Terraform project to process"
type = string
}
run "helmfile shell" {
command = "apply"
project = param.project
tenant = param.tenant
}
}
one job run can result in multiple execs. probably we’d need a syntax for asserting on a sequence of multiple execs
I don’t care what helmfile shell
does here. I just want to make sure the command
, project
, and tenant
were passed
< yeah
fair point
heading out, but i’ll check back tomorrow
maybe just list expected exec and runs in sequence under a specific block for mocking?
mock {
# this should match the first invocation on terraform
exec {
command = "terraform"
args = ["plan"]
dir = "expectedir"
}
# this should match the second invocation on terraform
exec {
command = "terraform"
args = ["apply"]
dir = "expecteddir"
}
}
run "job to test" {
...
}
assert "..." {
...
}
w/ another wording:
expect {
# this should match the first invocation on terraform
exec {
that’d be sweet if we could use case
in there
args = [case.command]
np
would it be like
case "ok1" {
expect {
exec {
?
ah ok
yeah, something like that with the args passed to it would be great
starting v0.28.0, you can write expectations on execs like:
expect exec {
command = ...
args = ...
dir = ...
}
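A rough sketch of how that could sit inside a test (the job under test, its command, and the exact placement of the expect block relative to case and run are assumptions here):
test "terraform deploy" {
  case "ok" {
    project = "vpc"
    err = ""
  }
  run "terraform deploy" {
    project = case.project
  }
  # the test fails if the job did not invoke terraform with these args
  expect exec {
    command = "terraform"
    args = ["apply"]
  }
  assert "error" {
    condition = run.err == case.err
  }
}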
2020-04-24
I think there is a regression in 0.24.2. the tests worked in 0.24.0 and now fail with:
26: aws --profile ${var.namespace}-${var.tenant}-helm eks update-kubeconfig --name=${var.namespace}-${var.tenant}-eks-cluster --region=${var.region} --kubeconfig=${var.kubeconfig-path}/kubecfg.${var.tenant}-helm
There is no variable named "var".
@mumoshu this is inside a case
it seems like there is an issue with the case in some scenarios too.
10: bash -c terraform workspace select ${case.tenant} || terraform workspace new ${case.tenant}
There is no variable named "case".
case "ok" {
project = "account"
tenant = "client"
err = ""
exitstatus = 0
stdout = trimspace(<<-EOS
bash -c terraform init
bash -c terraform workspace select ${case.tenant} || terraform workspace new ${case.tenant}
terraform destroy -var defaults_config_file=../../defaults.yaml -var tenant_config_file=../../${case.tenant}.yaml -auto-approve
EOS
)
}
went back to 0.24.0 and all tests pass
well are you trying to read case variables from within the case itself?
i’ve never intended to make it work
not sure how it worked before
i was, yeah.
so my output is based on the same vars my run
is using
26: aws --profile ${var.namespace}-${var.tenant}-helm eks update-kubeconfig --name=${var.namespace}-${var.tenant}-eks-cluster --region=${var.region} --kubeconfig=${var.kubeconfig-path}/kubecfg.${var.tenant}-helm
i think this one is due to the change made for https://sweetops.slack.com/archives/CFFQ9GFB5/p1587702567304900?thread_ts=1587681605.300200&cid=CFFQ9GFB5
case "ng1" {
concurrency = 0
stdout = ""
delayone = 0
delaytwo = 1
}
variable "err" {
value = "delay of ${case.delayone} is less than ${case.delaytwo}"
}
run "test" {
concurrency = case.concurrency
delayone = case.delayone
delaytwo = case.delaytwo
}
assert "..." {
condition = var.err == ...
the dependency was case -> variable before
which is now variable -> case
use case:
• command takes 3 args
• case 1
: verify it works with defaults
• case 2
: verify it works with custom values
• case 3
: verify it fails with invalid values
for that, i want my case to define the values
i want my case.stdout
to reference values from case.*
i think that’s where variable
is used
yes that’s okay
aws --profile ${var.namespace}-${var.tenant}-helm eks update-kubeconfig --name=${var.namespace}-${var.tenant}-eks-cluster --region=${var.region} --kubeconfig=${var.kubeconfig-path}/kubecfg.${var.tenant}-helm
i think this is a different beast
so i have to create a variable
for every case
property i want to reuse?
this should better be a variable
nope
it’ll fail when it hits a property for a case without a specific value; say the scenario case 1
above
i’d bring this back
case "ok" {
project = "account"
tenant = "client"
err = ""
exitstatus = 0
stdout = trimspace(<<-EOS
bash -c terraform init
bash -c terraform workspace select ${case.tenant} || terraform workspace new ${case.tenant}
terraform destroy -var defaults_config_file=../../defaults.yaml -var tenant_config_file=../../${case.tenant}.yaml -auto-approve
EOS
)
}
I think there is a regression in 0.24.2. the tests worked in 0.24.0 and now fail with:
26: aws --profile ${var.namespace}-${var.tenant}-helm eks update-kubeconfig --name=${var.namespace}-${var.tenant}-eks-cluster --region=${var.region} --kubeconfig=${var.kubeconfig-path}/kubecfg.${var.tenant}-helm
There is no variable named "var".
a var
is expected to exist in the test for a case
or a run
bidirectional dependency between variable <-> case is too hard to be implemented
well i dont understand. i thought you will only need either if you rewrite it?
so if we bring back access to previously defined case fields from later case fields, you can rewrite this
26: aws --profile ${var.namespace}-${var.tenant}-helm eks update-kubeconfig --name=${var.namespace}-${var.tenant}-eks-cluster --region=${var.region} --kubeconfig=${var.kubeconfig-path}/kubecfg.${var.tenant}-helm
to
26: aws --profile ${case.namespace}-${case.tenant}-helm eks update-kubeconfig --name=${case.namespace}-${case.tenant}-eks-cluster --region=${case.region} --kubeconfig=${case.kubeconfig-path}/kubecfg.${case.tenant}-helm
ok…so i’ll leave the best decision to you, but the need is for my case
block to have access to all case
vars
a plus is having a case
block have access to all var
declarations
i’d standardize my run
on case
vars only
my case
would ref var
for any global property that doesn’t change value per case
so you don’t need access to case
from var
s?
let me revisit this.
i’m about to write some tests so let me try it out
i thought you would need it
yes. that wasn’t working, though
but to get that var.value
was removed inside of a case
but if we could previously refer to case fields from within later case fields, we wont need variable
under case
in the first place
we couldn’t previously
well…wait….could we?
yeah it wasnt working so i added it in v0.24.1, which breaks existing behavior on accessing var from case
i want the dependency to be one direction here. either variable
can depend on case
, or case
can depend on variable
well anyways, give me a minute and i’ll publish new variant2 release for testing
ok…maybe case can use var
seems more logical
maybe case can use var
yeah i agree
my case would ref var for any global property that doesn’t change value per case
this made sense to me
yeah
okay, so this seems to have worked only when you were lucky enough that a specific Go map happened to have a specific key ordering
We are unable to get the case fields in the order of their definitions. So I’d need to add some dependency analysis between the fields
gotcha
so case
is largely just a map of values and not some special block
exactly
gotcha. i thought it was a special block like an exec
or something
then in that case i think it makes sense to not overcomplicate it
maybe make that clear with case =
vs case {
or maybe it is just me
well it is a special block in that sense. it is just that no variant block has support for self referencing yet
i do think there are 3 types of variables that should be useful within a test:
1. case-independent variables (variable blocks today)
2. case-dependent variables (case block fields today)
3. case-dependent variables that depend on 1 and 2 (does not exist today; adding support for self-referencing case fields from within case fields would be one possible solution to this)
3 is added in v0.25.0.
variant now builds a DAG of case fields and evaluates it in an order that all the required case fields are known when evaluating dependent fields.
so this just works:
case "ok" {
bar = case.foo
foo = "FOO"
@mumoshu was talking with @Erik Osterman (Cloud Posse) about a new cli command and it’d be clean if I could use an if
statement on an internal depends_on
. is that possible?
@johncblandii Can you provide a mockup?
job "something" {
run "some job" {
condition = opt.bool-value
}
run "some other job" {
condition = !opt.bool-value
}
}
i understand but i’m afraid this would make the call side too complex to be tested
k
what’s the exact usecase?
i thought i would rather add condition
to exec
or run
Thanks @mumoshu - don’t want to make things harder on ya!
on exec/run might be enough, tbh
ah no! i meant i’m afraid of making it hard for you guys to use!
the use case was simply being able to trigger a specific job or depends_on only if some condition is met
this one was specifically:…
job:
job 1
param x
cli:
cli job
output:
[list available param.x options]
basically, dynamic help
so it would be:
run "some-job" {
condition = param.x != ""
}
exec {
command = "echo
condition = params.x == ""
args = "docs here"
}
^ steps and stuff like that in there, but that’s the idea
hmm? i think i’d rather do
variable "help_wanted" {
value = param.x == ""
}
variable "show_help" {
value = {
job = "show help"
with = {
text = "docs here"
}
}
}
variable "help_or_run" {
value = help_wanted ? var.show_help : var.run_it
}
run {
job = help_or_run.job
with = help_or_run.with
}
works if there are two things
also variant2 restricts a single job to have either exec
or run
, not both, to force the job to be simple
if you need to diff for more than 2 jobs, it gets to need extra
no worries, though. this isn’t a blocker
this is easy to implement. but what i’m afraid of is that the more you do it imperatively, the more difficult it gets to debug
please post more examples like that. probably i’d eventually come up with something that helps you
well it could be used on source
to dynamically load a file
if you need to diff for more than 2 jobs, it gets to need extra
i’d use maps for 3 or more conditional jobs. but not sure it’s applicable to every case.
config "file" {
source file {
path = "${opt.tenants-dir}/defaults.yaml"
}
source file {
if = param.tenant != ""
path = "${opt.tenants-dir}/${param.tenant}.yaml"
}
}
or
config "file" {
source file {
path = "${opt.tenants-dir}/defaults.yaml"
}
source file {
if = fileexists("${opt.tenants-dir}/${param.tenant}.yaml")
path = "${opt.tenants-dir}/${param.tenant}.yaml"
}
}
could also log when things change:
depends_on "echo" {
if = param.x
args = { message = "blah" }
}
that would log when a condition is met (ex: running a specific log message for a client as opposed to without one)
take this:
job "terraform plan" {
step "plan init" {
run "terraform init" {
}
}
step "plan workspace" {
run "terraform workspace" {
}
}
step "plan cmd" {
run "terraform subcommand" {
command = "plan"
}
}
}
that init
and workspace
is copied to multiple places
i could put that only in subcommand
with if
to toggle it based on some option/arg I pass to it
something like:
step "plan cmd" {
run "terraform subcommand" {
command = "plan"
init = true
}
}
and subcommand
could easily have:
step "plan init" {
if = param.init
run "terraform init" {
}
}
(or either inside of the run
)
umm, sorry i dont get it yet. why you can’t run terraform plan
directly/why you need terraform subcommand
?
that’s just a DRY command
it handles the dir
, etc
don’t worry about the commands, though
job "init_and_workspace" {
step "plan init" {
run "terraform init" {
}
}
step "plan workspace" {
run "terraform workspace" {
}
}
}
job "terraform" {
depends_on "init_and_workspace" {
}
parameter "subcmd" {
type = string
}
run {
job = "util terraform run-subcommand ${param.subcmd}"
args = ...
}
}
job "util terraform run-subcommand plan" {
step "do more common things"
....
}
step "exec terraform plan"
}
}
yes and you still need to copy/paste code to multiple places
depends_on "init_and_workspace" {
}
OR you end up unnecessarily running init_and_workspace
for terraform
runs that don’t need it
you end up in the same place
does adding if
resolve the issue of copy-pasting that depends_on?
exactly
are you saying that you would create a higher-level job that can run any low-level job, with the depends_on
only run when necessary?
yes
ok i believe i understand
my point is, we should at least avoid adding control structures to every kind of block
cuz that makes things too hard to test/maintain, since there end up being many ways to achieve one thing
i’d rather add items
and condition
to any of run
, exec
and step
maybe run
should be the best place
also let’s allow calling multiple sequential run
s in a job
with that you could write the ideal command like
job "terraform" {
depends_on "init_and_workspace_if_needed" {
...
}
run {
job = "util terraform run-subcommand ${param.subcmd}"
with = var.args_for_subcmd
}
}
job "init_and_workspace_if_needed" {
parameter "subcmd" {
type = string
}
run {
condition = contains(["plan", "apply"], param.subcmd)
job = "init_and_workspace"
}
}
job "util terraform run-subcommand plan" {
run "do common things"
....
}
run "exec terraform plan"
}
}
also, the more we enhance run
, it is more likely we can merge depends_on
into run
job "terraform" {
run {
condition = contains(["plan", "apply"], param.subcmd)
job = "terraform "init_and_workspace"
}
run {
job = "util terraform run-subcommand ${param.subcmd}"
with = var.args_for_subcmd
}
}
job "util terraform run-subcommand plan" {
run "do common things"
....
}
run "exec terraform plan"
}
}
just deprecate/remove depends_on in favor of enhanced run
? or just make it an alias to run
? not sure which is better
yeah, depends_on
and run
pretty much are the same thing anyway from a cli dev perspective
i think even if the idea of branches makes it too complex, variant should support us in using this complexity.
we have option
s. that inherently means we do X or Y or Z at times so won’t be uncommon for us to do extra things at times
…and not do them at other times
we have options. that inherently means we do X or Y or Z at times so won’t be uncommon for us to do extra things at times
yeah probably that makes sense now.
i was wondering if everything can be generalized to mapping variant opts/params to exec
which isn’t realistic as it turned out that we wanna do more things “within” variant
yeah, sometimes. sometimes we need to add a var, use a source to load something, or not based on an opt
like branching, looping, etc
yup
owe my son some fishing time so i’ll bbiab
Multiple conditional run blocks have been added in v0.26.0
job "terraform" {
run {
condition = contains(["plan", "apply"], param.subcmd)
job = "terraform "init_and_workspace"
}
run {
condition = ...
# snip
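Filling out that snippet, a self-contained sketch might look like this (job names are illustrative, and whether condition is required on every run block is an assumption):
job "terraform" {
  parameter "subcmd" {
    type = string
  }
  run {
    # only init/select a workspace for the subcommands that need it
    condition = contains(["plan", "apply"], param.subcmd)
    job = "terraform init_and_workspace"
  }
  run {
    condition = true
    job = "util terraform run-subcommand ${param.subcmd}"
  }
}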
@mumoshu along the lines of the previous question, can you do a source
dynamically (only pull if param.x
exists) or try/catch on a failed source
load?
@mumoshu it seems options are not available to a job when it is used as a source
of a config
config "state" {
source job {
name = "state"
args = {
tenant = param.tenant
}
key = "key"
format = "text"
}
}
that works, but it does not recognize the opt.tenants-dir in the state job, even though it does recognize the opt when I call the job directly
and is there a way to suppress the output when we use it with source
?
deploy
Deploy entire tenant stack
------------------------------------------------------------------------
[CLIENT] Deploying terraform project: project1
------------------------------------------------------------------------
that first deploy
is just reading the state and echo
’ing it. I’d rather not output that when using it as a source
to a config
@johncblandii regarding the first question, does the state
have option "tenants-dir"
? if so, what’s the expected value of it in your specific example?
are you expecting something to be automatically propagated/set in the state
job as it is called from within a config
block?
tenants-dir
is a top-level option
i expect anything top-level to propagate down every job/run/source/etc no matter the chain
gotcha! seems like i’ve missed adding support for propagating global opts/params for that
should be fixed in v0.25.1
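For context, a trimmed sketch of the shape being discussed — a global option read by a job that is then used as a config source (commands and paths are illustrative):
option "tenants-dir" {
  type = string
  default = "./tenants"
}

job "state" {
  parameter "tenant" {
    type = string
  }
  exec {
    command = "cat"
    # this global option is what wasn't propagating before v0.25.1
    args = ["${opt.tenants-dir}/${param.tenant}.yaml"]
  }
}

job "deploy" {
  parameter "tenant" {
    type = string
  }
  config "state" {
    source job {
      name = "state"
      args = {
        tenant = param.tenant
      }
      key = "key"
      format = "text"
    }
  }
  exec {
    command = "echo"
    args = ["deploying ${param.tenant}"]
  }
}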
@mumoshu can a variable
not reference a config
?
variable "trigger-name" {
value = param.name == "state" ? conf.file.state : param.name
}
error:
23: value = param.name == "state" ? conf.file.state : param.name
There is no variable named "conf".
no, it’s the opposite
i can reverse the evaluation order. but not sure which is better
I’ve just reversed the order anyway. Please try v0.27.0!
will do
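So with the reversed order in v0.27.0, the earlier snippet should evaluate as written — config first, then variables (a fragment, assuming it sits inside a job that defines param.tenant and param.name):
config "file" {
  source file {
    path = "${param.tenant}.yaml"
  }
}

variable "trigger-name" {
  # conf.* is now evaluated before variables, so this reference works
  value = param.name == "state" ? conf.file.state : param.name
}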
2020-04-25
2020-04-27
2020-04-29
For variant … this tool appears to be running locally, as opposed to something in a cluster from the local terminal. Is that correct?
For example, would we use this in a distributed system? Is there an agent mode?
There are 2 modes of invocation
one is as a slackbot
the other is as a cli
Also, as a cli
you could incorporate it with any sort of CI/CD pipelines you have
@nian Erik’s correct.
Would it be nice for you if it worked in a cluster? (K8s?)
Usually it would be a matter of building a docker image containing the binary built by running variant export binary
and running it via an AWS ECS task or a K8s Job/Pod/etc
But I have considered whether I could add a client
mode to Variant.
It would probably look like variant client run --config someconnectioninfo.yaml CMD ARGS
which creates e.g. K8s pod running the CMD in the K8s cluster as configured in the someconnectioninfo.yaml
But I stopped there as I had no specific use-case at the time. If you have one, i’d appreciate it if you could share!
Yes … distributed in k8s cluster.
I think I’d prefer deploying it in k8s more along the lines of how we deploy other things so as not to introduce a new mechanism
That said, a UI would make this more appealing
Importing from mumoshu/variant#33 Add a Login page and a project-selection page in front of #32. It should be deployed along with multiple instances of variant server #31 (+ perhaps #32). This may …
but at what point are we reinventing jenkins
0.28 seems to be pretty solid. tests are passing with no changes.
@mumoshu I’m getting weird results from simply reading my file.
contents:
triggers:
smoke-test:
description: Smoke testing the CLI
order:
- job: terraform plan eks
args:
- -detailed-exitcode
error:
panic: inconsistent map element types (cty.List(cty.String) then cty.String)
goroutine 1 [running]:
github.com/zclconf/go-cty/cty.MapVal(0xc0009ea880, 0xc0009ea880, 0xc0009d20f8, 0x3, 0xc0009ea9b8)
/home/runner/go/pkg/mod/github.com/zclconf/[email protected]/cty/value_init.go:207 +0x4b3
github.com/mumoshu/variant2/pkg/app.goToCty(0x1767100, 0xc0004d38f0, 0x0, 0x0, 0x1, 0xc0009bb8e0, 0x0, 0x1)
/home/runner/work/variant2/variant2/pkg/app/go_to_cty.go:28 +0x60e
github.com/mumoshu/variant2/pkg/app.(*App).execMultiRun(0xc0000e3140, 0xc000239580, 0xc000819280, 0xc0009eadf8, 0x16, 0xc0009d1160, 0x16)
/home/runner/work/variant2/variant2/pkg/app/app.go:1214 +0x13f
github.com/mumoshu/variant2/pkg/app.(*App).execJob(0xc0000e3140, 0xc000239580, 0xc0000bd997, 0x7, 0x0, 0x205fbc0, 0xc0006fc540, 0xc0000c7610, 0xc000577ea0, 0x2, ...)
/home/runner/work/variant2/variant2/pkg/app/app.go:816 +0x246
github.com/mumoshu/variant2/pkg/app.(*App).Job.func1(0x0, 0x0, 0x0)
/home/runner/work/variant2/variant2/pkg/app/app.go:644 +0x939
github.com/mumoshu/variant2/pkg/app.(*App).Run(0xc0000e3140, 0xc0000bd997, 0x7, 0xc00080adb0, 0xc00080ade0, 0xc00020b9e0, 0x1, 0x1, 0x0, 0x0, ...)
/home/runner/work/variant2/variant2/pkg/app/app.go:513 +0xc3
github.com/mumoshu/variant2.(*Runner).Cobra.func1(0xc00037d180, 0xc00080acf0, 0x2, 0x3, 0x0, 0x0)
/home/runner/work/variant2/variant2/variant.go:661 +0x114
github.com/spf13/cobra.(*Command).execute(0xc00037d180, 0xc00080ac00, 0x3, 0x3, 0xc00037d180, 0xc00080ac00)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:826 +0x460
github.com/spf13/cobra.(*Command).ExecuteC(0xc0000e9b80, 0x2034ba0, 0xc0000be010, 0x2034ba0)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
github.com/mumoshu/variant2.(*Runner).Run(0xc0007c3040, 0xc0000f8c90, 0x4, 0x4, 0xc00020bd78, 0x1, 0x1, 0x0, 0x0)
/home/runner/work/variant2/variant2/variant.go:791 +0x2cc
github.com/mumoshu/variant2.Main.Run(0x7ffe211094bb, 0x3, 0x0, 0x0, 0x0, 0x7ffe211094b4, 0xa, 0x2034ba0, 0xc0000be008, 0x2034ba0, ...)
/home/runner/work/variant2/variant2/variant.go:396 +0x12d
main.main()
/home/runner/work/variant2/variant2/pkg/cmd/main.go:13 +0xb8
another interesting one, @mumoshu.
Parsing this YAML works fine:
order:
- job: terraform plan eks
args: ""
- job: helmfile diff teleport
args: --selector chart=teleport-node
Parsing this YAML seems to have a problem with:
order:
- job: terraform plan eks
# args: ""
- job: helmfile diff teleport
args: --selector chart=teleport-node
It throws the following error, but the issue is a missing args
not a missing job
:
Error: Missing map element
on ../cli/trigger.variant line 36:
(source code not available)
This map does not have an element with the key "job".
Error: ../cli/trigger.variant:36,29-33: Missing map element; This map does not have an element with the key "job".
so it seems mixing a job with args
and without is some weird issue
Interesting. I thought it doesn’t have any specific logic to handle keys named “args” and “jobs” in a map
Maybe it depends on the context?
Could you share your trigger.variant
, so that I can see the code around L36 and L29-33
the line numbers are likely off due to recent changes
the issue was with an array of objects without the same keys in each
weird…looks like the code didn’t come through.
it was referring to:
depends_on "trigger switch" {
items = conf.file.triggers[var.trigger-name].order
args = {
item = item.job
item-args = try(item.args, "")
dry-run = opt.dry-run
tenant = param.tenant
trigger-name = var.trigger-name
}
}