#refarch (2025-01)
Cloud Posse Reference Architecture
2025-01-02
hey folks,
I'm trying to install a Helm chart. Is there a way to upgrade Argo CD? It comes with v2.5.9+e5f1194 out of the box. I tried to upgrade it, but the Dex connection was not working, so login fails after upgrading to the latest version.
@Yonatan Koren did you encounter this recently?
@Dan Miller (Cloud Posse) Hey man. It seems both the Argo and the Dex versions are pinned, I suppose due to this: <https://github.com/argoproj/argo-cd/issues/11392>
Do you have a documented Dex version pin that works with a pinned Argo 7.x chart version?
Or do you all stay on the older Argo versions?
Because of https://github.com/argoproj/argo-cd/issues/11392, we usually do:
chart_values:
  # Workaround for issue with `invalid session token: failed to verify signature: failed to verify id token signature`
  # <https://github.com/argoproj/argo-cd/issues/11392>
  dex:
    image:
      tag: v2.31.2
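For reference, in an atmos stack this pin would typically live under the component's vars. A hedged sketch (the eks/argocd component name and stack layout are assumptions based on the discussion here):

```yaml
components:
  terraform:
    eks/argocd:
      vars:
        chart_values:
          # Workaround for argoproj/argo-cd#11392: pin dex to a known-good tag
          dex:
            image:
              tag: v2.31.2
```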
This hasn't made it into cloudposse/terraform-aws-components, as we don't include the stack YAML in that repo other than snippets in the README. Possibly we should update the README for eks/argocd.
However:
- We shouldn’t limit ourselves to a version from 2+ years ago. It’s not that we don’t support the 7.x helm chart, it’s simply that, AFAIK, we haven’t had a chance to try the latest version and debug any of the possible dex issues. Speaking for myself, I wanted to try the latest helm chart version, but the customer was using AWS Identity Center (AWS SSO), which needed dex. Because the customer wanted their hands on Argo CD as soon as possible, and being aware of potential issues with dex, I decided to stick with the default version and deferred upgrading and debugging to later. This was very recent, so it hasn’t happened yet.
- You don't always need dex. Disable it with dex.enabled: false if you can use an OIDC provider listed in the second bullet point here. As for that Victoria Metrics CRD error screenshot, the message about needing Helm 3.14 or higher makes it sound like the provider is not satisfying that requirement, hence include-crds not being able to render some manifests (presumably CRDs?). However, we don't pin or limit hashicorp/helm (link). So, not sure about that.
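For the second bullet, a minimal sketch of what disabling dex in favor of a native OIDC provider might look like in chart_values; the issuer, client ID, and secret reference below are placeholders, not a tested configuration:

```yaml
chart_values:
  dex:
    enabled: false  # skip dex entirely
  configs:
    cm:
      # Hypothetical OIDC provider; substitute your own issuer and client
      oidc.config: |
        name: SSO
        issuer: https://sso.example.com
        clientID: argocd
        clientSecret: $oidc.sso.clientSecret
```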
yes, the provider doesn't limit the Helm version, but that chart version comes with a specific Argo CD server version built in
now login works, but inside I can't see any of the resources from the previous version. Seems there is an issue with the RBAC roles:
argocd_rbac_policies:
  - "p, role:org-admin, applications, *, */*, allow"
  - "p, role:org-admin, clusters, get, *, allow"
  - "p, role:org-admin, repositories, get, *, allow"
  - "p, role:org-admin, repositories, create, *, allow"
  - "p, role:org-admin, repositories, update, *, allow"
  - "p, role:org-admin, repositories, delete, *, allow"
argocd_rbac_groups:
  - group: xxxxxx
    role: org-admin
  - group: xxxxxx
    role: org-admin
I have that with the corresponding group ID from Identity Center, but I can't see any resources. Something must have changed? I also set the group role to admin, but got the same result.
chart_version: 7.7.14
Not sure, but a couple of suggestions:
- Enable the admin user temporarily so you can at least see resources until this is resolved. It's also a good sanity check that there are any resources at all managed by Argo CD after your upgrade.
- Do the Argo CD server logs say anything? IIRC they might have info on resource views being denied per the policy, which would be useful for debugging.
- Maybe some of the helm chart value structures have changed. Just a guess.
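For the first suggestion, a sketch of how temporarily re-enabling the built-in admin user might look in the chart values (key names per the upstream argo-cd chart; verify against the chart version you're on):

```yaml
chart_values:
  configs:
    cm:
      # Temporarily re-enable the built-in admin user for debugging;
      # the initial password lives in the argocd-initial-admin-secret Secret
      admin.enabled: true
```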
yeah, confirmed: with the admin user I'm able to see the resources
Data
====
policy.csv:
----
policy.default:
----
policy.matchMode:
----
glob
scopes:
----
[groups]
BinaryData
====
Events: <none>
Seems that the argocd-rbac-cm ConfigMap is empty.
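For comparison, when the values render correctly, the argocd-rbac-cm Data section should contain something along these lines (a sketch; the exact rendering depends on the component's values template):

```yaml
policy.csv: |
  p, role:org-admin, applications, *, */*, allow
  p, role:org-admin, clusters, get, *, allow
  g, xxxxxx, role:org-admin
policy.default: ""
scopes: "[groups]"
```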
Fixed. The issue was in the values tpl.
2025-01-03
2025-01-06
2025-01-07
Reposting this here :)
Hey guys, is there a way to control the bucket ACL through the cloudfront-s3-cdn module? I saw that the module in question depends on s3-log-storage, which depends on the module s3-buckets, which are all managed by Cloud Posse. I saw that there is an input, grants, which I believe controls this. I'm talking about the settings here.
Would the s3_object_ownership variable support your use case? Here are the accepted values for the input and how they influence the ACL:
• BucketOwnerPreferred - Objects uploaded to the bucket change ownership to the bucket owner if the objects are uploaded with the bucket-owner-full-control canned ACL.
• ObjectWriter - The uploading account will own the object if the object is uploaded with the bucket-owner-full-control canned ACL.
• BucketOwnerEnforced - Access control lists (ACLs) are disabled and no longer affect permissions. The bucket owner automatically owns and has full control over every object in the bucket.
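Assuming an atmos-style stack, setting that input might look like this (the component name and chosen value are illustrative, not a recommendation for your setup):

```yaml
components:
  terraform:
    cloudfront-s3-cdn:
      vars:
        # ACLs disabled; the bucket owner owns every object
        s3_object_ownership: BucketOwnerEnforced
```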
Not really. I wanted to have more control over the logDeliveryGroup, external accounts, etc., but realized that the canonical ID associated with the delivery of logs to the bucket is sufficient.
2025-01-10
2025-01-12
It would be awesome if we could configure the pull-through cache rules with one of the Cloud Posse modules.