Hi all: reading https://stackoverflow.com/questions/60338757/how-do-i-import-an-azure-ad-service-principal-password-into-terraform I’m a bit confused about the difference between app and SP passwords: it seems to say that to authenticate as an SP you can use a password set on the app. Is that the case? If so, why do we need SP passwords at all?
We’re using Terraform to build our cloud infrastructure. Previously we had a few service principals created without Terraform that are being used right now on production and can’t be changed. Now w…
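For reference, this is roughly how the two kinds of credentials look in the azuread Terraform provider. A minimal sketch; all the names here are made up, and the exact argument names vary between provider versions:

```hcl
# The application (app registration) and its service principal
resource "azuread_application" "example" {
  display_name = "example-app" # hypothetical name
}

resource "azuread_service_principal" "example" {
  application_id = azuread_application.example.application_id
}

# A password attached to the *application* (i.e. a client secret)
resource "azuread_application_password" "app_pw" {
  application_object_id = azuread_application.example.object_id
}

# A password attached to the *service principal* itself
resource "azuread_service_principal_password" "sp_pw" {
  service_principal_id = azuread_service_principal.example.object_id
}
```

As far as I know, most tooling (including `az login --service-principal`) actually authenticates with the application’s client secret, which is why the two often get conflated.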
Hello @Padarn, I had the same question as you! Read the link below and everything will be clear!
https://kliushnikov.medium.com/azure-active-directory-application-or-service-principal-b5a5e14f2a23
What should you choose to grant access to Azure Key Vault: Service Principal or Application?
Saved my life
Hi, I have set up a private AKS cluster with Terraform by following this guide: https://docs.microsoft.com/en/azure/aks/private-clusters. Now I have deployed a Helm chart for the nginx-ingress:
release_name=nginx version=3.19.0 chart_name=ingress-nginx/ingress-nginx
But when I do
kubectl describe svc nginx-ingress-nginx-controller -n ingress-nginx
the LoadBalancer Ingress is a public IP address!
OK, the node port is private, but… the cluster has public IPs directly on the internet!!! What did I miss? ..
$ kubectl describe svc nginx-ingress-nginx-controller -n ingress-nginx
Name:                     nginx-ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/version=0.43.0
                          helm.sh/chart=ingress-nginx-3.19.0
Annotations:              meta.helm.sh/release-name: nginx
                          meta.helm.sh/release-namespace: ingress-nginx
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP:                       10.0.xx.xx
LoadBalancer Ingress:     20.74.yy.zz
Port:                     http 80/TCP
TargetPort:               http/TCP
NodePort:                 http 32006/TCP
Endpoints:                10.244.7.107:80
Port:                     https 443/TCP
TargetPort:               https/TCP
NodePort:                 https 32448/TCP
Endpoints:                10.yy.z.zz:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Learn how to create a private Azure Kubernetes Service (AKS) cluster
I just found this related doc: https://docs.microsoft.com/fr-fr/azure/aks/ingress-internal-ip
Learn how to install and configure an NGINX ingress controller for an internal, private network in an Azure Kubernetes Service (AKS) cluster.
Can’t read that doc, but we had the same problem. By default, even a private cluster can create a public load balancer.
We used an Azure policy to disallow this. If you haven’t solved it yet, I can send it to you later.
I fixed it by assigning a private IP address and it works flawlessly. But yes, if you can share the Azure policy, then I’ll disallow AKS from creating public IPs!
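For anyone else hitting this: the fix in that Microsoft doc boils down to annotating the controller’s Service so Azure creates an internal load balancer instead of a public one. A sketch of the Helm values (the IP here is a placeholder; pick a free address from your AKS subnet):

```yaml
# values-internal.yaml — pass with: helm install nginx ingress-nginx/ingress-nginx -f values-internal.yaml
controller:
  service:
    annotations:
      # tells the Azure cloud provider to create an internal LB
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    loadBalancerIP: 10.240.0.42   # placeholder: a free IP in the AKS subnet
```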
Cool can do. Will share in a few hours
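Until that gets shared, here’s roughly what such a rule looks like; a sketch of an Azure Policy rule that denies creation of any public IP in the scope it’s assigned to (you’d typically assign it to the AKS node resource group):

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Network/publicIPAddresses"
  },
  "then": {
    "effect": "deny"
  }
}
```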
What’s the best way to import a Terraform module from Azure DevOps? In the example below the author uses his personal SSH key; is there a way to use an Azure service principal instead? He also warns about it: “Note that unlike PAT tokens, you cannot scope SSH keys. Using the SSH key will give you the same rights as the user whose account this key belongs to, so use them carefully.” https://samcogan.com/using-terraform-modules-from-git-in-azure-devops/
Did you know you can reference custom Terraform modules direct from Git? Here’s how to do it, and how to make this work with an Azure DevOps pipeline
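On the module-source question: Terraform itself only understands generic Git URLs, so the usual workaround in pipelines is an HTTPS URL plus the build’s own access token rather than a personal SSH key. A sketch, with the org/project/repo names made up:

```hcl
module "network" {
  # "//" separates the repo from a subdirectory; "?ref" pins a tag or branch
  source = "git::https://dev.azure.com/my-org/my-project/_git/terraform-modules//network?ref=v1.2.0"
}
```

In an Azure DevOps pipeline you can then inject `$(System.AccessToken)` before `terraform init` with something like `git config --global http.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"`, so Git can authenticate over HTTPS with a token scoped to the build. As far as I know you can’t hand a service principal to `terraform init` directly; the options are a PAT/System.AccessToken over HTTPS, or SSH keys.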