#azure (2020-07)
Archive: https://archive.sweetops.com/azure/
2020-07-26
2020-07-27
Hi guys - I wish to make an AKS cluster using terraform, but provide the service principal credentials via an aws key vault. However I am also making the key vault at the same time, so I have a bit of a chicken-and-egg situation
any advice on how to resolve?
The AKS service principal’s credentials, or the credentials you are using to authenticate azurerm?
the service principal in this case (we will use a managed identity to apply the terraform itself, but that one I am resigned to making manually)
Are you using terraform to create the service principal?
yes, planned to
Manages a Service Principal associated with an Application within Azure Active Directory.
You already have the password in aws vault?
no, that is the current issue - I want to make the SP, and have it added to the Key Vault
(just trying to avoid as much manual manipulation as possible)
I don’t have too much advice I guess; not familiar with AWS vault. The SP password can be specified in the following resource: https://www.terraform.io/docs/providers/azuread/r/service_principal_password.html
Manages a Password associated with a Service Principal within Azure Active Directory.
sorry keyvault (Azure not AWS) but thank you
ah
but it seems like the best way may be to manually add the password to key vault and then import it as data
terraform can figure out the dependencies; you can create a keyvault, then a https://www.terraform.io/docs/providers/azurerm/r/key_vault_secret.html to store your secret in the vault and subsequently use that as the password for the service principal.
the problem here, is your password also ends up in the tf state
Manages a Key Vault Secret.
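That dependency chain could be sketched roughly like this (names are placeholders; it assumes an `azurerm_key_vault.example` and an `azuread_service_principal.aks` are defined elsewhere, and that the azuread/azurerm providers are configured):

```hcl
# Hypothetical sketch: generate the password, store it in Key Vault,
# and set it on the service principal. As noted above, the value
# still ends up in the terraform state.
resource "random_password" "sp" {
  length = 32
}

resource "azurerm_key_vault_secret" "sp_password" {
  name         = "aks-sp-password"
  value        = random_password.sp.result
  key_vault_id = azurerm_key_vault.example.id
}

resource "azuread_service_principal_password" "aks" {
  service_principal_id = azuread_service_principal.aks.id
  value                = random_password.sp.result
  end_date             = "2099-01-01T00:00:00Z"
}
```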
yeah that is not ideal
I wonder if it's possible to just use a MI with AKS now. Happen to know of any good resources for Azure Terraform modules?
I think AKS still requires a SP last I checked
one other possible middle ground solution to your issue is to use an external data source which interacts with the keyvault via azure cli
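That external-data-source approach might look something like this (vault and secret names are placeholders; it assumes the az CLI is installed and already logged in, and that the external data source receives a flat JSON object of strings on stdout):

```hcl
# Hypothetical: read the secret via the az CLI at plan time instead of
# having terraform create it, keeping the value out of the resource graph
# (though it still appears in state wherever it is referenced).
data "external" "sp_password" {
  program = [
    "az", "keyvault", "secret", "show",
    "--vault-name", "my-vault",
    "--name", "aks-sp-password",
    "--query", "{value: value}",
    "-o", "json",
  ]
}

# referenced elsewhere as: data.external.sp_password.result.value
```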
yeah thats a possibility
I think though if it comes to that, it's probably better to accept a small amount of manual management than adopt a new tool at this early stage
You have two options:
-1
sync your AWS key vault with an Azure Key Vault. Creating an Azure Key Vault is pretty straightforward,
then you can look it up whenever it is needed with data "azurerm_key_vault"
and data "azurerm_key_vault_secret"
-2 add an aws provider to your terraform configuration with AWS credentials (that you need to pass in as variables, for example), then calling
data "vault_aws_access_credentials"
will use your aws provider credentials to fetch the data and allow you to retrieve credentials (NB: I only use Azure, not AWS)
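Option 1's lookups could be sketched like this (vault, resource group, and secret names are placeholders for an already-existing vault):

```hcl
# Hypothetical lookup: read an existing Key Vault and one of its secrets
# as data sources, so nothing sensitive is created by this configuration.
data "azurerm_key_vault" "example" {
  name                = "my-vault"
  resource_group_name = "my-rg"
}

data "azurerm_key_vault_secret" "sp_password" {
  name         = "aks-sp-password"
  key_vault_id = data.azurerm_key_vault.example.id
}

# referenced elsewhere as: data.azurerm_key_vault_secret.sp_password.value
```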
yeah that makes sense, the second option seems clean
plus keyvault seems confusing; I can't understand how the UI and networking security are meant to work yet
there is a default subnet NSG rule that allows resources to reach Azure services
NSG?
Network security groups.
also on Azure Key Vault, you need to explicitly allow a user or application to access it, either in the "Access Policies" tab or in terraform in the "azurerm_key_vault" resource with an access_policy block
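A minimal sketch of that access_policy block (all surrounding names are placeholders; it assumes a `data "azurerm_client_config" "current"` block and a resource group exist, and grants the caller's own identity secret access):

```hcl
# Hypothetical vault granting one object (user, group, or app)
# permission to read and write secrets via an inline access policy.
resource "azurerm_key_vault" "example" {
  name                = "my-vault"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    secret_permissions = ["get", "list", "set"]
  }
}
```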
got it, thanks
actually I've already set up user access, but blocking internet traffic has stopped us being able to manage keys in the portal (or cli I assume); it makes sense, just figuring out the best way around that
to be more precise (though it's out of scope of your question): traffic between NSGs and Azure services can be explicitly allowed or denied using service tags https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview
Learn about service tags. Service tags help minimize the complexity of security rule creation.
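For example, a service tag can stand in for the destination in an NSG rule; a hedged sketch (all names are placeholders; `AzureKeyVault` is the service tag for Key Vault):

```hcl
# Hypothetical NSG rule: allow outbound HTTPS from a subnet to
# Key Vault only, using the AzureKeyVault service tag instead of IPs.
resource "azurerm_network_security_rule" "allow_keyvault" {
  name                        = "allow-keyvault-outbound"
  priority                    = 100
  direction                   = "Outbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "443"
  source_address_prefix       = "*"
  destination_address_prefix  = "AzureKeyVault"
  resource_group_name         = azurerm_resource_group.example.name
  network_security_group_name = azurerm_network_security_group.example.name
}
```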
I think I’m going to have to take some training courses to understand this properly
thank you
2020-07-31
Hi guys, an AKS credentials question: I’m reading through https://github.com/Azure/AKS/issues/397, but cannot make heads nor tails of how this is supposed to work
For General Availability of AKS, will az aks get-credentials enforce kubectl to connect with credentials unique to each AAD user logged in via az login instead of returning shared credentials that …
I have an AKS cluster, and it seems I can get credentials with and without --admin
but I do not understand what is allowing that, nor how to disallow it for others
the final comment on the github issue says
--admin is controlled by Azure RBAC role (azure-kubernetes-service-cluster-admin-role). It basically ignores AAD and uses client certificates.
but --admin
fetches the kubeconfig for this role…?
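For reference, the two invocations being compared (resource group and cluster names are placeholders; per the issue comment, the `--admin` variant only works for callers holding the "Azure Kubernetes Service Cluster Admin Role" Azure RBAC role):

```
# AAD user credentials: kubectl authenticates via AAD, then
# Kubernetes RBAC decides what the user can do
az aks get-credentials --resource-group my-rg --name my-aks

# Admin credentials: a shared client certificate that bypasses AAD
# entirely; gated only by the Azure RBAC cluster-admin role
az aks get-credentials --resource-group my-rg --name my-aks --admin
```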
further: The user role seems to have full access to the cluster too
This issue will likely be of interest to you: https://github.com/MicrosoftDocs/azure-docs/issues/10754
What is the method that Microsoft is recommending, of limiting who can log into the cluster with the "--admin" flag? If my organization eventually rolls this out to more groups with access to Azure …
and this as well if you are not familiar: https://docs.microsoft.com/en-us/azure/aks/concepts-identity#azure-rbac-to-authorize-access-to-the-aks-resource
Learn about access and identity in Azure Kubernetes Service (AKS), including Azure Active Directory integration, Kubernetes role-based access control (RBAC), and roles and bindings.
a few lines in a github comment make this easier to understand than all of the azure docs
thanks
np
first link cleared up my uncertainty effectively