Unable to attach ACR to AKS - Kubernetes

I have used the following command to attach ACR to my AKS cluster:
az aks create -n myAKSCluster -g myResourceGroup --attach-acr $MYACR
But the error persists while pulling the image. I then did a little more investigation to find which service principal IDs are attached, using the following:
az aks list
az role assignment list --assignee <client-id> --scope <acr-id>
and I get an empty array []. Any clue what I might be missing?

You can first set the subscription using the following command and then try the further ones to map the ACR to the AKS cluster:
az account set --subscription <subscription-id>
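If you are not sure which subscription ID to pass, listing your subscriptions first can help:
az account list --output table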
There are two ways to get this sorted.
Map the ACR to AKS:
CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME \
  --subscription $SUBSCRIPTION_ID --query "servicePrincipalProfile.clientId" --output tsv)
ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --subscription $SUBSCRIPTION_ID --query "id" --output tsv)
az role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID
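To verify that the assignment actually landed (reusing the variables from above), something like this should list it:
az role assignment list --assignee $CLIENT_ID --scope $ACR_ID --output table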

First of all, check whether your cluster is already attached to the ACR:
az aks check-acr --name myAKSCluster --resource-group myResourceGroup --acr myAcr.azurecr.io
If it is already attached, you will get a message like "Your cluster can now pull images from ACR". If you instead get an error code such as 403, you can attach the ACR to the existing cluster with this command:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr myAcr
You can also attach an ACR from a different subscription by passing its full resource ID:
az aks update -g myResourceGroup -n myAKSCluster --attach-acr "/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myAcr"
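If you don't have the registry's full resource ID at hand, it can be looked up the same way as earlier in this thread, for example (add --subscription if the registry lives in a different subscription):
az acr show --name myAcr --query id --output tsv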

Related

Creating an AKS Cluster using C#?

How do I create an AKS cluster in C#? I have not found a way to do so using the C# K8s SDK: https://github.com/kubernetes-client/csharp
Basically, I want the equivalent of the following command:
az aks create -g $RESOURCE_GROUP -n $AKS_CLUSTER \
--enable-addons azure-keyvault-secrets-provider \
--enable-managed-identity \
--node-count $AKS_NODE_COUNT \
--generate-ssh-keys \
--enable-pod-identity \
--network-plugin azure
Send a PUT request with a payload (JSON body) to ARM (the Azure Resource Manager REST API).
See this: https://learn.microsoft.com/en-us/rest/api/aks/managed-clusters/create-or-update?tabs=HTTP
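If it helps to prototype the call before writing the C#, the same PUT can be issued from the Azure CLI with az rest. This is only a minimal sketch: the subscription, resource group, cluster name and api-version are placeholders, and the body is a minimal shape based on the linked docs (check them for the current api-version and required fields):
az rest --method put \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>?api-version=<api-version>" \
  --body '{
    "location": "eastus",
    "identity": { "type": "SystemAssigned" },
    "properties": {
      "dnsPrefix": "<dns-prefix>",
      "agentPoolProfiles": [
        { "name": "nodepool1", "count": 1, "vmSize": "Standard_DS2_v2", "mode": "System" }
      ]
    }
  }'
In C#, the same request can be sent with HttpClient and a bearer token for https://management.azure.com/, or via the Azure.ResourceManager management SDK, which wraps this API.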

How to create an Azure Postgres server with the admin password in Key Vault?

To make parameters stored in Key Vault available to my Azure web app, I've executed the following:
identity=`az webapp identity assign \
--name $(appName) \
--resource-group $(appResourceGroupName) \
--query principalId -o tsv`
az keyvault set-policy \
--name $(keyVaultName) \
--secret-permissions get \
--object-id $identity
Now I want to create an Azure Postgres server, taking the admin password from a Key Vault:
az postgres server create \
--location $(location) \
--resource-group $(ResourceGroupName) \
--name $(PostgresServerName) \
--admin-user $(AdminUserName) \
--admin-password '$(AdminPassWord)' \
--sku-name $(pgSkuName)
If the value of my AdminPassWord here is something like
#Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/)
I need the single quotes (like above) to get the Postgres server created. But does this mean that the password will be the whole string '#Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/)' instead of the secret stored in <myKv>?
When running my pipeline without the quotes (i.e. just --admin-password $(AdminPassWord) \) I got the error message "syntax error near unexpected token ('". I thought that it could be a consequence of the fact that I haven't set the policy --secret-permissions get for the Postgres server resource. But how can I set it before creating the Postgres server?
The expression #Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/) is used to access a Key Vault secret value from an Azure web app's configuration; once you run the first two commands, the managed identity of the web app is able to access the Key Vault secret.
But if you want to create an Azure Postgres server with that password, you need to obtain the secret value first and use it, rather than using the expression.
With the Azure CLI, you can use az keyvault secret show to read the secret, then pass its value to the --admin-password parameter of az postgres server create.
az keyvault secret show [--id]
                        [--name]
                        [--query-examples]
                        [--subscription]
                        [--vault-name]
                        [--version]
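For example (a sketch that keeps the pipeline variable names from the question; the vault and secret names are whatever you used when storing the password):
ADMIN_PASSWORD=$(az keyvault secret show \
  --vault-name <myKv> \
  --name AdminPassWord \
  --query value \
  --output tsv)
az postgres server create \
  --location $(location) \
  --resource-group $(ResourceGroupName) \
  --name $(PostgresServerName) \
  --admin-user $(AdminUserName) \
  --admin-password "$ADMIN_PASSWORD" \
  --sku-name $(pgSkuName)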

Grant AKS access to Postgres database in Azure via VNet

Trying to figure out how to best give my AKS cluster access to a Postgres database in Azure.
This is how I create the cluster:
az group create \
--name $RESOURCE_GROUP \
--location $LOCATION
az aks create \
--resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
--node-vm-size Standard_DS2_v2 \
--node-count 1 \
--enable-addons monitoring \
--enable-managed-identity \
--generate-ssh-keys \
--kubernetes-version 1.19.6 \
--attach-acr $ACR_NAME \
--location $LOCATION
This will automatically create a VNet with a subnet that the node pool uses.
The following works:
Find the VNet resource in Azure
Go to "subnets" -> select the subnet -> Choose "Microsoft.SQL" under "Services". Save
Find the Postgres resource in Azure
Go to "Connection Security" -> Add existing virtual network -> Select the AKS VNet subnet. Save
So I have two questions:
Is it recommended to "fiddle" with the VNet subnet automatically created by az aks create? I.e adding the service endpoint for Micrsoft.SQL
If it's OK, how can I accomplish the same using the Azure CLI only? The problem I have is how to figure out the ID of the subnet (based on what az aks create returns).
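This is roughly what I've pieced together so far for the CLI-only route, but I haven't verified it (the vnet-rule part is my guess at the equivalent of the portal step):
NODE_RG=$(az aks show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --query nodeResourceGroup -o tsv)
VNET_NAME=$(az network vnet list --resource-group $NODE_RG --query "[0].name" -o tsv)
SUBNET_ID=$(az network vnet subnet list --resource-group $NODE_RG --vnet-name $VNET_NAME --query "[0].id" -o tsv)
# add the Microsoft.Sql service endpoint to the AKS subnet
az network vnet subnet update --ids $SUBNET_ID --service-endpoints Microsoft.Sql
# allow that subnet on the Postgres server
az postgres server vnet-rule create --resource-group <postgres-rg> --server-name <postgres-server> --name aks-subnet --subnet $SUBNET_ID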

Azure Resource Manager template to create resources for different environments

I'm trying to create an ARM template so I can recreate, in a new subscription, all of the resources that already exist in another Azure subscription. For example, if I have something in the testing environment, I would like to create the same resources in a different environment so I can deploy code to them afterwards. However, I am very new to Azure, PowerShell and ARM templates, and am therefore looking for guidance on where to begin and how to achieve this goal.
I've already read up on PowerShell.
I know how to move resources from one resource group to another, or even to a different Azure subscription.
So generally you would create an ARM template to do this. When you need to change something, you add/remove resources in it, then you deploy it to the different environments. This is similar to how you promote your application across environments: first you deploy it to dev and test it; then you deploy it to test and do more rigorous testing, perhaps performance testing; then you deploy it to production.
If you are looking for examples, here's the official examples repo. The official docs might help as well.
Azure Resource Manager templates are the preferred way of automating the deployment of resources to ARM. To learn how to deploy resources with Resource Manager templates and Azure PowerShell, you can refer to the official documentation.
To deploy to a subscription, use New-AzDeployment:
New-AzDeployment -Location <location> -TemplateFile <path-to-template>
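If you prefer the Azure CLI over PowerShell, the equivalent subscription-scoped deployment (as far as I know) is:
az deployment sub create --location <location> --template-file <path-to-template>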
If you want to deploy Azure Resource Manager templates with Azure DevOps, there are blog posts that walk through the setup. One of the core ideas of DevOps is automation; if you don't want to manually recreate your environment through the portal every time, this is worth a try.
You can take a look at the Azure Citadel self-paced ARM template lab.
If you really want to start with ARM templates, you need to parameterise all values in the template (azuredeploy.json) and build out your parameters file (azuredeploy.parameters.json) with the parameters that need to change between environments, such as name, location, SKU/size, etc.
Although if you're just starting out, I recommend going straight to the Azure CLI. It's simple, easily repeatable, and you can deploy whole solutions in a few commands. The following creates a resource group, a SQL logical server with a database, and an App Service plan with a web app, for each environment.
Dev
az group create --name "rg-d-01" --location "australiaeast"
az appservice plan create --name "asp-d-01" --resource-group "rg-d-01" --location "australiaeast" --sku "S1"
az webapp create --name "awa-d-01" --plan "asp-d-01" --resource-group "rg-d-01"
az sql server create --name "sql-d-01" --resource-group "rg-d-01" --location "australiaeast"
az sql db create --server "sql-d-01" --resource-group "rg-d-01" --name "sqldb-d-01" --service-objective S0
Test
az group create --name "rg-t-01" --location "australiaeast"
az appservice plan create --name "asp-t-01" --resource-group "rg-t-01" --location "australiaeast" --sku "S1"
az webapp create --name "awa-t-01" --plan "awhp-t-01" --resource-group "rg-t-01"
az sql server create --name "sql-t-01" --resource-group "rg-t-01" --location "australiaeast"
az sql db create --server "sql-t-01" --resource-group "rg-t-01" --name "sqldb-t-01" --service-objective S0
Prod
az group create --name "rg-p-01" --location "australiaeast"
az appservice plan create --name "asp-p-01" --resource-group "rg-p-01" --location "australiaeast" --sku "S1"
az webapp create --name "awa-p-01" --plan "awhp-p-01" --resource-group "rg-p-01"
az sql server create --name "sql-p-01" --resource-group "rg-p-01" --location "australiaeast"
az sql db create --server "sql-p-01" --resource-group "rg-p-01" --name "sqldb-p-01" --service-objective S0

Shell (ssh) into Azure AKS (Kubernetes) cluster worker node

I have a Kubernetes cluster in Azure using AKS and I'd like to 'login' to one of the nodes. The nodes do not have a public IP.
Is there a way to accomplish this?
The procedure is described at length in the Azure documentation:
https://learn.microsoft.com/en-us/azure/aks/ssh. It consists of running a pod that you use as a relay to SSH into the nodes, and it works perfectly fine:
You probably specified the SSH username and public key during cluster creation. If not, you have to configure your node to accept them as the SSH credentials:
$ az vm user update \
--resource-group MC_myResourceGroup_myAKSCluster_region \
--name node-name \
--username theusername \
--ssh-key-value ~/.ssh/id_rsa.pub
To find your node names:
az vm list --resource-group MC_myResourceGroup_myAKSCluster_region -o table
When done, run a pod on your cluster with an SSH client inside; this is the pod you will use to SSH to your nodes:
kubectl run -it --rm my-ssh-pod --image=debian
# install the ssh client, as there is none in the Debian image
apt-get update && apt-get install openssh-client -y
On your workstation, get the name of the pod you just created:
$ kubectl get pods
Add your private key into the pod:
$ kubectl cp ~/.ssh/id_rsa pod-name:/id_rsa
Then, in the pod, connect via SSH to one of your nodes:
ssh -i /id_rsa theusername@10.240.0.4
(To find the node IPs, on your workstation):
az vm list-ip-addresses --resource-group MC_myResourceGroup_myAKSCluster_region -o table
This Gist and this page have pretty good explanations of how to do it: SSHing into the nodes, not shelling into the pods/containers.
You can use this instead of SSH. It will create a tiny privileged pod and use nsenter to access the node:
https://github.com/mohatb/kubectl-wls
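If your kubectl is reasonably recent (1.20+), the built-in kubectl debug command does much the same thing without a plugin: it runs a pod on the chosen node with the node's filesystem mounted under /host (node name and image below are placeholders):
kubectl get nodes
kubectl debug node/<node-name> -it --image=busybox
# inside the debug pod, switch into the node's root filesystem
chroot /host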