How to create an AKS cluster in C#? I have not found a way to do so using the C# Kubernetes SDK: https://github.com/kubernetes-client/csharp
Basically, I want the equivalent of the following command:
az aks create -g $RESOURCE_GROUP -n $AKS_CLUSTER \
--enable-addons azure-keyvault-secrets-provider \
--enable-managed-identity \
--node-count $AKS_NODE_COUNT \
--generate-ssh-keys \
--enable-pod-identity \
--network-plugin azure
You create the cluster by sending a PUT request with a JSON payload to ARM (Azure Resource Manager); the Kubernetes client SDK only talks to the Kubernetes API of an existing cluster, not to Azure itself.
See this: https://learn.microsoft.com/en-us/rest/api/aks/managed-clusters/create-or-update?tabs=HTTP
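For example, a minimal sketch of that PUT using az rest (the Azure management libraries for .NET wrap the same API). Subscription, resource group, cluster name and api-version are placeholders, and the body is deliberately trimmed; see the linked reference for the full schema and for the addon/pod-identity settings:
# Sketch only: the same PUT any SDK would send, expressed with the Azure CLI.
cat > cluster.json <<'EOF'
{
  "location": "westeurope",
  "identity": { "type": "SystemAssigned" },
  "properties": {
    "dnsPrefix": "myaks",
    "agentPoolProfiles": [
      { "name": "nodepool1", "count": 3, "vmSize": "Standard_DS2_v2", "mode": "System", "osType": "Linux" }
    ],
    "networkProfile": { "networkPlugin": "azure" }
  }
}
EOF
az rest --method put \
  --url "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ContainerService/managedClusters/$AKS_CLUSTER?api-version=$API_VERSION" \
  --body @cluster.json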
To make parameters stored in a key vault available to my Azure web app, I've executed the following:
identity=`az webapp identity assign \
--name $(appName) \
--resource-group $(appResourceGroupName) \
--query principalId -o tsv`
az keyvault set-policy \
--name $(keyVaultName) \
--secret-permissions get \
--object-id $identity
Now I want to create an Azure Postgres server, taking the admin password from a key vault:
az postgres server create \
--location $(location) \
--resource-group $(ResourceGroupName) \
--name $(PostgresServerName) \
--admin-user $(AdminUserName) \
--admin-password '$(AdminPassWord)' \
--sku-name $(pgSkuName)
The value of my AdminPassWord here is something like
#Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/)
I need the single quotes (as above) to get the Postgres server created. But does this mean that the password will be the whole string '#Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/)' instead of the secret stored in <myKv>?
When running my pipeline without the quotes (i.e. just --admin-password $(AdminPassWord) \) I got the error message syntax error near unexpected token ('. I thought it could be a consequence of the fact that I haven't set the policy --secret-permissions get for the Postgres server resource. But how can I set it before creating the Postgres server?
The expression #Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/) is used to access a key vault secret value from an Azure web app: once you configure it with the first two commands, the managed identity of the web app is able to read the key vault secret.
But if you want to create an Azure Postgres server with that password, you need to retrieve the secret value first and use it, rather than the expression.
With the Azure CLI, you can use az keyvault secret show and pass the resulting secret to the --admin-password parameter of az postgres server create.
az keyvault secret show [--id]
[--name]
[--query-examples]
[--subscription]
[--vault-name]
[--version]
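For example, a sketch of that approach (assuming the secret is stored under the name AdminPassWord in $(keyVaultName)):
# Sketch: read the secret value first, then pass it to the create command.
ADMIN_PASSWORD=$(az keyvault secret show \
  --vault-name $(keyVaultName) \
  --name AdminPassWord \
  --query value -o tsv)
az postgres server create \
  --location $(location) \
  --resource-group $(ResourceGroupName) \
  --name $(PostgresServerName) \
  --admin-user $(AdminUserName) \
  --admin-password "$ADMIN_PASSWORD" \
  --sku-name $(pgSkuName)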
Trying to figure out how to best give my AKS cluster access to a Postgres database in Azure.
This is how I create the cluster:
az group create \
--name $RESOURCE_GROUP \
--location $LOCATION
az aks create \
--resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
--node-vm-size Standard_DS2_v2 \
--node-count 1 \
--enable-addons monitoring \
--enable-managed-identity \
--generate-ssh-keys \
--kubernetes-version 1.19.6 \
--attach-acr $ACR_NAME \
--location $LOCATION
This will automatically create a VNet with a subnet that the node pool uses.
The following works:
Find the VNet resource in Azure
Go to "subnets" -> select the subnet -> Choose "Microsoft.SQL" under "Services". Save
Find the Postgres resource in Azure
Go to "Connection Security" -> Add existing virtual network -> Select the AKS VNet subnet. Save
So I have two questions:
Is it recommended to "fiddle" with the VNet subnet automatically created by az aks create? I.e. adding the service endpoint for Microsoft.Sql
If it's ok, how can I accomplish the same using Azure CLI only? The problem I have is how to figure out the id of the subnet (based on what az aks create returns)
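For reference, one possible CLI-only sketch of those portal steps, assuming the auto-created VNet is the only one in the cluster's node resource group (the Postgres resource group, server name and rule name below are placeholders):
# Sketch: look up the auto-created subnet, add the Microsoft.Sql service endpoint,
# then allow that subnet on the Postgres server.
NODE_RG=$(az aks show -g $RESOURCE_GROUP -n $CLUSTER_NAME --query nodeResourceGroup -o tsv)
SUBNET_ID=$(az network vnet list -g $NODE_RG --query "[0].subnets[0].id" -o tsv)
az network vnet subnet update --ids $SUBNET_ID --service-endpoints Microsoft.Sql
az postgres server vnet-rule create \
  --resource-group $PG_RESOURCE_GROUP \
  --server-name $PG_SERVER_NAME \
  --name allow-aks-subnet \
  --subnet $SUBNET_ID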
Background:
We run analytics pipelines on dedicated clusters once a day. All clusters are created at the same time, have one pod deployed, run their pipeline and are deleted once complete. They use the default VPC network in the same region and are created with a command like this:
gcloud beta container clusters create <CLUSTER_NAME> \
--zone "europe-west1-b" \
--machine-type "n1-standard-2" \
--num-nodes=1 \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--service-account=<SA_EMAIL> \
--disk-size 10GB \
--network default \
--subnetwork <SUB_NETWORK> \
--enable-master-global-access \
--enable-private-nodes \
--enable-private-endpoint \
--enable-ip-alias \
--enable-intra-node-visibility \
--enable-master-authorized-networks \
--master-ipv4-cidr=<MASTER_IP>/28 \
--cluster-ipv4-cidr <CLUSTER_IP>/14 \
--services-ipv4-cidr <SERVICES_IP>/20 \
--enable-network-policy \
--enable-shielded-nodes
When we add a new cluster for a new pipeline, we have encountered issues where the IP ranges collide or overlap and addresses are unavailable. As we expect to continually add more pipelines, and thus more clusters, we want an automated way of avoiding this issue.
We have explored creating a dedicated network (and subnetwork) for each cluster so each cluster can have the same IP addresses (albeit in different networks) but are unsure if this is best practice.
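For illustration, that dedicated-network variant looks roughly like this (network and subnetwork names and the CIDR range are placeholders):
# Sketch: one custom-mode network and subnetwork per pipeline, so every cluster
# can reuse the same ranges without colliding with the others.
gcloud compute networks create ${PIPELINE_NAME}-net --subnet-mode=custom
gcloud compute networks subnets create ${PIPELINE_NAME}-subnet \
  --network=${PIPELINE_NAME}-net \
  --region=europe-west1 \
  --range=10.0.0.0/22
# Then pass the new network/subnetwork to the existing create command:
#   --network ${PIPELINE_NAME}-net --subnetwork ${PIPELINE_NAME}-subnet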
Question:
Is it possible to create Kubernetes clusters in Google Cloud so that the master, cluster and service IP addresses are auto-assigned?
I was deploying a CloudFormation template (pre-built and provided by AWS) and was looking for a way to control the parameters (i.e. update them on a regular basis with new parameters). I was wondering if there is a programmatic best practice to manage this?
Thanks!
If you want to update an existing stack, you can use the AWS CLI and run the aws cloudformation update-stack command with a --parameters argument that specifies the parameters you want. You can also update the template itself with --template-body or --template-url, if needed.
Documentation: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/update-stack.html
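For example (stack name and parameter keys are placeholders):
# Sketch: update only the parameters of an existing stack, reusing its current template.
aws cloudformation update-stack \
  --stack-name my-stack \
  --use-previous-template \
  --parameters ParameterKey=InstanceType,ParameterValue=t3.large \
               ParameterKey=Environment,UsePreviousValue=true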
It's better to use aws cloudformation deploy.
Something like this:
aws cloudformation deploy \
--template-file ./template.yaml \
--s3-bucket ${s3_bucket_name} \
--s3-prefix ${cf_templates} \
--stack-name ${stackName} \
--capabilities CAPABILITY_NAMED_IAM \
--no-fail-on-empty-changeset \
--parameter-overrides par1=${par1value} par2=${par2value} \
--tags tag1=tag1value tag2=tag2value \
--profile test_profile
Is it possible to expose Hue with Component Gateway for Dataproc? I went through the docs and didn't find any option to add a service to it. I am creating the Dataproc cluster with the command below.
gcloud beta dataproc clusters create hive-cluster \
--scopes sql-admin,bigquery \
--image-version 1.5 \
--master-machine-type n1-standard-4 \
--num-masters 1 \
--worker-machine-type n1-standard-1 \
--num-workers 2 \
--region $REGION \
--zone $ZONE \
--optional-components=ANACONDA,JUPYTER \
--initialization-actions gs://bucket/init-scripts/cloud-sql-proxy.sh,gs://bucket/init-scripts/hue.sh \
--properties hive:hive.metastore.warehouse.dir=gs://$PROJECT-warehouse/datasets,dataproc:jupyter.notebook.gcs.dir=gs://bucket/notebooks/jupyter \
--metadata "hive-metastore-instance=$PROJECT:$REGION:hive-metastore" \
--enable-component-gateway
Hue is not an optional component of Dataproc, hence it is not accessible through Component Gateway. For now, you have to use the Dataproc web interfaces approach:
Once the cluster has been created, Hue is configured to run on port 8888 on the master node of the Dataproc cluster. To connect to the Hue web interface, you need to create an SSH tunnel and use a SOCKS5 proxy with your web browser, as described in the Dataproc web interfaces documentation. In the opened web browser, go to 'localhost:8888' and you should see the Hue UI.
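As a minimal sketch, a plain SSH port forward is enough if you only need Hue (cluster name and zone are placeholders; the SOCKS-proxy setup from the web interfaces docs is the more general option, since it also covers the other web UIs):
# Sketch: forward local port 8888 to Hue on the master node, then open
# http://localhost:8888 in your browser.
gcloud compute ssh hive-cluster-m \
  --zone=$ZONE \
  -- -N -L 8888:localhost:8888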