How to automate Azure DevOps Kubernetes Service Connection to Cluster?

To deploy services to my Kubernetes cluster via Azure DevOps, I need to create a Kubernetes service connection manually. I want to automate this by creating the service connection dynamically in Azure DevOps, so that I can delete and recreate the cluster and deployment. Is this possible, and how can I do it?

You can create the service endpoint using the Azure DevOps REST API; see the Service Endpoints REST API documentation for details.
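As a minimal sketch of that API call, the following builds a `POST` request against the Service Endpoints REST API. The organization, project, PAT, cluster URL, and token values are all placeholders you would substitute with your own; the exact `authorization` parameters depend on which authentication method you pick (Azure subscription, service account, or kubeconfig), so treat the body below as an assumption-laden example rather than the definitive schema.

```python
import base64
import json
import urllib.request

# Placeholders -- substitute your own organization, project, and PAT.
ORG = "my-org"
PROJECT = "my-project"
PAT = "my-personal-access-token"

# Hypothetical request body for a Kubernetes service endpoint that
# authenticates with a service account token; adjust the authorization
# section for your chosen authentication method.
endpoint = {
    "name": "my-aks-connection",
    "type": "kubernetes",
    "url": "https://my-cluster.example.azmk8s.io",
    "authorization": {
        "scheme": "Token",
        "parameters": {
            "apiToken": "<service-account-token>",
            "serviceAccountCertificate": "<base64-ca-cert>",
        },
    },
    "data": {"authorizationType": "ServiceAccount"},
    # Newer API versions create endpoints at organization level and
    # attach them to projects via project references.
    "serviceEndpointProjectReferences": [
        {
            "projectReference": {"name": PROJECT},
            "name": "my-aks-connection",
        }
    ],
}

url = f"https://dev.azure.com/{ORG}/_apis/serviceendpoint/endpoints?api-version=7.0"
auth = base64.b64encode(f":{PAT}".encode()).decode()
req = urllib.request.Request(
    url,
    data=json.dumps(endpoint).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {auth}",
    },
    method="POST",
)
# urllib.request.urlopen(req) would perform the call; it is left out here
# so the sketch can be read without live credentials.
```

Running this from a pipeline or a script after cluster creation is what makes the delete-and-recreate workflow possible.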

Related

Terraform provisioned Private AKS cluster unable to deploy application from Azure pipeline

We are trying to deploy an application to a private AKS cluster provisioned with Terraform in Azure DevOps; when we try to deploy a Helm chart or access the cluster, we get an error.
As you did not provide much information, I will do my best to help:
It seems that the user or service principal running the pipeline has subscription-level permissions to create the AKS cluster, but not enough permissions to create anything inside Kubernetes.
You can leverage Azure AD and Azure RBAC with your Kubernetes cluster. With Terraform, you can specify admin_group_object_ids inside the azure_active_directory_role_based_access_control block. Just assign the group there and add the pipeline user / service principal to that group.
Alternatively, you can use Azure built-in roles such as Azure Kubernetes Service Cluster Admin Role and add your user / service principal there.
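A minimal sketch of the Terraform side, assuming an `azurerm_kubernetes_cluster` resource and a placeholder AAD group object ID (the exact attributes available in this block vary by azurerm provider version):

```hcl
resource "azurerm_kubernetes_cluster" "aks" {
  # ... name, location, resource_group_name, default_node_pool, identity ...

  azure_active_directory_role_based_access_control {
    azure_rbac_enabled = true
    # Object ID of the AAD group that should be cluster admin; add the
    # pipeline user / service principal to this group.
    admin_group_object_ids = ["00000000-0000-0000-0000-000000000000"]
  }
}
```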

Azure DevOps YAML Environment Auto-Deploy Trigger for New Servers

I want to use an Azure DevOps YAML pipeline to deploy to an AWS stack with EC2 instances and a load balancer. I've read that I can use the AWS user data script to join new EC2 instances to the Azure DevOps environment.
My question is, how can I get my Azure DevOps Environment or YAML build to deploy to new servers that join that group? For example, if I use auto-scaling and a new server spins up.
I know that Deployment Groups, which are used in the Classic pipelines, had a feature that allowed you to enable a post-deployment trigger that could redeploy the last successful build when a new server joined.
Is this possible to do with YAML Environments? If so, how?
If it matters, I hope to be able to share the AWS stack and have several separate applications that will get deployed to the same stack with their own YAML builds.

Azure DevOps environment with private EKS cluster

I am currently using an EKS private cluster with a public API server endpoint in order to use Azure DevOps environments (with a Kubernetes service connection).
I have a requirement to make everything in EKS private.
Once EKS becomes private, everything in Azure DevOps breaks, as it is no longer able to reach the API server.
Any suggestions on how Azure DevOps can communicate with a private Kubernetes API server would be appreciated.
If you're trying to target the cluster for deployment, you need a self-hosted agent that has a network route to your cluster.
The other capabilities exposed by the environment feature of Azure DevOps (i.e. monitoring the state of the cluster via the environment view) will not work -- they require a public-facing Kubernetes API to work.
If you don't mind the additional cost, VPN can be used to establish connection to the private EKS cluster.
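The self-hosted agent approach can be sketched as a pipeline fragment; the pool name below is a hypothetical example for an agent pool whose agents have a network route to the private cluster:

```yaml
# Run deployment steps on a self-hosted agent pool that can reach the
# private EKS API endpoint (Microsoft-hosted agents cannot).
pool:
  name: eks-private-pool

steps:
  - script: kubectl get nodes
    displayName: 'Verify connectivity to the private cluster'
```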

How to create an Azure DevOps Kubernetes service connection to access a private AKS cluster?

Creating a service connection to access a non-private AKS cluster is straightforward; however, is it possible to create a service connection for a private AKS cluster from Azure DevOps?
You can create a new Kubernetes service connection using the KubeConfig option, then click the dropdown arrow and choose Save without verification.
Also see Deploying to Private AKS Cluster
This walkthrough may also help:
https://techcommunity.microsoft.com/t5/fasttrack-for-azure/using-azure-devops-to-deploy-an-application-on-aks-private/ba-p/2029630
I have implemented this solution myself. We had a private AKS cluster and were unable to make a service connection from Azure DevOps to Azure Kubernetes Service,
so we created a self-hosted Linux agent in the subnet where Kubernetes is, and used that agent to run the build and release pipelines.

Connecting to AKS API (via HTTP) from an Azure VM

I would like to connect to the AKS API from a script on an Azure VM in order to scrape some metrics, check some stats, etc. of the cluster.
Is there any approach (such as an Azure policy or role attached to the VM) other than creating a user in Azure AD, or a service account in AKS with a ClusterRole bound to it, and referencing the certs/tokens from the VM?
Thank you