Auto-scale AKS with KEDA and Azure DevOps Server

I followed the instructions described here https://learn.microsoft.com/en-us/azure/aks/keda-about to set up KEDA with AKS, and I have a working workflow using a self-hosted Azure DevOps agent.
I can confirm everything works fine when my agent pool is hosted on Azure DevOps Services.
I would like to do the same with Azure DevOps Server. Here is what I have:
AKS is connected to a VNet, which is connected to my on-premises network.
The self-hosted agent can register with my Azure DevOps Server instance.
To make it work, I needed to customize the /etc/hosts file and also add the .crt certificate required by Azure DevOps for registration (see the sketch after this list).
So far so good: my self-hosted agent does the job for pipeline build & release.
My agent belongs to the 'default' namespace.
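For reference, I customized /etc/hosts for the agent pod via hostAliases rather than editing the file inside the container. A rough sketch (the deployment name and the IP address are placeholders for my environment):

# Add a hostAliases entry so the agent pod can resolve the on-premises
# Azure DevOps Server name; deployment name and IP are placeholders.
kubectl patch deployment azdevops-agent -n default -p \
  '{"spec":{"template":{"spec":{"hostAliases":[{"ip":"10.0.0.10","hostnames":["myazuredevops.mydomain.lan"]}]}}}}'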
My issue is with KEDA, which fails because of the certificate:
ERROR Failed to create new HPA resource {"controller": "scaledobject", "controllerGroup": "keda.sh", "controllerKind": "ScaledObject", "scaledObject": {"name":"azure-pipelines-scaledobject","namespace":"default"}, "namespace": "default", "name": "azure-pipelines-scaledobject", "reconcileID": "76554c6b-3876-4e83-8ce2-b6966e9b10ec", "HPA.Namespace": "default", "HPA.Name": "keda-hpa-azure-pipelines-scaledobject", "error": "error parsing azure Pipelines metadata: Get \"https://myazuredevops.mydomain.lan/MyCollection/_apis/distributedtask/pools?poolName=My-pool\": x509: certificate signed by unknown authority"}
As you can see, keda-operator tries to reach my Azure DevOps instance at 'myazuredevops.mydomain.lan' but fails because of the certificate.
I tried to set up my custom certificate as described here https://learn.microsoft.com/en-us/azure/aks/custom-certificate-authority, but I am not sure I am doing it correctly.
Has anyone succeeded in setting up KEDA with Azure DevOps Server? How did you solve the certificate issue?
My understanding is that we can configure a certificate at the AKS cluster level so that any pod can benefit from it. Is that how it works? If so, how can I achieve that?
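For context, one approach I have seen suggested is to mount the internal root CA directly into the keda-operator pod, since the operator is a Go binary that picks up PEM files from /etc/ssl/certs on Linux. A rough sketch, assuming KEDA is installed in the 'keda' namespace and the CA file is named internal-root-ca.crt (both are assumptions), and that the deployment already has volumes/volumeMounts arrays, as a standard KEDA install does:

# Put the internal root CA into a ConfigMap in the KEDA namespace
kubectl create configmap internal-root-ca -n keda \
  --from-file=internal-root-ca.crt=./internal-root-ca.crt
# Mount it into the operator's trust directory so Go picks it up
kubectl patch deployment keda-operator -n keda --type=json -p '[
  {"op":"add","path":"/spec/template/spec/volumes/-",
   "value":{"name":"internal-ca","configMap":{"name":"internal-root-ca"}}},
  {"op":"add","path":"/spec/template/spec/containers/0/volumeMounts/-",
   "value":{"name":"internal-ca","mountPath":"/etc/ssl/certs/internal-root-ca.pem","subPath":"internal-root-ca.crt","readOnly":true}}]'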
Thx

Related

Running into certificate errors when running puppet agent config using vault lookup

I'm running into certificate errors when I run "puppet agent -t" using a Vault lookup module in my branch for the agent config. Here are the errors I get:
"Failed to apply catalog: certificate verify failed" and "The certificate for does not match its private key"
The error persists even after I swap back to the production branch for the agent, where we then have to do an SSL clean to get the prod agent config to apply successfully.
Would setting up Puppet as the intermediary CA be a good idea? Has anybody run into this before?
We also set up AppRole auth for Vault, but to no avail. Any help would be appreciated, thanks!
Unsuccessful solutions: Vault AppRole auth, generating new keys, defining ssl_cert manually in the agent config, and cleaning the agent cert from the master.
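For anyone unfamiliar, the "SSL clean" mentioned above typically looks like the following on Puppet 6+ (the certname is a placeholder):

# On the agent: remove the local certificate, key, and CSR
puppet ssl clean
# On the primary server (CA): revoke and remove the agent's certificate
puppetserver ca clean --certname agent01.example.com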

How to create a DigitalOcean service endpoint connection when using the DigitalOcean Tools task in Azure DevOps

I am trying to upload a build APK file to DigitalOcean using Azure DevOps.
In Azure DevOps, there is a task called DigitalOcean Tools that can upload files to DigitalOcean. Below is the link for your reference:
https://marketplace.visualstudio.com/items?itemName=marcelo-formentao.digitalocean-tools&ssr=false#overview
I installed that task in my organization.
First, it asks you to create a DigitalOcean connection using a service endpoint in Azure DevOps.
I searched the service endpoints in Azure DevOps but didn't find a service connection for DigitalOcean (I did find GitLab, SSH, Azure, etc.).
My question is: which service connector do I need to use for DigitalOcean?
Please help me with this.
DigitalOcean Connection: it's based on the AWS configuration (only an Access Key ID and Secret Key ID are required).
You can choose the AWS connection type to create a DigitalOcean service endpoint connection in Azure DevOps, and fill in the Access Key ID and Secret Key ID that you get from DigitalOcean.
UPDATE
If you don't find the AWS endpoint connection type, install the AWS Toolkit for Azure DevOps extension from the Marketplace: AWS Toolkit for Azure DevOps.
After installing the extension, you will see the AWS connection type in your service connections. I tested this in my DevOps organization and it works.
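The reason AWS-style credentials work is that DigitalOcean Spaces exposes an S3-compatible API. A quick sketch outside of Azure DevOps, with a placeholder Space name and region endpoint:

# Spaces accepts standard S3 tooling; only the endpoint URL differs.
export AWS_ACCESS_KEY_ID=<your-spaces-access-key>
export AWS_SECRET_ACCESS_KEY=<your-spaces-secret-key>
aws s3 cp ./app-release.apk s3://my-space/builds/app-release.apk \
  --endpoint-url https://nyc3.digitaloceanspaces.com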

Kubernetes service connections in Azure DevOps with an AAD-bound AKS cluster

Will Kubernetes service connections in Azure DevOps work with an AKS cluster that is bound to AAD via OpenID Connect? Logging into such clusters goes through an OpenID Connect flow that involves a device login and a browser. How is this possible with Azure DevOps Kubernetes service connections?
Will Kubernetes service connections in Azure DevOps work with an AKS cluster that is bound to AAD via OpenID Connect?
Unfortunately, no, this is not supported at the moment.
Based on your description, what you want to connect to with the Azure DevOps Kubernetes service connection is Azure Kubernetes Service. This means you would select Azure Subscription under Choose authentication. But this connection method authenticates with a service principal, which is not yet supported for an AKS cluster bound to AAD auth.
If you connect to your AKS cluster as part of your CI/CD deployment in Azure DevOps and attempt to get the cluster credentials, you get a warning telling you to log in, since the service principal cannot handle it:
WARNING: To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code *** to authenticate.
You should be familiar with this message: it requires you to open a browser and complete the device code authentication manually. That cannot be done in Azure DevOps.
There is a feature request on the Developer Community forum asking to support non-interactive login for AAD-integrated clusters. You can vote and comment there to raise the priority of this suggestion so it can be considered for the development plan.
Although this cannot be achieved directly, there are two workarounds you can use for now.
The first workaround is to change what Azure DevOps authenticates as, from the AAD client to the admin (server) credentials.
Use the az aks get-credentials command with the --admin parameter. This bypasses the Azure AD auth by retrieving the admin credentials, which work without Azure AD.
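For example (the resource group and cluster name are placeholders):

# Fetch the admin kubeconfig, which does not go through AAD
az aks get-credentials --resource-group my-rg --name my-aks-cluster --admin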
However, I do not recommend this method, because it bypasses the authentication rules set in AAD for security. If you want a quick way to achieve what you want and are not too worried about security, you can try it.
The second workaround is to use a Kubernetes service account.
You can follow this doc to create a service account; then, in Azure DevOps, use that service account to communicate with the AKS API. You also need to take the authorized IP address ranges in AKS into account.
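A minimal sketch of that setup, assuming a dedicated 'service-accounts' namespace and a service account named 'azure-devops' (the names and the cluster-admin role are illustrative; scope the role down in real use):

kubectl create namespace service-accounts
kubectl create serviceaccount azure-devops -n service-accounts
# cluster-admin is used here for brevity only; grant a narrower role in practice
kubectl create clusterrolebinding azure-devops-binding \
  --clusterrole=cluster-admin --serviceaccount=service-accounts:azure-devops
# On Kubernetes 1.24+ token secrets are no longer auto-created; issue one with:
kubectl create token azure-devops -n service-accounts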
After the service account is created successfully, choose Service account in the Azure DevOps service connection:
Server URL: get it from the AKS instance (the API server address) in the Azure portal, and don't forget to prepend https:// when you enter it into the service connection.
Secret: retrieve it with:
kubectl get secret <secret-name> -n service-accounts -o yaml
See this doc: Deploy Vault on Azure Kubernetes Service (AKS).
Then you can use this service connection in your Azure DevOps tasks.

How do I connect to a secure cluster from YAML pipeline?

That's it. Plain and simple.
The first step in my pipeline is to remove services that are no longer supported. To do that I need to use Connect-ServiceFabricCluster to connect to the cluster. But that requires a certificate installed on the local machine. I won't have a local machine in a hosted pipeline and I have a problem with installing the certificate on the hosted VM for security reasons.
So how do I connect?
1.
I don't know if you tried the Service Fabric CLI command sfctl cluster select, which allows you to specify a certificate; check here for more information.
To use the certificate in your pipeline, go to Library under Pipelines, click Secure files, and add your certificate from your local machine. Make sure Authorize for use in all pipelines is checked when you add it.
Then you can add a Download secure file task to download your certificate in your pipeline.
You can then consume it in the next task by referring to the download location "$(Agent.TempDirectory)\yourcertificatefilename"; check here for more information:
sfctl cluster select --endpoint https://testsecurecluster.com:19080 --cert "$(Agent.TempDirectory)\yourcertificatefilename" --key ./keyfile.key
2.
If sfctl cluster select above is not working, you can install the already-uploaded certificate on the hosted agent with a PowerShell task:
Import-Certificate -FilePath "$(Agent.TempDirectory)\yourcertificatefilename" -CertStoreLocation cert:\LocalMachine\Root
3.
If the hosted agent raises security concerns, you can create your own self-hosted agent on your local machine and install the certificate on that on-premises agent.
To create a self-hosted agent:
You need to get a PAT scoped to Agent Pools; click here for detailed steps. You will need the PAT to configure your self-hosted agent later.
Then go to Project settings, select Agent pools under Pipelines, create a self-defined agent pool if you don't have one, then select your agent pool, click New agent, and follow the steps to create your own agent.
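Once downloaded, configuring the agent from the command line looks roughly like this (the organization URL, pool, and agent names are placeholders):

# Unattended configuration of a self-hosted Linux agent using a PAT
./config.sh --unattended \
  --url https://dev.azure.com/myorg \
  --auth pat --token "$AZP_TOKEN" \
  --pool MyPool --agent my-agent
# Start the agent
./run.sh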
Hope the above is helpful to you!

How to debug issues with certificates in IBM Cloud Kubernetes Service /Certificate Manager?

I have a paid cluster with the IBM Cloud Kubernetes Service and a container / service deployed. I have a valid wildcard certificate which I imported into the Certificate Manager. Now I want to apply or deploy that certificate to my cluster:
bx cs alb-cert-deploy --secret-name henrik-xxxx --cluster henrik-bla-bla --cert-crn crn:v1:bluemix:public:cloudcerts:us-south:a/lotsofnumbers:certificate:morenumbers
The above command returns without an error. But when I check the certificate deployment with alb-cert-get, it reports "create_failed". I looked at the troubleshooting guide and tried to update and remove the certificate and secret. However, it seems the secret is still around, and I cannot really remove it.
Are there command options I can use to get more diagnostic data? Any logs I can see? Any command I can use to clean up the environment?
There are several ways to debug the issue:
Use export BLUEMIX_TRACE=true; bx cs alb-cert-deploy ... to trace the command.
Use the Activity Tracker service and account-level events.
In my case I could see the following in the Activity Tracker logs:
"responseData_str": "{\"code\":\"IAMERR403-01\",\"message\":\"Forbidden\"}",
It was part of an event related to:
"action_str": "cloudcerts.certificate.read",
"target": {
"name_str": "cloudcerts",
"id_str": "crn:v1:bluemix:public:cloudcerts:us-south:a/lotsofnumberhere::",
"typeURI_str": "certificate/read"
},
This points to an authorization issue.
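The usual fix for an IAMERR403 like this is to grant the source service read access to Certificate Manager via an IAM authorization policy. A hedged sketch using the current CLI name (the older bx CLI is an alias; verify the exact service names in your account):

# Allow the Kubernetes Service to read certificates from Certificate Manager;
# service names are assumptions based on the CRN shown above.
ibmcloud iam authorization-policy-create containers-kubernetes cloudcerts Reader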