Kubernetes Error: devicelogin is not supported if interactiveMode is 'never'

Trying to deploy an application to an AKS private cluster via an Azure DevOps service connection (SPN).
I have been trying to use the built-in Kubernetes@1 task on Azure DevOps and got the following error:
Error: devicelogin is not supported if interactiveMode is 'never'
Unable to connect to the server: getting credentials: exec: executable kubelogin failed with exit code 1.
I tried the non-interactive service principal login flow described at https://github.com/Azure/kubelogin#service-principal-login-flow-non-interactive,
but without success.
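For reference, that flow run by hand looks roughly like this (resource group, cluster name, and service principal values are placeholders):
az aks get-credentials --resource-group <rg> --name <cluster>
kubelogin convert-kubeconfig -l spn    # switch the kubeconfig to service principal login
export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<appId>
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<password>
kubectl get nodes                      # should authenticate without a device-code prompt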
Kubernetes@1 has no issue if I set useClusterAdmin to true, but we are strictly not allowed to use cluster admin credentials.
I expect Kubernetes@1 to be able to deploy to (or create resources in) the AKS private cluster.
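For reference, a minimal sketch of the task with the non-interactive login wired in. The service connection, resource group, cluster name, and the spnClientId/spnClientSecret pipeline variables are placeholders; the AAD_SERVICE_PRINCIPAL_* variables are the ones kubelogin's spn mode reads:
steps:
- task: Kubernetes@1
  displayName: 'kubectl apply (non-interactive SPN)'
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscriptionEndpoint: '<service-connection>'
    azureResourceGroup: '<resource-group>'
    kubernetesCluster: '<aks-cluster>'
    useClusterAdmin: false
    command: apply
    arguments: '-f manifests/'
  env:
    # kubelogin's spn login reads the service principal from these variables,
    # assuming the task passes its environment through to kubectl/kubelogin
    AAD_SERVICE_PRINCIPAL_CLIENT_ID: $(spnClientId)
    AAD_SERVICE_PRINCIPAL_CLIENT_SECRET: $(spnClientSecret)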

Related

How to fix error with GitLab runner inside Kubernetes cluster - try setting KUBERNETES_MASTER environment variable

I have set up two VMs that I am using throughout my journey of educating myself in CI/CD, GitLab, Kubernetes, cloud computing in general, and so on. Both VMs run Ubuntu 22.04 Server as the host OS.
VM1 - MicroK8s Kubernetes cluster
Most of the setup is "default". Since I'm not really that knowledgeable, I have only configured two pods and their respective services: one with PostGIS and the other with GeoServer. My intent is to add a third pod, the deployment of an app that I have in VM2, which will communicate with the GeoServer in order to provide a simple map web service (Leaflet + Django). All pods are exposed both within the cluster via internal IPs and externally (externalIp).
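(For illustration, the exposure described above corresponds to a Service spec along these lines; the name, ports, and address are placeholders:)
apiVersion: v1
kind: Service
metadata:
  name: geoserver
spec:
  selector:
    app: geoserver
  ports:
  - port: 8080         # cluster-internal port
    targetPort: 8080
  externalIPs:
  - 192.168.1.10       # an address on the VM, making the service reachable from outside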
I have also installed two GitLab-related components here:
GitLab Runner with Kubernetes as executor
GitLab Kubernetes Agent
In VM2 both are visible as connected.
VM2 - GitLab
Here is where GitLab (default installation, latest version) runs. In the configuration (/etc/gitlab/gitlab.rb) I have enabled the agent server.
Initially I had the runner in VM1 configured with Docker as the executor. I had no issues with that. However, I then thought it would be nice to try running the runner inside the cluster so that everything is encapsulated (using the internal cluster IPs without further configuration and without exposing the VM's operating system).
Both the runner and the agent show as connected, but running a pseudo-CI/CD pipeline (the sample one provided by GitLab, with build, test, and deploy stages, each consisting of a simple echo and a short wait) returns the following error:
Running with gitlab-runner 15.8.2 (4d1ca121)
on testcluster-k8s-runner Hko2pDKZ, system ID: s_072d6d140cfe
Preparing the "kubernetes" executor
Using Kubernetes namespace: gitlab-runner
ERROR: Preparation failed: getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Will be retried in 3s ...
Using Kubernetes namespace: gitlab-runner
ERROR: Preparation failed: getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Will be retried in 3s ...
Using Kubernetes namespace: gitlab-runner
ERROR: Preparation failed: getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Will be retried in 3s ...
ERROR: Job failed (system failure): getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
I am unable to find any information regarding KUBERNETES_MASTER except in issue tickets (GitLab) and questions (SO and other Q&A platforms). I have no idea what it is or where to set it. My guess would be that it belongs in the runner's configuration on VM1, or at least in the environment of the gitlab-runner user (the user that owns the runner's userspace with its respective /home/gitlab-runner directory).
The only possible solution I have found so far is to copy the .kube directory from the user that uses kubectl (in my case microk8s kubectl, since I use MicroK8s) to the home directory of the GitLab runner. I didn't see anything special in this directory (no hidden files) except for a cache subdirectory, hence my decision to simply create it at /home/gitlab-runner/.kube, which didn't change a thing.
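(For what it's worth, an empty .kube directory is not enough: the executor looks for an actual kubeconfig, or for in-cluster service account credentials. A sketch of providing one, assuming the runner process itself runs on VM1 next to MicroK8s, with only the jobs running as pods:)
sudo mkdir -p /home/gitlab-runner/.kube
sudo microk8s config | sudo tee /home/gitlab-runner/.kube/config > /dev/null   # MicroK8s prints its kubeconfig
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/.kube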

Creating a Jenkins X Kubernetes cluster with GKE throws exception: secrets "jenkins" not found

When I try to create a Jenkins X Kubernetes cluster with GKE using this command:
jx create cluster gke --skip-login
The following exception is thrown at the end of the installation:
error creating cluster configuring Jenkins: creating Jenkins API token: after 3 attempts, last error: creating Jenkins Auth configuration: secrets "jenkins" not found
During installation I select the default settings and provide my own GitHub settings, including a generated personal access token, but I don't think the GitHub token is the issue in this case (I'm pretty sure all my GitHub settings are correct).
The problem was solved by using the --tekton flag:
jx create cluster gke --skip-login --tekton

az aks update-credentials - How to ignore Resource Not Found errors

When trying to update an AKS cluster with new service principal credentials using the az aks update-credentials command, it is blocked by a "Resource Not Found" error (for a Microsoft.OperationalInsights resource).
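For context, the invocation in question looks like this (resource group, cluster, and credential values are placeholders):
az aks update-credentials \
  --resource-group <rg> \
  --name <cluster> \
  --reset-service-principal \
  --service-principal <app-id> \
  --client-secret <secret>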
Is there a way to run this command while ignoring such resource-not-found errors?
Thanks

Helm Azure Devops fails with Broken Pipe error

I have a deployment pipeline in Azure DevOps to deploy a chart to my Kubernetes cluster. I'm using the built-in Helm tasks to:
Install Helm Client
Create Tiller
Deploy my chart that has been dropped by a separate build task
My Helm upgrade YAML (step 3) is as follows:
steps:
- task: HelmDeploy@0
  displayName: 'helm upgrade'
  inputs:
    azureSubscription: '****'
    azureResourceGroup: '****'
    kubernetesCluster: ****
    command: upgrade
    chartType: FilePath
    chartPath: '$(System.DefaultWorkingDirectory)/_Helm Chart Package/charts/****.tgz'
    releaseName: ****
    waitForExecution: false
    enableTls: true
    caCert: '****'
    certificate: '****'
    privatekey: '****'
Note that "Install if not present" is checked, although I don't see how that is represented in the YAML.
It works sometimes, but most of the time I get the following exception:
3627 portforward.go:363] error copying from remote stream to local
connection: readfrom tcp4 127.0.0.1:33429->127.0.0.1:39710: write tcp4
127.0.0.1:33429->127.0.0.1:39710: write: broken pipe
This always happens after my charts have been deployed successfully. I tried removing the --wait param, but that did not help and the task still fails, causing my deployment pipeline to fail. There is a known issue for this on the Helm GitHub, but is there a way to get this error to not fail my task and, as a result, my deployment pipeline?
You can select "Continue on error" when configuring the task (under Control Options), which will do just that: continue running after an error.
Although, I have to admit, I don't see that error at all; I sometimes see this kind of error when I create a Helm release right after creating AKS, while AKS is still bringing up system pods, so under load.
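In YAML, that checkbox maps to the continueOnError control option on the task; a minimal sketch (remaining inputs as in the question):
steps:
- task: HelmDeploy@0
  displayName: 'helm upgrade'
  continueOnError: true    # the run reports 'succeeded with issues' instead of failing
  inputs:
    command: upgrade
    # ... other inputs unchanged ...
Keep in mind this masks every failure of the task, not just the post-deployment port-forward error.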

GitLab Pipeline fails connecting to Kubernetes

When I execute the pipeline job as the video shows, it fails and gives this message:
ERROR: Preparation failed: error connecting to Kubernetes: invalid configuration: no configuration has been provided
Is this intended? Have I missed any configuration?
Kubernetes is configured for my runner and for the project I am working on, and I haven't seen any other configuration to set.