Issues running kubectl from Jenkins - kubernetes

I have deployed Jenkins on the Kubernetes cluster using a Helm chart, following this guide:
https://octopus.com/blog/jenkins-helm-install-guide
I have the pods and services running in the cluster. I was trying to create a pipeline to run some kubectl commands, but it is failing with the error below:
java.io.IOException: error=2, No such file or directory
Caused: java.io.IOException: Cannot run program "kubectl": error=2, No such file or directory
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1128)
I thought it had something to do with the Kubernetes CLI plugin for Jenkins and raised an issue here:
https://github.com/jenkinsci/kubernetes-cli-plugin/issues/108
I have been advised to install kubectl inside the Jenkins pod.
I already have the Jenkins pod running (deployed using the Helm chart). I have seen options to include the kubectl binary as part of the Dockerfile, but since I used the Helm chart, I'm not sure I have the luxury of editing and redeploying the pod to add kubectl.
Can you please help with your inputs to resolve this? Are there any steps/documentation that explain how to install kubectl on a running pod? I'd really appreciate your inputs, as this issue has stopped one of my critical projects. Thanks in advance.
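For reference, the kind of install step being suggested looks roughly like this - a minimal sketch, assuming the container user can write to /usr/local/bin; the pod name, namespace, and kubectl version are placeholders:
kubectl exec -it jenkins-0 -n jenkins -- bash   # pod name/namespace are placeholders
# then, inside the pod, download a kubectl binary and put it on the PATH:
curl -LO "https://dl.k8s.io/release/v1.23.0/bin/linux/amd64/kubectl"
chmod +x kubectl && mv kubectl /usr/local/bin/kubectl
Note that anything installed this way is lost when the pod restarts, which is why baking kubectl into the image (or using an agent container that already ships it) is generally preferred.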
I tried setting the role binding for the Jenkins service account as mentioned here:
Kubernetes commands are not running inside the Jenkins container
I haven't installed kubectl inside the pod yet. Please help.
Jenkins pipeline:
kubeconfig(credentialsId: 'kube-config', serverUrl: '') {
    sh 'kubectl get all --all-namespaces'
}
(attached the pod/service details for Jenkins)

Related

Kubernetes Python client utils.create_from_yaml raises Create Error when deploying Argo workflow

I need to deploy the Argo workflow using the Kubernetes Python SDK.
My code is like this, where 'quick-start-postgres.yaml' is the official deployment YAML file:
argo_yaml = 'quick-start-postgres.yaml'
res = utils.create_from_yaml(kube.api_client, argo_yaml, verbose=True, namespace="argo")
I tried to create the argo-server pod, the postgres pod, etc. I finally created the services and pods successfully, except for the argo-server, and there is also an error, shown in the attached screenshot.
I am not clear about what happened, so can anybody give me some help? Thanks!

Can we run Sonobuoy for k8s conformance on a Rancher cluster?

We set up a Rancher cluster with 3 nodes for testing, and I would like to apply for k8s conformance using this cluster. However, running sonobuoy returns this error:
ERRO[0000] could not create sonobuoy client: failed to get rest config: invalid configuration: no configuration has been provided
It seems like Rancher does not have any Kubernetes binaries built in (kubectl, kubeadm, etc.). May I know if it is possible to achieve k8s conformance on a Rancher cluster?
You should have the Kubernetes cluster's kubeconfig locally where you are running Sonobuoy.
From the Rancher documentation, How to Manage Kubernetes With Kubectl:
RKE:
When you create a Kubernetes cluster with RKE, RKE creates a kube_config_rancher-cluster.yml file in the local directory that contains credentials to connect to your new cluster with tools like kubectl. You can copy this file to $HOME/.kube/config or, if you are working with multiple Kubernetes clusters…
Rancher-Managed Kubernetes Clusters:
Within Rancher, you can download a kubeconfig file through the web UI and use it to connect to your Kubernetes environment with kubectl.
From the Rancher UI, click on the cluster you would like to connect to via kubectl. On the top right-hand side of the page, click the Kubeconfig File button: click on the button for a detailed look at your config file as well as directions to place it in ~/.kube/config.
Upon copying your configuration to ~/.kube/config, you will be able to run kubectl commands without having to specify the --kubeconfig file location.
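In practice, either of the following works (the file path is a placeholder):
cp kube_config_rancher-cluster.yml ~/.kube/config
# or keep the file where it is and point your tools at it:
export KUBECONFIG=/path/to/kube_config_rancher-cluster.yml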
Check First launch with sonobuoy requests for a configuration - it may be useful for you.
Also, look here: Conformance tests for Rancher 2.x Kubernetes.
Run Conformance Test
Once your Rancher Kubernetes cluster is active, fetch its kubeconfig.yml file and save it locally.
Download a sonobuoy binary release of the CLI, or build it yourself by running:
$ go get -u -v github.com/heptio/sonobuoy
Configure your kubeconfig file by running:
$ export KUBECONFIG="/path/to/your/cluster/kubeconfig.yml"
Run sonobuoy:
$ sonobuoy run
Watch the logs:
$ sonobuoy logs
Check the status:
$ sonobuoy status
Once the status command shows the run as completed, you can download the results tar.gz file:
$ sonobuoy retrieve
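As a small addition: sonobuoy retrieve prints the path of the downloaded tarball, so you can unpack it for inspection along these lines:
$ results=$(sonobuoy retrieve)   # retrieve prints the tarball filename
$ mkdir -p results && tar xzf "$results" -C results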

Is it possible to use the Cloud Code extension in VS Code to deploy Kubernetes pods on a non-GKE cluster?

This is my very first post here, and I'm looking for some advice, please.
I am learning Kubernetes and trying to get the Cloud Code extension to deploy Kubernetes manifests on a non-GKE cluster. The Guestbook app can be deployed using the Cloud Code extension to a local K8s cluster (such as Minikube or Docker for Desktop).
I have two other K8s clusters, as below, and I cannot deploy manifests via Cloud Code. I am not entirely sure whether this is supposed to work, as I couldn't find any docs or posts on this. Once the GCP free trial is finished, I want to deploy my test apps on our local on-prem K8s clusters via Cloud Code.
3-node cluster running on CentOS VMs (built using kubeadm)
6-node cluster on GCP running on Ubuntu machines (free trial, built using the Hightower way)
Skaffold is installed locally on my Mac, and my local $HOME/.kube/config has contexts and users set to access all 3 clusters.
➜ guestbook-1 kubectl config get-contexts
CURRENT   NAME                          CLUSTER                   AUTHINFO           NAMESPACE
          docker-desktop                docker-desktop            docker-desktop
*         kubernetes-admin@kubernetes   kubernetes                kubernetes-admin
          kubernetes-the-hard-way       kubernetes-the-hard-way   admin
Error:
Running: skaffold dev -v info --port-forward --rpc-http-port 57337 --filename /Users/testuser/Desktop/Cloud-Code-Builds/guestbook-1/skaffold.yaml -p cloudbuild --default-repo gcr.io/gcptrial-project
starting gRPC server on port 50051
starting gRPC HTTP server on port 57337
Skaffold &{Version:v1.19.0 ConfigVersion:skaffold/v2beta11 GitVersion: GitCommit:63949e28f40deed44c8f3c793b332191f2ef94e4 GitTreeState:dirty BuildDate:2021-01-28T17:29:26Z GoVersion:go1.14.2 Compiler:gc Platform:darwin/amd64}
applying profile: cloudbuild
no values found in profile for field TagPolicy, using original config values
Using kubectl context: kubernetes-admin@kubernetes
Loaded Skaffold defaults from "/Users/testuser/.skaffold/config"
Listing files to watch...
- python-guestbook-backend
watching files for artifact "python-guestbook-backend": listing files: unable to evaluate build args: reading dockerfile: open /Users/adminuser/Desktop/Cloud-Code-Builds/src/backend/Dockerfile: no such file or directory
Exited with code 1.
skaffold config file skaffold.yaml not found - check your current working directory, or try running `skaffold init`
I have the Docker and Skaffold files in the path shown in the image, and I have authenticated the Google Cloud SDK in VS Code. Any help, please?
I was able to get this working in the end. What helped in this particular case was removing skaffold.yaml and running skaffold init, which generated a new skaffold.yaml. Cloud Code was then able to deploy pods on both remote clusters. Thanks for all your help.
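For anyone following along, the fix amounts to something like this, run from the project directory (paths taken from the question; details may differ in your setup):
cd /Users/testuser/Desktop/Cloud-Code-Builds/guestbook-1
rm skaffold.yaml    # drop the stale config
skaffold init       # regenerates skaffold.yaml from the sources it detects
skaffold dev --default-repo gcr.io/gcptrial-project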

Error when installing Spinnaker on an on-prem Kubernetes cluster

I'm trying to install Spinnaker on an on-prem Kubernetes setup.
Following instructions from https://www.spinnaker.io/setup/
Install and run Halyard as a Docker container on the Kubernetes master.
Run everything as root.
mkdir ~/.hal on the Kubernetes master. Created the service account as instructed on the site.
Copied the kubeconfig file from ~/.kube/config into ~/.hal/kubeconfig, as it didn't work with the docker -v option; there was some permission issue, so I made it work this way.
Ran the docker run halyard command - all up and running fine.
Ran bash inside Halyard.
Now, when I do these two things inside Halyard (sketched below):
Point kubectl to the kubeconfig with an export KUBECONFIG command.
Enable the Kubernetes provider: hal config provider kubernetes enable
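Concretely, that amounts to something like this (assuming the kubeconfig was copied to ~/.hal/kubeconfig, as described above):
export KUBECONFIG=~/.hal/kubeconfig
hal config provider kubernetes enable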
These commands sometimes execute successfully, and sometimes fail with this warning after a timeout error:
Getting object contents of versions.yml
Unexpected error comparing versions: com.netflix.spinnaker.halyard.core.error.v1.HalException: Could not load "versions.yml" from config bucket: www.googleapis.com.*
Even if it somehow manages to run successfully, when I run these:
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add my-k8s-account --context $CONTEXT
It fails with the same error as above.
Totally weird. It's intermittent. Does it have something to do with the kubeconfig file? Any pointers or help would be greatly appreciated.
Thanks.
As noted in the comments, these kinds of errors can result from a lack of network connectivity inside the container.
As Vikram mentioned in his comment:
Yes, that was the problem. Azure support recommended installing a CNI plugin and it resolved the issue. So it seems that inside an Azure VM without a public IP, the CNI plugin is needed for the VM to connect to the internet.
To configure the CNI plugin on the Azure platform, use this guide.
Hope it helps.

Kubernetes: create a cluster with logging and monitoring on Ubuntu

I'm setting up a Kubernetes cluster on DigitalOcean Ubuntu machines. I got the cluster up and running by following this getting started guide for Ubuntu. During the setup, the ENABLE_NODE_LOGGING, ENABLE_CLUSTER_LOGGING and ENABLE_CLUSTER_DNS variables are set to true in config-default.sh.
However, there are no controllers or services created for Elasticsearch/Kibana. I did have to run deployAddon.sh manually for SkyDNS; do I need to do the same for logging and monitoring, or am I missing something in the default configuration?
By default, the logging and monitoring services are not in the default namespace.
You should be able to see if the services are running with kubectl cluster-info.
To look at the individual services/controllers, specify the kube-system namespace:
kubectl get service --namespace=kube-system
By default, logging and monitoring are not enabled if you are installing Kubernetes on Ubuntu machines. It looks like someone copied the config-default.sh script from some other folder, so the variables ENABLE_NODE_LOGGING and ENABLE_CLUSTER_LOGGING are present but are not used to bring up the relevant logging deployments and services.
As @Jon Mumm said, kubectl cluster-info gives you the info. But if you want to install the logging service, go to
kubernetes/cluster/addons/fluentd-elasticsearch
and run
kubectl create -f es-controller.yaml -f es-service.yaml -f kibana-controller.yaml -f kibana-service.yaml
with the right setup. Change the YAML files to suit your configuration and ensure kubectl is in your path.
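To verify, something like the following should show the new pods coming up (assuming the addon lands in kube-system, as the first answer notes):
kubectl get pods --namespace=kube-system | grep -E 'elasticsearch|kibana'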
Update 1: This will bring up the Kibana and Elasticsearch services.