Can we run sonobuoy for k8s conformance on a Rancher cluster - kubernetes

We set up a Rancher cluster with 3 nodes for testing, and I would like to apply for k8s conformance using this Rancher cluster. However, running sonobuoy returns this error:
ERRO[0000] could not create sonobuoy client: failed to get rest config: invalid configuration: no configuration has been provided
It seems like Rancher does not have any Kubernetes binaries built in (kubectl, kubeadm, etc.). May I know if it is possible to run the k8s conformance tests on a Rancher cluster?

You need the Kubernetes cluster's kubeconfig available locally on the machine where you are running sonobuoy.
From the Rancher documentation, How to Manage Kubernetes With Kubectl:
RKE:
When you create a Kubernetes cluster with RKE, RKE creates a
kube_config_rancher-cluster.yml file in the local directory that
contains credentials to connect to your new cluster with tools like
kubectl.
You can copy this file to $HOME/.kube/config or, if you are working
with multiple Kubernetes clusters, keep it elsewhere and point kubectl
at it explicitly.
Rancher-Managed Kubernetes Clusters:
Within Rancher, you can download a kubeconfig file through the web UI
and use it to connect to your Kubernetes environment with kubectl.
From the Rancher UI, click on the cluster you would like to connect to
via kubectl. On the top right-hand side of the page, click the
Kubeconfig File button: Click on the button for a detailed look at
your config file as well as directions to place in ~/.kube/config.
Upon copying your configuration to ~/.kube/config, you will be able to
run kubectl commands without having to specify the --kubeconfig file
location.
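For example, a quick way to confirm the kubeconfig is picked up before running sonobuoy (the kube_config_rancher-cluster.yml file name comes from the RKE documentation quoted above; adjust the path to wherever your file actually is):
$ cp kube_config_rancher-cluster.yml ~/.kube/config
$ kubectl get nodes
If kubectl lists your three nodes, sonobuoy will find the same configuration and the "no configuration has been provided" error should go away.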
Check the question First launch with sonobuoy requests for a configuration - it may be useful for you.
Also take a look at this guide: Conformance tests for Rancher 2.x Kubernetes
Run Conformance Test
Once your Rancher Kubernetes cluster is active, fetch its kubeconfig.yml file and save it locally.
Download a sonobuoy binary release of the CLI, or build it yourself by running:
$ go get -u -v github.com/heptio/sonobuoy
Configure your kubeconfig file by running:
$ export KUBECONFIG="/path/to/your/cluster/kubeconfig.yml"
Run sonobuoy:
$ sonobuoy run
Watch the logs:
$ sonobuoy logs
Check the status:
$ sonobuoy status
Once the status command shows the run as completed, you can download the results tar.gz file:
$ sonobuoy retrieve
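To inspect what was retrieved, unpack the tarball; a rough sketch (the retrieved file name and the exact layout of the e2e results inside the archive vary by sonobuoy version, so treat the paths as an example):
$ mkdir ./results
$ tar xzf <retrieved-file>.tar.gz -C ./results
$ less ./results/plugins/e2e/results/e2e.log
The e2e log and the junit results in that directory are the main artifacts the CNCF conformance submission asks for.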

Related

Is it possible to use cloud code extension in vscode to deploy kubernetes pods on a non-GKE cluster?

This is my very first post here and I am looking for some advice please.
I am learning Kubernetes and trying to get the Cloud Code extension to deploy Kubernetes manifests on a non-GKE cluster. The Guestbook app can be deployed using the Cloud Code extension to a local K8s cluster (such as Minikube or Docker for Desktop).
I have two other K8s clusters, listed below, and I cannot deploy manifests via Cloud Code. I am not entirely sure if this is supposed to work or not, as I couldn't find any docs or posts on this. Once the GCP free trial is finished, I would want to deploy my test apps on our local on-prem K8s clusters via Cloud Code.
3-node cluster running on CentOS VMs (built using kubeadm)
6-node cluster on GCP running on Ubuntu machines (free trial, built the Hightower way)
Skaffold is installed locally on my Mac, and my local $HOME/.kube/config has contexts and users set to access all 3 clusters.
➜ guestbook-1 kubectl config get-contexts
CURRENT   NAME                          CLUSTER                   AUTHINFO           NAMESPACE
          docker-desktop                docker-desktop            docker-desktop
*         kubernetes-admin@kubernetes   kubernetes                kubernetes-admin
          kubernetes-the-hard-way       kubernetes-the-hard-way   admin
Error:
Running: skaffold dev -v info --port-forward --rpc-http-port 57337 --filename /Users/testuser/Desktop/Cloud-Code-Builds/guestbook-1/skaffold.yaml -p cloudbuild --default-repo gcr.io/gcptrial-project
starting gRPC server on port 50051
starting gRPC HTTP server on port 57337
Skaffold &{Version:v1.19.0 ConfigVersion:skaffold/v2beta11 GitVersion: GitCommit:63949e28f40deed44c8f3c793b332191f2ef94e4 GitTreeState:dirty BuildDate:2021-01-28T17:29:26Z GoVersion:go1.14.2 Compiler:gc Platform:darwin/amd64}
applying profile: cloudbuild
no values found in profile for field TagPolicy, using original config values
Using kubectl context: kubernetes-admin@kubernetes
Loaded Skaffold defaults from "/Users/testuser/.skaffold/config"
Listing files to watch...
- python-guestbook-backend
watching files for artifact "python-guestbook-backend": listing files: unable to evaluate build args: reading dockerfile: open /Users/adminuser/Desktop/Cloud-Code-Builds/src/backend/Dockerfile: no such file or directory
Exited with code 1.
skaffold config file skaffold.yaml not found - check your current working directory, or try running `skaffold init`
I have the Dockerfile and skaffold file in the path as shown in the image and have authenticated the Google Cloud SDK in VS Code. Any help please?
I was able to get this working in the end. What helped in this particular case was removing skaffold.yaml and then running skaffold init, which generated a new skaffold.yaml. Cloud Code was then able to deploy pods on both remote clusters. Thanks for all your help.
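For anyone hitting the same thing, roughly the sequence that fixed it (the project path is the one from the log above, used here only as an example):
$ cd /Users/testuser/Desktop/Cloud-Code-Builds/guestbook-1
$ rm skaffold.yaml
$ skaffold init
# skaffold init scans the directory for Dockerfiles and Kubernetes manifests
# and writes a fresh skaffold.yaml whose artifact paths match the actual layout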

Error when installing Spinnaker on Kubernetes on prem cluster

I'm trying to install Spinnaker on a Kubernetes setup on-prem.
Following instructions from https://www.spinnaker.io/setup/
Install and run Halyard as Docker on the Kubernetes master.
Run everything as root
mkdir ~/.hal on the Kube master. Created the service account as instructed on the site.
Copied the kubeconfig file from ./kube/config into ~/.hal/kubeconfig, since it didn't work with the docker -v option (there was some permission issue), so I made it work this way.
Ran the docker run halyard command -- all up and running fine.
Ran bash inside the Halyard container.
Now when I do these two things inside Halyard:
Point kubectl to the kubeconfig by exporting KUBECONFIG
Enable the Kubernetes provider: "hal config provider kubernetes enable"
The command sometimes executes successfully, or it fails after a timeout with this error:
Getting object contents of versions.yml
Unexpected error comparing versions: com.netflix.spinnaker.halyard.core.error.v1.HalException: Could not load "versions.yml" from config bucket: www.googleapis.com.*
Even if it somehow manages to run successfully, when I run these:
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add my-k8s-account --context $CONTEXT
It fails with the same error as above.
Totally weird stuff. It's intermittent. Does it have something to do with the kubeconfig file? Any pointers or help would be greatly appreciated.
Thanks.
As noted in the comments, these kinds of errors can occur when there is a lack of network connectivity from inside the container.
As Vikram mentioned in his comment:
Yes, that was the problem. Azure support recommended installing a CNI plugin and it resolved the issue. So it seems like inside an Azure VM without a public IP, a CNI plugin is needed for the VM to connect to the internet.
To configure the CNI plugin on the Azure platform, use this guide.
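A quick way to check whether the Halyard container has outbound connectivity at all is to curl the host named in the error from inside the container; a rough sketch (the container name "halyard" is an assumption, use whatever name your docker run command gave it):
$ docker exec -it halyard curl -v https://www.googleapis.com
# if this hangs or times out, the container cannot reach the internet and
# Halyard will fail to download versions.yml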
Hope it helps.

Kubernetes cluster not deleting

I am trying to delete the entire Kubernetes cluster that I created for my CI/CD pipeline R&D. To delete the cluster and everything in it, I ran the following commands:
kubectl config delete-cluster <cluster-name>
kubectl config delete-context <Cluster-context>
To make sure that the cluster was deleted, I built the Jenkins pipeline job again, and found that it is still deploying with the updated changes.
When I run the command "kubectl config view", I get the following result:
docker@mildevdcr01:~$ kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: kubernetes-admin@cluster.local
kind: Config
preferences: {}
users: []
docker@mildevdcr01:~$
My Spring Boot microservice is still being deployed to the cluster with the updated changes.
I created the Kubernetes cluster using the Kubespray tool, referenced from GitHub:
https://github.com/kubernetes-incubator/kubespray.git
What do I need to do to delete everything that I created for this Kubernetes cluster? I need to remove everything, including the master node.
If you set up your cluster using Kubespray, you ran the whole installation using Ansible, so you have to use Ansible to delete the cluster too.
But you can also reset the entire cluster for fresh installation:
$ ansible-playbook -i inventory/mycluster/hosts.ini reset.yml
Remember to keep the hosts.ini file updated properly.
You can also remove nodes from your cluster one by one: add the specific node to the [kube-node] section in the inventory/mycluster/hosts.ini file (your hosts file) and run:
$ ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml
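Depending on your Kubespray version, the node to remove is typically selected with an extra variable; a sketch (the node name is a placeholder, check the remove-node documentation for your release):
$ ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -e node=node4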
KubeSpray documentation: kubespray.
Useful articles: kubespray-steps, kubespray-ansible.
Okay, so for a Kubespray CI/CD pipeline it's a little more complicated than just deleting the cluster context. You have to actively delete other items on each node and run reset.yml to reset etcd.
Sometimes just running reset.yml is enough for your pipeline, resetting the cluster back to its initial state, but if that is not enough then you have to delete Docker, the kubelet, repositories, /etc/kubernetes and many other directories on the nodes to get a clean deployment. In this case it's almost always easier to just provision new nodes in your pipeline using Terraform and the vSphere (vRA) API.
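For reference, the manual cleanup usually looks something like this on each node (the exact list of directories depends on your setup and Kubespray version; double-check each path before deleting):
$ sudo systemctl stop kubelet docker
$ sudo rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/etcd /etc/cni /opt/cni
$ sudo systemctl start docker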

kube-apiserver on OpenShift

I'm new to OpenShift and Kubernetes.
I need to access kube-apiserver on an existing OpenShift environment:
oc v3.10.0+0c4577e-1
kubernetes v1.10.0+b81c8f8
How do I know whether kube-apiserver is already installed, or how do I get it installed?
I checked all the containers and there is not even a path /etc/kubernetes/manifests.
Here is the list of Docker processes on all the cluster nodes; could it be hiding behind one of these?
k8s_fluentd-elasticseark8s_POD_logging
k8s_POD_tiller-deploy
k8s_api_master-api-ip-...ec2.internal_kube-system
k8s_etcd_master-etcd-...ec2.internal_kube-system
k8s_POD_master-controllers
k8s_POD_master-api-ip-
k8s_POD_kube-state
k8s_kube-rbac-proxy
k8s_POD_node-exporter
k8s_alertmanager-proxy
k8s_config-reloader
k8s_POD_alertmanager_openshift-monitoring
k8s_POD_prometheus
k8s_POD_cluster-monitoring
k8s_POD_heapster
k8s_POD_prometheus
k8s_POD_webconsole
k8s_openvswitch
k8s_POD_openshift-sdn
k8s_POD_sync
k8s_POD_master-etcd
If you just need to verify that the cluster is up and running, you can simply run oc get nodes, which communicates with the kube-apiserver to retrieve information.
oc config view will show where the kube-apiserver is hosted, under the clusters -> cluster -> server section. On that host machine you can run docker ps to display the running containers, which should include the kube-apiserver.
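Putting that together, a minimal check could look like this (the grep pattern is based on the k8s_api_master-api container name visible in the process list above; adjust it if your naming differs):
$ oc get nodes
$ oc config view | grep server
$ docker ps | grep master-api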

kubernetes create cluster with logging and monitoring for ubuntu

I'm setting up a Kubernetes cluster on DigitalOcean Ubuntu machines. I got the cluster up and running following this getting started guide for Ubuntu. During the setup, the ENABLE_NODE_LOGGING, ENABLE_CLUSTER_LOGGING and ENABLE_CLUSTER_DNS variables are set to true in config-default.sh.
However, there are no controllers or services created for Elasticsearch/Kibana. I did have to run deployAddon.sh manually for SkyDNS; do I need to do the same for logging and monitoring, or am I missing something in the default configuration?
By default the logging and monitoring services are not in the default namespace.
You should be able to see if the services are running with kubectl cluster-info.
To look at the individual services/controllers, specify the kube-system namespace:
kubectl get service --namespace=kube-system
By default, logging and monitoring are not enabled if you are installing Kubernetes on Ubuntu machines. It looks like someone copied the config-default.sh script from some other folder, so the ENABLE_NODE_LOGGING and ENABLE_CLUSTER_LOGGING variables are present but are not used to bring up the relevant logging deployments and services.
As @Jon Mumm said, kubectl cluster-info gives you the info. But if you want to install the logging service, go to
kubernetes/cluster/addons/fluentd-elasticsearch
and run
kubectl create -f es-controller.yaml -f es-service.yaml -f kibana-controller.yaml -f kibana-service.yaml
with the right setup. Change the YAML files to suit your configuration and ensure kubectl is in your path.
Update 1: This will bring up the Kibana and Logstash services.
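Once the addon objects are created, you can verify they are running with something like this (assuming they land in the kube-system namespace, as noted in the other answer):
$ kubectl get pods --namespace=kube-system | grep -E 'elasticsearch|kibana'
$ kubectl get services --namespace=kube-system | grep -E 'elasticsearch|kibana'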