Error when installing Spinnaker on an on-prem Kubernetes cluster - kubernetes

I'm trying to install Spinnaker on an on-prem Kubernetes setup.
Following the instructions from https://www.spinnaker.io/setup/:
Install and run Halyard in Docker on the Kubernetes master.
Run everything as root.
mkdir ~/.hal on the Kubemaster. Created the service account as instructed on the site.
Copied the kubeconfig file from ~/.kube/config into ~/.hal/kubeconfig, as it didn't work with the docker -v option; there was some permission issue, so I made it work this way.
Ran the docker run halyard command -- all up and running fine.
Ran bash inside the halyard container.
Now when I do these two things inside halyard:
Point kubectl to the kubeconfig via the export KUBECONFIG command.
Enable the Kubernetes provider: hal config provider kubernetes enable
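For reference, that sequence roughly corresponds to the commands below (a sketch; the image name and the in-container kubeconfig path are assumptions based on the Spinnaker docs of that era, not details from the question):
$ docker run -d --name halyard -v ~/.hal:/home/spinnaker/.hal gcr.io/spinnaker-marketplace/halyard:stable
$ docker exec -it halyard bash
$ export KUBECONFIG=/home/spinnaker/.hal/kubeconfig
$ hal config provider kubernetes enable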
The command sometimes executes successfully, and sometimes fails with this warning after a timeout:
Getting object contents of versions.yml
Unexpected error comparing versions: com.netflix.spinnaker.halyard.core.error.v1.HalException: Could not load "versions.yml" from config bucket: www.googleapis.com.*
Even when it somehow manages to run successfully, when I then run these:
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add my-k8s-account --context $CONTEXT
It fails with the same error as above.
Totally weird stuff; it's intermittent. Does it have something to do with the kubeconfig file? Any pointers or help would be greatly appreciated.
Thanks.

As noted in the comments, these kinds of errors can result from a lack of network connectivity inside the container.
As Vikram mentioned in his comment:
Yes, that was the problem. Azure support recommended installing a CNI plugin, and that resolved the issue. So it seems that inside an Azure VM without a public IP, the CNI plugin is needed for the VM to connect to the internet.
To configure the CNI plugin on the Azure platform, use this guide.
Hope it helps.
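If you hit the same symptom, a quick way to confirm it is connectivity rather than the kubeconfig is to curl the host from the error message from inside the Halyard container (a minimal sketch; --max-time just stops the check from hanging indefinitely):
$ docker exec -it halyard bash
$ curl -v --max-time 10 https://www.googleapis.com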

Related

Can we run sonobuoy for k8s conformance on a Rancher cluster

We set up a Rancher cluster with 3 nodes for testing, and I would like to apply for k8s conformance using this Rancher cluster. However, running sonobuoy returns this error:
ERRO[0000] could not create sonobuoy client: failed to get rest config: invalid configuration: no configuration has been provided
It seems like Rancher does not have any Kubernetes binaries built in (kubectl, kubeadm, etc.). May I know if it is possible to run the k8s conformance tests on a Rancher cluster?
You should have the Kubernetes cluster's kubeconfig locally, where you are running sonobuoy.
From the Rancher documentation, How to Manage Kubernetes With Kubectl:
RKE:
When you create a Kubernetes cluster with RKE, RKE creates a
kube_config_rancher-cluster.yml file in the local directory that
contains credentials to connect to your new cluster with tools like
kubectl.
You can copy this file to $HOME/.kube/config or, if you are working
with multiple Kubernetes clusters, set the KUBECONFIG environment
variable to the path of the file.
Rancher-Managed Kubernetes Clusters:
Within Rancher, you can download a kubeconfig file through the web UI
and use it to connect to your Kubernetes environment with kubectl.
From the Rancher UI, click on the cluster you would like to connect to
via kubectl. On the top right-hand side of the page, click the
Kubeconfig File button: Click on the button for a detailed look at
your config file as well as directions to place it in ~/.kube/config.
Upon copying your configuration to ~/.kube/config, you will be able to
run kubectl commands without having to specify the --kubeconfig file
location:
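For the multiple-clusters case mentioned above, kubectl can also merge several kubeconfig files through a colon-separated KUBECONFIG variable; a minimal sketch, with the file name taken from the RKE example above and the context name a placeholder:
$ export KUBECONFIG=$HOME/.kube/config:$PWD/kube_config_rancher-cluster.yml
$ kubectl config get-contexts
$ kubectl config use-context <rancher-context-name>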
Check First launch with sonobuoy requests for a configuration; it may be useful for you.
Also, look here: Conformance tests for Rancher 2.x Kubernetes.
Run Conformance Test
Once your Rancher Kubernetes cluster is active, fetch its kubeconfig.yml file and save it locally.
Download a sonobuoy binary release of the CLI, or build it yourself by running:
$ go get -u -v github.com/heptio/sonobuoy
Configure your kubeconfig file by running:
$ export KUBECONFIG="/path/to/your/cluster/kubeconfig.yml"
Run sonobuoy:
$ sonobuoy run
Watch the logs:
$ sonobuoy logs
Check the status:
$ sonobuoy status
Once the status command shows the run as completed, you can download the results tar.gz file:
$ sonobuoy retrieve
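Once retrieved, you can unpack the tarball to inspect the conformance results; a sketch, noting that the layout inside the archive can vary between sonobuoy versions:
$ results=$(sonobuoy retrieve)
$ mkdir -p ./results && tar xzf "$results" -C ./results
$ less ./results/plugins/e2e/results/e2e.log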

Is it possible to use the Cloud Code extension in VS Code to deploy Kubernetes pods on a non-GKE cluster?

This is my very first post here, and I'm looking for some advice please.
I am learning Kubernetes and trying to get the Cloud Code extension to deploy Kubernetes manifests on a non-GKE cluster. The guestbook app can be deployed using the Cloud Code extension to a local K8s cluster (such as Minikube or Docker for Desktop).
I have two other K8s clusters, listed below, and I cannot deploy manifests to them via Cloud Code. I am not entirely sure whether this is supposed to work, as I couldn't find any docs or posts on it. Once the GCP free trial is finished, I want to deploy my test apps on our local on-prem K8s clusters via Cloud Code.
A 3-node cluster running on CentOS VMs (built using kubeadm)
A 6-node cluster on GCP running on Ubuntu machines (free trial, built following Kelsey Hightower's Kubernetes the Hard Way)
Skaffold is installed locally on my Mac, and my local $HOME/.kube/config has contexts and users set to access all 3 clusters.
$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER                   AUTHINFO           NAMESPACE
          docker-desktop                docker-desktop            docker-desktop
*         kubernetes-admin@kubernetes   kubernetes                kubernetes-admin
          kubernetes-the-hard-way       kubernetes-the-hard-way   admin
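Skaffold (and therefore Cloud Code) deploys to whatever the current kubectl context is, so switching the target cluster is just, e.g.:
$ kubectl config use-context docker-desktop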
Error:
Running: skaffold dev -v info --port-forward --rpc-http-port 57337 --filename /Users/testuser/Desktop/Cloud-Code-Builds/guestbook-1/skaffold.yaml -p cloudbuild --default-repo gcr.io/gcptrial-project
starting gRPC server on port 50051
starting gRPC HTTP server on port 57337
Skaffold &{Version:v1.19.0 ConfigVersion:skaffold/v2beta11 GitVersion: GitCommit:63949e28f40deed44c8f3c793b332191f2ef94e4 GitTreeState:dirty BuildDate:2021-01-28T17:29:26Z GoVersion:go1.14.2 Compiler:gc Platform:darwin/amd64}
applying profile: cloudbuild
no values found in profile for field TagPolicy, using original config values
Using kubectl context: kubernetes-admin@kubernetes
Loaded Skaffold defaults from "/Users/testuser/.skaffold/config"
Listing files to watch...
- python-guestbook-backend
watching files for artifact "python-guestbook-backend": listing files: unable to evaluate build args: reading dockerfile: open /Users/adminuser/Desktop/Cloud-Code-Builds/src/backend/Dockerfile: no such file or directory
Exited with code 1.
skaffold config file skaffold.yaml not found - check your current working directory, or try running `skaffold init`
I have the Dockerfile and skaffold file in the path as shown in the image, and have authenticated the Google SDK in VS Code. Any help please?
I was able to get this working in the end. What helped in this particular case was removing skaffold.yaml and running skaffold init, which generated a new skaffold.yaml. Cloud Code was then able to deploy pods on both remote clusters. Thanks for all your help.
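In other words, from the directory that actually contains the skaffold sources:
$ rm skaffold.yaml
$ skaffold init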

Configure apiserver to use encryption config using minikube

I am trying to configure the kube-apiserver so that it encrypts secrets at rest in my minikube cluster.
For that, I have followed the documentation on kubernetes.io but got stuck at step 3, which says
Set the --encryption-provider-config flag on the kube-apiserver to point to the location of the config file.
I have discovered the option --extra-config on minikube start and have tried starting my setup using
minikube start --extra-config=apiserver.encryption-provider-config=encryptionConf.yaml
but naturally it doesn't work, as encryptionConf.yaml is located on my local file system and not in the pod that's spun up by minikube. The error minikube logs gives me is
error: error opening encryption provider configuration file "encryptionConf.yaml": open encryptionConf.yaml: no such file or directory
What is the best practice to get the encryption configuration file onto the kube-apiserver? Or is minikube perhaps the wrong tool to try out these kinds of things?
I found the solution myself in this GitHub issue, where they had a similar problem passing a configuration file. The comment that helped me was the slightly hacky solution that made use of the fact that the directory /var/lib/localkube/certs/ from the minikube VM is mounted into the apiserver.
So my final solution was to run
minikube mount .:/var/lib/minikube/certs/hack
where in the current directory I had my encryptionConf.yaml and then start minikube like so
minikube start --extra-config=apiserver.encryption-provider-config=/var/lib/minikube/certs/hack/encryptionConf.yaml
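For reference, a minimal encryptionConf.yaml of the shape the Kubernetes docs describe might look like the following (a sketch; the aescbc key is a placeholder you generate yourself, e.g. with head -c 32 /dev/urandom | base64):
$ cat > encryptionConf.yaml <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}
EOF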
Depending on the driver used, some host directories are mounted into your minikube VM.
Check this link - https://kubernetes.io/docs/setup/minikube/#mounted-host-folders
~/.minikube/files is also mounted into the VM at /files, so you can keep your files there and use that path for the API server config.
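So, for example (assuming the file then appears at /files/encryptionConf.yaml inside the VM):
$ cp encryptionConf.yaml ~/.minikube/files/
$ minikube start --extra-config=apiserver.encryption-provider-config=/files/encryptionConf.yaml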
I had similar issues on Windows regarding the file path location.
Since C:\Users\%USERNAME%\ is mounted in the minikube VM by default,
I copied the files to the Desktop folder (any folder under the C drive works) and ran
minikube start --extra-config=apiserver.encryption-provider-config=/c/Users/%USERNAME%/.../<file-name>
Hope this is helpful for folks facing this issue on the Windows platform.

Cachet on Kubernetes APP_KEY Error

I'm trying to run the open-source Cachet status page within Kubernetes via this tutorial: https://medium.com/@ctbeke/setting-up-cachet-on-google-cloud-817e62916d48
Two Docker containers (Cachet/nginx) and Postgres are deployed to a pod on GKE, but the Cachet container fails with a CrashLoopBackOff error.
Within the docker-compose.yml file it's set to APP_KEY=${APP_KEY:-null}, and I'm wondering if I didn't set an environment variable I should have.
Any help with configuring the cachet docker file would be much appreciated! https://github.com/CachetHQ/Docker
Yes, you need to generate a key.
In the entrypoint.sh you can see that the bash script generates a key for you:
https://github.com/CachetHQ/Docker/blob/master/entrypoint.sh#L188-L193
It seems there's a bug in the Dockerfile here. Generate a key manually and then set it as an environment variable in your manifest.
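One way to do that (a sketch; Cachet is a Laravel app, so APP_KEY is a base64-encoded 32-byte value, and the deployment name cachet is an assumption, not something from the tutorial):
$ APP_KEY="base64:$(head -c 32 /dev/urandom | base64)"
$ kubectl set env deployment/cachet APP_KEY="$APP_KEY"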
There's a helm chart you can use in development here: https://github.com/apptio/helmcharts/blob/cachet/devel/cachet/templates/secrets.yaml#L12

How does kubectl connect to the master

I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem.
How is this implemented?
kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl are over the HTTPS connection to the apiserver.
When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at ~/.kube/config.
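You can inspect the configuration kubectl is currently using with, e.g.:
$ kubectl config view
$ kubectl config current-context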
In addition to what Robert said: the connection between your local CLI and the cluster is controlled through kubectl config set; see the docs.
The Getting started with Vagrant section of the docs should contain everything you need.