Cachet on Kubernetes APP_KEY Error - postgresql

I'm trying to run the open-source Cachet status page on Kubernetes, following this tutorial: https://medium.com/@ctbeke/setting-up-cachet-on-google-cloud-817e62916d48
The two Docker containers (cachet/nginx) and Postgres are deployed to a pod on GKE, but the cachet container fails with a CrashLoopBackOff error.
Within the docker-compose.yml file the key is set to APP_KEY=${APP_KEY:-null}, and I'm wondering if I didn't set an environment variable I should have.
Any help with configuring the Cachet Docker file would be much appreciated! https://github.com/CachetHQ/Docker

Yes, you need to generate a key.
In the entrypoint.sh you can see that the bash script generates a key for you:
https://github.com/CachetHQ/Docker/blob/master/entrypoint.sh#L188-L193
It seems there's a bug in the Dockerfile here. Generate a key manually and then set it as an environment variable in your manifest.
There's a helm chart you can use in development here: https://github.com/apptio/helmcharts/blob/cachet/devel/cachet/templates/secrets.yaml#L12
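Cachet is a Laravel application, so its APP_KEY follows Laravel's `base64:`-prefixed format. As a sketch (the exact format is an assumption based on Laravel's `key:generate` convention), you can generate a key with openssl:

```shell
# Generate a Laravel-style application key: 32 random bytes, base64-encoded,
# prefixed with "base64:" so the framework decodes it correctly.
APP_KEY="base64:$(openssl rand -base64 32)"
echo "$APP_KEY"
```

The resulting value can then be set as the APP_KEY environment variable in the container spec of your Kubernetes manifest, or stored in a Secret and referenced from there.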

Related

Container image not seen in GitLab registry

I am trying to push images built with Docker Compose to the GitLab Container Registry. The commands execute successfully, yet I do not see the image in the registry. Pushing an image built from a plain Dockerfile works; the Compose build does not. I searched for similar posts but could not find an answer.
If you are using the Docker executor with the Docker-in-Docker service, the docker-compose command is not available by default and has to be installed. You can see here whether you might be hitting further limitations in your CI/CD configuration when using docker build.
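A minimal sketch of what that could look like in .gitlab-ci.yml, assuming the docker:latest image (Alpine-based, hence apk) and a dind service; the job name is a placeholder:

```yaml
build:
  image: docker:latest
  services:
    - docker:dind
  before_script:
    # docker-compose is not included in the docker image by default
    - apk add --no-cache docker-compose
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  script:
    - docker-compose build
    - docker-compose push
```

Note that docker-compose push only pushes services whose image: keys point at the registry, so the Compose file must tag its images with $CI_REGISTRY_IMAGE.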

Error when installing Spinnaker on Kubernetes on prem cluster

I'm trying to install Spinnaker on an on-prem Kubernetes setup.
Following instructions from https://www.spinnaker.io/setup/
Installed and ran Halyard as a Docker container on the Kubernetes master.
Ran everything as root.
Created ~/.hal on the Kubernetes master and created the service account as instructed on the site.
Copied the kubeconfig file from ~/.kube/config into ~/.hal/kubeconfig; it didn't work with the docker -v option (there was some permission issue), so I made it work this way.
Ran the docker run halyard command; everything came up and ran fine.
Started a bash shell inside the Halyard container.
Now when I do these two things inside Halyard:
Point kubectl to the kubeconfig by exporting KUBECONFIG.
Enable the Kubernetes provider with "hal config provider kubernetes enable".
The command sometimes executes successfully and sometimes fails with the warning below after a timeout error:
Getting object contents of versions.yml
Unexpected error comparing versions: com.netflix.spinnaker.halyard.core.error.v1.HalException: Could not load "versions.yml" from config bucket: www.googleapis.com.*
Even when it somehow manages to run successfully, if I then run:
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add my-k8s-account --context $CONTEXT
It fails with the same error as above.
Totally weird stuff, and it's intermittent. Does it have something to do with the kubeconfig file? Any pointers or help would be greatly appreciated.
Thanks.
As noted in the comments, these kinds of errors can occur when there is no network connectivity from inside the container.
As Vikram mentioned in his comment:
Yes, that was the problem. Azure support recommended installing a CNI plugin, and it resolved the issue. So it seems that inside an Azure VM without a public IP, the CNI plugin is needed for the VM to connect to the internet.
To configure the CNI plugin on the Azure platform, use this guide.
Hope it helps.

Issue in setting up KUBECTL on Windows 10 Home

I am trying to learn Kubernetes, so I installed Minikube on my local Windows 10 Home machine and then tried installing kubectl. However, so far I have been unsuccessful in getting it right.
So this what I have done so far:
Downloaded the kubectl.exe file from https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/windows/amd64/kubectl.exe
Then I added the path of this exe to the PATH environment variable, as shown below:
However, this didn't work when I executed kubectl version in the command prompt or even in PowerShell (in admin mode).
Next I tried using the curl command as given in the docs - https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-with-curl-on-windows
However that too didn't work as shown below:
Upon searching for answers to fix the issue, I stumbled upon a Stack Overflow question which explained how to create a .kube config folder, because it didn't exist on my local machine. I followed the instructions, but that too failed.
So right now I am completely out of ideas and not sure what's the issue here. FYI, I was able to install everything in a breeze on my Mac; Windows is just acting crazy.
Any help would be really helpful.
As user @paltaa mentioned:
did you do a minikube start ? – paltaa 2 days ago
The fact that you did not start minikube is the most probable cause of this error.
Additionally, this error message shows when minikube is stopped, as stopping it changes the current-context inside the config file.
There is no need to create a config file inside a .kube directory, as minikube start will create the appropriate files and directories for you automatically.
If you run the minikube start command successfully, you should get the message below at the end of the configuration process, indicating that kubectl has been set up for minikube automatically:
Done! kubectl is now configured to use "minikube"
Additionally, if you invoke the command $ kubectl config you will get more information about how kubectl looks for configuration files:
The loading order follows these rules:
1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes
place.
2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for
your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When
a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the
last file in the list.
3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.
Please take a special look at the part:
Otherwise, ${HOME}/.kube/config is used
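The three rules above can be sketched as a small shell function (a hypothetical illustration of the lookup order, not actual kubectl code; the merging of multiple KUBECONFIG paths is omitted):

```shell
# Emulates kubectl's config-file resolution order:
# explicit flag > KUBECONFIG env var > per-user default
resolve_kubeconfig() {
  if [ -n "$1" ]; then              # 1. --kubeconfig flag: only that file, no merging
    echo "$1"
  elif [ -n "$KUBECONFIG" ]; then   # 2. KUBECONFIG: a list of paths that get merged
    echo "$KUBECONFIG"
  else                              # 3. default: the per-user config file
    echo "$HOME/.kube/config"
  fi
}

resolve_kubeconfig   # with nothing set, prints the ${HOME}/.kube/config default
```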
Even if you do not set the KUBECONFIG environment variable, kubectl will default to the current user's home directory (for example C:\Users\yoda\.kube\config).
If for some reason your cluster is running and the files got deleted or corrupted, you can run:
minikube stop
minikube start
which will recreate the .kube/config file.
The steps for running minikube on Windows in this case would be:
Download and install minikube using the installer executable (see the install guide on kubernetes.io).
Download, install and configure a hypervisor (for example VirtualBox).
Download kubectl.
OPTIONAL: Add the kubectl directory to the Windows environment variables.
From the command line or PowerShell, as the current user, run: $ minikube start --vm-driver=virtualbox
Wait for the configuration to finish, then invoke a command like $ kubectl get nodes.

Append/Extend LD_LIBRARY_PATH using Kubernetes Source Code

When a pod is being scheduled, I dynamically (and transparently) mount some shared-library folders into the client containers through Kubernetes DevicePlugins. Now, inside the container I want to append these dynamically mounted shared-library paths to the LD_LIBRARY_PATH environment variable.
Inside the container: this can be achieved by running, in bash:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/some/new/directory
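As a side note, a small sketch of the same append that also handles the case where LD_LIBRARY_PATH is initially unset (a plain $LD_LIBRARY_PATH:/dir would otherwise leave a leading colon, which the dynamic linker treats as the current directory):

```shell
# Append only, without introducing an empty (leading-colon) entry;
# ${VAR:+...} expands to "...": only when VAR is set and non-empty
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/some/new/directory"
echo "$LD_LIBRARY_PATH"
```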
From the host: I can add the export command to the pod.yaml file under pod.spec.command and args.
But I want to do it transparently, without the client/admin specifying it in the YAML file, using Kubernetes DevicePlugins or extended schedulers.
I am looking for a method/hack by which I can append/extend LD_LIBRARY_PATH inside the container using only the Kubernetes source code.
Thanks.
You can just bake it into your Dockerfile and create an image that you use in Kubernetes. No need to hack the Kubernetes source code.
Add a line like this to your Dockerfile:
ENV LD_LIBRARY_PATH /extra/path:$LD_LIBRARY_PATH
Then:
docker build -t <your-image-tag> .
docker push <your-image-tag>
Then, update your pod or deployment definition and deploy to Kubernetes.
Hope it helps.
If I understand your issue, all you need is to transparently add LD_LIBRARY_PATH to the pod as it is scheduled. Maybe you can try a MutatingAdmissionWebhook, which allows you to send a patch to Kubernetes to modify the manifest. There's good documentation from Banzai Cloud, though I have not tried it myself.
https://banzaicloud.com/blog/k8s-admission-webhooks/
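As an illustration (hypothetical values; the shape follows the JSONPatch format that a mutating webhook returns, base64-encoded, in its AdmissionReview response), the webhook could inject the variable into the pod's first container like this:

```yaml
# JSONPatch fragment a mutating webhook could return;
# assumes the target container already defines an env list
- op: add
  path: /spec/containers/0/env/-
  value:
    name: LD_LIBRARY_PATH
    value: /usr/local/lib:/some/mounted/libs
```

If the container has no env list yet, the patch would instead add the whole /spec/containers/0/env array in one op.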

Read-only file system: MongoDB Cluster on Kubernetes using Helm charts

I launched a MongoDB replica set on Kubernetes (on GKE as well as with kubeadm) and faced no problems with the pods accessing the storage.
However, when I used Helm to deploy the same setup, I ran into this problem.
When I run this command:
kubectl describe po mongodb-shard1-0 --namespace=kube-system
(Here mongodb-shard1-0 is the first and only pod, of the desired three, that was created.)
I get the following error in the Events:
Error: failed to start container "mongodb-shard1-container": Error
response from daemon: error while creating mount source path
'/mongo/data': mkdir /mongo: read-only file system
I noticed one major difference between the two ways of creating the MongoDB cluster (without Helm and with Helm): when using Helm, I had to create a service account and install the Helm chart using that service account; without Helm, I did not need that.
I tried different mongo Docker images and faced the same error every time.
Can anybody help with why I am facing this issue?
Docker mounts host paths into containers using the -v command-line option, e.g. -v /var/tmp:/tmp.
Can you check whether the containers/pods are writing to shared volumes rather than to the root filesystem?
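For example (a sketch with assumed names; the error above suggests the chart is trying to create /mongo/data on the node's read-only root filesystem), backing the mount with a pod-level volume avoids writing to the host root:

```yaml
# Pod-spec fragment: back /mongo/data with an emptyDir (or a PVC in production)
# instead of a hostPath under the node's root filesystem
spec:
  containers:
    - name: mongodb-shard1-container
      image: mongo
      volumeMounts:
        - name: mongo-data
          mountPath: /mongo/data
  volumes:
    - name: mongo-data
      emptyDir: {}
```

In a Helm chart this would typically be driven by the chart's persistence values rather than hard-coded.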