How to update the viewers_can_edit param in the Grafana defaults.ini file deployed by Helm in Kubernetes - kubernetes-helm

I want to update the viewers_can_edit param in the Grafana defaults.ini file deployed by Helm in Kubernetes. I need to change the parameter from false to true.
I deployed the Prometheus stack in a Kubernetes cluster, version 1.20, using Helm.
So I looked around in the pods and found the file, but it is not possible to modify it there.
Locate the file
bash-5.1$ pwd
/usr/share/grafana
bash-5.1$ cd conf/
bash-5.1$ ls
defaults.ini ldap.toml ldap_multiple.toml provisioning sample.ini
Param to modify
# Viewers can edit/inspect dashboard settings in the browser. But not save the dashboard.
viewers_can_edit = false
My question is: if I cannot log in as root and I cannot modify the file in the pod, what is the correct way to make this change?
I would like to hear your experience or advice.
Thanks in advance.
Juan Andres
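
For reference, the usual way to change a Grafana setting in a Helm deployment is through the chart's values rather than by editing defaults.ini inside the pod; Grafana merges those overrides on top of defaults.ini at startup. A minimal sketch, assuming the kube-prometheus-stack chart (which passes grafana.ini overrides through to the Grafana subchart) and noting that viewers_can_edit lives in the [users] section:

# values.yaml sketch -- chart and release names below are assumptions
grafana:
  grafana.ini:
    users:
      viewers_can_edit: true

# apply it with an upgrade of the existing release
helm upgrade <release-name> prometheus-community/kube-prometheus-stack -f values.yaml

After the upgrade, the rendered grafana.ini ConfigMap carries the override, so the file in the pod never needs to be edited by hand.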

Related

Editing the path on k8s cluster with a helm chart

I am trying to edit my path like I would normally on a Unix system. Let's say I want to append the path /opt/test/ to my PATH environment variable. How do I go about doing this with a Helm chart on Kubernetes? I am also using OpenShift, if that helps. I have tried the following within the values.yaml file under config:
config:
  PATH: "/opt/test/:$PATH"
This does not seem to work; the helm install breaks down. I am new to k8s and any help would be thoroughly appreciated.
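
For reference, a container's PATH is normally set through the pod spec's env rather than a chart config map, and $PATH from the image is not shell-expanded inside a manifest, so the full value has to be spelled out. A minimal sketch, assuming the chart forwards an extraEnv list into the container spec (extraEnv is a hypothetical key; the real name is chart-specific):

# values.yaml sketch -- "extraEnv" is hypothetical; check your chart's env passthrough key
extraEnv:
  - name: PATH
    # $PATH is not expanded in a pod spec, so list the whole value explicitly
    value: "/opt/test:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"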

Accessing a shared directory inside GKE from an external world

I am a newbie to Google Cloud and GKE, and I am trying to set up NFS Persistent Volumes with Kubernetes on GKE with the help of the following link:
https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266
I followed the instructions and was able to achieve the desired results as described in the blog, but I need to access the shared folder (/uploads) from the external world. Can someone help me achieve this, or share any pointers or suggestions?
I have followed the doc and implemented the steps on my test GKE cluster like you. I have just one observation about the current API version for the Deployment: we need to use apiVersion: apps/v1 instead of apiVersion: extensions/v1beta1. I then tested with a busybox pod to mount the volume, and the test was successful.
Then I exposed the service "nfs-server" as service type "LoadBalancer", as shown below, and found the external load balancer endpoint, e.g. (LB_Public_IP):111, in the Services & Ingress tab. I allowed ports 111, 2049, and 20048 in the firewall. After that I created a Red Hat based VM in the GCP project and installed the NFS client with sudo dnf install nfs-utils -y. Then you can use the command below to see the NFS exports list, and mount them as expected.
sudo showmount -e LB_Public_IP
Please have a look at the sample configuration below, and you may also follow the GCP doc.
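
A minimal sketch of such a Service, assuming the nfs-server Deployment from the blog post carries the label role: nfs-server (the ports match the ones opened in the firewall above):

apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  type: LoadBalancer
  selector:
    role: nfs-server    # assumption: must match the nfs-server pod labels
  ports:
    - name: rpcbind
      port: 111
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048

Once the load balancer has a public IP and showmount lists the export, it can be mounted from the external VM with mount -t nfs as usual.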

Error when installing Spinnaker on Kubernetes on prem cluster

I'm trying to install Spinnaker on an on-prem Kubernetes setup.
Following the instructions from https://www.spinnaker.io/setup/:
Install and run Halyard as Docker on the Kubernetes master.
Run everything as root.
mkdir ~/.hal on the Kubernetes master. Created the service account as instructed on the site.
Copied the kubeconfig file from ~/.kube/config into ~/.hal/kubeconfig, as it didn't work with the docker -v option; there was some permission issue, so I made it work this way.
Ran the docker run halyard command -- all up and running fine.
Ran bash inside the Halyard container.
Now when I do these two things inside Halyard:
Point kubectl to the kubeconfig with the export KUBECONFIG command.
Enable the Kubernetes provider: hal config provider kubernetes enable
The command sometimes executes successfully, or it fails with this warning after a timeout error:
Getting object contents of versions.yml
Unexpected error comparing versions: com.netflix.spinnaker.halyard.core.error.v1.HalException: Could not load "versions.yml" from config bucket: www.googleapis.com.*
Even if it somehow manages to run successfully, when I run these:
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add my-k8s-account --context $CONTEXT
it fails with the same error as above.
Totally weird stuff. It's intermittent. Does it have something to do with the kubeconfig file? Any pointers or help would be greatly appreciated.
Thanks.
As noted in the comments, this kind of error can occur when there is a lack of network connectivity from inside the container.
As Vikram mentioned in his comment:
Yes, that was the problem. Azure support recommended installing a CNI plugin and it resolved the issue. So it seems that inside an Azure VM without a public IP, the CNI plugin is needed for the VM to connect to the internet.
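
A quick way to confirm this kind of connectivity problem before running hal is to probe an HTTPS endpoint from inside the container. A sketch, assuming the Halyard container is named halyard and has curl in its image:

# should print an HTTP status line; a timeout points to the network, not Halyard
docker exec -it halyard bash -c 'curl -sSI https://www.googleapis.com | head -n 1'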
To configure a CNI plugin on the Azure platform, use this guide.
Hope it helps.

How to update/Patch Kube-proxy config?

I am using Rancher for Kubernetes installation and cluster management. To fix an issue related to iptables I need to update the cluster-cidr attribute in the kube-proxy config (https://github.com/kubernetes/kubernetes/issues/36835), but I am not sure how to update kube-proxy. Can someone tell me how to update it via kubectl or the UI, or how to log in to kube-proxy and change it?
If we are talking about plain Kubernetes:
Find the directory /etc/kubernetes on your node.
There will be some files and directories; you need to find where the manifests are stored (something like kube-proxy.manifest, which is a .yml file).
Open it and there you'll find --cluster-cidr; it is an option to command:
command:
  - /hyperkube
  - proxy
  - ...
  - --cluster-cidr=<your_CIDR>
(command is actually a list in the YAML, and --cluster-cidr is a member of this list.)
Note: depending on the deployment tool (including Rancher), this directory structure and the file names might be different.
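
For a Rancher-provisioned (RKE) cluster specifically, the same flag can usually be set declaratively instead of by editing the manifest on the node. A sketch, assuming an RKE cluster.yml; re-run rke up after editing:

# cluster.yml sketch (RKE) -- the CIDR value is an example, use your cluster's range
services:
  kubeproxy:
    extra_args:
      cluster-cidr: "10.42.0.0/16"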

Cachet on Kubernetes APP_KEY Error

I'm trying to run the open-source Cachet status page within Kubernetes via this tutorial: https://medium.com/@ctbeke/setting-up-cachet-on-google-cloud-817e62916d48
Two Docker containers (Cachet/Nginx) and Postgres are deployed to a pod on GKE, but the Cachet container fails with a CrashLoopBackOff error.
Within the docker-compose.yml file it's set to APP_KEY=${APP_KEY:-null}, and I'm wondering if I didn't set an environment variable I should have.
Any help with configuring the Cachet Docker setup would be much appreciated! https://github.com/CachetHQ/Docker
Yes, you need to generate a key.
In entrypoint.sh you can see that the bash script generates a key for you:
https://github.com/CachetHQ/Docker/blob/master/entrypoint.sh#L188-L193
It seems there's a bug in the Dockerfile here. Generate a key manually and then set it as an environment variable in your manifest.
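
A minimal sketch of that manual route; Cachet is Laravel-based, so the key takes the base64: prefix (the env snippet below is a standard Deployment fragment, not the chart's exact layout):

# generate a random 32-byte key locally
echo "base64:$(head -c 32 /dev/urandom | base64)"

# then set it in the container spec of your Deployment manifest
env:
  - name: APP_KEY
    value: "base64:PASTE_GENERATED_KEY_HERE"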
There's a helm chart you can use in development here: https://github.com/apptio/helmcharts/blob/cachet/devel/cachet/templates/secrets.yaml#L12