Edit config file inside image before deployment/pod creation - kubernetes

I am running Grafana as a pod inside my Kubernetes cluster. Once Grafana is initialized, it creates a DB on localhost and saves all its data there. This means that whenever the pod is destroyed and recreated, the whole DB is reinitialized and I lose all previous data.
The database section of the Grafana config inside the pod is:
#################################### Database ####################################
[database]
# Either "mysql", "postgres" or "sqlite3", it's your choice
;type = sqlite3
;host = 127.0.0.1:3306
;name = grafana
;user = root
;password =
In order to get rid of this problem, I have to create an external DB and point Grafana at that DB instance every time I create the Grafana pod. My current default implementation to create the Grafana pod is:
apiVersion: v1
kind: Service
metadata:
  name: lb-grafana-service
spec:
  ports:
  - port: 4545
    targetPort: 4545
    protocol: TCP
  clusterIP: 10.100.10.100
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    app: grafana
    name: grafana
  name: grafana
spec:
  ports:
  - name: scrape
    port: 4545
    nodePort: 30999
    protocol: TCP
  type: NodePort
  selector:
    app: grafana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:develop
        env:
        - name: Prometheus_SERVICE_URL
          value: http://172.29.219.105:30901
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: "grafana"
        - name: GF_SERVER_HTTP_PORT
          value: "4545"
        ports:
        - containerPort: 9101
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
      volumes:
      - name: grafana-storage
        emptyDir: {}
So what I want to do is overwrite the /etc/grafana/grafana.ini file before the Grafana pod comes online, or simply rewrite the current file with new values. I have no idea how I can do that right now; a little guidance would be much appreciated.

In general, you could use a ConfigMap, as the comment said (a sketch of that approach follows the example below).
The Grafana image itself also lets you supply every configuration parameter via environment variables; this is only mentioned in the GitHub readme.
That way you can set the environment variables with Kubernetes, like:
spec:
  template:
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:4.1.1
        env:
        - name: "GF_SERVER_ROOT_URL"
          value: "http://grafana.{{.clusterDomain}}"
        - name: "GF_DATABASE_TYPE"
          value: "{{.gfDatabaseType}}"
        - name: "GF_DATABASE_HOST"
          value: "{{.gfDatabaseHost}}"
        - name: "GF_DATABASE_NAME"
          value: "{{.gfDatabaseName}}"
        - name: "GF_DATABASE_USER"
          value: "{{.gfDatabaseUser}}"
        - name: "GF_DATABASE_PASSWORD"
          value: "{{.gfDatabasePassword}}"
        - name: "GF_DATABASE_SSL_MODE"
          value: "disable"
        - name: "GF_AUTH_ANONYMOUS_ENABLED"
          value: "true"

Related

Cannot Connect Kubernetes Secrets to Kubernetes Deployment (Values Are Empty)

I have a Golang microservice application with the following Kubernetes manifest configuration:
apiVersion: v1 # Service for accessing store application (this) from Ingress...
kind: Service
metadata:
  name: store-internal-service
  namespace: store-namespace
spec:
  type: ClusterIP
  selector:
    app: store-internal-service
  ports:
  - name: http
    port: 8000
    targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-application-service
  namespace: store-namespace
  labels:
    app: store-application-service
spec:
  selector:
    matchLabels:
      app: store-internal-service
  template:
    metadata:
      labels:
        app: store-internal-service
    spec:
      containers:
      - name: store-application
        image: <image>
        envFrom:
        - secretRef:
            name: project-secret-store
        ports:
        - containerPort: 8000
          protocol: TCP
        imagePullPolicy: Always
        env:
        - name: APPLICATION_PORT
          value: "8000"
        - name: APPLICATION_HOST
          value: "localhost"
      terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: Secret
metadata:
  name: project-secret-store
  namespace: store-namespace
type: Opaque
stringData:
  # Prometheus Server Credentials...
  PROMETHEUS_HOST: "prometheus-internal-service"
  PROMETHEUS_PORT: "9090"
  # POSTGRESQL CONFIGURATION.
  DATABASE_HOST: "postgres-internal-service"
  DATABASE_PORT: "5432"
  DATABASE_USER: "postgres_user"
  DATABASE_PASSWORD: "postgres_password"
  DATABASE_NAME: "store_db"
Also, for test purposes, I've declared the following variables in order to read the secret values in my application:
var (
    POSTGRES_USER     = os.Getenv("DATABASE_USER")
    POSTGRES_PASSWORD = os.Getenv("DATABASE_PASSWORD")
    POSTGRES_DATABASE = os.Getenv("DATABASE_NAME")
    POSTGRES_HOST     = os.Getenv("DATABASE_HOST")
    POSTGRES_PORT     = os.Getenv("DATABASE_PORT")
)
The problem is that when I run my application and, after some time, check the logs with kubectl logs <my-application-pod-name> --namespace=store-namespace, it turns out that all of these Go variables are empty, despite the fact that they have all been declared in the Secret.
There are probably other issues that could cause this problem, but if there are any errors in the configuration worth pointing out, please share your thoughts about it :)

kubernetes Deployment PodName setting

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      containers:
      - name: server
        image: test_ml_server:2.3
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: hostpath-vol-testserver
          mountPath: /app/test/api
        # env:
        # - name: POD_NAME
        #   valueFrom:
        #     fieldRef:
        #       fieldPath: template.metadata.name
      - name: testdb
        image: test_db:1.4
        ports:
        - name: testdb
          containerPort: 1433
        volumeMounts:
        - name: hostpath-vol-testdb
          mountPath: /var/opt/mssql/data
        # env:
        # - name: POD_NAME
        #   valueFrom:
        #     fieldRef:
        #       fieldPath: template.metadata.name
      volumes:
      - name: hostpath-vol-testserver
        hostPath:
          path: /usr/testhostpath/testserver
      - name: hostpath-vol-testdb
        hostPath:
          path: /usr/testhostpath/testdb
I want to set the pod's name, because the containers communicate internally based on the pod name, but when a pod is created the name cannot be used that way because a generated suffix is appended to the end.
How can I set the pod name?
It's better if you use a StatefulSet instead of a Deployment. A StatefulSet's pod names will be like <statefulsetName>-0, <statefulsetName>-1, and so on. You will also need a headless ClusterIP service with which you can bind your pods; see the doc for more details. Ref
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  labels:
    app: test
spec:
  ports:
  - port: 8080
    name: web
  clusterIP: None
  selector:
    app: test
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-statefulset
  labels:
    app: test
spec:
  replicas: 1
  serviceName: test-svc
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      containers:
      - name: server
        image: test_ml_server:2.3
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: hostpath-vol-testserver
          mountPath: /app/test/api
      - name: testdb
        image: test_db:1.4
        ports:
        - name: testdb
          containerPort: 1433
        volumeMounts:
        - name: hostpath-vol-testdb
          mountPath: /var/opt/mssql/data
      volumes:
      - name: hostpath-vol-testserver
        hostPath:
          path: /usr/testhostpath/testserver
      - name: hostpath-vol-testdb
        hostPath:
          path: /usr/testhostpath/testdb
Here, the pod name will be test-statefulset-0.
If you are using kind: Deployment it won't be possible; ideally, in this scenario, you can use kind: StatefulSet.
Instead of pod-to-pod communication, you can use a Kubernetes Service for communication.
Still, the StatefulSet manages the pod names in sequence:
statefulsetname-0
statefulsetname-1
statefulsetname-2
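With the headless service above, each StatefulSet pod also gets a stable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. As a small illustration (the DB_HOST variable name is made up; the hostname is derived from the example manifests, assuming the default namespace), another pod could reach the first replica like this:
      containers:
      - name: server
        env:
        - name: DB_HOST   # illustrative variable name
          value: "test-statefulset-0.test-svc.default.svc.cluster.local"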
You can't.
It is a property of the pods of a Deployment that they do not have a stable identity associated with them.
You could have a look at a StatefulSet instead of a Deployment if you want the pods to keep an identity.
From the docs:
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
So, if you have a StatefulSet object named myapp with two replicas, the pods will be named myapp-0 and myapp-1.

Kibana on Kubernetes - how to point to ES container running on a different pod

I am learning Kubernetes by setting up two pods, running an Elasticsearch and a Kibana container respectively.
My configuration file is able to set up both pods as well as create two services to access these applications from the host machine's web browser.
The issue is that I don't know how to make the Kibana container communicate with the ES application/pod.
Earlier, while learning Docker, I crafted a docker-compose configuration, and now I am basically trying to do the same using Kubernetes (the docker-compose config is pasted below).
I came across a blog that suggested using a Deployment instead of a Pod; again, I am not sure how one would make Kibana talk to ES.
Kubernetes configuration YAML:
apiVersion: v1
kind: Pod
metadata:
  name: pod-elasticsearch
  labels:
    app: myapp
spec:
  hostname: "es01-docker-local"
  containers:
  - name: myelasticsearch-container
    image: myelasticsearch
    imagePullPolicy: Never
    volumeMounts:
    - name: my-volume
      mountPath: /home/newuser
  volumes:
  - name: my-volume
    emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: myelasticsearch-service
spec:
  type: NodePort
  ports:
  - targetPort: 9200
    port: 9200
    nodePort: 30015
  selector:
    app: myapp
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-kibana
  labels:
    app: myapp
spec:
  containers:
  - name: mykibana-container
    image: mykibana
    imagePullPolicy: Never
    volumeMounts:
    - name: my-volume
      mountPath: /home/newuser
  volumes:
  - name: my-volume
    emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: mykibana-service
spec:
  type: NodePort
  ports:
  - targetPort: 5601
    port: 5601
    nodePort: 30016
  selector:
    app: myapp
For reference, below is the docker-compose file that I am trying to replicate on Kubernetes:
version: "2.2"
services:
elasticsearch:
image: myelasticsearch
container_name: myelasticsearch-container
restart: always
hostname: 'es01.docker.local'
ports:
- '9200:9200'
- '9300:9300'
volumes:
- myVolume:/home/newuser/
environment:
- discovery.type=single-node
kibana:
depends_on:
- elasticsearch
image: mykibana
container_name: mykibana-container
restart: always
ports:
- '5601:5601'
volumes:
- myVolume:/home/newuser/
environment:
ELASTICSEARCH_URL: http://es01:9200
ELASTICSEARCH_HOSTS: http://es01:9200
volumes:
myVolume:
networks:
myNetwork:
ES Pod description:
% kubectl describe pod/pod-elasticsearch
Name: pod-elasticsearch
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Sun, 10 Jan 2021 23:06:18 -0800
Labels: app=myapp
Annotations: <none>
Status: Running
IP: 10.x.0.yy
IPs:
IP: 10.x.0.yy
In Kubernetes, Pods/Deployments/DaemonSets... in the same cluster can communicate with each other without any problem, because the cluster has a flat network architecture. One way for these resources to call each other directly is by the name of the Kubernetes Service of each resource.
For example, any resource in the cluster can reach your Kibana app directly by the service name you gave it: mykibana-service.name-of-namespace.
So for the Kibana pod to communicate with Elasticsearch, it can use http://name-of-service-of-elasticsearch.name-of-namespace:9200. The namespace is default if you don't specify one when you create the service, so that becomes http://name-of-service-of-elasticsearch.default:9200, or just http://name-of-service-of-elasticsearch:9200 from within the same namespace.
The concern you raised about which type of resource you have to create (Pod, Deployment, DaemonSet or StatefulSet) does not matter for these resources to communicate with each other.
If you are having trouble converting the docker-compose file to manifests, you can start with Kompose: run kompose convert in the directory where your docker-compose file is located.
Here is a sample:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: default
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - image: myelasticsearch:yourtag # fix this
        name: elasticsearch
        ports:
        - containerPort: 9200
        - containerPort: 9300
        volumeMounts:
        - mountPath: /home/newuser/
          name: my-volume
      volumes:
      - name: my-volume
        emptyDir: {} # I wouldn't use emptyDir
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: default
spec:
  ports:
  - port: 9200
    name: "9200"
    targetPort: 9200
  - port: 9300
    name: "9300"
    targetPort: 9300
  selector:
    app: elasticsearch
  type: ClusterIP # you don't need to expose your service publicly
#####################################
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: default
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200/ # elasticsearch is the same name as the service resource name
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        image: mykibana:yourtagname # fix this
        name: kibana
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: default
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    app: kibana
  type: NodePort
You can choose what is adequate for your app; for example, you can use a StatefulSet or a Deployment for Elasticsearch and a Deployment for Kibana, and you can also change the type of volume.
Also, the myNetwork you created in docker-compose can be translated into a NetworkPolicy, with which you can isolate your resources (for example, an isolated namespace for them), because resources created in the same cluster are not isolated from each other by default; a rough sketch follows below.
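As a rough NetworkPolicy sketch (untested; it reuses the app labels from the sample manifests above and assumes your CNI plugin enforces NetworkPolicy), the following only lets the Kibana pods reach Elasticsearch on port 9200:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kibana-to-elasticsearch   # name chosen for this sketch
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: elasticsearch
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: kibana
    ports:
    - protocol: TCP
      port: 9200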
Hope I helped
If you want to deploy Elasticsearch and Kibana in Kubernetes the usual way, then you have to take care of some core Elasticsearch cluster configuration, like:
cluster.initial_master_nodes (added in 7.0)
network.host
network.publish_host
You would also have to carefully set up network.host so that even after accidental pod restarts it remains the same.
While deploying Kibana, you need to provide the Elasticsearch service, and you also have to manually configure the SSL certificates if Elasticsearch has SSL enabled.
So to install the Elastic Stack on Kubernetes, you should probably prefer Elastic Cloud on Kubernetes (ECK). The documentation provided by Elastic is easy to understand.
Elastic Cloud on Kubernetes (ECK) uses Kubernetes Operators to make installation easier and automatically takes care of the core cluster configuration.
An ECK installation creates a default user called "elastic", and you can retrieve its password from a secret. It also creates self-signed certificates, which can be found in secrets as well.
For deploying Kibana you can just provide "elasticsearchRef" in your YAML file and it will automatically configure the Elasticsearch endpoints. You can use the default "elastic" user to log in to Kibana.
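As a minimal sketch of what that looks like (assuming the ECK operator is already installed; the resource names and version here are illustrative):
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.0
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false   # often needed on dev clusters
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.10.0
  count: 1
  elasticsearchRef:
    name: quickstart   # wires Kibana to the Elasticsearch resource above
The operator then creates the services, certificates and the "elastic" user secret for you.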

SonarQube + Postgresql Connection refused error in Kubernetes Cluster

sonar-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
      - image: 10.150.0.131/devops/sonarqube:1.0
        args:
        - -Dsonar.web.context=/sonar
        name: sonarqube
        env:
        - name: SONARQUBE_JDBC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-pwd
              key: password
        - name: SONARQUBE_JDBC_URL
          value: jdbc:postgresql://sonar-postgres:5432/sonar
        ports:
        - containerPort: 9000
          name: sonarqube
sonar-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9000
    name: sonarport
  selector:
    name: sonarqube
sonar-postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonar-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonar-postgres
  template:
    metadata:
      labels:
        app: sonar-postgres
    spec:
      containers:
      - image: 10.150.0.131/devops/postgres:12.1
        name: sonar-postgres
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-pwd
              key: password
        - name: POSTGRES_USER
          value: sonar
        ports:
        - containerPort: 5432
          name: postgresport
        volumeMounts:
        # This name must match the volumes.name below.
        - name: data-disk
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data-disk
        persistentVolumeClaim:
          claimName: claim-postgres
sonar-postgresql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
  - port: 5432
  selector:
    name: sonar-postgres
Kubernetes version: 1.18.0
Docker version: 19.03
I am having a connection problem between the SonarQube pod and the PostgreSQL pod.
I use the Flannel network plugin.
Can you help with the error? No log output appears from the PostgreSQL pod.
[ERROR screenshot]
Try with:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
  - port: 5432
  selector:
    app: sonar-postgres
because it looks like your selector is wrong. The same issue exists in sonar-service.yaml: change name to app there as well and it should work (a corrected sketch follows below).
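For completeness, an untested sketch of the corrected sonar-service.yaml along the same lines (the selector key must match the pod template label app: sonarqube from the Deployment above):
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9000
    name: sonarport
  selector:
    app: sonarqube   # was "name: sonarqube", which matches no pod labels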
If you installed PostgreSQL on a cloud SQL service, you need to allow the client IP through its firewall. To validate this theory, try adding 0.0.0.0/0, which allows everything; placing only the correct SonarQube IP there is the better long-term solution.

IP Pod to container environment variable

I have an Angular app and some Node containers for the backend. In my deployment file, how can I get the backend container's address so my frontend can connect to it?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: container_imaer_backend
        env:
        - name: IP_BACKEND
          value: here_i_need_my_container_ip_pod
        ports:
        - containerPort: 80
          protocol: TCP
I would recommend using the DNS name instead of the IP; there's more info here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Basically, a Service is reachable at http://metadata-name.namespace.svc.cluster.local, so for a Service named frontend in the default namespace that is http://frontend.default.svc.cluster.local.
It's better this way because the local IP address can change.
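As a rough sketch (the backend Service name and label below are assumptions, not taken from the question), you would expose the backend with a Service and point the frontend at its DNS name:
apiVersion: v1
kind: Service
metadata:
  name: backend          # assumed name for the backend service
spec:
  selector:
    app: backend         # assumed label on the backend pods
  ports:
  - port: 80
    targetPort: 80
and then in the frontend Deployment set, for example:
        env:
        - name: IP_BACKEND
          value: "http://backend.default.svc.cluster.local"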
You could use Pod field values for environment variables (ref: here). That way you can put the pod's IP into an environment variable.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        ports:
        - containerPort: 3306
          name: mysql
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: data
      volumes:
      - name: data
        emptyDir: {}