Cannot Connect Kubernetes Secrets to Kubernetes Deployment (Values Are Empty) - kubernetes

I have a Golang microservice application with the following Kubernetes manifest configuration:
apiVersion: v1 # Service for accessing store application (this) from Ingress...
kind: Service
metadata:
  name: store-internal-service
  namespace: store-namespace
spec:
  type: ClusterIP
  selector:
    app: store-internal-service
  ports:
    - name: http
      port: 8000
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-application-service
  namespace: store-namespace
  labels:
    app: store-application-service
spec:
  selector:
    matchLabels:
      app: store-internal-service
  template:
    metadata:
      labels:
        app: store-internal-service
    spec:
      containers:
        - name: store-application
          image: <image>
          envFrom:
            - secretRef:
                name: project-secret-store
          ports:
            - containerPort: 8000
              protocol: TCP
          imagePullPolicy: Always
          env:
            - name: APPLICATION_PORT
              value: "8000"
            - name: APPLICATION_HOST
              value: "localhost"
      terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: Secret
metadata:
  name: project-secret-store
  namespace: store-namespace
type: Opaque
stringData:
  # Prometheus Server Credentials...
  PROMETHEUS_HOST: "prometheus-internal-service"
  PROMETHEUS_PORT: "9090"
  # POSTGRESQL CONFIGURATION.
  DATABASE_HOST: "postgres-internal-service"
  DATABASE_PORT: "5432"
  DATABASE_USER: "postgres_user"
  DATABASE_PASSWORD: "postgres_password"
  DATABASE_NAME: "store_db"
Also, for test purposes, I've declared the following variables in order to read the values from the Secret in my application:
var (
    POSTGRES_USER     = os.Getenv("DATABASE_USER")
    POSTGRES_PASSWORD = os.Getenv("DATABASE_PASSWORD")
    POSTGRES_DATABASE = os.Getenv("DATABASE_NAME")
    POSTGRES_HOST     = os.Getenv("DATABASE_HOST")
    POSTGRES_PORT     = os.Getenv("DATABASE_PORT")
)
The problem is that when I run my application and, after some time, check its logs with kubectl logs <my-application-pod-name> --namespace=store-namespace, it turns out that all of these Go variables are empty, despite the fact that they have all been declared in the Secret.
There are probably other issues that could cause this problem, but if there are any errors in the configuration to point out, please share your thoughts about it :)
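In case it helps narrow things down, here is a minimal sketch (not a confirmed fix) of wiring a single key explicitly with valueFrom/secretKeyRef instead of envFrom, using the container and Secret names from the manifests above:

# containers section of the store-application-service Deployment above
containers:
  - name: store-application
    image: <image>
    env:
      - name: DATABASE_USER
        valueFrom:
          secretKeyRef:
            name: project-secret-store
            key: DATABASE_USER

If this explicit form shows up in the pod while the envFrom form does not, that points at the envFrom block; if it is also empty, the Secret itself (or the order in which it was applied relative to the pods) is the more likely culprit.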

Related

Cannot connect to my MiniKube external service ip/port?

I have a mongo YAML and a web-app (NodeJS) YAML set up like this:
mongo-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
mongo-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
mongo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  # blueprint for pods, creates pods with mongo:5.0 image
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongodb
          image: mongo:5.0
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-user
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-password
---
# kind: service
# name: any
# selector: select pods to forward the requests to
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
spec:
  selector:
    app: mongo
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 27017
and the webapp.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  # blueprint for pods, creates pods with the k8s-demo-app:v1.0 image
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nanajanashia/k8s-demo-app:v1.0
          ports:
            - containerPort: 3000
          env:
            - name: USER_NAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-user
            - name: USER_PWD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-password
            - name: DB_URL
              valueFrom:
                configMapKeyRef:
                  name: mongo-config
                  key: mongo-url
---
# kind: service
# name: any
# selector: select pods to forward the requests to
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  # default ClusterIP
  # nodeport = external service
  type: NodePort
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 30100
I ran the command for each file:
kubectl apply -f
I checked the status of the webapp, which returned:
app listening on port 3000!
I got the IP address by running
minikube ip
and the port was 30100.
Why can't I access this web app?
I get a "site can't be reached" error.
If you are on a Mac, check your minikube driver. I had to stop and delete minikube, then restart it while specifying the hyperkit driver, like so:
minikube stop
minikube delete
minikube start --vm-driver=hyperkit
The information listed here is pretty useful too.

"Failed to connect to database" : Keycloak Operator external database Config not working

I checked this: How to use the Keycloak operator custom resource with an external database connection. I am using Cloud SQL from Google Cloud Platform as the external database source.
My configurations are:
keycloak-idm
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: kiwigrid-keycloak-idm
spec:
  instances: 3
  externalAccess:
    enabled: false
  externalDatabase:
    enabled: true
external db storage secret
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
  namespace: kiwios-application
type: Opaque
stringData:
  POSTGRES_DATABASE: keycloak-storage
  POSTGRES_EXTERNAL_ADDRESS: pgsqlproxy.infra
  POSTGRES_EXTERNAL_PORT: "5432"
  POSTGRES_HOST: keycloak-postgresql
  POSTGRES_USERNAME: keycloak-user
  POSTGRES_PASSWORD: S1ly3AValJYBNR-fsptLYdT74
  POSTGRES_SUPERUSER: "true"
storage database
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLDatabase
metadata:
  name: keycloak-storage
  namespace: kiwios-application
  annotations:
    cnrm.cloud.google.com/deletion-policy: "abandon"
spec:
  charset: UTF8
  collation: en_US.UTF8
  instanceRef:
    name: keycloak-storage-instance-pg
    namespace: infra
storage users
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLUser
metadata:
  name: keycloak-user
  namespace: kiwios-application
  annotations:
    cnrm.cloud.google.com/deletion-policy: "abandon"
spec:
  instanceRef:
    name: keycloak-storage-instance-pg
    namespace: infra
  password:
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_PASSWORD
The error is shown in the Kubernetes console.
It is not working. Can anyone please help me figure out what I am doing wrong?
Update: I dug deeper with the k9s console. As per the keycloak-operator functionality, it creates an external name for the database connection, which here is keycloak-postgresql (see the image below).
There is no error showing in the keycloak-operator console. Only keycloak-idm is not able to make a connection using this external name. It shows the error below.
This is what I am using for my Keycloak setup. Also, if you have read the question, he has mentioned a secret issue in the update section.
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: keycloak
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.0
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_VENDOR
              value: POSTGRES
            - name: DB_ADDR
              value: postgres
            - name: DB_DATABASE
              value: keycloak
            - name: DB_USER
              value: root
            - name: DB_PASSWORD
              value: password
            - name: KEYCLOAK_HTTP_PORT
              value: "80"
            - name: KEYCLOAK_HTTPS_PORT
              value: "443"
            - name: KEYCLOAK_HOSTNAME
              value: keycloak.harshmanvar.tk # replace with ingress URL
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
You can try moving the environment variables into the Secret you are using.
Example files: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment
Environment variables that Keycloak supports: https://github.com/keycloak/keycloak-containers/blob/master/server/README.md#environment-variables
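A rough sketch of that approach, with the DB_* values from the deployment above moved into a Secret and loaded via envFrom (the Secret name keycloak-db-env is hypothetical):

apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-env   # hypothetical name for this sketch
type: Opaque
stringData:
  DB_VENDOR: POSTGRES
  DB_ADDR: postgres
  DB_DATABASE: keycloak
  DB_USER: root
  DB_PASSWORD: password
---
# In the keycloak container spec, load the whole Secret as environment variables:
containers:
  - name: keycloak
    image: quay.io/keycloak/keycloak:10.0.0
    envFrom:
      - secretRef:
          name: keycloak-db-env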
Have you tried it this way?
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
  namespace: kiwios-application
type: Opaque
stringData:
  POSTGRES_DATABASE: "keycloak-storage"
  POSTGRES_EXTERNAL_ADDRESS: "pgsqlproxy.infra"
  POSTGRES_EXTERNAL_PORT: "5432"
  POSTGRES_HOST: "keycloak-postgresql"
  POSTGRES_USERNAME: "keycloak-user"
  POSTGRES_PASSWORD: "S1ly3AValJYBNR-fsptLYdT74"
  POSTGRES_SUPERUSER: "true"

SonarQube + Postgresql Connection refused error in Kubernetes Cluster

sonar-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - image: 10.150.0.131/devops/sonarqube:1.0
          args:
            - -Dsonar.web.context=/sonar
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://sonar-postgres:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
sonar-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    name: sonarqube
sonar-postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonar-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonar-postgres
  template:
    metadata:
      labels:
        app: sonar-postgres
    spec:
      containers:
        - image: 10.150.0.131/devops/postgres:12.1
          name: sonar-postgres
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: POSTGRES_USER
              value: sonar
          ports:
            - containerPort: 5432
              name: postgresport
          volumeMounts:
            # This name must match the volumes.name below.
            - name: data-disk
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data-disk
          persistentVolumeClaim:
            claimName: claim-postgres
sonar-postgresql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
    - port: 5432
  selector:
    name: sonar-postgres
Kubernetes version: 1.18.0
Docker version: 19.03
I am having a connection problem between the SonarQube pod and the PostgreSQL pod.
I am using the Flannel network plugin.
Can you help with the error?
The PostgreSQL pod does not produce any log output.
ERROR
Try with:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
    - port: 5432
  selector:
    app: sonar-postgres
because it looks like your selector is wrong. The same issue exists in sonar-service.yaml: change name to app in the selector and it should work.
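Applied to sonar-service.yaml, that change might look roughly like this (only the selector differs from the original; the pod label app: sonarqube comes from the Deployment above):

apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    app: sonarqube   # matches the Deployment's pod label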
If you installed PostgreSQL on a managed cloud SQL service, you need to allow the client IP through the firewall. To verify whether this is the issue, try adding the 0.0.0.0/0 range, which opens access to everything; adding only the correct SonarQube IP is the better solution, though.

IP Pod to container environment variable

I have an Angular app and some Node containers for the backend. In my deployment file, how can I get the backend container's address so that my frontend can connect to it?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: container_imaer_backend
          env:
            - name: IP_BACKEND
              value: here_i_need_my_container_ip_pod
          ports:
            - containerPort: 80
              protocol: TCP
I would recommend using the DNS name instead of the IP; there's more info here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
But basically it's http://metadata-name.namespace.svc.cluster.local, so in the case of that deployment it would be http://frontend.default.svc.cluster.local.
It's better this way because the local IP address can change.
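For example, a minimal sketch of the frontend's env entry, assuming the backend pods are exposed by a Service named backend in the default namespace (that Service name is hypothetical; adjust it to your real backend Service):

env:
  - name: IP_BACKEND
    # hypothetical Service name "backend"; use your backend Service's actual name
    value: "http://backend.default.svc.cluster.local"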
You could use Pod field values for environment variables (ref: here). That way you can set the Pod IP in an environment variable.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
          ports:
            - containerPort: 3306
              name: mysql
              protocol: TCP
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: data
      volumes:
        - name: data
          emptyDir: {}

Edit config file inside image before deployment/pod creation

I am running Grafana as a pod inside my Kubernetes cluster. Once Grafana is initialized, it creates a DB on localhost and saves all its data there. This means that whenever the pod is destroyed and recreated, the whole DB is reinitialized and I lose all previous data.
The Grafana config for the DB inside the pod is:
#################################### Database ####################################
[database]
# Either "mysql", "postgres" or "sqlite3", it's your choice
;type = sqlite3
;host = 127.0.0.1:3306
;name = grafana
;user = root
;password =
In order to get rid of this problem, I have to create an external DB and point Grafana to that DB instance every time I create the Grafana pod. My current default implementation to create the Grafana pod is:
apiVersion: v1
kind: Service
metadata:
  name: lb-grafana-service
spec:
  ports:
    - port: 4545
      targetPort: 4545
      protocol: TCP
  clusterIP: 10.100.10.100
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    app: grafana
    name: grafana
  name: grafana
spec:
  ports:
    - name: scrape
      port: 4545
      nodePort: 30999
      protocol: TCP
  type: NodePort
  selector:
    app: grafana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:develop
          env:
            - name: Prometheus_SERVICE_URL
              value: http://172.29.219.105:30901
            - name: GF_SECURITY_ADMIN_PASSWORD
              value: "grafana"
            - name: GF_SERVER_HTTP_PORT
              value: "4545"
          ports:
            - containerPort: 9101
          volumeMounts:
            - mountPath: /var
              name: grafana-storage
      volumes:
        - name: grafana-storage
          emptyDir: {}
So what I want to do is overwrite the /etc/grafana/grafana.ini file before the Grafana pod comes online, or just rewrite the current file with new values. I have no idea how to do that right now. A little guidance would be much appreciated.
In general, you could use ConfigMaps, like the comment said; a sketch of that approach follows below.
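A rough sketch of the ConfigMap route, mounting a customized grafana.ini over the default one (the ConfigMap name grafana-ini and the [database] values are hypothetical placeholders; the keys mirror the config excerpt in the question):

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini   # hypothetical name
data:
  grafana.ini: |
    [database]
    type = postgres
    host = my-external-db:5432
    name = grafana
    user = grafana
    password = changeme
---
# In the Grafana Deployment's pod spec, mount the ConfigMap key over the default file:
containers:
  - name: grafana
    image: grafana/grafana:develop
    volumeMounts:
      - name: grafana-config
        mountPath: /etc/grafana/grafana.ini
        subPath: grafana.ini
volumes:
  - name: grafana-config
    configMap:
      name: grafana-ini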
The Grafana image itself also provides the ability to set all configuration parameters via environment variables; this is only mentioned in the GitHub readme.
This way you could set the environment variables with Kubernetes, like:
spec:
  template:
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:4.1.1
          env:
            - name: "GF_SERVER_ROOT_URL"
              value: "http://grafana.{{.clusterDomain}}"
            - name: "GF_DATABASE_TYPE"
              value: "{{.gfDatabaseType}}"
            - name: "GF_DATABASE_HOST"
              value: "{{.gfDatabaseHost}}"
            - name: "GF_DATABASE_NAME"
              value: "{{.gfDatabaseName}}"
            - name: "GF_DATABASE_USER"
              value: "{{.gfDatabaseUser}}"
            - name: "GF_DATABASE_PASSWORD"
              value: "{{.gfDatabasePassword}}"
            - name: "GF_DATABASE_SSL_MODE"
              value: "disable"
            - name: "GF_AUTH_ANONYMOUS_ENABLED"
              value: "true"