Can an openshift template parameter refer to the project name in which it is getting deployed?

I am trying to deploy Kong API Gateway via a template to my OpenShift project. The problem is that Kong does some DNS resolution of its own that fails sporadically. The workaround is to use the FQDN (<name>.<project_name>.svc.cluster.local). So, in my template I would like to do:
- env:
  - name: KONG_DATABASE
    value: postgres
  - name: KONG_PG_HOST
    value: "${APP_NAME}.${PROJECT_NAME}.svc.cluster.local"
I am just not sure how to get the current PROJECT_NAME, or if perhaps there is a default set of available parameters...

You can read the namespace (project name) from the Kubernetes downward API into an environment variable and then use that in the value.
See the OpenShift docs here for an example.
Update based on Clayton's comment:
I tested it, and the following snippet from the deployment config works.
- env:
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  - name: EXAMPLE
    value: example.$(MY_POD_NAMESPACE)
Inside the running container:
sh-4.2$ echo $MY_POD_NAMESPACE
testing
sh-4.2$ echo $EXAMPLE
example.testing
In the Environment screen of the web console it appears as the literal string example.$(MY_POD_NAMESPACE), since the $(VAR) reference is only expanded when the container is started.
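Combining the two ideas for the Kong case, a minimal sketch could look like the following (keeping the template's ${APP_NAME} parameter as in the question, and assuming the downward API variable is declared before KONG_PG_HOST so the $(...) reference can be resolved):
- env:
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  - name: KONG_DATABASE
    value: postgres
  # ${APP_NAME} is replaced when the template is processed;
  # $(MY_POD_NAMESPACE) is resolved when the container starts.
  - name: KONG_PG_HOST
    value: "${APP_NAME}.$(MY_POD_NAMESPACE).svc.cluster.local"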

Related

Access secret environment variables from kubernetes in nextjs app

I'm trying to deploy a Next.js app to GCP with Kubernetes, and to set up auth with NextAuth and the Keycloak provider. The issue I'm running into is that KEYCLOAK_CLIENT_SECRET and KEYCLOAK_CLIENTID can't be found: all of the other environment variables listed here show up under process.env.XX, but the two with secretKeyRef come in undefined.
I have read in multiple places that they need to be available at build time for Next.js, but I'm not sure how to set this up. We have several Node.js apps with this same deployment file and they are able to obtain the secrets.
This is what my deployment.yaml env section looks like.
Anyone have any experience setting this up?
I can't add the secrets to any local env files for security purposes.
env:
  - name: NODE_ENV
    value: production
  - name: KEYCLOAK_URL
    value: "https://ourkeycloak/auth/"
  - name: NEXTAUTH_URL
    value: "https://nextauthpath/api/auth"
  - name: NEXTAUTH_SECRET
    value: "11234"
  - name: KEYCLOAK_REALM
    value: "ourrealm"
  - name: KEYCLOAK_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: vault_client
        key: CLIENT_SECRET
  - name: KEYCLOAK_CLIENTID
    valueFrom:
      secretKeyRef:
        name: vault_client
        key: CLIENT_ID
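As a hedged sketch only (assuming the Secret really is named vault_client and lives in the same namespace as the Deployment), envFrom exposes every key of a Secret as environment variables in one go; the variables then keep the key names CLIENT_SECRET and CLIENT_ID rather than the remapped names above, which makes it easy to check with printenv whether the values reach the container at runtime at all:
envFrom:
  # Assumed Secret name, taken from the question's manifest.
  - secretRef:
      name: vault_client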

How to add protocol prefix in Kubernetes ConfigMap

In my Kubernetes cluster, I have a ConfigMap object containing the address of my Postgres pod. It was created with the following YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: postgres-service
Now I reference this value in one of my Deployment's configuration:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
This deployment is a Spring Boot application that intends to communicate with the database. Thus it reads the database's URL from the DB_ADDRESS environment variable. (ignore the default values, those are used only during development)
datasource:
  url: ${DB_ADDRESS:jdbc:postgresql://localhost:5432/users}
  username: ${POSTGRES_USER:postgres}
  password: ${POSTGRES_PASSWORD:mysecretpassword}
So, according to the logs, the problem is that the address has to have the jdbc:postgresql:// prefix. Either in the ConfigMap's YAML or in the application.yml I would need to concatenate the protocol prefix with the variable. Any idea how to do this in YAML, or a suggestion for some other workaround?
If you create a Service, that will provide you with a hostname (the name of the service) that you can then use in the ConfigMap. E.g., if you create a service named postgres, then your ConfigMap would look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: jdbc:postgresql://postgres:5432/users
Kubernetes environment variable declarations can embed the values of other environment variables. This is the only string manipulation that Kubernetes supports, and it pretty much only works in env: blocks.
For this setup, once you've retrieved the database hostname from the ConfigMap, you can then embed it into a more complete SPRING_DATASOURCE_URL environment variable:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
  - name: SPRING_DATASOURCE_URL
    value: 'jdbc:postgresql://$(DB_ADDRESS):5432/users'
You might similarly parameterize the port (though it will almost always be the standard port 5432) and database name. Avoid putting these settings in a Spring profile YAML file, where you'll have to rebuild your application if any of the deploy-time settings change.
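As a sketch of that parameterization (the database_port and database_name keys are hypothetical additions to the ConfigMap, not something the question defines), the same $(VAR) embedding extends naturally:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
  # Hypothetical extra keys; add them to the ConfigMap if you use this.
  - name: DB_PORT
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_port
  - name: DB_NAME
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_name
  - name: SPRING_DATASOURCE_URL
    value: 'jdbc:postgresql://$(DB_ADDRESS):$(DB_PORT)/$(DB_NAME)'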

Is it possible to retrieve pod name from values.yaml file of helm chart?

Quite new to Helm. Currently, I create an env variable such that when I deploy my pod, I can see the pod name in the environment variables list. This can be done like so in the template file:
containers:
  - name: my_container
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
Is it possible to do something similar in the values.yaml file (maybe in an extraEnv field?) and then use this value in the .tpl? Other configuration, like ConfigMap names, depends on it in order to be unique between pods, and I want to easily retrieve the value like so:
volumes:
  - name: my_vol
    configMap:
      name: {{ .Values.pathto.extraEnv.podname }}
Thanks in advance!
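One hedged sketch of a common pattern (the extraEnv key and the nindent value are assumptions about the chart, not something given in the question) is to put the whole downward-API entry into values.yaml and render it verbatim with toYaml. Note that this only exposes the pod name inside the running container; Helm renders templates before any pod exists, so a ConfigMap name built from .Values can never contain the actual pod name.
# values.yaml (assumed layout)
extraEnv:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name

# templates/deployment.yaml, inside the container spec
# (adjust the nindent value to match the surrounding indentation)
env:
  {{- toYaml .Values.extraEnv | nindent 12 }}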

Splunk Universal Forwarder as sidecar in kubernetes

I am setting up a Splunk universal forwarder as a sidecar with my application through a deployment spec. The forwarder is built as a separate Docker image into which I copy custom inputs.conf and outputs.conf files via a Dockerfile COPY (shown below).
When I deploy my application, the sidecar starts. In the current state, the indexer configuration in outputs.conf is taking effect.
The issue: I want to change the indexer server host and port dynamically based on the environment.
Here is the Dockerfile content for my Splunk universal forwarder image:
FROM splunk/universalforwarder:latest
COPY configs/*.conf /opt/splunkforwarder/etc/system/local/
I built the Docker image with the name splunk-universal-forwarder:demo.
The configs folder contains both inputs.conf and outputs.conf.
The content of outputs.conf is:
[tcpout]
defaultGroup = default-lb-group
[tcpout:default-lb-group]
server = ${SPLUNK_BASE_HOST}
[tcpout-server://host1:9997]
I want to pass the SPLUNK_BASE_HOST environment variable through the sidecar deployment like below.
- name: universalforwarder
  image: splunk-universal-forwarder:demo
  imagePullPolicy: Always
  env:
    - name: SPLUNK_START_ARGS
      value: "--accept-license --answer-yes"
    - name: SPLUNK_BASE_HOST
      value: 123.456.789.000:9997
    - name: SPLUNK_USER
      valueFrom:
        secretKeyRef:
          name: credentials
          key: splunk.username
    - name: SPLUNK_PASSWORD
      valueFrom:
        secretKeyRef:
          name: credentials
          key: splunk.password
  volumeMounts:
    - name: container-logs
      mountPath: /var/log/splunk-fwd-myapp
I have a separate deployment.yaml per environment (dev, stage, uat, qa, prod), and I should be able to pass a different indexer host and port via SPLUNK_BASE_HOST for each of them. If I hardcode the indexer host and port in outputs.conf, the same value is used across all environments, which I don't want.
The problem is that the ${SPLUNK_BASE_HOST} placeholder in outputs.conf does not pick up the value supplied in the deployment YAML.
You need to create an init script that reads the host name from the environment variable, substitutes it into outputs.conf (for example with sed), and then launches the Splunk forwarder.
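As a sketch of that approach (assuming the placeholder in outputs.conf is the literal text ${SPLUNK_BASE_HOST}, and that the forwarder can be started in the foreground as shown; the exact start command and flags depend on the base image and version), the sidecar's command can run sed before launching the forwarder:
- name: universalforwarder
  image: splunk-universal-forwarder:demo
  command: ["/bin/sh", "-c"]
  args:
    # Replace the literal ${SPLUNK_BASE_HOST} placeholder with the value injected
    # by the deployment env, then start the forwarder in the foreground.
    - |
      sed -i "s|\${SPLUNK_BASE_HOST}|${SPLUNK_BASE_HOST}|g" /opt/splunkforwarder/etc/system/local/outputs.conf
      exec /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --nodaemon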

kubernetes transfer Physical IP to dubbo

I want to pass the physical (node) IP to a dubbo pod via YAML, but the parameter is a fixed value. For example:
dubbo.yaml
spec:
  replicas: 2
  ...
  env:
    - name: PhysicalIP
      value: 192.168.1.1
Inside the pod, before starting dubbo, I can replace the container IP in /etc/hosts, for example:
echo "replay /etc/hosts"
cp /etc/hosts /etc/hosts.tmp
sed -i "s/.*$(hostname)/${PhysicalIP} $(hostname)/" /etc/hosts.tmp
cat /etc/hosts.tmp > /etc/hosts
The problem: when pods are deployed to host 192.168.1.1 and host 192.168.1.2, the pod on host 192.168.1.2 still gets ${PhysicalIP}=192.168.1.1. I want ${PhysicalIP} to be 192.168.1.2 on host 192.168.1.2. Is there any way to do this?
You should be able to get information about the pod through Environment Variables or using a DownwardAPIVolumeFile.
Using environment variables:
You should add to your yaml something like this:
env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
As far as I know, the node name is, right now, the closest thing to what you need that you can get from inside the container.
Using a DownwardAPIVolumeFile:
You should add to your yaml something like this:
volumes:
  - name: podinfo
    downwardAPI:
      items:
        - path: "nodename"
          fieldRef:
            fieldPath: spec.nodeName
This way the node name will be stored in a file named nodename under the volume's mount path.
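For completeness, the downwardAPI volume also needs a matching volumeMounts entry in the container; a minimal sketch (the /etc/podinfo mount path is an assumption, use whatever path you prefer):
volumeMounts:
  - name: podinfo
    # Assumed mount path; the node name then appears in /etc/podinfo/nodename.
    mountPath: /etc/podinfo
    readOnly: true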
Issue #24657 and pull request #42717 on the Kubernetes GitHub are related to this. (Sorry, I need more reputation here to be able to post more links!)
As you can see there, access to the node IP through the downwardAPI should be available soon (using status.hostIP, probably).
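That support has since landed: current Kubernetes versions expose the node IP through the downward API as status.hostIP, which maps onto the question's variable roughly like this:
env:
  # Resolves to the IP of the node the pod is scheduled on.
  - name: PhysicalIP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP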