Splunk Universal Forwarder as sidecar in Kubernetes

I am setting up a Splunk universal forwarder as a sidecar with my application through a deployment spec. The Splunk universal forwarder is set up as a separate Docker image into which I copy custom inputs.conf and outputs.conf files via docker COPY (shown below).
When I deploy my application, the sidecar starts as expected. In the current state, the indexer configuration is in outputs.conf and is taking effect.
The issue: I want to change the indexer host and port dynamically based on the environment.
Here is the Dockerfile for the Splunk universal forwarder image.
FROM splunk/universalforwarder:latest
COPY configs/*.conf /opt/splunkforwarder/etc/system/local/
I built the Docker image with the name splunk-universal-forwarder:demo.
The configs folder contains both inputs.conf and outputs.conf.
The content of outputs.conf is:
[tcpout]
defaultGroup = default-lb-group
[tcpout:default-lb-group]
server = ${SPLUNK_BASE_HOST}
[tcpout-server://host1:9997]
I want to pass the SPLUNK_BASE_HOST environment variable through the sidecar's deployment spec, like below.
- name: universalforwarder
  image: splunk-universal-forwarder:demo
  imagePullPolicy: Always
  env:
    - name: SPLUNK_START_ARGS
      value: "--accept-license --answer-yes"
    - name: SPLUNK_BASE_HOST
      value: 123.456.789.000:9997
    - name: SPLUNK_USER
      valueFrom:
        secretKeyRef:
          name: credentials
          key: splunk.username
    - name: SPLUNK_PASSWORD
      valueFrom:
        secretKeyRef:
          name: credentials
          key: splunk.password
  volumeMounts:
    - name: container-logs
      mountPath: /var/log/splunk-fwd-myapp
I have a separate deployment.yaml per environment (dev, stage, uat, qa, prod), so I should be able to pass a different indexer host and port via SPLUNK_BASE_HOST for each environment. If I hardcode the indexer host and port in outputs.conf, the same value would apply across all environments, which is not what I want.
The problem is that the ${SPLUNK_BASE_HOST} placeholder in outputs.conf does not pick up the value supplied in the deployment YAML file.

You need to create an init (entrypoint) script that reads the host and port from the environment variable, substitutes them into outputs.conf using sed, and finally launches the Splunk forwarder, as sketched below.
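A minimal sketch of such an entrypoint script, assuming the config path used in the Dockerfile above; the file name entrypoint.sh and the exact launch command are illustrative and may need adjusting to how your base image normally starts the forwarder:

#!/bin/sh
# entrypoint.sh (illustrative name): substitute the indexer address, then start the forwarder.
set -e

CONF=/opt/splunkforwarder/etc/system/local/outputs.conf

# Fail fast if the deployment did not provide an indexer address.
if [ -z "${SPLUNK_BASE_HOST}" ]; then
  echo "SPLUNK_BASE_HOST is not set" >&2
  exit 1
fi

# Replace the literal ${SPLUNK_BASE_HOST} placeholder baked into outputs.conf at build time.
sed -i "s|\${SPLUNK_BASE_HOST}|${SPLUNK_BASE_HOST}|g" "${CONF}"

# Run the forwarder in the foreground so the sidecar container stays up.
exec /opt/splunkforwarder/bin/splunk start --nodaemon ${SPLUNK_START_ARGS}

In the Dockerfile you would COPY this script into the image and point ENTRYPOINT (or CMD) at it, so the substitution runs on every container start and picks up whatever SPLUNK_BASE_HOST the environment-specific deployment.yaml supplies.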

Related

Using a variable within a path in Kubernetes

I have a simple StatefulSet with two containers. I just want to share a path between them using an emptyDir volume:
volumes:
  - name: shared-folder
    emptyDir: {}
The first container is a busybox:
- image: busybox
  name: test
  command:
    - sleep
    - "3600"
  volumeMounts:
    - mountPath: /cache
      name: shared-folder
The second container creates a file at /cache/<POD_NAME>. I want to mount both paths within the emptyDir volume so the containers can share files.
volumeMounts:
  - name: shared-folder
    mountPath: /cache/$(HOSTNAME)
The problem: the second container doesn't resolve /cache/$(HOSTNAME), so instead of mounting /cache/pod-0 it mounts /cache/$(HOSTNAME). I have also tried getting the POD_NAME and setting it as an env variable, but it doesn't resolve it either.
Does anybody know if it is possible to use a path like this (with env variables) in the mountPath attribute?
To use a mountPath with an env variable you can use subPathExpr, which expands environment variables (k8s v1.17+).
In your case it would look like the following:
containers:
  - env:
      - name: MY_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - mountPath: /cache
        name: shared-folder
        subPathExpr: $(MY_POD_NAME)
I tested this, and with plain Kubernetes (k8s < 1.16) it isn't possible to achieve what you want with env variables: the variable only becomes available after the pod is deployed, and you are referencing it before that happens.
You can use Helm to drive both the mountPath and the StatefulSet name from the same value: define the value in values.yaml, then reference it for the mountPath field and the StatefulSet name. You can see more about this here.
Edit:
Follow Matt's answer if you are using k8s 1.17 or higher.
The problem is that YAML configuration files are POSTed to Kubernetes exactly as they are written. This means that you need to create a templated YAML file in which the referenced placeholders can be replaced with values taken from environment variables before the file is submitted.
As this is a known "quirk" of Kubernetes, there already exist tools to work around this problem. Helm is one of those tools and is very pleasant to use.
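A minimal sketch of the Helm templating idea for the mountPath case; the chart value name cacheSubdir is illustrative and not part of the original answers:

# templates/statefulset.yaml (fragment): Helm renders the placeholder client-side,
# so the API server only ever receives a literal path.
volumeMounts:
  - name: shared-folder
    mountPath: "/cache/{{ .Values.cacheSubdir }}"

Rendered with, for example, helm install --set cacheSubdir=pod-0, this produces mountPath: /cache/pod-0 in the submitted manifest. The substitution happens at install/upgrade time rather than per pod, which is why the subPathExpr approach above is preferable on k8s 1.17+.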

Appending a key to CATALINA_OPTS using Kubernetes

I have some CATALINA_OPTS properties (database port, user and so on) set up in a ConfigMap. This ConfigMap is then exposed to the container through a Pod environment variable.
One of the CATALINA_OPTS properties is the database password, and it needs to be moved from the ConfigMap to a Secret.
I can expose a key from the Secret through an environment variable:
apiVersion: v1
kind: Pod
...
containers:
  - name: myContainer
    image: myImage
    env:
      - name: CATALINA_OPTS
        valueFrom:
          configMapKeyRef:
            name: catalina_opts
            key: CATALINA_OPTS
      - name: MY_ENV_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-pass
            key: my-pass
The thing is, I need to append this password to CATALINA_OPTS. I tried to do it in the Dockerfile:
RUN export CATALINA_OPTS="$CATALINA_OPTS -Dmy.password=$MY_ENV_PASSWORD"
However, MY_ENV_PASSWORD is not being appended to the existing CATALINA_OPTS. When I list my environment variables (I'm checking the log in Jenkins) I cannot see the password.
Am I doing something wrong here? Is there any 'regular' way to do this?
Dockerfile RUN steps are run as part of your image build step and NOT during your image execution. Hence, you cannot rely on RUN export (build step) to set K8S environment variables for your container (run step).
Remove the RUN export from your Dockerfile and ensure you are setting CATALINA_OPTS in your catalina_opts ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: catalina_opts
data:
  SOME_ENV_VAR: INFO
  CATALINA_OPTS: opts... -Dmy.password=$MY_ENV_PASSWORD
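As an alternative sketch, assuming your image reads CATALINA_OPTS from the environment: Kubernetes expands $(VAR) references in an env value using variables defined earlier in the same env list (including ones sourced from ConfigMaps and Secrets), so the password can be appended in the pod spec without storing it in the ConfigMap. The names catalina-base-opts and BASE_CATALINA_OPTS are illustrative:

env:
  - name: BASE_CATALINA_OPTS
    valueFrom:
      configMapKeyRef:
        name: catalina-base-opts   # illustrative ConfigMap holding the non-secret opts
        key: CATALINA_OPTS
  - name: MY_ENV_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-pass
        key: my-pass
  - name: CATALINA_OPTS            # assembled by the kubelet when the container starts
    value: "$(BASE_CATALINA_OPTS) -Dmy.password=$(MY_ENV_PASSWORD)"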

How can I switch the active profile to "dev" in a JHipster micro-service app deployed to Kubernetes?

I have a micro-services based JHipster app and have generated a Kubernetes deployment script using the kubernetes sub-generator.
I have deployed the app to Azure AKS and have it running smoothly. The current profile it is running with is 'prod'. How can I change the active profile to 'dev' in order to view the Swagger documentation?
I managed to get the Swagger API working by adding swagger to the SPRING_PROFILES_ACTIVE environment variable in every container's deployment file.
spec:
  ...
  containers:
    - name: core-app
      image: myrepo.azurecr.io/core
      env:
        - name: SPRING_PROFILES_ACTIVE
          value: prod,swagger
For everyone who is here because they are googling why Swagger isn't enabled on prod in a Heroku installation despite setting it in application-prod.yml, application-heroku.yml, the SPRING_PROFILES_ACTIVE env variable, and the Maven start-up parameters in the MAVEN_CUSTOM_OPTS Heroku config variable...
It looks like the profile actually used by the Heroku prod run is set in the Procfile.

Can an OpenShift template parameter refer to the project name in which it is being deployed?

I am trying to deploy the Kong API Gateway via a template to my OpenShift project. The problem is that Kong seems to be doing some DNS tricks that cause sporadic DNS resolution failures. The workaround is to use the FQDN (<name>.<project_name>.svc.cluster.local). So, in my template I would like to do:
- env:
    - name: KONG_DATABASE
      value: postgres
    - name: KONG_PG_HOST
      value: "{APP_NAME}.{PROJECT_NAME}.svc.cluster.local"
I am just not sure how to get the current PROJECT_NAME, or if perhaps there is a default set of available parameters...
You can read the namespace (project name) from the Kubernetes downward API into an environment variable and then use that in the value.
See the OpenShift docs here for an example.
Update based on Clayton's comment:
I tested it, and the following snippet from the deployment config works.
- env:
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: EXAMPLE
      value: example.$(MY_POD_NAMESPACE)
Inside the running container:
sh-4.2$ echo $MY_POD_NAMESPACE
testing
sh-4.2$ echo $EXAMPLE
example.testing
In the environment screen of the UI it appears as a string value such as example.$(MY_POD_NAMESPACE)

How to pass a configuration file through YAML on Kubernetes to create a new replication controller

I am trying to pass a configuration file (which is located on the master) to an nginx container at the time of replication controller creation through Kubernetes, e.g. the way the ADD command is used in a Dockerfile...
There isn't a way to dynamically add a file to a pod specification when instantiating it in Kubernetes.
Here are a couple of alternatives (that may solve your problem):
Build the configuration file into your container (using the docker ADD command). This has the advantage that it works in the way which you are already familiar but the disadvantage that you can no longer parameterize your container without rebuilding it.
Use environment variables instead of a configuration file. This may require some refactoring of your code (or creating a side-car container to turn environment variables into the configuration file that your application expects).
Put the configuration file into a volume. Mount this volume into your pod and read the configuration file from the volume (see the sketch after this list).
Use a secret. This isn't the intended use for secrets, but secrets manifest themselves as files inside your container, so you can base64 encode your configuration file, store it as a secret in the apiserver, and then point your application to the location of the secret file that is created inside your pod.
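A minimal sketch of the volume approach, here using a ConfigMap as the volume source; the names nginx-conf and the mount path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-config
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: nginx-conf          # the ConfigMap's keys show up as files under this path
          mountPath: /etc/nginx/conf.d
  volumes:
    - name: nginx-conf
      configMap:
        name: nginx-conf            # illustrative ConfigMap holding the configuration file

With a ConfigMap created from your configuration file (for example, kubectl create configmap nginx-conf --from-file=default.conf), the file appears inside the container at /etc/nginx/conf.d/default.conf without being baked into the image.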
I believe you can also download the config during container initialization.
See the example below; you could download a config file instead of index.html, but I would not use this for sensitive info like passwords.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox
      command:
        - wget
        - "-O"
        - "/work-dir/index.html"
        - http://kubernetes.io
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}