How to run Consul via Kubernetes?

I tried running the pod from the Consul Kubernetes guide (https://www.consul.io/docs/platform/k8s/run.html).
It failed with: containers with unready status: [consul]
kubectl create -f consul-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: consul-example
spec:
  containers:
  - name: example
    image: "consul:latest"
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    command:
    - "/bin/sh"
    - "-ec"
    - |
      export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
      consul kv put hello world
  restartPolicy: Never


How to get self pod with kubernetes client-go

I have a Kubernetes service, written in Go, that uses client-go to access the Kubernetes APIs.
I need the Pod object of the service's own pod.
The PodInterface allows me to iterate over all pods, but what I need is a "self" concept to get the currently running pod that is executing my code.
It appears that by reading /var/run/secrets/kubernetes.io/serviceaccount/namespace and searching the pods in that namespace for the one whose name matches the hostname, I can determine the "self" pod.
Is this the proper solution?
Expose the pod name and namespace to your pod as environment variables via the Downward API, then use those values to fetch your own Pod object from the API (see the client-go sketch after the manifest below).
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c"]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_POD_NAME MY_POD_NAMESPACE;
        sleep 10;
      done;
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
  restartPolicy: Never
Ref: environment-variable-expose-pod-information
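With those variables in place, the service can read them and ask the API server for its own Pod. A minimal client-go sketch, assuming in-cluster configuration, client-go v0.18+ (context-aware Get signature), and the MY_POD_NAME / MY_POD_NAMESPACE variables from the manifest above (not part of the original answer):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config uses the pod's mounted service account token.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// MY_POD_NAME and MY_POD_NAMESPACE are injected via the Downward API
	// in the pod manifest above.
	name := os.Getenv("MY_POD_NAME")
	namespace := os.Getenv("MY_POD_NAMESPACE")

	// Fetch the Pod object that is running this code ("self").
	self, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("running as pod %s on node %s\n", self.Name, self.Spec.NodeName)
}

Note that the pod's service account needs RBAC permission to get pods in its own namespace for the Get call to succeed.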

Kubernetes promtail sidecar: how to get labels from the parent pod metadata

I have some Kubernetes applications that log to files rather than stdout/stderr, and I collect the logs with Promtail sidecars. But since the sidecars scrape a static "localhost" target, there is no kubernetes_sd_config to apply pod metadata as labels for me, so I'm stuck declaring my labels statically.
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: promtail
  name: sidecar-promtail
data:
  config.yml: |
    client:
      url: http://loki.loki.svc.cluster.local:3100/loki/api/v1/push
      backoff_config:
        max_period: 5m
        max_retries: 10
        min_period: 500ms
      batchsize: 1048576
      batchwait: 1s
      external_labels: {}
      timeout: 10s
    positions:
      filename: /tmp/promtail-positions.yaml
    server:
      http_listen_port: 3101
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: sidecar-logs
      static_configs:
      - targets:
        - localhost
        labels:
          job: sidecar-logs
          __path__: "/sidecar-logs/*.log"
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-logger
spec:
  selector:
    matchLabels:
      run: test-logger
  template:
    metadata:
      labels:
        run: test-logger
    spec:
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs-claim
      - name: promtail-config
        configMap:
          name: sidecar-promtail
      containers:
      - name: sidecar-promtail
        image: grafana/promtail:2.1.0
        volumeMounts:
        - name: nfs
          mountPath: /sidecar-logs
        - name: promtail-config
          mountPath: /etc/promtail
      - name: simple-logger
        image: foo/simple-logger
        volumeMounts:
        - name: nfs
          mountPath: /logs
What is the best way to label the collected logs based on the parent pod's metadata?
You can do the following:
In the sidecar container, expose the pod name, node name, and any other information you need as environment variables, then add the flag '-config.expand-env' to enable environment expansion inside the promtail config file, e.g.:
...
- name: sidecar-promtail
  image: grafana/promtail:2.1.0
  # image: grafana/promtail:2.4.1 # use this one if environment expansion is not available in 2.1.0
  args:
  # Enable environment expansion in the promtail config file
  - '-config.expand-env'
  env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
...
Then, in your ConfigMap, reference the environment variables in your static_configs labels like so:
...
scrape_configs:
- job_name: sidecar-logs
  static_configs:
  - targets:
    - localhost
    labels:
      job: sidecar-logs
      pod: ${POD_NAME}
      node_name: ${NODE_NAME}
      __path__: "/sidecar-logs/*.log"
...
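Putting the two pieces together, the sidecar container in the Deployment above would end up looking roughly like the sketch below. The explicit -config.file flag is an assumption, not part of the original answer: setting args in Kubernetes replaces the image's default arguments, so the config path is restated here to be safe.

- name: sidecar-promtail
  image: grafana/promtail:2.1.0
  args:
  - '-config.file=/etc/promtail/config.yml'  # restated because args overrides the image defaults
  - '-config.expand-env=true'                # expands ${POD_NAME}/${NODE_NAME} inside config.yml
  env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - name: nfs
    mountPath: /sidecar-logs
  - name: promtail-config
    mountPath: /etc/promtail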

subPathExpr for parallel pods

We have parallel Jobs on EKS, and we would like the jobs to write to a hostPath.
We are using subPathExpr with an environment variable, as described in the documentation. However, after the run the hostPath contains only one folder, probably due to a race condition between the parallel jobs, with whichever job got hold of the hostPath first winning.
We are on Kubernetes 1.17. Is subPathExpr meant for this use case of allowing parallel jobs to write to the same hostPath? What are other options for letting parallel jobs write to a host volume?
apiVersion: batch/v1
kind: Job
metadata:
  name: gatling-job
spec:
  ttlSecondsAfterFinished: 300 # delete after 5 minutes
  completions: 5
  parallelism: 5
  backoffLimit: 0
  template:
    spec:
      restartPolicy: "Never"
      containers:
      - name: gatling
        image: GATLING_IMAGE_NAME
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        volumeMounts:
        - name: perftest-results
          mountPath: /opt/gatling/results
          subPathExpr: $(POD_NAME)
      volumes:
      - name: perftest-results
        hostPath:
          path: /data/perftest-results
Tested with a simple job template as below; files were created in their respective folders and it worked as expected.
Will investigate the actual project. Closing for now.
apiVersion: batch/v1
kind: Job
metadata:
  name: subpath-jobs
  labels:
    name: subpath-jobs
spec:
  completions: 5
  parallelism: 5
  backoffLimit: 0
  template:
    spec:
      restartPolicy: "Never"
      containers:
      - name: busybox
        image: busybox
        workingDir: /outputs
        command: [ "touch" ]
        args: [ "a_file.txt" ]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        volumeMounts:
        - name: job-output
          mountPath: /outputs
          subPathExpr: $(POD_NAME)
      volumes:
      - name: job-output
        hostPath:
          path: /data/outputs
          type: DirectoryOrCreate
# ls -R /data
/data:
outputs
/data/outputs:
subpath-jobs-6968q subpath-jobs-6zp4x subpath-jobs-nhh96 subpath-jobs-tl8fx subpath-jobs-w2h9f
/data/outputs/subpath-jobs-6968q:
a_file.txt
/data/outputs/subpath-jobs-6zp4x:
a_file.txt
/data/outputs/subpath-jobs-nhh96:
a_file.txt
/data/outputs/subpath-jobs-tl8fx:
a_file.txt
/data/outputs/subpath-jobs-w2h9f:
a_file.txt

How to add a ConfigMap created from a .txt file to a pod?

I am trying to make a simple ConfigMap from a config.txt file:
config.txt:
----------
key1=val1
key2=val2
This is the pod yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: KEY_VALUES
      valueFrom:
        configMapKeyRef:
          name: keyvalcfgmap
          key1: key1
          key2: key2
By running kubectl create configmap keyvalcfgmap --from-file=<filepath> -o yaml > configmap.yaml and applying the created ConfigMap, I supposedly can use it in a pod. The question is how? I tried adding it as a volume, calling it using --from-file=, and even envFrom, but the best I could get was the volume mounting the file itself rather than exposing the ConfigMap keys.
You can use envFrom like this
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: keyvalcfgmap #<--------------Here
  restartPolicy: Never
or you can consume individual ConfigMap keys as environment variables:
env:
- name: KEY1
  valueFrom:
    configMapKeyRef:
      name: keyvalcfgmap #<--------------Here
      key: key1
- name: KEY2
  valueFrom:
    configMapKeyRef:
      name: keyvalcfgmap #<--------------Here
      key: key2
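One caveat not covered in the answer above: for envFrom or configMapKeyRef to find key1 and key2, the ConfigMap must contain them as individual keys. kubectl create configmap keyvalcfgmap --from-file=config.txt stores the whole file under a single key named config.txt; creating the ConfigMap with --from-env-file=config.txt instead parses the key=value lines into separate keys, producing roughly this (a sketch of the generated object):

# kubectl create configmap keyvalcfgmap --from-env-file=config.txt
apiVersion: v1
kind: ConfigMap
metadata:
  name: keyvalcfgmap
data:
  key1: val1
  key2: val2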

Kubernetes NFS volume with dynamic path

I am trying to mount my applications' log directories onto NFS under a path that dynamically includes the node name.
No success so far.
I tried the following:
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pod
spec:
  containers:
  - name: app
    image: alpine
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs
      subPath: /$(NODE_NAME)
    command: ["/bin/sh"]
    args: ["-c", "sleep 500000"]
  volumes:
  - name: nfs-volume
    nfs:
      server: ip_adress_here
      path: /mnt/events
I think instead of subPath you should use subPathExpr, as mentioned in the documentation.
Use the subPathExpr field to construct subPath directory names from Downward API environment variables. This feature requires the VolumeSubpathEnvExpansion feature gate to be enabled. It is enabled by default starting with Kubernetes 1.15. The subPath and subPathExpr properties are mutually exclusive.
In this example, a Pod uses subPathExpr to create a directory pod1 within the hostPath volume /var/log/pods, using the pod name from the Downward API. The host directory /var/log/pods/pod1 is mounted at /logs in the container.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    image: busybox
    command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(POD_NAME)
  restartPolicy: Never
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
Hope that's it.
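Applied to the pod from the question, that means swapping subPath for subPathExpr on the NFS mount. A minimal sketch of just the changed mount, keeping the question's NODE_NAME variable and NFS volume:

    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs
      # subPathExpr (not subPath) expands Downward API environment variables
      subPathExpr: $(NODE_NAME)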