kubectl run with env var from a secret config - kubernetes

How can I issue a kubectl run that pulls an environment variable from a k8s secret?
Currently I have:
kubectl run oneoff -i --rm --image=IMAGE --env SECRET=foo

Look into the --overrides flag of the run command; its description reads:
An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
So in your case I guess it would be something like:
kubectl run oneoff -i --rm --overrides='
{
  "spec": {
    "containers": [
      {
        "name": "oneoff",
        "image": "IMAGE",
        "env": [
          {
            "name": "ENV_NAME",
            "valueFrom": {
              "secretKeyRef": {
                "name": "SECRET_NAME",
                "key": "SECRET_KEY"
              }
            }
          }
        ]
      }
    ]
  }
}
' --image=IMAGE
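If you want to sanity-check the result before actually creating anything, a client-side dry run should print the generated pod with the override merged in instead of running it. A quick sketch, assuming a kubectl recent enough to support --dry-run=client, with the same JSON stashed in a shell variable for readability (the variable name is made up):

# Hypothetical variable holding the same override JSON shown above
OVERRIDES='{"spec":{"containers":[{"name":"oneoff","image":"IMAGE","env":[{"name":"ENV_NAME","valueFrom":{"secretKeyRef":{"name":"SECRET_NAME","key":"SECRET_KEY"}}}]}]}}'

# Print the generated object without creating it
kubectl run oneoff --image=IMAGE --restart=Never --overrides="$OVERRIDES" --dry-run=client -o yaml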

Another option that does the trick:
kubectl run oneoff -i --rm --image=IMAGE --env SECRET=$(kubectl get secret your-secret -o=jsonpath="{.data['secret\.yml']}" | base64 --decode)
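If you want to try that lookup in isolation first, a throwaway secret works; the name your-secret and the key secret.yml mirror the command above, and the literal value is just an example:

# Create a dummy secret with a single key named secret.yml
kubectl create secret generic your-secret --from-literal=secret.yml=foo

# Secret values are stored base64-encoded under .data, hence the decode step
kubectl get secret your-secret -o=jsonpath="{.data['secret\.yml']}" | base64 --decode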

Related

How to run once an openshift job with configmap

I have figured out how to run a one-time job in OpenShift (an alternative to docker run):
oc run my-job --replicas=1 --restart=Never --rm -ti --command /bin/true --image busybox
How can I mount a configmap into the job container?
You can use the --overrides flag:
oc run my-job --overrides='
{
  "apiVersion": "v1",
  "kind": "Pod",
  "spec": {
    "containers": [
      {
        "image": "busybox",
        "name": "mypod",
        "volumeMounts": [
          {
            "mountPath": "/path",
            "name": "configmap"
          }
        ]
      }
    ],
    "volumes": [
      {
        "configMap": {
          "name": "myconfigmap"
        },
        "name": "configmap"
      }
    ]
  }
}
' --replicas=1 --restart=Never --rm -ti --command /bin/true --image busybox
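For this to work the configmap has to exist first, of course. A minimal sketch (the key and value are made up):

# Hypothetical configmap matching the "myconfigmap" name referenced in the override
oc create configmap myconfigmap --from-literal=mykey=myvalue

Each key of the configmap then shows up as a file under the /path mount inside the container.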

How can I manually set values on a Kubernetes object using kubectl create?

In particular, I want to set environment variables. I have a CronJob definition that runs on a schedule, but every so often I want to invoke it manually while specifying slightly different environment variables.
I can invoke the cron job manually with this command:
kubectl create job --from=cronjob/my-cron-job my-manual-run
But that copies in all the same environment variables that are specified in the resource definition. How can I add additional, new environment variables using this create job command?
I built on the answer from @Rico: first create the job with kubectl as a --dry-run, then modify the job using jq, then apply it. This removes the need for a base JSON file and for managing additional job metadata fields.
For example:
$ kubectl create job --from=cronjob/my-cron-job my-manual-run --dry-run -o "json" \
| jq ".spec.template.spec.containers[0].env += [{ \"name\": \"envname1\", value:\"$envvalue1\" }]" \
| kubectl apply -f -
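To confirm the extra variable actually landed in the job spec, something like this works (using the my-manual-run name from above):
$ kubectl get job my-manual-run -o jsonpath='{.spec.template.spec.containers[0].env}'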
The easiest approach, IMO, is to have a base JSON file and modify it. The output of kubectl get cronjob jobname has a lot of other information that you don't need.
For example:
{
  "apiVersion": "batch/v1",
  "kind": "Job",
  "metadata": {
    "name": "changeme"
  },
  "spec": {
    "template": {
      "metadata": {
        "labels": {
          "job-name": "changeme"
        }
      },
      "spec": {
        "restartPolicy": "Never",
        "containers": [
          {
            "command": [
              "perl",
              "-Mbignum=bpi",
              "-wle",
              "print bpi(2000)"
            ],
            "image": "perl",
            "name": "pi"
          }
        ]
      }
    }
  }
}
Then run something like this:
$ cat yourjobtemplate.json \
| jq '. + {metadata: {name: "mynewjobname"}}' \
| jq '.spec.template.metadata.labels |= . + {"job-name": "mynewjobname"}' \
| jq '.spec.template.spec.containers[0] |= . + {"env": [{name: "envname1", value: "envvalue1"}, {name: "envname2", value: "envvalue2"}]}' \
| kubectl apply -f -
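Once the job has run, you can check its output and clean it up afterwards; the name is the mynewjobname used above:
$ kubectl logs job/mynewjobname
$ kubectl delete job mynewjobname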

Telepresence fails, saying my namespace doesn't exist, pointing to problems with my k8s context

I've been working with a bunch of k8s clusters for a while, using kubectl from the command line to examine information. I don't actually call kubectl directly; I wrap it in multiple scripting layers. I also don't use contexts, as it's much easier for me to specify different clusters in a different way. The resulting kubectl command line has explicit --server, --namespace, and --token parameters (and one other flag to disable TLS verification).
This all works fine. I have no trouble with this.
However, I'm now trying to use telepresence, which doesn't give me a choice (yet) of not using contexts to configure this. So, I now have to figure out how to use contexts.
I ran the following (approximate) command:
kubectl config set-context mycontext --server=https://host:port --namespace=abc-def-ghi --insecure-skip-tls-verify=true --token=mytoken
And it said: Context "mycontext" modified.
I then ran "kubectl config view -o json" and got this:
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [],
  "users": [],
  "contexts": [
    {
      "name": "mycontext",
      "context": {
        "cluster": "",
        "user": "",
        "namespace": "abc-def-ghi"
      }
    }
  ],
  "current-context": "mycontext"
}
That doesn't look right to me.
I then ran something like this:
telepresence --verbose --swap-deployment mydeployment --expose 8080 --run java -jar target/my.jar -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n
And it said this:
T: Error: Namespace 'abc-def-ghi' does not exist
Update:
And I can confirm that this isn't a problem with telepresence. If I just run "kubectl get pods", it fails, saying "The connection to the server localhost:8080 was refused". That tells me it obviously can't connect to the k8s server. The key is my "set-context" command. It's obviously not working, and I don't understand what I'm missing.
You don't have any clusters or credentials defined in your configuration. First, you need to define a cluster:
$ kubectl config set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
Then something like this for the user:
$ kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-seefile
Then you define your context based on your cluster, user and namespace:
$ kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
More information is in the Kubernetes documentation on configuring access to multiple clusters.
Your config should look something like this:
$ kubectl config view -o json
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [
    {
      "name": "development",
      "cluster": {
        "server": "https://1.2.3.4",
        "certificate-authority-data": "DATA+OMITTED"
      }
    }
  ],
  "users": [
    {
      "name": "developer",
      "user": {
        "client-certificate": "fake-cert-file",
        "client-key": "fake-key-seefile"
      }
    }
  ],
  "contexts": [
    {
      "name": "dev-frontend",
      "context": {
        "cluster": "development",
        "user": "developer",
        "namespace": "frontend"
      }
    }
  ],
  "current-context": "dev-frontend"
}
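For your specific setup (token auth, TLS verification disabled), a rough sketch of the equivalent commands might be the following; the cluster and user names here are made up:
$ kubectl config set-cluster mycluster --server=https://host:port --insecure-skip-tls-verify=true
$ kubectl config set-credentials myuser --token=mytoken
$ kubectl config set-context mycontext --cluster=mycluster --user=myuser --namespace=abc-def-ghi
$ kubectl config use-context mycontext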

kubernetes - volume mapping via command

I need to map a volume while starting the container. I am able to do so with a yaml file.
Is there a way volume mapping can be done via the command line without using a yaml file, just like the -v option in docker?
without using a yaml file
Technically, yes: you would need a json file, as illustrated in "Create kubernetes pod with volume using kubectl run"
See kubectl run.
kubectl run -i --rm --tty ubuntu --overrides='
{
  "apiVersion": "batch/v1",
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": [
              "bash"
            ],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [{
              "mountPath": "/home/store",
              "name": "store"
            }]
          }
        ],
        "volumes": [{
          "name": "store",
          "emptyDir": {}
        }]
      }
    }
  }
}
' --image=ubuntu:14.04 --restart=Never -- bash
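If what you are really after is Docker's -v style host mapping, the same override works with a hostPath volume in place of the emptyDir. A sketch, with /data standing in for whatever directory actually exists on the node:

kubectl run -i --rm --tty ubuntu --overrides='
{
  "apiVersion": "batch/v1",
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": ["bash"],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [{
              "mountPath": "/home/store",
              "name": "store"
            }]
          }
        ],
        "volumes": [{
          "name": "store",
          "hostPath": {"path": "/data"}
        }]
      }
    }
  }
}
' --image=ubuntu:14.04 --restart=Never -- bash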

Create kubernetes pod with volume using kubectl run

I understand that you can create a pod with Deployment/Job using kubectl run. But is it possible to create one with a volume attached to it? I tried running this command:
kubectl run -i --rm --tty ubuntu --overrides='{ "apiVersion":"batch/v1", "spec": {"containers": {"image": "ubuntu:14.04", "volumeMounts": {"mountPath": "/home/store", "name":"store"}}, "volumes":{"name":"store", "emptyDir":{}}}}' --image=ubuntu:14.04 --restart=Never -- bash
But the volume does not appear in the interactive bash.
Is there a better way to create a pod with a volume that you can attach to?
Your JSON override is specified incorrectly. Unfortunately kubectl run just ignores fields it doesn't understand.
kubectl run -i --rm --tty ubuntu --overrides='
{
  "apiVersion": "batch/v1",
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": [
              "bash"
            ],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [{
              "mountPath": "/home/store",
              "name": "store"
            }]
          }
        ],
        "volumes": [{
          "name": "store",
          "emptyDir": {}
        }]
      }
    }
  }
}
' --image=ubuntu:14.04 --restart=Never -- bash
To debug this issue I ran the command you specified, and then in another terminal ran:
kubectl get job ubuntu -o json
From there you can see that the actual job structure differs from your json override (you were missing the nested template/spec, and volumes, volumeMounts, and containers need to be arrays).
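If you just want the parts relevant to this particular comparison, jsonpath can narrow the output down; for instance:
kubectl get job ubuntu -o jsonpath='{.spec.template.spec.volumes}'
kubectl get job ubuntu -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}'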