I want to add an annotation holding an expiry time to a Kubernetes resource (an RBACDefinition object).
How do I add the expiry time as an annotation?
The pseudocode is something like this:
annotations:
  expiry-time: {{ current date + 1 hour }}
How do I add this custom annotation? In what language does the code for this custom annotation need to be written?
If you are using a *nix shell like bash, you can use the date command 🔧 and the kubectl patch command 🧰:
kubectl patch <k8s-resource> <resource-name> -p \
"{\"metadata\":{\"annotations\":{\"expiry-time\":\"`date -d '1 hour' '+%m-%d-%Y-%H:%M:%S'`\"}}}"
If you are on a Mac you can substitute the date command with this (-v+1H adds one hour):
date -v+1H '+%m-%d-%Y-%H:%M:%S'
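Putting the two together on macOS, the full command could look like this (just a sketch; rbacdefinition and joe-access are the resource kind and name used in the comment below, and since RBACDefinition is a custom resource you may need --type merge):
kubectl patch rbacdefinition joe-access --type merge -p \
  "{\"metadata\":{\"annotations\":{\"expiry-time\":\"$(date -v+1H '+%m-%d-%Y-%H:%M:%S')\"}}}"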
✌️☮️
This worked:
kubectl annotate rbacdefinition joe-access "expires-at=$(date -v+1H '+%m/%d/%Y -%H:%M:%S')"
I have the following question: I am trying to connect to an EKS cluster using Terraform with GitLab CI/CD. I receive the error message below, but when I try it on my own machine the error does not appear. Has anyone had the same error?
$ terraform output authconfig > authconfig.yaml
$ cat authconfig.yaml
<<EOT
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: "arn:aws:iam::503655390180:role/clusters-production-workers"
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOT
$ kubectl create -f authconfig.yaml -n kube-system
error: error parsing authconfig.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context
The output includes the EOT (end of text) markers because it was originally generated as a multiline (heredoc) string.
As the documentation suggests (Terraform doc link):
Don't use "heredoc" strings to generate JSON or YAML. Instead, use the
jsonencode function or the yamlencode function so that Terraform can
be responsible for guaranteeing valid JSON or YAML syntax.
So use the jsonencode or yamlencode function before building the output.
If you want to continue with what you have now, pass the -json or -raw option to terraform output:
terraform output -json authconfig > authconfig.yaml
or
terraform output -raw authconfig > authconfig.yaml
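For example, regenerating the file and re-running the kubectl command from the question (a sketch; the cat is only there to verify the EOT markers are gone):
terraform output -raw authconfig > authconfig.yaml
cat authconfig.yaml   # should now start with "apiVersion: v1", with no <<EOT/EOT markers
kubectl create -f authconfig.yaml -n kube-system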
The error message tells you the authconfig.yaml file cannot be converted from YAML to JSON, which suggests it is not valid YAML.
The cat authconfig.yaml you're showing us includes some <<EOT and EOT tags. I would suggest removing those before running kubectl create -f.
Your comment suggests you knew this already, so why not ask about Terraform rather than showing us kubectl create failing? From your post, it really sounded like you copy/pasted the output of your job without even reading it.
So, obviously, the next step is terraform output -raw or -json. There are several mentions of this in their docs and knowledge base; a Google search would point you to:
https://discuss.hashicorp.com/t/terraform-outputs-with-heredoc-syntax-leaves-eot-in-file/18584/7
https://www.terraform.io/docs/cli/commands/output.html
Last: we could ask why. Why would you redirect terraform output > something when you can have Terraform write the file for you?
And as a general rule, whenever writing Terraform stdout/stderr to files, I strongly suggest using -no-color.
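For example (a sketch; plan.txt is just an arbitrary file name):
terraform plan -no-color > plan.txt 2>&1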
The doc https://docs.openshift.com/container-platform/3.9/dev_guide/cron_jobs.html provides details of creating a cron job.
To start a scheduled task that executes a build every 10 mins I use the command:
oc run run-build 161/my-app --image=myimage --restart=OnFailure --schedule='*/10 * * * *'
Which returns:
cronjob.batch/run-build created
But the job fails to start:
The log of pod displays:
Error: unknown command "161/my-app" for "openshift-deploy"
Run 'openshift-deploy --help' for usage.
Have I configured the command (oc run run-build 161/my-app --image=myimage --restart=OnFailure --schedule='*/10 * * * *') to start the cron job incorrectly?
You are trying to override the image's CMD/ARG with the 161/my-app command (which does not seem to be valid).
You should use:
oc run run-build --image=myimage --schedule='*/10 * * * *' \
--restart=OnFailure \
--command -- <YOUR COMMAND HERE>
Where run-build is the name of your created cronjob.
If you want to use the default CMD/ARG built in the container image, just omit the --command flag and its value.
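For instance, if the intent of 161/my-app was to trigger a build of a BuildConfig named my-app, and the image ships the oc client (both of which are assumptions here, so treat this as a sketch), it could look like:
# assumes "myimage" contains the oc client and a BuildConfig named "my-app" exists
oc run run-build --image=myimage --schedule='*/10 * * * *' \
  --restart=OnFailure \
  --command -- oc start-build my-app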
First of all, it is not easy to find full documentation for oc run, so let's work it out from the source code.
Since cronjob.batch/run-build has been created, the job is scheduled by Kubernetes, so there is probably no problem with the schedule part.
The problem now is why the image run failed.
We can see it in the logs: 161/my-app is recognized as an argument to the command openshift-deploy, which must be the CMD defined in --image=myimage:
Error: unknown command "161/my-app" for "openshift-deploy"
Run 'openshift-deploy --help' for usage.
You have to explain what 161/my-app is supposed to do and update the command based on that.
A Docker image usually defines a default CMD, so we have to decide whether to use the default CMD:
If the default CMD should be used and you only want to modify the args, check this example:
oc run nginx --image=nginx -- <arg1> <arg2> ... <argN>
If a new CMD and its args should be used, check this example:
oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>
I noticed two more things in your question; you can check these and update the question if necessary:
For the openshift-deploy part, you may refer here.
For the openshift-build part, you may refer here.
I have a file called sftp.yaml, downloaded through helm inspect.
I have a parameter in that sftp.yaml file:
sftp:
  allowedMACs: "hmac-sha2-512"
  allowedCiphers: aes256-ctr
Now if I install the corresponding Helm chart after commenting out the entire "allowedMACs" line in the custom values file, i.e. sftp.yaml, then Helm takes the delta of sftp.yaml and the chart's own values.yaml and uses the "allowedMACs" from values.yaml.
However, what I want is: if the "allowedMACs" line is commented out in the sftp.yaml custom values file, then it should not set the env variable at all, or should set it to null.
Presently my deployment file's env section looks like this:
- name: MACs
  value: {{ default "" .Values.sftp.allowedMACs | quote }}
You need to either override the value (with a new one) or unset it; if you only comment out the line you are doing neither of those, and the default value is going to be used.
Basically you are looking to unset a default value. As per the Banzai Cloud example, this can be done like so:
helm install stable/chart-name --set sftp.allowedMACs=null
You can also use an override values file in a similar way:
sftp:
  allowedMACs: null
  allowedCiphers: aes256-ctr
This is available in Helm since version 2.6. If you like in-depth information you can review the issue and the subsequent PR that introduced the feature.
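An alternative, not covered by the answer above but sketched here against the same .Values.sftp.allowedMACs path, is to make the env entry itself conditional in the deployment template, so the variable is not rendered at all when the value is empty or null:
{{- /* only render the MACs env var when allowedMACs is set */}}
{{- if .Values.sftp.allowedMACs }}
- name: MACs
  value: {{ .Values.sftp.allowedMACs | quote }}
{{- end }}
Combined with the null override shown above, the MACs variable would then not be set on the container at all.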
Yeah, I think Helm retrieves values from all the values files, so if allowedMACs is in one of them it will get populated. If this parameter is only affected by the sftp.yaml file, shouldn't it belong only to that file, and would it make sense to remove it from the main values.yaml?
I have developed an OpenShift template which basically creates two objects (a cluster and a container operator).
I understand that templates run oc create under the hood, so if either of these two objects already exists, trying to create the objects through the template throws an error. Is there any way to override this behaviour? I want my template to re-configure the object even if it already exists.
You can use "oc process" which renders template into set of manifests:
oc process foo PARAM1=VALUE1 PARAM2=VALUE2 | oc apply -f -
or
oc process -f template.json PARAM1=VALUE1 PARAM2=VALUE2 | oc apply -f -
I have a values.yaml where I need to mention multiple ports like the following:
kafkaClientPort:
- 32000
- 32001
- 32002
In the YAML for the StatefulSet, I need to get the value using the ordinal number.
So for kf-0, I need to use the first element of kafkaClientPort; for kf-1, the second element; and so on.
I am trying the following:
args:
- "KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://$(MY_NODE_NAME):{{ index .Values.kafkaClientPort ${HOSTNAME##*-} }}"
But it is showing an error.
Please advise on the best way to access values.yaml values dynamically.
The trick here is that the Helm template doesn't know anything about the ordinal in your StatefulSet. If you look at the Kafka Helm chart, you see that they use a base port 31090 and then add the ordinal number, but that substitution takes place 'after' the template is created. Something like this in your values:
"advertised.listener": |-
PLAINTEXT://kafka.cluster.local:$((31090 + ${KAFKA_BROKER_ID}))
and then in the template file, they use a bash export under command together with a printf (which is an alias for Go's fmt.Sprintf). Something like this in your case:
command:
- sh
- -exc
- |
  unset KAFKA_PORT && \
  export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
  export "KAFKA_ADVERTISED_LISTENERS={{ index .Values "advertised.listener" }}" && \
  ...
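If you specifically want to keep the explicit port list from your values.yaml, the same runtime trick can be sketched against kafkaClientPort (a sketch with assumptions: MY_NODE_NAME is already exposed as an env var via the downward API, as your args line implies, and the image has a POSIX shell with cut available):
command:
- sh
- -exc
- |
  # ordinal of this pod, e.g. kf-0 -> 0
  export KAFKA_BROKER_ID=${HOSTNAME##*-}
  # Helm renders this to the space-separated list "32000 32001 32002"
  PORTS="{{ join " " .Values.kafkaClientPort }}"
  # pick the (ordinal + 1)-th entry from that list
  PORT=$(echo "$PORTS" | cut -d' ' -f$((KAFKA_BROKER_ID + 1)))
  export "KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${MY_NODE_NAME}:${PORT}"
  # ... then exec the image's usual start command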