OpenShift template "object already exists" error

I have developed an OpenShift template which basically creates two objects (a cluster and a container operator).
I understand that templates run oc create under the hood, so if either of these two objects already exists, trying to create the objects through the template throws an error. Is there any way to override this behaviour? I want my template to re-configure the object even if it exists.

You can use "oc process" which renders template into set of manifests:
oc process foo PARAM1=VALUE1 PARAM2=VALUE2 | oc apply -f -
or
oc process -f template.json PARAM1=VALUE1 PARAM2=VALUE2 | oc apply -f -
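Unlike oc create, oc apply updates objects that already exist, so re-running the same pipeline re-configures them in place. A minimal sketch, assuming the template lives in template.yaml and takes a single PARAM1 parameter:
# First run creates the cluster and operator objects; later runs update them
oc process -f template.yaml -p PARAM1=value1 | oc apply -f -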


Set Variables for Airflow during Helm install

I'm installing Airflow on kind with the following command:
export RELEASE_NAME=first-release
export NAMESPACE=airflow
helm install $RELEASE_NAME apache-airflow/airflow --namespace $NAMESPACE \
--set images.airflow.repository=my-dags \
--set images.airflow.tag=0.0.1 \
--values env.yaml
And the file env.yaml looks like this:
env:
  - name: "AIRFLOW_VAR_KEY"
    value: "value_1"
But from the Web UI (when I go to Admin --> Variables), these credentials don't appear there.
How do I pass these credentials during helm install? Thanks!
UPDATE: It turns out that the environment variable was set successfully. However, it doesn't show up on the Web UI.
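A quick way to confirm this is to read the variable back through the CLI, since variables supplied as AIRFLOW_VAR_<NAME> are resolved from the environment at read time rather than stored in the metadata database (the pod name below is a placeholder):
# Should print "value_1" even though the UI lists nothing
kubectl -n airflow exec <scheduler-pod-name> -- airflow variables get key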
I am not sure what your full env.yaml file looks like,
but this is how to set environment variables in Airflow:
## environment variables for the web/scheduler/worker Pods (for airflow configs)
##
## WARNING:
## - don't include sensitive variables in here, instead make use of `airflow.extraEnv` with Secrets
## - don't specify `AIRFLOW__CORE__SQL_ALCHEMY_CONN`, `AIRFLOW__CELERY__RESULT_BACKEND`,
## or `AIRFLOW__CELERY__BROKER_URL`, they are dynamically created from chart values
##
## NOTE:
## - airflow allows environment configs to be set as environment variables
## - they take the form: AIRFLOW__<section>__<key>
## - see the Airflow documentation: https://airflow.apache.org/docs/stable/howto/set-config.html
##
## EXAMPLE:
## config:
## ## Security
## AIRFLOW__CORE__SECURE_MODE: "True"
## AIRFLOW__API__AUTH_BACKEND: "airflow.api.auth.backend.deny_all"
Reference file
After that you have to run your command again, and your DAG will be able to access the variables.
Helm documentation: https://github.com/helm/charts/tree/master/stable/airflow#docs-airflow---configs
Make sure you are configuring the airflow.config section.
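As a minimal sketch, that section can be appended to a custom values file and passed with --values (the config key below is only illustrative):
cat <<'EOF' >> custom-values.yaml
airflow:
  config:
    AIRFLOW__WEBSERVER__EXPOSE_CONFIG: "True"
EOF
helm install $RELEASE_NAME apache-airflow/airflow --namespace $NAMESPACE --values custom-values.yaml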
Okay, so I've figured this one out: the environment variables are set just fine on the pods. However, they will not appear on the Web UI.
Workaround: to make them appear in the web UI, I have to go into the scheduler pod and import the variables. It can be done with a bash script.
# Namespace where Airflow is installed
export NAMESPACE=airflow
# Get the name of the scheduler pod
export SCHEDULER_POD_NAME="$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -n "$NAMESPACE" | grep scheduler)"
# Copy the variables file into the scheduler pod
kubectl cp ./variables.json "$NAMESPACE/$SCHEDULER_POD_NAME:./"
# Import the variables with the airflow CLI
kubectl -n "$NAMESPACE" exec "$SCHEDULER_POD_NAME" -- airflow variables import variables.json
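For completeness, variables.json is just a flat JSON object mapping variable names to values; a hypothetical example:
cat <<'EOF' > variables.json
{
  "key": "value_1"
}
EOF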

Error when trying to apply a ConfigMap to authenticate with an EKS cluster

I have the following question: I am trying to connect to an EKS cluster using Terraform with GitLab CI/CD. I receive the error message below, but when I try it on my own machine the error doesn't appear. Has anyone had the same error?
$ terraform output authconfig > authconfig.yaml
$ cat authconfig.yaml
<<EOT
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: "arn:aws:iam::503655390180:role/clusters-production-workers"
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOT
$ kubectl create -f authconfig.yaml -n kube-system
error: error parsing authconfig.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context
The output includes the EOT (end of text) markers, since the value was originally generated as a multiline heredoc string.
As the documentation suggests (Terraform doc link):
Don't use "heredoc" strings to generate JSON or YAML. Instead, use the
jsonencode function or the yamlencode function so that Terraform can
be responsible for guaranteeing valid JSON or YAML syntax.
Use the jsonencode or yamlencode function when building the output.
If you want to continue with what you have now, then pass the -json or -raw option to terraform output:
terraform output -json authconfig > authconfig.yaml
or
terraform output -raw authconfig > authconfig.yaml
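With -raw the heredoc markers are gone, so (assuming the output is a valid manifest) it can even be piped straight to kubectl without an intermediate file:
terraform output -raw authconfig | kubectl apply -f -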
The error message tells you the authconfig.yaml file cannot be converted from YAML to JSON, suggesting it's not valid YAML.
The cat authconfig.yaml you're showing us includes the <<EOT and EOT tags. I would suggest removing those before running kubectl create -f.
Your comment suggests you knew this already - then why didn't you ask about Terraform, rather than showing us kubectl create failing? From your post, it really sounded like you copy/pasted the output of your job without even reading it.
So, obviously, the next step is terraform output -raw or -json; there are several mentions in their docs and knowledge base, and a Google search would point you to:
https://discuss.hashicorp.com/t/terraform-outputs-with-heredoc-syntax-leaves-eot-in-file/18584/7
https://www.terraform.io/docs/cli/commands/output.html
Last: we could ask why. Why terraform output > something, when you can have Terraform write the file itself?
And as a general rule, whenever writing Terraform stdout/stderr to files, I strongly suggest passing -no-color.

Use a different name for the kustomization.yaml

For CI/CD purposes, the project maintains 2 kustomization.yaml files:
Regular deployments - kustomization_deploy.yaml
Rollback deployment - kustomization_rollback.yaml
To run kustomize build, a file with the name "kustomization.yaml" is required in the current directory.
If the project wants to use kustomization_rollback.yaml and NOT kustomization.yaml, how is this possible? Does kustomize accept a file name as an argument? The docs do not specify anything on this.
Currently there is no way to change the behavior of kustomize (using the precompiled binaries) to support file names other than:
kustomization.yaml
kustomization.yml
Kustomization
All of the below cases will produce the same error output:
kubectl kustomize dir/
kubectl apply -k dir/
kustomize build dir/
Error: unable to find one of 'kustomization.yaml', 'kustomization.yml' or 'Kustomization' in directory 'FULL_PATH/dir'
Depending on your CI/CD platform/solution/tool, you should try a workaround instead, for example:
split the deployment into 2 directories, kustomization_deploy and kustomization_rollback, each with its own kustomization.yaml (as sketched below)
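A minimal sketch of that workaround, assuming each directory carries its own kustomization.yaml:
# kustomization_deploy/kustomization.yaml   - regular deployments
# kustomization_rollback/kustomization.yaml - rollback deployments
kustomize build kustomization_deploy/ | kubectl apply -f -
kustomize build kustomization_rollback/ | kubectl apply -f -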
As a side note!
File names that kustomize uses are placed in the:
/kubernetes/vendor/sigs.k8s.io/kustomize/pkg/constants/constants.go
// Package constants holds global constants for the kustomize tool.
package constants
// KustomizationFileNames is a list of filenames that can be recognized and consumed
// by Kustomize.
// In each directory, Kustomize searches for file with the name in this list.
// Only one match is allowed.
var KustomizationFileNames = []string{
"kustomization.yaml",
"kustomization.yml",
"Kustomization",
}
The logic behind choosing the Kustomization file is placed in:
/kubernetes/vendor/sigs.k8s.io/kustomize/pkg/target/kusttarget.go
Additional reference:
Github.com: Kubernetes-sigs: Kustomize
Kubernetes.io: Docs: Tasks: Manage kubernetes objects: Kustomization

How to render only selected template in Helm?

I have ~20 YAMLs in my helm chart + tons of dependencies, and I want to check the rendered output of one specific template. helm template renders all YAMLs and produces hundreds of lines of output. Is there a way (it would be nice to have even a regex) to render only a selected template (by file or e.g. by name)?
From the helm template documentation:
-s, --show-only stringArray only show manifests rendered from the given templates
For rendering only one resource, use helm template . -s templates/deployment.yaml.
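Since --show-only is a stringArray option, it can also be repeated to render several templates at once (the paths below are illustrative):
helm template . -s templates/deployment.yaml -s templates/service.yaml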
If you have multiple charts in one directory:
|helm-charts
|-chart1
|--templates
|---deployment.yaml
|--values.yaml
|--Chart.yaml
|...
|- chart2
If you want to generate only one file, e.g. chart1/deployment.yaml, using values from the file chart1/values.yaml, follow these steps:
Enter to the chart folder:
cd chart1
Run this command:
helm template . --values values.yaml -s templates/deployment.yaml --name-template my-release-name > chart1-deployment.yaml
Generated manifest will be inside file chart1-deployment.yaml.

How to save content of a configmap to a file with kubectl and jsonpath?

I'm trying to save the contents of a configmap to a file on my local hard drive. Kubectl supports selecting with JSONPath but I can't find the expression I need to select just the file contents.
The configmap was created using the command
kubectl create configmap my-configmap --from-file=my.configmap.json=my.file.json
When I run
kubectl describe configmap my-configmap
I see the following output:
Name: my-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
my.file.json:
----
{
"key": "value"
}
Events: <none>
The furthest I've gotten so far in selecting only the file contents is this:
kubectl get configmap my-configmap -o jsonpath="{.data}"
Which outputs
map[my.file.json:{
"key": "value"
}]
The output that I want is
{
"key": "value"
}
What is the last piece of the JSONPath puzzle?
There's an open issue at the Kubernetes GitHub repo with a list of things that need to be fixed in regards to kubectl (and JSONPath); one of them is issue 16707, "jsonpath template output should be json".
Edit:
How about this:
kubectl get cm my-configmap -o jsonpath='{.data.my\.file\.json}'
I just realized I had answered another question related (kind of) to this one. The above command should output what you had in mind!
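Since the goal is a file on the local disk, the same command can simply be redirected:
kubectl get configmap my-configmap -o jsonpath='{.data.my\.file\.json}' > my.file.json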
If you have the ability to use jq, then you can use the following approach to e.g. "list" all config maps by selector, and extract the files:
# Read null-delimited (file name, file content) pairs emitted by jq into a bash array
readarray -d $'\0' -t a < <(kubectl get cm -l grafana=dashboards -o json | jq -cj '.items[] | . as $cm | .data | to_entries[] | [ ($cm.metadata.name + "-" + .key), .value ][]+"\u0000"')
count=0
while [ $count -lt ${#a[@]} ]; do
  echo "${a[$((count + 1))]}" > "${a[$count]}"
  count=$((count + 2))
done
This uses kubectl (with -l for a label selector) to get all configmaps. Next it pipes them through jq, creating key-value pairs with null-byte termination (the key also contains the name of the configmap; this way duplicate file names are not an issue). Then it reads this into a bash array and iterates over the array in steps of 2, creating the files with the content.
This also works for config map values that contain newlines.