I'm installing Airflow on kind with the following command:
export RELEASE_NAME=first-release
export NAMESPACE=airflow
helm install $RELEASE_NAME apache-airflow/airflow --namespace $NAMESPACE \
--set images.airflow.repository=my-dags \
--set images.airflow.tag=0.0.1 \
--values env.yaml
And the file env.yaml looks like this:
env:
  - name: "AIRFLOW_VAR_KEY"
    value: "value_1"
But from the Web UI (when I go to Admin --> Variables), these credentials don't appear there.
How do I pass these credentials during helm install? Thanks!
UPDATE: It turns out that the environment variable was set successfully. However, it doesn't show up on the Web UI.
I am not sure what your full env.yaml file looks like, but to set environment variables in Airflow, use the airflow.config section. This is how it is documented in the chart's values file:
## environment variables for the web/scheduler/worker Pods (for airflow configs)
##
## WARNING:
## - don't include sensitive variables in here, instead make use of `airflow.extraEnv` with Secrets
## - don't specify `AIRFLOW__CORE__SQL_ALCHEMY_CONN`, `AIRFLOW__CELERY__RESULT_BACKEND`,
## or `AIRFLOW__CELERY__BROKER_URL`, they are dynamically created from chart values
##
## NOTE:
## - airflow allows environment configs to be set as environment variables
## - they take the form: AIRFLOW__<section>__<key>
## - see the Airflow documentation: https://airflow.apache.org/docs/stable/howto/set-config.html
##
## EXAMPLE:
## config:
## ## Security
## AIRFLOW__CORE__SECURE_MODE: "True"
## AIRFLOW__API__AUTH_BACKEND: "airflow.api.auth.backend.deny_all"
Reference file
After that, re-run your helm command, and your DAG will be able to access the variables.
Helm chart documentation: https://github.com/helm/charts/tree/master/stable/airflow#docs-airflow---configs
Make sure you are configuring the airflow.config section.
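For the AIRFLOW_VAR_KEY from the question, a minimal values override could look like the sketch below (this assumes the stable/airflow community chart referenced above; the key and value are simply the ones from the question):
airflow:
  config:
    AIRFLOW_VAR_KEY: "value_1"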
Okay, so I've figured this one out: the environment variables are set just fine on the pods. However, they will not appear on the Web UI.
Workaround: to make them appear in the Web UI, I have to go into the scheduler pod and import the variables. It can be done with a bash script.
# Get the name of the scheduler pod
export SCHEDULER_POD_NAME="$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -n $NAMESPACE | grep scheduler)"
# Copy the variables file to the scheduler pod
kubectl cp ./variables.json $NAMESPACE/$SCHEDULER_POD_NAME:./
# Import the variables with the airflow CLI
kubectl -n $NAMESPACE exec $SCHEDULER_POD_NAME -- airflow variables import variables.json
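For reference, the variables.json imported above is just a JSON object mapping variable names to values. A minimal hypothetical example matching AIRFLOW_VAR_KEY from the question (the AIRFLOW_VAR_ prefix is dropped, so the variable name is key):
{
  "key": "value_1"
}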
Related
I'm using ArgoWorkflow to automate our CI/CD chains.
In order to build images and push them to our private registry, we are faced with the choice between buildah and kaniko, but I can't put my finger on the main difference between the two, pros and cons wise, and also on how these tools handle parallel builds and cache management. Can anyone clarify these points? Or even suggest another tool that can maybe do the job in a simpler way.
Some clarifications on the subject would be really helpful.
Thanks in advance.
buildah will require either a privileged container with more than one UID, or a container running with CAP_SETUID and CAP_SETGID, to build container images.
It does not hack on the file system like kaniko does to get around these requirements; it runs full containers when building.
Using --isolation chroot will make it a little easier to get buildah to work within Kubernetes.
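For example, inside a Kubernetes pod a build could be run with chroot isolation along these lines (the registry and image names are placeholders):
# Build with chroot isolation; -t tags the result (placeholder names)
buildah bud --isolation chroot -t registry.example.com/my-image:latest .
# Push the result to the private registry
buildah push registry.example.com/my-image:latest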
kaniko is very simple to set up and has some magic that lets it work with no requirements in Kubernetes :)
I also tried buildah but was unable to configure it and found it too complex to set up in a Kubernetes environment.
You can use an internal Docker registry as cache management for kaniko, but local storage can be configured instead (not tried yet). Just use the latest version of kaniko (v1.7.0), which fixes an important bug in cached layers management.
These are some functions (declared in the file ci/libkaniko.sh) that I use in my GitLab CI pipelines, executed by a GitLab kubernetes runner. They should hopefully clarify setup and usage of kaniko.
function kaniko_config
{
  local docker_auth="$(echo -n "$CI_REGISTRY_USER:$CI_REGISTRY_PASSWORD" | base64)"

  mkdir -p $DOCKER_CONFIG
  [ -e $DOCKER_CONFIG/config.json ] || \
    cat <<JSON > $DOCKER_CONFIG/config.json
{
  "auths": {
    "$CI_REGISTRY": {
      "auth": "$docker_auth"
    }
  }
}
JSON
}
# Usage example (.gitlab-ci.yml)
#
#   build php:
#     extends: .build
#     variables:
#       DOCKER_CONFIG: "$CI_PROJECT_DIR/php/.docker"
#       DOCKER_IMAGE_PHP_DEVEL_BRANCH: &php-devel-image "${CI_REGISTRY_IMAGE}/php:${CI_COMMIT_REF_SLUG}-build"
#     script:
#       - kaniko_build
#           --destination $DOCKER_IMAGE_PHP_DEVEL_BRANCH
#           --dockerfile $CI_PROJECT_DIR/docker/images/php/Dockerfile
#           --target devel
function kaniko_build
{
  kaniko_config

  echo "Kaniko cache enabled ($CI_REGISTRY_IMAGE/cache)"
  /kaniko/executor \
    --build-arg http_proxy="${HTTP_PROXY}" \
    --build-arg https_proxy="${HTTPS_PROXY}" \
    --build-arg no_proxy="${NO_PROXY}" \
    --cache --cache-repo $CI_REGISTRY_IMAGE/cache \
    --context "$CI_PROJECT_DIR" \
    --digest-file=/dev/termination-log \
    --label "ci.job.id=${CI_JOB_ID}" \
    --label "ci.pipeline.id=${CI_PIPELINE_ID}" \
    --verbosity info \
    "$@"

  [ -r /dev/termination-log ] && \
    echo "Manifest digest: $(cat /dev/termination-log)"
}
With these functions a new image can be built with:
stages:
  - build

build app:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.7.0-debug
    entrypoint: [""]
  variables:
    DOCKER_CONFIG: "$CI_PROJECT_DIR/app/.docker"
    DOCKER_IMAGE_APP_RELEASE_BRANCH: &app-devel-image "${CI_REGISTRY_IMAGE}/phelps:${CI_COMMIT_REF_SLUG}"
    GIT_SUBMODULE_STRATEGY: recursive
  before_script:
    - source ci/libkaniko.sh
  script:
    - kaniko_build
        --destination $DOCKER_IMAGE_APP_RELEASE_BRANCH
        --digest-file $CI_PROJECT_DIR/docker-content-digest-app
        --dockerfile $CI_PROJECT_DIR/docker/Dockerfile
  artifacts:
    paths:
      - docker-content-digest-app
  tags:
    - k8s-runner
Note that you have to use the debug version of the kaniko executor, because this image tag provides a shell (and other busybox-based binaries).
I have the following question. I try to connect to an EKS cluster using Terraform with GitLab CI/CD and I receive the error message below, but when I try it on my own computer this error doesn't appear. Has anyone had the same error?
$ terraform output authconfig > authconfig.yaml
$ cat authconfig.yaml
<<EOT
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: "arn:aws:iam::503655390180:role/clusters-production-workers"
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOT
$ kubectl create -f authconfig.yaml -n kube-system
error: error parsing authconfig.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context
The output includes the EOT (end of text) markers because it was originally generated as a multiline ("heredoc") string.
As the documentation suggests (Terraform doc link):
Don't use "heredoc" strings to generate JSON or YAML. Instead, use the
jsonencode function or the yamlencode function so that Terraform can
be responsible for guaranteeing valid JSON or YAML syntax.
Use the jsonencode or yamlencode function before building the output.
If you want to continue with what you have now, then try passing the -json or -raw option to terraform output:
terraform output -json authconfig > authconfig.yaml
or
terraform output -raw authconfig > authconfig.yaml
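If you go the yamlencode route, the output could be declared roughly like this (a sketch reconstructed from the ConfigMap shown above, not the asker's actual Terraform code), and then retrieved cleanly with terraform output -raw authconfig:
output "authconfig" {
  value = yamlencode({
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "aws-auth"
      namespace = "kube-system"
    }
    data = {
      # mapRoles stays a YAML string inside the ConfigMap data
      mapRoles = <<-ROLES
        - rolearn: "arn:aws:iam::503655390180:role/clusters-production-workers"
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes
      ROLES
    }
  })
}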
The error message tells you the authconfig.yaml file cannot be converted from YAML to JSON, suggesting it's not valid YAML.
The cat authconfig.yaml you're showing us includes the <<EOT and EOT tags. I would suggest removing those before running kubectl create -f.
Your comment suggests you knew this already, so why not ask about Terraform rather than showing us kubectl create failing? From your post, it really sounded like you copy/pasted the output of your job without even reading it.
So, obviously, the next step is terraform output -raw or -json. There are several mentions of this in their docs and knowledge base; a Google search would point you to:
https://discuss.hashicorp.com/t/terraform-outputs-with-heredoc-syntax-leaves-eot-in-file/18584/7
https://www.terraform.io/docs/cli/commands/output.html
Last: we could ask why. Why would you terraform output > something, when you can have Terraform write the file itself? See the sketch below.
As a general rule, whenever writing Terraform stdout/stderr to files, I strongly suggest using -no-color.
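One way to skip the terraform output > file step entirely is the local_file resource from the hashicorp/local provider. A rough sketch (local.authconfig is a hypothetical local holding the same data that currently feeds the authconfig output):
# Hypothetical example: let Terraform write authconfig.yaml directly
resource "local_file" "authconfig" {
  filename = "${path.module}/authconfig.yaml"
  content  = yamlencode(local.authconfig)
}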
I am trying to pass an environment variable in my deployment that should define a prefix based on a version number:
env:
  - name: INDEX_PREFIX
    value: myapp-$(VERSION)
$(VERSION) is not defined in my deployment but is set in the docker image used by the pod.
I tried to use both $() and ${} but VERSION is not interpolated in the environment of my pod. In my pod shell doing export TEST=myapp-${VERSION} does work though.
Is there any way to achieve what I am looking for, i.e. setting an environment variable in my deployment that references an environment variable set in the docker image?
VERSION is an environment variable of the docker image, so you can assign it a value either inside the container or by passing it in the pod spec:
env:
  - name: VERSION
    value: YOUR-VALUE
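With VERSION defined this way, the $(VERSION) reference from the question also starts working, because Kubernetes expands $(VAR) against variables defined earlier in the same env list. A sketch with a placeholder value:
env:
  - name: VERSION          # must come first so it can be referenced below
    value: "1.2.3"         # placeholder value
  - name: INDEX_PREFIX
    value: myapp-$(VERSION)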
In your case, VERSION is either set by a script inside the docker container or in the Dockerfile.
You can do :
In the Dockerfile, add ENV INDEX_PREFIX myapp-${VERSION}
Or add a line to your entrypoint script such as:
export INDEX_PREFIX=myapp-${VERSION}
In case you can't modify Dockerfile, you can try to :
Get the image entrypoint file from the docker image (i.e. /IMAGE-entrypoint.sh) and the image args (i.e. IMAGE-ARGS); you can use docker inspect IMAGE.
Override the container command and args in the pod spec using a script:
command:
  - '/bin/sh'
args:
  - '-c'
  - |
    set -e
    set -x
    export INDEX_PREFIX=myapp-${VERSION}
    IMAGE-entrypoint.sh IMAGE-ARGS
k8s documentation: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
Hope this helps.
Below is the Helm command I use to install:
helm install coreos/kube-prometheus --name kube-prometheum --namespace monitoring -f kube-prometheus.yml
This way we can override the value.yml values with the values present in kube-prometheus.yml.
Is there any way by which we can first install and then update the value.yml from the kube-prometheus.yml file?
I can use helm upgrade releasename kube-prometheum after changing the value.yml file directly, but I don't want that.
Use case:
Initially, I used an image with tag 1.0 in value.yml. Now I have below code in kube-prometheus.yml just to update the image tag
prometheusconfigReloader:
  image:
    tag: 2.0
Instead of deleting and creating it again, I want to upgrade it. This is just an example; there could be multiple values, which is why I can't use --set.
So you first run helm install coreos/kube-prometheus --name kube-prometheum --namespace monitoring -f kube-prometheus.yml with your values file set to point at 1.0 of the image:
prometheusconfigReloader:
  image:
    tag: 1.0
Then you change the values file, or create a new values file containing:
prometheusconfigReloader:
  image:
    tag: 2.0
Let's say this file is called kube-prometheus-v2.yml. Then you can run:
helm upgrade -f kube-prometheus-v2.yml kube-prometheum coreos/kube-prometheus
Or even:
helm upgrade -f kube-prometheus.yml -f kube-prometheus-v2.yml kube-prometheum coreos/kube-prometheus
This is because both values file overrides will be overlaid and according to the helm upgrade documentation "priority will be given to the last (right-most) value specified".
Or if you've already installed and want to find out what values were used, you can run helm get values kube-prometheum.
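As a quick check after the upgrade, helm get values should reflect the overlaid result, roughly like this (the output shape is indicative, not captured from a real release):
$ helm get values kube-prometheum
prometheusconfigReloader:
  image:
    tag: 2.0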
Where are documented the "types" of secrets that you can create in kubernetes?
Looking at different samples I have found "generic" and "docker-registry", but I have not been able to find a pointer to documentation where the different types of secrets are documented.
I always end up in the k8s docs:
https://kubernetes.io/docs/concepts/configuration/secret/
https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
Thank you.
Here is a list of 'types' from the source code:
SecretTypeOpaque SecretType = "Opaque"
[...]
SecretTypeServiceAccountToken SecretType = "kubernetes.io/service-account-token"
[...]
SecretTypeDockercfg SecretType = "kubernetes.io/dockercfg"
[...]
SecretTypeDockerConfigJson SecretType = "kubernetes.io/dockerconfigjson"
[...]
SecretTypeBasicAuth SecretType = "kubernetes.io/basic-auth"
[...]
SecretTypeSSHAuth SecretType = "kubernetes.io/ssh-auth"
[...]
SecretTypeTLS SecretType = "kubernetes.io/tls"
[...]
SecretTypeBootstrapToken SecretType = "bootstrap.kubernetes.io/token"
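These type strings go straight into the type field of a Secret manifest. For example, a kubernetes.io/basic-auth secret could be declared like this (the name and credentials are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: my-basic-auth        # placeholder name
type: kubernetes.io/basic-auth
stringData:
  username: admin            # placeholder credentials
  password: t0p-Secret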
In the kubectl docs you can see some of the available types. Also, from the command line:
$ kubectl create secret --help
Create a secret using specified subcommand.
Available Commands:
docker-registry Create a secret for use with a Docker registry
generic Create a secret from a local file, directory or literal value
tls Create a TLS secret
Usage:
kubectl create secret [flags] [options]
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).