Where are the "types" of secrets that you can create in Kubernetes documented?
Looking at different samples I have found "generic" and "docker-registry", but I have not been able to find a pointer to documentation where the different types of secrets are described.
I always end up in the k8s docs:
https://kubernetes.io/docs/concepts/configuration/secret/
https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
Thank you.
Here is a list of 'types' from the source code:
SecretTypeOpaque SecretType = "Opaque"
[...]
SecretTypeServiceAccountToken SecretType = "kubernetes.io/service-account-token"
[...]
SecretTypeDockercfg SecretType = "kubernetes.io/dockercfg"
[...]
SecretTypeDockerConfigJson SecretType = "kubernetes.io/dockerconfigjson"
[...]
SecretTypeBasicAuth SecretType = "kubernetes.io/basic-auth"
[...]
SecretTypeSSHAuth SecretType = "kubernetes.io/ssh-auth"
[...]
SecretTypeTLS SecretType = "kubernetes.io/tls"
[...]
SecretTypeBootstrapToken SecretType = "bootstrap.kubernetes.io/token"
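These values go in the Secret's type field, so you can also set one explicitly in a manifest. A minimal sketch for a basic-auth secret (the name and credentials are made up):
apiVersion: v1
kind: Secret
metadata:
  name: my-basic-auth         # hypothetical name
type: kubernetes.io/basic-auth
stringData:                   # plain-text values; the API server base64-encodes them into .data
  username: admin             # kubernetes.io/basic-auth expects the username and password keys
  password: t0p-Secret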
In the kubectl docs you can see some of the available types. Also, on the command line:
$ kubectl create secret --help
Create a secret using specified subcommand.
Available Commands:
docker-registry Create a secret for use with a Docker registry
generic Create a secret from a local file, directory or literal value
tls Create a TLS secret
Usage:
kubectl create secret [flags] [options]
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
I have a secret being used as an env var inside another env var, as follows:
- name: "PWD"
valueFrom:
secretKeyRef:
name: "credentials"
key: "password"
- name: HOST
value: "xyz.mongodb.net"
- name: MONGODB_URI
value: "mongodb+srv://user:$(PWD)#$(HOST)/db_name?"
When I exec into the container and run the env command to see the values, I see:
mongodb+srv://user:password123
#xyz.mongodb.net/db_name?
The container logs show an authentication failure error.
Is this something that is expected to work in Kubernetes? The docs talk about dependent env vars but do not give an example using secrets. I did not find a clear explanation of this after extensive searching; I only found this one article doing something similar.
Some points to note -
The secret is a sealed secret.
This is the final manifest's contents, but all this is templated using helm.
The value is being used inside a Spring Boot application.
Is the newline after 123 expected?
If this evaluation of an env var from a secret inside another env var is possible, then what am I doing wrong here?
The issue was with the command used to encode the secret: echo "password" | base64. The echo adds a newline character at the end of the string. Using echo -n "password" | base64 fixed the secret.
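You can see the difference directly in the encoded output; a quick check in any shell:
$ echo "password" | base64       # the trailing newline gets encoded too
cGFzc3dvcmQK
$ echo -n "password" | base64    # no trailing newline
cGFzc3dvcmQ=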
Closing the issue.
I'm installing Airflow on kind with the following command:
export RELEASE_NAME=first-release
export NAMESPACE=airflow
helm install $RELEASE_NAME apache-airflow/airflow --namespace $NAMESPACE \
--set images.airflow.repository=my-dags \
--set images.airflow.tag=0.0.1 \
--values env.yaml
And the file env.yaml looks like the below:
env:
  - name: "AIRFLOW_VAR_KEY"
    value: "value_1"
But from the Web UI (when I go to Admin --> Variables), these credentials don't appear there.
How do I pass these credentials during helm install? Thanks!
UPDATE: It turns out that the environment variable was set successfully. However, it doesn't show up in the Web UI.
I am not sure what your full env.yaml file looks like, but to set environment variables in Airflow:
## environment variables for the web/scheduler/worker Pods (for airflow configs)
##
## WARNING:
## - don't include sensitive variables in here, instead make use of `airflow.extraEnv` with Secrets
## - don't specify `AIRFLOW__CORE__SQL_ALCHEMY_CONN`, `AIRFLOW__CELERY__RESULT_BACKEND`,
## or `AIRFLOW__CELERY__BROKER_URL`, they are dynamically created from chart values
##
## NOTE:
## - airflow allows environment configs to be set as environment variables
## - they take the form: AIRFLOW__<section>__<key>
## - see the Airflow documentation: https://airflow.apache.org/docs/stable/howto/set-config.html
##
## EXAMPLE:
## config:
## ## Security
## AIRFLOW__CORE__SECURE_MODE: "True"
## AIRFLOW__API__AUTH_BACKEND: "airflow.api.auth.backend.deny_all"
Reference file
After that you have to run your command, and then your DAG will be able to access the variables.
Helm documentation: https://github.com/helm/charts/tree/master/stable/airflow#docs-airflow---configs
Make sure you are configuring the airflow.config section.
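In that chart's values file this would look roughly as follows (a minimal sketch; the two config keys are just the examples from the comment block above):
airflow:
  config:
    AIRFLOW__CORE__SECURE_MODE: "True"
    AIRFLOW__API__AUTH_BACKEND: "airflow.api.auth.backend.deny_all"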
Okay, so I've figured this one out: the environment variables are set just fine on the pods. However, they will not appear in the Web UI.
Workaround: to make them appear in the Web UI, I have to go into the scheduler pod and import the variables. This can be done with a bash script.
# Get the name of scheduler pod
export SCHEDULER_POD_NAME="$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -n $NAMESPACE | grep scheduler)"
# Copy variables to the scheduler pod
kubectl cp ./variables.json $NAMESPACE/$SCHEDULER_POD_NAME:./
# Import variables to scheduler with airflow command
kubectl -n $NAMESPACE exec $SCHEDULER_POD_NAME -- airflow variables import variables.json
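For reference, airflow variables import expects a flat JSON object of key/value pairs, so a minimal variables.json could look like this (keys and values are made up):
{
  "KEY": "value_1",
  "ANOTHER_KEY": "value_2"
}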
I have the following question: I'm trying to connect to an EKS cluster using Terraform with GitLab CI/CD. I receive the error message below, but when I try the same thing on my own machine the error doesn't appear. Has anyone had the same error?
$ terraform output authconfig > authconfig.yaml
$ cat authconfig.yaml
<<EOT
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: "arn:aws:iam::503655390180:role/clusters-production-workers"
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOT
$ kubectl create -f authconfig.yaml -n kube-system
error: error parsing authconfig.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context
The output includes the EOT (end of text) markers since it was originally generated as a multiline heredoc string.
As the documentation suggests (Terraform doc link):
Don't use "heredoc" strings to generate JSON or YAML. Instead, use the
jsonencode function or the yamlencode function so that Terraform can
be responsible for guaranteeing valid JSON or YAML syntax.
Use the jsonencode or yamlencode function before building the output.
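For example, the output from the question could be built with yamlencode instead of a heredoc; a minimal sketch (the values are copied from the question, the exact map structure is illustrative):
output "authconfig" {
  value = yamlencode({
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "aws-auth"
      namespace = "kube-system"
    }
    data = {
      # mapRoles is itself a YAML string inside the ConfigMap, so encode it separately
      mapRoles = yamlencode([{
        rolearn  = "arn:aws:iam::503655390180:role/clusters-production-workers"
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }])
    }
  })
}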
If you want to continue with what you have now, then try passing the -json or -raw option to terraform output:
terraform output -json authconfig > authconfig.yaml
or
terraform output -raw authconfig > authconfig.yaml
The error message tells you the authconfig.yaml file cannot be converted from YAML to JSON, suggesting it's not valid YAML.
The cat authconfig.yaml you're showing us includes the <<EOT and EOT tags. I would suggest removing those before running kubectl create -f.
Your comment suggests you knew this already - so why didn't you ask about Terraform, rather than showing us kubectl create failing? From your post, it really sounded like you copy/pasted the output of your job without even reading it.
So, obviously, the next step is terraform output -raw or -json; there are several mentions in their docs and knowledge base, and a Google search would point you to:
https://discuss.hashicorp.com/t/terraform-outputs-with-heredoc-syntax-leaves-eot-in-file/18584/7
https://www.terraform.io/docs/cli/commands/output.html
Last: we could ask why. Why would you terraform output > something, when you can have Terraform write a file itself?
And as a general rule, whenever writing Terraform stdout/stderr to files, I strongly suggest using -no-color.
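For that last point, a minimal sketch using the local_file resource from the hashicorp/local provider (the content here is trimmed for brevity; in practice you would reuse the full yamlencode expression from above):
locals {
  authconfig = yamlencode({
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata   = { name = "aws-auth", namespace = "kube-system" }
  })
}

resource "local_file" "authconfig" {
  filename = "${path.module}/authconfig.yaml"
  content  = local.authconfig
}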
How do I connect to Kubernetes pods (their terminals) interactively through the API or otherwise?
We can expose the pods using Services, but we need to know how to connect to the pods interactively using the API or other means.
Maybe you're looking for kubectl port-forward, which can be used without exposing the pods.
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
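For example (pod name and ports are made up):
# Forward local port 8080 to port 80 of the pod
kubectl port-forward pod/my-pod 8080:80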
You can use exec --stdin as described here.
Something like this:
kubectl exec --stdin --tty [POD ID] -- /bin/bash
If you want to achieve this through API calls, the easiest way is to use one of the API client libraries, e.g. the Kubernetes Python client.
In the Python client it can be done using the api_instance.connect_get_namespaced_pod_exec method.
Its documentation even gives you a ready working example:
from __future__ import print_function
import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint

configuration = kubernetes.client.Configuration()
# Configure API key authorization: BearerToken
configuration.api_key['authorization'] = 'YOUR_API_KEY'
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
# configuration.api_key_prefix['authorization'] = 'Bearer'
# Defining host is optional and default to http://localhost
configuration.host = "http://localhost"

# Enter a context with an instance of the API kubernetes.client
with kubernetes.client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = kubernetes.client.CoreV1Api(api_client)
    name = 'name_example'  # str | name of the PodExecOptions
    namespace = 'namespace_example'  # str | object name and auth scope, such as for teams and projects
    command = 'command_example'  # str | Command is the remote command to execute. argv array. Not executed within a shell. (optional)
    container = 'container_example'  # str | Container in which to execute the command. Defaults to only container if there is only one container in the pod. (optional)
    stderr = True  # bool | Redirect the standard error stream of the pod for this call. Defaults to true. (optional)
    stdin = True  # bool | Redirect the standard input stream of the pod for this call. Defaults to false. (optional)
    stdout = True  # bool | Redirect the standard output stream of the pod for this call. Defaults to true. (optional)
    tty = True  # bool | TTY if true indicates that a tty will be allocated for the exec call. Defaults to false. (optional)

    try:
        api_response = api_instance.connect_get_namespaced_pod_exec(name, namespace, command=command, container=container, stderr=stderr, stdin=stdin, stdout=stdout, tty=tty)
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling CoreV1Api->connect_get_namespaced_pod_exec: %s\n" % e)
As a command you need to use bash, /bin/bash, sh or /bin/sh (it depends on what shell is available in your pod).
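One caveat: with the Python client, exec requests go over a WebSocket-upgraded connection, so in practice the call is usually wrapped in the kubernetes.stream helper rather than invoked directly. A minimal sketch (pod name and namespace are made up):
from kubernetes import client, config
from kubernetes.stream import stream

config.load_kube_config()  # or config.load_incluster_config() when running inside a cluster
api = client.CoreV1Api()

# Run a one-off command in the pod and capture its output
resp = stream(
    api.connect_get_namespaced_pod_exec,
    "my-pod", "default",
    command=["/bin/sh", "-c", "echo hello from the pod"],
    stderr=True, stdin=False, stdout=True, tty=False,
)
print(resp)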
Compare it also with this answer.
I'm trying to save the contents of a configmap to a file on my local hard drive. Kubectl supports selecting with JSONPath but I can't find the expression I need to select just the file contents.
The configmap was created using the command
kubectl create configmap my-configmap --from-file=my.configmap.json=my.file.json
When I run
kubectl describe configmap my-configmap
I see the following output:
Name: my-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
my.file.json:
----
{
"key": "value"
}
Events: <none>
The furthest I've gotten toward selecting only the file contents is this:
kubectl get configmap my-configmap -o jsonpath="{.data}"
Which outputs
map[my.file.json:{
"key": "value"
}]
The output that I want is
{
"key": "value"
}
What is the last piece of the JSONPath puzzle?
There's an open issue at the Kubernetes GitHub repo with a list of things that need to be fixed with regard to kubectl (and JSONPath); one of them is issue 16707, "jsonpath template output should be json".
Edit:
How about this:
kubectl get cm my-configmap -o jsonpath='{.data.my\.file\.json}'
I just realized I had answered another question related (kind of) to this one. The above command should output what you had in mind!
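To save it straight to a file, you should be able to just redirect the output:
kubectl get cm my-configmap -o jsonpath='{.data.my\.file\.json}' > my.file.json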
If you have the ability to use jq, then you can use the following approach to e.g. "list" all config maps by selector, and extract the files:
readarray -d $'\0' -t a < <(kubectl get cm -l grafana=dashboards -o json | jq -cj '.items[] | . as $cm | .data | to_entries[] | [ ($cm.metadata.name + "-" + .key), .value ][]+"\u0000"') ; count=0; while [ $count -lt ${#a[@]} ]; do echo "${a[$((count + 1))]}" > "${a[$count]}"; count=$(( $count + 2)); done
This uses kubectl (with -l for a label selector) to get all matching configmaps. Next it pipes them through jq, creating key/value pairs with null-byte termination (the key also contains the name of the configmap; this way I ensured that duplicate file names are not an issue). Then it reads this into a bash array and iterates over the array in steps of 2, creating files with the content.
This also works for config map values that contain newlines.
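If you only need one file out of one configmap, a simpler jq variant should also do (using the names from the earlier question):
kubectl get cm my-configmap -o json | jq -r '.data["my.file.json"]' > my.file.json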