Kubernetes - set env var using the value of one of the auto generated service env vars

Kubernetes automatically generates several environment variables for you, like SERVICE1_SERVICE_HOST and SERVICE1_SERVICE_PORT. I would like to use the value of these variables to set my own variables in the deployment.yml, like below:
env:
  - name: MY_NEW_VAR
    value: ${SERVICE1_SERVICE_HOST}
For some reason Kubernetes isn't able to resolve this. When I go inside the container, it turns out the variable has been assigned the literal string, giving me MY_NEW_VAR = ${SERVICE1_SERVICE_HOST}.
Is there a way to assign the value of ${SERVICE1_SERVICE_HOST} instead?

The syntax is $(SERVICE1_SERVICE_HOST), with parentheses rather than braces, as shown in the Kubernetes documentation on defining dependent environment variables.
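Applied to the example in the question, the working form would look like this (a sketch; Kubernetes only interpolates variables that are already defined in the container's environment, such as the auto-generated service variables):

```yaml
env:
  - name: MY_NEW_VAR
    # Kubernetes-style interpolation uses $(VAR), not ${VAR}
    value: $(SERVICE1_SERVICE_HOST)
```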

Related

AWS ECS Task Definition: How do I reference an environment variable in another environment variable?

I would like to be able to define environment variables in an AWS ECS task definition like below:
TEST_USER: admin
TEST_PATH: /home/$TEST_USER/workspace
When I echo TEST_PATH:
Actual value: /home/$TEST_USER/workspace
Expected value: /home/admin/workspace
You can't do that: neither ECS task definitions nor, as far as I know, Docker in general evaluate one environment variable inside another before exposing them in the container.
If you are using something like CloudFormation or Terraform to create your task definitions, you would store the value in a variable in that tool and create the ECS environment variables from that CloudFormation/Terraform variable.
Otherwise you could edit the entrypoint script of your Docker image to do the following when the container starts up:
export TEST_PATH="/home/$TEST_USER/workspace"
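A minimal sketch of such an entrypoint script (the script name, the exec line, and the admin fallback are assumptions, not part of the original answer):

```shell
#!/bin/sh
# Hypothetical entrypoint.sh: derive TEST_PATH from TEST_USER at container
# start-up, since ECS will not interpolate one env var into another.
: "${TEST_USER:=admin}"   # fall back to a default if the task definition didn't set it
export TEST_PATH="/home/${TEST_USER}/workspace"
# Hand off to the container's main command
exec "$@"
```

The image's Dockerfile would then point ENTRYPOINT at this script so the derived variable exists before the application starts.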

How to read airflow variables into kubernetes pod instance

I am trying to read Airflow variables into my ETL job to populate variables in the curation script. I am using the KubernetesPodOperator. How do I access the metadata database from my k8s pod?
Error I am getting from airflow:
ERROR - Unable to retrieve variable from secrets backend (MetastoreBackend). Checking subsequent secrets backend.
This is what I have in main.py for outputting into the console log. I have a key in airflow variables named "AIRFLOW_URL".
from airflow.models import Variable
AIRFLOW_VAR_AIRFLOW_URL = Variable.get("AIRFLOW_URL")
logger.info("AIRFLOW_VAR_AIRFLOW_URL: %s", AIRFLOW_VAR_AIRFLOW_URL)
Can anyone point me in the right direction?
Your DAG can pass them as environment variables to your Pod, using a template (e.g. KubernetesPodOperator(... env_vars={"MY_VAR": "{{var.value.my_var}}"}, ...)).
It also looks like you have a secrets backend set in your Airflow config without that backend actually being set up, so Airflow is trying to fetch your variable from it first.
Alter your config to remove the backend and backend_kwargs keys, and Airflow will fall back to checking its own Variables (the metastore) first:
[secrets]
backend =
backend_kwargs =
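If the DAG passes the variable in as an environment variable (the env_vars approach above), the script inside the pod reads a plain environment variable and never needs the metadata database at all. A minimal sketch, where the placeholder URL simulates the value the operator would inject:

```python
import os

# Simulate the variable the DAG would inject via
# env_vars={"AIRFLOW_URL": "{{ var.value.AIRFLOW_URL }}"}.
# In the real pod the operator sets this before the container starts;
# the URL here is a made-up placeholder.
os.environ.setdefault("AIRFLOW_URL", "https://airflow.example.com")

airflow_url = os.environ["AIRFLOW_URL"]
print("AIRFLOW_VAR_AIRFLOW_URL:", airflow_url)
```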

AWS Fargate | Environment variable is not setting in fargate task

I'm using ECS Fargate Platform 1.4
I set an environment variable while creating the task definition in a cluster, but when I try to access it inside the container, it is missing from the container's environment.
I have tried every way I know of to set and read it.
I even tried setting the variable via the command option, but that failed too.
Please help me out.
I know this is an older question, but I'll share what I've done.
I use terraform, so in my aws_ecs_task_definition container_definitions block I have environment defined like this:
environment = [
  {
    name  = "MY_ENVIRONMENT_VARIABLE",
    value = "MY_VALUE"
  }
]
Then in my app, which is a Flask application, I do this:
from os import environ
env_value = environ.get("MY_ENVIRONMENT_VARIABLE", None) # "MY_VALUE"
In the ECS console, if you click on a task definition, you should be able to view the container definition, which should show the environment variables you set. It's just a sanity check.

How to reference other environment variables in a Helm values file?

I have a Helm values file with content like the following:
envs:
  - name: PACT_BROKER_DATABASE_NAME
    value: $POSTGRES_DBNAME
I want to reference another variable called $POSTGRES_DB_NAME and feed into that PACT_BROKER_DATABASE_NAME. The current value does not work. How do I feed one value to another variable?
I was looking for a way to "reference another variable" in the Helm values section; Google landed me here, so I'm posting this as an answer in case it helps someone else.
My goal was to set the heap allocation based on the pod's memory limit.
env:
  - name: MEMORY_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.memory
  - name: NODE_OPTIONS
    value: "--max-old-space-size=$(MEMORY_LIMIT)"
The key is the $(MEMORY_LIMIT) syntax. Likewise, $POSTGRES_DBNAME must be defined as an env var in the same pod/container, and then referenced as $(POSTGRES_DBNAME).
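Applied to the original question, that would look something like this (a sketch; the pact database name is a made-up placeholder, and $( ) interpolation only resolves variables defined earlier in the same container):

```yaml
envs:
  - name: POSTGRES_DBNAME
    value: pact            # hypothetical value; must be defined before it is referenced
  - name: PACT_BROKER_DATABASE_NAME
    value: $(POSTGRES_DBNAME)
```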

How to pass kubernetes pod instance id to within the pod upon start up?

So I'm researching how to use Kubernetes for my case. I installed it and played a bit.
The question is: when the replication controller starts a couple of replicas, they get something like an ID in their names.
How unique is this ID? Is it unique for the lifetime of the Kubernetes cluster? Is it unique across different Kubernetes runs (i.e. if I restart Kubernetes)?
How to pass this id to the app in the container? Can I specify some kind of template in the yaml so for example the id will be assigned to environment variable or something similar?
Alternatively is there a way for the app in the container to ask for this id?
More explanation of the use case. I have an application that writes session files inside a directory. I want to guarantee uniqueness of the session ids across the system. This means that if one app instance is running on VM1 and another instance on VM2, I want to prepend some kind of identifier to the ids, like app-1-dajk4l and app-2-dajk4l, where app is the name of the app and 1, 2 is the instance identifier, which should come from the replication controller because it is dynamic and cannot be configured manually. dajk4l is some identifier like the current timestamp or similar.
Thanks.
The ID is guaranteed to be unique at any single point in time, since Kubernetes doesn't allow two pods in the same namespace to have the same name. There aren't any longer-term guarantees though, since they're just generated as a random string of 5 alphanumeric characters. However, given that there are more than 60 million such random strings, conflicts across time are also unlikely in most environments.
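The "more than 60 million" figure can be sanity-checked: a 5-character suffix drawn from a 36-character alphanumeric alphabet (the assumption the answer makes) gives 36^5 combinations:

```python
# 5-character suffix over lowercase letters plus digits (36 symbols),
# matching the answer's "random string of 5 alphanumeric characters" claim.
combinations = 36 ** 5
print(combinations)  # 60466176
```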
Yes, you can pull in the pod's namespace and name as environment variables using what's called the "Downward API", adding a field on the container like
env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
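Inside the container, the app can then build its unique session-id prefix from that variable. A minimal sketch in Python (the fallback name and the timestamp suffix are assumptions, standing in for the asker's dajk4l-style identifier):

```python
import os
import time

# Pod name injected by the Downward API; the fallback is a hypothetical
# default for running outside Kubernetes.
pod_name = os.environ.get("MY_POD_NAME", "unknown-pod")

def session_id() -> str:
    # Prefix a per-session timestamp with the pod name, which is unique
    # among live pods in the namespace at any point in time.
    return f"{pod_name}-{int(time.time())}"
```

Because two pods in the same namespace can never share a name, ids from different replicas cannot collide at the same instant.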