AWS ECS Task Definition: How do I reference an environment variable in another environment variable?

I would like to be able to define environment variables in AWS ECS task definition like below:
TEST_USER: admin
TEST_PATH: /home/$TEST_USER/workspace
When I echo TEST_PATH:
Actual value = /home/$TEST_USER/workspace
Expected value = /home/admin/workspace

You can't do that. Docker in general doesn't support expanding one environment variable inside another before exposing them in the container, so the value you set in the task definition is passed through as a literal string.
If you are using something like CloudFormation or Terraform to create your task definitions, you would store the value in a variable in that tool and create the ECS environment variables from that CloudFormation/Terraform variable.
Otherwise you could edit the entrypoint script of your Docker image to do the following when the container starts up:
export TEST_PATH="/home/$TEST_USER/workspace"
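For example, a minimal entrypoint sketch (the script name is mine, and it assumes your image passes the main command through as arguments):
#!/bin/sh
# TEST_USER is already in the environment here, injected by ECS,
# so the dependent variable can be derived at container start.
export TEST_PATH="/home/$TEST_USER/workspace"
# Hand control to the container's main command.
exec "$@"
Wire it up in your Dockerfile with ENTRYPOINT ["/entrypoint.sh"].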

Related

Azure pipelines timing out when I add a variable group using msft-hosted agent deploying terraform code

My Azure pipeline is timing out on terraform plan/apply when I add variable groups.
(screenshots of the pipeline variables and the variable group omitted)
I tried to do some research but to no avail.
When running Terraform in a pipeline you need to pass the parameter -input=false. This will cause Terraform to fail immediately with an error instead of waiting for input, and I suspect the error will say something like: the input variable vnet_name is not set. You have not explained how you are joining the variable group to Terraform; it is not enough to simply add the pipeline variables, you must also pass them into Terraform.
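For example, in your pipeline step:
terraform plan -input=false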
There are a number of ways of passing values into Terraform as described in this page here: https://www.terraform.io/language/values/variables#assigning-values-to-root-module-variables
The most common method I have seen is to create an environment.tfvars file. This is a simple file of key = value pairs, into which you can hard-code values, and which you then pass into Terraform like terraform apply -var-file="environment.tfvars".
If you have values in Azure Pipelines that you want to place there, you can use the Replace Tokens task from the marketplace (https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens).
In which case your environment.tfvars will look like this:
vnet_name = #{vnet_name}#
The Replace Token Task will replace the Azure Pipeline variable vnet_name with the value described in your variable group.
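As a sketch, the step might look like this in your pipeline YAML (the task major version and file pattern here are assumptions; check the marketplace page for your setup):
steps:
- task: replacetokens@3
  inputs:
    targetFiles: '**/environment.tfvars'
    tokenPrefix: '#{'
    tokenSuffix: '}#'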

How to read airflow variables into kubernetes pod instance

I am trying to read Airflow variables in my ETL job to populate variables in the curation script. I am using the KubernetesPodOperator. How do I access the metadata database from my k8s pod?
Error I am getting from airflow:
ERROR - Unable to retrieve variable from secrets backend (MetastoreBackend). Checking subsequent secrets backend.
This is what I have in main.py for outputting into the console log. I have a key in airflow variables named "AIRFLOW_URL".
from airflow.models import Variable
AIRFLOW_VAR_AIRFLOW_URL = Variable.get("AIRFLOW_URL")
logger.info("AIRFLOW_VAR_AIRFLOW_URL: %s", AIRFLOW_VAR_AIRFLOW_URL)
Can anyone point me in the right direction?
Your DAG can pass them as environment variables to your Pod, using a template (e.g. KubernetesPodOperator(... env_vars={"MY_VAR": "{{var.value.my_var}}"}, ...)).
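A fuller sketch of that approach (the import path depends on your Airflow version, and the task and image names are illustrative):
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

curation = KubernetesPodOperator(
    task_id="curation",
    name="curation",
    image="my-etl-image:latest",  # illustrative image name
    # The template is rendered on the Airflow worker, which can reach the
    # metadata database; the pod just receives a plain environment variable.
    env_vars={"AIRFLOW_URL": "{{ var.value.AIRFLOW_URL }}"},
)
Inside the pod, read it with os.environ["AIRFLOW_URL"] rather than Variable.get, since the pod itself typically has no access to the metadata database.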
It looks like you have a secrets backend set in your config without that backend actually being set up, so Airflow is trying to fetch your variable from it.
Alter your config (the [secrets] section of airflow.cfg) to remove the backend and backend_kwargs keys, and it should look at your Airflow variables first.
[secrets]
backend =
backend_kwargs =

AWS Fargate | Environment variable is not setting in fargate task

I'm using ECS Fargate, platform version 1.4.
I set an environment variable while creating the task definition in a cluster, but when I try to access it inside the container, it is missing from the container's environment.
I tried every way I could find to set and read the variable.
I even tried setting it with the command option, but that failed as well.
Please help me out.
I know this is an older question, but I'll share what I've done.
I use terraform, so in my aws_ecs_task_definition container_definitions block I have environment defined like this:
environment = [
  {
    name  = "MY_ENVIRONMENT_VARIABLE",
    value = "MY_VALUE"
  }
]
Then in my app, which is a Flask application, I do this:
from os import environ
env_value = environ.get("MY_ENVIRONMENT_VARIABLE", None) # "MY_VALUE"
In the ECS console, if you click on a task definition, you should be able to view the container definition, which should show the environment variables you set. It's just a sanity check.
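For reference, the same setting in the raw JSON of the container definition looks like this:
"environment": [
  {
    "name": "MY_ENVIRONMENT_VARIABLE",
    "value": "MY_VALUE"
  }
]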

Azure DevOps passing Dynamically assigned variables between build hosts

I'm using Azure DevOps on a vs2017-win2016 build agent to provision some infrastructure using Terraform.
What I want to know is: is it possible to pass a Terraform output (a host's dynamically assigned IP address) to a second job running on a different build agent?
I'm able to capture these as build variables in the first job:
BASTION_PRIV_IP=x.x.x.x
BASTION_PUB_IP=1.1.1.1
But I'm unable to get these variables consumed by the second build agent, which runs ubuntu-16.04.
I can pass any statically defined parameters (like the Azure resource group name) that I define before the job starts; it's just the dynamically assigned ones that don't come through.
This is pretty easily done when you are using YAML-based builds.
It's important to know that variables are only available within the scope of current job by default.
However you can set a variable as an output variable for your job.
This output variable can then be mapped to a variable within second job (do note that you need to set the first job as a dependency for the second job).
Please see the following link for an example of how to get this to work
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable
It may also be doable in the visual designer type of build, but I couldn't get that to work in the quick test I did; maybe you can get something working inspired by the linked example.
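For what it's worth, a minimal sketch of the pattern (job, step, and variable names are made up, and the IP is hard-coded where you would echo your Terraform output):
jobs:
- job: Provision
  steps:
  - bash: echo "##vso[task.setvariable variable=BASTION_PUB_IP;isOutput=true]1.1.1.1"
    name: tfout
- job: Configure
  dependsOn: Provision
  pool:
    vmImage: ubuntu-16.04
  variables:
    BASTION_PUB_IP: $[ dependencies.Provision.outputs['tfout.BASTION_PUB_IP'] ]
  steps:
  - bash: echo "$(BASTION_PUB_IP)"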

Kubernetes - set env var using the value of one of the auto generated service env vars

Kubernetes automatically generates several environment variables for you, like SERVICE1_SERVICE_HOST and SERVICE1_SERVICE_PORT. I would like to use the value of these variables to set my own variables in the deployment.yml, like below:
env:
- name: MY_NEW_VAR
value: ${SERVICE1_SERVICE_HOST}
For some reason Kubernetes isn't able to resolve this. When I go inside the container it turns out it has been assigned as a literal string, giving me MY_NEW_VAR = ${SERVICE1_SERVICE_HOST}.
Is there a way to assign the value of ${SERVICE1_SERVICE_HOST} instead?
The syntax is $(SERVICE1_SERVICE_HOST), as one can see in the fine manual.
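Applied to the snippet above:
env:
- name: MY_NEW_VAR
  value: $(SERVICE1_SERVICE_HOST)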