ECS instance metadata files for EKS

I know that in the Amazon ECS container agent, setting the variable ECS_ENABLE_CONTAINER_METADATA=true causes ECS metadata files to be created for the containers.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-metadata.html
Is there a similar feature for EKS? I would like to retrieve instance metadata from a file inside the container instead of using the IMDSv2 API.

You simply can't; you still need to use the IMDSv2 API in your service if you want to get instance metadata.
If you're looking for pod metadata, see https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
or you can use pod labels as well.
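If you do go the IMDSv2 route, a minimal sketch of querying it from inside a pod looks like the following, assuming curl is available in the image and the node's IMDS hop limit allows pod access (on EKS this typically means a hop limit of 2):

# Request an IMDSv2 session token, then use it to read instance metadata
TOKEN=$(curl -sX PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/instance-id"

You could run this from an init container or entrypoint script and write the output to a file if your application insists on reading metadata from disk.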

Try adding this as part of the user data:
"echo 'ECS_ENABLE_CONTAINER_METADATA=1' > /etc/ecs/ecs.config"
Found here: https://github.com/aws/amazon-ecs-agent/issues/1514
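If you take this route, a sketch of a full user data script for an ECS container instance might look like this (the cluster name is a placeholder):

#!/bin/bash
# Append to the agent config so existing settings (e.g. ECS_CLUSTER) are preserved
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=my-cluster
ECS_ENABLE_CONTAINER_METADATA=true
EOF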

Related

Kubernetes - how to specify image pull secrets when creating a deployment with an image from a remote private repo

I need to create and deploy, on an existing Kubernetes cluster, an application based on a Docker image hosted in a private Harbor repo on a remote server.
I could use this if the repo was public:
kubectl create deployment <deployment_name> --image=<full_path_to_remote_repo>:<tag>
Since the repo is private, the username, password, etc. are required for the image to be pulled. How do I modify the above command to embed that information?
Thanks in advance.
P.S.
I'm looking for a way that doesn't involve creating a secret using kubectl create secret and then writing a YAML file defining the deployment.
The goal is to have kubectl pull the image using the supplied creds and deploy it on the cluster without any other steps. Could this be achieved with a single (above) command?
Edit:
Creating and using a secret is acceptable if there is a way to specify the secret as an option in the kubectl command rather than specifying it in a YAML file (really trying to avoid YAML). Is there a way of doing that?
There are no flags to pass an imagePullSecret to kubectl create deployment, unfortunately.
If you're coming from the world of Docker Compose or Swarm, one-line deployments are fairly common. But even these deployment tools use underlying configuration and .yml files, like docker-compose.yml.
For Kubernetes, there is official documentation on pulling images from private registries, and there is even special handling for docker registries. Check out the article on creating Docker config secrets too.
According to the docs, you must define a secret in this way to make it available to your cluster. Because Kubernetes is built for resiliency/scalability, any machine in your cluster may have to pull your private image, and therefore each machine needs access to your secret. That's why it's treated as its own entity, with its own manifest and YAML file.
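One way to avoid hand-writing a manifest is to create the secret and then attach it to the deployment (or to the default service account) with kubectl patch. A rough sketch, where harbor-cred and the Harbor host/credentials are placeholders:

kubectl create secret docker-registry harbor-cred \
  --docker-server=<harbor_host> \
  --docker-username=<user> \
  --docker-password=<password>
kubectl create deployment <deployment_name> --image=<full_path_to_remote_repo>:<tag>
# Add the pull secret to the deployment's pod template
kubectl patch deployment <deployment_name> \
  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"harbor-cred"}]}}}}'
# Or attach it to the default service account so every pod using it can pull the image:
# kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"harbor-cred"}]}'

The secret still exists as a cluster object, but no YAML file has to be written by hand.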

Discover AWS ECS cluster association from a running container (self-managed cluster)

I'm working with ECS with self-managed, EC2-based clusters. We have one cluster for each environment: dev/stage/prod.
I'm struggling to have my containers in ECS be aware of which cluster/environment they start in, so that at task start-up they can configure themselves properly without having to bake environment-specific config into the images.
It would be really easy if there were some command to run inside the container that could return the cluster name; it seems like that should be easy. I can think of a few suboptimal ways to do this: get the container/host IP and look up the instance, try to grab /etc/ecs/ecs.config from the host instance, etc.
It seems like there should be a better way. My Google skills are failing... thanks!
The ECS Task Metadata endpoint, available at ${ECS_CONTAINER_METADATA_URI_V4}/task within any ECS task, will return the cluster name, among other things.
Alternatively, if you were using an IaC tool such as Terraform or CloudFormation to build your ECS tasks, it would be trivial to inject the cluster name as an environment variable in the tasks.
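For the metadata-endpoint approach, a quick check from inside a running task could look like this (assuming curl and jq are available in the container image):

curl -s "${ECS_CONTAINER_METADATA_URI_V4}/task" | jq -r '.Cluster'

The Cluster field holds the cluster name (or its ARN, depending on the launch type).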
Mark B's answer is better, but before I got it I found this solution:
Add ECS_ENABLE_CONTAINER_METADATA=true to the /etc/ecs/ecs.config file on the EC2 host and each container will get a container metadata file, whose location is exposed through the ECS_CONTAINER_METADATA_FILE environment variable. See:
[Ecs Container Metadata File][1]
I think Mark's answer is better because this solution involves editing the user data script for the host instances.
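With that variable set, a sketch of reading the cluster name from the metadata file inside the container (assuming jq is installed in the image) would be:

jq -r '.Cluster' "$ECS_CONTAINER_METADATA_FILE"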
[1]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-metadata.html

How to expose cluster+project values to container in GKE (or current-context in k8s)

My container code needs to know in which environment it is running on GKE, more specifically which cluster and project. In standard Kubernetes, this could be retrieved from the current-context value (gke_<project>_<cluster>).
Kubernetes has a Downward API that can push pod info to containers - see https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/ - but unfortunately nothing from "higher-level" entities.
Any thoughts on how this can be achieved?
Obviously I do not want to explicitly push any info at deployment time (e.g. as an env var in the ConfigMap). I would rather deploy using a generic/common YAML and have the code retrieve the info at runtime from an env var or file and branch accordingly.
You can query the GKE metadata server from within your code. In your case, you'd want to query the /computeMetadata/v1/instance/attributes/cluster-name and /computeMetadata/v1/project/project-id endpoints to get the cluster and project. The client libraries for each supported language all have simple wrappers for accessing the metadata API as well.
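For example, from inside the container (assuming curl is available in the image):

curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name"
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/project/project-id"

The Metadata-Flavor: Google header is required; without it the metadata server rejects the request.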

How to execute shell commands from within a Kubernetes ConfigMap?

I am using Helm charts to create and deploy applications into my K8s cluster.
One of my pods requires a config file with an SDK key to start and function properly. This SDK key is considered a secret and is stored in AWS Secrets Manager. I don't include the secret data in my Docker image. I want to be able to mount this config file at runtime. A ConfigMap seems to be a good option in this case, except that I have not been able to figure out how to obtain the SDK key from Secrets Manager during the chart installation. Part of my ConfigMap looks like this:
data:
  app.conf: |
    [sdkkey] # I want to be able to retrieve sdk from aws secrets manager
I was looking at ways to write shell commands to use AWS CLI to get secrets, but have not seen a way to execute shell commands from within a ConfigMap.
Any ideas or alternative solutions?
Cheers
K
tl;dr: You can't execute a ConfigMap; it is just a static manifest. Use an init container instead.
A ConfigMap is a static manifest that can be read from the Kubernetes API or injected into a container at runtime as a file or environment variables. There is no way to execute a ConfigMap.
Additionally, ConfigMaps should not be used for secret data; Kubernetes has a specific resource, called a Secret, for secret data. It can be used in similar ways to a ConfigMap, including being mounted as a volume or exposed as environment variables within the container.
Given your description it sounds like your best option would be to use an init container to retrieve the credentials and write them to a shared emptyDir Volume mounted into the container with the application that will use the credentials.
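The init container itself only needs the AWS CLI and the appropriate IAM permissions; a sketch of the command it would run, where the secret ID and the /config mount path of the shared emptyDir are placeholders:

# Write the secret value to a file on the shared volume for the app container to read
aws secretsmanager get-secret-value \
  --secret-id <your-sdk-key-secret-id> \
  --query SecretString \
  --output text > /config/app.conf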

Dynamic provisioning of Cinder volume and Persistent volume using Terraform through Kubernetes

I have been doing some research, trying to find out whether there is a way to create Cinder and Persistent Volumes dynamically using Terraform through Kubernetes. I am taking info from here:
https://www.terraform.io/docs/providers/kubernetes/r/persistent_volume.html https://docs.okd.io/latest/install_config/persistent_storage/persistent_storage_cinder.html
but it looks like the Cinder volume must be created manually first, and then the Persistent Volume can be associated with the already created "volume_id".
However, I believe there is a way to create the PV dynamically, looking here:
https://www.terraform.io/docs/providers/kubernetes/d/storage_class.html
but I am not sure what it should look like, and whether it is possible using Terraform.
Thank you!
I found the way. Here is how to do it: https://kubernetes.io/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes/ and https://www.terraform.io/docs/providers/kubernetes/r/storage_class.html and https://kubernetes.io/docs/concepts/storage/storage-classes/#openstack-cinder
So when you deploy with Terraform, you must specify "storage_class_name = name_of_your_class" in the "spec" section of your "kubernetes_persistent_volume_claim" resource.
The storage class must be created in Kubernetes beforehand.