How to update a secret string using CloudFormation? - aws-cloudformation

I am a newbie to AWS CloudFormation; any help will be appreciated.
I have a use case where I would like to write a CloudFormation template to update an already existing secret string. I was able to find a template to create a secret string, but not one to update it.
I see the AWS CLI has aws secretsmanager update-secret --secret-id; I was looking for a similar option in CloudFormation.

Use a CloudFormation template to create the secret, but instead of creating a new stack, update the existing stack using a change set.
To do that, you must know the name of the stack that originally created the secret.
You can reuse the same template used earlier: just change the value and update the stack.
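As a sketch, assuming the secret was created by a template like the following (the resource name, secret name, and values are hypothetical), changing SecretString and updating the stack updates the secret in place:

```yaml
# template.yaml -- the same template used to create the secret,
# with only the SecretString value changed.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MySecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: my-app/secret            # hypothetical secret name
      SecretString: '{"password": "new-value"}'
```

Then apply it with aws cloudformation update-stack --stack-name my-secret-stack --template-body file://template.yaml (stack name assumed), or create and execute a change set first to preview the modification.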

Related

ECS instance metadata files for EKS

I know that in the Amazon ECS container agent, setting the variable ECS_ENABLE_CONTAINER_METADATA=true creates ECS metadata files for the containers.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-metadata.html
Is there any similar feature for EKS? I would like to retrieve instance metadata from a file inside the container instead of using the IMDSv2 API.
You simply can't; you still need to use the IMDSv2 API in your service if you want to get instance metadata.
If you're looking for Pod metadata instead, see the Downward API: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
You can expose pod labels the same way too.
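As a sketch of the Downward API approach (pod and container names are hypothetical), pod-level fields can be exposed as files inside the container via a downwardAPI volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metadata-demo
  labels:
    app: demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/labels; sleep 3600"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
```

Note that this only exposes pod-level metadata (labels, annotations, pod name, etc.); node/instance metadata such as instance ID or AZ still requires the IMDSv2 API.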
Try adding this as part of the user data:
"echo 'ECS_ENABLE_CONTAINER_METADATA=1' > /etc/ecs/ecs.config"
Found here: https://github.com/aws/amazon-ecs-agent/issues/1514

How to merge a K8s Configmap to a Secret (or two secrets together)

I am using Helm w/ Kubernetes and am trying to add data that I have in an existing Configmap to an existing secret. The reason for this, is that there is a property on a CRD that I need to set which only takes in a single secret key ref. The existing secret is created by Vault, and the existing Configmap is configured in the Helm chart in plain text. For reasons that I won't get into, we cannot include the content of the configmap into the Vault secret entry, so I MUST be able to merge these two into a secret.
I've tried searching for this, but most answers I see involve creating an initContainer and setting up a volume, but unfortunately I don't think this will work for my situation. I just need a single secret that I can reference in a CRD and problem solved. Is this possible using Kubernetes/Helm?
My fallback plan is to create my own CRD and associated controller to merge the configmap data and the secret's data and basically create a new secret, but it seems like overkill.
As far as I am aware, there is no built-in way to do this in Kubernetes.
The only solution I can see would be to implement a small tool yourself. With something like kopf you could implement a simple operator that listens for the creation/update of a specific Secret and ConfigMap, gets their data, and merges it into a new Secret.
Using an operator allows you to handle all the cases that might occur during the life of your resources, such as the source Secret or ConfigMap being deleted or updated.
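Whatever tool you use, the core merge logic is small. Here is a minimal sketch in plain Python (function and key names are hypothetical): ConfigMap `data` holds plain strings while Secret `data` holds base64-encoded strings, so the ConfigMap values must be encoded before merging.

```python
import base64

def merge_into_secret(configmap_data: dict, secret_data: dict) -> dict:
    """Merge plain-text ConfigMap data into base64-encoded Secret data.

    configmap_data: the `data` field of a ConfigMap (plain strings).
    secret_data: the `data` field of a Secret (base64-encoded strings).
    Returns a new `data` mapping for the merged Secret; on key
    collisions, the Secret's value wins.
    """
    merged = {
        key: base64.b64encode(value.encode()).decode()
        for key, value in configmap_data.items()
    }
    merged.update(secret_data)  # existing Secret entries take precedence
    return merged

# Example: config from a ConfigMap plus a token from a Vault-created Secret
cm = {"app.properties": "log.level=INFO"}
sec = {"api-token": base64.b64encode(b"s3cr3t").decode()}
merged = merge_into_secret(cm, sec)
```

An operator would apply this to the watched objects' `data` fields and write the result back as a new Secret that the CRD's secret key ref can point at.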

Populate kubernetes Configmap from hashicorp vault

I want to populate ConfigMaps from data inside Vault in Kubernetes. I have just completed the setup of Vault with the Kubernetes (service account) and userpass auth methods.
Can someone suggest an easy way to integrate the variables into an application? What do I add to the YAML file? If I can populate a ConfigMap, then I can easily use it in the YAML.
Also, how will changes be propagated if a variable changes in Vault?
You can try using Vault CRD: when you create a custom resource of type Vault, it creates a Kubernetes Secret using the data from Vault.
You can use Vault CRD as Xavier Adaickalam mentioned.
Regarding variable changes: there are two ways of exposing values inside Pods, volumes and environment variables. Volumes are updated automatically when the backing Secret is modified. Environment variables, unfortunately, do not receive updates even if you modify the Secret; you have to restart your container for modified values to take effect.
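As a sketch of the two options (pod, container, and Secret names are hypothetical), assuming the Vault tooling has materialized a Secret called app-secrets:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      env:
        # Snapshotted at container start; NOT updated when the Secret changes
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: db-password
      volumeMounts:
        # Files under /etc/secrets are refreshed automatically (with some delay)
        - name: secrets
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: secrets
      secret:
        secretName: app-secrets
```

If your application can read its configuration from files, the volume route means Vault-driven changes show up without a pod restart.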

How to handle secrets in ConfigMaps?

I would like to use a Secret inside a ConfigMap. Is this possible?
Example:
An example where this might be required is if you would like to write from Fluentd to S3. In the configuration you have to add your AWS credentials.
Alternatives:
Using environment variables on the cluster itself. I do not like this idea, because the variable would still contain the secret as plain text.
Passing the password during set-up. If you are using deployment tools, it might be possible to pass the secret during the deployment of your application. This is also not a nice solution, since you are still passing the secret as plain text to the deployment tool. An advantage of this approach is that you do not accidentally check your secret in to git.
Try to avoid using AWS credentials in Kubernetes at all.
As you can see in the fluentd S3 plugin's configuration, aws_key_id and aws_sec_key are optional fields.
Instead, create an AWS IAM role and assign it to the Kubernetes nodes.
Then run your fluentd application without AWS credentials in its config.
Give it a try; hope this helps.
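A sketch of the fluentd S3 output without static credentials (bucket, region, and match pattern are hypothetical); with an IAM role attached to the node, fluent-plugin-s3 picks up credentials from the instance profile automatically:

```
<match app.**>
  @type s3
  s3_bucket my-log-bucket      # hypothetical bucket name
  s3_region us-east-1
  path logs/
  # No aws_key_id / aws_sec_key here: credentials come from the
  # node's IAM role, so nothing secret lives in this ConfigMap.
</match>
```

Since the config then contains no secrets, it can safely live in a plain ConfigMap.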
Update:
This article explains different ways to use AWS IAM with Kubernetes.
Kube2iam and other similar tools might also help; give them a try.
No, it is not possible. You should always use a Secret for your sensitive data.
By default, Secrets are only the base64-encoded content of files, so you should use something like Vault to securely store your sensitive data.
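To illustrate why base64 alone is not protection (it is an encoding, not encryption, and anyone with read access to the Secret can reverse it), a quick sketch:

```python
import base64

# The kind of value you would see in `kubectl get secret -o yaml`
encoded = base64.b64encode(b"hunter2").decode()

# Anyone can trivially recover the original value
decoded = base64.b64decode(encoded).decode()
```

This is why RBAC on Secrets, encryption at rest, or an external store like Vault matters: base64 only keeps the bytes YAML-safe.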

Does AWS CDK create default stack name in CloudFormation?

Using a CloudFormation template, the CloudFormation service asks for a stack name (AWS::StackName) before applying the template.
Using the AWS CDK, we run cdk synth to create the CloudFormation template and cdk deploy to deploy services based on that template.
Does the AWS CDK create a default stack name in the CloudFormation service after running cdk deploy?
In order to create a stack you have to instantiate an object for that stack. When you do, you pass the stack name (the construct ID) as a parameter. Example in Python (CDK v1, with a minimal stack body filled in):
from aws_cdk import core

class MyStackClass(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # define resources here

# You have to have an app
app = core.App()
# Here's the stack
my_stack = MyStackClass(app, "StackName")
app.synth()
Other than that, see the docs:
The physical names of the AWS CloudFormation stacks are automatically determined by the AWS CDK based on the stack's construct path in the tree. By default, a stack's name is derived from the construct ID of the Stack object, but you can specify an explicit name using the stackName prop, as follows.
new MyStack(this, 'not:a:stack:name', { stackName: 'this-is-stack-name' });
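For completeness, the same override in Python uses the snake_case stack_name keyword (the stack class name is hypothetical):

```python
MyStackClass(app, "not:a:stack:name", stack_name="this-is-stack-name")
```

So the construct ID only determines the default; an explicit stackName / stack_name always wins.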