My ECS task definition has plenty of these:
{
  "valueFrom": "arn:aws:secretsmanager:eu-west-1:xxx:secret:/xxx",
  "name": "secret"
}
However, I am running into some issues with variables. I went into Secrets Manager and double-checked the secrets I imported, and the dev team agrees they look fine. But the app we deployed still complains about some of the variables. The developers can't check the injected values on their side, and I'm not sure how to check them on mine. What's my course of action here?
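One way to check on your side (a minimal sketch, assuming your credentials have secretsmanager:GetSecretValue and the secret stores a JSON map of variables; the secret ID below is a placeholder) is to pull the secret and compare its key names with what the app expects:

import json
import boto3

client = boto3.client("secretsmanager", region_name="eu-west-1")

# Placeholder secret ID; use the ARN referenced in the task definition
resp = client.get_secret_value(SecretId="/my/app/secret")
secret = json.loads(resp["SecretString"])

# Print only the key names, not the values, to avoid leaking secrets
print(sorted(secret.keys()))

Also worth noting: when valueFrom points at the secret ARN without a trailing :json-key:: suffix, ECS injects the entire secret string as a single environment variable rather than the individual keys.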
I'm trying to execute a blue/green deployment of an ECS task within AWS using the CloudFormation approach (as documented here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/blue-green.html) and the deployment fails.
The initial stack deployment works fine and the ECS task is deployed and running as expected with the correct load balancer, target group, etc. However, when updating the task definition to trigger a blue/green deployment, it fails with the message:
Imports and exports are currently not supported on templates using hooks
The deployment is created in CodeDeploy, so it's obviously triggered as expected, but the deployment screen in AWS console shows the following error:
The deployment failed because the stack update that triggered this CodeDeploy deployment failed in CloudFormation. In the AWS CloudFormation console, go to the Events tab to view status and error messages.
But the puzzling thing is that the CloudFormation template does not appear to contain any imports or exports. I have even tried copying the YAML from the documented example and it doesn't work.
I'm executing the CloudFormation updates using the Serverless Framework, but I don't think that's the issue; the error is logged in the CloudFormation stack's Events tab.
Probably not unreasonable to expect the example in the AWS documentation to work?
We did find the cause of this issue: the problem was indeed caused by running the CloudFormation template via the Serverless Framework.
The serverless approach works for all our other AWS deployments, but the CodeDeploy transform explicitly requires that there are no outputs from the CF template. However, Serverless adds the name of the S3 bucket it uses as an output, which breaks this particular use case.
Therefore the solution was to invoke the CF template directly from the AWS CLI, and it works perfectly.
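For reference, the direct invocation can be as simple as the following (stack name and template file are placeholders):

aws cloudformation update-stack --stack-name my-ecs-service --template-body file://template.yml --capabilities CAPABILITY_IAM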
I am studying CI/CD on AWS (CodePipeline/CodeBuild/CodeDeploy) and found it to be a very good toolset for managing a pipeline in the cloud with everything managed (you don't even need to install Jenkins on EC2).
I am now reading about container building and deployment. For the build phase, CodeBuild supports building container images. For the deploy phase, while I could find a CodeDeploy solution for ECS clusters, it seems there is no direct CodeDeploy solution for EKS (kindly correct me if I am wrong).
May I know if there is a solution to integrate an EKS cluster (i.e. the deploy phase can fetch the Docker image from ECR or Docker Hub and deploy it to EKS)? I have come across some ideas using Lambda functions to trigger the cluster to perform a rolling update of the container image, but I could not find a step-by-step guide on this.
=========================
(Update 17 Sep 2020)
Somehow I managed to create a Lambda function that triggers EKS to perform a rolling update of the k8s deployment. Thanks to Prashanna for the source base.
I just want to share the key setup steps in the process.
(1) Update the Lambda execution role to include permission to describe EKS clusters
Create a policy with describe-EKS-cluster access and attach it to the role:
Policy snippet:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:Describe*",
      "Resource": "*"
    }
  ]
}
Or you can create an "EKSFullAccess" policy and attach it to the Lambda execution role.
(2) Update the k8s aws-auth ConfigMap, adding the Lambda execution role ARN to the mapRoles section (see the sample entry below). The corresponding k8s group should be one that has permission to update the container images used by the k8s deployment (say system:masters).
You can edit the map with a command like the one below:
kubectl edit -n kube-system configmap/aws-auth
You don't have to add/update another ConfigMap even if your deployment is in another namespace; the mapping takes effect there as well.
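A mapRoles entry for the Lambda role might look like this (account ID and role name are placeholders; system:masters is the broadest choice, so prefer a narrower group if you have one):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/<lambda-execution-role>
      username: lambda-deployer
      groups:
        - system:masters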
Sample lambda function call request and response:
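For example, an illustrative request and response pair (field names are hypothetical and depend on how the Lambda parses its event):

Request:
{
  "cluster": "my-cluster",
  "namespace": "default",
  "deployment": "my-app",
  "container": "my-app",
  "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:1.2.3"
}

Response:
{
  "statusCode": 200,
  "body": "my-app updated to 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:1.2.3"
}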
GitLab provides inbuilt integration with EKS and deployment with the help of Helm charts. If you plan to use other tools, using an AWS Lambda to update the image is the best bet!
I've added my GitHub project.
Set up a Lambda with the code below and give this Lambda RBAC access in your EKS cluster. Try invoking the Lambda by passing the required information like namespace, deployment, image, etc.
Lambda for Kubernetes image update
The Lambda's execution role requires the eks:DescribeCluster permission.
The Lambda role must also be granted at least an RBAC role in the EKS cluster's RBAC setup that allows updating the image.
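A minimal sketch of such a Lambda, assuming the kubernetes Python client and boto3 are bundled with the function, the invocation event carries cluster/namespace/deployment/container/image fields (names here are illustrative), and the execution role is mapped in aws-auth as described above:

import base64
import os
import tempfile

import boto3
from botocore.signers import RequestSigner
from kubernetes import client as k8s_client


def get_bearer_token(cluster_name, region):
    # Build an EKS bearer token from a presigned STS GetCallerIdentity URL
    session = boto3.session.Session()
    sts = session.client("sts", region_name=region)
    signer = RequestSigner(
        sts.meta.service_model.service_id,
        region,
        "sts",
        "v4",
        session.get_credentials(),
        session.events,
    )
    params = {
        "method": "GET",
        "url": f"https://sts.{region}.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15",
        "body": {},
        "headers": {"x-k8s-aws-id": cluster_name},
        "context": {},
    }
    url = signer.generate_presigned_url(
        params, region_name=region, expires_in=60, operation_name=""
    )
    return "k8s-aws-v1." + base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")


def handler(event, context):
    region = os.environ["AWS_REGION"]
    cluster_name = event["cluster"]            # illustrative event fields
    namespace = event.get("namespace", "default")
    deployment = event["deployment"]
    container = event["container"]
    image = event["image"]

    cluster = boto3.client("eks", region_name=region).describe_cluster(
        name=cluster_name
    )["cluster"]

    # Write the cluster CA to a temp file so the API client can verify TLS
    with tempfile.NamedTemporaryFile(delete=False, suffix=".crt") as ca:
        ca.write(base64.b64decode(cluster["certificateAuthority"]["data"]))

    conf = k8s_client.Configuration()
    conf.host = cluster["endpoint"]
    conf.ssl_ca_cert = ca.name
    conf.api_key["authorization"] = get_bearer_token(cluster_name, region)
    conf.api_key_prefix["authorization"] = "Bearer"

    apps = k8s_client.AppsV1Api(k8s_client.ApiClient(conf))
    # Patch only the container image; this triggers a rolling update
    patch = {"spec": {"template": {"spec": {
        "containers": [{"name": container, "image": image}]}}}}
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)
    return {"statusCode": 200, "body": f"{deployment} updated to {image}"}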
Since there's no built-in CI/CD for EKS at the moment, this is going to be a showcase of success/failure stories of 3rd-party CI/CDs in EKS :) My take: https://github.com/fluxcd/flux
Pros:
Quick to set up initially (until you get into multiple teams/environments)
Tracks and deploys image releases out of the box
Possibility to split what to auto-deploy in dev/prod using regex. E.g. all versions to dev, only minor to prod. Or separate tag prefixes for dev/prod.
All state is in git - a good practice to start with
Cons:
Getting complex for further pipeline expansion, e.g. blue-green, canary, auto-rollbacks, etc.
The dashboard is proprietary (a Weaveworks product)
Not for on-demand parametrized job runs like traditional CIs.
Setup:
Set up an automated image build (looks like you've already figured that out)
Set up flux and helm-operator in the cluster and point them to your "gitops repo"
For each app, create a HelmRelease object that describes a regex of image tags to track (sketched below)
Done. A newly published image tag that matches the regex will be auto-deployed to the cluster, and the new version is committed to the gitops repo.
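As I recall the flux v1 / helm-operator conventions of that era, the automation is driven by annotations on the HelmRelease; the annotation keys below are from memory and worth double-checking against the flux docs, and all names, repositories, and versions are placeholders:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-app
  namespace: default
  annotations:
    fluxcd.io/automated: "true"
    # e.g. only auto-deploy 1.x releases of the container defined in values
    filter.fluxcd.io/chart-image: semver:~1.0
spec:
  releaseName: my-app
  chart:
    repository: https://charts.example.com/
    name: my-app
    version: 1.0.0
  values:
    image:
      repository: registry.example.com/my-app
      tag: 1.0.0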
GitLab offers to manage a Kubernetes cluster, which includes (e.g.) creating the namespace, adding some tokens, etc. In GitLab CI jobs, one can directly use the $KUBECONFIG variable for contacting the cluster and e.g. creating deployments using helm. This works like a charm, as long as the GitLab project is public and therefore Docker images hosted by the GitLab project's image registry are publicly accessible.
However, when working with private projects, Kubernetes of course needs an ImagePullSecret to authenticate against GitLab's image registry and retrieve the image. As far as I can see, GitLab does not automatically provide an ImagePullSecret for repository access.
Therefore, my question is: What is the best way to access the image repository of private GitLab repositories in a Kubernetes deployment in a GitLab managed deployment environment?
In my opinion, these are the possibilities and why they are not eligible/optimal:
Permanent ImagePullSecret provided by GitLab: When doing a deployment on a GitLab managed Kubernetes cluster, GitLab provides a list of variables to the deployment script (e.g. Helm Chart or kubectl apply -f manifest.yml). As far as I can (not) see, there is a lot of stuff like ServiceAccounts and tokens etc., but no ImagePullSecret - and also no configuration option for enabling ImagePullSecret creation.
Using $CI_JOB_TOKEN: When working with GitLab CI/CD, GitLab provides a variable named $CI_JOB_TOKEN which can be used for uploading Docker images to the registry during job execution. This token expires after the job is done. It could be combined with helm install --wait, but when rescheduling takes place to a new node which does not yet have the image, the token has expired and the node is no longer able to download the image. Therefore, this only works at the moment of deploying the app.
Creating an ImagePullSecret manually and adding it to the Deployment or the default ServiceAccount: This is a manual step, has to be repeated for each individual project, and just sucks; we're trying to automate things, and GitLab managed Kubernetes clusters are designed to avoid any manual steps.
Something else but I don't know about it.
So, am I wrong on one of these points? Am I missing an eligible option in this list?
Again: It's all about a seamless integration with the "Managed Cluster" features of GitLab. I know how to add tokens from GitLab as ImagePullSecrets in Kubernetes, but I want to know how to automate this with the Managed Cluster feature.
There is another way: you can bake the ImagePullSecret into your container runtime configuration, whether that is Docker, containerd, or CRI-O.
Docker
As root, run docker login <your-private-registry-url>. Then a file /root/.docker/config.json should be created/updated. Put that on all your Kubernetes nodes and make sure your kubelet runs as root (which it typically does). Some background info.
The content of the file should look something like this:
{
  "auths": {
    "my-private-registry": {
      "auth": "xxxxxx"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.09.2 (Linux)"
  }
}
Containerd
Configure your containerd.toml file with something like this:
[plugins.cri.registry.auths]
[plugins.cri.registry.auths."https://gcr.io"]
username = ""
password = ""
auth = ""
identitytoken = ""
CRI-O
Specify the global_auth_file option in your crio.conf file.
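In crio.conf this is a one-line setting under the [crio.image] section (the path below is just an example):

[crio.image]
global_auth_file = "/etc/crio/auth.json"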
✌️
Configure your account.
For example, for Kubernetes to pull images from gitlab.com, use the registry address registry.gitlab.com:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
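If you then attach that secret to the namespace's default ServiceAccount, pods in the namespace can pull the image without listing imagePullSecrets themselves (the namespace below is a placeholder):

kubectl patch serviceaccount default -n <your-namespace> -p '{"imagePullSecrets": [{"name": "regcred"}]}'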
While playing around with K8s deployments and GitLab CI, my deployment got stuck in the state ContainerStarting.
To reset that, I deleted the K8s namespace using kubectl delete namespaces my-namespace.
Now my Gitlab runner shows me
$ ensure_namespace
Checking namespace [MASKED]-docker-3
error: the server doesn't have a resource type "namespace"
error: You must be logged in to the server (Unauthorized)
I think this has something to do with RBAC; most likely GitLab created that namespace with some arguments and permissions (but I don't know exactly when and how that happens), which are now missing because of my deletion.
Anybody got an idea on how to fix this issue?
In my case I had to delete the namespace record in the GitLab database, so GitLab would re-add the service account and namespace:
On the GitLab machine or task-runner pod, enter the PostgreSQL console:
gitlab-rails dbconsole -p
Then select the database:
\c gitlabhq_production
The next step is to find the namespace that was deleted:
SELECT id, namespace FROM clusters_kubernetes_namespaces;
Take the id of the namespace to delete it:
DELETE FROM clusters_kubernetes_namespaces WHERE id IN (6,7);
Now you can restart the pipeline and the namespace and service account will be readded.
Deleting the namespace manually caused the necessary secrets from GitLab to be removed. It seems they are auto-created on the first-ever deployment, and it's impossible to repeat that process.
I had to create a new repo and push to it. Now everything works.
Another solution is removing the cluster from GitLab (under Operations > Kubernetes in your repo) and re-adding it.
From GitLab 12.6 you can simply clear the cluster cache.
To clear the cache:
Navigate to your project’s Operations > Kubernetes page, and select your cluster.
Expand the Advanced settings section.
Click Clear cluster cache.
This avoids losing secrets and potentially affecting other applications.
I know that using Terraform to deploy your infrastructure and Kubernetes cluster is the way to go. However, does it make any sense to use Terraform to also deploy applications on a Kubernetes cluster? Is this also the way to go?
Thank you
Though it's not devoid of its complexities, a better pipeline is the Jenkins + Helm + Spinnaker combo.
Jenkins - CI
Helm - templating and chart build
Spinnaker - deploy
Pros:
Spinnaker is an excellent tool for deployment to Kubernetes.
It can be made aware of multiple environments, so cloud pipelines are easier to build.
Natively integrates with most cloud providers like AWS, Azure, PCF, etc.
Cons:
On the flip side, it's a somewhat heavy tool, as it is composed of a bunch of microservices, and configuration can get under your skin.
As David Maze mentioned, you can combine Terraform with Helm.
You can find more information about the Terraform provider here and here.
As per the Terraform documentation:
"install_tiller" - (Optional) Install Tiller if it is not already installed. Defaults to true.
You can also use Ansible with the Helm package manager here:
Please take a look at other automated tools described briefly here and here, like Jenkins mentioned by Shirine.
There are different solutions. Depending on your needs, you should consider factors like paid vs. free solutions, whether it's for individual developers or teams, preferred platform, and other factors like security, transparency, collaboration, and availability.
Hope this helps.
I maintain the Kustomization provider as an alternative integration of Kubernetes manifests into Terraform.
It has three main advantages over alternative options:
Every K8s resource is tracked individually in the Terraform state. This gives you a preview of changes in the plan phase, and also enables destroy-and-recreate plans in case of changes to immutable fields.
The provider allows you to use native Kubernetes YAML unchanged. No need to translate everything into HCL like with the Kubernetes provider.
Being based on Kustomize, it allows you to use Kustomize's overlay approach. But by defining the overlay in Terraform, you can use Terraform variables, module outputs and so on, to patch the Kubernetes resources.
You can of course use the provider's data sources and resources directly, but the most convenient way is probably via this module:
module "example_manifests" {
source = "kbst.xyz/catalog/custom-manifests/kustomization"
version = "0.1.0"
configuration_base_key = "default"
configuration = {
default = {
resources = [
# list of paths to K8s YAML files
"${path.root}/path/to/a/kubernetes/resource.yaml"
]
}
}
}