I am using the existing single-master Mesosphere DC/OS CloudFormation template:
https://s3.amazonaws.com/downloads.mesosphere.io/dcos/stable/cloudformation/single-master.cloudformation.json
I am trying to figure out how to indicate that I want to spin this up in an existing VPC that is already configured with a NAT/Internet gateway.
I'm new to CloudFormation and can't find any docs on the Mesosphere site about what the template actually creates and why. In addition, there doesn't appear to be an end-to-end manual setup tutorial, just this template.
Thanks!
You can easily change the CF template: just delete the VPC resource, include the VPC ID as a parameter, and change all references to the VPC resource to point to the new parameter.
In the same way you can replace the subnets in the template and remove the NAT instance.
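For illustration, assuming the template's VPC resource has the logical ID Vpc (check the actual logical ID in the template), the change amounts to adding a parameter like this:
"Parameters": {
  "ExistingVpcId": {
    "Type": "AWS::EC2::VPC::Id",
    "Description": "ID of the existing VPC to deploy DC/OS into"
  }
}
and then replacing every { "Ref": "Vpc" } in the resources section with { "Ref": "ExistingVpcId" }.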
I have made the changes in the CloudFormation script to install Mesosphere in an existing VPC/NAT.
https://github.com/navidurrahman/dcos-cloudformation
Let me know if you face any problems.
This CloudFormation template installs DC/OS version 1.3. I have written a Terraform module for the latest Mesosphere installation.
https://github.com/navidurrahman/terraform_mesosphere
Related
I am studying CI/CD on AWS (CodePipeline/CodeBuild/CodeDeploy) and find it a very good set of tools for managing a pipeline in the cloud with everything managed (no need to even install Jenkins on EC2).
I am now reading about container building and deployment. For the build phase, CodeBuild supports building container images. For the deploy phase, while I could find a CodeDeploy solution for an ECS cluster, it seems there is no direct CodeDeploy solution for EKS (kindly correct me if I am wrong).
May I know if there is a solution to integrate an EKS cluster (i.e. the deploy phase can fetch the Docker image from ECR or Docker Hub and deploy to EKS)? I have come across some ideas using Lambda functions to trigger the cluster to perform a rolling update of the container image, but I could not find a step-by-step guide for this.
=========================
(Update 17 Sep 2020)
I somehow managed to create a Lambda function that triggers EKS to perform a rolling update of the k8s deployment. Thanks to Prashanna for the source base.
I just want to share the key setup steps in the process.
(1) Update the Lambda execution role to include permission to describe EKS clusters.
Create a policy with describe-EKS-cluster access and attach it to the role.
Policy snippet (a minimal statement; scope Resource down as needed):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:Describe*",
      "Resource": "*"
    }
  ]
}
Or you can create an "EKSFullAccess" policy and attach it to the Lambda execution role.
(2) Update the k8s aws-auth ConfigMap, adding the Lambda execution role ARN to the mapRoles section. The corresponding Kubernetes role should be one that has permission to update the container images used for the k8s deployment (say, system:masters).
You can edit the map with a command like the one below:
kubectl edit -n kube-system configmap/aws-auth
You don't have to add/update another ConfigMap even if your deployment is in another namespace; it will take effect there as well.
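For illustration, the added mapRoles entry might look like this (the role ARN, username, and group are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/my-lambda-execution-role
      username: lambda-deployer
      groups:
        - system:masters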
Sample lambda function call request and response:
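For example (hypothetical values; field names depend on your function's contract):
Request:
{
  "cluster": "my-cluster",
  "region": "eu-west-1",
  "namespace": "default",
  "deployment": "my-app",
  "container": "my-app",
  "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:v1.2.3"
}
Response:
{
  "statusCode": 200,
  "body": "deployment image updated"
}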
GitLab provides built-in integration with EKS and deployment with the help of Helm charts. If you plan to use other tools, using an AWS Lambda to update the image is the best bet!
I've added my GitHub project:
Lambda for Kubernetes image update
Set up a Lambda with the code below and give this Lambda RBAC access in your EKS cluster. Try invoking the Lambda by passing the required information like namespace, deployment, image, etc.
The Lambda requires the eks:DescribeCluster permission.
The Lambda's role must be granted at least an image-update RBAC role in the EKS cluster's RBAC setup.
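For reference, here is a minimal sketch of such a Lambda in Python (not the linked project's exact code; it assumes the kubernetes package is bundled with the function, and the event keys are a hypothetical contract):
import base64
import tempfile

import boto3
from botocore.signers import RequestSigner
from kubernetes import client


def get_bearer_token(cluster_name, region):
    # Same presigned sts:GetCallerIdentity scheme that aws-iam-authenticator
    # uses, which is why the aws-auth mapping above authorizes this call.
    session = boto3.session.Session()
    sts = session.client('sts', region_name=region)
    signer = RequestSigner(
        sts.meta.service_model.service_id,
        region, 'sts', 'v4',
        session.get_credentials(), session.events)
    params = {
        'method': 'GET',
        'url': f'https://sts.{region}.amazonaws.com/'
               '?Action=GetCallerIdentity&Version=2011-06-15',
        'body': {},
        'headers': {'x-k8s-aws-id': cluster_name},
        'context': {},
    }
    signed_url = signer.generate_presigned_url(
        params, region_name=region, expires_in=60, operation_name='')
    return 'k8s-aws-v1.' + base64.urlsafe_b64encode(
        signed_url.encode()).decode().rstrip('=')


def handler(event, context):
    # Hypothetical event contract: cluster, region, namespace,
    # deployment, container, image (see the sample invocation above).
    eks = boto3.client('eks', region_name=event['region'])
    # This call is what requires eks:DescribeCluster on the role.
    cluster = eks.describe_cluster(name=event['cluster'])['cluster']

    # Write the cluster CA to a temp file for the Kubernetes client.
    ca = tempfile.NamedTemporaryFile(delete=False, suffix='.crt')
    ca.write(base64.b64decode(cluster['certificateAuthority']['data']))
    ca.close()

    conf = client.Configuration()
    conf.host = cluster['endpoint']
    conf.ssl_ca_cert = ca.name
    conf.api_key['authorization'] = get_bearer_token(
        event['cluster'], event['region'])
    conf.api_key_prefix['authorization'] = 'Bearer'

    apps = client.AppsV1Api(client.ApiClient(conf))
    # Strategic-merge patch of just the container image; this is what
    # triggers the Deployment's rolling update.
    patch = {'spec': {'template': {'spec': {'containers': [
        {'name': event['container'], 'image': event['image']}]}}}}
    apps.patch_namespaced_deployment(
        name=event['deployment'], namespace=event['namespace'], body=patch)
    return {'statusCode': 200, 'body': 'deployment image updated'}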
Since there's no built-in CI/CD for EKS at the moment, this is going to be a showcase of success/failure stories of third-party CI/CDs on EKS :) My take: https://github.com/fluxcd/flux
Pros:
Quick to set up initially (until you get into multiple teams/environments)
Tracks and deploys image releases out of the box
Can split what to auto-deploy in dev/prod using regexes, e.g. all versions to dev but only minor releases to prod, or separate tag prefixes for dev/prod
All state is in git - a good practice to start with
Cons:
Gets complex as the pipeline expands further, e.g. blue-green, canary, auto-rollbacks, etc.
The dashboard is proprietary (a Weaveworks product)
Not meant for on-demand parametrized job runs like traditional CIs
Setup:
Set up an automated image build (it looks like you've already figured this out)
Set up Flux and helm-operator in the cluster, and point them at your "gitops repo"
For each app, create a HelmRelease object that describes a regex of image tags to track (see the sketch after this list)
Done. A newly published image tag that matches the regex is auto-deployed to the cluster, and the new version is committed to the gitops repo.
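For illustration, a HelmRelease for Flux v1 with helm-operator might look like this (names, repo, and the tag regex are placeholders):
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-app
  namespace: default
  annotations:
    fluxcd.io/automated: "true"
    filter.fluxcd.io/chart-image: regex:^v1\.\d+\.\d+$
spec:
  releaseName: my-app
  chart:
    git: git@github.com:example/gitops-repo
    ref: master
    path: charts/my-app
  values:
    image:
      repository: example/my-app
      tag: v1.0.0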
I am trying to build an AWS EKS Cluster with AWS cdk in Java.
We have an existing VPC and subnets which need to get some Kubernetes tags, like kubernetes.io/role/internal-elb=1, etc.
I can get the ISubnets by getting the vpc with:
IVpc vpc = Vpc.fromVpcAttributes(this, "my-vpc", vpcAttributes);
List<ISubnet> subnets = vpc.getPrivateSubnets();
subnets.forEach(iSubnet -> Tag.add(iSubnet, "kubernetes.io/role/internal-elb", "1"));
but awscdk.core.Tag.add() is expecting a Construct, which I am not creating because the subnet already exists.
I also tried the example here: https://docs.aws.amazon.com/de_de/cdk/latest/guide/tagging.html
private void addTagToAllVPCSubnets(Tag tag) {
    TagProps includeOnlySubnets = TagProps.builder()
        .includeResourceTypes(singletonList("AWS::EC2::Subnet"))
        .build();
    Tag.add(this, tag.getKey(), tag.getValue(), includeOnlySubnets);
}
... but I still cannot see any of the new tags in the CloudFormation output of cdk synth.
Any help will be appreciated!
You can do it automatically using Lambda-backed custom resources, e.g. a custom resource that calls ec2:CreateTags on the existing subnet IDs.
It seems like this is a limitation in CDK at the moment. This is something that the EKS construct in CDK should deal with, but which is currently not possible as indicated by a warning during a CDK deployment:
[Warning at /stack/some-project-EKS-cluster] Could not auto-tag private subnets with "kubernetes.io/role/internal-elb=1", please remember to do this manually
For the same reason that this can't be done automatically, you can't do it by using Tag.add().
Since the EKS module in CDK is still experimental/development preview, you have three options right now:
Wait for a full release, which perhaps includes automatic subnet tagging.
Create your own VPC through CDK, which allows you to tag your own subnets.
Manually edit existing subnets through the VPC service interface in the AWS console
A good idea would probably be to create an issue on the AWS CDK GitHub and request tagging of existing subnets (and other existing constructs in general) as a feature. I could not find other issues regarding this on their GitHub.
I know that using Terraform to deploy your infrastructure and Kubernetes cluster is the way to go. However, does it make any sense to also use Terraform to deploy applications on the Kubernetes cluster? Is that also the way to go?
Thank you
Though it's not devoid of its complexities, a better pipeline is the Jenkins + Helm + Spinnaker combo.
Jenkins - CI
Helm - templating and chart build
Spinnaker - deploy
Pros:
Spinnaker is an excellent tool for deployment to Kubernetes.
It can be made aware of multiple environments, so cloud pipelines are easier to build.
Natively integrates with most of the cloud providers, like AWS, Azure, PCF, etc.
Cons:
On the flip side, it's a little heavy as a tool, since it is comprised of a bunch of microservices, and configuration can get under your skin.
As David Maze mentioned, you can combine Terraform with Helm.
You can find more information about the Terraform provider here
and here.
As per the Terraform documentation:
"install_tiller" - (Optional) Install Tiller if it is not already installed. Defaults to true.
You can also use Ansible with the Helm package manager here:
Please take a look at the other automated tools described briefly here and here, like Jenkins, mentioned by Shirine.
There are different solutions. Depending on your needs, you should consider factors like paid vs. free solutions, individual developers vs. teams, preferred platform, and other factors like security, increasing transparency, collaboration, and availability.
Hope this helps.
I maintain the Kustomization provider as an alternative integration of Kubernetes manifests into Terraform.
It has three main advantages over alternative options:
Every K8s resource is tracked individually in the Terraform state. This gives you a preview of changes in the plan phase, and also enables destroy-and-recreate plans in case of changes to immutable fields.
The provider allows you to use native Kubernetes YAML unchanged. No need to translate everything into HCL like with the Kubernetes provider.
Being based on Kustomize, it allows you to use Kustomize's overlay approach. But by defining the overlay in Terraform, you can use Terraform variables, module outputs and so on, to patch the Kubernetes resources.
You can of course use the provider's data sources and resources directly, but the most convenient way is probably via this module:
module "example_manifests" {
source = "kbst.xyz/catalog/custom-manifests/kustomization"
version = "0.1.0"
configuration_base_key = "default"
configuration = {
default = {
resources = [
# list of paths to K8s YAML files
"${path.root}/path/to/a/kubernetes/resource.yaml"
]
}
}
}
I want to install the MarkLogic solution in the AWS eu-west-1 region using the CloudFormation template available at http://developer.marklogic.com/products/cloud/aws, but the stack fails to create the launch configuration.
I downloaded the CloudFormation template from http://developer.marklogic.com/products/cloud/aws and created an AWS CloudFormation stack from the "mlcluster.template" available at that link, but the stack failed during launch configuration setup. I'm not able to fix the template. Any suggestions?
The problem is fixed; it was a configuration mistake.
For the IAM role parameter in the AWS CloudFormation stack, you have to provide only the IAM role name, not the full ARN (e.g. MyMarkLogicRole rather than arn:aws:iam::123456789012:role/MyMarkLogicRole). I initially provided the ARN, which apparently confused the resource naming when creating the Auto Scaling launch configuration.
I am creating a server group and I want to add a label to the deployment. I can't find any option in the Spinnaker UI to add one. Any help on this?
The current version of the Kubernetes cloud provider (v1) does not support configuring labels on Server Groups.
The new Kubernetes Provider (v2), which is manifest-based, allows you to configure labels. This version, however, is still in alpha.
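With the v2 provider, a server group is created from a manifest you supply, so a label is just ordinary Kubernetes metadata; a hypothetical Deployment fragment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
    team: platform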
Sources
https://github.com/spinnaker/spinnaker/issues/1624
https://www.spinnaker.io/reference/providers/kubernetes-v2/