How to migrate to OpenSearch with CloudFormation without an outage? - aws-cloudformation

I found this tutorial, but it has two main drawbacks:
1. A brand new CloudFormation stack would be created.
2. There would be an outage.
Is there a way to upgrade ES without an outage and without a new stack?

I found a way to upgrade ES without an outage. Here are the steps:

1. Add only the following config to the CloudFormation YAML file:

```yaml
UpdatePolicy:
  EnableVersionUpgrade: true
```

2. Update all other ES configs, including the version change.
3. Deploy the CloudFormation stack.

Step 3 performs a blue/green deployment, so there is no outage. See the related official document.
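As a loose illustration of steps 1-3 (the stack and file names here are placeholders, and the template is assumed to already carry the UpdatePolicy above on the domain resource), the deploy could be as simple as:

```bash
# Hedged sketch: redeploy the existing stack with the UpdatePolicy in place.
# "search-domain" and "es-domain.yml" are made-up names.
aws cloudformation deploy \
  --stack-name search-domain \
  --template-file es-domain.yml \
  --capabilities CAPABILITY_IAM
```

With EnableVersionUpgrade set to true, CloudFormation upgrades the domain's engine version in place instead of replacing the domain.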


How to start/trigger a job when a new version of deployment is released (image updated) on Kubernetes?

I have two environments (clusters), production and staging, with two independent databases. They are both deployed on Kubernetes; production doesn't have a fixed schedule for new deployments, but they happen roughly weekly.
I would like to sync the production database with the staging database every time a new release is deployed to production (i.e., the Kubernetes deployment is updated with a new image).
Is there a way I can set up a job/cronjob to be triggered every time this event happens?
The deployments are done using ArgoCD to pull the changes in the deployment manifest from a GitHub repository.
I don't think this functionality is inherent to Kubernetes; you are asking about something custom that can be implemented in a variety of ways, depending on your tool stack (a DIY polling sketch follows this list). For example:
- If you are using Helm to install to production, you can use a post-install hook that triggers a Job that does what you want.
- Perhaps ArgoCD has some post-install functionality that can also create a Job resource doing what you want.
- You could also use a tool like Kyverno and write a policy that generates a K8s Job upon any resource being created in K8s.
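As one hypothetical DIY example (this is not a built-in mechanism; the namespace, Deployment, and CronJob names are made up), a small script could poll the production Deployment's image and launch a sync Job whenever it changes:

```bash
#!/usr/bin/env bash
# Hypothetical sketch: watch the image of the "myapp" Deployment in the
# "prod" namespace and kick off a sync Job whenever it changes.
set -euo pipefail
last=""
while true; do
  img=$(kubectl -n prod get deployment myapp \
        -o jsonpath='{.spec.template.spec.containers[0].image}')
  if [[ -n "$last" && "$img" != "$last" ]]; then
    # Reuse an existing CronJob named "db-sync" as the Job template.
    kubectl -n prod create job "db-sync-$(date +%s)" --from=cronjob/db-sync
  fi
  last="$img"
  sleep 60
done
```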
This is exactly the kind of case Argo Events is for.
https://argoproj.github.io/argo-events/
There are many ways to implement this; which is best depends on your exact situation. E.g., if you can use a Git tag event's webhook, you could go with an HTTP trigger to initiate a Job or an Argo Workflow.
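A heavily simplified Sensor sketch (all names are placeholders, a webhook-style EventSource named "github" is assumed to exist already, and the real spec needs more wiring; see the Argo Events docs):

```bash
kubectl apply -f - <<'EOF'
# Hedged sketch: an Argo Events Sensor that creates a Job when the
# assumed "github" EventSource emits an event.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: sync-on-release
spec:
  dependencies:
    - name: release-dep
      eventSourceName: github
      eventName: example
  triggers:
    - template:
        name: db-sync
        k8s:
          operation: create
          source:
            resource:
              apiVersion: batch/v1
              kind: Job
              metadata:
                generateName: db-sync-
              spec:
                template:
                  spec:
                    restartPolicy: Never
                    containers:
                      - name: sync
                        image: myorg/db-sync:latest  # placeholder image
EOF
```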

Deploying serverless.yml just like CloudFormation

I am new to the serverless world, so I'm just wondering: is it possible to deploy serverless.yml the way we deploy any other CloudFormation template (using the AWS console etc.), or is it only possible through the Serverless CLI?
This is not a direct answer, but to use regular CloudFormation tools, ideally one would like to get the transformed, raw CloudFormation template that results from the serverless transformations of the SAM template.
For now, there is no built-in dedicated functionality for that in the SAM toolset. However, a GitHub issue has already been filed for such a feature:
cli command to transform sam template to regular cloudformation template?
The issue also indicates that sam validate --debug is a workaround for getting the raw template, though not an ideal one. With this template and some manual fixes, a regular CloudFormation deployment can then be attempted.
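Concretely, that flow might look like this (stack and file names are placeholders, and the template still has to be copied out of the debug output and fixed by hand):

```bash
# The transformed template appears in the debug output of:
sam validate --debug
# ...extract it manually, apply fixes, save it as transformed.json, then:
aws cloudformation deploy \
  --stack-name my-service \
  --template-file transformed.json \
  --capabilities CAPABILITY_IAM
```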

serverless deploy: Stop watching after CloudFormation has the update

I'm using Bitbucket Pipelines to do CD for a Serverless app. I want to use as few "build minutes" as possible for each deployment. The lifecycle of the serverless deploy command, when using AWS as the backing, seems to be:
1. Push the package to CloudFormation (~60 seconds).
2. Sit around watching the logs from CloudFormation until the deployment finishes (~20-30 minutes).
Because of the huge time difference, I don't want to do step two. So my question is simple: how do I deploy a serverless app such that it only does step one and returns success or failure based on whether or not CloudFormation successfully accepted the new package?
I've looked at the docs for serverless deploy and I can't see any options to enable that. Also, there seem to be AWS specific options in the serverless deploy command already, so maybe this is an option that the serverless team will consider if there is no other way to do this.
N.B. As for "how will you know if CloudFormation fails?": I would rather set up notifications to come from CloudFormation directly. The build can just have the responsibility of pushing to CloudFormation.
I don't think you can do it with serverless deploy. You can try the serverless package command, which will store the package in the .serverless folder (or you can specify a path using --package). Packaging will create a CloudFormation template file, e.g. cloudformation-template-update-stack.json. You can then call the CreateStack API action to create the stack; it returns the stack ID without waiting for all the resources to be created.
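A minimal sketch of that two-step flow (service and stack names are placeholders; note that any packaged artifacts, such as Lambda zips, must already be uploaded wherever the template expects them):

```bash
# Build the package and the CloudFormation template without deploying.
serverless package --package .serverless

# Create the stack; this call returns the stack ID immediately and does
# not wait for resource creation, so the CI job can exit here.
aws cloudformation create-stack \
  --stack-name my-service-dev \
  --template-body file://.serverless/cloudformation-template-update-stack.json \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM
```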

How should I manage deployments with Kubernetes?

I am hoping to find a good way to automate the process of going from code to a deployed application on my kubernetes cluster.
In order to build and deploy my app I need to first build the docker image, tag it, and then push it to ECR. I then need to update my deployment.yaml with the new tag for the docker image and run the deployment with kubectl apply -f deployment.yaml.
This performs a rolling deployment on the Kubernetes cluster, updating the pods to the new version of the container image. Once the deployment has completed, I may need to do other application-specific things, such as running database migrations or cache clearing/warming, which may or may not need to run for a given deployment.
I suppose I could just write a shell script that runs all of these commands (something like the sketch below) and run it whenever I want to start a new deployment, but I am hoping there is a better/industry-standard way to solve these problems that I have missed.
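For reference, such a script might look like this (registry, region, and resource names are all placeholders):

```bash
#!/usr/bin/env bash
# Sketch of the manual flow: build, tag, push to ECR, roll the Deployment,
# then run a one-off migration Job. All names are made up.
set -euo pipefail
REGISTRY=123456789012.dkr.ecr.us-east-1.amazonaws.com
TAG=$(git rev-parse --short HEAD)
IMAGE="$REGISTRY/myapp:$TAG"

docker build -t "$IMAGE" .
aws ecr get-login-password --region us-east-1 |
  docker login --username AWS --password-stdin "$REGISTRY"
docker push "$IMAGE"

# Update the image in place instead of hand-editing deployment.yaml,
# and wait for the rollout before running post-deploy tasks.
kubectl set image deployment/myapp myapp="$IMAGE"
kubectl rollout status deployment/myapp
kubectl create job "migrate-$TAG" --image="$IMAGE" -- bin/migrate
```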
As I was writing this question I noticed Stack Overflow recommended this question: Kubernetes Deployments. One of the answers to it seems to imply that at least some of what I am looking for is coming soon to Kubernetes, but I want to make sure that if there is a better solution I could be using now, I at least know about it.
My colleague has a good blog post about this topic:
http://blog.jonparrott.com/building-a-paas-on-kubernetes/
Basically, Kubernetes is not a Platform-as-a-Service; it's a toolkit on which you can build your own Platform-as-a-Service. It's not very opinionated by design; instead it focuses on solving some tricky problems with scheduling, networking, and coordinating containers, and lets you layer your opinions on top of it.
One of the simplest ways to automate the workflows you're describing is using a Makefile.
A step up from that, you can design your own miniature PaaS, which the author of the blog post above did here:
https://github.com/jonparrott/noel
Or, you could get involved in more sophisticated efforts to build an open source PaaS on Kubernetes, like OpenShift:
https://www.openshift.com/
or Deis, which is building a Heroku-like platform on Kubernetes:
https://deis.com/
or Redspread, which is building "Git for Kubernetes cluster":
https://redspread.com/
and there are many other examples of people building PaaS on top of Kubernetes. But I think it will be a long time, if ever, that there is an "industry standard" way to deploy to Kubernetes, since half the purpose is to enable multiple deployment workflows for different use cases.
I do want to note that, as far as building container images goes, Google Cloud Container Builder can be a useful tool, since you can do things like use it to automatically build an image any time you push to a repository, which could then get deployed. Alternatively, Jenkins is a popular way to automate CI/CD flows with Kubernetes.
I suppose I could just write a shell script that runs all of these commands, and run it whenever I want to start up a new deployment, but I am hoping there is a better/industry standard way to solve these problems that I have missed.
The company I work for (Weaveworks) and other folks in the space have been advocating an approach that we call GitOps. Please take a look at our series of blog posts covering the topic:
GitOps - Operations by Pull Request
The GitOps Pipeline - Part 2
GitOps Part 3 - Observability
Storing Secure Sealed Secrets using GitOps
The gist of it is that you push images from CI and keep your checked-in YAML manifests in Git (usually in a different repo from the app code). This repo of manifests is then applied to each of your clusters (dev/prod) by a reconciliation operator. You can automate it all yourself quite easily, but also do take a look at what we have built.
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.
We're working on an open source project called Jenkins X, a proposed subproject of the Jenkins foundation, aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).

Kubernetes Deployments

While working on creating a platform that will do microservice deployments using Kubernetes, we want to take a dependency on the Kubernetes Deployment object. However, the documentation (http://kubernetes.io/v1.1/docs/user-guide/deployments.html) says the following: "Note that Deployment objects effectively have API version v1alpha1. Alpha objects may change or even be discontinued in future software releases."
I am wondering if we should go ahead and use the Deployment concept for our deployments (essentially rolling updates) or, since it could be discontinued or changed, whether we should just reimplement the same concepts ourselves: create an RC with new labels, create new pods with labels different from both the old and the new RC, scale down the old RC by slowly removing pods from it, and slowly add new pods to the new RC.
What is the plan or what changes are proposed for Deployment, or is that concept going away in favor of a better one?
Also, I am wondering why OpenShift did not use the Deployment object; was it not ready at that time?
OpenShift's deployment object preceded the upstream Kube object (it was feature-complete in the March 2015 time frame). Once Kube Deployments support the remaining features of OpenShift deployments, we'll automatically migrate to them. Some things OpenShift deployments support that are not upstream yet:
- Automatic deployment when Docker registry tags change
- Custom deployments (run your own deployment logic in a pod)
- Deployment hooks: execute "bundle exec rake db:migrate" before or after deploying your app
- Recreate deployment strategy
- Ability to pause or "hold" a deployment so it does not automatically run (so admins can choose to deploy)
- Ability for deployments to "fail" and be recorded (so that end users know that the code they pushed failed to start)
It will take time to add those remaining options.
As of now, the Deployment concept has been moved to "v1beta1". The concept will most probably be continued, because it is a declarative approach (vs. the imperative approach with the older replication controllers, etc.).
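For reference, a minimal Deployment manifest looks like the sketch below (shown with today's stable apps/v1 API for clarity; at the time of this answer the object still lived in a beta API group, and the name and image here are placeholders):

```bash
kubectl apply -f - <<'EOF'
# Minimal illustrative Deployment; name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate   # declarative rolling updates, no manual RC juggling
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:1.0.0
EOF
```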
I can't say anything about OpenShift, but on GKE it works pretty well for me!
Deployment is planned to graduate to beta in the 1.2 release. See the related issue #15313 for the changes to be made. We will also have new kubectl commands for rolling updates that use Deployment; see issue #17168 and the proposal.