I'm using Bitbucket Pipelines to do CD for a Serverless app. I want to use as few "build minutes" as possible for each deployment. The lifecycle of the serverless deploy command, when using AWS as the backing, seems to be:
Push the package to CloudFormation. (~60 seconds)
Sit around watching the logs from CloudFormation until the deployment finishes. (~20-30 minutes)
Because of the huge time difference, I don't want to do step two. So my question is simple: how do I deploy a serverless app such that it only does step one and returns success or failure based on whether or not CloudFormation successfully accepted the new package?
I've looked at the docs for serverless deploy and I can't see any options to enable that. Also, there seem to be AWS specific options in the serverless deploy command already, so maybe this is an option that the serverless team will consider if there is no other way to do this.
N.B. As for "how will you know if CloudFormation fails?": I would rather set up notifications to come from CloudFormation directly. The build can just have the responsibility of pushing to CloudFormation.
I don't think you can do it with serverless deploy. You can try the serverless package command, which stores the package in the .serverless folder (or a path you specify with --package). Packaging creates a CloudFormation template file, e.g. cloudformation-template-update-stack.json. You can then call the CreateStack API action to create the stack; it returns the stack ID without waiting for all the resources to be created.
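For example, a minimal CLI sketch of that idea (the stack name and capability flags are assumptions, and it presumes the template is small enough to pass inline and that any packaged artifacts it references have already been uploaded to the S3 bucket the template expects):

# Package only - writes .serverless/cloudformation-template-update-stack.json
serverless package

# Hand the template to CloudFormation and return immediately with the stack ID
aws cloudformation create-stack \
  --stack-name my-service-dev \
  --template-body file://.serverless/cloudformation-template-update-stack.json \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM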
Related
I'm looking for a way to pick up all the templates from the repository without hardcoding the YAML template files, so that if new templates are added in the future, the pipeline automatically picks them all up, deploys them, and creates a single stack in the AWS environment, without any modification to the gitlab-ci.yml/pipeline file.
I tried the deploy CLI command; it deploys all the templates, but then it runs an update that deletes them one by one, and only the resources from the last template are left after the pipeline execution is complete.
Is there an option to do this?
I am new to Terraform and I want to set up a CI/CD pipeline to GCP with GitHub, replacing a current system that uses Jenkins, as we want to increase the automation of deployments. What would be the best way or architecture to do this?
One of the primary products related to CI/CD is Google's Cloud Build.
https://cloud.google.com/build
Its one-liner reads:
Build, test, and deploy on our serverless CI/CD platform.
It has built-in triggers, including GitHub integration, meaning that when events occur on GitHub, Cloud Build runs its prescribed recipes.
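As a rough sketch (the repository owner/name, trigger name, and config path are placeholders, not taken from the question), a GitHub-triggered build can be created from the CLI like this:

gcloud builds triggers create github \
  --name=deploy-on-push \
  --repo-owner=my-org \
  --repo-name=my-repo \
  --branch-pattern="^main$" \
  --build-config=cloudbuild.yaml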
I'd suggest reading the documentation found at the above page and also correlating it against the curated documentation found on GCP Weekly here:
Tag: CI
Tag: Cloud Build
I would like to set up one GitHub repo which will hold all backend Google Cloud Functions. But I have 2 questions:
How can I set it up so that GCP knows that there are multiple Cloud Functions to be deployed?
If I only change code for one Cloud Function, how can I set it up so that GCP will only deploy the modified Cloud Function, and NOT redeploy the unchanged ones?
When saving your Cloud Functions to a GitHub repo, just make sure the .gitignore file has the proper settings. Exclude node_modules and any other folders and files you don't want to commit.
Usually all Cloud Functions are deployed through a single index.js file, so you need to make sure you have that file and every file you import into it.
If you want to deploy a single function you can use this command:
firebase deploy --only functions:your_function_name
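If several functions changed, the same flag also takes a comma-separated list; the function names below are just placeholders:

firebase deploy --only functions:function_one,functions:function_two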
If you want a more structured solution, you can read this article to learn more about it.
As I understand it, you would like to store the cloud functions code in GitHub, and I assume you would like to trigger deployment on some action - a push or pull request. At the same time, you would like only the modified cloud functions to be redeployed, leaving the others without redeployment.
Every cloud function in your set is (re)deployed separately - I mean that for each of them you might need to provide plenty of specific parameters, and those parameters are different for different functions. Details of those parameters are described in the gcloud functions deploy command documentation page.
Most likely the --entry-point - the name of the cloud function as defined in source code - will be different for each of them. Thus, you might have some kind of a "loop" through all cloud functions for deployment with different parameters (including the name, entry point, etc.).
Such a "loop" (or "set") may be implemented using Cloud Build or Terraform, both of them together, or some other tool.
An example of how to deploy only modified cloud functions is provided in the 'How can I deploy google cloud functions in CI/CD without redeploying unchanged' SO question/answer. That example can be extended to an arbitrary number of cloud functions. If you don't want to use Terraform, a similar mechanism (based on the same idea) can be implemented using pure Cloud Build.
We are deploying a Java backend and React UI application using docker-compose. Our Docker containers are running Java, Caddy, and Postgres.
What's unusual about this architecture is that we are not running the application as a cluster. Each user gets their own server with their own subdomain. Everything is working nicely, but we need a strategy for managing/updating machines as the number of users grows.
We can accept some down time in the middle of the night, so we don't need to have high availability.
We're just not sure what would be the best way to update software on all machines. And we are pretty new to Docker and have no experience with Kubernetes or Ansible, Chef, Puppet, etc. But we are quick to pick things up.
We expect to have hundreds to thousands of users. Each machine runs the same code but has environment variables that are unique to the user. Our original provisioning takes care of that, so we do not anticipate having to change those with software updates. But a solution that can also provide that ability would not be a bad thing.
So, the question is, when we make code changes and want to deploy the updated Java jar or the React application, what would be the best way to get those out there in an automated fashion?
Some things we have considered:
Docker Hub (concerns about rate limiting)
Deploying our own Docker repo
Kubernetes
Ansible
https://containrrr.dev/watchtower/
Other things that we probably need include GitHub Actions to build and update the Docker images.
We are open to ideas that are not listed here, because there is a lot we don't know about managing many machines running docker-compose. So please feel free to offer suggestions. Many thanks!
In your case I advise you to use Kubernetes in combination with a CD tool; one of them is Buddy. I think that is the best way to make such updates in an automated fashion. Of course you can use just Kubernetes, but with Buddy or another CD tool you will make it faster and easier. In my answer I describe Buddy, but there are many popular CD tools for automating workflows in Kubernetes, for example GitLab or Codefresh.io - you should pick whichever is actually best for you. Take a look: CD-automation-tools-Kubernetes.
With Buddy you can avoid most of these steps when automating updates (executing the kubectl apply or kubectl set image commands) by doing a simple push to Git.
Every time you update your application code or Kubernetes configuration, you have two ways to update your cluster: kubectl apply or kubectl set image.
Such a workflow most often looks like this:
1. Edit the application code or the configuration .yml file
2. Push the changes to your Git repository
3. Build a new Docker image
4. Push the Docker image to the registry
5. Log in to your K8s cluster
6. Run kubectl apply or kubectl set image to apply the changes to the K8s cluster (see the sketch after this list)
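A hedged sketch of step 6, assuming a Deployment named web with a container named app (the names and registry are placeholders):

# Option A: bump only the image of one container
kubectl set image deployment/web app=registry.example.com/web:1.2.3

# Option B: re-apply the full manifest after editing the image tag in it
kubectl apply -f k8s/deployment.yml

# Either way, the rollout can be watched with:
kubectl rollout status deployment/web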
Buddy is a CD tool that you can use to automate your whole K8s release workflow, for example:
managing Dockerfile updates
building Docker images and pushing them to the Docker registry
applying new images on your K8s cluster
managing configuration changes of a K8s Deployment
etc.
With Buddy you will have to configure just one pipeline.
With every change in your app code or the YAML config file, this tool will apply the deployment and Kubernetes will start transforming the containers to the desired state.
Pipeline configuration for running Kubernetes pods or jobs
Assume that we have an application on a K8s cluster and that its repository contains:
source code of our application
a Dockerfile with instructions on creating an image of your app
DB migration scripts
a Dockerfile with instructions on creating an image that will run the migration during the deployment (db migration runner)
In this case, we can configure a pipeline that will:
1. Build the application and migration images
2. Push them to Docker Hub
3. Trigger the DB migration using the previously built image; we can define the image, the commands, and the deployment in a YAML file
4. Use either Apply K8s Deployment or Set K8s Image to update the image in your K8s application (a rough manual equivalent is sketched after this list)
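The manual equivalent of steps 3 and 4 might look roughly like the commands below (the Job and Deployment names are hypothetical, and Buddy runs these as pipeline actions rather than raw commands):

kubectl apply -f db-migrate-job.yml
kubectl wait --for=condition=complete job/db-migrate --timeout=300s
kubectl set image deployment/web app=mydockerhubuser/web:1.2.3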
You can adjust the above workflow to fit your environment and application's properties.
Buddy supports GitLab as a Git provider. Integration of these two tools is easy and only requires authorizing GitLab in your profile. Thanks to this integration you can create pipelines that will build, test and deploy your app code to the server. But of course, if you are using GitLab there is no need to set up Buddy as an extra tool, because GitLab is also a CD tool for automating workflows in Kubernetes.
You can find more information here: buddy-workflow-kubernetes.
Read also: automating-workflows-kubernetes.
As it turns out, we found that a paid Docker Hub plan addressed all of our needs. I appreciate the excellent information from @Malgorzata.
I am hoping to find a good way to automate the process of going from code to a deployed application on my kubernetes cluster.
In order to build and deploy my app I need to first build the docker image, tag it, and then push it to ECR. I then need to update my deployment.yaml with the new tag for the docker image and run the deployment with kubectl apply -f deployment.yaml.
This performs a rolling deployment on the Kubernetes cluster, updating the pods to the new version of the container image. Once this deployment has completed, I may need to do other application-specific things such as running database migrations or clearing/warming caches, which may or may not need to run for a given deployment.
I suppose I could just write a shell script that runs all of these commands, and run it whenever I want to start up a new deployment, but I am hoping there is a better/industry standard way to solve these problems that I have missed.
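For what it's worth, a minimal sketch of such a script (the ECR repository, region, and Deployment/container names are placeholders):

#!/usr/bin/env bash
set -euo pipefail

TAG=$(git rev-parse --short HEAD)
REPO=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app

# Build, authenticate, and push the image to ECR
docker build -t "$REPO:$TAG" .
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin "$REPO"
docker push "$REPO:$TAG"

# Point the Deployment at the new tag and wait for the rollout to finish
kubectl set image deployment/my-app my-app="$REPO:$TAG"
kubectl rollout status deployment/my-app

# Application-specific post-deploy steps (migrations, cache warming) would go here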
As I was writing this question I noticed Stack Overflow recommended this question: Kubernetes Deployments. One of the answers to it seems to imply that at least some of what I am looking for is coming soon to Kubernetes, but I want to make sure that if there is a better solution I could be using now, I at least know about it.
My colleague has a good blog post about this topic:
http://blog.jonparrott.com/building-a-paas-on-kubernetes/
Basically, Kubernetes is not a Platform-as-a-Service, it's a toolkit on which you can build your own Platform-as-a-Service. It's not very opinionated by design; instead it focuses on solving some tricky problems with scheduling, networking, and coordinating containers, and lets you layer in your opinions on top of it.
One of the simplest ways to automate the workflows you're describing is using a Makefile.
A step up from that, you can design your own miniature PaaS, which the author of the first blog post did here:
https://github.com/jonparrott/noel
Or, you could get involved in more sophisticated efforts to build an open source PaaS on Kubernetes, like OpenShift:
https://www.openshift.com/
or Deis, which is building a Heroku-like platform on Kubernetes:
https://deis.com/
or Redspread, which is building "Git for Kubernetes cluster":
https://redspread.com/
and there are many other examples of people building PaaS on top of Kubernetes. But I think it will be a long time, if ever, that there is an "industry standard" way to deploy to Kubernetes, since half the purpose is to enable multiple deployment workflows for different use cases.
I do want to note that as far as building container images, Google Cloud Container Builder can be a useful tool, since you can do things like use it to automatically build an image any time you push to a repository which could then get deployed. Alternatively, Jenkins is a popular way to automate CI/CD flows with Kubernetes.
I suppose I could just write a shell script that runs all of these commands, and run it whenever I want to start up a new deployment, but I am hoping there is a better/industry standard way to solve these problems that I have missed.
The company I work for (Weaveworks) and other folks in the space have been advocating for an approach that we call GitOps. Please take a look at our series of blog posts covering the topic:
GitOps - Operations by Pull Request
The GitOps Pipeline - Part 2
GitOps Part 3 - Observability
Storing Secure Sealed Secrets using GitOps
The gist of it is that you push images from CI and keep your checked-in YAML manifests in Git (usually in a different repo from the app code). This manifest repo is then applied to each of your clusters (dev/prod) by a reconciliation operator. You can automate it all yourself quite easily, but also do take a look at what we have built.
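As a rough illustration of the CI side only (the registry, repo URLs, file name, and the GIT_SHA variable are placeholders; the apply step itself is handled by the operator watching the manifest repo):

# Build and push the image from CI
docker build -t registry.example.com/app:"$GIT_SHA" .
docker push registry.example.com/app:"$GIT_SHA"

# Bump the image tag in the separate manifests repo; the operator watching
# that repo then rolls the change out to the cluster
git clone git@example.com:team/app-manifests.git
cd app-manifests
sed -i "s|image: registry.example.com/app:.*|image: registry.example.com/app:$GIT_SHA|" deployment.yaml
git commit -am "Deploy app:$GIT_SHA"
git push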
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.
We're working on an open source project called Jenkins X, which is a proposed subproject of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).