During development a monolithic component (one project in one repo) is actually quite useful:
Especially in the early stages, renaming classes and methods is easy.
Also, integration tests can be written, since all services are available inside the same monolith.
Also, during development there is no need to start other services: just start the monolith and everything works end to end.
Dependency management can be done in one place.
However, when deploying, microservices are preferable. That's why my question is whether there are any examples or best practices for automatically splitting a monolith into microservices, but only for deployment.
A few ideas that come to mind:
Deploying the same monolith image for several microservices: pod1, pod2, and pod3 all run the same image, but each pod gets its own service1, service2, or service3 (Kubernetes resource "Service"). Via a ConfigMap, the monolith image in pod1, for example, is configured to serve only service1 requests; the same applies to pod2 and pod3 (see the first sketch after this list).
For REST services, an Ingress configuration could additionally allow only service1 requests to reach pod1.
Using the CI/CD pipeline to build several different Docker images from the same monolith repo, effectively removing the parts that are not needed. E.g. image1 contains only the service1 implementation, while the service2 and service3 implementations have been removed. For a Java application, several different jars could already be built during the build process (see the second sketch below).
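A rough sketch of the first idea, with hypothetical names (the same monolith image reads an ACTIVE_SERVICE value from a ConfigMap and only serves that service's endpoints; the image name and ports are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: service1-config
data:
  ACTIVE_SERVICE: "service1"   # the monolith is assumed to read this and expose only service1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monolith
      role: service1
  template:
    metadata:
      labels:
        app: monolith
        role: service1
    spec:
      containers:
        - name: monolith
          image: registry.example.com/monolith:1.0.0   # same image for pod1, pod2, pod3
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: service1-config
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  selector:
    app: monolith
    role: service1
  ports:
    - port: 80
      targetPort: 8080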
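And a sketch of the second idea as a CI matrix build that produces one image per service from the same monolith repo (GitHub-Actions-style YAML; the INCLUDE_SERVICE build argument and the registry name are assumptions about how the Dockerfile would strip out the other services):
name: build-service-images
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        service: [service1, service2, service3]
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image for ${{ matrix.service }}
        run: |
          # the Dockerfile is assumed to build only the jar for the given service
          docker build --build-arg INCLUDE_SERVICE=${{ matrix.service }} \
            -t registry.example.com/monolith-${{ matrix.service }}:${GITHUB_SHA} .
          docker push registry.example.com/monolith-${{ matrix.service }}:${GITHUB_SHA}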
I'd like to know if anyone has experience with such an approach, or sees clear counter-arguments to working like this.
Thanks.
I refactored my k8s objects to use Kustomization, Components, replacements, and patches, and got to a good DRY state so that I don't repeat much between two apps, or between those apps across dev and test environments. While doing so, I am referring to objects outside of the folder (but in the same repository):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
components:
  - ../../common-nonprod
  - ../../common-nonprod-ui
My question is: will ArgoCD be able to detect a change in the app if any of the files inside the common folders that this application refers to as components change?
In other words, does ArgoCD perform kustomize build to detect what changed?
Yes. ArgoCD runs kustomize build to render your manifests before trying to apply them to the cluster. ArgoCD doesn't care which files have changed; it only cares whether the manifests produced by kustomize build differ from what is currently deployed in the cluster.
For a microservice-architecture-based application, I'm trying to understand a standard process for how to logically group and manage correct version compatibility among independently deployable microservices. Let me elaborate with a practical scenario:
Say I am building a software application which is composed of 10 microservices. All the microservices have their own independent repositories (branching workflow etc.) and their own separate CI/CD pipelines.
The CI/CD pipeline gets triggered whenever any change is pushed to the 'master' branch of the respective microservice.
Considering a Helm-chart- and Kubernetes-based deployment, all the microservices will be deployed with version 1.0 for the very first deployment and our system will work. For subsequent releases, we might have only a couple of services that get deployed. So after a couple of production releases, each microservice will be at a different version, and together they constitute the application at that point in time.
My question is:
How do I logically group independently deployable microservices in order to deploy or roll back to an earlier release, i.e. how do I determine what the versions of the different microservices were for earlier releases?
Is there any existing tool or standard practice to track the version of each microservice for a given release, so that I can seamlessly roll back to the expected release?
If there is no automated solution, what would be the right approach to address such a requirement?
I appreciate your thoughts and suggestions on this.
With consideration of Kubernetes:
1. Helm is a nice tool to deploy and track versions.
2. Native k8s Deployments work nicely; you need to use Deployments properly, especially the --record flag in kubectl commands (e.g. check this link).
With AWS ECS clusters:
1. They have task definitions and tasks. I think that works for you.
I don't have pointers for docker-compose, Swarm, and other tools, but you can always use the power of git and some scripting.
The idea is to make a file that lists all versions of services/containers/code, and commit that file in git with the code. Make a tag out of it for simplicity. Your script should compare this state file with the current state and apply only the specific changes. Also look at git submodules: a submodule setup is nothing but a group of many git projects, and it tracks the status of each project with the help of each project's commit id. This helped us in the situation you mention.
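A minimal sketch of such a state file, with hypothetical service names, image tags, and commit ids (none of these come from the original answer):
# versions.yaml -- committed and tagged in git alongside each release
release: r42
services:
  order-service:
    image: registry.example.com/order-service:1.3.0
    gitCommit: abc1234
  payment-service:
    image: registry.example.com/payment-service:2.1.1
    gitCommit: def5678
  user-service:
    image: registry.example.com/user-service:1.0.7
    gitCommit: 9ab0cde
Rolling back then means checking out an older tag of this file and re-applying the versions it lists.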
This is a fairly new problem; we just launched a new tool, Reliza Hub, to solve it. Also, here is my post on the subject: Microservices – Combinatorial Explosion of Versions. Currently we are at the MVP stage and a lot of work is going on; see this video tutorial if our direction makes sense for you: https://www.youtube.com/watch?v=yDlf5fMBGuI
If you decide to implement and have any questions or need help with integration, just tag me on SO and I'd be very much willing to make it work for you.
To sum up a few things that we are doing: we denote developer-facing projects (those that map to source code) as Projects and customer-facing projects (bundles that the customer sees) as Products.
We say that Products are essentially compositions of Projects, and we provide tooling for compiling different versions of Projects into what's called a Product bundle. You can then integrate this into any CI or CD tool out there, or start manually if you haven't configured CI/CD yet.
Other than that, yes: I highly recommend Helm and Kubernetes; this is what we use on newer projects. (And I can also add ArgoCD and Spinnaker to the existing tooling.) But that alone is not enough for tracking permutations of different versions of microservices and establishing which configurations are good and which are not across different environments.
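To make the Helm recommendation a bit more concrete, one common pattern (independent of Reliza Hub, and using hypothetical chart names and versions) is an umbrella chart whose Chart.yaml pins the chart version of each microservice, so that one version of the umbrella chart captures a known-good combination:
# Chart.yaml of a hypothetical umbrella chart
apiVersion: v2
name: shop-app
version: 1.4.0                 # version of the whole application bundle
dependencies:
  - name: order-service
    version: 1.3.0
    repository: https://charts.example.com
  - name: payment-service
    version: 2.1.1
    repository: https://charts.example.com
  - name: user-service
    version: 1.0.7
    repository: https://charts.example.com
Rolling back is then a matter of re-installing an earlier umbrella chart version (or using helm rollback) rather than coordinating ten individual releases.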
Say that I have 5 APIs that I want to deploy in a Kubernetes cluster. My question is simply: what is the best practice for storing the YAML files related to Kubernetes?
In projects I've seen online, Kubernetes YAML files are just added to the API project itself. I wonder if it makes sense to decouple all files related to Kubernetes into an entirely separate "project", managed in VCS as a completely separate entity from the API projects themselves.
This question arises because I'm currently reading a book about Kubernetes, on the topic of namespaces, and I considered that it might be a good idea to have separate namespaces per environment (DEV / UAT / PROD), in which case it may make sense to have these files in a centralized "Kubernetes" project (unless it would be better to have a separate cluster per environment?).
Whether to put the YAML in the same repo as the app is a question that projects answer in different ways. You might want to put them together if you find that you often change both at the same time, or if you just find it clearer to see everything in one place. You might separate them if you mostly work on the YAML separately, or if you find it less cluttered, or if you want different visibility for it (e.g. for different teams to look at it). If things get more sophisticated, then you'll actually want to generate the YAML from templates and inject environment-specific configuration into it at deploy time (whether those environments are namespaces or clusters); see Best practices for storing kubernetes configuration in source control for more discussion on this.
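As a sketch of the "generate the YAML from templates and inject environment-specific configuration" idea, here is one possible kustomize layout; the directory names, namespace, and patch file are hypothetical:
# Layout: k8s/base/{deployment.yaml,service.yaml,kustomization.yaml}
#         k8s/overlays/dev/kustomization.yaml
#         k8s/overlays/prod/kustomization.yaml
# k8s/overlays/dev/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-api-dev          # per-environment namespace
resources:
  - ../../base
patches:
  - path: replica-count.yaml   # e.g. fewer replicas in dev than in prod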
From Production k8s experience for CI/CD:
One cluster per environment, such as dev, stage, prod (optionally per data centre)
One namespace per project
One git deployment repo per project
One branch in git deployment repo per environment
Use configmaps for configuration aspects
Use secret management solution to store and use secrets
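As a minimal illustration of the ConfigMap point (names and keys are hypothetical):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-api-config
  namespace: my-project        # one namespace per project, as above
data:
  LOG_LEVEL: "info"
  FEATURE_X_ENABLED: "false"
The pod spec of the corresponding Deployment would then reference it, e.g. via envFrom with a configMapRef pointing at my-api-config.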
I am working on migrating an existing AWS, Spring Boot based system with 50+ independent repositories to Kubernetes. I am preparing a file containing naming conventions for artifacts, Docker images, and Kubernetes resources (e.g. services, deployments, configmaps, secrets, ingresses, labels, etc.) to streamline the process. I am in a dilemma over whether I should use a single file or separate files for defining Kubernetes resources. I know both will work; however, I am inclined to prepare separate resource files for better version control and modularity.
I'd appreciate it if you could share your feedback on which one should be preferred: a single file for all k8s resources, or a separate k8s specification file for each resource?
Try to go for separate resource files; these help in managing the resources better while ensuring modularity as well. Also, most Kubernetes deployments now tend to be done via Helm charts, which provide a better way to manage the resource files.
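A sketch of such a split, with one file per resource tied together by a kustomization.yaml (file names are illustrative; a Helm chart's templates/ directory achieves the same one-file-per-resource layout):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml
  - ingress.yaml
  - secret.yaml                # or reference an external secret-management solution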
I am hoping to find a good way to automate the process of going from code to a deployed application on my kubernetes cluster.
In order to build and deploy my app I need to first build the docker image, tag it, and then push it to ECR. I then need to update my deployment.yaml with the new tag for the docker image and run the deployment with kubectl apply -f deployment.yaml.
This will perform a rolling deployment on the Kubernetes cluster, updating the pods to the new version of the container image. Once this deployment has completed, I may need to do other application-specific things such as running database migrations or cache clearing/warming, which may or may not need to run for a given deployment.
I suppose I could just write a shell script that runs all of these commands, and run it whenever I want to start up a new deployment, but I am hoping there is a better/industry standard way to solve these problems that I have missed.
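For reference, the kind of script/pipeline I have in mind looks roughly like this (GitHub-Actions-style YAML, using kubectl set image instead of editing deployment.yaml by hand; the registry URL, deployment name, and migration script are placeholders, and ECR login / kubeconfig setup is assumed to already be in place):
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build, tag, and push image
        run: |
          IMAGE=123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:${GITHUB_SHA}
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
      - name: Roll out to Kubernetes
        run: |
          kubectl set image deployment/my-app my-app=123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:${GITHUB_SHA}
          kubectl rollout status deployment/my-app
      - name: Application-specific post-deploy tasks
        run: ./scripts/run-migrations.sh   # hypothetical; only when a deployment needs it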
As I was writing this question, I noticed Stack Overflow recommended this question: Kubernetes Deployments. One of the answers to it seems to imply that at least some of what I am looking for is coming soon to Kubernetes, but I want to make sure that if there is a better solution I could be using now, I at least know about it.
My colleague has a good blog post about this topic:
http://blog.jonparrott.com/building-a-paas-on-kubernetes/
Basically, Kubernetes is not a Platform-as-a-Service; it's a toolkit on which you can build your own Platform-as-a-Service. It's not very opinionated by design; instead it focuses on solving some tricky problems with scheduling, networking, and coordinating containers, and lets you layer your opinions on top of it.
One of the simplest ways to automate the workflows you're describing is using a Makefile.
A step up from that, you can design your own miniature PaaS, which the author of the first blog post did here:
https://github.com/jonparrott/noel
Or, you could get involved in more sophisticated efforts to build an open source PaaS on Kubernetes, like OpenShift:
https://www.openshift.com/
or Deis, which is building a Heroku-like platform on Kubernetes:
https://deis.com/
or Redspread, which is building "Git for Kubernetes cluster":
https://redspread.com/
and there are many other examples of people building a PaaS on top of Kubernetes. But I think it will be a long time, if ever, before there is an "industry standard" way to deploy to Kubernetes, since half the purpose is to enable multiple deployment workflows for different use cases.
I do want to note that as far as building container images, Google Cloud Container Builder can be a useful tool, since you can do things like use it to automatically build an image any time you push to a repository which could then get deployed. Alternatively, Jenkins is a popular way to automate CI/CD flows with Kubernetes.
I suppose I could just write a shell script that runs all of these commands, and run it whenever I want to start up a new deployment, but I am hoping there is a better/industry standard way to solve these problems that I have missed.
The company I work for (Weaveworks) and other folks in the space have been advocating an approach that we call GitOps; please take a look at our series of blog posts covering the topic:
GitOps - Operations by Pull Request
The GitOps Pipeline - Part 2
GitOps Part 3 - Observability
Storing Secure Sealed Secrets using GitOps
The gist of it is that you push images from CI and keep your checked-in YAML manifests in git (usually in a different repo from the app code). This repo of manifests is then applied to each of your clusters (dev/prod) by a reconciliation operator. You can automate it all yourself quite easily, but do also take a look at what we have built.
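A sketch of what such a declaration can look like, using an Argo CD Application as one example of a reconciliation operator (this is not Weaveworks' own tooling; the repo URL, path, and namespaces are placeholders):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-manifests.git   # manifests repo, separate from app code
    targetRevision: main
    path: prod
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true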
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.
We're working on an open source project called Jenkins X which is a proposed sub project of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).