Let's say I want to deploy a microservice using a CI/CD pipeline for each pull request (like you can do with GitLab Review Apps). But my microservice needs some dependencies (other containers) to actually work. Let's take an example: suppose I am using a microservice architecture with a dependency graph similar to this:
        F
       / \
      /   \
     G     A     H
          / \
         B   C
         |
         D
         |
         E
I want to deploy the microservice A. To do it, I need the containers B, C, D and E deployed, but not the rest.
     A
    / \
   B   C
   |
   D
   |
   E
So ideally there would be a dependency tool / service registry that would let me define the dependencies between all my microservices and deploy each microservice's dependencies from the deployment files in their own repositories (each microservice/dependency has its own repo).
In short: is there a dependency management tool for Kubernetes that would automatically deploy my microservice's dependencies to a cluster?
My suggestion is: try to make the microservices independently developed and deployable. You can use patterns like API aggregation and eventual consistency to achieve "Data Insights" across your services. By having these service dependencies you may be creating a distributed monolith.
Related
I have a pod with 3 containers (ca, cb, cc).
This pod is owned by TeamA, which creates and owns ca. The other two containers are developed by two other teams: TeamB for cb and TeamC for cc.
Both cb and cc also run independently (outside the TeamA pod) as services within this same cluster.
How can TeamA's pod find out when cb or cc deploys a newer version in the cluster, and how can we ensure that this triggers a refresh?
As you may have guessed, cb and cc are services that ca relies on heavily, and they are also services in their own right.
Is there a way to ensure that TeamA pod keeps cb and cc updated whenever TeamB and TeamC deploy new versions to the cluster?
This is not a task for Kubernetes. You should configure that in your CI/CD tool. For example, whenever a new commit is pushed to service A, it will first trigger the pipeline for A, and then trigger corresponding pipelines for services B and C. Every popular CI/CD system has this ability and this is how it's normally done.
Proper CI/CD tests will also protect you from mistakes. If a service A update breaks compatibility with B and C, your pipeline should fail and notify you about it.
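For example, in GitLab CI the downstream triggering can be sketched with `trigger` jobs. A minimal sketch, assuming hypothetical project paths and scripts:

```yaml
# .gitlab-ci.yml of service A (project paths and scripts are assumptions)
stages:
  - test
  - deploy
  - downstream

test-a:
  stage: test
  script: ./run-tests.sh

deploy-a:
  stage: deploy
  script: ./deploy.sh

# after A is deployed, kick off the pipelines of the dependent services
trigger-b:
  stage: downstream
  trigger:
    project: my-group/service-b

trigger-c:
  stage: downstream
  trigger:
    project: my-group/service-c
```

If `test-a` or `deploy-a` fails, the downstream pipelines for B and C never run, which gives you the compatibility protection described above.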
There's no one specific answer. https://github.com/jetstack/version-checker provides a metrics/alerting approach. Numerous kubectl plugins give a CLI reporting approach. Stuff like https://fluxcd.io/docs/guides/image-update/ can do upgrades automatically within certain parameters.
Or maybe just actually talk to your coworkers, this is both a technological and social problem and you'll need answers from both sides.
My serverless infra is split between multiple functional stacks, each of which has its own resources (Dynamo tables, topics, queues, etc.).
For some stack A, I need to define a lambda which listens to another stack B queue events.
Assuming a deployment from scratch, it works well if B is deployed first, because the queue already exists when A is deployed. But my CI currently runs:
sls deploy A
sls deploy B
So adding, for instance, an SQS resource in B and referencing it in A will cause the deployment to fail, because during A's deployment the B SQS resource doesn't exist yet.
How can I handle this kind of cross stack dependency properly ?
You have to redesign your templates. You can't have resources in A referencing resources in B that don't exist yet. Either move everything to A so it's self-sufficient, or introduce a new stack that holds the common resources and is deployed before both A and B.
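The "common resources" option can be sketched with a CloudFormation export in the shared stack and an import in A. All the names below are hypothetical, and this assumes your Serverless Framework version resolves intrinsic functions in event ARNs:

```yaml
# shared/serverless.yml — deployed before A and B
service: shared
provider:
  name: aws
resources:
  Resources:
    EventsQueue:
      Type: AWS::SQS::Queue
  Outputs:
    EventsQueueArn:
      Value: !GetAtt EventsQueue.Arn
      Export:
        Name: shared-EventsQueueArn
---
# a/serverless.yml — imports the queue instead of referencing another stack
service: a
provider:
  name: aws
functions:
  listener:
    handler: handler.listen
    events:
      - sqs:
          arn: !ImportValue shared-EventsQueueArn
```

The CI order then becomes `sls deploy shared`, then A and B in any order.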
Assuming that with JHipster I've generated:
- 1 Gateway (with MongoDB + JHipsterRegistry)
- 3 Microservices [called A, B and C] (with MongoDB + JHipsterRegistry)
I'm using maven.
I've composed everything in Docker, so the resulting Docker configuration is:
1 JHipster Registry
1 Gateway
1 Gateway MongoDB
1 Microservice A
1 Microservice A MongoDB
1 Microservice B
1 Microservice B MongoDB
1 Microservice C
1 Microservice C MongoDB
All works fine: from the Gateway I can see entities from each Microservice.
Now I need to implement some features on Gateway (UI pages etc), and I need to debug with Eclipse during development.
How can I achieve this?
A) Do I need to run everything manually, i.e.:
all components manually with ./mvnw
the JHipsterRegistry from a .jar
the Gateway from Eclipse, running the debugger on the main Application
B) Or can I somehow use Docker for all "static" components and run only the Gateway from Eclipse?
C) Any other suggestion?
If (A):
Do I also need to start all the MongoDB instances manually?
How?
Might the used ports collide?
Do I need to change configurations?
If (B):
How do I run all the "static" components in Docker?
How do I configure the Gateway to reach the other components?
I did something similar, as follows:
- duplicate your docker-compose directory to create docker-compose-dev
- edit the new docker-compose.yml and remove the gateway service
- edit your hosts file and add the following entry:
127.0.0.1 jhipster-registry
- run this setup with docker compose up -d
It should start without issues; then you can run your gateway from the command line using ./mvnw and npm start.
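A minimal sketch of what the resulting docker-compose-dev/docker-compose.yml could look like (the image names and ports here are assumptions; copy the real values from your generated compose file):

```yaml
version: '3'
services:
  jhipster-registry:
    image: jhipster/jhipster-registry
    ports:
      - 8761:8761
  gateway-mongodb:
    image: mongo
    ports:
      - 27017:27017
  # the microservice A/B/C containers and their MongoDB containers stay
  # exactly as generated; only the gateway service itself is removed
```

Publishing the registry and database ports is what lets the Gateway, running outside Docker in Eclipse, reach them on localhost (via the hosts entry above).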
Let's say there are two environments: staging and prod.
There are also two microservices: A and B.
Microservices are already deployed to both environments and each running service version 1. So we have:
staging: A1, B1
prod: A1, B1
Now one development team implements a new feature in microservice A (either a synchronous API or an async one through a message broker) and deploys A2 to staging. Then another team implements a new feature in B that makes use of the new feature from A2 and also deploys it to staging (B2). Now we have:
staging: A2, B2
prod: A1, B1
The new features are tested in the staging environment by the client and are approved for deployment to production. The question is how to determine which services should be deployed to production first. Obviously B2 must not be deployed first, because it depends on A2. Are there any tools/strategies to keep track of this?
What I can imagine is that all services keep track of which versions of other services they depend on, and during deployment these versions are checked against what is running in the target environment; if something is missing, the deployment is rejected.
This raises a question - should microservice deployment be parallel or one-microservice-at-a-time?
Also, what if, before A2 and B2 are deployed to prod, A3 is released to staging and depends on B2? Now we would have to schedule the deployment to production like this:
A2 => B2 => A3
Those are real-world examples that come to mind, but maybe in a microservice architecture it's possible to avoid such situations by following some rules?
Versioning the integration points can be an option.
For example, if microservice-A gets information from microservice-B through a REST call, then when microservice-B wants to change the integration (a change in the REST contract, for instance), microservice-B can add a new endpoint with a new versioned mapping like "/getInformationFromMeV2" without deleting the old one. This way, when you deploy microservice-B, microservice-A can still use the old endpoint for a while. After microservice-A is deployed too, you can remove the old endpoint from microservice-B in the next deployment.
Note: Of course, if microservice-A wants to use the newly developed endpoint from microservice-B, microservice-B must be deployed before microservice-A.
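As a toy illustration of serving both contracts side by side, here is a sketch using the JDK's built-in HTTP server. The endpoint names follow the answer above; the payloads and class names are made up, and a real JHipster service would of course use Spring MVC mappings instead:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class VersionedEndpoints {

    // microservice-B serving the old and the new contract at the same time
    static String[] demo() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // old contract: kept alive so already-deployed consumers keep working
        server.createContext("/getInformationFromMe",
                ex -> respond(ex, "{\"info\":\"v1\"}"));
        // new contract: added alongside instead of replacing the old one
        server.createContext("/getInformationFromMeV2",
                ex -> respond(ex, "{\"info\":\"v2\",\"extra\":true}"));
        server.start();
        int port = server.getAddress().getPort();
        try {
            HttpClient client = HttpClient.newHttpClient();
            // a not-yet-redeployed consumer still calls the old endpoint
            String v1 = get(client, port, "/getInformationFromMe");
            // a redeployed consumer opts into the new contract
            String v2 = get(client, port, "/getInformationFromMeV2");
            return new String[] { v1, v2 };
        } finally {
            server.stop(0);
        }
    }

    static void respond(HttpExchange ex, String body) throws IOException {
        byte[] bytes = body.getBytes();
        ex.sendResponseHeaders(200, bytes.length);
        try (OutputStream os = ex.getResponseBody()) {
            os.write(bytes);
        }
    }

    static String get(HttpClient client, int port, String path) throws Exception {
        HttpRequest req = HttpRequest
                .newBuilder(URI.create("http://localhost:" + port + path))
                .build();
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        for (String body : demo()) {
            System.out.println(body);
        }
    }
}
```

Once no consumer calls the v1 path anymore, the old context can simply be removed in the next deployment.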
For async communication you can still apply this approach. With broker-based communication like Kafka (suppose you want to deploy microservice-A first):
If microservice-A is the consumer, a new topic is created, microservice-A subscribes to the new topic as well, and the new version of microservice-A is deployed. At that point microservice-A consumes messages from both the new and old topics (of course, until microservice-B is deployed, all messages will be sent to the old topic). After microservice-B is also deployed, the old topic is deleted.
If microservice-A is the producer, again a new topic is created, and after the deployment of microservice-A new messages are sent to the new topic. When microservice-B is deployed too, it starts reading messages from the beginning of the new topic, and the old topic is deleted.
With this kind of approach you can deploy microservices independently, as is convenient in a microservices architecture.
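The consumer-first migration steps above can be simulated with a toy in-memory "broker" (plain Java collections standing in for Kafka; the topic names are made up):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TopicMigration {
    // toy in-memory "broker": topic name -> messages (stands in for Kafka)
    static final Map<String, List<String>> broker = new HashMap<>();

    static void produce(String topic, String msg) {
        broker.computeIfAbsent(topic, t -> new ArrayList<>()).add(msg);
    }

    // the consumer reads from every topic it is subscribed to
    static List<String> consume(Set<String> subscriptions) {
        List<String> seen = new ArrayList<>();
        for (String topic : subscriptions) {
            seen.addAll(broker.getOrDefault(topic, List.of()));
        }
        return seen;
    }

    public static void main(String[] args) {
        // step 1: microservice-A (the consumer) is redeployed first,
        // subscribed to BOTH the old and the new topic
        Set<String> subsA = new LinkedHashSet<>(List.of("orders.v1", "orders.v2"));

        // microservice-B is not redeployed yet: it still uses the old topic
        produce("orders.v1", "legacy-format-message");
        System.out.println(consume(subsA)); // A still sees the legacy message

        // step 2: microservice-B is redeployed and produces to the new topic
        produce("orders.v2", "new-format-message");
        System.out.println(consume(subsA)); // A sees both during the migration

        // step 3: once the old topic is drained, it is deleted and A drops
        // the old subscription
        broker.remove("orders.v1");
        subsA.remove("orders.v1");
        System.out.println(consume(subsA)); // only the new format remains
    }
}
```

At no point in this sequence does either service see messages it cannot handle, which is why the order (consumer first, producer second, then cleanup) matters.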
Given the following scheme of services and their dependencies, I would like to design a set of Helm charts.
API Gateway calls Service A and Service C
Service A calls Service B
Service B calls Database
Service C calls Service B and Service D
At the moment I see two alternatives:
Each of the 6 components in the scheme above is a single chart, and each call arrow is a chart dependency.
There's an Umbrella chart that has a dependency on all other charts. The Database chart is a dependency of Service B chart.
The Helm documentation suggests going with option 2. I am, however, more keen on option 1 because of the ease of local development and the CI/CD pipeline.
Example scenario: a developer is refactoring Service C and wants to run and test the changed code.
Option 1: the developer installs the Service C chart only.
Option 2: the developer would have to either:
install the Umbrella chart, which wastes CPU and memory resources on unneeded services like Service A or the API Gateway, and doesn't scale well with the complexity of the system;
install Service C, then Service B, then Service D, which also doesn't scale well with the complexity of the system, because it requires many manual steps and requires the developer to be familiar with the architecture of the system in order to know which charts need to be installed.
I would like to make an educated decision about which alternative to take. I am more keen on option 1, but the Helm docs and also the few examples I was able to find on the Internet (link) go with option 2, so I think I might be missing something.
I would recommend one chart per service, with the additional simplification of making the "service B" chart depend on its database. I would make these charts independent: none of the services depend on any other.
The place where Helm dependencies work well is where one service embeds and hides specific other components. The database behind B is an implementation detail, for example, and nothing outside B needs to know about it. So B can depend on stable/postgres or some such, and this works well in Helm.
There's one specific mechanical problem that trips up the umbrella-chart approach. Say service D also depended on a database, and it was the same "kind" of database (both use PostgreSQL, say). Operationally you want these two databases to be separate. Helm will see the two paths umbrella > B > database and umbrella > D > database and install only one copy of the database chart, so you'll wind up with the two services sharing a database. You probably don't want that.
The other mechanical issue you'll encounter using Helm dependencies here is that most resources are named some variant of {{ .Release.Name }}-{{ .Chart.Name }}. In your option 1, say you just install service C: you'd wind up with Deployments like service-C-C, service-C-B and service-C-database. If you then wanted to deploy service A alongside it, that would introduce its own service-A-B and service-A-database, which again isn't what you want.
I'm not aware of a great high-level open-source solution to this problem. A Make-based solution is hacky, but can work:
# -*- gnu-make -*-
all: api-proxy.deployed

# install/upgrade the chart for a service, then mark it done with a
# stamp file (recipe lines must be indented with a tab)
%.deployed:
	helm upgrade --install $* -f values.yaml ./charts/$*
	touch $@

api-proxy.deployed: a.deployed c.deployed
a.deployed: b.deployed
c.deployed: b.deployed d.deployed