Let's say there are two environments: staging and prod.
There are also two microservices: A and B.
Both microservices are already deployed to both environments, each running version 1 of its service. So we have:
staging: A1, B1
prod: A1, B1
Now one development team implements a new feature in microservice A (either a synchronous API or an async one through a message broker) and deploys A2 to staging. Then another team implements a new feature in B that makes use of the new feature from A2 and also deploys it to staging (B2). Now we have:
staging: A2, B2
prod: A1, B1
The new features are tested in the staging environment by the client and are approved for deployment to production. The question is how to determine which services should be deployed to production first. Obviously B2 must not be deployed first because it depends on A2. Are there any tools/strategies to keep track of this?
What I can imagine is that every service keeps track of which versions of other services it depends on; during deployment these versions are checked against what is running in the target environment, and if something is missing the deployment is rejected.
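A minimal sketch of such a check, assuming each deployable artifact declares the minimum versions of the services it depends on (the manifest structure and names are hypothetical):

```python
# Hypothetical dependency manifests: artifact -> minimum required service versions.
DEPENDENCIES = {
    "A2": {},
    "B2": {"A": 2},  # B2 requires at least A2 in the target environment
}

def can_deploy(artifact: str, running: dict) -> bool:
    """Reject a deployment if a required service version is missing
    from the target environment."""
    for service, min_version in DEPENDENCIES.get(artifact, {}).items():
        if running.get(service, 0) < min_version:
            return False
    return True

prod = {"A": 1, "B": 1}
print(can_deploy("B2", prod))  # False: A2 is not yet in prod, so B2 is rejected
prod["A"] = 2                  # deploy A2 first
print(can_deploy("B2", prod))  # True
```

The same check works regardless of whether deployments run in parallel or one at a time; a parallel deploy would simply retry or queue rejected artifacts until their dependencies land.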
This raises a question - should microservice deployment be parallel or one-microservice-at-a-time?
Also, what if, before deploying A2 and B2 to prod, A3 is released to staging and depends on B2? Now we would have to schedule the deployment to production like this:
A2 => B2 => A3
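Deriving such a schedule is just a topological sort over the release dependency graph; a sketch using the releases from the example above:

```python
from graphlib import TopologicalSorter

# Each release maps to the releases it depends on (which must deploy first).
deps = {
    "A2": set(),
    "B2": {"A2"},
    "A3": {"B2"},
}

order = list(TopologicalSorter(deps).static_order())
print(" => ".join(order))  # A2 => B2 => A3
```

`TopologicalSorter` also raises `CycleError` if two releases end up depending on each other, which is exactly the situation a deployment scheduler should refuse.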
Those are real-world examples that come to my mind, but maybe in a microservice architecture it's possible to avoid such situations by following some rules?
Versioning the integration points can be an option.
For example, if microservice-A gets information from microservice-B through a REST call, then when microservice-B wants to change the integration (a change in the REST call contract, for instance), microservice-B can add a new endpoint with a new versioned mapping like "/getInformationFromMeV2" without deleting the old one. This way, when you deploy microservice-B, microservice-A can still use the old endpoint for a while. After microservice-A has been deployed too, you can remove the old endpoint from microservice-B in the next deployment.
Note: of course, if microservice-A wants to use a newly developed endpoint from microservice-B, microservice-B must be deployed before microservice-A.
For async communication you can still apply this approach. If you use broker-based communication, e.g. Kafka (suppose you want to deploy microservice-A first):
If microservice-A is the consumer, a new topic is created, microservice-A subscribes to the new topic as well, and the new version of microservice-A is deployed. At that point microservice-A consumes messages from both the new and the old topics (of course, until microservice-B is deployed, all messages are still sent to the old topic). After microservice-B is also deployed, the old topic is deleted.
If microservice-A is the producer, again a new topic is created, and after the deployment of microservice-A new messages are sent to the new topic. When microservice-B is deployed too, it starts reading messages from the beginning of the new topic, and the old topic is deleted.
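During the migration window the consumer handles both message formats; a sketch of the normalization step, with the broker interaction left out (topic names and message shapes are hypothetical):

```python
import json

# While both topics exist, the consumer is subscribed to both.
TOPICS = ["orders.v1", "orders.v2"]

def normalize(topic: str, raw: bytes) -> dict:
    """Map messages from the old and new topics onto the one
    internal shape the rest of the service uses."""
    msg = json.loads(raw)
    if topic == "orders.v1":
        return {"order_id": msg["id"], "amount": msg["amount"]}
    # v2 renamed the id field and nested the amount
    return {"order_id": msg["orderId"], "amount": msg["total"]["amount"]}

print(normalize("orders.v1", b'{"id": 1, "amount": 5}'))
print(normalize("orders.v2", b'{"orderId": 1, "total": {"amount": 5}}'))
# both print {'order_id': 1, 'amount': 5}
```

Once the old topic is deleted, the v1 branch of `normalize` goes with it.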
With these kinds of approaches you can deploy microservices independently, as befits a microservices architecture.
I have a pod with 3 containers (ca, cb, cc).
This pod is owned by TeamA. TeamA creates and owns ca; the other two containers are developed by two other teams: TeamB for cb and TeamC for cc.
Both cb and cc also run independently (outside the TeamA pod) as services within this same cluster.
How can the TeamA pod find out when cb or cc deploy a newer version in the cluster, and how can we ensure that this triggers a refresh?
As you may have guessed, cb and cc are services that ca relies on heavily, and they are also services in their own right.
Is there a way to ensure that TeamA pod keeps cb and cc updated whenever TeamB and TeamC deploy new versions to the cluster?
This is not a task for Kubernetes. You should configure that in your CI/CD tool. For example, whenever a new commit is pushed to service A, it will first trigger the pipeline for A, and then trigger corresponding pipelines for services B and C. Every popular CI/CD system has this ability and this is how it's normally done.
Proper CI/CD tests will also protect you from mistakes. If service A update breaks compatibility with B and C, your pipeline should fail and notify you about that.
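For instance, in GitLab CI a pipeline for service A can trigger downstream pipelines for its consumers; a sketch (project paths and script names are hypothetical):

```yaml
# .gitlab-ci.yml for service A
stages: [test, downstream]

test-a:
  stage: test
  script: ./run-tests.sh   # fails the pipeline if A breaks its contract

trigger-service-b:
  stage: downstream
  trigger:
    project: myorg/service-b
    branch: main

trigger-service-c:
  stage: downstream
  trigger:
    project: myorg/service-c
    branch: main
```

The downstream stage only runs if A's own tests pass, so a breaking change in A stops before B and C are rebuilt against it.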
There's no one specific answer. https://github.com/jetstack/version-checker provides a metrics/alerting approach. Numerous kubectl plugins give a CLI reporting approach. Stuff like https://fluxcd.io/docs/guides/image-update/ can do upgrades automatically within certain parameters.
Or maybe just actually talk to your coworkers, this is both a technological and social problem and you'll need answers from both sides.
My serverless infra is split between multiple functional stacks, each of which has its own resources (dynamo, topics, queues, etc.).
For some stack A, I need to define a lambda which listens to events from a queue in another stack B.
Assuming a deployment from scratch, this works well if B is deployed first, because the queue will already exist when A is deployed. But my CI currently is:
sls deploy A
sls deploy B
And adding, for instance, an SQS resource in B and referencing it in A will cause the deployment to fail, because during A's deployment the B SQS resource doesn't exist yet.
How can I handle this kind of cross-stack dependency properly?
How can I handle this kind of cross-stack dependency properly?
You have to redesign your templates. You can't have resources in A referencing resources in B that don't exist yet. You have to move everything into A so it is self-sufficient, or introduce a new stack which holds the common resources and is deployed before A and B.
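With the Serverless Framework that typically means exporting the queue from the stack that owns it and importing it in A via a CloudFormation cross-stack reference, so CloudFormation itself enforces the deploy order; a sketch (resource and export names are hypothetical):

```yaml
# serverless.yml of the stack that owns the queue (B, or a shared "common" stack)
resources:
  Resources:
    EventsQueue:
      Type: AWS::SQS::Queue
  Outputs:
    EventsQueueArn:
      Value:
        Fn::GetAtt: [EventsQueue, Arn]
      Export:
        Name: events-queue-arn

# serverless.yml of stack A: consume the exported queue, deployed after B
functions:
  listener:
    handler: handler.listen
    events:
      - sqs:
          arn:
            Fn::ImportValue: events-queue-arn
```

Deploying A before the export exists fails fast with a clear error, and CloudFormation refuses to delete the exporting stack while A still imports from it.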
That's not an actual problem that I have, but I would like to know what different approaches people take to solve a very common scenario.
You have one or many microservices, and each of those has schemas and an interface that clients use to consume resources.
We have a website in a different repo that consumes data from one of those microservices, let's say over a REST API.
Something like
Microservice (API): I change the interface, meaning that the JSON response is different.
Frontend: I make changes in the frontend to adapt to the new response from the microservice.
If we deploy the microservice before deploying the frontend, we break the frontend site.
So you need to make sure that the new frontend version has been deployed first, and only then deploy the microservice.
That is the manual approach, but how do people track this in an automated way, e.g. by making it impossible to deploy the microservice unless the correct version of the frontend is already deployed?
One of the safest approaches is to always stay backward compatible by using versioning at the service level; that means having different versions of the same service when you need to introduce a backward-incompatible change.
Let's assume you have a microservice which serves products at a REST endpoint like this:
/api/v1/products
When you make your backward-incompatible change, you should introduce the new version while keeping the existing one working:
/api/v1/products
/api/v2/products
You should set a sunset date for your first endpoint and communicate it to your clients. In your case the client is the frontend, but in other situations there could be many other clients out there (different frontend services, different backend services, etc.).
The drawback of this approach is that you may need to support several versions of the same service, which can be tricky, but it is inevitable. Communication with clients can also be tricky in many situations.
On the other hand, it gives you the true power of microservice isolation and freedom.
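A minimal sketch of the v1/v2 coexistence described above, as a bare WSGI app (the response shapes are hypothetical):

```python
import json

def app(environ, start_response):
    """Bare WSGI app serving the old and new product contracts side by side."""
    path = environ["PATH_INFO"]
    if path == "/api/v1/products":
        # old contract: a flat list with a plain price
        body = json.dumps([{"name": "widget", "price": 9.99}])
    elif path == "/api/v2/products":
        # new (backward-incompatible) contract: wrapped items, structured price
        body = json.dumps({"items": [{"name": "widget",
                                      "price": {"amount": 9.99,
                                                "currency": "USD"}}]})
    else:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body.encode()]

# Both contracts are served by the same deployment:
statuses = []
resp = app({"PATH_INFO": "/api/v2/products"}, lambda s, h: statuses.append(s))
print(statuses[0], resp[0].decode())
```

Once the sunset date has passed and all clients are on v2, the v1 branch can simply be deleted in a later deployment.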
If you use Docker in your DevOps environment, you can use docker-compose with the depends_on property to control startup order. Alternatively, you can create a script (bash, for example) that checks that the correct version of the frontend is deployed before continuing, and include it in your pipeline.
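A docker-compose sketch of that startup ordering, gating the frontend on the backend's health check (image names and the health endpoint are hypothetical):

```yaml
services:
  backend:
    image: myorg/backend:2.0
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 5s
      retries: 10
  frontend:
    image: myorg/frontend:2.0
    depends_on:
      backend:
        condition: service_healthy  # wait until the backend reports healthy
```

Note that depends_on only orders startup on a single host; it does not track which versions are deployed across environments.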
I want to deploy Spring Cloud Data Flow on several hosts.
I will deploy the Spring Cloud Data Flow server on one host (host-A) and deploy agents on the other hosts (these hosts are in charge of executing the tasks).
Except for host-A, all the other hosts run the same tasks.
Should I build on the Spring Cloud Data Flow Local Server, on the Spring Cloud Data Flow Apache YARN Server, or is there a better choice?
Do you mean how the apps are deployed on several hosts? If so, the apps are deployed using the underlying deployer implementation. For instance, with the local deployer, each app is deployed by spawning a new process. You can scale out the number of app instances using the count property during stream deployment. I am not sure what you mean by the agents here.
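For example, from the Data Flow shell the count deployment property can be set per app when deploying the stream (the stream and app names here are hypothetical):

```
dataflow:> stream deploy my-stream --properties "deployer.worker.count=3"
```

This asks the underlying deployer to run three instances of the `worker` app; how those instances are placed depends on the deployer implementation in use.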
I'm a newbie in Cloud Foundry. In following the reference application provided by Predix (https://www.predix.io/resources/tutorials/tutorial-details.html?tutorial_id=1473&tag=1610&journey=Connect%20devices%20using%20the%20Reference%20App&resources=1592,1473,1600), the application consists of several modules, and each module is implemented as a microservice.
My question is: how do these microservices talk to each other? I understand they must be using some sort of REST calls, but the problem is:
Service registry: say I have services A, B, C. How do these components 'discover' the REST URLs of the other components, given that a component's URL is only known after the service is pushed to Cloud Foundry?
How does Cloud Foundry control the components' dependencies during service startup and shutdown? Say A cannot start until B has started, and B needs to be shut down if A is shut down.
The ref-app 'application' consists of several 'apps' and Predix 'services'. An app is bound to a service via an entry in the manifest.yml. Thus, it gets the service endpoint and other important configuration information via this binding. When an app is bound to a service, the 'cf env <app>' command returns the needed info.
There might still be some Service endpoint info in a property file, but that's something that will be refactored out over time.
The individual apps of the ref-app application are put in separate microservices, since they get used as components of other applications. Hence, the microservices approach. If there were startup dependencies across apps, the CI/CD pipeline that pushes the apps to the cloud would need to manage these dependencies. The dependencies in ref-app are simply the obvious ones, read on.
While it's true that coupling of microservices is not part of the design, there are some obvious reasons it might happen: language and function. If you have a "back-end" microservice written in Java used by a "front-end" UI microservice written in JavaScript on NodeJS, then these are pushed as two separate apps. Theoretically the UI won't work too well without the back-end, but there is a plan to actually make that work with some canned JSON. Still, there is some logical coupling there.
A nice thing you get from microservices is that they might need to scale differently, and Cloud Foundry makes that quite easy with the 'cf scale' command. They might also be used by multiple other microservices, creating new scale requirements. So, thinking about what needs to scale, and about the release cycle of the functionality, helps in deciding what comprises a microservice.
As for ordering: for example, the Google Maps API might be required by your application, so it could be said that it should be launched first and your application second. But in reality, your application should take into account that the Maps API might be down. Your goal should be that your app behaves well when a dependent microservice is not available.
The 'apps' of the 'application' know about each other via their names and the URLs that the cloud gives them. There are actually many copies of the reference app running in various clouds and spaces, prefaced with things like Dev or QA or Integration, etc. Could we get the Dev front end talking to the QA back-end microservice? Sure, it's just a URL.
In addition to the aforementioned etcd (which I haven't tried yet), you can also create a CUPS (user-provided service) 'definition'. This is also a set of key/value pairs, which you can tie to the space (dev/qa/stage/prod) and bind via the manifest. This way you get the properties from the environment.
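A sketch of that: create the user-provided service once per space, then bind it in the manifest so each space's copy of the app picks up its own configuration (service and key names are hypothetical):

```yaml
# Created once per space with:
#   cf cups env-config -p '{"backend_url": "https://qa-backend.example.com"}'
#
# manifest.yml then binds it:
applications:
- name: ref-app-frontend
  services:
  - env-config
```

After binding, the key/value pairs show up in the app's environment (visible via 'cf env'), so the same app artifact runs unchanged in dev, QA, and prod.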
If microservices do need to talk to each other, it's generally via REST, as you have noticed; however, microservice purists may be against such dependencies. That apart, service discovery is enabled by publishing available endpoints to a service registry (etcd in the case of Cloud Foundry). Once an endpoint is registered, the various instances of a given service can register themselves with the registry using a POST operation; this is self-registration. The client only needs to know about the published endpoint, not each individual service instance's endpoint. The client either communicates with a load balancer such as an ELB, which looks up the service registry, or must be aware of the service registry itself.
For (2), there should not be such a hard dependency between microservices; per the microservice definition, designing such a coupled set of services indicates imminent issues with orchestration and synchronization. If such dependencies do emerge, you will have to rely on service registries, health checks, and circuit breakers for fallback.