Serverless - AWS CloudFormation - Cross-Stack Circular Dependencies

My serverless infra is split across multiple functional stacks, each with its own resources (DynamoDB tables, topics, queues, etc.).
For some stack A, I need to define a lambda which listens to another stack B queue events.
Assuming a deployment from scratch, this works if B is deployed first, because the queue already exists by the time A is deployed. But my CI currently runs:
sls deploy A
sls deploy B
And adding, for instance, an SQS resource in B and referencing it in A causes the deployment to fail, because during A's deployment the SQS resource in B doesn't exist yet.
How can I handle this kind of cross-stack dependency properly?

How can I handle this kind of cross-stack dependency properly?
You have to redesign your templates. You can't have resources in A reference resources in B that don't exist yet. Either move everything into A so it is self-sufficient, or introduce a new stack that holds the common resources and is deployed before both A and B.
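With the common-stack approach, CloudFormation's cross-stack references do the wiring: the stack that owns the queue exports its ARN, and the consuming stack imports it with Fn::ImportValue. The stack, resource, and export names below are illustrative, and this still requires the exporting stack to be deployed first:

```yaml
# common-stack.yml - owns the queue and exports its ARN
Resources:
  OrdersQueue:
    Type: AWS::SQS::Queue
Outputs:
  OrdersQueueArn:
    Value: !GetAtt OrdersQueue.Arn
    Export:
      Name: common-OrdersQueueArn

# stack-a.yml - wires a Lambda in stack A to the exported queue
Resources:
  QueueListener:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      EventSourceArn: !ImportValue common-OrdersQueueArn
      FunctionName: !Ref ListenerFunction  # Lambda defined elsewhere in stack A
```

As long as the common stack only ever gains exports, A and B can then be deployed in either order.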

Related

Is there a tool to automatically create my microservice dependencies in Kubernetes?

Let's say I want to deploy a microservice using a CI/CD pipeline for each pull request (like you can do with GitLab Review Apps). But my microservice needs some dependencies (other containers) to actually work. Let's take an example: suppose I am using a microservice structure with a dependency graph similar to this:
        F
       /|\
      / | \
     G  A  H
       / \
      B   C
      |
      D
      |
      E
I want to deploy the microservice A. To do it, I need the containers B, C, D and E deployed, but not the rest.
    A
   / \
  B   C
  |
  D
  |
  E
So ideally there would be a dependency tool / service registry that lets me define the dependencies between all my microservices and deploy each microservice's dependencies from the deployment files in their own repositories (each microservice/dependency has its own repo).
In short: is there a dependency-management tool for Kubernetes that can automatically deploy my microservice's dependencies to a cluster?
My suggestion is: try to make the microservices independently developed and deployable. You can use patterns like API aggregation and eventual consistency to achieve "data insights" across your services. By introducing these service dependencies you may be creating a distributed monolith.
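If you do decide to declare deploy-time dependencies anyway, one common mechanism is a Helm umbrella chart, whose Chart.yaml lists the charts a service needs; the chart names, versions, and repository URL below are illustrative:

```yaml
# Chart.yaml for microservice A's umbrella chart
apiVersion: v2
name: microservice-a
version: 0.1.0
dependencies:
  - name: service-b          # pulls in B (which can in turn depend on D and E)
    version: ">=1.0.0"
    repository: "https://charts.example.com"
  - name: service-c
    version: ">=1.0.0"
    repository: "https://charts.example.com"
```

Running `helm dependency update` followed by `helm install` deploys A together with its declared subtree, which is roughly the behavior asked for above.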

Dependency among different ECS tasks

I have developed a backend server using multiple microservices built with Spring Cloud.
I have a discovery service, a config service, and various other services.
Right now for testing purposes, I use docker-compose to run them in the right order. Now I have decided to deploy my application on AWS.
I thought of running them on ECS with Fargate, but I cannot work out how to define dependencies among my tasks.
I found this article https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_dependson
It defines dependency among containers in the same task.
But I do not think I can run all my services in a single task: assigning vCPUs gets complicated, and even with 4 vCPUs and a lot of memory I am not sure how well my containers would run. Scaling them would be another issue, and such a large task would incur significant cost as well.
Is there any way to define dependency among ECS tasks?
CloudFormation supports the DependsOn attribute which allows you to control the sequence of deployment (you basically trade off speed of parallelism for ordered deployments when you need them).
Assuming your tasks are started as part of ECS services you can set the DependsOn to a service that needs to start first.
E.g.
Resources:
  WebService:
    Type: AWS::ECS::Service
    DependsOn:
      - AppService
    Properties:
      ....
  AppService:
    Type: AWS::ECS::Service
    Properties:
      ....
Out of curiosity, how did you move from Compose to CloudFormation? FYI, we have been working with Docker to add capabilities to the Docker toolset to deploy directly to ECS (basically converting Docker Compose files into CloudFormation IaC). See here for more background. BTW, this mechanism honors the Compose dependency chain: if you make one service depend on another in Compose, the resulting CFN template uses the DependsOn attribute described above.
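For reference, the Compose-side dependency mentioned here is just depends_on; a minimal sketch, with illustrative image names:

```yaml
# docker-compose.yml
services:
  app:
    image: example/app:latest
  web:
    image: example/web:latest
    depends_on:
      - app   # web starts only after app; maps to DependsOn in the generated CFN
```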

How to know to upgrade my pod when a container is updated?

I have a pod with 3 containers (ca, cb, cc).
The pod is owned by TeamA, which creates and owns ca; the other two containers are developed by other teams: TeamB develops cb and TeamC develops cc.
Both cb and cc also run independently (outside the TeamA pod) as services within this same cluster.
How can the TeamA pod find out when cb or cc deploys a newer version to the cluster, and how can we ensure that this triggers a refresh?
As you may have guessed, cb and cc are services that ca relies on heavily, and they are also services in their own right.
Is there a way to ensure that the TeamA pod keeps cb and cc updated whenever TeamB and TeamC deploy new versions to the cluster?
This is not a task for Kubernetes; you should configure it in your CI/CD tool. For example, whenever a new commit is pushed to service A, it first triggers the pipeline for A, and then triggers the corresponding pipelines for services B and C. Every popular CI/CD system has this ability, and this is how it's normally done.
Proper CI/CD tests will also protect you from mistakes: if a service A update breaks compatibility with B and C, your pipeline should fail and notify you about it.
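As a sketch of that pipeline chaining, GitLab CI can trigger downstream projects' pipelines after service A's own build; the project paths and build script are hypothetical:

```yaml
# .gitlab-ci.yml for service A
stages:
  - build
  - downstream

build-a:
  stage: build
  script:
    - ./build.sh             # hypothetical build step for A

trigger-b:
  stage: downstream
  trigger:
    project: myorg/service-b # runs B's pipeline against the new A

trigger-c:
  stage: downstream
  trigger:
    project: myorg/service-c
```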
There's no one specific answer. https://github.com/jetstack/version-checker provides a metrics/alerting approach. Numerous kubectl plugins give a CLI reporting approach. Stuff like https://fluxcd.io/docs/guides/image-update/ can do upgrades automatically within certain parameters.
Or maybe just actually talk to your coworkers; this is both a technological and a social problem, and you'll need answers on both fronts.
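As one concrete example of the Flux image-update approach, a pair of custom resources watches a registry and selects new versions by semver; the image path is illustrative, and the API version may differ across Flux releases:

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: cb
spec:
  image: registry.example.com/teamb/cb   # TeamB's image (hypothetical path)
  interval: 5m                           # how often to scan the registry
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: cb
spec:
  imageRepositoryRef:
    name: cb
  policy:
    semver:
      range: ">=1.0.0"   # pick the newest matching version
```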

Deployment synchronization across microservices

Let's say there are two environments: staging and prod.
There are also two microservices: A and B.
Microservices are already deployed to both environments and each running service version 1. So we have:
staging: A1, B1
prod: A1, B1
Now one development team implements a new feature in microservice A (either a synchronous API or an async one through a message broker) and deploys A2 to staging. Then another team implements a new feature in B that makes use of the new feature from A2 and also deploys it to staging (B2). Now we have:
staging: A2, B2
prod: A1, B1
The new features are tested in the staging environment by the client and approved for deployment to production. The question is how to determine which services should be deployed to production first. Obviously B2 must not be deployed first, because it depends on A2. Are there any tools/strategies to keep track of this?
What I can imagine is that every service keeps track of which versions of the other services it depends on; during deployment these versions are checked against what is running in the target environment, and if something is missing, the deployment is rejected.
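The version-tracking idea described above could be expressed as a small per-service manifest that the deploy pipeline checks against the target environment; this is a hypothetical format, not an existing tool:

```yaml
# deploy-manifest.yml for microservice B, version 2 (hypothetical format)
service: B
version: 2
requires:
  - service: A
    minVersion: 2   # reject deploying B2 to an environment still running A1
```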
This raises a question: should microservice deployment be parallel or one microservice at a time?
Also, what if, before A2 and B2 reach prod, A3 is released to staging and depends on B2? Now we would have to schedule the production deployment like this:
A2 => B2 => A3
Those are real-world examples that come to mind, but maybe in a microservice architecture it's possible to avoid such situations by following some rules?
Versioning at integration points can be an option.
For example, if microservice-A gets information from microservice-B through a REST call, then when microservice-B wants to change the integration (a change in the REST contract, for instance), it can add a new endpoint with a new versioned mapping like "/getInformationFromMeV2" without deleting the old one. That way, after microservice-B is deployed, microservice-A can still use the old endpoint for a while. Once microservice-A has been deployed too, you can remove the old endpoint from microservice-B in the next deployment.
Note: of course, if microservice-A wants to use the newly developed endpoint from microservice-B, then microservice-B must be deployed before microservice-A.
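In an OpenAPI description, the transition period simply has both versions of the endpoint side by side; the title and response descriptions are illustrative:

```yaml
openapi: 3.0.0
info:
  title: microservice-B
  version: "2.0"
paths:
  /getInformationFromMe:     # v1 contract, kept until microservice-A migrates
    get:
      responses:
        "200":
          description: OK (old payload)
  /getInformationFromMeV2:   # v2 contract with the new fields
    get:
      responses:
        "200":
          description: OK (new payload)
```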
For async communication you can still apply this approach. With broker-based communication, e.g. Kafka (suppose you want to deploy microservice-A first):
If microservice-A is the consumer, a new topic is created, and a new version of microservice-A that subscribes to both the new and the old topic is deployed. At that point microservice-A consumes messages from both topics (until microservice-B is deployed, all messages are still sent to the old topic); after microservice-B is also deployed, the old topic is deleted.
If microservice-A is the producer, again a new topic is created, and after microservice-A is deployed, new messages are sent to the new topic. When microservice-B is deployed too, it starts reading messages from the beginning of the new topic, and the old topic is deleted.
With this kind of approach you can deploy microservices independently, as fits a microservices architecture.

How do micro services in Cloud Foundry communicate?

I'm a newbie to Cloud Foundry. Following the reference application provided by Predix (https://www.predix.io/resources/tutorials/tutorial-details.html?tutorial_id=1473&tag=1610&journey=Connect%20devices%20using%20the%20Reference%20App&resources=1592,1473,1600), the application consists of several modules, and each module is implemented as a microservice.
My question is: how do these microservices talk to each other? I understand they must use some sort of REST calls, but the problem is:
Service registry: say I have services A, B, C. How do these components 'discover' each other's REST URLs, given that a component's URL is only known after the service is pushed to Cloud Foundry?
How does Cloud Foundry control component dependencies during service startup and shutdown? Say A cannot start until B has started, and B needs to be shut down when A is shut down.
The ref-app 'application' consists of several 'apps' and Predix 'services'. An app is bound to a service via an entry in the manifest.yml; through this binding it gets the service endpoint and other important configuration information. When an app is bound to a service, the 'cf env <app-name>' command returns the needed info.
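A minimal sketch of such a binding in manifest.yml, with illustrative app and service-instance names:

```yaml
# manifest.yml
applications:
- name: ref-app-ui
  memory: 512M
  services:
  - my-uaa-instance   # bound service; endpoint and credentials appear in VCAP_SERVICES
```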
There might still be some Service endpoint info in a property file, but that's something that will be refactored out over time.
The individual apps of the ref-app application are split into separate microservices, since they are used as components of other applications; hence the microservices approach. If there were startup dependencies across apps, the CI/CD pipeline that pushes the apps to the cloud would need to manage those dependencies. The dependencies in ref-app are simply the obvious ones; read on.
While it's true that coupling of microservices is not in the design, there are some obvious reasons it might happen: language and function. If you have a "back-end" microservice written in Java used by a "front-end" UI microservice written in JavaScript on Node.js, these are pushed as two separate apps. Theoretically the UI won't work too well without the back-end, though there is a plan to make it work with some canned JSON. Still, there is some logical coupling there.
The nice thing you get from microservices is that they might need to scale differently, and Cloud Foundry makes that quite easy with the 'cf scale' command. They might be used by multiple other microservices, creating new scaling requirements. So thinking about what needs to scale, and about the release cycle of the functionality, helps in deciding what comprises a microservice.
As for ordering: the Google Maps API, for example, might be required by your application, so it could be said that it should be launched first and your application second. But in reality your application should take into account that the Maps API might be down. Your goal should be for your app to behave well when a dependent microservice is not available.
The 'apps' of the 'application' know about each other through their names and the URLs the cloud gives them. There are actually many copies of the reference app running in various clouds and spaces, prefaced with things like Dev, QA, Integration, etc. Could we get the Dev front end talking to the QA back-end microservice? Sure, it's just a URL.
In addition to the aforementioned etcd (which I haven't tried yet), you can also create a CUPS (user-provided service) 'definition'. This is also a set of key/value pairs, which you can tie to the space (dev/qa/stage/prod) and bind via the manifest. This way you get the properties from the environment.
If microservices do need to talk to each other, it's generally via REST, as you have noticed; however, microservice purists may be against such dependencies. That apart, service discovery is enabled by publishing available endpoints to a service registry - etcd in the case of Cloud Foundry. Once an endpoint is registered, the various instances of a given service can register themselves with the registry using a POST operation. The client needs to know only the published endpoint, not each individual service instance's endpoint; this is self-registration. The client either communicates with a load balancer such as an ELB, which looks up the service registry, or must itself be aware of the service registry.
For (2): there should not be such a hard dependency between microservices. Per the definition of a microservice, designing such a coupled set of services points to imminent issues with orchestration and synchronization. If such dependencies do emerge, you will have to rely on service registries, health checks, and circuit breakers for fallback.