Background: I am developing a scheduler for a system with a Service Oriented Architecture (SOA), built on a framework similar to the Internet Communications Engine (ICE).
Services in the system run on multiple machines depending on their workloads. For example, service A will run on 10 machines when workload is 1000/second and on 100 machines when workload is 10000/second.
In the system, one service also calls other services. As a result, the response time of service A depends not only on the workload on every machine, but also on the response times of services B, C, and D, which are called by service A. If service B goes wrong, service A goes wrong too. When the service-calling network is complicated, many services go wrong once one service starts to go wrong. It is hard to determine which service should get more machines.
For performance reasons, we cannot have a whole picture of all services when scheduling. As a result, we need to make decisions based on local information about service A.
The question is: can we model service A to predict its performance given its circumstances (the current workload of service A, the current performance of services B, C, and D, and the current resource utilization on the machines)?
Purpose: we can use the model to find which factor causes a performance degradation of service A (degradation of service B/C/D, or a resource limitation?).
Given: training data can be the historical performance (average response time, workload) of services A, B, C, and D, plus the resource utilization on the machines where service A runs.
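As one deliberately simple sketch of such a model (assuming the Apache Commons Math library and made-up feature values): an ordinary least squares regression over the historical data already hints at which factor dominates service A's response time, because each input gets its own coefficient.

    import org.apache.commons.math3.stat.regression.OLSMultipleLinearRegression;

    public class ServiceAModel {
        public static void main(String[] args) {
            // Historical observations for service A. Each row:
            // [workloadA, respTimeB, respTimeC, respTimeD, cpuUtilization]
            // (illustrative numbers only)
            double[][] features = {
                {1000, 20, 15, 30, 0.40},
                {2000, 22, 16, 31, 0.50},
                {3000, 21, 18, 29, 0.55},
                {4000, 25, 17, 35, 0.60},
                {6000, 30, 19, 33, 0.75},
                {8000, 40, 20, 36, 0.85},
                {9000, 45, 22, 40, 0.90},
                {9500, 60, 21, 41, 0.95},
            };
            // Observed average response time of service A for each row (ms).
            double[] responseTimeA = {25, 28, 30, 33, 41, 52, 61, 80};

            OLSMultipleLinearRegression ols = new OLSMultipleLinearRegression();
            ols.newSampleData(responseTimeA, features);

            // beta[0] is the intercept; beta[1..5] weight the five features above.
            double[] beta = ols.estimateRegressionParameters();
            for (int i = 0; i < beta.length; i++) {
                System.out.printf("beta[%d] = %.4f%n", i, beta[i]);
            }
        }
    }

A large coefficient on service B's response time would point at B as the cause of a degradation, while a large coefficient on resource utilization would point at a resource limitation; a more realistic model (per-machine features, non-linear effects near saturation) can be built on the same idea.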
Related
Let's say I have 5 microservices and each microservice has 3 instances. To deploy these microservices, do we need 15 different servers, one per instance?
So in a large-scale application, let's say I have 100 microservices and each microservice has 3 instances running; in that case do I need 300 servers to deploy each microservice's instances?
Please correct me on this.
There is nothing like 1 microservice instance = 1 physical server.
A microservice means that the service only knows about its own presence and its own data.
If some other service wants to hand a task to another service, it has to call that service via an endpoint, for example over HTTP or via a bus.
That said, you can have one big server that hosts all the services.
If you want all the services belonging to one instance or one customer to live on one server, that is also possible.
In this area, you have to compute how much resource is required by each service and its instances, and that will be the driving factor in deciding this strategy.
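To make that concrete with purely illustrative numbers: 100 microservices with 3 instances each, where an instance needs roughly 0.5 CPU cores and 512 MB of RAM, adds up to about 150 cores and 150 GB of memory in total. That fits comfortably on around a dozen 16-core / 32 GB machines (with headroom for spikes and node failures) rather than on 300 dedicated servers.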
I have some general problems/questions regarding self-managed microservices (in Kubernetes).
The Situation:
I have a provider (Discord API) for my desired state, which tells me the count (or multiples of the count) of sharded connections (websocket -> stateful in some way) I should establish with the provider.
Currently I have a "monolithic" microservice (it can't be deployed in an autoscaling service and has to be stateful), which determines the count of connections I should have and a factor based on the currently active pods that can establish a connection to this API.
It further (by heartbeating and updating the connection target of all those pods) manages the state of every pod and achieves this target configuration.
It also handles the case of a pod being removed from the service and a change of target configuration, by rolling out the updated target and discontinuing the old connections only after the target has been updated.
The Cons:
This does not in any way resemble a good microservice architecture
A failure of the manager (even when persisting the current state in a cache or DB of some sort) results in the target of the target provider not being achieved, and possibly in one of the pods failing without being handled gracefully by the manager.
The Pros:
Its "easy" to understand and maintain a centrally managed system
There is no case (assuming a running manager system) where a pod can fail and it wont be handled -> connection resumed on another pod
My Plan:
I would like these websocket connection pods to manage themselves in some way.
Theoretically there has to be a way in which a "swarm" (swarm here is just a descriptive word for pods within a service) can determine a swarm-wide accepted target.
The tasks to achieve this target (or change of target) should then be allocated across the swarm by the swarm itself.
Every failure of a member of the swarm has to be recognized, and the now unhandled tasks (in my case websocket connections) have to be resumed on different members of the swarm.
Also updates of the target have to be rolled out across the swarm in a distinct manner, retaining the tasks for the old target till all tasks for the new target are handled.
My ideas so far:
As a general syncing point, a cache like Redis or a DB like MongoDB could be used.
Here the current target (and the old target, for creating earlier mentioned smooth target changes) could be stored, along with all tasks that have to be handled to achieve this desired target.
This should be relatively easy to set up, and a "voting process" for the current target would also be possible - if it is even necessary (every swarm member checks the current target of the target provider, and the target determined by most of the swarm members is set as the vote outcome).
But now we face the problem already mentioned in the pros of the managed system: I currently can't think of a way the failure of a swarm member can be recognized and handled by the swarm consistently (one lease-based idea is sketched after the two points below).
How should a failure be detected without a constant exchange between swarm members? I think that should be avoided because:
swarms should operate entirely target-driven and interact with each other as little as possible
Kubernetes itself isn't really designed for easy intra-service communication
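One lease-based sketch of that idea (made-up key names and timings, using the Jedis client for the Redis already proposed as the syncing point): every swarm member periodically refreshes a short-lived key for itself, and a member whose key has expired is considered failed, so its connections can be picked up by the survivors - all without direct pod-to-pod exchange.

    import java.util.Set;
    import redis.clients.jedis.JedisPooled;

    // Rough sketch: lease-based liveness via the shared Redis, no direct
    // pod-to-pod communication. Key names and timings are made up.
    public class SwarmLease {
        private static final String PREFIX = "swarm:member:";
        private static final int TTL_SECONDS = 10;   // lease length
        private final JedisPooled redis = new JedisPooled("redis", 6379);
        private final String selfId;

        public SwarmLease(String selfId) {
            this.selfId = selfId;
        }

        // Called periodically (e.g. every 3 seconds) by every swarm member.
        public void heartbeat() {
            redis.setex(PREFIX + selfId, TTL_SECONDS, "alive");
        }

        // Any member can read the set of currently live members. A member that
        // stops heartbeating simply disappears after TTL_SECONDS, and its
        // connections can be re-allocated by the survivors. KEYS is fine for a
        // handful of members; SCAN would be preferable at larger scale.
        public Set<String> liveMembers() {
            return redis.keys(PREFIX + "*");
        }
    }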
Every contribution, idea or further question here helps.
My tech stack would be, but isn't limited to:
Java with Micronaut for the application
gRPC as the only exchange protocol
Kubernetes as the orchestrator
Since you're on the JVM, you could use Akka Cluster to take care of failure detection between the pods (even in Kubernetes, though some care is needed with service meshes to exempt the pod-to-pod communication from being routed through the mesh) and use, as one of many possibilities, Distributed Data's CRDT implementations to distribute state (in this case the target) among the pods.
This wouldn't require you to use Akka HTTP or Akka's gRPC implementations, so you could still use Micronaut for external interactions. It would effectively create a stateful self-organizing service which presents to Kubernetes as a regular stateless service.
If for some reason Akka isn't appealing, looking through the code and docs for its failure detection (phi-accrual) might provide some ideas for implementing a failure detector using (e.g.) periodic updates to a DB.
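For a feel of what phi-accrual does, here is a drastically simplified sketch (assuming Apache Commons Math for the normal CDF; Akka's actual implementation differs in the details): the suspicion level phi grows as the time since the last heartbeat becomes increasingly unlikely compared with the historically observed heartbeat intervals.

    import org.apache.commons.math3.distribution.NormalDistribution;
    import org.apache.commons.math3.stat.descriptive.DescriptiveStatistics;

    // Very simplified phi-accrual failure detector: phi rises as the gap since
    // the last heartbeat becomes unlikely given past inter-arrival times.
    public class PhiAccrual {
        private final DescriptiveStatistics intervals = new DescriptiveStatistics(100); // sliding window
        private long lastHeartbeatMillis = -1;

        public void heartbeat(long nowMillis) {
            if (lastHeartbeatMillis >= 0) {
                intervals.addValue(nowMillis - lastHeartbeatMillis);
            }
            lastHeartbeatMillis = nowMillis;
        }

        public double phi(long nowMillis) {
            if (lastHeartbeatMillis < 0 || intervals.getN() < 2) {
                return 0.0; // not enough history yet
            }
            double mean = intervals.getMean();
            double sd = Math.max(intervals.getStandardDeviation(), 1.0); // avoid zero variance
            double gap = nowMillis - lastHeartbeatMillis;
            // Probability that a heartbeat would arrive even later than the observed gap.
            double pLater = 1.0 - new NormalDistribution(mean, sd).cumulativeProbability(gap);
            return -Math.log10(Math.max(pLater, 1e-20)); // clamp to avoid infinity
        }
    }

A node is typically suspected once phi crosses a configured threshold (commonly on the order of 8 to 12), whether the heartbeats arrive over the network or as periodic timestamp updates in a DB.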
Disclaimer: I am employed by Lightbend, which provides commercial support for Akka and employs or has employed at some point most of the contributors to and maintainers of Akka.
I have created four microservices. The first service handles only the registration and login module (A), the second service has the Post & Comment module (B), the third service has the Rating & Review module (C), and the fourth has the Admin module (D).
Problem
All microservices have their own database, but service B depends on service A's DB, service C depends on B's and A's DBs, and service D depends on services A, B, and C. I'm using a Postgres DB for all of services A, B, and C.
Option 1.
I can use a JDBC connection factory and connect service B to service A's DB. But this is not good practice, because if service A changes a column then we have to change the service B module.
Option 2.
I can create a hot-standby replica of my service A and service B, but the problem here is that a hot-standby replica is read-only; I can't perform updates and deletes.
You should design your microservices so they don't need any dependency on other microservices; otherwise it looks like a distributed monolith. It does not matter whether the dependency is established at the microservice level or through any kind of database linking, as both your options suggest.
IMHO the clean solution is:
think over again whether you really need such granularity
if yes, then for each database, declare all entities needed by the particular microservice. Duplication is not a problem - if the B module (posts) needs a database of users, let it have its own copy of the users table, not a link to the A module.
connect the microservices by a reliable messaging system (Kafka) in which an event in one microservice propagates to listeners in other microservices and lets them update their data models
There is a lot of redundancy in this model, but it's robust and definitely closer to a really distributed system. We successfully use it in our big fintech platform.
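As a rough illustration of the Kafka-based propagation described above (hypothetical topic, table, and connection details, plain JDBC for brevity): service B maintains its own copy of the users it needs by consuming user events published by service A.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Service B keeps a local users table up to date by listening to user
    // events emitted by service A. Names and addresses are made up.
    public class UserEventListener {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");
            props.put("group.id", "service-b-users");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
                 Connection db = DriverManager.getConnection("jdbc:postgresql://localhost/serviceB", "b", "secret")) {
                consumer.subscribe(List.of("user-events"));
                while (true) {
                    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                        // key = user id, value = display name (a real event would carry more fields)
                        try (PreparedStatement upsert = db.prepareStatement(
                                "INSERT INTO users (id, name) VALUES (?, ?) " +
                                "ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name")) {
                            upsert.setString(1, record.key());
                            upsert.setString(2, record.value());
                            upsert.executeUpdate();
                        }
                    }
                }
            }
        }
    }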
My company is interested in using a stand-alone Service Fabric cluster to manage communications with robots. In our scenario, each robot would host its own rosbridge server, and our Service Fabric application would maintain WebSocket clients to each robot. I envision a stateful service partitioned along device ids which opens connections on startup. It should monitor connection health via heartbeats, pass messages from the robots to some protocol gateway service, and listen to other services for messages to pass to the robots.
I have not seen discussion of this style of external communications in the Service Fabric documentation - I cannot tell if this is because:
There are no special considerations for managing WebSockets (or any two-way network protocol) this way from Service Fabric. I've seen no discussion of restrictions and see no reason, conceptually, why I can't do this. I originally thought replication would be problematic (duplicate messages?), but since only one replica can be primary at any time this appears to be a non-issue.
Service Fabric is not well-suited to bi-directional communication with external devices
I would appreciate some guidance on whether this architecture is feasible. If not, discussion on why it won't work will be helpful. General discussion of limitations around bi-directional communication between Service Fabric services and external devices is welcome. I would prefer if we could keep discussion to stand-alone clusters - we have no plans to use Azure services at this time.
Any particular reason you want SF to host the client and not the other way around?
Doing it the way you suggest, I think you will face big challenges getting SF to find these devices on your network and keep track of them (firewalls, IPs, NAT, planned maintenance, failures, connection issues), unless you are planning to do it by hand.
From the brief description I saw in the docs you provided about the rosbridge server, I understood that you have to host it on a server (like you would a Service Fabric service) and your devices would connect to it; in this case, your devices would have ROS installed to make this communication.
Regarding your concerns about the communication: Service Fabric services are just executable programs you would normally run on your local machine. If it works there, it will likely work in a Service Fabric environment on premises; the only extra things you have to worry about are external access to the cluster (if in Azure, or network configurations) and service discovery.
In my point of view, you should use SF as the central point of communication, and each device would connect to SF services.
The other approach would be using Azure IoT Hub to bridge the communication between both. There is a nice IoT Hub + Service Fabric sample that might be suitable for your needs.
Because you want to avoid Azure, you could in this case replace IoT Hub with another messaging platform or implement the rosbridge in your service to handle the calls.
I hope I understood everything right.
About the obstacles:
I think the major issue here is that a bi-directional connection needs to be established between a service replica and the robot.
This has two major problems:
Only the primary replica has write access - i.e. only one replica is able to modify state. This issue could hence be mitigated by creating a separate partition for each robot (but please remember that you can't change the partition count after the service has been created) or by creating a separate service instance for each robot (this would allow you to dynamically add or remove robots, but would require additional logic related to service discoverability).
The replica can be shut down (terminated), moved to another node (shutdown and start of a new replica) or even demoted (the primary replica gets demoted to secondary and another secondary replica gets promoted to primary) for various reasons. So the service code and the robot communication code should be able to handle this.
About WebSockets
This looks possible by implementing custom ICommunicationListener and other things using WebSockets.
We are transitioning from building applications on monolithic application servers to more microservices-oriented applications on Spring Boot. We will publish health information with the SB Actuator through HTTP or JMX.
What are the options/best practices to monitor services, that will be around 30-50 in total? Thanks for your input!
Not knowing too much detail about your architecture and services, here are some suggestions that represent (a subset of) the strategies that have proven themselves in systems I've worked on in production. For this I am assuming you are using one container/VM per microservice:
If your services are stateless (as they should be :-) and you have redundancy (as you should have :-), then set up your load balancer to call /health on each instance; if the health check fails, the load balancer should take the instance out of rotation. Depending on how tolerant your system is, you can set up various rules that define failure instead of just a single failed check (e.g. 3 consecutive failures, etc.)
On each instance run a Nagios agent that calls your health check (/health) on the localhost. If this fails, generate an alert that specifies which instance failed.
You also want to ensure that a higher-level alert is generated if none of your instances are healthy for a given service. You might be able to set this up in your load balancer, or you can set up a monitor process outside the load balancer that calls your service periodically and sounds all alarms if it does not get any response (i.e. none of the instances are responding); a small sketch of such a poller follows this list. Hopefully this condition is never triggered in production because you dealt with the other alarms.
Advanced: in a cloud environment you can connect the alarms with automatic scaling features. That way, unhealthy instances are torn down and healthy ones are brought up automatically every time an instance of a service is deemed unhealthy by the monitoring system.
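A minimal sketch of such an external monitor, assuming Java 11's built-in HttpClient, made-up instance URLs, and the 3-consecutive-failures rule mentioned above:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Polls /health on every instance and raises an alert after 3 consecutive
    // failures. Instance URLs and the alerting call are placeholders.
    public class HealthPoller {
        private static final List<String> INSTANCES = List.of(
                "http://service-a-1:8080/health",
                "http://service-a-2:8080/health",
                "http://service-a-3:8080/health");
        private static final int FAILURE_THRESHOLD = 3;

        public static void main(String[] args) throws InterruptedException {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(2))
                    .build();
            Map<String, Integer> consecutiveFailures = new HashMap<>();

            while (true) {
                for (String url : INSTANCES) {
                    boolean healthy;
                    try {
                        HttpResponse<String> response = client.send(
                                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                                HttpResponse.BodyHandlers.ofString());
                        healthy = response.statusCode() == 200;
                    } catch (Exception e) {
                        healthy = false;
                    }
                    int failures = healthy ? 0 : consecutiveFailures.getOrDefault(url, 0) + 1;
                    consecutiveFailures.put(url, failures);
                    if (failures == FAILURE_THRESHOLD) {
                        System.err.println("ALERT: " + url + " failed " + failures + " times in a row");
                    }
                }
                Thread.sleep(10_000); // poll every 10 seconds
            }
        }
    }

In practice the System.err line would be replaced by whatever alerting channel you use (Nagios, PagerDuty, etc.), and the same loop can feed the "no healthy instance at all" alarm.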