Behaviour when reducing instances of a Bluemix application - ibm-cloud

I have an orchestrator service which keeps track of the instances that are running and what request they are currently dealing with. If a new instance is required, I make a REST call to increase the instances and wait for the new instance to connect to the orchestrator. It's one request per instance.
The orchestrator tracks whether an instance is doing anything and knows which instances can be stopped; however, there is nothing in the API that allows me to reduce the number of instances by stopping a particular instance, which is what I am trying to achieve.
Is there anything I can do to manipulate the platform into deterministically stopping the instances that I want to stop? Perhaps by having long-running HTTP requests to the instances I require, killing the request when it's no longer needed, and then making the API call to reduce the number of instances?
Part of the issue here is that I don't know the specifics of the current behavior...

Assuming you're talking about Cloud Foundry/Instant Runtime applications, all of the instances of an application run behind a load balancer which uses round-robin to distribute requests across the instances (unless you have a session affinity cookie set up). Differentiating between instances for incoming requests or manual scaling is not recommended and is an anti-pattern. You cannot control which instance the scale-down task will choose.
If you really want that level of control over each instance, maybe you should deploy them as separate applications: MyApp1, MyApp2, MyApp3, etc. All of your applications can have the same route (myapp.mybluemix.net). Each of the applications can then distinguish itself by its name (VCAP_APPLICATION), allowing you to terminate the one you want.
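To illustrate, here is a minimal Python sketch of how each separately deployed application could identify itself from VCAP_APPLICATION; the register_with_orchestrator function is a hypothetical placeholder, not part of the question:

```python
import json
import os

# Cloud Foundry injects VCAP_APPLICATION into every app instance as a JSON string.
vcap = json.loads(os.environ.get("VCAP_APPLICATION", "{}"))

# With one application per "instance" (MyApp1, MyApp2, ...), the application
# name is enough to tell the orchestrator exactly who is connecting.
app_name = vcap.get("application_name", "unknown")

def register_with_orchestrator(name: str) -> None:
    # Hypothetical call: report this app's identity so the orchestrator can later
    # decide to stop exactly this deployment (e.g. by stopping the app by name).
    print(f"registering {name} with the orchestrator")

register_with_orchestrator(app_name)
```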

Related

What is a common strategy for synchronized communication between replicas of the same pods?

Let's say we have the following apps:
API app : Responsible for serving the user requests.
Backend app: Responsible for handling the user requests, which are long-running tasks. It updates the progress to a database (Postgres) and a distributed cache (Redis).
Both apps are scalable services. A single backend app handles multiple tenants (e.g., a customer here), but one customer is assigned to a single backend app only.
I have a use case where I need the API layer to connect to the specific replica which is handling that customer. Is there a common pattern for this?
A few strategies in mind:
Pub/Sub: the problem is we want a synchronous, guaranteed response, probably using Redis.
gRPC: using the pod IP to connect to a specific pod is not a standard way.
Creating a Service at runtime by adding labels to the replicas and using those -- looks promising (see the sketch below).
Do let me know if there is a common pattern, an example architecture for this, or a standard way of doing it.
Note: [the above is a simulation of a production use case; names and the actual use case have been changed]
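For option 3, a rough sketch with the official Kubernetes Python client might look like the following; the namespace, label key, service name and ports are assumptions for illustration only:

```python
from kubernetes import client, config

def bind_customer_to_replica(pod_name: str, customer: str, namespace: str = "default") -> None:
    """Label one backend replica for a customer and expose it via its own Service."""
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()

    # 1. Tag the chosen replica with a customer-specific label.
    v1.patch_namespaced_pod(
        name=pod_name,
        namespace=namespace,
        body={"metadata": {"labels": {"customer": customer}}},
    )

    # 2. Create a Service whose selector matches only that label, so the API app
    #    can reach exactly this replica through a stable DNS name.
    service = client.V1Service(
        metadata=client.V1ObjectMeta(name=f"backend-{customer}"),
        spec=client.V1ServiceSpec(
            selector={"customer": customer},
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )
    v1.create_namespaced_service(namespace=namespace, body=service)
```

Bear in mind that the labeled pod can still be rescheduled at any time, which ties into the statelessness point made in the answer below.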
You should aim to keep your services stateless; in a Kubernetes environment there is no telling when one pod might be replaced by another due to worker node maintenance.
If you have long-running tasks that cannot be completed within the configured grace period for pod shutdown during a worker node drain/evacuation, you need to implement some kind of persistent work queue, as you are thinking about in option 1. I suggest you look into the saga pattern.
Another pattern we usually employ is to let the worker service write the current state of the job into the database and let the client poll the status every few seconds. This does, however, require some way of handling half-finished jobs that might be abandoned by pods that are forced to shut down.
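A minimal sketch of that write-status/poll pattern, assuming Redis as the status store (the key layout and field names are made up for illustration):

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379)

# Worker side: update the job record as the long-running task progresses.
def run_job(job_id: str) -> None:
    for pct in (0, 25, 50, 75):
        r.hset(f"job:{job_id}", mapping={"progress": pct, "state": "running"})
        time.sleep(1)  # stand-in for real work
    r.hset(f"job:{job_id}", mapping={"progress": 100, "state": "done"})

# Client side: poll every few seconds until the job reports completion.
def wait_for_job(job_id: str, poll_seconds: int = 3) -> dict:
    while True:
        status = r.hgetall(f"job:{job_id}")
        if status.get(b"state") == b"done":
            return status
        time.sleep(poll_seconds)
```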

Does this make sense for Orleans or SF and if so guidance please

We’re working to take our software to Azure cloud and looking at Orleans and Service Fabric (SF) as potential frameworks. We need to:
Populate our analysis engines with lots of data (e.g., 100MB to 2GB) per engine instance.
Maintain that state, and if an engine instance goes idle for say 20 minutes or more, we'd like to unload it (i.e., not pay for the engine instance resource).
Each engine instance will support one to several end users with a specific data set.
Each engine instance can be highly interactive, generating lots of plot data in near real time. We're maintaining state because we don't want to pay the price of populating an engine instance for each engine interaction.
An engine instance action can take a few seconds, a few minutes, to even tens of minutes. We’ll want some feedback.
Users may access an engine instance every few seconds (e.g., to steer the engine towards a result based on feedback) and will want live plot data.
Each user will want to talk to a specific engine instance.
As a user expresses interest in running a simulation (i.e., standing up an engine instance), ideally we want him to choose a small/medium/large computing resource to run his engine instance (i.e., based on the problem he's trying to solve he may want more or less computing/memory power).
We’re considering Orleans and SF but we’re having difficulty specifying architecture based on above requirements. We’ve considered:
Trying to think about an SF partition, or an Orleans silo as an ‘engine instance’ described above.
Leveraging both Orleans and SF notion of fault tolerance through replication.
Leveraging local (i.e., to partition or silo) storage to store results and maintain state (i.e., for long periods or until idle for 20 minutes).
We’ve not understood how to:
Limit a silo or a partition to a single engine instance so that we can control resourcing of the engine instance.
Keep a user's engine instance data separate from another user's engine instance data.
Direct a request from a user (e.g., through a web API) to a particular engine instance.
Does this make sense for Orleans, does it make more sense for SF? Any pointers on how to implement the above would be helpful.
When you say SF I assume you mean SF Actors, right?
You can use them the way you want, but in both cases they do not look like the right solution for your problem, because:
Actors are single-threaded; if you plan to share the same instance with multiple clients, each one will have to wait for the previous one to finish before it starts processing anything. If you need to monitor the status of a running actor, you have to make the actor publish updates to external subscribers.
Actor state is isolated, so you can't access the state of other actors; the way to do it is to provide a method that returns it, but if the actor is running a command you have to wait for it to complete, unless you keep a separate state service to hold the processed data.
You can't limit the resources required for an actor. In Service Fabric you specify the resources needed for a service, but you can't do that for actors, and you can't limit the resources they use; when they hit the limit, Service Fabric will try to rebalance resources for you, but nothing prevents the process from consuming more memory than requested.
Both actor frameworks communicate using the ask approach, so they "block" the caller while waiting for an answer. It is asynchronous, but you still have to keep the caller waiting. (I say block and wait because there is no notion of fire-and-forget like Akka's Tell approach, where the message is delivered and forgotten.)
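To make the ask vs. fire-and-forget distinction concrete, here is a small Python/asyncio sketch; it is only an analogy for the actor runtimes, not Orleans or Service Fabric code:

```python
import asyncio

async def actor_method(payload: str) -> str:
    await asyncio.sleep(2)  # simulate a long-running command inside the "actor"
    return f"result for {payload}"

async def caller() -> None:
    # "Ask" style (Orleans / SF Actors): asynchronous, but the caller is tied up
    # until the actor finishes its current work and answers.
    result = await actor_method("job-1")
    print(result)

    # "Tell" / fire-and-forget style (Akka): hand the message over and move on;
    # any result comes back through some other channel (events, a state store, ...).
    task = asyncio.create_task(actor_method("job-2"))
    await task  # only awaited here so the example exits cleanly

asyncio.run(caller())
```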
Based on some of your requirements, I think containers would be a better approach, because:
You can limit the resource consumption for each container
The data is isolated inside the container and not visible to others
But with containers you have to manage the replication and partitioning yourself, so in this case I would recommend the best of both worlds:
Create SF services to host the data sets shared between the users.
An SF service + actor to store only the results of users' simulations.
Containers to run the simulations and send updates to actors
This is just an example; it all depends on your requirements, your architecture, and how data needs to be isolated.

How to monitor (micro)services?

I have a set of services. Every service contains some components.
Some of them are stateless, some of them are stateful, some are synchronous, some are asynchronous.
I used different approaches to monitoring and alerting.
Log-based alerting and metrics gathering, New Relic, and our own home-grown solution.
Basically, at the moment I am looking for a way to generalize and aggregate important metrics for all services in a single place. One of the things I want is to monitor products rather than separate services.
As an end result I picture a single dashboard with a small number of widgets, but by looking at those widgets I would be able to say for sure whether the services are usable to the end customer.
Perhaps someone can recommend an approach/methodology, or point me to some best practices.
I like what you're trying to achieve! A service is not production-ready unless it's thoroughly monitored.
I believe what you're describing falls under the topics of health checking and metrics.
... I would be able to say for sure whether the services are usable to the end customer.
That, however, will require a little of both ;-) To ensure you're currently fulfilling your SLA, you have to make sure that your services are all a) running and b) performing as required. For both problems I suggest looking at the StatsD toolchain. Initially developed by Etsy, it has become the de facto standard for gathering metrics.
To ensure all your services are running, we're relying on Kubernetes. It takes our description of what should run, what should be reachable from outside, etc., and hosts that on our infrastructure. It also makes sure that, should things die, they will be restarted. It helps with things like auto-scaling as well! Awesome tooling, and kudos to Google!
The way it ensures that is with health checks. There are multiple ways you can ensure a service node booted by Kubernetes is alive and kicking (namely HTTP calls and CLI scripts, but this should be a modular thing should you need anything else). If Kubernetes detects unhealthy nodes it will immediately phase them out and start another node instead.
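For example, the HTTP flavour of such a health check can be as small as the following sketch; the /healthz path and port 8080 are assumptions and must match whatever you configure in the pod's liveness probe:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # Return 200 while the service considers itself healthy; the liveness
            # probe treats anything else (or a timeout) as a failed check.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```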
Now, to make sure all your services perform as expected, you'll need to gather some metrics. For all of our services (and all individual endpoints), we gather a few metrics via StatsD (a minimal client sketch follows below), like:
Requests/sec
Number of errors returned (404s, etc.)
Response times (Average, Median, Percentiles depending on the services SLA)
Payload size (Average)
Sometimes the number of concurrent requests per endpoint and the number of instances currently running
General metrics like the host's current CPU and memory usage, and uptime
We gather a lot more metrics, but that's about the bottom line. Since StatsD has become more of a "protocol specification" than a concrete product, there is a myriad of collectors, front ends and back ends to choose from. They help you visualize your system's state, and many of them feature alerts if a metric or some combination of metrics goes beyond its thresholds.
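On the instrumentation side, emitting those metrics is usually a one-liner per metric. A minimal sketch with the Python statsd package (the service prefix, metric names and the localhost:8125 collector address are assumptions):

```python
from statsd import StatsClient

statsd = StatsClient(host="localhost", port=8125, prefix="checkout-service")

def do_work() -> None:
    pass  # the actual endpoint logic goes here

def handle_request() -> None:
    # Count every request and time the handler; the collector/backend turns
    # these raw events into requests/sec and latency percentiles.
    statsd.incr("requests")
    with statsd.timer("response_time"):
        do_work()

handle_request()
```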
Let me know if this was helpful!
There are at least three types of things you will need to monitor: the host where the service is deployed, the component itself, and the SLAs. Some of them depend on the software stack you're using as well as the architecture.
With that said, you could for example use Nagios to monitor the hardware where the services are deployed, and Splunk for the service metrics/SLAs as well as for any errors that might occur. You can also use SNMP packages in case something goes wrong and you have a more sophisticated support structure; these would be your triggers. Without knowing how your infrastructure/services are set up, it is complicated to go into deeper detail.

Service Fabric dynamic partitioning

So I am doing some research into using Service Fabric for a very large application. One thing I need to have is a service that is partitioned by name, which seems fairly trivial at the application manifest level.
However, I really would like to be able to add and remove named partitions on the fly without having to republish the application.
Each partition represents our equivalent of a tenant, and we want to have a backend management app to add new tenants.
Each partition will be a long-running application that fires up a TCP server that uses a custom protocol, and I'll need to be able to query for the address by name from the cluster.
Is this possible with Service Fabric, and if so is there any documentation on this, or something I should be looking for?
Each partition represents our equivalent of a tenant, and we want to have a backend management app to add new tenants.
You need to rethink your model. Partitioning is for distributing data so it is accessible fast, for both reads and writes, but within the same logical container.
If you want multitenancy in Service Fabric, you can deploy an application multiple times to the cluster.
From Visual Studio it seems you can only have one instance of an application. This is because DefaultServices are defined in the ApplicationManifest.xml, which is fine for developing on the local Service Fabric cluster. For production you might want to consider deploying the application with PowerShell; this opens up the possibility of deploying the same application multiple times with settings for each instance (like tenant name, security, ...).
Not only applications can be deployed multiple times; stateful/stateless services can as well. So you could have one application and, for each tenant, deploy a service of a certain type. Services are discoverable via the naming service inside Service Fabric; see the FabricClient class for more info on that.
It is not possible to change the partition count for an existing application.
From https://azure.microsoft.com/en-us/documentation/articles/service-fabric-concepts-partitioning/#plan-for-partitioning (emphasis mine):
In rare cases, you may end up needing more partitions than you have initially chosen. As you cannot change the partition count after the fact, you would need to apply some advanced partition approaches, such as creating a new service instance of the same service type. You would also need to implement some client-side logic that routes the requests to the correct service instance, based on client-side knowledge that your client code must maintain.
You are encouraged to do up-front capacity planning to determine the maximum number of partitions you will need, and if you end up needing more, you'll need to implement some special client-side handling to cope.
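That client-side routing can stay quite simple if the tenant is encoded in the service name. The sketch below is pure illustration: the fabric:/MyApp/Tenant_* naming convention and the resolve_endpoint placeholder are assumptions, not Service Fabric APIs.

```python
# Hypothetical client-side routing: one named service instance per tenant.
# resolve_endpoint() stands in for whatever resolution mechanism you use
# (the Service Fabric naming service, a reverse proxy, ...); it is not an SDK call.

def resolve_endpoint(service_uri: str) -> str:
    # Placeholder lookup of the current TCP address for the named service.
    return {"fabric:/MyApp/Tenant_acme": "10.0.0.12:9000"}.get(service_uri, "")

def service_uri_for(tenant: str) -> str:
    return f"fabric:/MyApp/Tenant_{tenant}"

def route_request(tenant: str) -> str:
    endpoint = resolve_endpoint(service_uri_for(tenant))
    if not endpoint:
        raise LookupError(f"no service deployed for tenant {tenant!r}")
    return endpoint

print(route_request("acme"))  # -> 10.0.0.12:9000
```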
We had the same problem and ended up creating an instance of the service for each tenant. This is pretty easy to do and will scale to any number of tenants.

Application startup and shutdown based on authenticated user activity

There are applications and services in enterprises that do not need to run all the time and that have a limited user base (say a handful of people).
These applications can be shut down and started either on a schedule or, even better, based on user activity. So we are talking about on-demand service (say, wrapped in a container) and node start-up and shutdown.
Now, first to mention that the reason why I mention authenticated user activity is because it makes sense to start up and shut down on that basis (i.e., not based on lower-level network traffic). One can imagine corporate SSO (say, OAuth 2 based) being involved.
So, my question is whether anyone has attempted to implement what I have described using Consul or Kubernetes?
In the case of Consul, the key-value store could be used to give "Micro" (i.e., small user base) class applications a TTL; each time an authenticated user requests access to a given "Micro" class application, its TTL is updated. During the TTL window we want to check the health of the node(s), containers and services; outside of the window we don't (since we want to save on opex).
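A rough sketch of that TTL bookkeeping against Consul's KV HTTP API (the key layout, port 8500 and the 20-minute window are assumptions):

```python
import time
import requests

CONSUL_KV = "http://localhost:8500/v1/kv"
TTL_SECONDS = 20 * 60  # the idle window discussed above

def touch(app: str) -> None:
    """Called on every authenticated request: push the app's deadline forward."""
    deadline = time.time() + TTL_SECONDS
    requests.put(f"{CONSUL_KV}/micro-apps/{app}/deadline", data=str(deadline))

def expired(app: str) -> bool:
    """Reaper side: has the app been idle past its TTL?"""
    resp = requests.get(f"{CONSUL_KV}/micro-apps/{app}/deadline", params={"raw": "true"})
    if resp.status_code == 404:
        return True  # never touched, nothing to keep running
    return time.time() > float(resp.text)

# A controlling process would call expired() periodically and stop or start
# containers/nodes accordingly; that part is deployment-specific.
```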
This question is similar to this autoscaling question, but differs in that this use case is about scaling from 0 nodes and then back down to 0 based on an authenticated user base (most likely using SSO).
In the case of Kubernetes, the Horizontal Pod Autoscaling documentation lists the exact use case described under Next steps (i.e., the feature is on the backlog and may be implemented after v1.1 of Kubernetes). The cited feature description (Unidling proposal) is as follows:
Scale the number of pods starting from 0. All pods can be turned-off, and then turned-on when there is a demand for them. When a request to service with no pods arrives, kube-proxy will generate an event for autoscaler to create a new pod.
So basically, it may be possible to do what I've described in future using Kubernetes, but it is not possible right now. This in itself does not address the requirement to only scale from 0 based on authenticated user activity.
As a cluster-agnostic aside, it's worth noting that on-demand container activation can also be driven by systemd. That solution will of course not scale back down to 0 without a controlling process, but it's still worth mentioning.