Service instance count in Azure Fabric Service - azure-service-fabric

Is there a way to find out the number of instances of a service type that are running in a Service Fabric cluster at any given time through code? One way is to look at the ApplicationManifest file and get the number of instances set there, but that value can be overridden by a parameter file. Any ideas?

If you want to examine your services programmatically, look at FabricClient, which exposes a number of operations that can show you the status of deployed services. For your specific question, getting the number of running instances, have a look at FabricClient.QueryClient.GetReplicaList...(...); it will give you a list of replicas (in the case of stateless services, that is the same as instances).
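The same query can be done from the Service Fabric PowerShell module, which may be easier for ad-hoc checks. This is a minimal sketch: it assumes an existing cluster connection, and fabric:/MyApp/MyService is a placeholder for your own service name.

```powershell
# Connect to the cluster (no arguments = local/default cluster).
Connect-ServiceFabricCluster

$serviceName = "fabric:/MyApp/MyService"   # placeholder service name
$instanceCount = 0

# A stateless service can have multiple partitions; sum the replicas of each.
foreach ($partition in Get-ServiceFabricPartition -ServiceName $serviceName) {
    # For a stateless service, each "replica" is an instance.
    $instanceCount += @(Get-ServiceFabricReplica -PartitionId $partition.PartitionId).Count
}

Write-Output "$serviceName is running $instanceCount instance(s)"
```

For the common singleton-partition stateless service, the loop runs once and the count is simply the number of live instances.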

Related

How to define stateless services that run per environment in Service Fabric

I have an application manifest with five stateless services defined. I have multiple Application Parameters files, one per environment, to change the number of instances for each service. For one of the environments, I don't want two specific services to run at all (zero instances), but SF doesn't accept 0 as an instance count parameter. How can I achieve that?
The best way to achieve this would be to stop using default services and instead use a script to start the required services in the appropriate environments.
The following links offer some comprehensive detail on this subject:
https://stackoverflow.com/a/50445801/490282
https://devblogs.microsoft.com/premier-developer/how-not-to-use-service-fabric-default-services/
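A hedged sketch of what such a script could look like, using the Service Fabric PowerShell module: the application is deployed without default services, and each environment's script creates only the services it needs. All names and instance counts below are placeholders.

```powershell
Connect-ServiceFabricCluster

# Per-environment service list: the two services this environment does not
# need are simply omitted, instead of trying to pass "0 instances".
$servicesForThisEnvironment = @(
    @{ Name = "fabric:/MyApp/ServiceA"; Type = "ServiceAType"; Instances = 3 },
    @{ Name = "fabric:/MyApp/ServiceB"; Type = "ServiceBType"; Instances = 1 }
)

foreach ($svc in $servicesForThisEnvironment) {
    New-ServiceFabricService -ApplicationName "fabric:/MyApp" `
        -ServiceName $svc.Name `
        -ServiceTypeName $svc.Type `
        -Stateless -PartitionSchemeSingleton `
        -InstanceCount $svc.Instances
}
```

Because the services are created explicitly rather than declared as default services in the application manifest, "zero instances" becomes "don't create the service at all" in that environment's script.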

Service Fabric Application - changing instance count on application update fails

I am building a CI/CD pipeline to release SF Stateless Application packages into clusters using parameters for everything. This is to ensure environments (DEV/UAT/PROD) can be scoped with different settings.
For example in a DEV cluster an application package may have an instance count of 3 (in a 10 node cluster)
I have noticed that if an application is in the cluster and running with an instance count (for example) of 3, and I change the deployment parameter to anything else (e.g. 5), the application package will upload and register the type, but will fail on attempting to do a rolling upgrade of the running application.
This also happens the other way around, e.g. if the running app has an instance count of -1 and you want to reduce the count on the next rolling deployment.
Have I missed a setting or config somewhere, or is this how it is supposed to be? At present it doesn't lend itself to being something that is easily scaled without downtime.
At its simplest form we just want to be able to change instance counts on application updates, as we have an infrastructure-as-code approach to changes, builds and deployments for full tracking ability.
Thanks in advance
This is a common error when using Default services.
The error you are seeing is: "Default service descriptions can not be modified as part of upgrade. To allow it, set EnableDefaultServicesUpgrade to true."
This has already been answered multiple times in these places:
https://blogs.msdn.microsoft.com/maheshk/2017/05/24/azure-service-fabric-error-to-allow-it-set-enabledefaultservicesupgrade-to-true/
https://github.com/Microsoft/service-fabric/issues/253#issuecomment-442074878
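For an ARM-deployed cluster, the setting named in the error lives in the cluster resource's fabricSettings section. This fragment shows where it goes (the surrounding cluster resource is omitted):

```json
"fabricSettings": [
  {
    "name": "ClusterManager",
    "parameters": [
      { "name": "EnableDefaultServicesUpgrade", "value": "true" }
    ]
  }
]
```

With this set, a rolling application upgrade is allowed to modify default service descriptions, so changing the InstanceCount parameter between deployments no longer fails the upgrade.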

Get Redshift cluster status in outputs of cloudformation

I am creating a Redshift cluster using CloudFormation and then need to output the cluster status (basically whether it is available or not). There are ways to output the endpoint and port, but I could not find any way of outputting the status.
How can I get that, or is it not possible?
You are correct. According to AWS::Redshift::Cluster - AWS CloudFormation, the only available outputs are Endpoint.Address and Endpoint.Port.
Status is not something that you'd normally want to output from CloudFormation because the value changes.
If you really want to wait until the cluster is available, you could create a WaitCondition and then have something monitor the status and signal the Wait Condition to continue. This would probably need to be an Amazon EC2 instance with some User Data. Linux instances are charged per second, so this would be quite feasible.
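For reference, the two attributes that are available can be exposed like this (RedshiftCluster is a placeholder logical resource ID for your AWS::Redshift::Cluster resource):

```yaml
Outputs:
  RedshiftEndpoint:
    Description: Cluster endpoint address
    Value: !GetAtt RedshiftCluster.Endpoint.Address
  RedshiftPort:
    Description: Cluster endpoint port
    Value: !GetAtt RedshiftCluster.Endpoint.Port
```

Status has to be queried outside the template, e.g. by describing the cluster after the stack completes.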

Advice on how to monitor (micro)services?

We are transitioning from building applications on monolithic application servers to more microservices-oriented applications on Spring Boot. We will publish health information with Spring Boot Actuator through HTTP or JMX.
What are the options/best practices to monitor these services, which will number around 30-50 in total? Thanks for your input!
Not knowing too much detail about your architecture and services, here are some suggestions that represent (a subset of) the strategies that have been proven in systems I've worked on in production. For this I am assuming you are using one container/VM per microservice:
If your services are stateless (as they should be :-) and you have redundancy (as you should have :-), then set up your load balancer to call /health on each instance; if the health check fails, the load balancer should take the instance out of rotation. Depending on how tolerant your system is, you can define failure with various rules instead of a single failed check (e.g. 3 consecutive failures, etc.)
On each instance run a Nagios agent that calls your health check (/health) on the localhost. If this fails, generate an alert that specifies which instance failed.
You also want to ensure that a higher-level alert is generated if none of the instances of a given service are healthy. You might be able to set this up in your load balancer, or you can set up a monitor process outside the load balancer that calls your service periodically; if it gets no response at all (i.e. none of the instances are responding), it should sound all alarms. Hopefully this condition is never triggered in production, because you already dealt with the earlier alarms.
Advanced: in a cloud environment you can connect the alarms to automatic scaling features. That way, unhealthy instances are torn down and healthy ones are brought up automatically whenever the monitoring system deems an instance of a service unhealthy.
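The "N consecutive failures" rule above can be sketched in a few lines of PowerShell. The URL, interval and threshold are assumptions to adjust to your setup; a real monitor would also run as a loop per instance rather than exiting.

```powershell
# Assumed Spring Boot Actuator health endpoint on this instance.
$healthUrl = "http://localhost:8080/actuator/health"
$failures  = 0

while ($failures -lt 3) {
    try {
        $resp = Invoke-WebRequest -Uri $healthUrl -UseBasicParsing -TimeoutSec 5
        # Any 200 response resets the counter; anything else counts as a failure.
        if ($resp.StatusCode -eq 200) { $failures = 0 } else { $failures++ }
    } catch {
        # Connection refused / timeout also counts as a failure.
        $failures++
    }
    Start-Sleep -Seconds 10
}

Write-Warning "Instance failed 3 consecutive health checks - take it out of rotation"
```

The same rule is usually configured declaratively in the load balancer (e.g. unhealthy threshold = 3, interval = 10s) rather than hand-rolled, but the logic is identical.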

Octopus - deploying multiple copies of same service

I've got an Octopus deployment for an NServiceBus consumer. Until recently, there's only been one queue to consume. Now we're trying to get smart about putting different types of messages in different queues. Right now we've broken that up into 3 queues, but that number might increase in the future.
The plan now is to install the NSB consumer service 3 times, in 3 separate folders, under 3 different names. The only difference in the 3 deployments will be an app.config setting:
<add key="NsbConsumeQueue" value="RedQueue" />
So we'll have a Red service, a Green service and a Blue service, and each one will be configured to consume the appropriate queue.
What's the best way to deploy these 3 services in Octopus? My ideal would be to declare some kind of list of services somewhere e.g.
ServiceName QueueName
----------- ---------
RedService RedQueue
GreenService GreenQueue
BlueService BlueQueue
and loop through those services, deploying each one in its own folder, and substituting the value of NsbConsumeQueue in app.config to the appropriate value. I don't think this can be done using variables, which leaves PowerShell.
Any idea how to write a PS script that would do this?
At my previous employer, we used the following script to deploy from Octopus:
http://www.layerstack.net/blog/posts/deploying-nservicebus-with-octopus-deploy
Add the two PowerShell scripts to your project that contains the NServiceBus host. Be sure to override the host identifier, or ServicePulse will go mad, because with Octopus every deployment gets its own folder.
But as mentioned in the comments, be sure that you're splitting endpoints for the right reason. We also had/have at least 4 services, but that's because we have a logical separation. For example, we have a finance service where all finance messages go, and a sales service where all sales messages go. This follows the DDD bounded context principle and is there for good reasons. I hope your services aren't actually called red, green and blue! :)
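As a rough illustration of the loop the question asks about, here is a hedged sketch of an Octopus deployment script that installs the same NServiceBus host three times under different service names, patching the NsbConsumeQueue setting per copy. The target paths, the config file name and the use of the host's /install and /serviceName switches are assumptions about your package layout.

```powershell
$services = @(
    @{ Name = "RedService";   Queue = "RedQueue" },
    @{ Name = "GreenService"; Queue = "GreenQueue" },
    @{ Name = "BlueService";  Queue = "BlueQueue" }
)
$packageDir = $OctopusParameters["Octopus.Action.Package.InstallationDirectoryPath"]

foreach ($svc in $services) {
    # Each copy gets its own folder.
    $target = Join-Path "C:\Services" $svc.Name
    Copy-Item -Path $packageDir -Destination $target -Recurse -Force

    # Point this copy at its own queue.
    $configPath = Join-Path $target "NServiceBus.Host.exe.config"
    [xml]$config = Get-Content $configPath
    $setting = $config.configuration.appSettings.add |
        Where-Object { $_.key -eq "NsbConsumeQueue" }
    $setting.value = $svc.Queue
    $config.Save($configPath)

    # Register and start the Windows service under its own name.
    & (Join-Path $target "NServiceBus.Host.exe") /install /serviceName:$svc.Name
    Start-Service -Name $svc.Name
}
```

Extending the list to a fourth queue later is then a one-line change to the $services table.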
PowerShell should not be needed for this. Variables in Octopus can be scoped to a step in the deployment process, so you could have 3 steps, one for each service, and 3 variables for the queue names, each scoped to one of the steps.
You could also add variables for the service names, and use those variables in the process step settings. That would let you see both the service names and queue names from the variables page.