Deploy Mule applications in RTF (Runtime Fabric) in an on-demand fashion

I have multiple Mule applications deployed in RTF (Runtime Fabric). Some of them are used only once a month, so I want to free up CPU cores by deploying those rarely used applications on an on-demand basis (i.e., automate the deployment just before the application is needed and undeploy it afterwards).
Could you please suggest how we can handle this? Any thoughts or approaches are appreciated.

What about setting the reserved CPU to its lowest value (0.02 cores)? The API would always be alive, but when not in use its CPU consumption would scale down to that tiny fraction. Would that work for you instead of undeploying/deploying on demand?

You don't need to do this. When the app is not processing anything, it does not consume any CPU.
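If you do still want true on-demand deployment, the usual shape is a scheduled job that deploys the application shortly before its monthly run and undeploys it afterwards. A minimal sketch in Python; deploy_app and undeploy_app are hypothetical placeholders for whatever Anypoint Runtime Manager API or CLI calls you actually use, not a real client:

```python
# A minimal sketch of scheduled on-demand deployment. deploy_app/undeploy_app
# are hypothetical placeholders for your real Anypoint API or CLI calls.
import time

def deploy_app(name: str) -> None:
    print(f"deploying {name}")    # hypothetical: call your deployment API here

def undeploy_app(name: str) -> None:
    print(f"undeploying {name}")  # hypothetical: call your deployment API here

def run_monthly_job(name: str, job) -> None:
    deploy_app(name)
    try:
        time.sleep(60)            # placeholder: wait until the app is healthy
        job()
    finally:
        undeploy_app(name)        # free the reserved cores again

run_monthly_job("monthly-report-api", lambda: print("trigger the batch run"))
```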

Related

Guessing Kubernetes limits for Kubernetes deployments

Is there any way to correctly guess what resource limits we need to set for deployments running on Kubernetes clusters?
Yes, you can guess that a single-threaded application most likely won't need more than 1 CPU.
For any other program: no, there is no easy way to guess. Every application is different and reacts differently under different workloads.
The easiest way to figure out how many resources it needs is to run it and measure it.
Run some benchmarks/profilers and see how the application behaves, then make decisions based on that.
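For the measuring step on Kubernetes specifically, if the cluster runs metrics-server you can read live per-container usage programmatically. A minimal sketch using the official kubernetes Python client; the namespace is a placeholder, and the peaks observed under a realistic benchmark are your starting point for requests/limits:

```python
# A minimal sketch, assuming the kubernetes client (pip install kubernetes)
# and a cluster with metrics-server installed. Namespace is a placeholder.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

metrics = client.CustomObjectsApi().list_namespaced_custom_object(
    group="metrics.k8s.io", version="v1beta1",
    namespace="default", plural="pods",  # placeholder namespace
)

# Observed CPU/memory per container; the peaks seen under a realistic
# benchmark are a sensible starting point for requests and limits.
for pod in metrics["items"]:
    for c in pod["containers"]:
        print(pod["metadata"]["name"], c["name"],
              c["usage"]["cpu"], c["usage"]["memory"])
```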

How to identify a network performance issue?

I am a little confused about my message server's network bottleneck. I can clearly see that the problem is caused by the large number of network operations, but I am not sure why, or how to identify it.
Currently we are using GCP VMs, with 4 cores/8 GB RAM for our message server. Redis and Cassandra run on other servers in the same location. The problem occurs in the network operations to the Redis server and the Cassandra server.
I need to handle 3000+ requests at once to save data to Redis and 12000+ requests to the Cassandra server.
My task was consuming all my CPU power, and CPU usage dropped right after I merged the Redis and Cassandra requests into a kind of batch request. The penalty is that I have to delay saving my data.
What I want to know is how I can determine the network capacity of my system. How many requests per second is a reasonable load? From my testing it seems obvious that the bottleneck is the network operations, but I can't prove it, and I don't know how to estimate a reasonable network usage for my system. Are there tools or anything else that can help me confirm the network problem? Or is this just a misconfiguration of my GCP system?
Thanks,
Eric
There is a "monitoring" label in each instance where you can check through graphs values like instance CPU, Network and RAM usage.
But to further check the performance of your instance you should use StackDriver Logging1 and Monitoring2. It stores a lot of information from the internal servers and the system performance. for that you will need to install the agent in the instance. It also stores information about your Load Balancer3, in case you are using one with your web application, which is very advisable since it scale your resources up or down with intelligent Autoscaling.
But in order to test out your network you will need to use some third party tool to overload the network. There are multiple tools to achieve this, like JMeter.
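On the application side, the request merging the asker describes is usually done with client-side pipelining rather than hand-rolled batches. A minimal sketch with redis-py; the host and key names are placeholders (on the Cassandra side, the driver's execute_async serves a similar purpose):

```python
# A minimal sketch of batching Redis writes with redis-py (pip install redis).
# Host and key names are placeholders.
import redis

r = redis.Redis(host="redis.internal", port=6379)  # placeholder host

# Instead of 3000+ individual round trips, pipeline the writes so they share
# a handful of network round trips; fewer round trips also means less CPU
# spent on per-request I/O handling.
pipe = r.pipeline(transaction=False)
for i in range(3000):
    pipe.set(f"message:{i}", f"payload-{i}")
pipe.execute()
```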

Is it possible to autoscale Akka?

I need an Akka cluster to run multiple CPU-intensive jobs. I cannot predict how much CPU power I will need. Sometimes load is high, while at other times there isn't much. I guess autoscaling is a good option, meaning, for example, that I should be able to specify that I need a minimum of 2 and a maximum of 10 actors, and the cluster should scale up or down, with a cool-off period, as load rises and falls. Is there a way to do that?
I am guessing one could make a Docker image of the codebase and autoscale it using Kubernetes. Is that possible? Is there a native Akka solution?
Thanks
If you consider a project like hseeberger/constructr and its issue 179, a native Akka solution should be based on akka/akka-management:
This repository contains interfaces to inspect, interact and manage various Parts of Akka, primarily Akka Cluster. Future additions may extend these concepts to other parts of Akka.
There is a demo for Kubernetes.
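For the Kubernetes route, the minimum/maximum bounds from the question map directly onto a HorizontalPodAutoscaler. A minimal sketch using the kubernetes Python client, assuming the Akka nodes are packaged as a Deployment named akka-worker (a placeholder) with CPU requests set so the autoscaler has a signal; akka-management's role is then the clean joining and downing of cluster nodes as replicas come and go:

```python
# A minimal sketch, assuming the Akka app already runs as a Deployment named
# "akka-worker" (placeholder) with CPU requests set, so the HPA has a signal.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="akka-worker-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="akka-worker"
        ),
        min_replicas=2,   # the "minimum 2" from the question
        max_replicas=10,  # the "maximum 10" from the question
        target_cpu_utilization_percentage=80,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa  # placeholder namespace
)
```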

How much load can Parse Server on a Compute Engine instance handle?

The server will run alone on a single Compute Engine instance. What could limit its serving capacity, and how much load can a single instance (4 vCPUs and 15 GB memory) handle?
Note: I've already looked at Kubernetes and even load balancing multiple instances, but accessing the database from multiple clients is a little too complicated for me right now. So if you're going to suggest containerisation, please keep in mind that I'm a beginner.
Any and all advice is welcome. Thanks!
The serving capacity of the server depends on a number of factors, including the requests you receive from clients, any additional applications running on it, etc. For a 4-core CPU, as per this help center article, you get a peak network performance of 8 Gb/s, which is good for a single instance. Since you are running the Parse Server alone on the VM, it should work very well with the above-mentioned configuration.
A container is a tool for developers. It contains all the dependencies and libraries required to run/test a particular application in a container. Applications running in containers are easily portable.
There is a help center article which might give you a more precise idea of containers and their usage, while Kubernetes Engine will help you deploy/manage these containerized applications.
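Rather than guessing a request ceiling, it is usually easier to measure one. A minimal load-generation sketch in Python; the URL, concurrency, and request count are placeholders, and a dedicated tool such as JMeter or Locust will give much better numbers:

```python
# A minimal load-generation sketch (pip install requests). URL, concurrency,
# and request count are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://your-instance/parse/health"  # placeholder endpoint

def hit(_):
    try:
        return requests.get(URL, timeout=5).status_code
    except requests.RequestException:
        return None  # count timeouts/refusals as failures

start = time.time()
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(hit, range(2000)))
elapsed = time.time() - start

ok = sum(1 for s in results if s == 200)
print(f"{ok}/{len(results)} OK in {elapsed:.1f}s "
      f"-> {len(results) / elapsed:.0f} req/s")
```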

Strategy for distributed computing inside a microservices architecture?

I am looking for advice for the following problem:
I am working with other people on a microservices architecture where the microservices are distributed across different machines. Resources on the machines are very limited.
Currently, communication runs through a message broker.
In my use case, one microservice occasionally needs to run a heavy computation. I would like to perform the computation on a machine with low CPU usage and enough available memory.
My first idea is that every machine runs a small microservice which publishes its CPU usage and available memory to the message broker (a sketch of such an agent follows below). Each microservice that needs to distribute its workload looks for the fittest machines and installs "worker" microservices on the fly. Results are published to the message broker. Since resources are limited, the worker microservices are uninstalled when no longer needed.
I haven't found a similar use case yet. Do you guys know of a better existing solution?
I am quite new to the topic of microservices and distributed computing, so I would appreciate some advice and help.
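As a sketch of the resource-reporting agent from the first idea above, assuming RabbitMQ as the message broker; the broker host, queue name, and reporting interval are all placeholders:

```python
# A minimal sketch of the per-machine stats agent, assuming RabbitMQ as the
# broker (pip install pika psutil). Host, queue, and interval are placeholders.
import json
import socket
import time

import pika
import psutil

connection = pika.BlockingConnection(pika.ConnectionParameters("broker.internal"))
channel = connection.channel()
channel.queue_declare(queue="machine-stats")  # placeholder queue name

while True:
    stats = {
        "host": socket.gethostname(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_available_bytes": psutil.virtual_memory().available,
    }
    channel.basic_publish(exchange="", routing_key="machine-stats",
                          body=json.dumps(stats))
    time.sleep(10)  # placeholder reporting interval
```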