How to replace instrument cluster sample application? - android-automotive

How can I configure the system so that, at startup, my own custom cluster application is loaded on the cluster display instead of the ClusterHomeSample application?
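On AOSP-based builds, one common approach is a runtime resource overlay (RRO) targeting the car service resources. The sketch below is an assumption to verify against your platform version: the config name (`config_clusterHomeActivity`) and the component name of the custom activity are illustrative.

```xml
<!-- Sketch of a car-service resource overlay (RRO). The config name
     config_clusterHomeActivity is how recent AOSP builds select the
     cluster activity; the component below is a placeholder for your
     own cluster application. Verify both against your platform. -->
<resources>
    <string name="config_clusterHomeActivity" translatable="false">
        com.example.mycluster/com.example.mycluster.MyClusterActivity
    </string>
</resources>
```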

Related

How to increase load on stateful application in kubernetes cluster using a script

I have implemented a horizontal pod autoscaler for my stateful application. How can I increase the load on this application so that CPU utilisation goes up and the pods are autoscaled?
You can use the Kubernetes performance testing tutorial, "Load test a cluster", as an example of how to perform a load test on a cluster.
You can use kubernetes-jmeter to generate workloads for load testing.
You can write a script in any language you know to generate load.
In any case, I would recommend starting with the 13-Step Guide to Performance Testing in Kubernetes to see how you can set up performance testing with monitoring.
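For the script option, here is a minimal portable sketch: it fans out parallel HTTP requests against your application's Service to drive CPU usage up. The service URL, concurrency, and request count are placeholders for your own endpoint.

```shell
#!/bin/sh
# Minimal HTTP load generator: CONCURRENCY workers each send REQUESTS
# requests to SERVICE_URL, then the script waits for all of them.
# All three values are placeholders -- point SERVICE_URL at the
# Service exposing your stateful application.
SERVICE_URL="${SERVICE_URL:-http://localhost:8080/}"
CONCURRENCY="${CONCURRENCY:-5}"
REQUESTS="${REQUESTS:-100}"

i=0
while [ "$i" -lt "$CONCURRENCY" ]; do
  (
    j=0
    while [ "$j" -lt "$REQUESTS" ]; do
      # -m 2: give up on any single request after 2 seconds;
      # ignore individual failures so the loop keeps going.
      curl -s -m 2 -o /dev/null "$SERVICE_URL" || true
      j=$((j + 1))
    done
  ) &
  i=$((i + 1))
done
wait
echo "sent $((CONCURRENCY * REQUESTS)) requests to $SERVICE_URL"
```

Run it from any pod or machine that can reach the Service, and watch `kubectl get hpa -w` to see the autoscaler react.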

Set proper cluster name to GridGain web console

I am using two GridGain clusters in Kubernetes, managed and monitored through a single web console (free version). Both clusters are connected properly and working as I expected, but the cluster names are generated automatically and are meaningless for identifying a cluster. Is there any way to set a proper cluster name for the clusters in the GridGain web console?
$ control.sh --change-tag <new-tag>
source: https://ignite.apache.org/docs/latest/tools/control-script#cluster-id-and-tag
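Since the clusters run in Kubernetes, one way to run the control script is inside one of the cluster pods; the pod name and install path below are placeholders for your own deployment:

```shell
# Placeholders: substitute your pod name, namespace and GridGain path.
kubectl exec -it gridgain-cluster-0 -- /opt/gridgain/bin/control.sh --change-tag finance-cluster
```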

Where/How to configure Cassandra.yaml when deployed by Google Kubernetes Engine

I can't find the answer to a pretty easy question: where can I configure Cassandra (normally via cassandra.yaml) when it's deployed on a cluster with Kubernetes using Google Kubernetes Engine?
I'm completely new to distributed databases, Kubernetes, etc., and I'm setting up a Cassandra cluster (4 VMs, 1 pod each) on GKE for a university course right now.
I used the official example on how to deploy Cassandra on Kubernetes from the Kubernetes homepage (https://kubernetes.io/docs/tutorials/stateful-application/cassandra/), with a StatefulSet, persistent volume claims, a central load balancer, etc. Everything seems to work fine, and I can connect to the DB via my Java application (using the DataStax Java/Cassandra driver) and via Google Cloud Shell + CQLSH on one of the pods directly. I created a keyspace and some tables and started filling them with data (~100 million entries planned), but as soon as the DB reaches a certain size, expensive queries result in a timeout exception (via DataStax and via CQL), just as expected. Speed isn't necessary for these queries right now; it's just for testing.
Normally I would start by increasing the timeouts in cassandra.yaml, but I'm unable to locate it on the VMs and have no clue where to configure Cassandra at all. Can someone tell me whether these configuration files even exist on the VMs when deploying with GKE, and where to find them? Or do I have to configure those Cassandra details via kubectl/CQL/the StatefulSet, or somewhere else?
I think the fastest way to configure Cassandra on Kubernetes Engine is to use the Cassandra deployment from the marketplace, where you can configure your cluster; you can also follow the guide linked there to configure it correctly.
======
The timeout setting has to be modified inside the container, i.e. in Cassandra's own configuration.
You can use the command kubectl exec -it POD_NAME -- bash to open a shell in the Cassandra container; from there you can look up the configuration and change whatever you require.
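For example, to find and inspect the file (the pod name below follows the tutorial's StatefulSet naming, and the config path varies by image, so both are assumptions):

```shell
# Shell into the first pod of the StatefulSet.
kubectl exec -it cassandra-0 -- bash

# Inside the container: locate the config file (path varies by image).
find / -name cassandra.yaml 2>/dev/null

# Inspect the relevant timeouts, e.g. read_request_timeout_in_ms.
grep -n '_timeout_in_ms' /etc/cassandra/cassandra.yaml
```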
After you have the configuration you need, you will want to automate it to avoid manual intervention every time one of your pods is recreated (the configuration will not survive a container recreation). The following options are only suggestions:
Create your own Cassandra image from your own Dockerfile, changing the configuration values you need there; the image you are using now is a public one, and the container will always start with the configuration baked into the pulled image.
Edit the YAML of the StatefulSet where Cassandra is running and add an initContainer, which lets you change the configuration of the running Cassandra container; a script then applies the change automatically every time your pods start.
Choose the option that fits you best.
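A sketch of the initContainer option: an initContainer copies the image's config into a shared emptyDir, patches the timeout, and the main container mounts the patched directory over its config path. The names, paths, and the timeout value are illustrative, and the config path must match wherever your image actually keeps cassandra.yaml.

```yaml
# Sketch only -- verify paths and the image tag against your deployment.
spec:
  template:
    spec:
      volumes:
        - name: cassandra-config
          emptyDir: {}
      initContainers:
        - name: patch-config
          image: gcr.io/google-samples/cassandra:v13   # image from the linked tutorial
          command: ["sh", "-c"]
          args:
            - cp -r /etc/cassandra/* /config/ &&
              sed -i 's/^read_request_timeout_in_ms:.*/read_request_timeout_in_ms: 20000/' /config/cassandra.yaml
          volumeMounts:
            - name: cassandra-config
              mountPath: /config
      containers:
        - name: cassandra
          volumeMounts:
            - name: cassandra-config
              mountPath: /etc/cassandra   # overlays the patched config
```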

Deploy service dynamically according to load with Google Kubernetes Engine

I'm currently working on an application deployed with Google Kubernetes Engine. I want to be able to change the behavior of a service if the load on my application reaches a certain point. The idea is to deploy a similar service which consumes fewer resources, so that my application can still work under a bigger load.
Is this possible with Google Kubernetes Engine?
Yes, it can be done with HPA and custom metrics from Prometheus. We use this setup to autoscale our deployments based on requests per minute.
Prometheus scrapes the metric from the application, and the Prometheus adapter makes it available to Kubernetes:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
https://github.com/DirectXMan12/k8s-prometheus-adapter
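A sketch of such an HPA, scaling on a custom Pods metric exposed by the adapter. The metric name (`http_requests_per_minute`) and the target value are examples; use whatever your adapter configuration actually exposes, and note that the linked adapter predates the stable autoscaling/v2 API used below.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app          # placeholder names throughout
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_minute
        target:
          type: AverageValue
          averageValue: "500"   # scale out above 500 req/min per pod
```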

Changing the configuration file in current use

Service Fabric allows you to run the cluster with various configurations:
How do I switch the cluster to run under a different configuration file on my localhost?
Well, I'd say you have to remove the existing cluster first. For instance, consider the 'Switch cluster mode' button available in the SF Local Cluster Manager; here is what it does according to the documentation:
When you change the cluster mode, the current cluster is removed from your system and a new cluster is created. The data stored in the cluster is deleted when you change cluster mode.
So here is the path that should work for you:
Run RemoveServiceFabricCluster.ps1 to remove Service Fabric from each machine in the configuration (the one you pass as a parameter to this command).
Run CleanFabric.ps1 to remove Service Fabric from the current machine.
Run CreateServiceFabricCluster.ps1, this time passing the new configuration file.
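Putting the three steps together as a sketch (the .ps1 scripts ship with the standalone Service Fabric package; the config file names are placeholders for your own JSON configurations):

```powershell
# Placeholders: point -ClusterConfigFilePath at your actual config files.
.\RemoveServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.Old.json
.\CleanFabric.ps1
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.New.json -AcceptEULA
```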