Set a proper cluster name in GridGain Web Console - Kubernetes

I am running two GridGain clusters in Kubernetes, managed and monitored through a single Web Console (free version). Both clusters are connected properly and working as I expected, but the cluster names are generated automatically and give no hint of which cluster is which. Is there any way to set a proper cluster name for the clusters in the GridGain Web Console? My dashboard view looks like this.

$ control.sh --change-tag <new-tag>
source: https://ignite.apache.org/docs/latest/tools/control-script#cluster-id-and-tag
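If the clusters run in Kubernetes, the script can be invoked inside one of the server pods. A minimal sketch (pod name, namespace and install path are assumptions, adjust them to your deployment):

# run control.sh inside a GridGain server pod and set a meaningful tag
kubectl exec -it gridgain-cluster-0 -n gridgain -- \
  /opt/gridgain/bin/control.sh --change-tag production-cluster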

Related

Why do I see files from another GCP Kubernetes cluster in a different cluster?

I am trying to learn Google Cloud and very new to it.
I created 1 project in GCP: project1.
And I created a Kubernetes cluster in project1 with the name p1-cluster1. In this cluster, I click on the Connect button and a terminal opens on the GCP page. I created a new directory named development under /home/me, and I have developed a personal project under /home/me/development, so I now have a bunch of code there.
I decided to develop a second personal project. For this purpose, I created a new project with the name project2 in GCP. And then created a new Kubernetes cluster with the name p2-cluster2. And when I connect to this cluster, a terminal window opens, and I automatically end up in my home folder /home/me. I expect to see an empty folder but I instead see the development folder that I created in project1 in p1-cluster1.
Why do I see contents/files of another project (and another cluster) in a different project (and cluster)? Am I doing something wrong? If this is normal, what is the benefit of creating different projects?
I can create a new folder with the name development_2. But what if I accidentally ruin things (e.g. the operating system) while working under the development folder? Do I also ruin things for the project under the development_2 folder?
And also, if I simultaneously run 2 tasks in 2 different projects, will they compete with each other for system resources (e.g. memory and CPU)?
This is extremely confusing and I could not clear it up by looking at the documentation. I would really appreciate the help of people more experienced in this specific domain.
I suppose you use Cloud Shell to "connect" to the clusters. It is just a convenience instead of using your own machine's shell: you get a machine which can be used for development instead of your local machine.
That is why you see the same files and the same directory. It is not on the GKE cluster, but a "local" machine from the perspective of GKE.
In order to actually execute something on the cluster you have to use kubectl, which has a concept of kube contexts. If you have multiple clusters, you need multiple kube contexts.
So if you "Connect" from p1-cluster1, your kube context will be set to that cluster.
See these articles for more detail:
https://cloud.google.com/kubernetes-engine/docs/quickstart#launch
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl
EDIT:
So when you run
gcloud container clusters get-credentials p1-cluster1
on your local machine, you set the kube context so that a command like
kubectl get pods
runs against that cluster. Only that command is executed on the cluster.
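For example, fetching credentials for both clusters and switching between them could look like this (the zone is an assumption; the project and cluster names are the ones from the question):

gcloud container clusters get-credentials p1-cluster1 --zone us-central1-a --project project1
gcloud container clusters get-credentials p2-cluster2 --zone us-central1-a --project project2

kubectl config get-contexts                                        # list the known contexts
kubectl config use-context gke_project1_us-central1-a_p1-cluster1  # point kubectl at p1-cluster1
kubectl get pods                                                   # now runs against p1-cluster1 only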

Auto-joining newly created VMs/servers as Kubernetes nodes to the master

Hi, I am working on Google Cloud Platform but I am not using GKE; instead I am creating the k8s cluster manually. Here is my setup:
11 servers in total
Of these, 5 servers are static and don't need any scaling
The remaining 5 servers need to scale up if CPU or RAM consumption goes beyond a certain
limit, i.e. I will spin up only 3 servers initially and, if the CPU/RAM threshold is crossed, spin up 2 more using the Google Cloud load balancer.
1 k8s master server
To implement this, I have already created one custom image on which I have installed Docker and Kubernetes. Using it, I have created one instance template and then an instance group.
Now the problem statement is:
Although I have created the image with everything installed, when I create an instance group in which 3 VMs are created, these VMs do not automatically connect to my k8s master. Is there any way to automatically connect a newly created VM as a node to the k8s master, so that I do not have to run the join command manually on each server?
Thanks for the help in advance.
so that I do not have to run the join command manually on each server
I am assuming that you can successfully run the join command to join the newly created VMs to the Kubernetes master manually.
If that is the case, you can use the startup-script feature of Google Compute Engine.
Here is the documentation:
https://cloud.google.com/compute/docs/instances/startup-scripts
https://cloud.google.com/compute/docs/instances/startup-scripts/linux#passing-directly
In short, startup-script is a Google Compute Engine feature that automatically runs a custom script during start-up.
And, the script could look something like this:
#! /bin/bash
kubeadm join .......
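
To wire this into the managed instance group, a minimal sketch (the template and image names and the startup.sh file are placeholders; the full join command can be generated on the master with kubeadm token create --print-join-command):

# startup.sh: the script above, with the full join command filled in.
# Attach it to the instance template used by the managed instance group:
gcloud compute instance-templates create k8s-worker-template \
  --image k8s-node-custom-image \
  --metadata-from-file startup-script=startup.sh

Note that kubeadm bootstrap tokens expire after 24 hours by default, so the token baked into the script needs to be refreshed or created with a longer TTL (for example kubeadm token create --ttl 0).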

Where/how to configure cassandra.yaml when deployed with Google Kubernetes Engine

I can't find the answer to a pretty easy question: where can I configure Cassandra (normally done via cassandra.yaml) when it's deployed on a cluster with Kubernetes using Google Kubernetes Engine?
I'm completely new to distributed databases, Kubernetes etc., and I'm setting up a Cassandra cluster (4 VMs, 1 pod each) on GKE for a university course right now.
I used the official example on how to deploy Cassandra on Kubernetes that can be found on the Kubernetes homepage (https://kubernetes.io/docs/tutorials/stateful-application/cassandra/) with a StatefulSet, persistent volume claims, a central load balancer etc. Everything seems to work fine and I can connect to the DB via my Java application (using the DataStax Java/Cassandra driver) and via Google Cloud Shell + CQLSH on one of the pods directly. I created a keyspace and some tables and started filling them with data (~100 million entries planned), but as soon as the DB reaches a certain size, expensive queries result in a timeout exception (via DataStax and via CQL), just as expected. Speed isn't necessary for these queries right now; it's just for testing.
Normally I would start by trying to increase the timeouts in cassandra.yaml, but I'm unable to locate it on the VMs and have no clue where to configure Cassandra at all. Can someone tell me if these configuration files even exist on the VMs when deploying with GKE, and where to find them? Or do I have to configure those Cassandra details via kubectl/CQL/the StatefulSet or somewhere else?
I think the fastest way to configure Cassandra on Kubernetes Engine is to use the following Cassandra deployment from the Marketplace; there you can configure your cluster, and you can follow the guide linked there to configure it correctly.
======
The timeout setting seems to be a configuration that needs to be modified inside the container (i.e. in the Cassandra configuration itself).
You can use the command kubectl exec -it POD_NAME -- bash to open a shell in the Cassandra container; that lets you get at the container's configuration files, look up the setting you need and change it (a short sketch of this follows after the list below).
After you have the configuration you require, you will need to automate it in order to avoid manual intervention every time one of your pods gets recreated (the configuration will not survive a container recreation). The following options are only suggestions:
Create your own Cassandra image from your own Dockerfile, changing the configuration values you require there, because the image you are using right now is a public one and the container will always start with the configuration baked into the pulled image.
Edit the YAML of the StatefulSet where Cassandra is running and add an initContainer, which lets a script change the configuration of your running Cassandra container automatically every time your pods start.
Choose the option that fits you best.
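As a concrete sketch of the manual step, assuming the pod name cassandra-0 from the Kubernetes tutorial (the config path inside the container can differ per image):

kubectl exec -it cassandra-0 -- bash
# inside the container:
find / -name cassandra.yaml 2>/dev/null      # often under /etc/cassandra/
grep timeout /etc/cassandra/cassandra.yaml   # e.g. read_request_timeout_in_ms, write_request_timeout_in_ms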

Kubernetes Cluster - How to automatically generate documentation/Architecture of services

We started using Kubernetes a while ago, and we have now deployed a fair number of services. It's becoming more and more difficult to know exactly what is deployed. I suppose many people are facing the same issue, so is there already a solution to handle it?
I'm talking about a solution that, when connected to Kubernetes (via kubectl for example), can generate a kind of map of the cluster.
In order to display one or many resources, you need to use the kubectl get command.
To show the details of a specific resource or group of resources, you can use the kubectl describe command.
Please check the links I provided for more details and examples.
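For example, a quick inventory could look like this (the deployment and namespace names are placeholders):

kubectl get all --all-namespaces                   # pods, services, deployments, replica sets, ...
kubectl get deployments,statefulsets,ingress -A    # only the resource kinds you care about
kubectl describe deployment my-deployment -n my-namespace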
You may also want to use Web UI (Dashboard)
Dashboard is a web-based Kubernetes user interface. You can use
Dashboard to deploy containerized applications to a Kubernetes
cluster, troubleshoot your containerized application, and manage the
cluster resources. You can use Dashboard to get an overview of
applications running on your cluster, as well as for creating or
modifying individual Kubernetes resources (such as Deployments, Jobs,
DaemonSets, etc). For example, you can scale a Deployment, initiate a
rolling update, restart a pod or deploy new applications using a
deploy wizard.
Let me know if that helped.

Adding multiple services of the same type to Ambari

I am using Ambari to manage my Kafka cluster. I want to create another cluster which uses the same ZooKeeper as the previous cluster but is otherwise independent. I want to use the same Ambari service (UI) for this new one as well. Is this possible?
It's possible to define a host config group in Ambari, such that a subset of hosts shares similar configuration (such as a different ZK chroot for each Kafka cluster); however, for operations like service restarts and the general display on the Kafka service page, the different host groups would not be kept apart.
In my experience, the host group feature has only been used when some HDFS nodes have more disks attached, or more memory, than others, so YARN and MapReduce settings were increased for them.
If you really need multiple isolated clusters, that's where external configuration management comes into play.
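For illustration, the per-cluster ZK chroot mentioned above comes down to a one-line difference in each broker's server.properties (host names and chroot path are assumptions):

# server.properties for a broker in the second Kafka cluster
broker.id=0
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-cluster-2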