Changing the configuration file in current use - PowerShell

Service Fabric allows you to run the cluster with various configurations.
How do I switch the cluster to run under a different configuration file on my localhost?

Well, I'd say you have to remove the existing cluster first. For instance, consider the 'Switch cluster mode' button available in Service Fabric Local Cluster Manager; here is what it does according to the documentation:
When you change the cluster mode, the current cluster is removed from your system and a new cluster is created. The data stored in the cluster is deleted when you change cluster mode.
So here is the path that should work for you -
Run RemoveServiceFabricCluster.ps1 to remove Service Fabric from each machine in the configuration (the one that you'll pass as a parameter to this command)
Run CleanFabric.ps1 to remove Service Fabric from the current machine
Run CreateServiceFabricCluster.ps1, this time passing the new config file
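For reference, here is a minimal PowerShell sketch of those three steps, assuming the scripts from the standalone package are in the current directory; the config file names are placeholders, and the parameter names should be verified against your package version:
# Remove Service Fabric from every machine listed in the old configuration
# (OldClusterConfig.json / NewClusterConfig.json are placeholder names)
.\RemoveServiceFabricCluster.ps1 -ClusterConfigFilePath .\OldClusterConfig.json
# Clean up what is left of Service Fabric on the current machine
.\CleanFabric.ps1
# Recreate the cluster from the new configuration file
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\NewClusterConfig.json -AcceptEULA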

Related

Auto joining newly created VM/Servers as kubernetes node to master

Hi, I am working on Google Cloud Platform but I am not using GKE. Rather, I am creating the k8s cluster manually. Following is my setup:
Total 11 servers
Out of these, 5 servers are static and don't need any scaling
The remaining 5 servers need to scale up if CPU or RAM consumption goes beyond a certain limit, i.e. I will spin up only 3 servers initially and, if the CPU/RAM threshold is crossed, spin up 2 more using the Google Cloud load balancer.
1 k8s master server
To implement this load balancer I have already created a custom image with Docker and Kubernetes installed. Using this image I have created an instance template and then an instance group.
Now the problem is:
Although I have created the image with everything installed, when I create an instance group in which 3 VMs are spun up, these VMs do not automatically connect to my k8s master. Is there any way to automatically connect a newly created VM as a node to the k8s master, so that I do not have to run the join command manually on each server?
Thanks for the help in advance.
so that I do not have to run the join command manually on each server
I am assuming that you can successfully join the newly created VMs to the Kubernetes master by running the join command manually.
If that is the case, you can use the startup-script feature in Google Compute Engine.
Here is the documentation:
https://cloud.google.com/compute/docs/instances/startup-scripts
https://cloud.google.com/compute/docs/instances/startup-scripts/linux#passing-directly
In short, startup-script is a Google Compute Engine feature that automatically runs your custom script during start-up.
And, the script could look something like this:
#! /bin/bash
kubeadm join .......
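If it helps, here is a hedged end-to-end sketch: generate the join command on the master, wrap it in a script, and attach it to the instance template as startup-script metadata. The file name join.sh, the template name and the image name are placeholders:
# On the master: print a join command with a fresh token
JOIN_CMD=$(kubeadm token create --print-join-command)

# Wrap it in a startup script (join.sh is a placeholder name)
cat > join.sh <<EOF
#! /bin/bash
$JOIN_CMD
EOF

# Attach the script to the instance template backing the instance group
gcloud compute instance-templates create k8s-node-template \
    --image my-k8s-node-image \
    --metadata-from-file startup-script=join.sh
Note that kubeadm tokens expire after 24 hours by default, so a long-lived template needs either a token created with --ttl 0 or some other way to refresh the join command.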

Kubernetes: configure every deployment/pod individually on each node

I'm faced with a scenario where we are thinking about using Kubernetes, but I'm not sure if it is the right tool for the job:
We have multiple vehicles, each having a computer connected to our main server via a cellular network. We want to deploy several applications on every vehicle, so the vehicles are our nodes. We do not need any scaling; every vehicle will have an identical set of deployed applications running in two pods, and if a vehicle's computer is shut down, the pods must not be scheduled onto another node. Although the set of applications is always the same, their configuration is different on each vehicle (node). For instance, some vehicles have a camera, and this camera can only be accessed if its serial number is provided to the application. Other vehicles have no camera at all.
The Problem:
Using DaemonSets we can probably achieve that all vehicles have just these two pods with the same containers. But the individual configuration worries me. We thought about putting environment variables with the relevant configs on each vehicle's computer, but env variables of the host system cannot be accessed inside the containers running in pods. Is there any possibility to provide a node-unique configuration to our deployments? Is Kubernetes the right tool to use here at all?
Sorry, I wasn't able to fully follow the vehicle scenario on a single read, but I can help with this part:
But env variables of the host system cannot be accessed inside the containers running in pods. Is there any possibility to provide a node-unique configuration to our deployments?
Yes, there are possibilities. I am not sure how you are setting up the environment on the host/K8s node, but there is the hostPath option: you can mount a directory from the node's filesystem directly into the container. When provisioning each Kubernetes node, create a file at a fixed location containing the env vars you want to pass to the app, and have your pod mount that path as a hostPath volume (see the sketch below).
If a node gets replaced during scaling, the new pods/containers won't find this file if you only created it manually the first time.
So keep the creation of this env file in the node's user data (startup script); that way any node created in the node pool spins up with the file already at the default location.
Read more: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
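For illustration, here is a minimal DaemonSet sketch of that idea, assuming each vehicle's computer writes its settings to /etc/vehicle/config.env; the path, names and image below are placeholders:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: vehicle-apps
spec:
  selector:
    matchLabels:
      app: vehicle-apps
  template:
    metadata:
      labels:
        app: vehicle-apps
    spec:
      containers:
      - name: app
        image: my-registry/vehicle-app:latest   # placeholder image
        volumeMounts:
        - name: vehicle-config
          mountPath: /config          # app reads /config/config.env
          readOnly: true
      volumes:
      - name: vehicle-config
        hostPath:
          path: /etc/vehicle          # node-local directory containing config.env
          type: Directory
The application then parses /config/config.env itself at start-up, so each node's file drives that node's configuration.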
Add-on:
If you want to use a node's labels inside a container: https://github.com/scottcrossen/kube-node-labels

Where/How to configure Cassandra.yaml when deployed by Google Kubernetes Engine

I can't find the answer to a pretty easy question: where can I configure Cassandra (normally via cassandra.yaml) when it's deployed on a cluster with Kubernetes using Google Kubernetes Engine?
So I'm completely new to distributed databases, Kubernetes, etc., and I'm setting up a Cassandra cluster (4 VMs, 1 pod each) using GKE for a university course right now.
I used the official example on how to deploy Cassandra on Kubernetes that can be found on the Kubernetes homepage (https://kubernetes.io/docs/tutorials/stateful-application/cassandra/) with a StatefulSet, persistent volume claims, a central load balancer, etc. Everything seems to work fine and I can connect to the DB via my Java application (using the DataStax Java/Cassandra driver) and via Google Cloud Shell + CQLSH on one of the pods directly. I created a keyspace and some tables and started filling them with data (~100 million entries planned), but as soon as the DB reaches a certain size, expensive queries result in a timeout exception (via DataStax and via CQL), just as expected. Speed isn't necessary for these queries right now, it's just for testing.
Normally I would start by trying to increase the timeouts in cassandra.yaml, but I'm unable to locate it on the VMs and have no clue where to configure Cassandra at all. Can someone tell me if these configuration files even exist on the VMs when deploying with GKE, and where to find them? Or do I have to configure those Cassandra details via kubectl/CQL/StatefulSet or somewhere else?
I think the faster way to configure Cassandra on Kubernetes Engine is to use the Cassandra deployment available in the Marketplace; there you can configure your cluster, and you can follow the guide linked there to configure it correctly.
======
The timeout settings are part of Cassandra's own configuration, so they have to be modified inside the container.
You can use the command kubectl exec -it POD_NAME -- bash to open a shell in a Cassandra container; that will let you get at the container's configuration, where you can look up the setting and change it to what you require.
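For example, to locate and inspect the timeout settings (assuming the pod is named cassandra-0 as in the Kubernetes tutorial, and that the image keeps its configuration under /etc/cassandra, which may differ for other images):
# list the *_timeout_in_ms values in the running container's config
kubectl exec -it cassandra-0 -- grep -n "timeout_in_ms" /etc/cassandra/cassandra.yaml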
Once you have the configuration you need, you will want to automate it to avoid manual intervention every time one of your pods gets recreated (the configuration will not survive a container recreation). The following options are only suggestions:
Build your own Cassandra image from your own Dockerfile and change the value of the configuration you need there; the image you are using right now is a public one, so the container will always start with the configuration baked into the pulled image.
Edit the YAML of the StatefulSet where Cassandra is running and add an initContainer, which lets you change the configuration of the running Cassandra container; this applies the change automatically with a script every time your pods start (see the sketch after this list).
Choose the option that fits you better.
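As an illustration of the initContainer option, here is a hedged fragment of the StatefulSet's pod spec. It assumes the image keeps its configuration under /etc/cassandra (verify for your image) and bumps read_request_timeout_in_ms as an example; the container names, image tag and the 20000 ms value are arbitrary:
# fragment of spec.template.spec in the Cassandra StatefulSet
initContainers:
- name: tune-cassandra-config
  image: cassandra:3.11                    # same base image, so the default config is available
  command:
  - sh
  - -c
  - |
    cp -r /etc/cassandra/. /config-out/ &&
    sed -i 's/^read_request_timeout_in_ms:.*/read_request_timeout_in_ms: 20000/' /config-out/cassandra.yaml
  volumeMounts:
  - name: cassandra-config
    mountPath: /config-out
containers:
- name: cassandra
  image: cassandra:3.11
  volumeMounts:
  - name: cassandra-config
    mountPath: /etc/cassandra              # overlay the tuned configuration
volumes:
- name: cassandra-config
  emptyDir: {}
Whether you bake the change into your own image or apply it with an initContainer like this, the point is the same: the tuned configuration is re-applied automatically whenever the pod is recreated.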

How to repair bad IP addresses in standalone Service Fabric cluster

We've just shipped a standalone Service Fabric cluster to a customer site with a misconfiguration. Our setup:
Service Fabric 6.4
2 Windows servers, each running 3 Hyper-V virtual machines that host the cluster
We configured the cluster locally using static IP addresses for the nodes. When the servers arrived, the IP addresses of the Hyper-V machines were changed to conform to the customer's available IP addresses. Now we can't connect to the cluster, since every IP in the clusterConfig is wrong. Is there any way we can recover from this without re-installing the cluster? We'd prefer to keep the new IPs assigned to the VMs if possible.
I've tested this only in my test environment (I've never done this in production before, so do it at your own risk), but since you can't connect to the cluster anyway, I think it is worth a try.
Connect to each virtual machine that is part of the cluster and do the following steps:
Locate the Service Fabric cluster files (usually C:\ProgramData\SF\{nodeName}\Fabric)
Take the ClusterManifest.current.xml file and copy it to a temp folder (for example C:\temp)
Go to the Fabric.Data subfolder, take the InfrastructureManifest.xml file and copy it to the same temp folder
Inside each copied file, change the IP addresses of the nodes to the correct values
Stop the FabricHostSvc process by running the net stop FabricHostSvc command in PowerShell
After it stops successfully, run this PowerShell (admin mode) command to update the node's cluster configuration:
New-ServiceFabricNodeConfiguration -ClusterManifestPath C:\temp\ClusterManifest.current.xml -InfrastructureManifestPath C:\temp\InfrastructureManifest.xml
Once the config is updated, start FabricHostSvc again: net start FabricHostSvc
Do this for each node and pray for the best.
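Put together, the per-node procedure could look roughly like this PowerShell sketch (run elevated; the node name and the default C:\ProgramData\SF layout are assumptions, so adjust them to your installation):
$nodeName = "vm0"                                     # placeholder node name
New-Item -ItemType Directory -Force -Path C:\temp | Out-Null
Copy-Item "C:\ProgramData\SF\$nodeName\Fabric\ClusterManifest.current.xml" C:\temp\
Copy-Item "C:\ProgramData\SF\$nodeName\Fabric\Fabric.Data\InfrastructureManifest.xml" C:\temp\

# ... edit the IP addresses in both files under C:\temp before continuing ...

net stop FabricHostSvc
New-ServiceFabricNodeConfiguration `
    -ClusterManifestPath C:\temp\ClusterManifest.current.xml `
    -InfrastructureManifestPath C:\temp\InfrastructureManifest.xml
net start FabricHostSvc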

How to re-connect to Amazon kubernetes cluster after stopping & starting instances?

I created a cluster for trying out Kubernetes using cluster/kube-up.sh on Amazon EC2. Then I stop it to save money when I'm not using it. The next time I start the master & minion instances in Amazon, ~/.kube/config has the old IPs for the cluster master, because EC2 assigns new public IPs to the instances.
Currently I haven't found a way to provide Elastic IPs to cluster/kube-up.sh so that consistent IPs would be kept between stopping & starting instances. Also, the certificate in ~/.kube/config is for the old IP, so manually changing the IP doesn't work either:
Running: ./cluster/../cluster/aws/../../cluster/../_output/dockerized/bin/darwin/amd64/kubectl get pods --context=aws_kubernetes
Error: Get https://52.24.72.124/api/v1beta1/pods?namespace=default: x509: certificate is valid for 54.149.120.248, not 52.24.72.124
How can I make kubectl send its queries to the same Kubernetes master after it restarts on a different IP?
If the only thing that has changed about your cluster is the IP address of the master, you can manually modify the master location by editing the file ~/.kube/config (look for the line that says "server" with an IP address).
This use case (pausing/resuming a cluster) isn't something that we commonly test for so you may encounter other issues once your cluster is back up and running. If you do, please file an issue on the GitHub repository.
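If you prefer not to hand-edit the file, the same change can be made with kubectl; the cluster entry name aws_kubernetes and the IP below are taken from the question (check the exact names with kubectl config view):
# point the existing cluster entry at the master's new public IP
kubectl config set-cluster aws_kubernetes --server=https://52.24.72.124

# the API server certificate was issued for the old IP, so as a stop-gap
# you can skip TLS verification for individual calls:
kubectl --context=aws_kubernetes --insecure-skip-tls-verify=true get pods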
I'm not sure which version of Kubernetes you are using, but in v1.0.6 you can pass the MASTER_RESERVED_IP environment variable to kube-up.sh to assign a given Elastic IP to the Kubernetes master node.
You can check all the available options for kube-up.sh in the config-default.sh file for AWS in the Kubernetes repository.
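For completeness, bringing the cluster up with a reserved master IP would then look roughly like this (the Elastic IP is a placeholder for one you have already allocated in EC2):
export KUBERNETES_PROVIDER=aws
export MASTER_RESERVED_IP=203.0.113.10    # placeholder: your allocated Elastic IP
./cluster/kube-up.sh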