How can I add metadata to all instances in a GKE node-pool? - kubernetes

I want to create a preemptible node pool in GKE where each instance runs a shutdown script when it is shut down. To do this I need to add metadata to each of the instances.
How can I configure that?

When creating your node pool, you can click “advanced edit” and add the shutdown script under Metadata. The section for editing Metadata is just above the Save and Cancel buttons.
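If you prefer the command line, roughly the same thing can be done when creating the node pool with gcloud (a sketch, assuming your gcloud version supports --metadata-from-file on node-pools create; the cluster name, zone and script path are placeholders):

    # shutdown.sh contains the script the instance should run when it is shut down
    gcloud container node-pools create preemptible-pool \
        --cluster=my-cluster \
        --zone=us-central1-a \
        --preemptible \
        --metadata-from-file=shutdown-script=shutdown.sh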

Related

Reset / Rollback Kubernetes to just create state?

I am trying to set up a complete GitLab routine to set up my Kubernetes cluster with all installations and configurations automatically, including decommissioning at the end.
However, the creation and decommissioning steps are among the most time consuming, because I am basically waiting for the provisioning before I can execute further commands.
As I sometimes run into trouble with the basics of the Kubernetes setup, I currently decommission my cluster and create a new one. But this is pretty uneconomical and time consuming.
Question:
Is there a command or a series of commands to completely reset a Kubernetes cluster to its state right after creation?
The closest is probably to do all your work in a new namespace and then delete that namespace when you are done. That automatically deletes all objects inside it.
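For example (a minimal sketch; "scratch" is just a placeholder namespace name):

    kubectl create namespace scratch        # do all of the work inside this namespace
    kubectl apply -n scratch -f my-app/     # install everything into it
    kubectl delete namespace scratch        # removes the namespace and everything in it

Note that cluster-scoped objects such as CustomResourceDefinitions or ClusterRoles are not namespaced, so they would still need to be cleaned up separately.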

Statefulset - Possible to Skip creation of pod 0 when it fails and proceed with the next one?

I currently have a problem with a StatefulSet under the following conditions:
I have a Percona SQL cluster running with persistent storage and 2 nodes.
Now I force both pods to fail:
first I force pod-0 to fail,
afterwards I force pod-1 to fail.
Now the cluster is not able to recover without manual intervention and possible data loss.
Why:
The StatefulSet tries to bring up pod-0 first, but this pod will not come online because of the following message:
[ERROR] WSREP: It may not be safe to bootstrap the cluster from this node. It was not the last one to leave the cluster and may not contain all the updates. To force cluster bootstrap with this node, edit the grastate.dat file manually and set safe_to_bootstrap to 1
What I could do alternatively, but don't really like:
I could change ".spec.podManagementPolicy" to "Parallel", but this could lead to race conditions when forming the cluster. I would like to avoid that; I basically like the idea of starting the nodes one after another.
What I would like to have:
the possibility to keep ".spec.podManagementPolicy": "OrderedReady" but with some way to adjust the startup order
the ability to put specific pods into an "inactive" mode so they are ignored until I enable them again
Is something like that available? Does someone have any other ideas?
Unfortunately, nothing like that is available in the standard functionality of Kubernetes.
I see only 2 options here:
Use init containers to check the current state on relaunch.
That will allow you to run any code before the primary container is started, so you can try to use a custom script to resolve the problem (see the sketch after this answer).
Modify the database startup script so it waits for some environment variable or a flag file, and use a PostStart hook to check the state before running the database.
But with both options, you have to write your own startup-order logic.
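As a rough illustration of the first option, an init container can look at grastate.dat on the persistent volume before the database container starts (a sketch only; the image, mount path and volume name are assumptions, and the real decision logic has to consider the whole cluster rather than just this file):

    # Excerpt from the StatefulSet pod template (hypothetical names and paths)
    initContainers:
    - name: check-grastate
      image: busybox:1.36
      command: ["sh", "-c"]
      args:
        - |
          # Illustration only: print the current state so custom logic could
          # decide whether it is safe to bootstrap from this node.
          if [ -f /var/lib/mysql/grastate.dat ]; then
            cat /var/lib/mysql/grastate.dat
          fi
      volumeMounts:
      - name: datadir               # must match the volumeClaimTemplate name
        mountPath: /var/lib/mysql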

Accessing kubelet settings on GKE to fix NodeHasDiskPressure

Every time I make a new deployment of my app, my nodes start reporting NodeHasDiskPressure. After around 10 minutes or so the node goes back to a normal state. I found this SO answer regarding setting thresholds: DiskPressure crashing the node
... but I am not sure how to actually set these thresholds on Google Kubernetes Engine.
The kubelet options you mentioned can be added to your cluster's instance template.
Make a copy of the instance template that is used by your cluster's instance group. After clicking "copy", and before saving, you can make changes to the instance template; add those flags under Instance template --> Custom metadata --> kube-env.
The flags are added in this way:
KUBELET_TEST_ARGS: --image-gc-high-threshold=[your value]
KUBELET_TEST_ARGS: --low-diskspace-threshold-mb=[your value]
KUBELET_TEST_ARGS: --image-gc-low-threshold=[your value]
Once you have set your values, save the instance template, then edit the instance group of your cluster by changing the instance template from the default to your custom one. Once done, hit "rolling restart/replace" on the instance group's main page in your dashboard. This will restart the instances of your cluster with the new values.
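For reference, the last two steps can also be done from the command line (a sketch with hypothetical template and instance-group names; GKE generates these names, so check what they actually are in your project):

    # Inspect the kube-env metadata on the copied template
    gcloud compute instance-templates describe gke-mycluster-default-pool-custom \
        --format="yaml(properties.metadata)"

    # Roll the instance group onto the new template
    gcloud compute instance-groups managed rolling-action start-update \
        gke-mycluster-default-pool-grp \
        --version=template=gke-mycluster-default-pool-custom \
        --zone=us-central1-a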

CodeDeploy on programmatically created EC2 Instance

I have an auto-scaling group set up. When there are no running instances in that group and my application deploys, the auto-scaling group will spin up an instance and deploy. Fantastic. ... well, sort of...
If there is more than one instance in that auto-scaling group, then my scripts might point to one instance or another.
How do I deploy to a specific instance without having to set up the whole CodeDeploy application, deployment group, send a new revision, yada, yada, yada...
Or do you have to take all of those steps each time? How then do you track your deployments? Surely there's a better way to do this?
Ideally, I would like to create an instance based on an AMI, associate that instance with my auto-scaling group, then deploy specifically to that instance. But I can't create-deployment to an instance, only to a deployment group.
This is maddening.
The problem you describe can easily be solved with HashiCorp Packer.
With a Packer template you can describe the way your application is supposed to be deployed to an instance. This instance is then snapshotted and turned into a reusable AMI.
After that you can update the launch configuration (or launch template) of your auto-scaling group with the new AMI.
Documentation for Packer can be found at https://www.packer.io/docs
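A minimal Packer template along those lines could look like the following (a sketch in the legacy JSON format; the region, source AMI id and script name are placeholders):

    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-0123456789abcdef0",
        "instance_type": "t3.micro",
        "ssh_username": "ec2-user",
        "ami_name": "my-app-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "shell",
        "script": "deploy.sh"
      }]
    }

Running packer build on this provisions a temporary instance with deploy.sh, snapshots it into a new AMI, and that AMI is what you then reference from the auto-scaling group's launch configuration.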

JBoss 6 Cluster Node register & deregister listener in deployed application

I have a JBoss 6 AS cluster running in domain mode, with an application deployed on it. My application needs a listener (callback) for when a new node becomes a member of the cluster and when a node is removed. Is there a way to get the list of member nodes and to add such a listener?
The simplest way is to define a clustered cache in the configuration and get access to it from your code (see example). With the cache available, you can call cache.getCacheManager().addListener(Object) with a listener that reacts to org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged. See the listener documentation for more info.
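A minimal listener could look like this (a sketch; the class name is made up, and you would obtain the clustered cache through whatever lookup you already use in your application):

    import org.infinispan.notifications.Listener;
    import org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged;
    import org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent;

    @Listener
    public class ClusterMembershipListener {

        // Called whenever the cluster view changes, i.e. a node joins or leaves.
        @ViewChanged
        public void onViewChange(ViewChangedEvent event) {
            System.out.println("Old members: " + event.getOldMembers());
            System.out.println("New members: " + event.getNewMembers());
        }
    }

    // Registration, for example during application startup:
    // cache.getCacheManager().addListener(new ClusterMembershipListener());

The current member list is also available at any time via cache.getCacheManager().getMembers().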