In an OpenWhisk deployment on a Kubernetes cluster with multiple nodes, how can I assign specific functions to particular nodes? - kubernetes

I want to run some of my OpenWhisk functions on specific nodes in an OpenWhisk deployment implemented on a Kubernetes cluster.
For example, if there are 3 invoker nodes N1, N2, and N3,
I want to run functions A, B, and C on node N1 and functions D, E, and F on node N2.
I tried to implement this with tags, but it didn't work.

Related

How to control Kubernetes Nodes?

I'm looking to create a serverless-like architecture in GKE (a job may use 3000-4000 nodes, though typically it'll be 60-180). Each Pod in this system needs access to a GPU. Someone suggested I create 1 Pod per Node and have that act as a "function".
I could use n1-standard-1 with 1 x NVIDIA Tesla T4. I'm stuck figuring out how I can manage all these Nodes. I am familiar with creating node pools via gcloud, something like this:
gcloud container node-pools create gpu-pool --num-nodes=1 --accelerator type=nvidia-tesla-t4,count=1 --zone us-central1-a --machine-type=n1-standard-1 --cluster k8s-gpu
What I am not sure of is how to create an ephemeral Node with a defined Pod (custom image, etc.) from an external trigger. With Lambda I can trigger functions via an API gateway. I'm looking for something similar, i.e. a 1:1 mapping between a trigger and the creation of a Node.
This way, in my job I can decide I need 100 workers and request them via HTTP. Does this even make sense? Are there any improvements to this approach?
You don't need to create nodes before your job runs; you can use the cluster autoscaler to add or remove GPU nodes in the node pool according to your requests/workloads.
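For illustration, a minimal sketch of how the node pool above could be created with autoscaling enabled (the --min-nodes/--max-nodes limits are placeholder values, not taken from the question):
gcloud container node-pools create gpu-pool --cluster k8s-gpu --zone us-central1-a --machine-type=n1-standard-1 --accelerator type=nvidia-tesla-t4,count=1 --enable-autoscaling --min-nodes 0 --max-nodes 100
With a minimum of 0 nodes, the pool only scales up when there are pending Pods requesting a GPU and scales back down once they finish, so no external node-creation trigger is needed.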

K8s Service to all replicas

I have a Kubernetes cluster on which there are 2 different types of apps. I will call them A and B.
Each of these apps is replicated, so there are 2 As and 2 Bs.
Each A app needs to communicate with all Bs. I tried to create a Kubernetes Service, but with it one A communicates with one B and the other A communicates with the other B.
Currently, using a Kubernetes Service for B: [diagram: each A reaches only a single B]
What I would like to have: [diagram: each A reaches all Bs]
Is there a way in Kubernetes to allow each A to communicate with all Bs without using IP addresses explicitly or splitting the Kubernetes Service for each B app?
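For reference, a minimal sketch of the kind of Service described above (the name b-service, the label app: b, and the ports are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: b-service
spec:
  selector:
    app: b
  ports:
  - port: 80
    targetPort: 8080
A ClusterIP Service like this load-balances each connection to a single matching B pod, which is exactly the one-A-to-one-B behaviour described in the question.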

How to deregister a kubernetes node from a kubernetes cluster

I have a node that was mistakenly registered in cluster B while it is actually serving cluster A.
Here 'registered on a cluster B' means I can see the node from kubectl get node from cluster B.
I want to deregister this node from cluster B, but keep the node intact.
I know the regular process to delete a node is:
kubectl drain node xxx
kubectl delete node xxx
# on node
kubeadm reset
But I do not want the pods on the node that belong to cluster A to be deleted or transferred. And I want to make sure the node does not register itself with cluster B again afterwards.
To be clear, let's say, cluster A has Pod A on the node, cluster B has Pod B on the node as well, I want to delete node from cluster B, but keep Pod A intact. (By the way, can I see Pod A from cluster B?)
Thank you in advance!
To deregister the node without removing any pods, run the command below:
kubectl delete node nodename
After this is done, the node will no longer appear in kubectl get nodes.
To prevent the node from registering itself again, stop the kubelet process on that node by logging into it and running the command below:
systemctl stop kubelet
As this case has already been clarified, I decided to publish a Community Wiki answer based on the following comment:
#mario nvm, I thought different clusters in one node affect each
other, actually they do not, they just share container runtime which
is more like 'read-only', and they have different kubelets of
themselves listening on different port. – Li Ziyan Aug 17 at 5:29
so that it is also clear to other users what the actual issue was here and how it was solved or simply clarified.
So if you design your infrastructure in such a way that one physical (or virtual) machine serves as a Node for more than one Kubernetes cluster (which I believe is not a very common case), the infrastructure looks as follows:
Components that are shared:
physical (or virtual) node
common container runtime environment (e.g. docker)
Components that are separate:
two separate kubelets. Although they run on the same physical/virtual node, they are configured to listen on different ports and are registered with two master Nodes (or more specifically, with two different kube-apiservers that are part of two different Kubernetes control planes)
two logically separate, independent Kubernetes Nodes which, although configured on the same physical host, belong to two completely different Kubernetes clusters that don't interfere with each other in any way
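For illustration only, a rough sketch of what the two kubelet invocations on such a shared host might look like (all paths and the alternate port are assumptions, not details taken from the question):
# kubelet registered with cluster A, using the default port 10250 from its config file
kubelet --kubeconfig=/etc/kubernetes/cluster-a/kubelet.conf --config=/var/lib/kubelet/cluster-a/config.yaml
# kubelet registered with cluster B, whose config file sets a different port (e.g. port: 10260)
kubelet --kubeconfig=/etc/kubernetes/cluster-b/kubelet.conf --config=/var/lib/kubelet/cluster-b/config.yaml
Each kubelet authenticates against its own kube-apiserver via its own kubeconfig, which is why the node shows up in both clusters while the workloads themselves stay independent.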
I hope it helps to clarify possible confusion about this question and maybe help someone in case they have similar doubts.

Is there any way to run a bash script from one pod or init container on another pod?

I have two K8s services, A and B, running. When service B's pods come up, I have to trigger some bash script that executes on service A's pods. How can we achieve that?
Actual case: when service B's pods scale, the init container of the service B pods will perform some action on the service A pods.
You can't directly run scripts or other code in other pods. You need to make network calls to cause things to happen. The use case you describe sounds a little unusual, and a better way to do it might be to use the Kubernetes API in service A to watch for B pods, or to query the Service object for B to find out what pods are present when you need to know that.
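As a minimal sketch of the second suggestion (querying the Service object for B), and assuming a Service named service-b in the current namespace, a script could ask the API server which pods currently back that Service by reading its Endpoints:
kubectl get endpoints service-b -o jsonpath='{.subsets[*].addresses[*].ip}'
The pod running this needs a ServiceAccount with RBAC permission to read Endpoints, and whatever action is then taken on the other pods still has to happen over the network (e.g. an HTTP call), since Kubernetes will not execute a script inside another pod for you.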

How to deploy deployments across multiple nodes in Kubernetes?

I have a bare-metal Kubernetes cluster with 1 master node and 4 worker nodes.
I want my Deployment objects to be spread across all 4 worker nodes, but I can't get that to work.
I tried nodeSelector, but it looks like it only works with the last key:value label pair.
Please help me.
If you want to ensure that all nodes have that pod on them, you can use a DaemonSet.
You can also use affinity/anti-affinity selectors.
Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled based on labels on pods that are already running on the node rather than based on labels on nodes. The rules are of the form “this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y”
If you don't want two instances to be located on the same host, check the following link:
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#never-co-located-in-the-same-node
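For illustration, a rough sketch of the anti-affinity approach from that page, added to a Deployment's pod template (the label app: my-app and the image are assumptions):
spec:
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: my-app
        image: my-app:latest
With 4 replicas and this required rule, the scheduler will refuse to place two pods of the Deployment on the same node; switching to preferredDuringSchedulingIgnoredDuringExecution makes the spread best-effort instead of mandatory.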