Some folders on the master node are not accessible from other nodes on a CentOS cluster - centos

I installed OpenCoarrays Fortran on the master node of a cluster. However, I cannot access this folder from the other nodes to run a simulation using coarrays. How can I solve this issue?
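A common way to make the same install path visible on every node is to export it from the master over NFS. This is only a sketch of that approach: the install directory `/opt/opencoarrays` and the subnet `192.168.1.0/24` are assumptions, and simply installing OpenCoarrays on each node is an alternative.

```shell
# Hypothetical layout: OpenCoarrays installed under /opt/opencoarrays on the
# master, compute nodes on 192.168.1.0/24. Adjust both to your environment.
OPENCOARRAYS_DIR=/opt/opencoarrays
CLUSTER_SUBNET=192.168.1.0/24

# On the master (as root) -- shown commented because it needs real nodes:
#   yum install -y nfs-utils
#   echo "$OPENCOARRAYS_DIR $CLUSTER_SUBNET(ro,sync,no_subtree_check)" >> /etc/exports
#   exportfs -ra
#   systemctl enable --now nfs-server
#
# On every compute node (as root), mount it at the same path:
#   yum install -y nfs-utils
#   mkdir -p "$OPENCOARRAYS_DIR"
#   mount -t nfs master:"$OPENCOARRAYS_DIR" "$OPENCOARRAYS_DIR"

# The /etc/exports entry the master would need:
echo "$OPENCOARRAYS_DIR $CLUSTER_SUBNET(ro,sync,no_subtree_check)"
```

Mounting at the same path on every node matters because the coarray runtime launches the same binary (with the same paths) on all nodes.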

Related

How to specify "master" and "worker" nodes when using one machine to run Kubernetes?

I am using an Ubuntu 22.04 machine to run and test Kubernetes locally. I need functionality like Docker Desktop's: it seems Docker Desktop installs both the master and worker nodes/machines on the same machine. But when I try to install Kubernetes following instructions like this, at some point they say to run the following command on the master node:
sudo hostnamectl set-hostname kubernetes-master
Or to run the following commands on the worker node machine:
sudo hostnamectl set-hostname kubernetes-worker
I don't know how to designate master/worker nodes if I have only my local Ubuntu machine.
Should I run the join command after the kubeadm init command? I can't tell whether a command I run in my terminal will be treated as a command for the master or for the worker machine.
I am a little confused about this master/worker node (or client/server machine) distinction while I am using just one machine for both.
Prerequisites for installing Kubernetes in a cluster:
Ubuntu instance with 4 GB RAM - Master Node - (with ports open to all traffic)
Ubuntu instance with at least 2 GB RAM - Worker Node - (with ports open to all traffic)
It means you need to create these instances from any cloud provider, such as Google Cloud (GCP), Amazon (AWS), Atlantic.Net, or CloudSigma, as per your convenience.
For creating an instance in GCP, follow this guide. If you don't have an account, create a new one; new customers also get $300 in free credits to run, test, and deploy workloads.
After creating the instances you will get their IPs; using them, you can SSH into each instance from a terminal on your local machine with the command: ssh root@<ip address>
From there you can follow any guide for installing Kubernetes with master and worker nodes.
example:
sudo hostnamectl set-hostname <host name>
The above should be executed over SSH on the master node; similarly, you need to execute it on the worker node.
The hostname has nothing to do with node roles.
If you run kubeadm init, the node becomes a master node (currently called a control plane).
This node can also be used as a worker node (currently called just a node), but by default, Pods cannot be scheduled on the control plane node.
You can turn off this restriction by removing its taints with the following command:
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
and then you can use this node as both control-plane and node.
But I guess a small Kubernetes distribution such as k0s, k3s, or microk8s would be a better option for your use case than kubeadm.
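Putting the answer's pieces together, a single-machine flow might look like the sketch below. The commands are shown commented because they need a real host with kubeadm installed; the pod CIDR is flannel's default and is an assumption, not something from the thread.

```shell
# Assumed pod network range (flannel's default); change if your CNI differs.
POD_CIDR=10.244.0.0/16

# 1. Initialize the control plane. With one machine there is no separate
#    worker, so no `kubeadm join` is needed:
#   sudo kubeadm init --pod-network-cidr=$POD_CIDR

# 2. Point kubectl at the new cluster:
#   mkdir -p $HOME/.kube
#   sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

# 3. Remove the taints so pods can schedule on the control-plane node:
#   kubectl taint nodes --all node-role.kubernetes.io/control-plane-
#   kubectl taint nodes --all node-role.kubernetes.io/master-

# 4. Verify the single node is Ready and acts as both roles:
#   kubectl get nodes
```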

How to access Kubernetes cluster from my local windows machine

I am new to Kubernetes. I have created a K8s cluster on my VMs, which are VMware VMs, and I have created Helm charts to deploy my application. I want to install my application without logging into any of the cluster machines, i.e. from my local Windows 10 machine. How do I configure my local machine to communicate with the cluster? I have installed kubectl on my machine. Thanks in advance.
You can copy the kubeconfig file to your local machine. On the master VM the file is saved at $HOME/.kube/config. Copy it into your local $HOME/.kube/config, then run kubectl get nodes to check the connection.
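As a sketch of that copy step: the master address and SSH user below are placeholders, and on Windows 10 you would run this from PowerShell or WSL (both ship `scp`).

```shell
# Hypothetical master VM address and SSH user -- replace with your own.
MASTER_IP=192.168.10.5
SSH_USER=ubuntu

# Commented because it needs the actual VM:
#   mkdir -p "$HOME/.kube"
#   scp $SSH_USER@$MASTER_IP:~/.kube/config "$HOME/.kube/config"
#   kubectl get nodes    # should list the cluster nodes if the copy worked

# The copy command this sketch would run:
echo "scp $SSH_USER@$MASTER_IP:~/.kube/config $HOME/.kube/config"
```

If you already have a local kubeconfig for another cluster, merge the files or point the `KUBECONFIG` environment variable at the new one instead of overwriting.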

Can we configure same node as master and slave in Kubernetes

I have two Linux machines on which I am learning Kubernetes. Since resources are limited, I want to configure the same node as both master and slave, so the configuration looks like
192.168.48.48 (master and slave)
191.168.48.49 (slave)
How do I perform this setup? Any help will be appreciated.
Yes. You can use Minikube (see the Minikube install docs) for a single-node cluster, or use kubeadm to install Kubernetes with one node as master and the other as a worker node. Here is the doc; make sure you satisfy the prerequisites for the nodes and do the small housekeeping shown in the official document. Since you have two Linux machines with two different IPs, you can then install and create a two-machine cluster for testing purposes.
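A sketch of the two-machine layout in the question: the .48 box runs `kubeadm init` (control plane, made schedulable so it also acts as a "slave"), and the second box joins as a worker. The token and hash are placeholders printed by `kubeadm init`, and the commands are commented because they need the real machines.

```shell
# Address of the machine that will be both master and slave (from the question).
MASTER_IP=192.168.48.48

# On 192.168.48.48:
#   sudo kubeadm init --apiserver-advertise-address=$MASTER_IP
#   # let pods schedule on the control-plane node too:
#   kubectl taint nodes --all node-role.kubernetes.io/control-plane-
#   kubectl taint nodes --all node-role.kubernetes.io/master-
#
# On the second machine, paste the join command that `kubeadm init` printed:
#   sudo kubeadm join $MASTER_IP:6443 --token <token> \
#       --discovery-token-ca-cert-hash sha256:<hash>
```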
Hope this helps.

Kubernetes double IP in different nodes

After using k8s on GKE for a couple of months, I've decided to install my own cluster. Now I have two Ubuntu VMs: one of them is the kube-master and the other is a node. The cluster runs as I expect and I can see the nodes (kube-master is also a node) when I run kubectl get nodes. I've launched one pod on each VM, but I'm experiencing an issue: both pods have the same IP. Can anybody help me resolve this issue? I'm using flannel as the network plugin at the moment.
Thanks in advance
Update
I've found the solution, thanks to the Kubernetes group on Slack. I hadn't installed the CNI plugin, so the kubelet didn't know the subnetwork status. I installed the plugin using this guide and made a configuration file following it. After restarting the kubelet service, I finally saw the cluster working as I expected.
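For reference, the kind of CNI configuration file involved looks like this. It is a minimal flannel conf, normally placed in /etc/cni/net.d/; the sketch writes it to a temp file, and the field values are flannel's usual defaults rather than anything quoted from the thread.

```shell
# Write a minimal flannel CNI config (real location: /etc/cni/net.d/10-flannel.conf).
CNI_CONF=$(mktemp)
cat > "$CNI_CONF" <<'EOF'
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
EOF

# After placing the file, restart the kubelet so it picks up the plugin:
#   sudo systemctl restart kubelet
```

With a config like this in place, the kubelet delegates pod networking to flannel, which hands each node its own subnet, so pods on different nodes no longer collide on the same IP.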

Where does minikube configure master node components?

If I have installed K8s using minikube, where will the master node components (e.g. the API server, replication controller, etcd) be installed?
Is it on the host, or in the VM?
I understand the worker node is the VM configured by minikube
Everything is installed in the virtual machine. Based on the localkube project, minikube creates an all-in-one single-node cluster.
More information here: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md
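You can confirm this yourself: the control-plane components show up inside the minikube VM, not on the host. A sketch, with the commands commented because they need a running minikube:

```shell
# Namespace where Kubernetes runs its own control-plane components.
NS=kube-system

# Commented because they need a running minikube:
#   minikube ssh                             # shell into the VM
#   docker ps | grep -E 'apiserver|etcd'     # master components run in here
#   exit
#   kubectl get pods -n $NS                  # same components, seen from the host
```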