I'm trying my hand at Kubernetes and have come across a very basic question. I have set up a single-node Kubernetes cluster on Ubuntu running in VirtualBox.
This is exactly what I have. My Vagrantfile looks something like this (so that on my Mac I can have VirtualBox running Ubuntu):
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant"
  config.vm.define "app" do |d|
    d.vm.box = "ubuntu/trusty64"
    d.vm.hostname = "kubernetes"
    # Create a private network, which allows host-only access to the machine
    # using a specific IP.
    d.vm.network "private_network", ip: "192.168.20.10"
    d.vm.provision "docker"
  end
end
And to start the master I have an init.sh something like this:
docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock \
gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube kubelet \
--api_servers=http://localhost:8080 \
--v=2 \
--address=0.0.0.0 \
--enable_server \
--hostname_override=127.0.0.1 \
--config=/etc/kubernetes/manifests
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
wget http://storage.googleapis.com/kubernetes-release/release/v0.19.0/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
This brings up a simple Kubernetes setup running in the VM. Now I can see the Kubernetes services running when I list services with kubectl:
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.2 443/TCP
kubernetes-ro component=apiserver,provider=kubernetes <none> 10.0.0.1 80/TCP
I can curl 10.0.0.1 from an SSH session inside the VM and see the result. But my question is: how can I expose this Kubernetes master service to the host machine, and when I deploy this on a server, how can I make the master service available on a public IP?
To expose Kubernetes to the host machine, make sure you are exposing the container ports to Ubuntu, using the -p option in docker run. Then you should be able to access Kubernetes as if it were running on the Ubuntu box. If you want it to be as if it were running on the host, port-forward the Ubuntu ports to your host system. For deployment to servers there are many ways to do this; GCE has its own container engine backed by Kubernetes, in alpha/beta right now. Otherwise, if you want to deploy with the exact same system, you will most likely just need the right Vagrant provider and Ubuntu box, and everything else should be the same as your local setup.
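Since the containers above run with --net=host, they already bind on the guest's own interfaces, so the private_network IP from the Vagrantfile is often enough. A minimal sketch, assuming the apiserver started from the static manifests listens on port 8080, as the kube-proxy's --master flag suggests:
# Hedged sketch: reach the API from the Mac host through the Vagrant
# private_network IP; 8080 is assumed from the --master flag used above.
curl http://192.168.20.10:8080/version
# With a kubectl binary on the Mac, point it at the same address:
kubectl -s http://192.168.20.10:8080 get pods
For a public server the same idea applies with the server's public IP, ideally restricted by a firewall, since the 8080 port is unauthenticated.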
Problem
To build a multi-node k8s dev environment, I was trying to use an NFS persistent volume in minikube with multiple nodes, and I cannot run pods properly. It seems there's something wrong with the NFS setup. So I ran minikube ssh and tried to mount the NFS volume manually with the mount command first, but it doesn't work, which brings me here.
When I run
sudo mount -t nfs 192.168.xx.xx(=macpc's IP):/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE
in the minikube master node, the output is
mount.nfs: requested NFS version or transport protocol is not supported
Some relevant info:
NFS client: minikube nodes
NFS server: my Mac PC
minikube driver: docker
The cluster comprises 3 nodes (1 master and 2 worker nodes).
Currently there are no k8s resources (such as deployments, PVs and PVCs) in the cluster.
The minikube nodes' OS is Ubuntu, so I guess "nfs-utils" is not relevant and not installed. "nfs-common" is preinstalled in minikube.
Please see the following sections for more detail.
Goal
The goal is for the mount command to succeed in the minikube nodes and for the NFS share on my Mac to mount properly.
What I've done so far:
On the NFS server side,
I created an /etc/exports file on the Mac. The content is like
/PATH/TO/EXPORTED/DIR/ON/MACPC -mapall=user:group 192.168.xx.xx(=the output of "minikube ip")
and ran nfsd update, and then the showmount -e command outputs
Exports list on localhost:
/PATH/TO/EXPORTED/DIR/ON/MACPC 192.168.xx.xx(=the output of "minikube ip")
rpcinfo -p shows that rpcbind (=portmapper on Linux), status, nlockmgr, rquotad, nfs and mountd are all up on TCP and UDP
ping 192.168.xx.xx(=the output of "minikube ip") says
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
and so on.
It seems I can't reach minikube from the host.
On the NFS client side,
I started the nfs-common and rpcbind services with systemctl in all minikube nodes. By running sudo systemctl status rpcbind and sudo systemctl status nfs-common, I confirmed rpcbind and nfs-common are running.
minikube ssh output
Last login: Mon Mar 28 09:18:38 2022 from 192.168.xx.xx(=I guess my macpc's IP seen from minikube cluster)
so I ran
sudo mount -t nfs 192.168.xx.xx(=macpc's IP):/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE
in the minikube master node.
The output is
mount.nfs: requested NFS version or transport protocol is not supported
rpcinfo -p shows only portmapper and status running. I am not sure whether this is OK.
ping 192.168.xx.xx(=macpc's IP) works properly.
ping host.minikube.internal works properly.
nc -vz 192.168.xx.xx(=macpc's IP) 2049 outputs connection refused
nc -vz host.minikube.internal 2049 outputs succeeded!
Thanks in advance!
I decided to use another type of volume instead.
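For reference, one hedged alternative (not necessarily the volume type chosen above): minikube can share a host directory into the nodes directly with its built-in mount command, reusing the placeholder paths from the question.
# Hedged sketch: share the Mac directory into the cluster over minikube's
# built-in 9p mount instead of NFS (paths are the placeholders from above);
# the command keeps running in the foreground while the mount is active.
minikube mount /PATH/TO/EXPORTED/DIR/ON/MACPC:/PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE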
Cluster info:
Minikube installation steps on the CentOS VM:
curl -LO https://storage.googleapis.com/minikube/releases/v1.21.0/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start --addons=ingress --vm=true --memory=8192 --driver=none
Pods in this minikube cluster are not able to connect to the internet.
However, my host VM has an internet connection, with no firewall or iptables setup.
Can anybody help me debug this connection refused error?
UPDATE:
I have just noticed that I am able to connect to non-HTTPS URLs, but not to HTTPS URLs.
How did you start Minikube on the VM? Which command did you use?
If you are using minikube with --driver=docker, it might not work.
For starting minikube on a VM you have to change the driver:
minikube start --driver=none
With the docker driver, minikube creates a container, installs Kubernetes inside it, and then spawns the pods further inside that.
Read more at: https://minikube.sigs.k8s.io/docs/drivers/none/
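A hedged sketch of recreating the cluster with the none driver on the CentOS VM (it has to run as root, conntrack is a prerequisite of the none driver, and a container runtime such as Docker is assumed to be installed already):
# Hedged sketch: rebuild the cluster directly on the VM with the none driver.
minikube delete
sudo yum install -y conntrack-tools
sudo minikube start --driver=none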
If you are on Docker you can try:
pkill docker
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
docker -d
"It will force docker to recreate the bridge and reinit all the network rules"
reference : https://github.com/moby/moby/issues/866#issuecomment-19218300
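On a systemd-based host, a hedged and less drastic first step is often enough, since restarting the daemon re-applies Docker's iptables/NAT rules:
# Hedged sketch: restart the daemon and let it re-create its network rules
# before resorting to deleting the bridge by hand.
sudo systemctl restart docker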
Try with
docker run --net=host -it ubuntu
Or else add DNS servers in the config file /etc/default/docker:
DOCKER_OPTS="--dns 208.67.222.222 --dns 208.67.220.220"
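On hosts where /etc/default/docker is ignored (most systemd-based installations), a hedged equivalent is to set the same DNS servers in /etc/docker/daemon.json:
# Hedged sketch: configure the DNS servers via daemon.json and restart Docker.
# Note this overwrites any existing daemon.json.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "dns": ["208.67.222.222", "208.67.220.220"]
}
EOF
sudo systemctl restart docker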
Please check your container port and target port. This is my pod setup:
spec:
  containers:
  - name: hello
    image: "nginx"
    ports:
    - containerPort: 5000
This is my service setup:
ports:
- protocol: TCP
  port: 60000
  targetPort: 5000
If your target port and container port don't match, you will get a curl: (7) connection refused error.
Check out similar Stack Overflow questions for more information.
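A quick way to check that the two values line up, with placeholder names for the Service and the Pod:
# Hedged sketch: compare the Service's targetPort with the containerPort the
# pod declares (<service-name> and <pod-name> are placeholders).
kubectl get svc <service-name> -o jsonpath='{.spec.ports[0].targetPort}'
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].ports[0].containerPort}'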
I am a beginner with Swarm and I have some trouble accessing a service from the host by the service name.
My steps:
1) Creating 1 manager and 2 workers
$ docker-machine create --driver virtualbox manager1
$ docker-machine create --driver virtualbox worker1
$ docker-machine create --driver virtualbox worker2
2) Initializing the manager:
$ docker-machine ssh manager1 "docker swarm init --advertise-addr 192.168.99.100"
3) Initializing the workers:
$ docker swarm join --token SWMTKN-1-2xrmha8wyxo471h85sttujbt28f95rm32d40ql3lr3kf3mf27q-4kjyqz4a5lz5ks390k35oc969 192.168.99.100:2377
4) Creating env:
$ docker-machine env manager1
$ eval $(docker-machine env manager1)
5) Creating overlay:
$ docker network create --driver overlay --subnet 10.10.10.0/24 my-overlay-network
6) Creating service:
$ docker service create -p 5000:5000 --replicas 3 --network my-overlay-network --name qwe vaomaohao/app_qwe
After these steps the service was successfully deployed, but I can access it only by IP address, not by the service name.
Can you explain to me why?
Thank you in advance!
There is no single built-in solution; you need to implement one yourself. You can use Traefik or Docker Flow Proxy, together with the hosts file on Windows or Linux.
I recommend Traefik, as it is easy to use. The Docker Flow Proxy project is not in a good state right now.
Hosts File example:
Linux: /etc/hosts
Windows: c:\Windows\System32\Drivers\etc\hosts
172.16.1.186 yourdomain.swarm
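With that hosts entry pointing yourdomain.swarm at one of the swarm nodes, the service from the question (published on port 5000) should then be reachable by name from the host, for example:
# Hedged sketch: the published port 5000 from the question, reached through
# the hostname mapped in the hosts file above.
curl http://yourdomain.swarm:5000/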
I have installed a Kubernetes cluster using this tutorial.
When I set it up on a VirtualBox VM, my host could connect via NodePort normally. When I tried it on a Compute Engine VM instance, the host can't connect to the Kubernetes cluster via NodePort. Why?
I have attached two pictures.
Thank you for your support.
Kubernetes cluster (bare metal) on Local VM Virtual Box
Kubernetes cluster (bare metal) on Google cloud Platform VM Instances
This took me a while to test, but I finally have a result. It turns out the cause of your issue is Calico and the GCP firewall. To be more specific, you have to add firewall rules before the connectivity will work.
Following this document on installing Calico for GCE:
GCE blocks traffic between hosts by default; run the following command
to allow Calico traffic to flow between containers on different hosts
(where the source-ranges parameter assumes you have created your
project with the default GCE network parameters - modify the address
range if yours is different):
So you need to allow the traffic to flow between containers:
gcloud compute firewall-rules create calico-ipip --allow 4 --network "default" --source-ranges "10.128.0.0/9"
Note that this IP range should be changed. For test purposes you can use 10.0.0.0/8, but this is way too wide a range, so please narrow it down to your needs.
Then proceed with setting up instances for master and nodes.
You can actually skip most of the steps from the tutorial you posted, as connectivity is handled by the cloud provider. Here is a really simple script I use for kubeadm on VMs. You can also perform these steps one by one.
#!/bin/bash
# Kubelet requires swap to be disabled.
swapoff -a
# Let bridged traffic be seen by iptables (slash-style tokens, as used in /etc/ufw/sysctl.conf).
echo net/bridge/bridge-nf-call-ip6tables = 1 >> /etc/ufw/sysctl.conf
echo net/bridge/bridge-nf-call-iptables = 1 >> /etc/ufw/sysctl.conf
echo net/bridge/bridge-nf-call-arptables = 1 >> /etc/ufw/sysctl.conf
apt-get install -y ebtables ethtool
# Install Docker and the prerequisites for the Kubernetes apt repository.
apt-get update
apt-get install -y docker.io
apt-get install -y apt-transport-https
apt-get install -y curl
# Add the Kubernetes apt repository and install kubelet, kubeadm and kubectl.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
# Initialize the control plane with the pod CIDR Calico's manifest expects.
kubeadm init --pod-network-cidr=192.168.0.0/16
# Set up kubectl access for the current user.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Install Calico and allow pods to be scheduled on the master node.
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
kubectl taint nodes --all node-role.kubernetes.io/master-
In my case I used the simple Redis application from the Kubernetes documentation:
root#calico-master:/home/xxx# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29m
redis-master ClusterIP 10.107.41.117 <none> 6379/TCP 26m
root#calico-master:/home/xxx# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
redis-master-57fc67768d-5lx92 1/1 Running 0 27m 192.168.1.4 calico <none>
root#calico-master:/home/xxx# ping 192.168.1.4
PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.
64 bytes from 192.168.1.4: icmp_seq=1 ttl=63 time=1.48 ms
Before the firewall rules and the regular Calico installation I was not able to ping or wget from the service; after that, there is no problem pinging the IP or the hostname, and wget works as well:
root#calico-master:/home/xxx# wget http://10.107.41.117:6379
--2018-10-24 13:24:43--  http://10.107.41.117:6379/
Connecting to 10.107.41.117:6379... connected.
HTTP request sent, awaiting response... 200 No headers, assuming HTTP/0.9
Length: unspecified
Saving to: ‘index.html.2’
The steps above were also tested with type: NodePort and it works as well.
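If the NodePort also has to be reached from outside the VPC (for example from your workstation), a hedged extra step is to open the NodePort range in the GCP firewall as well:
# Hedged sketch: allow the default Kubernetes NodePort range; in a real setup
# narrow --source-ranges down to your own address instead of leaving it open.
gcloud compute firewall-rules create allow-nodeports --allow tcp:30000-32767 --network "default" --source-ranges "0.0.0.0/0"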
Another way is to use Flannel, which I also tested and which worked out of the box for reproducing and testing your issue. Be sure to read more about CNIs so you can choose one that suits your needs.
Hope this solves your problem.
That's because in minikube there's only one node for everything, and the VM is that node. So if you are inside the VM you can connect to the NodePort locally or on localhost.
In the case of GCP, you don't usually run the master(s) and the nodes on the same VMs. So you need to reach one of the nodes (VMs) where your pod is listening to get a reply.
To get the list of nodes on your cluster you can simply run:
kubectl get nodes -o=wide
You should see, among other things, an internal IP for each of your nodes. Then you can try:
curl http://<Internal IP>:<NodePort>
You can also get the IP details using:
kubectl describe nodes
Then try pinging or connecting to your host using the corresponding IP.
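If you are not sure which NodePort was assigned, a hedged way to look it up (with a placeholder service name):
# Hedged sketch: read the assigned nodePort straight from the Service object
# (<service-name> is a placeholder).
kubectl get svc <service-name> -o jsonpath='{.spec.ports[0].nodePort}'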
I have a pod running a Java application. This Java application talks to a MySQL server which is on-prem. The MySQL server accepts connections from 192.* IPs.
The pod runs on EKS worker nodes with IPs in the 192.* range. I am able to telnet to MySQL from the worker nodes. When the pod starts, the Java application tries to connect to MySQL with the pod IP (which is some random 172.* address) and fails with a MySQL connection error.
How can I solve this?
Try to execute a shell inside the pod and connect to the MySQL server from there.
kubectl exec -it -n <namespace-name> <pod-name> -c <container-name> -- COMMAND [args...]
E.g.:
kubectl exec -it -n default mypod -c container1 -- bash
Then check the MySQL connectivity:
#/> mysql --host mysql.dns.name.or.ip --port 3306 --user root --password --verbose
Or start another pod with usual tools and check MySQL port connectivity:
$ kubectl run busybox --rm -it --image busybox --restart=Never
#/> ping mysql.dns.name.or.ip
#/> telnet mysql.dns.name.or.ip 3306
You should see some connection-related information that helps you to resolve your issue.
My guess is that you just need to add a route to your cluster's pod network on your MySQL host or on its default network router.
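A hedged sketch of such a route, with placeholder addresses in the same style as above (the actual pod CIDR and node IP depend on your CNI and VPC layout):
# Hedged sketch: on the MySQL host, route the pod network back via a worker
# node that can reach the pods; 172.xx.0.0/16 stands in for your pod CIDR and
# 192.168.xx.xx for a reachable worker node IP.
sudo ip route add 172.xx.0.0/16 via 192.168.xx.xx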