Accessing a service by name - rest

I am a beginner with Swarm and I am having trouble accessing a service from the host by the service name.
My steps:
1) Creating 1 manager and 2 workers
$ docker-machine create --driver virtualbox manager1
$ docker-machine create --driver virtualbox worker1
$ docker-machine create --driver virtualbox worker2
2) Initializing the manager:
$ docker-machine ssh manager1 "docker swarm init --advertise-addr 192.168.99.100"
3) Initializing the workers (run inside each worker VM):
$ docker swarm join --token SWMTKN-1-2xrmha8wyxo471h85sttujbt28f95rm32d40ql3lr3kf3mf27q-4kjyqz4a5lz5ks390k35oc969 192.168.99.100:2377
4) Creating env:
$ docker-machine env manager1
$ eval $(docker-machine env manager1)
5) Creating overlay:
$ docker network create --driver overlay --subnet 10.10.10.0/24 my-overlay-network
6) Creating service:
$ docker service create -p 5000:5000 --replicas 3 --network my-overlay-network --name qwe vaomaohao/app_qwe
After these steps the service was successfully deployed, but I can access it only by IP address, not by service name.
Can you please explain why?
Thank you in advance!

There is a solution, but you need to implement it yourself. You can use Traefik or Docker Flow Proxy, together with the hosts file on Windows or Linux.
I recommend Traefik; it is easy to use. The Docker Flow Proxy project is not in good shape at the moment.
Hosts File example:
Linux: /etc/hosts
Windows: c:\Windows\System32\Drivers\etc\hosts
172.16.1.186 yourdomain.swarm
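Note that Swarm's built-in DNS resolves service names only for containers attached to the same overlay network; the host itself is not on that network, which is why the name does not resolve there. A quick way to verify this (a sketch, reusing the service and network names from the question; assumes the Swarm above is running and the image includes nslookup):

```shell
# Swarm overlay networks are not attachable by standalone containers unless
# created with --attachable, so check name resolution from inside a running
# task of the service itself:
TASK=$(docker ps --filter name=qwe -q | head -n1)
docker exec "$TASK" nslookup qwe

# From the host, reach the service through the published ingress port instead:
curl http://192.168.99.100:5000/
```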

Related

Unable to connect to internet/google.com from a pod; Docker and k8s are able to pull images

I am trying to learn Kubernetes.
I created a single-node Kubernetes cluster on Oracle Cloud using the steps here.
cat /etc/resolv.conf
>> nameserver 169.254.169.254
kubectl run busybox --rm -it --image=busybox --restart=Never -- sh
cat /etc/resolv.conf
>> nameserver 10.33.0.10
nslookup google.com
>>Server: 10.33.0.10
Address: 10.33.0.10:53
;; connection timed out; no servers could be reached
ping 10.33.0.10
>>PING 10.33.0.10 (10.33.0.10): 56 data bytes
kubectl get svc -n kube-system -o wide
>> CLUSTER-IP - 10.33.0.10
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
>>[ERROR] plugin/errors: 2 google.com. A: read udp 10.32.0.9:57385->169.254.169.254:53: i/o timeout
I am not able to identify whether this is a CoreDNS error or a pod networking issue. Any direction would really help.
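The error line in the CoreDNS logs shows it forwarding external queries to 169.254.169.254 and timing out, so inspecting CoreDNS's forwarding configuration is a reasonable first step. A sketch (assumes the default CoreDNS deployment in kube-system):

```shell
# Show the Corefile; the forward plugin line tells you which upstream
# resolver CoreDNS uses for external names such as google.com
kubectl -n kube-system get configmap coredns -o yaml

# Query CoreDNS directly from a throwaway pod, using the ClusterIP seen above
kubectl run dnstest --rm -it --image=busybox --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local 10.33.0.10
```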
Kubernetes deprecated Docker as a container runtime after v1.20.
The Kubernetes developers decided to deprecate Docker as an underlying runtime in favor of runtimes that use the Container Runtime Interface (CRI) created for Kubernetes.
To support Docker users, Mirantis and Docker came to the rescue by agreeing to partner in maintaining the shim code (cri-dockerd) as a standalone project.
More details here
sudo systemctl enable docker
# -- Installing cri-dockerd
VER=$(curl -s https://api.github.com/repos/Mirantis/cri-dockerd/releases/latest|grep tag_name | cut -d '"' -f 4)
echo $VER
wget https://github.com/Mirantis/cri-dockerd/releases/download/${VER}/cri-dockerd-${VER}-linux-arm64.tar.gz
tar xvf cri-dockerd-${VER}-linux-arm64.tar.gz
install -o root -g root -m 0755 cri-dockerd /usr/bin/cri-dockerd
# -- Verification
cri-dockerd --version
# -- Configure systemd units for cri-dockerd
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
sudo cp cri-docker.socket cri-docker.service /etc/systemd/system/
sudo cp cri-docker.socket cri-docker.service /usr/lib/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable cri-docker.service
sudo systemctl enable --now cri-docker.socket
# -- Using cri-dockerd on new Kubernetes cluster
systemctl status docker | grep Active
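The VER line above scrapes the release tag out of GitHub's JSON response with grep and cut; here is a quick offline check of that parsing step (the sample line mimics what the API returns, no live call is made):

```shell
# Offline check of the tag-parsing pipeline used for VER above
sample='  "tag_name": "v0.3.14",'
VER=$(echo "$sample" | grep tag_name | cut -d '"' -f 4)
echo "$VER"   # prints v0.3.14
```

Once the service is running, kubeadm is typically pointed at the shim with --cri-socket unix:///run/cri-dockerd.sock when initializing or joining a cluster.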
I ran into a similar issue with almost the same scenario as described above. The accepted solution https://stackoverflow.com/a/72104194/1119570 is wrong. This is a pure networking issue that is not related to the EKS upgrade in any way.
The root cause of our issue was that the worker node AWS EKS Linux 1.21 AMI had been hardened by our security department, which turned off the following setting in /etc/sysctl.conf:
net.ipv4.ip_forward = 0
After switching this setting to:
net.ipv4.ip_forward = 1
and rebooting the EC2 node, everything started working properly. Hope this helps!
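To check and fix this without a full reboot, something like the following should work (a sketch; the write and persist steps need root, and the drop-in file name is arbitrary):

```shell
# Read the current value; 1 means IP forwarding is enabled
sysctl -n net.ipv4.ip_forward

# Enable at runtime (requires root)
sudo sysctl -w net.ipv4.ip_forward=1

# Persist across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl --system
```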

How to get service IP in a dynamic fashion while starting using docker-compose

How can I get a service IP dynamically when starting with docker-compose? I need the IP of service-a, which is used as a configuration parameter, while starting up docker-compose.yaml:
service-a:
service-b:
  depends_on:
    - service-a
The recommended way is not to use the IP address in docker-compose but to use the service name instead. If you still want an IP for some reason, you can either set a static IP for your container (not recommended), link: https://stackoverflow.com/a/45420160/970422
Or you can get the IP by using docker inspect commands. More details on this approach are mentioned here: https://www.freecodecamp.org/news/how-to-get-a-docker-container-ip-address-explained-with-examples/
Examples:
docker exec container-name cat /etc/hosts
docker exec -it container-name /bin/bash
ip -4 -o address
docker network inspect -f \
'{{json .Containers}}' 9f6bc3c15568 | \
jq '.[] | .Name + ":" + .IPv4Address'
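To illustrate the recommended approach, service-b can simply reference service-a by name; Compose's internal DNS resolves it at runtime. A sketch (the images and the environment variable are hypothetical):

```yaml
services:
  service-a:
    image: nginx        # hypothetical image
  service-b:
    image: alpine       # hypothetical image
    depends_on:
      - service-a
    environment:
      # Pass the hostname, not an IP; Docker's DNS resolves "service-a"
      SERVICE_A_HOST: service-a
```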

Kubernetes Connectivity

I have a pod running a Java application. This Java application talks to a MySQL server which is on-prem. The MySQL server accepts connections only from 192.* IPs.
The pod runs on EKS worker nodes with 192.* IPs, and I am able to telnet to MySQL from the worker nodes. When the pod starts, the Java application tries to connect to MySQL with the pod IP (which is some random 172.* IP) and fails with a MySQL connection error.
How can I solve this?
Try to execute a shell inside the pod and connect to the MySQL server from there.
kubectl exec -it -n <namespace-name> <pod-name> -c <container-name> -- COMMAND [args...]
E.g.:
kubectl exec -it -n default mypod -c container1 -- bash
Then check the MySQL connectivity:
#/> mysql --host mysql.dns.name.or.ip --port 3306 --user root --password --verbose
Or start another pod with usual tools and check MySQL port connectivity:
$ kubectl run busybox --rm -it --image busybox --restart=Never
#/> ping mysql.dns.name.or.ip
#/> telnet mysql.dns.name.or.ip 3306
You should see some connection-related information that helps you to resolve your issue.
My guess is that you just need to add a route to your cluster's pod network on your MySQL host or on its default network router.
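As a sketch of that last suggestion (all addresses are hypothetical; run on the MySQL host as root), adding a return route for the pod CIDR via a worker node's 192.* address might look like:

```shell
# Route the pod network (e.g. 172.16.0.0/16) back through a worker node
# whose 192.* address the MySQL host can already reach
sudo ip route add 172.16.0.0/16 via 192.168.1.50
```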

Specific Docker Command in Kubernetes

I am trying to start a Bro container in a Pod. In docker I would normally run something like this:
docker run -d --net=host --name bro
Is there something in the container spec that would replicate that functionality?
You can use the hostNetwork option of the API to run a pod on the host's network.
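A minimal pod spec using that option might look like this (a sketch; the image name is a placeholder, since the question's docker command omits it):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bro
spec:
  hostNetwork: true        # equivalent of docker run --net=host
  containers:
  - name: bro
    image: your-bro-image  # placeholder image name
```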

Expose kubernetes master service to host

I'm trying my hand at Kubernetes and have come across a very basic question. I have set up single-node Kubernetes on Ubuntu running in VirtualBox.
This is exactly what I have. My Vagrantfile is something like this (so on my Mac I can have VirtualBox running Ubuntu):
Vagrant.configure("2") do |config|
config.vm.synced_folder ".", "/vagrant"
config.vm.define "app" do |d|
d.vm.box = "ubuntu/trusty64"
d.vm.hostname = "kubernetes"
# Create a private network, which allows host-only access to the machine
# using a specific IP.
d.vm.network "private_network", ip: "192.168.20.10"
d.vm.provision "docker"
end
end
And to start the master I have an init.sh something like this:
docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock \
gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube kubelet \
--api_servers=http://localhost:8080 \
--v=2 \
--address=0.0.0.0 \
--enable_server \
--hostname_override=127.0.0.1 \
--config=/etc/kubernetes/manifests
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
wget http://storage.googleapis.com/kubernetes-release/release/v0.19.0/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
This brings up a simple Kubernetes setup running in the VM. Now I can see the Kubernetes services running if I list services using kubectl:
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.2 443/TCP
kubernetes-ro component=apiserver,provider=kubernetes <none> 10.0.0.1 80/TCP
I can curl 10.0.0.1 from within the SSH session and see the result. But my question is: how can I expose this Kubernetes master service to the host machine? And when I deploy this on a server, how can I make the master service available on a public IP?
To expose Kubernetes to the host machine, make sure you expose the container ports to Ubuntu using the -p option of docker run. Then you should be able to access Kubernetes as if it were running on the Ubuntu box. If you want it to be as if it were running on the host, port-forward the Ubuntu ports to your host system. For deployment to servers there are many ways to do this; GCE has its own container engine backed by Kubernetes, in alpha/beta right now. Otherwise, if you want to deploy the exact same system, you'll most likely just need the right Vagrant provider and Ubuntu box, and everything should be the same as your local setup.
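Note that since the init.sh containers use --net=host, the apiserver already listens on the VM's interfaces; the remaining step is forwarding the VM port to the Mac, which Vagrant can do. A sketch of the line to add inside the "app" define of the Vagrantfile above (port numbers assume the apiserver's 8080):

```ruby
# Forward the VM's apiserver port to the host machine
d.vm.network "forwarded_port", guest: 8080, host: 8080
```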