I have a StatefulSet that matches the one described in the
StatefulSet Kubernetes tutorial, which creates a MySQL master/slaves structure. I have the following Kubernetes objects:
NAME READY STATUS RESTARTS AGE
pod/mysql-0 2/2 Running 7 23h
pod/mysql-1 2/2 Running 6 23h
pod/mysql-2 2/2 Running 6 23h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 21d
service/mysql ClusterIP None <none> 3306/TCP 23h
service/mysql-read ClusterIP 10.152.183.89 <none> 3306/TCP 23h
NAME READY AGE
statefulset.apps/mysql 3/3 23h
EDIT: The mysql and mysql-read services connect to all the mysql pods.
The node is on my computer; I'm using microk8s.
Now I have a little program running on my computer (not in the cluster) that I would like to connect to the master (mysql-0) inside the StatefulSet. I'd need something like a Service that only connects to mysql-0. Any suggestions on how to do it?
EDIT: The idea is to find a solution that allows me to deploy the cluster with just the .yaml files. A solution that involves more commands than kubectl apply isn't interesting to me.
The program is the following one:
import mysql.connector

mydb = mysql.connector.connect(
    host="localhost",  # ?? - this is the part I don't know how to fill in
    user="root",
    passwd="",
    database="testtable"
)

mycursor = mydb.cursor()

sql = "INSERT INTO testtable (name) VALUES (%s)"
val = ("holaquetal",)  # note the trailing comma: execute() expects a tuple

mycursor.execute(sql, val)
mydb.commit()
I thought I could add a different label to the mysql-0 pod and add a NodePort Service that looks for that label, but I don't want to do it from the command line. Is it possible to add a label to mysql-0 inside the StatefulSet yaml?
Another idea could be to do a DNS query against the DNS server that microk8s provides, looking for mysql-0.mysql, but I don't know how to point the program at that DNS server so that I can use host=mysql-0.mysql or the whole CNAME.
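For what it's worth, the StatefulSet controller already adds a per-pod label, statefulset.kubernetes.io/pod-name, so a Service that targets only mysql-0 can be declared in plain yaml. A sketch; the Service name and nodePort are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: mysql-master
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: mysql-0
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30306  # assumption: any free port in the default 30000-32767 range
With this, the program could use host="localhost" and port=30306.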
If the node is on your computer, you can use port-forward to forward the MySQL port from the pod to your local machine. Try this:
kubectl port-forward mysql-0 3306:3306
or forward directly through the read service:
kubectl port-forward svc/mysql-read 3306:3306
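Once the port-forward is running, the connector in the program above can point at the forwarded port; a sketch:
mydb = mysql.connector.connect(
    host="127.0.0.1",  # forwarded to mysql-0:3306 by kubectl port-forward
    user="root",
    passwd="",
    database="testtable"
)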
Related
I run a local Kubernetes cluster (Minikube) and I'm trying to connect pgAdmin to PostgreSQL, both running in Kubernetes.
What would be the connection string? Should I connect by service IP address or by service name?
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dbpostgresql NodePort 10.103.252.31 <none> 5432:30201/TCP 19m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d21h
pgadmin-service NodePort 10.109.58.168 <none> 80:30200/TCP 40h
kubectl get ingress:
NAME CLASS HOSTS ADDRESS PORTS AGE
pgadmin-ingress <none> * 192.168.49.2 80 40h
kubectl get pod
NAME READY STATUS RESTARTS AGE
pgadmin-5569ddf4dd-49r8f 1/1 Running 1 40h
postgres-78f4b5db97-2ngck 1/1 Running 0 23m
I have tried with 10.103.252.31:30201 but without success.
Inside the cluster, services can refer to each other by DNS based on Service object names. So in this case you would use dbpostgresql or dbpostgresql.default.svc.cluster.local as the hostname.
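For example, the pgAdmin connection settings would then be something like this (the username and database here are assumptions; adjust to your deployment):
host: dbpostgresql
port: 5432
username: postgres
database: postgres
or, as a single connection string, postgresql://postgres:<password>@dbpostgresql.default.svc.cluster.local:5432/postgres.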
Remember Minikube is running inside its own container; the NodePort addresses you're getting back are only open inside of Minikube. So to get Minikube's resolution of the port and IP, run: minikube service <your-service-name> --url
This will return something like http://127.0.0.1:50946 which you can use to create an external DB connection.
Another option would be to use kubectl to forward a local port to the service, e.g. kubectl port-forward service/django-service 8080:80
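For this setup that would be, for example, kubectl port-forward service/dbpostgresql 5432:5432, after which pgAdmin running on the host can connect to 127.0.0.1:5432.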
I'm currently learning Kubernetes and started to deploy the ELK stack on a Minikube cluster (running on a Linux EC2 instance). Though I was able to run all the objects successfully, I'm not able to access any of the tools from my Windows browser. I'm looking for some input on how to access all the exposed ports below from my Windows browser.
Cluster details:
NAME READY STATUS RESTARTS AGE
pod/elasticsearch-deployment-5c7d5cb5fb-g55ft 1/1 Running 0 3m43s
pod/kibana-deployment-76d8744864-ddx4h 1/1 Running 0 3m43s
pod/logstash-deployment-56849fcd7b-bjlzf 1/1 Running 0 3m43s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch-service ClusterIP XX.XX.XX.XX <none> 9200/TCP 3m43s
service/kibana-service ClusterIP XX.XX.XX.XX <none> 5601/TCP 3m43s
service/kubernetes ClusterIP XX.XX.XX.XX <none> 443/TCP 5m15s
service/logstash-service ClusterIP 10.XX.XX.XX <none> 9600/TCP,5044/TCP 3m43s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/elasticsearch-deployment 1/1 1 1 3m43s
deployment.apps/kibana-deployment 1/1 1 1 3m43s
deployment.apps/logstash-deployment 1/1 1 1 3m43s
NAME DESIRED CURRENT READY AGE
replicaset.apps/elasticsearch-deployment-5c7d5cb5fb 1 1 1 3m43s
replicaset.apps/kibana-deployment-76d8744864 1 1 1 3m43s
replicaset.apps/logstash-deployment-56849fcd7b 1 1 1 3m43s
Note: I also tried to run all the above services as NodePort, and using the Minikube IP I was able to hit them with curl to check the status of the applications, but I'm still not able to access any of them via my browser.
Generally, if you want to expose anything outside the cluster, you need to use a Service of type
NodePort or LoadBalancer, or use an Ingress. If you check the Minikube documentation, you will find that Minikube supports all those types.
If you are thinking about LoadBalancer, you can use minikube tunnel.
When you are using a cloud environment and non-standard ports, you should also check the firewall rules (on EC2, the security groups) to see whether the port/traffic is open.
Regarding the error from the comment, it seems that you have an issue with Kibana on port 5601.
Did you check similar threads like this or this? If those aren't helpful, please provide your Kibana configuration.
Did you try just a normal port-forward instead of minikube ip and expose? Those didn't work for me either.
Something like this might help:
kubectl port-forward deployment/kibana-deployment 5601
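Since Minikube here runs on a remote EC2 instance, the forward also needs to listen on an address reachable from the Windows machine; a sketch, assuming the EC2 security group allows inbound traffic on 5601:
kubectl port-forward --address 0.0.0.0 deployment/kibana-deployment 5601:5601
Then open http://<ec2-public-ip>:5601 in the browser.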
I created four services, two ClusterIP and two NodePort. Behind each service I spin up two containers, as shown below.
However, the problem is that some services work fine, while others cannot be resolved by hostname when called from inside a container.
While narrowing down the problem, I created the matrix below (rows are the caller's service type, columns the target's):
FROM \ TO NodePort ClusterIP
NodePort Pass Fail
ClusterIP Pass Fail
which means:
A request (curl -v http://order-service-ip/swagger/index.html) from inside a container of aggregator-service (NodePort) fails with a "could not resolve hostname" error, but the reverse works: a request (curl -v http://aggregator-service/swagger/index.html) from inside a container of order-service-ip succeeds.
In the same way, calling a NodePort service from another NodePort container works.
But calling a ClusterIP service from a ClusterIP container fails: the hostname cannot be resolved.
Surprisingly, a ClusterIP container is able to resolve the hostname of a NodePort service, yet going back from that same NodePort container to the same ClusterIP service does not work.
Any suggestions are appreciated. I have been stuck on this problem for more than four days now.
Here are the details of the pods and services; the endpoints are also working fine.
NAME ........ READY STATUS
nodeport-aggegator-deployment-64497699d-6jqz4 1/1 Running
nodeport-aggegator-deployment-64497699d-jx8n6 1/1 Running
clusterip-order-deployment-ip-594ff6b59b-pb4bp 1/1 Running
clusterip-order-deployment-ip-594ff6b59b-rbhj4 1/1 Running
nodeport-resources-deployment-6b98d47b5b-qvd59 1/1 Running
nodeport-resources-deployment-6b98d47b5b-zjrh7 1/1 Running
clusterip-product-deployment-ip-7589c74bfc-dx8l4 1/1 Running
clusterip-product-deployment-ip-7589c74bfc-mbqs5 1/1 Running
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
aggregator-service NodePort 10.100.66.74 <none> 8081:30392/TCP,443:30891/TCP
order-service-ip ClusterIP 10.100.118.19 <none> 8010/TCP,443/TCP
resources-service NodePort 10.100.81.65 <none> 8001:31076/TCP,443:30429/TCP
product-service-ip ClusterIP 10.100.66.14 <none> 8011/TCP,443/TCP
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP
Thanks
The problem was in the code, which had a port conflict: the port used to redirect from the NodePort service to the ClusterIP service was not correct.
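For reference, a minimal sketch of the mapping that has to line up; the names and ports are illustrative, not the actual code. The Service's port is what callers use, while targetPort must match the port the container listens on:
apiVersion: v1
kind: Service
metadata:
  name: order-service-ip
spec:
  type: ClusterIP
  selector:
    app: order       # hypothetical pod label
  ports:
    - port: 8010       # what callers use: http://order-service-ip:8010
      targetPort: 8010 # must match the container's listening port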
I'm learning about Kubernetes and ingress controllers, but I'm stuck on this error when I try to apply the Kong ingress manifest...
ingress-kong-7dd57556c5-bh687 0/2 Init:0/1 0 29s
kong-migrations-gzlqj 0/1 Init:0/1 0 28s
postgres-0 0/1 Pending 0 28s
Is it possible to run this ingress on my home server without minikube ? If so, how?
Note: I have a FQDN pointing to my home server.
I guess you ran the manifest from GitHub.
Issues with Pods
I have reproduced your case. Since you have 3 pods, you used the option with a database.
If you describe the pods using
$ kubectl describe pod <podname> -n kong
you will see this error in the output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7s (x4 over 17s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
You can also check the jobs in the kong namespace.
It works correctly on a fresh Minikube cluster, so I guess you might have applied some changes to the StorageClass; one way to unblock the pending claim is sketched below.
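If there is no default StorageClass in your cluster (often the case outside Minikube, e.g. on a bare home server), one option is to create a PersistentVolume by hand so the pending claim can bind. A minimal sketch; the name, size and path are assumptions, so check the actual PersistentVolumeClaim in the manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv          # hypothetical name
spec:
  capacity:
    storage: 1Gi             # assumption: must be at least what the PVC requests
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/postgres-pv  # any writable path on the node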
Is it possible to run this ingress on my home server without minikube ? If so, how?
You still have to use Kubernetes to do it. Since Minikube supports LoadBalancer services, you can use it at home.
You can check this thread about FQDN. As mentioned:
The host machine should be able to resolve the name of that FQDN. You might add a record into the /etc/hosts at the Mac host to achieve that:
10.0.0.2 mydb.mytestdomain
But in your case it should be the IP address of the LoadBalancer, kong-proxy.
Obtain LoadBalancer IP in Minikube
If you deploy everything correctly, you can check your services.
$ kubectl get svc -n kong
You will see the kong-proxy service with type LoadBalancer and a <pending> EXTERNAL-IP.
To obtain the external IP you have to use minikube tunnel.
Please note that you need to keep $ sudo minikube tunnel running in one console the whole time.
Before Minikube tunnel
$ kubectl get svc -n kong
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-proxy LoadBalancer 10.110.218.74 <pending> 80:31881/TCP,443:31319/TCP 103m
kong-validation-webhook ClusterIP 10.108.204.137 <none> 443/TCP 103m
postgres ClusterIP 10.105.9.54 <none> 5432/TCP 103m
After
$ kubectl get svc -n kong
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-proxy LoadBalancer 10.110.218.74 10.110.218.74 80:31881/TCP,443:31319/TCP 104m
kong-validation-webhook ClusterIP 10.108.204.137 <none> 443/TCP 104m
postgres ClusterIP 10.105.9.54 <none> 5432/TCP 104m
Testing Kong
Here you can find how to get started with Kong; it will show you how to create an Ingress. Later, as I mentioned, you have to edit the Ingress and add a rule (host) similar to the one in the K8s docs; a sketch follows.
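A minimal sketch of such a rule; the host reuses the FQDN from above, and the backend Service name is a placeholder:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: kong       # on older clusters, use the kubernetes.io/ingress.class annotation instead
  rules:
    - host: mydb.mytestdomain  # the FQDN pointing to your home server
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo     # hypothetical backend Service
                port:
                  number: 80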
The Kubernetes v1.15.0 master is not able to reach the pod IP address. I was able to get this working up to 1.14, but this time it's not working any more. I have been setting up k8s clusters in EC2 using kubeadm.
Please find the log below; any comments?
[ec2-user@ip-172-31-18-31 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-16-120.ap-south-1.compute.internal Ready <none> 97m v1.15.0
ip-172-31-18-31.ap-south-1.compute.internal Ready master 116m v1.15.0
[ec2-user@ip-172-31-18-31 ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-deploy-7fd5fc7ff-dh9pw 1/1 Running 0 6m32s 10.44.0.3 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
hello-deploy-7fd5fc7ff-vrxbd 1/1 Running 0 6m32s 10.44.0.4 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
hello-pod1 1/1 Running 0 22m 10.44.0.1 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
[ec2-user@ip-172-31-18-31 ~]$ hostname
ip-172-31-18-31.ap-south-1.compute.internal
[ec2-user@ip-172-31-18-31 ~]$ curl http://10.44.0.4
Simply create a Service for your pod to access it within the cluster; the type of the Service should be ClusterIP.
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec.
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
E.g.:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: test
  ports:
    - port: 80        # port the Service exposes inside the cluster
      targetPort: 80  # assumption: the port your container actually listens on
Remember to match the Service's selector to the pod's labels.
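Once applied, other pods in the cluster can reach yours through the Service name, for example:
curl http://test-service
or the fully qualified form http://test-service.default.svc.cluster.local. Note that a ClusterIP Service is still only reachable from inside the cluster.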