Kubernetes service not exposed when ClusterIP is set

I have the following YAML file -
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mariadb
  name: mariadb
spec:
  ports:
  - port: 3306
  selector:
    name: mariadb
When this service is created, a ClusterIP is automatically set.
My stateful set 'mariadb' is exposed using this service.
But if I log in to another pod on Kubernetes, I cannot ping this pod using
ping mariadb-0.mariadb.[namespace].svc.cluster.local
It also does not work if the ServiceType is set to 'NodePort'.
If I update the service to
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mariadb
  name: mariadb
spec:
  ports:
  - port: 3306
  clusterIP: None
  selector:
    name: mariadb
When I log in to another pod on Kubernetes, I can ping this pod using
ping mariadb-0.mariadb.[namespace].svc.cluster.local
Is there any reason why this internal url is not accessible when the ClusterIP is set?

The key is 'clusterIP: None'.
If clusterIP is not set, Kubernetes allocates one for the service automatically, and kube-dns creates a DNS record for the service itself, named mariadb.[namespace].svc.cluster.local. That is your first case.
If clusterIP is set to 'None', Kubernetes does not allocate an IP for the service (a headless service); in that case kube-dns creates a DNS record for every endpoint the service points to, which in your second case is mariadb-0.mariadb.[namespace].svc.cluster.local.
You can also set clusterIP to a specific IP address; that behaves the same as your first case.
That is why you can ping mariadb-0.mariadb.[namespace].svc.cluster.local in your second case but not in your first.
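A quick way to see the difference is to run DNS lookups from a throwaway pod (the busybox image and pod name here are only an illustration, not part of the original setup):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- sh
# With the plain ClusterIP service, only the service record resolves:
nslookup mariadb.[namespace].svc.cluster.local
# With clusterIP: None (headless), the per-pod record of the StatefulSet also resolves:
nslookup mariadb-0.mariadb.[namespace].svc.cluster.local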

Related

How do I access Kubernetes pods through a single IP?

I have a set of pods running based on the following fleet:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: bungee
spec:
replicas: 2
template:
metadata:
labels:
run: bungee
spec:
ports:
- name: default
containerPort: 25565
protocol: TCP
template:
spec:
containers:
- name: bungee
image: a/b:test
I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones.
My goal is to be able to connect to these pods through a single IP, meaning I have to add some sort of load balancer. I tried using this service of type LoadBalancer, but I can't connect to any of the pods with it.
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XX.XX.XXX
  ports:
  - port: 25565
    protocol: TCP
  selector:
    run: bungee
  externalTrafficPolicy: Local
Is a service like this the wrong approach here, and if so what should I use instead? If it is correct, why is it not working?
Edit: External IP field says pending while checking the service status. I am running Kubernetes on bare-metal.
Edit 2: Attempting to use NodePort as suggested, I see the service has not been given an external IP address. Trying to connect to <node-IP>:<nodePort> does not work. Could it be a problem related to the selector?
A LoadBalancer Service could have worked in clusters that integrate with the API of the cloud provider hosting your Kubernetes nodes (the cloud-controller-manager component). Since this is not your case, you're looking for a NodePort Service.
Something like:
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: NodePort
  ports:
  - port: 25565
    protocol: TCP
  selector:
    run: bungee
Having created that service, you can check its description - or yaml/json representation:
# kubectl describe svc xxx
Type: NodePort
IP: 10.233.24.89 <- ip within SDN
Port: tcp-8080 8080/TCP <- ports within SDN
TargetPort: 8080/TCP <- port on your container
NodePort: tcp-8080 31655/TCP <- port exposed on your nodes
Endpoints: 10.233.108.232:8080 <- pod:port ...
Session Affinity: None
Now I know that port 31655 was allocated to my NodePort Service. NodePorts are unique across your cluster and are picked from a range that depends on your cluster configuration.
I can connect to my service, accessing any Kubernetes node IP, on the port that was allocated to my NodePort service.
curl http://k8s-worker1.example.com:31655/
As a side note: a LoadBalancer Service extends a NodePort Service. While the external IP will never show up in your case, note that your Service was still allocated its own node port, like any NodePort Service; that port is meant to receive traffic from whichever load balancer would have been configured on behalf of your cluster on the cloud infrastructure it integrates with.
And ... I have to say I'm not familiar with Agones. When you say "I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones" - are you sure ports are allocated on a per-pod basis and bound to a given node? Or could it be they're already using a NodePort Service? Give it another look: have you tried connecting to that port on other nodes of your cluster?
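Regarding the selector question in Edit 2: a quick way to check whether the Service selector actually matches the pods is to compare the Service's Endpoints with the pod list (the run=bungee label is taken from the Fleet above; the exact labels Agones puts on the game server pods are worth double-checking):

kubectl get endpoints bungee-svc
kubectl get pods -l run=bungee -o wide

If the Endpoints list is empty while the pods exist, the selector does not match the pod labels, and neither a NodePort nor a LoadBalancer Service will route to them.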

Access external database from Kubernetes

I have a Kubernetes cluster (v1.18.6) with 1 service (LoadBalancer) and 2 pods in a deployment:
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: app
  ports:
  - protocol: "TCP"
    port: 6000
    targetPort: 5000
  type: LoadBalancer
A network policy to access the Internet (it is necessary for me):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internet-access
spec:
  podSelector:
    matchLabels:
      networking/allow-internet-access: "true"
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
Deployment config file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  progressDeadlineSeconds: 120
  selector:
    matchLabels:
      app: app
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      imagePullSecrets:
      - name: myregistrykey
      containers:
      - name: app
        image: app
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
It is working correctly. But now I want to connect this image to an external database (in another network, accessible only over the Internet). For this purpose I use this service:
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  clusterIP: None
  ports:
  - port: 25060
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql
subsets:
- addresses:
  - ip: 206............
  ports:
  - port: 25060
    name: postgresql
These are all the services:
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
app-service   LoadBalancer   10.245.134.137   206...........   6000:31726/TCP   2d4h
kubernetes    ClusterIP      10.245.0.1       <none>           443/TCP          3d7h
postgresql    ClusterIP      None             <none>           25060/TCP        19h
But when I try to connect, I receive a timeout error from the database, as if I can't connect to it at all.
I have an internet connection in the image.
I found the solution: the problem was the inbound rules of the database. I had to add the IP of the Kubernetes cluster.
Thx.
Here is what worked for me:
Define a service without a selector and set clusterIP: None, so no Endpoints object is created automatically.
And then create an endpoint yourself with the SAME NAME as your service and set the IP and port of your db.
In your example, you have a typo in your endpoint: the name of your endpoint is postgresql, not postgresSql.
My example:
# service.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: backend-mobile-db-service
spec:
  clusterIP: None
  ports:
  - port: 5984
---
kind: Endpoints
apiVersion: v1
metadata:
  name: backend-mobile-db-service
subsets:
- addresses:
  - ip: 192.168.1.50
  ports:
  - port: 5984
    name: backend-mobile-db-service
For better visibility I am placing the answer OP mentioned in question:
I find the solution, the problem was the rules of inbound of the database. I must add the IP of Kubernetes
The service definition should be corrected. The default service type is ClusterIP, which doesn't work for an external database. You need to update the service type as given below:
type: ExternalName
Also ensure that the service name and the endpoint name match; they are different in your YAML. Please check.
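For reference, type: ExternalName only helps when the external database is reachable via a DNS name rather than a bare IP; a minimal sketch (the hostname below is a placeholder, not from the question) would be:

apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  type: ExternalName
  externalName: db.example.com  # placeholder hostname of the external database

Pods would then connect to postgresql:<db-port> and the cluster DNS would return a CNAME to db.example.com. If you only have an IP address, stick with the Service-plus-Endpoints approach shown in the question.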
If I understand correctly, you have your cluster with the application on the DigitalOcean cloud, and your PostgreSQL is outside this cluster.
For your application Deployment and its application service you have used a service with selectors, so you didn't need to create Endpoints manually.
For your external database service you have used a service without selectors, so you had to create the Endpoints object manually.
As the database is an external service, using clusterIP: None is pointless, as it will try to match pods inside the cluster. I guess you added it because you read it in these docs.
The last thing is that in the Endpoints you set ip: 206..., which is the same as the application service's LoadBalancer IP?
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
app-service   LoadBalancer   10.245.134.137   206...........   6000:31726/TCP   2d4h

subsets:
- addresses:
  - ip: 206............
This is only partial information, so I am guessing. However, in this part you should provide the IP of the desired database, not your application LoadBalancer IP.
Now, depending on the scenario, you can connect to:
Database outside cluster with IP address
Remotely hosted database with URI
Remotely hosted database with URI and port remapping
Detailed information about the above scenarios can be found in Kubernetes best practices: mapping external services.
Based on your current config I assume you want to use scenario 1.
If this database and cluster are somewhere in the same cloud, you could use the internal database IP. If not, you should provide the IP of the machine where the database is hosted.
You can also read the Kubernetes Access External Services article.
Please let me know if you still have issues after the IP change.
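To verify connectivity from inside the cluster (the busybox image and pod name are only an illustration), you can run a throwaway pod and test the headless service name and port directly:

kubectl run -it --rm dbcheck --image=busybox:1.28 --restart=Never -- sh
# inside the pod:
nslookup postgresql        # should resolve to the IP defined in the Endpoints object
nc -zv postgresql 25060    # should succeed once the database firewall allows the cluster's IPs

If the DNS lookup resolves but the connection times out, the problem is usually on the database side (inbound rules/firewall), which matches the fix the OP found.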

Expose cluster in k8s on localhost

Because Docker supports Kubernetes out of the box (on my Mac), I thought I'd try it out and see if I can load balance a simple web service. For that, I created a simple image which exposes port 3000 and only returns Hello World, and I created a k8s config YAML:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: NodePort
  externalIPs:
  - 192.168.2.85
  ports:
  - port: 8080
    targetPort: 3000
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: hello/world:latest
        ports:
        - containerPort: 3000
Apply it
$> kubectl apply -f ./example.yaml
I see 3 pods running, and a service
NAME               TYPE       CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
hello-kubernetes   NodePort   10.99.38.46   192.168.2.85   8080:30244/TCP   42m
I've used NodePort above, but I'm not sure if I can use LoadBalancer here as well.
Anyway, in the browser I get the message 'This site can't be reached' when I go to http://192.168.2.85:8080 or http://192.168.2.85:30244 (I never know which port to use).
So, I think I'm close, but I still missed something :( Any help would be appreciated!
The port number is wrong.
Use http://NODEIP:NODEPORT
In your case, try:
http://NODEIP:30244
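If you're not sure which node port was allocated, you can read it straight from the service (service name taken from the question):

kubectl get svc hello-kubernetes -o jsonpath='{.spec.ports[0].nodePort}'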
k explain service.spec.externalIPs
KIND:     Service
VERSION:  v1

FIELD:    externalIPs <[]string>
DESCRIPTION:
externalIPs is a list of IP addresses for which nodes in the cluster will
also accept traffic for this service. These IPs are not managed by
Kubernetes. The user is responsible for ensuring that traffic arrives at a
node with this IP. A common example is external load-balancers that are not
part of the Kubernetes system.
The problem here is that we don't know your network settings. Is this minikube for Mac? Is the 192.168.2.x network reachable for you? In my case, using minikube, all I had to do was edit the externalIP to be reachable from my network. So what I did to get this working was:
minikube IP in my case 192.168.99.100 (IP address of minikubeVM)
changed externalIP to 192.168.99.100
k get svc
NAME               TYPE       CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
hello-kubernetes   NodePort   10.105.212.118   192.168.99.100   8080:32298/TCP   46m
And I was able to reach the application using 192.168.99.100:8080.
Also note that in your case you have port 8081 (but I guess P Ekambaram already mentioned this).

Kubernetes to find Pod IP from another Pod

I have the following pods hello-abc and hello-def.
And I want to send data from hello-abc to hello-def.
How would pod hello-abc know the IP address of hello-def?
And I want to do this programmatically.
What's the easiest way for hello-abc to find where hello-def is?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-abc-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-abc
    spec:
      containers:
      - name: hello-abc
        image: hello-abc:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-abc"]
        ports:
        - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-def-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-def
    spec:
      containers:
      - name: hello-def
        image: hello-def:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-def"]
        ports:
        - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: hello-abc-service
spec:
  ports:
  - port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: hello-abc
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: hello-def-service
spec:
  ports:
  - port: 80
    targetPort: 5001
    protocol: TCP
  selector:
    app: hello-def
  type: NodePort
Preface
Since you have defined a service that routes to each deployment, and assuming you have deployed both services and deployments into the same namespace, in most modern Kubernetes clusters you can take advantage of kube-dns and simply refer to the service by name.
Unfortunately, if kube-dns is not configured in your cluster (although that is unlikely), you cannot refer to it by name.
You can read more about DNS records for services here
In addition, Kubernetes features "Service Discovery", which exposes the ports and IPs of your services as environment variables in any container deployed into the same namespace.
Solution
This means, to reach hello-def you can do so like this
curl http://hello-def-service:${HELLO_DEF_SERVICE_PORT}
based on Service Discovery https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Caveat: it's very possible that if the Service port changes, only pods created after the change (in the same namespace) will receive the new environment variables.
External Access
In addition, you can also reach your service externally since you are using the NodePort feature, as long as your NodePort range is accessible from outside.
This would require you to access your service by node-ip:nodePort
You can find out the NodePort which was randomly assigned to your service with kubectl describe svc/hello-def-service
Ingress
To reach your service from outside you should implement an ingress service such as nginx-ingress
https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://github.com/kubernetes/ingress-nginx
Sidecar
If your 2 services are tightly coupled, you can include both in the same pod using the Kubernetes sidecar pattern. In this case, both containers in the pod would share the same virtual network adapter and be accessible via localhost:$port.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#uses-of-pods
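A minimal sketch of that layout, reusing the image names from the question (whether co-scheduling actually fits your use case is for you to judge):

apiVersion: v1
kind: Pod
metadata:
  name: hello-combined
spec:
  containers:
  - name: hello-abc
    image: hello-abc:v0.0.1
    args: ["/hello-abc"]
    ports:
    - containerPort: 5000
  - name: hello-def
    image: hello-def:v0.0.1
    args: ["/hello-def"]
    ports:
    - containerPort: 5001

Inside this pod, hello-abc can reach hello-def at localhost:5001, and hello-def can reach hello-abc at localhost:5000.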
Service Discovery
When a Pod is run on a Node, the kubelet adds a set of environment
variables for each active Service. It supports both Docker links
compatible variables (see makeLinkVariables) and simpler
{SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the
Service name is upper-cased and dashes are converted to underscores.
Read more about service discovery here:
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
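Applied to this question, the service hello-def-service (name upper-cased, dashes converted to underscores) would show up in a hello-abc pod as environment variables along these lines (the IP value is only illustrative):

HELLO_DEF_SERVICE_SERVICE_HOST=10.0.0.11
HELLO_DEF_SERVICE_SERVICE_PORT=80

so from within the pod you could run curl http://$HELLO_DEF_SERVICE_SERVICE_HOST:$HELLO_DEF_SERVICE_SERVICE_PORT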
You should be able to reach hello-def-service from pods in hello-abc via DNS as specified here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
However, kube-dns or CoreDNS has to be configured/installed in your cluster before these DNS records can be used.
Specifically, you should be able to reach hello-def-service via the DNS name http://hello-def-service when it runs in the same namespace as hello-abc-service.
And you should be able to reach hello-def-service running in another namespace other_namespace via the DNS record hello-def-service.other_namespace.svc.cluster.local.
If, for some reason, you do not have DNS add-ons installed in your cluster, you still can find the virtual IP of the hello-def-service via environment variables in hello-abc pods. As is documented here.

Minikube expose MySQL running on localhost as service

I have minikube version v0.17.1 running on my machine. I want to simulate the environment I will have in AWS, where my MySQL instance will be outside of my Kubernetes cluster.
Basically, how can I expose my local MySQL instance running on my machine to the Kubernetes cluster running via minikube?
Kubernetes allows you to create a service without a selector; the cluster will then not create a related Endpoints object for this service. This feature is usually used to proxy a legacy component or an outside component.
Create a service without selector
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 1443
    targetPort: <YOUR_MYSQL_PORT>
Create a corresponding Endpoints object
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: <YOUR_MYSQL_ADDR>
  ports:
  - port: <YOUR_MYSQL_PORT>
Get service IP
$ kubectl get svc my-service
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
my-service   <SERVICE_IP>   <none>        1443/TCP   18m
Access your MySQL from the service at <SERVICE_IP>:1443 or my-service:1443.
As of minikube 1.10, there is a special hostname host.minikube.internal that resolves to the host running the minikube VM or container. You can then configure this hostname in your pod's environment variables or the ConfigMap that defines the relevant settings.
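For example, a pod could pick up the MySQL host from that name like this (the pod name, image, and env var names here are illustrative, not part of the question):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myapp:latest
    env:
    - name: DB_HOST
      value: host.minikube.internal
    - name: DB_PORT
      value: "3306"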
Option 1 - use a headless service without selectors
Because this service has no selector, the corresponding Endpoints object will not be created. You can manually map the service to your own specific endpoints (See doc).
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
  - port: 80
    targetPort: 8080
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 10.0.2.2
  ports:
  - port: 8080
Option 2 - use ExternalName service
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: minikube.host
The only caveat is that the cluster needs to be able to resolve minikube.host. Simply adding this line to the /etc/hosts file should do it:
10.0.2.2 minikube.host
ExternalName doesn't support port mapping at the moment.
Another note: the IP 10.0.2.2 is known to work with VirtualBox only (see SO).
For xhyve, try replacing that with 192.168.99.1 (see the related GitHub issues). A demo is available on GitHub.
Just a reminder, if on Windows, open your firewall.