Kubernetes service external IP stays "pending"

I'm running a Kubernetes LoadBalancer service, but the external IP stays "pending". It looks like it is trying to get an IP, but I need it to be "localhost" so I can access it in my browser.
What am I missing?
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/dev-pypi     LoadBalancer   10.106.128.15   <pending>     80:30914/TCP     2m7s
service/kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          47h
service/qa-pypi      LoadBalancer   10.97.62.94     <pending>     8200:30114/TCP   94m
Thanks in advance.
This is my YAML file:
kind: Service
apiVersion: v1
metadata:
  name: qa-pypi
spec:
  type: LoadBalancer
  selector:
    app: pypi-qa
  ports:
    - protocol: TCP
      port: 8200
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qa-pypi
  labels:
    app: pypi-qa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pypi-qa
  template:
    metadata:
      labels:
        app: pypi-qa
    spec:
      containers:
        - name: pypi-qa
          imagePullPolicy: IfNotPresent
          image: myimg2
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: storageqa
              mountPath: /app/local
      volumes:
        - name: storageqa
          persistentVolumeClaim:
            claimName: persistvolumeqa
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistvolumeqa
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Thank you!

You already have a lot of advice here on how to use localhost and what you can do with NodePort. Let me fill the gap and explain why you see the Pending state.
Minikube itself does not allocate and provide a LoadBalancer IP address for you. The LoadBalancer service type is widely used in cloud implementations like EKS, AKS, GKE and others, because those cloud service providers create a load balancer for you in the background as soon as you choose this type.
If you want to use LoadBalancer with minikube, you have to configure minikube first. One thing you can do is use the built-in metallb minikube addon.
MetalLB addresses this gap and provides a network LoadBalancer
implementation as an addon.
During MetalLB installation and configuration you can set the range of local IP addresses that will be assigned to LoadBalancer services instead of the Pending state.
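A minimal sketch of enabling and configuring the addon (the address range below is an example; pick one inside your minikube subnet):

# Enable the bundled MetalLB addon
minikube addons enable metallb

# The interactive prompt asks for the Load Balancer start/end IPs,
# e.g. 192.168.49.100 - 192.168.49.110 (example values)
minikube addons configure metallb

After that, the EXTERNAL-IP column of your LoadBalancer services should show an address from that pool instead of <pending>.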
If you want to know more and see a real configuration example, check the MetalLB Configuration in Minikube — To enable Kubernetes service of type "LoadBalancer" article.

You can use an Ingress to access your application from the browser. See: ingress, settingup_ingress, tutorial.
If your host can't be found after the setup, you may need to register it in /etc/hosts (as root):
127.0.0.1 localhost
127.0.0.1 https-my-nginx.com
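For illustration, a minimal Ingress sketch wired to the qa-pypi service from the question, using the example hostname above (it assumes an ingress controller, e.g. the minikube ingress addon, is running):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: qa-pypi-ingress
spec:
  rules:
    - host: https-my-nginx.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: qa-pypi
                port:
                  number: 8200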

To expose your app on localhost you can try the NodePort type. If you are working with kubeadm, you can use the NodePort or ClusterIP type, so that you can expose it locally. LoadBalancer is mostly used for deployments on cloud machines.
If you are working with minikube, run minikube tunnel so that an external IP is assigned to the LoadBalancer service.
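For example (the tunnel keeps running and needs its own terminal):

# Terminal 1: create a route to LoadBalancer services
minikube tunnel

# Terminal 2: EXTERNAL-IP should now be populated instead of <pending>
kubectl get svc qa-pypi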

You have to configure an external IP on your machine (platform) first; only then will an external IP be allocated to your service. It can be a VIP that you advertise, or an address pool you create. Otherwise, setting up an Ingress won't serve the purpose either, since the Ingress controller itself needs an external IP for communication.
It's a networking thing you need to set up on your own machine.
If you don't want to use this option, you can go for NodePort; with that you can access the service externally via nodeip:nodeport.
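A quick way to check the NodePort route, based on the service output in the question (node port 30114 for qa-pypi):

# Node IPs are in the INTERNAL-IP / EXTERNAL-IP columns
kubectl get nodes -o wide

# Then open http://<node-ip>:30114 in the browser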

Related

Access external database from Kubernetes

I have a Kubernetes cluster (v1.18.6) with one service (LoadBalancer) and two pods in a deployment:
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: app
  ports:
    - protocol: "TCP"
      port: 6000
      targetPort: 5000
  type: LoadBalancer
A network policy to access the Internet (it is necessary for me):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internet-access
spec:
  podSelector:
    matchLabels:
      networking/allow-internet-access: "true"
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - {}
Deployment config file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  progressDeadlineSeconds: 120
  selector:
    matchLabels:
      app: app
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      imagePullSecrets:
        - name: myregistrykey
      containers:
        - name: app
          image: app
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
It is working correctly. But now I want to connect this image to an external database (in another network, only accessible over the internet). For this purpose I use this service:
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  clusterIP: None
  ports:
    - port: 25060
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql
subsets:
  - addresses:
      - ip: 206............
    ports:
      - port: 25060
        name: postgresql
These are all the services:
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
app-service   LoadBalancer   10.245.134.137   206...........   6000:31726/TCP   2d4h
kubernetes    ClusterIP      10.245.0.1       <none>           443/TCP          3d7h
postgresql    ClusterIP      None             <none>           25060/TCP        19h
But when I try to connect, I get a timeout error from the database, as if I can't connect to it.
I do have an internet connection in the image.
I found the solution: the problem was the inbound rules of the database. I had to add the Kubernetes cluster's IP.
Thanks.
Here is what worked for me:
Define a service, but set clusterIP: None, so no endpoint is created automatically.
Then create an Endpoints object yourself with the SAME NAME as your service, and set the IP and port of your DB.
In your example you have a typo in your endpoint: the name of your endpoint is postgresql, not postgresSql.
My example:
---
# service.yaml
kind: Service
apiVersion: v1
metadata:
  name: backend-mobile-db-service
spec:
  clusterIP: None
  ports:
    - port: 5984
---
kind: Endpoints
apiVersion: v1
metadata:
  name: backend-mobile-db-service
subsets:
  - addresses:
      - ip: 192.168.1.50
    ports:
      - port: 5984
        name: backend-mobile-db-service
For better visibility I am placing the answer the OP mentioned in the question:
"I found the solution: the problem was the inbound rules of the database. I had to add the Kubernetes cluster's IP."
The service definition should be corrected. The default service type is ClusterIP, which doesn't work for an external database. You need to update the service type as given below:
type: ExternalName
Also ensure that the service name and the endpoint name match; they are different in your YAML. Please check.
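A minimal sketch of that approach; note that ExternalName maps to a DNS name, not an IP, so db.example.com below is a placeholder for the database's hostname:

apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  type: ExternalName
  externalName: db.example.com   # placeholder for the external database's DNS name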
If I understand correctly, you have your cluster with the application on the Digital Ocean cloud, and your PostgreSQL database is outside this cluster.
In your application Deployment <> application service you used a service with selectors, so you didn't need to create Endpoints manually.
In your external database service you used a service without selectors, so you had to create the Endpoints manually.
As the database is an external service, using clusterIP: None is pointless, as it will try to match pods inside the cluster. I guess you added it because you read it in these docs.
The last thing: in the Endpoints you set ip: 206..., which is the same as the application service's LoadBalancer IP?
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
app-service   LoadBalancer   10.245.134.137   206...........   6000:31726/TCP   2d4h
subsets:
  - addresses:
      - ip: 206............
This is only part of the information, so I am guessing. However, in this part you should provide the IP of the desired database, not your application's LoadBalancer IP.
Now, based on the scenario, you can connect to:
1. a database outside the cluster, with an IP address
2. a remotely hosted database, with a URI
3. a remotely hosted database, with a URI and port remapping
Detailed information about the above scenarios can be found in Kubernetes best practices: mapping external services.
Based on your current config, I assume you want to use scenario 1.
If this database and the cluster are somewhere in the cloud, you could use the internal database IP. If not, you should provide the IP of the machine where the database is hosted.
You can also read the Kubernetes Access External Services article.
Please let me know if you still have the issue after the IP change.
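For scenario 1, a minimal sketch with the names matched up (203.0.113.10 is a documentation placeholder for the real database IP):

apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  ports:
    - port: 25060
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql
subsets:
  - addresses:
      - ip: 203.0.113.10   # the database's IP, not the app LoadBalancer IP
    ports:
      - port: 25060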

Expose cluster in k8s on localhost

Because Docker supports Kubernetes out of the box (on my Mac), I thought I'd try it out and see if I can load-balance a simple web service. For that, I created a simple image which exposes port 3000 and only returns Hello World, and I created a k8s config YAML:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: NodePort
  externalIPs:
    - 192.168.2.85
  ports:
    - port: 8080
      targetPort: 3000
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: hello/world:latest
          ports:
            - containerPort: 3000
Apply it
$> kubectl apply -f ./example.yaml
I see 3 pods running, and a service
NAME               TYPE       CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
hello-kubernetes   NodePort   10.99.38.46   192.168.2.85   8080:30244/TCP   42m
I've used NodePort above, but I'm not sure if I could use LoadBalancer here as well.
Anyway, in the browser I get the message "This site can't be reached" when I go to http://192.168.2.85:8080 or http://192.168.2.85:30244 (I never know which port to use).
So I think I'm close, but I'm still missing something :( Any help would be appreciated!
The port number is wrong.
Use http://NODEIP:NODEPORT.
In your case, try
http://NODEIP:30244
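To see which values to plug in (the node port 30244 comes from the service output above):

# Node IPs are in the INTERNAL-IP / EXTERNAL-IP columns
kubectl get nodes -o wide

# PORT(S) shows port:nodePort, e.g. 8080:30244/TCP
kubectl get svc hello-kubernetes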
k explain service.spec.externalIPs

KIND:     Service
VERSION:  v1

FIELD:    externalIPs <[]string>

DESCRIPTION:
     externalIPs is a list of IP addresses for which nodes in the cluster will
     also accept traffic for this service. These IPs are not managed by
     Kubernetes. The user is responsible for ensuring that traffic arrives at a
     node with this IP. A common example is external load-balancers that are not
     part of the Kubernetes system.
The problem here is that we don't know your network settings. Is this minikube on a Mac? Is the 192.168.2.x network reachable for you? In my case, using minikube, all I had to do was edit the externalIPs entry so it was reachable from my network. So what I did to get this working was:
get the minikube IP, in my case 192.168.99.100 (the IP address of the minikube VM)
change externalIPs to 192.168.99.100
k get svc
NAME               TYPE       CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
hello-kubernetes   NodePort   10.105.212.118   192.168.99.100   8080:32298/TCP   46m
And I was able to reach the application using 192.168.99.100:8080.
Also note that in your case you have port 8081 (but I guess P Ekambaram already mentioned this).

Expose Digital Ocean's Managed Kubernetes Cluster

I have been playing with Digital Ocean's new managed Kubernetes service. I have created a new cluster using Digital Ocean's dashboard and, seemingly, successfully deployed my yaml file (attached).
Running kubectl get services in the cluster's context:
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
api-svc      NodePort    XX.XXX.XXX.XXX   <none>        8080:30000/TCP   2h
kubernetes   ClusterIP   XX.XXX.X.X       <none>        443/TCP          2h
My question is, how do I go about exposing my service without a load balancer?
I have been able to do this locally using minikube. To get the cluster IP I run minikube ip and use port number 30000, as specified in my nodePort config, to reach the api-svc service.
From what I understand, Digital Ocean's managed service abstracts the master node away. So where would I find the public IP address to access my cluster?
Thank you in advance!
My YAML file for reference:
apiVersion: v1
kind: Secret
metadata:
  name: regcred
data:
  .dockerconfigjson: <my base 64 key>
type: kubernetes.io/dockerconfigjson
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app: api-deployment
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: <my-dockerhub-user>/api:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: api-svc
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000
      protocol: TCP
  selector:
    app: api
You can hit any of your worker nodes' IPs, for example http://worker-node-ip:30000/. You can get the worker node IPs from the DigitalOcean dashboard or with the doctl CLI.
Slightly more detailed answer: DigitalOcean manages firewall rules for your NodePort services automatically, so once you expose the service, the NodePort is automatically open to public traffic from all worker nodes in your cluster. See docs
To find the public IP of any of your worker nodes, execute the following doctl commands:
# Get the first worker node from the first node-pool of your cluster
NODE_NAME=$(doctl kubernetes cluster node-pool get <cluster-name> <pool-name> -o json | jq -r '.[0].nodes[0].name')
WORKER_NODE_IP=$(doctl compute droplet get $NODE_NAME --template '{{.PublicIPv4}}')
Using "type: NodePort" presume use of node external address (any node) and may be unsustainable because nodes might be changed/upgraded.

Kubernetes service not exposed when ClusterIp is set

I have the following YAML file -
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mariadb
  name: mariadb
spec:
  ports:
    - port: 3306
  selector:
    name: mariadb
When this service is created, a ClusterIP is automatically set.
My stateful set 'mariadb' is exposed using this service.
But if I log in to another pod on Kubernetes, I cannot ping this pod using
ping mariadb-0.mariadb.[namespace].svc.cluster.local
It also does not work if the ServiceType is set to 'NodePort'.
If I update the service to
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mariadb
  name: mariadb
spec:
  ports:
    - port: 3306
  clusterIP: None
  selector:
    name: mariadb
When I log in to another pod on Kubernetes, I can ping this pod using
ping mariadb-0.mariadb.[namespace].svc.cluster.local
Is there any reason why this internal URL is not accessible when the ClusterIP is set?
The key is 'clusterIP: None'.
If clusterIP is not set, k8s will allocate one for the service automatically, and kube-dns will set a domain name for the service, named mariadb.[namespace].svc.cluster.local; that's your first case.
If clusterIP is set to 'None', k8s doesn't allocate an IP for the service; in that case kube-dns creates a domain name for every endpoint the service points to. In your second case that's mariadb-0.mariadb.[namespace].svc.cluster.local.
You can also set clusterIP to a specific IP address; in that case it behaves the same as your first case.
That's why you can ping mariadb-0.mariadb.[namespace].svc.cluster.local in your second case, but can't in your first case.
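You can verify the two cases from any pod that has nslookup available (for example a busybox shell); this is just a sketch:

# First case (clusterIP set): only the service record exists
nslookup mariadb.[namespace].svc.cluster.local             # resolves to the ClusterIP
nslookup mariadb-0.mariadb.[namespace].svc.cluster.local   # fails

# Second case (clusterIP: None): per-pod records exist
nslookup mariadb-0.mariadb.[namespace].svc.cluster.local   # resolves to the pod IP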

Minikube expose MySQL running on localhost as service

I have minikube version v0.17.1 running on my machine. I want to simulate the environment I will have in AWS, where my MySQL instance will be outside of my Kubernetes cluster.
Basically, how can I expose my local MySQL instance running on my machine to the Kubernetes cluster running via minikube?
Kubernetes allows you to create a service without a selector; in that case the cluster will not create a related Endpoints object for the service. This feature is usually used to proxy to a legacy or outside component.
Create a service without a selector:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 1443
      targetPort: <YOUR_MYSQL_PORT>
Create a corresponding Endpoints object:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: <YOUR_MYSQL_ADDR>
    ports:
      - port: <YOUR_MYSQL_PORT>
Get service IP
$ kubectl get svc my-service
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
my-service   <SERVICE_IP>   <none>        1443/TCP   18m
Access your MySQL via <SERVICE_IP>:1443 or my-service:1443.
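A quick connectivity check from inside the cluster might look like this (the client image and credentials are placeholders):

kubectl run -it --rm mysql-client --image=mysql:8 --restart=Never -- \
  mysql -h my-service -P 1443 -u <user> -p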
As of minikube 1.10, there is a special hostname host.minikube.internal that resolves to the host running the minikube VM or container. You can then configure this hostname in your pod's environment variables or in a ConfigMap that defines the relevant settings.
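A sketch of wiring it into a pod's environment (the variable names and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest                  # placeholder image
      env:
        - name: DB_HOST
          value: host.minikube.internal     # resolves to the machine running minikube
        - name: DB_PORT
          value: "3306"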
Option 1 - use a headless service without selectors
Because this service has no selector, the corresponding Endpoints object will not be created. You can manually map the service to your own specific endpoints (See doc).
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
    - port: 80
      targetPort: 8080
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 10.0.2.2
    ports:
      - port: 8080
Option 2 - use ExternalName service
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: minikube.host
The only caveat is that it needs to be able to resolve minikube.host. Simply adding this line to the /etc/hosts file should do it:
10.0.2.2 minikube.host
ExternalName doesn't support port mapping at the moment.
Another note: the IP 10.0.2.2 is known to work with VirtualBox only (see SO).
For xhyve, try replacing it with 192.168.99.1 (see this GitHub issue and this one). A demo is on GitHub.
Just a reminder, if on Windows, open your firewall.