Accessing Kubernetes pods/services through one IP (preferably the master node)

I have a local Kubernetes installation with a master node and two worker nodes. Is there a way to access all services/pods that will be installed on Kubernetes through the master node's IP?
What I mean is: say I have a test service running on port 30001 on each worker, and I want to access this service like http://master-node:30001. Any help is appreciated.

You can use "the proxy verb" to acces nodes, pods, or services through the master. Only HTTP and HTTPS can be proxied. See these docs and these docs.

There are several ways to do it:
Define a NodePort Kubernetes Service
Use kubefwd or the port-forward command
Use the proxy command (supports HTTP & HTTPS only)
In this answer, I explain how to define a NodePort Service.
The NodePort Service is explained as follows (Service - Kubernetes):
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Here is an example of the NodePort service for PostgreSQL:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: postgres
  labels:
    app: postgres
spec:
  ports:
    - port: 5432
  type: NodePort
  selector:
    app: postgres
The port field stands for both the Service port and the default targetPort. There is also a nodePort field that lets you choose the port used to access the Service from outside the cluster (via the node's IP and the nodePort).
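As a hedged variant of the manifest above, the nodePort can be pinned explicitly (30001 is an arbitrary value inside the default 30000-32767 range, not something from the original answer):
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
    - port: 5432        # ClusterIP port; also the default targetPort
      targetPort: 5432  # port the pod listens on
      nodePort: 30001   # fixed port exposed on every node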
To view the assigned nodePort (if you didn't specify it in the manifest), you can run the command:
kubectl get services -n postgres
The output should look similar to:
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE    SELECTOR
postgres   NodePort   10.96.156.75   <none>        5432:30864/TCP   6d9h   app=postgres
In this case, the nodePort is 30864; this is the port used to access the service from outside the cluster.
To find out the node's IP, the command to use is:
kubectl get nodes -o wide
The output should look similar to:
NAME                    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                     KERNEL-VERSION   CONTAINER-RUNTIME
homedev-control-plane   Ready    master   30d   v1.19.1   172.18.0.2    <none>        Ubuntu Groovy Gorilla (development branch)   5.9.1-arch1-1    containerd://1.4.0
If what you need is the IP only:
kubectl get nodes -o wide --no-headers | awk '{print $6}'
In this case, the node's IP is 172.18.0.2. Hence, to connect to Postgres in the local Kubernetes cluster from your host machine, the command would look like this:
psql -U postgres -h 172.18.0.2 -p 30864 -d postgres
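For scripting, the two lookups can be combined; a sketch, assuming a single-node cluster and the postgres Service defined above:
NODE_IP=$(kubectl get nodes -o wide --no-headers | awk '{print $6}' | head -n 1)
NODE_PORT=$(kubectl get svc postgres -n postgres -o jsonpath='{.spec.ports[0].nodePort}')
psql -U postgres -h "$NODE_IP" -p "$NODE_PORT" -d postgres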

Related

Load balancer error during Kubernetes 3 tier configuration [duplicate]

I am trying to deploy nginx on Kubernetes; the Kubernetes version is v1.5.2.
I have deployed nginx with 3 replicas. The YAML file is below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.10
          ports:
            - containerPort: 80
and now I want to expose its port 80 on port 30062 of the node. For that I created the service below:
kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  ports:
    - name: http
      port: 80
      nodePort: 30062
  selector:
    app: nginx
  type: LoadBalancer
This service is working as it should, but the external IP is shown as pending, both on the Kubernetes dashboard and in the terminal.
It looks like you are using a custom Kubernetes cluster (using minikube, kubeadm or the like). In this case, there is no LoadBalancer integrated (unlike AWS or Google Cloud). With this default setup, you can only use NodePort or an Ingress controller.
With an Ingress controller you can set up a domain name that maps to your pod; you don't need to give your Service the LoadBalancer type if you use an Ingress controller. A minimal example is sketched below.
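A minimal Ingress sketch (the host is a placeholder, and the networking.k8s.io/v1 API assumes a reasonably recent cluster; an ingress controller such as ingress-nginx must already be installed):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com      # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-ils-service  # the Service from the question
                port:
                  number: 80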
If you are using Minikube, there is a magic command!
$ minikube tunnel
Hopefully someone can save a few minutes with this.
Reference link
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
If you are not using GCE or EKS (you used kubeadm) you can add an externalIPs spec to your service YAML. You can use the IP associated with your node's primary interface such as eth0. You can then access the service externally, using the external IP of the node.
...
spec:
  type: LoadBalancer
  externalIPs:
    - 192.168.0.10
I created a single-node k8s cluster using kubeadm. When I tried port forwarding and kubectl proxy, the external IP still showed as pending.
$ kubectl get svc -n argocd argocd-server
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
argocd-server   LoadBalancer   10.107.37.153   <pending>     80:30047/TCP,443:31307/TCP   110s
In my case I've patched the service like this:
kubectl patch svc <svc-name> -n <namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
After this, it started serving over the public IP:
$ kubectl get svc argo-ui -n argo
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
argo-ui   LoadBalancer   10.103.219.8   172.31.71.218   80:30981/TCP   7m50s
To access a service on minikube, you need to run the following command:
minikube service [-n NAMESPACE] [--url] NAME
More information here: Minikube GitHub
When using Minikube, you can get the IP and port through which you can access the service by running:
minikube service [service name]
E.g.:
minikube service kubia-http
If it is your private k8s cluster, MetalLB would be a better fit. Below are the steps.
Step 1: Install MetalLB in your cluster
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Step 2: Configure it by using a configmap
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.100-172.42.42.105 # Update this with your nodes' IP range
Step 3: Create your Service to get an external IP (it would be a private IP, though); a sketch is shown below.
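A hedged sketch of such a Service (the name and selector are placeholders; MetalLB should assign it an address from the pool configured in step 2):
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  selector:
    app: example        # placeholder selector
  ports:
    - port: 80          # load balancer port
      targetPort: 8080  # placeholder pod port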
FYR: screenshots (not reproduced here) compared the Service's EXTERNAL-IP before and after the MetalLB installation.
If running on minikube, don't forget to mention the namespace if you are not using default.
minikube service << service_name >> --url --namespace=<< namespace_name >>
Following @Javier's answer, I decided to go with "patching up the external IP" for my load balancer.
$ kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["192.168.39.25"]}}'
This will replace the 'pending' status with a patched-up IP address that you can use for your cluster.
For more on this, please see Karthik's post on LoadBalancer support with Minikube for Kubernetes.
This is not the cleanest way to do it; I needed a temporary solution. Hope this helps somebody.
If you are using minikube, then run the commands below from the terminal:
$ minikube ip
172.17.0.2
# then
$ curl http://172.17.0.2:31245
or simply
$ curl http://$(minikube ip):31245
In case someone is using MicroK8s: you need a network load balancer.
MicroK8s comes with MetalLB; you can enable it like this:
microk8s enable metallb
<pending> should turn into an actual IP address then.
A general way to expose an application running on a set of Pods as a network service is called a Service in Kubernetes. There are four types of Service in Kubernetes:
ClusterIP
The Service is only reachable from within the cluster.
NodePort
You'll be able to reach the Service from outside the cluster using NodeIP:NodePort. The default NodePort range is 30000-32767, and this range can be changed by defining --service-node-port-range at cluster creation time.
LoadBalancer
Exposes the Service externally using a cloud provider's load balancer.
ExternalName
Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com) by returning a CNAME record with its value. No proxying of any kind is set up. A small sketch follows.
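For illustration, a minimal ExternalName sketch (the names are placeholders, not from the original answer):
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: foo.bar.example.com  # returned as a CNAME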
Only the LoadBalancer type fills in the EXTERNAL-IP column, and it only works if the Kubernetes cluster is able to assign an IP address for that particular Service. You can use the MetalLB load balancer to provision IPs for your LoadBalancer Services.
I hope this clears up your doubt.
You can patch in the IP of the node where the pods are hosted (the node's private IP); this is an easy workaround.
Referring to the posts above, the following worked for me:
kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["xxx.xxx.xxx.xxx Private IP of Physical Server - Node - where deployment is done "]}}'
Adding a solution for those who encountered this error while running on Amazon EKS.
First of all run:
kubectl describe svc <service-name>
And then review the events field in the example output below:
Name: some-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"some-service","namespace":"default"},"spec":{"ports":[{"port":80,...
Selector: app=some
Type: LoadBalancer
IP: 10.100.91.19
Port: <unset> 80/TCP
TargetPort: 5000/TCP
NodePort: <unset> 31022/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
  Type     Reason                  Age  From                Message
  ----     ------                  ---  ----                -------
  Normal   EnsuringLoadBalancer    68s  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  67s  service-controller  Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
Review the error message:
Failed to ensure load balancer: could not find any suitable subnets for creating the ELB
In my case, the reasons that no suitable subnets were available for creating the ELB were:
1: The EKS cluster was deployed on the wrong subnet group - internal subnets instead of public-facing ones.
(*) By default, Services of type LoadBalancer create public-facing load balancers if no service.beta.kubernetes.io/aws-load-balancer-internal: "true" annotation is provided.
2: The subnets weren't tagged according to the requirements mentioned here.
Tagging VPC with:
Key: kubernetes.io/cluster/yourEKSClusterName
Value: shared
Tagging public subnets with:
Key: kubernetes.io/role/elb
Value: 1
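For illustration, the tags can be applied with the AWS CLI; a sketch where the subnet ID and cluster name are placeholders:
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/yourEKSClusterName,Value=shared Key=kubernetes.io/role/elb,Value=1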
If you are using bare metal, you need the NodePort type:
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
LoadBalancer works by default on other cloud providers like DigitalOcean, AWS, etc.
kubectl edit service ingress-nginx-controller
and set:
  type: NodePort
You can also set externalIPs in the spec, using the public IP:
spec:
  externalIPs:
    - xxx.xxx.xxx.xx
Use NodePort:
$ kubectl run user-login --replicas=2 --labels="run=user-login" --image=kingslayerr/teamproject:version2 --port=5000
$ kubectl expose deployment user-login --type=NodePort --name=user-login-service
$ kubectl describe services user-login-service
(Note down the port)
$ kubectl cluster-info
(Get the IP where the master is running.)
Your service is accessible at (IP):(port)
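Instead of reading the port from the describe output, the nodePort can also be extracted directly; a small jsonpath sketch:
$ kubectl get svc user-login-service -o jsonpath='{.spec.ports[0].nodePort}'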
The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes, as is the case with Google Cloud Platform and AWS. If no such feature is configured, the LoadBalancer IP address field is not populated and remains in pending status, and the Service works the same way as a NodePort-type Service.
minikube tunnel
The solution below worked in my case.
First of all, try this command:
minikube tunnel
If that's not working for you, follow the steps below.
Restart the Minikube container:
minikube stop
then
minikube start
After that, reopen the Kubernetes dashboard:
minikube dashboard
After it finishes, execute:
minikube tunnel
I had the same problem.
Windows 10 Desktop + Docker Desktop 4.7.1 (77678) + Minikube v1.25.2
Following the official docs, I resolved it with:
PS C:\WINDOWS\system32> kubectl expose deployment sito-php --type=LoadBalancer --port=8080 --name=servizio-php
service/servizio-php exposed
PS C:\WINDOWS\system32> minikube tunnel
* Tunnel successfully started
* NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
* Starting tunnel for service servizio-php.
PS E:\docker\apache-php> kubectl get service
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP      10.96.0.1      <none>        443/TCP          33h
servizio-php   LoadBalancer   10.98.218.86   127.0.0.1     8080:30270/TCP   4m39s
Then open the browser at http://127.0.0.1:8080/
Same issue:
os> kubectl get svc right-sabertooth-wordpress
NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
right-sabertooth-wordpress   LoadBalancer   10.97.130.7   <pending>     80:30454/TCP,443:30427/TCP
os>minikube service list
|-------------|----------------------------|--------------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------------|--------------------------------|
| default | kubernetes | No node port |
| default | right-sabertooth-mariadb | No node port |
| default | right-sabertooth-wordpress | http://192.168.99.100:30454 |
| | | http://192.168.99.100:30427 |
| kube-system | kube-dns | No node port |
| kube-system | tiller-deploy | No node port |
|-------------|----------------------------|--------------------------------|
It is, however, accessible via http://192.168.99.100:30454.
There are three ways of exposing your service:
NodePort
ClusterIP
LoadBalancer
When we use a LoadBalancer, we basically ask our cloud provider to give us a DNS name which can be accessed online.
Note that it is not a domain name but a DNS record pointing to the load balancer.
So the LoadBalancer type does not work in our local Minikube environment.
For those who are using minikube and trying to access a service of kind NodePort or LoadBalancer:
We don't get an external IP to access the service on the local system, so a good option is to use the minikube IP.
Use the command below to get the URL once your service is exposed:
minikube service service-name --url
Now use that URL to serve your purpose.
Check the kube-controller-manager logs. I was able to solve this issue by setting the clusterID tags on the EC2 instance I deployed the cluster on.
If you are not on a supported cloud (AWS, Azure, GCP, etc.), you can't use LoadBalancer without MetalLB (https://metallb.universe.tf/), but it's still in beta.
Deleting the existing service and creating a new one solved my problem. The problem was that the load-balancing IP I had defined was already in use, so the external endpoint stayed pending. Changing to a new load-balancing IP still didn't work.
Finally, deleting the existing service and creating a new one solved it.
For your use case, the best option is to use a NodePort service instead of the LoadBalancer type, because a load balancer is not available.
I was getting this error on Docker Desktop. I just quit and restarted it (Docker Desktop). It took a few seconds, then it worked fine.
Deleting all older services and creating new ones resolved my issue. The IP was bound to an older service. Just try "$ kubectl get svc" and then delete each svc one by one: "$ kubectl delete svc 'svc name'".
Maybe the subnet in which you are deploying your service does not have enough IPs.
If you are trying to do this in your on-prem cloud, you need an L4LB service to create the LB instances.
Otherwise you end up with the endless "pending" message described above. It is visible in a video here: https://www.youtube.com/watch?v=p6FYtNpsT1M
You can use open-source tools to solve this problem; the video provides some guidance on how the automation process should work.

How do I actually connect to botfront on kubernetes?

I tried deploying on EKS, and my config.yaml follows this suggested format:
botfront:
  app:
    # The complete external host of the Botfront application (eg. botfront.yoursite.com).
    # It must be set even if running on a private or local DNS (it populates the ROOT_URL).
    host: botfront.yoursite.com
  mongodb:
    enabled: true # disable to use an external mongoDB host
    # Username of the MongoDB user that will have read-write access to the Botfront database. This is not the root user.
    mongodbUsername: username
    # Password of the MongoDB user that will have read-write access to the Botfront database. This is not the root user.
    mongodbPassword: password
    # MongoDB root password
    mongodbRootPassword: rootpassword
And I ran this command:
helm install -f config.yaml -n botfront --namespace botfront botfront/botfront
and the deployment appeared successful with all pods listed as running.
But botfront.yoursite.com goes nowhere. I checked the ingress and it matches, but there are no external IP addresses or anything. I don't know how to actually access my Botfront site once it's deployed on Kubernetes.
What am I missing?
EDIT:
With the nginx load balancer installed, kubectl get ingresses -n botfront now returns:
NAME                   CLASS    HOSTS                ADDRESS                                                                         PORTS   AGE
botfront-app-ingress   <none>   botfront.cream.com   a182b0b24e4fb4a0f8bd6300b440e5fa-423aebd224ce20ac.elb.us-east-2.amazonaws.com   80      4d1h
and
kubectl get svc -n botfront returns:
NAME                        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
botfront-api-service        NodePort   10.100.207.27   <none>        80:31723/TCP      4d1h
botfront-app-service        NodePort   10.100.26.173   <none>        80:30873/TCP      4d1h
botfront-duckling-service   NodePort   10.100.75.248   <none>        80:31989/TCP      4d1h
botfront-mongodb-service    NodePort   10.100.155.11   <none>        27017:30358/TCP   4d1h
If you run kubectl get svc -n botfront, it will show you all the Services that expose your Botfront deployment:
$ kubectl get svc -n botfront
NAME                        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
botfront-api-service        NodePort   10.3.252.32    <none>        80:32077/TCP      63s
botfront-app-service        NodePort   10.3.249.247   <none>        80:31201/TCP      63s
botfront-duckling-service   NodePort   10.3.248.75    <none>        80:31209/TCP      63s
botfront-mongodb-service    NodePort   10.3.252.26    <none>        27017:31939/TCP   64s
Each of them is of type NodePort, which means it exposes your app on the external IP address of each of your EKS cluster nodes on a specific port.
So if your node1 IP happens to be 1.2.3.4, you can access botfront-api-service at 1.2.3.4:32077. Don't forget to allow access to this port in your firewall/security groups. If you have a registered domain, e.g. yoursite.com, you can configure the subdomain botfront.yoursite.com for it and point it at one of your EKS nodes. Then you'll be able to access it using your domain. This is the simplest way.
To access it in a more convenient way than by using a specific node's IP and a non-standard port, you may want to expose it via an Ingress, which will create an external load balancer, making your NodePort services available under one external IP address and the standard HTTP port.
Update: I see that this chart already comes with an ingress that exposes your app:
$ kubectl get ingresses -n botfront
NAME                   HOSTS                   ADDRESS   PORTS   AGE
botfront-app-ingress   botfront.yoursite.com             80      70m
If you retrieve its yaml definition by:
$ kubectl get ingresses -n botfront -o yaml
you'll see that it uses the following annotation:
kubernetes.io/ingress.class: nginx
which means you need the nginx-ingress controller installed on your EKS cluster. This might be one reason why it fails. As you can see in my example, this ingress doesn't get any external IP; that's because nginx-ingress wasn't installed on my GKE cluster. Not sure about EKS, but as far as I know it doesn't come with nginx-ingress preinstalled. One common way to install it is sketched below.
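A sketch using the project's Helm chart (check the ingress-nginx documentation for the currently recommended method; the release name is a placeholder):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx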
One more thing: I assume that in your config.yaml you put some real domain name that you have registered instead of botfront.yoursite.com. Supposing your domain is yoursite.com and you successfully created the subdomain botfront.yoursite.com, you should redirect it to the IP of your load balancer (the one used by your ingress).
If you run kubectl get ingresses -n botfront but the ADDRESS is empty, you probably don't have nginx-ingress installed and the underlying load balancer cannot be created. If you see some external IP address there, then redirect your registered domain to that address.

Kubernetes: how to access service if nodePort is random?

I'm new to K8s and am currently using Minikube to play around with the platform. How do I configure a public (i.e. outside the cluster) port for the service? I followed the nginx example, and K8s service tutorials. In my case, I created the service like so:
kubectl expose deployment/mysrv --type=NodePort --port=1234
The service's port is 1234 for anyone trying to access it from INSIDE the cluster. The Minikube tutorials say I need to access the service directly through its random nodePort, which works for manual testing purposes:
kubectl describe service mysrv | grep NodePort
...
NodePort: <unset> 32387/TCP
# curl "http://`minikube ip`:32387/"
But I don't understand how, in a real cluster, the service could have a fixed world-accessible port. The nginx examples describe something about using the LoadBalancer service kind, but they don't even specify ports there...
Any ideas how to fix the external port for the entire service?
The minikube tutorials say I need to access the service directly through it's random nodePort, which works for manual testing purposes:
When you create a service object of type NodePort with the $ kubectl expose command, you cannot choose your NodePort port. To choose a NodePort port, you will need to create a YAML definition of it.
You can manually specify the port in a service object of type NodePort as in the example below:
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  selector:
    app: hello # selector for deployment
  ports:
    - name: example-port
      protocol: TCP
      port: 1234        # CLUSTERIP PORT
      targetPort: 50001 # POD PORT WHICH APPLICATION IS RUNNING ON
      nodePort: 32222   # HERE!
You can apply the above YAML definition by invoking the command:
$ kubectl apply -f FILE_NAME.yaml
The service object above will be created only if the nodePort port is available for use.
But I don't understand how, in a real cluster, the service could have a fixed world-accessible port.
In clusters managed by cloud providers (for example GKE) you can use a service object of type LoadBalancer, which will have a fixed external IP and a fixed port.
Clusters that have nodes with public IPs can use a service object of type NodePort to direct traffic into the cluster.
In a minikube environment you can use a service object of type LoadBalancer, but it will have some caveats described in the last paragraph.
A little bit of explanation:
NodePort
NodePort exposes the service on each node's IP at a static port. It allows external traffic to enter through the NodePort port. This port will be automatically assigned from the range 30000 to 32767.
You can change the default NodePort port range by following this manual; a sketch of the relevant flag is shown below.
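On kubeadm clusters this is a kube-apiserver flag; a hedged sketch of a static pod manifest fragment (the file path and the example range are assumptions):
# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --service-node-port-range=20000-40000   # example custom range
        # ... other flags unchanged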
You can check what is exactly happening when creating a service object of type NodePort by looking on this answer.
Imagine that:
Your nodes have IPs:
192.168.0.100
192.168.0.101
192.168.0.102
Your pods respond on port 50001 with hello, and they have IPs:
10.244.1.10
10.244.1.11
10.244.1.12
Your Services are:
NodePort (port 32222) with:
ClusterIP:
  IP: 10.96.0.100
  port: 7654
  targetPort: 50001
A word about targetPort: it defines the port on the pod on which the application (for example a web server) is listening.
According to the above example, you will get a hello response from:
NodeIP:NodePort (all the pods could respond with hello):
192.168.0.100:32222
192.168.0.101:32222
192.168.0.102:32222
ClusterIP:port (all the pods could respond with hello):
10.96.0.100:7654
PodIP:targetPort (only the pod that the request is sent to can respond with hello):
10.244.1.10:50001
10.244.1.11:50001
10.244.1.12:50001
You can check access with the curl command as below:
$ curl http://NODE_IP:NODEPORT
In the example you mentioned:
$ kubectl expose deployment/mysrv --type=NodePort --port=1234
What will happen:
It will assign a random port from the range 30000 to 32767 on your minikube instance, directing traffic entering this port to the pods.
Additionally, it will create a ClusterIP with a port of 1234.
In the example above there was no targetPort parameter. If targetPort is not provided, it will be the same as the port in the command.
Traffic entering a NodePort will be routed directly to pods and will not go to the ClusterIP.
From the minikube perspective, a NodePort will be a port on your minikube instance. Its IP address will depend on the hypervisor used. Exposing it outside your local machine is heavily dependent on the operating system.
LoadBalancer
There is a difference between a service object of type LoadBalancer(1) and an external LoadBalancer(2):
A service object of type LoadBalancer(1) allows you to expose a service externally using a cloud provider's LoadBalancer(2). It's a resource within the Kubernetes environment that, through the service controller, can schedule the creation of an external LoadBalancer(2).
An external LoadBalancer(2) is a load balancer provided by the cloud provider. It operates at Layer 4.
Example definition of service of type LoadBalancer(1):
apiVersion: v1
kind: Service
metadata:
  name: example-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
    - port: 1234        # LOADBALANCER PORT
      targetPort: 50001 # POD PORT WHICH APPLICATION IS RUNNING ON
      nodePort: 32222   # PORT ON THE NODE
Applying the above YAML will create a service of type LoadBalancer(1).
Take a specific look at:
ports:
  - port: 1234 # LOADBALANCER PORT
This definition will simultaneously:
specify external LoadBalancer(2) port as 1234
specify ClusterIP port as 1234
Imagine that:
Your external LoadBalancer(2) has:
  ExternalIP: 34.88.255.5
  port: 7654
Your nodes have IPs:
192.168.0.100
192.168.0.101
192.168.0.102
Your pods respond on port 50001 with hello, and they have IPs:
10.244.1.10
10.244.1.11
10.244.1.12
Your Services are:
NodePort (port 32222) with:
ClusterIP:
  IP: 10.96.0.100
  port: 7654
  targetPort: 50001
According to the above example, you will get a hello response from:
ExternalIP:port (all the pods could respond with hello):
34.88.255.5:7654
NodeIP:NodePort (all the pods could respond with hello):
192.168.0.100:32222
192.168.0.101:32222
192.168.0.102:32222
ClusterIP:port (all the pods could respond with hello):
10.96.0.100:7654
PodIP:targetPort (only the pod that the request is sent to can respond with hello):
10.244.1.10:50001
10.244.1.11:50001
10.244.1.12:50001
The ExternalIP can be checked with the command: $ kubectl get services
Flow of the traffic:
Client -> LoadBalancer:port(2) -> NodeIP:NodePort -> Pod:targetPort
Minikube: LoadBalancer
Note: This feature is only available for cloud providers or environments which support external load balancers.
-- Kubernetes.io: Create external LoadBalancer
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
-- Kubernetes.io: Hello minikube
Minikube can create a service object of type LoadBalancer(1), but it will not create an external LoadBalancer(2).
The EXTERNAL-IP in the output of $ kubectl get services will have pending status.
To address the fact that there is no external LoadBalancer(2), you can invoke $ minikube tunnel, which will create a route from the host to the minikube environment to access the CIDR of the ClusterIP directly.
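A short usage sketch (example-loadbalancer is the Service from the manifest above):
$ minikube tunnel                              # keep running in a separate terminal
$ kubectl get services example-loadbalancer    # EXTERNAL-IP should now be populated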
There is a small mistake in Dawid Kruk's answer:
Traffic entering a NodePort will be routed directly to pods and will not go to the ClusterIP.
But as k8s documents here:
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Traffic entering a NodePort does go to the ClusterIP.

Where do I find the host IP address for an app deployed in minikube

I'm deploying a Spring Boot app in minikube that connects to a database running on the host. Where do I find the IP address that the app can use to get back to the host? For Docker I can use ifconfig and get the IP address from the docker0 entry. ifconfig shows another device with the IP address 172.18.0.1. Would that be how my app would get back to the host?
I think I understood you correctly, and this is what you are asking for.
Minikube is started as a VM on your machine. You need to know the IP that Minikube starts with. This can be done with minikube status or minikube ip; the output might look like:
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.1
This will only provide you the IP address of Minikube, not your application.
In order to connect to your app from outside Minikube, you need to expose it as a Service.
Example of a Service might look like this:
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: NodePort
  ports:
    - nodePort: 31317
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: webapp
You can see results:
$ kubectl get services -o wide
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE   SELECTOR
postgres   ClusterIP   10.0.0.140   <none>        5432/TCP         32m   app=postgres
webapp     NodePort    10.0.0.235   <none>        8080:31317/TCP   2s    app=webapp
You will be able to connect to the webapp from inside the cluster using 10.0.0.235:8080, or from outside the cluster using the Minikube IP and port 31317.
I also recommend going through Hello Minikube tutorial.
It was the 172.18.0.1 IP address. I passed it to the Spring app running in minikube with a configmap like this:
kubectl create configmap springdatasourceurl --from-literal=SPRING_DATASOURCE_URL=jdbc:postgresql://172.18.0.1:5432/bookservice
The app also needed SPRING_DATASOURCE_DRIVER_CLASS_NAME to be set in a ConfigMap, and the credentials SPRING_DATASOURCE_PASSWORD and SPRING_DATASOURCE_USERNAME to be set as Secrets.
More information on ConfigMaps and Secrets can be found here.
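A hedged sketch of creating those Secrets (the secret name and literal values are placeholders, not from the original answer):
kubectl create secret generic springdatasourcecredentials \
  --from-literal=SPRING_DATASOURCE_USERNAME=postgres \
  --from-literal=SPRING_DATASOURCE_PASSWORD=changeme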

Kubernetes Service not being assigned an (external) IP address

There are various answers to very similar questions around SO that all show what I expect my deployment to look like; however, mine does not.
I am running Minikube 0.25, with Kubernetes 1.9, on Windows 10.
I have successfully created a node, a replication controller, and a single pod template that has been replicated 10 times.
The node is Minikube, and is assigned the IP address 10.49.106.251
The dashboard is available at 10.49.106.251:30000
I am deploying a service with a YAML file, but the service is never assigned an external IP - the result is the same if I happen to use kubectl expose.
The YAML file that I am using:
kind: Service
apiVersion: v1
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 8080
I can also use the YAML file to assign an external IP - I assign it the same value as the node IP address. Either way results in no possible connection to the service. I should also point out that the 10 replicated pods all match the selector.
The results of running kubectl get svc for the default setup, and after updating the external IP, are below:
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
hello-service   NodePort   10.108.61.233   <none>          8080:32406/TCP   1m
hello-service   NodePort   10.108.61.233   10.49.106.251   8080:32406/TCP   1m
The tutorial I have been following, and the other answers on SO, show a result similar to:
hello-service   NodePort   10.108.61.233   <nodes>   8080:32406/TCP   1m
Where the difference is that the external IP is set to <nodes>
I have encountered a number of issues when running locally - is this just another case of doing so, or has someone else identified a way to get around the external IP assignment issue?
For local development purposes, I have also run into the problem of exposing a 'public IP' for my local development cluster.
Fortunately, I found a kubectl command which can help:
kubectl port-forward service/service-name 9092
where 9092 is the port to expose, so that I can access applications inside the cluster from my local development environment.
The important note is that this is not a 'production'-grade solution.
It works well as a temporary hack to get at the cluster's insides.
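If the local port should differ from the service port, kubectl port-forward also accepts an explicit LOCAL:REMOTE mapping; a sketch with placeholder ports:
kubectl port-forward service/service-name 8080:9092   # local 8080 -> service port 9092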
Using NodePort means it will open a port on all nodes of your cluster. In your example above, the port exposed to the outside world is 32406.
In order to access hello-service (if it is HTTP), the URL will be http://[the node IP]:32406/. This will hit your minikube, and the request will be routed to your pods in round-robin fashion.
Same problem when trying to deploy a simple hello-world image locally with Kubernetes v1.9.2.
After two weeks of attempts, it seems that nginx web server applications listen internally on port 80, not 8080.
So this should work: kubectl expose deployment hello-service --type=NodePort --port=80