I am learning k8s. My question is: how can I get a stable service URL from Kubernetes itself, the way the minikube command "minikube service xxx --url" does?
I ask because when a pod goes down and is created/initiated again, clients should not have to change the URL they use to reach the service. When
I deploy a pod as NodePort, I can access the pod with the host IP and port, but when it is recreated, the port changes.
My setup is illustrated below. I have
one master (172.16.100.91) and
one node (hostname node3, 172.16.100.96).
I create the pods and services as below: hellocomm is deployed as NodePort, and helloext is deployed as ClusterIP. hellocomm and helloext are both
Spring Boot hello-world applications.
docker build -t jshenmaster2/hellocomm:0.0.2 .
kubectl run hellocomm --image=jshenmaster2/hellocomm:0.0.2 --port=8080
kubectl expose deployment hellocomm --type NodePort
docker build -t jshenmaster2/helloext:0.0.1 .
kubectl run helloext --image=jshenmaster2/helloext:0.0.1 --port=8080
kubectl expose deployment helloext --type ClusterIP
[root@master2 shell]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
hellocomm NodePort 10.108.175.143 <none> 8080:31666/TCP 8s run=hellocomm
helloext ClusterIP 10.102.5.44 <none> 8080/TCP 2m run=helloext
[root@master2 hello]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
hellocomm-54584f59c5-7nxp4 1/1 Running 0 18m 192.168.136.2 node3
helloext-c455859cc-5zz4s 1/1 Running 0 21m 192.168.136.1 node3
In the above, my pod is deployed on node3 (172.16.100.96), so I can access hellocomm at 172.16.100.96:31666/hello.
With this scenario, one can easily see that when node3 goes down and a new pod is created, the port changes as well,
so my client loses its connection. I do not want this solution.
My current question: since helloext is deployed as ClusterIP and it is also a service, as shown above, does that mean the ClusterIP
10.102.5.44 and port 8080 form the service URL, i.e. http://10.102.5.44:8080/hello?
Do I need to create the service again with a YAML file? What is the difference between a service created by command and one created from a YAML file? How should I write
the following YAML file if I have to create the service that way?
Below is the YAML definition template I need to fill in. How should I fill it?
apiVersion: v1
kind: Service
metadata:
  name: string helloext
  namespace: string default
  labels:
    - name: string helloext
  annotations:
    - name: string hello world
spec:
  selector: [] ?
  type: string ?
  clusterIP: string anything I could give?
  sessionAffinity: string ? (yes or no)
  ports:
    - name: string helloext
      protocol: string tcp
      port: int 8081? (port used by host machine)
      targetPort: int 8080? (spring boot uses 8080)
      nodePort: int ?
status: since I am not using loadBalancer in deployment, I could forget this.
  loadBalancer:
    ingress:
      ip: string
      hostname: string
NodePort, as the name suggests, opens a port directly on the node (actually on all nodes in the cluster) so that you can access your service from outside. By default the port is random, chosen when the Service is created - that's why you get a different one each time the Service is re-created. However, you can specify the port explicitly as well (3rd paragraph here) - and then you will be able to access the service on the same port even after it has been re-created.
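For example, a minimal sketch for the hellocomm service from the question with a pinned port (31666 just happens to be the port it currently got; any value in the 30000-32767 range works):
apiVersion: v1
kind: Service
metadata:
  name: hellocomm
spec:
  type: NodePort
  selector:
    run: hellocomm          # kubectl run labels the pods with run=<name>
  ports:
    - port: 8080            # Service port inside the cluster
      targetPort: 8080      # port the Spring Boot container listens on
      nodePort: 31666       # pinned node port; stays the same across re-creation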
The clusterIP is only accessible inside the cluster, as it's a private IP. Meaning, in a default scenario you can access this service from another container / node inside the cluster. You can exec / ssh into any running container/node and try it out.
Yaml files can be version controlled, documented, templatized (Helm), etc.
Check https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#servicespec-v1-core for details on each field.
EDIT:
More detailed info on services here: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
What about creating an Ingress and pointing it at the service to access it from outside the cluster?
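A minimal Ingress sketch for the helloext service, assuming an ingress controller (e.g. NGINX) is installed in the cluster; the path is made up for illustration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloext-ingress
spec:
  rules:
    - http:
        paths:
          - path: /hello
            pathType: Prefix
            backend:
              service:
                name: helloext     # the ClusterIP service from the question
                port:
                  number: 8080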
Related
This is the simplest config straight from the docs, but when I create the service, kubectl lists the target port as something random. Setting the target port to 1337 in the YAML:
apiVersion: v1
kind: Service
metadata:
  name: sails-svc
spec:
  selector:
    app: sails
  ports:
    - port: 1337
      targetPort: 1337
  type: LoadBalancer
And this is what k8s sets up for services:
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP <X.X.X.X> <none> 443/TCP 23h
sails LoadBalancer <X.X.X.X> <X.X.X.X> 1337:30203/TCP 3m6s
svc-postgres ClusterIP <X.X.X.X> <none> 5432/TCP 3m7s
Why is k8s setting the target port to 30203, when I'm specifying 1337? It does the same thing if I try other port numbers, 80 gets 31887. I've read the docs but disabling those attributes did nothing in GCP. What am I not configuring correctly?
The kubectl get services output shows PORT(S) as port:nodePort/protocol. The 30203 you are seeing is not the target port but the NodePort. By default, and for convenience, the Kubernetes control plane allocates the NodePort from a range (default: 30000-32767); refer to the example in this documentation.
To get the TargetPort information try using
kubectl get service <your service name> --output yaml
This command shows the full port details, and the stable external IP address under loadBalancer: ingress:.
Refer to this documentation for more details on creating a Service of type LoadBalancer.
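The ports stanza of that YAML output would look roughly like this for your sails-svc (values taken from the question's output):
spec:
  ports:
    - port: 1337        # the Service port - what you curl on the EXTERNAL-IP
      protocol: TCP
      targetPort: 1337  # the port the container listens on
      nodePort: 30203   # auto-allocated from the 30000-32767 range
  type: LoadBalancer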
Maybe this was tripping me up more than it should have, due to some redirects I didn't realize were happening, but after ironing out some things with my internal container, this worked.
Yields:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.3.240.1 <none> 443/TCP 28h
sails LoadBalancer 10.3.253.83 <X.X.X.X> 1337:30766/TCP 9m59s
svc-postgres ClusterIP 10.3.248.7 <none> 5432/TCP 12m
I can curl against EXTERNAL-IP:1337. The second number in the PORT(S) column was what was tripping me up. I thought it meant my pod needed to open up that port and that pod applications were supposed to bind to it (i.e. 30766), but that's not the case. That is the NodePort, a port on the cluster nodes that maps to the pod, which I still don't fully understand yet; the pod still gets external traffic sent to port 1337 delivered on the pod's port 1337. I'd like to understand what's going on there better as I get more into the k8s Networking section of the docs, or if anyone can enlighten me.
I have some questions regarding my minikube cluster, specifically why there needs to be a tunnel, what the tunnel actually is, and where the port numbers come from.
Background
I'm obviously a total kubernetes beginner...and don't have a ton of networking experience.
Ok. I have the following docker image which I pushed to docker hub. It's a hello express app that just prints out "Hello world" at the / route.
Dockerfile:
FROM node:lts-slim
RUN mkdir /code
COPY package*.json server.js /code/
WORKDIR /code
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
I have the following pod spec:
web-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: web
      image: kahunacohen/hello-kube:latest
      ports:
        - containerPort: 3000
The following service:
web-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
    - port: 8080
      targetPort: 3000
      protocol: TCP
      name: http
And the following deployment:
web-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-pod
      service: web-service
  template:
    metadata:
      labels:
        app: web-pod
        service: web-service
    spec:
      containers:
        - name: web
          image: kahunacohen/hello-kube:latest
          ports:
            - containerPort: 3000
              protocol: TCP
All the objects are up and running and look good after I create them with kubectl.
I do this:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h5m
web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
Then, as per a book I'm reading, if I do:
$ curl $(minikube ip):8080 # or :32177, # or :3000
I get no response.
However, I found that when I do the following, I can access the app by going to http://127.0.0.1:52650/:
$ minikube service web-service
|-----------|-------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|---------------------------|
| default | web-service | http/8080 | http://192.168.49.2:32177 |
|-----------|-------------|-------------|---------------------------|
🏃 Starting tunnel for service web-service.
|-----------|-------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|------------------------|
| default | web-service | | http://127.0.0.1:52472 |
|-----------|-------------|-------------|------------------------|
Questions
what this "tunnel" is and why we need it?
what the targetPort is for (8080)?
What this line means when I do kubectl get services:
web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
Specifically, what does that port mapping mean and where does 32177 come from?
Is there some kind of problem with simply mapping the internal port to the same port number externally, e.g. 3000:3000? If so, do we specifically have to provide this mapping?
Let me answer all your questions.
0 - There's no need to create pods separately (unless it's just for testing); this should be done by creating deployments (or statefulsets, depending on the app and its needs), which create a replicaset that is responsible for keeping the right number of pods operational. (You can get familiar with deployments in the Kubernetes documentation.)
1 - The tunnel is used to expose the service from inside the VM where minikube is running to the host machine's network. It works with the LoadBalancer service type. Please refer to accessing applications in minikube.
1.1 - The reason the application is not accessible on localhost:NodePort is that the NodePort is exposed on the VM where minikube is running, not on your local machine.
You can find the minikube VM's IP by running minikube ip and then curl <minikube-ip>:<NodePort>. You should get a response from your app.
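For example (the IP and the NodePort 32177 below are taken from your outputs; yours may differ):
$ minikube ip
192.168.49.2
$ curl http://192.168.49.2:32177/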
2 - targetPort indicates the port on the pod (container) to which the Service forwards traffic. Please refer to defining a service.
The minikube service output may be confusing, since its TARGET PORT column shows the service port, not the targetPort defined within the Service. I think the idea was to indicate on which port the service is accessible within the cluster.
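To summarise the three port fields against your web-service.yaml (a sketch; the comments are my reading of it):
ports:
  - port: 8080        # the Service's own port: other pods in the cluster reach it at web-service:8080
    targetPort: 3000  # the containerPort your Express app listens on inside the pod
    # nodePort: 32177 # auto-allocated in your case; reachable from outside at <node IP>:32177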
3 - As for this question, the output has column headers and you can read them literally. For instance:
$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
web-service NodePort 10.106.206.158 <none> 80:30001/TCP 21m app=web-pod
NodePort comes from your web-service.yaml for the Service object. The type is explicitly specified, and therefore a NodePort is allocated. If you don't specify the service type, it will be created as ClusterIP and will be accessible only within the Kubernetes cluster. Please refer to Publishing Services (ServiceTypes).
When a service is created with the ClusterIP type, there won't be a NodePort in the output. E.g.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-service ClusterIP 10.106.206.158 <none> 80/TCP 23m
EXTERNAL-IP will pop up when the LoadBalancer service type is used. Additionally, in minikube the address will appear once you run minikube tunnel in a different shell. After that, your service will be accessible on your host machine via EXTERNAL-IP + service port.
4 - There are no issues with such a mapping. Moreover, this is the default behaviour for Kubernetes:
Note: A Service can map any incoming port to a targetPort. By default
and for convenience, the targetPort is set to the same value as the
port field.
Please refer to defining a service.
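So, answering question 4 concretely, a sketch of your web-service.yaml that maps 3000 to 3000 (only the port value changes relative to what you have):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
    - port: 3000        # targetPort is omitted, so it defaults to the same value (3000)
      protocol: TCP
      name: http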
Edit:
Depending on the minikube driver (usually VirtualBox or Docker - on Linux it can be checked in ~/.minikube/profiles/minikube/config.json), minikube can set up different port forwarding. E.g. I have a minikube based on the docker driver and I can see some mappings:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ebcbc898b557 gcr.io/k8s-minikube/kicbase:v0.0.23 "/usr/local/bin/entr…" 5 days ago Up 5 days 127.0.0.1:49157->22/tcp, 127.0.0.1:49156->2376/tcp, 127.0.0.1:49155->5000/tcp, 127.0.0.1:49154->8443/tcp, 127.0.0.1:49153->32443/tcp minikube
For instance, port 22 is forwarded so you can ssh into the minikube VM. This may explain why you got a response from http://127.0.0.1:52650/.
I'm following this kubernetes tutorial to create a service https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service
I'm using minikube in my local environment. Everything works fine, but I cannot curl my cluster IP; I get an operation timeout:
curl: (7) Failed to connect to 10.105.7.117 port 80: Operation timed out
My kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d17h
my-nginx ClusterIP 10.105.7.117 <none> 80/TCP 42h
It seems that I'm having the same issue as this person, who did not find an answer to his problem: https://github.com/kubernetes/kubernetes/issues/86471
I have tried to do the same in my gcloud console, but I get the same result. I can only curl my external IP service.
If I understood correctly, I'm supposed to already be inside my minikube local cluster when I start minikube, so I should be able to curl the service as mentioned in the tutorial.
What am I doing wrong?
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster. That is why you cannot access your service via ClusterIP from outside the cluster.
NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
kind: Service
apiVersion: v1
metadata:
  name: example
  namespace: example
spec:
  type: NodePort
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: ui
Then execute command:
$ kubectl get svc --namespace=example
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-ui NodePort yy.zz.xx.xx <none> 8080:30960/TCP 1d
Run minikube ip to get the node IP:
$ minikube ip
aa.bb.cc.dd
then you can curl it:
curl http://aa.bb.cc.dd:8080
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
kind: Service
apiVersion: v1
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
  type: LoadBalancer
  externalIPs:
    - <your minikube ip>
then you can curl it:
$ curl http://yourminikubeip:8080/
ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns. The service itself is only exposed within the cluster, however, the FQDN external-name is not handled or controlled by the cluster. This is likely a publicly accessible URL so you can curl from anywhere. You'll have to configure your domain in a way that restricts who can access it.
The service type ExternalName points outside the cluster and really only provides a CNAME redirect from within your cluster to an external name.
See more: exposing-services-kubernetes.
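For completeness, a minimal ExternalName sketch (the service name and target host are made up for illustration):
kind: Service
apiVersion: v1
metadata:
  name: example-external
spec:
  type: ExternalName
  externalName: example.example.com   # in-cluster lookups of example-external return a CNAME to this host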
ClusterIP is only available inside the kubernetes network.
If you want to be able to hit this from outside of the cluster use a LoadBalancer to expose a public IP that you can then access from outside of the cluster
Or..
kubectl port-forward <pod_name> 8080:80
then curl
curl http://localhost:8080
which will route through the port-forward to port 80 of the pod.
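As a side note, port-forward also works against the Service itself (it picks one backing pod), e.g.:
kubectl port-forward service/my-nginx 8080:80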
I have a requirement to expose pods, selected by a label selector, as a NodePort service from the command line. There can be one or more pods, and the service needs to include the pods dynamically as they come and go. For example,
NAMESPACE NAME READY STATUS RESTARTS AGE
rakesh rakesh-pod1 1/1 Running 0 4d18h
rakesh rakesh-pod2 1/1 Running 0 4d18h
I can create a service that selects the pods I want using a service-definition yaml file -
apiVersion: v1
kind: Service
metadata:
  name: np-service
  namespace: rakesh
spec:
  type: NodePort
  ports:
    - name: port1
      port: 30005
      targetPort: 30005
      nodePort: 30005
  selector:
    abc.property: rakesh
However, I need to achieve the same via the command line. I tried the following -
kubectl -n rakesh expose pod --selector="abc.property: rakesh" --port=30005 --target-port=30005 --name=np-service --type=NodePort
and a few other variations without success.
Note: I understand that currently, there is no way to specify the node-port using command-line. A random port is allocated between 30000-32767. I am fine with that as I can patch it later to the required port.
Also note: These pods are not part of a deployment unfortunately. Otherwise, expose deployment might have worked.
So, it kind of boils down to selecting pods based on selector and exposing them as a service.
My kubernetes version: 1.12.5 upstream
My kubectl version: 1.12.5 upstream
You can do:
kubectl expose $(kubectl get po -l abc.property=rakesh -o name) --port 30005 --name np-service --type NodePort
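Since you mentioned patching the node port afterwards, that could look roughly like this (assuming the created service ends up with a single port entry):
kubectl -n rakesh patch service np-service --type=json \
  -p='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 30005}]'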
You can create a NodePort service with a specified name using the create service nodeport command:
kubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run=server|client|none]
For example:
kubectl create service nodeport myservice --node-port=31000 --tcp=3000:80
You can check Kubectl reference for more:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-service-nodeport-em-
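One caveat: as far as I know, kubectl create service nodeport sets the selector to app: <NAME>, so for pods labelled abc.property: rakesh you would still need to adjust the selector afterwards, for example:
kubectl -n rakesh set selector service myservice abc.property=rakesh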
From the Kubernetes docs I see that there is a DNS-based service discovery mechanism. Does Google Container Engine support this? If so, what's the format of the DNS name to discover a service running inside Container Engine? I couldn't find the relevant information in the Container Engine docs.
The DNS name for services is as follows: {service-name}.{namespace}.svc.cluster.local.
Assuming you configured kubectl to work with your cluster you should be able to get your service and namespace details by the following the steps below.
Get your namespace
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
kube-system <none> Active
You should ignore the kube-system entry, because that is for the cluster itself. All other entries are your namespaces. By default there will be one extra namespace called default.
Get your services
$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
broker-partition0 name=broker-partition0,type=broker name=broker-partition0 10.203.248.95 5050/TCP
broker-partition1 name=broker-partition1,type=broker name=broker-partition1 10.203.249.91 5050/TCP
kubernetes component=apiserver,provider=kubernetes <none> 10.203.240.1 443/TCP
service-frontend name=service-frontend,service=frontend name=service-frontend 10.203.246.16 80/TCP
104.155.61.198
service-membership0 name=service-membership0,partition=0,service=membership name=service-membership0 10.203.246.242 80/TCP
service-membership1 name=service-membership1,partition=1,service=membership name=service-membership1 10.203.248.211 80/TCP
This command lists all the services available in your cluster. So for example, if I want to get the IP address of the service-frontend I can use the following DNS: service-frontend.default.svc.cluster.local.
Verify DNS with busybox pod
You can create a busybox pod and use that pod to execute nslookup command to query the DNS server.
$ kubectl create -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
      name: busybox
  restartPolicy: Always
EOF
Now you can do an nslookup from the pod in your cluster.
$ kubectl exec busybox -- nslookup service-frontend.default.svc.cluster.local
Server: 10.203.240.10
Address 1: 10.203.240.10
Name: service-frontend.default.svc.cluster.local
Address 1: 10.203.246.16
Here you see that the Address 1 entry is the IP of the service-frontend service, the same as the IP address listed by kubectl get services.
It should work the same way as mentioned in the doc you linked to. Have you tried that? (i.e. "my-service.my-ns")