I'm running my application on an EKS cluster. A few days back we encountered an issue. We have an application pod running with a replica count of one on an AWS node (an EC2 VM); the pods and their nodes look like this:
ams-99547bd55-9fp6r 1/1 Running 0 7m31s 10.255.114.81 ip-10-255-12-11.eu-central-1.compute.internal
mongodb-58746b7584-z82nd 1/1 Running 0 21h 10.255.113.10 ip-10-255-12-11.eu-central-1.compute.internal
Here are my running services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ams-service NodePort 172.20.81.165 <none> 3030:30010/TCP 18m
mongodb-service NodePort 172.20.158.81 <none> 27017:30003/TCP 15d
I have a setting.conf.yaml file deployed as a ConfigMap, which holds the application-related configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ama-settings
  labels:
    conf: ams-settings
data:
  config: |
    {
      "git": {
        "prefixUrl": "ssh://git@10.255.12.11:30001/app-server/repos/ams/",
        "author": {
          "name": "app poc",
          "mail": "app@domain.com"
        }
      },
      "mongodb": {
        "host": "10.255.12.11",
        "port": "30003",
        "database": "ams",
        "ssl": false
      }
    }
This works as we expected. But when I delete the running pod and redeploy it, the pod may be placed on some other AWS node (EC2 VM).
When that happens my application stops working, and I have to edit my setting.conf.yaml file to update it with the new AWS node IP where my pod is running.
So the question: how can I use the service name instead of the AWS node IP? We don't want to keep changing the IP address whenever an existing VM goes down.
Ideally, instead of using the AWS IP, you should be binding to 0.0.0.0 (all interfaces). Reference doc
For example, in Node:

const express = require("express");
const cors = require("cors");

const app = express();
app.use(cors());

const port = process.env.PORT || 8000;
// Bind to 0.0.0.0 so the server listens on all interfaces,
// not just on localhost inside the container.
app.listen(port, "0.0.0.0", () => {
  console.log(`Server is running on port ${port}`);
});
However, if you want to use the service name: you can reach a service from inside the cluster by its fully qualified domain name (I am not sure it works as the listen host, though; 0.0.0.0 is the better option there):

<service-name>.<namespace>.svc.cluster.local

For example:

ams-service.default.svc.cluster.local
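Applied to the ConfigMap from the question, the MongoDB section could then be written as a minimal sketch like this (assuming the services run in the default namespace; note that the service port 27017 is used rather than the NodePort 30003, since the traffic now stays inside the cluster):

data:
  config: |
    {
      "mongodb": {
        "host": "mongodb-service.default.svc.cluster.local",
        "port": "27017",
        "database": "ams",
        "ssl": false
      }
    }

The git prefixUrl would need a Service of its own in the same way. Because the DNS name always resolves to the service's current ClusterIP, the configuration survives pod rescheduling.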
Related
This is the simplest config straight from the docs, but when I create the service, kubectl lists the target port as something random. Setting the target port to 1337 in the YAML:
apiVersion: v1
kind: Service
metadata:
  name: sails-svc
spec:
  selector:
    app: sails
  ports:
    - port: 1337
      targetPort: 1337
  type: LoadBalancer
And this is what k8s sets up for services:
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP <X.X.X.X> <none> 443/TCP 23h
sails LoadBalancer <X.X.X.X> <X.X.X.X> 1337:30203/TCP 3m6s
svc-postgres ClusterIP <X.X.X.X> <none> 5432/TCP 3m7s
Why is k8s setting the target port to 30203, when I'm specifying 1337? It does the same thing if I try other port numbers, 80 gets 31887. I've read the docs but disabling those attributes did nothing in GCP. What am I not configuring correctly?
The kubectl get services output shows Port:NodePort/Protocol information, so the 30203 you are seeing is not the target port but the NodePort. By default, and for convenience, the Kubernetes control plane allocates the NodePort from a range (default: 30000-32767). (Refer to the example in this documentation.)
To get the targetPort information, try using:

kubectl get service <your service name> --output yaml

This command shows all the port details, and the stable external IP address under loadBalancer: ingress:.
Refer to this documentation for more details on creating a Service of type LoadBalancer.
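If the randomly allocated NodePort is a problem, it can also be pinned explicitly. A minimal sketch for the service above (31337 is an arbitrary choice from the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: sails-svc
spec:
  selector:
    app: sails
  ports:
    - port: 1337
      targetPort: 1337
      nodePort: 31337   # pinned; must fall inside the cluster's NodePort range
  type: LoadBalancer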
Maybe this was tripping me up more than it should have, due to some redirects I didn't realize were happening, but after ironing out some things with my internal container, this worked.
Yields:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.3.240.1 <none> 443/TCP 28h
sails LoadBalancer 10.3.253.83 <X.X.X.X> 1337:30766/TCP 9m59s
svc-postgres ClusterIP 10.3.248.7 <none> 5432/TCP 12m
I can curl against EXTERNAL-IP:1337. The internal target port was what was tripping me up. I thought that meant my pod needed to open up that port and that pod applications were supposed to bind to it (i.e. 30766), but that's not the case. That port is some internal port mapping to the pod that I still don't fully understand yet; the pod still gets external traffic on port 1337 delivered to its own port 1337. I'd like to understand what's going on there better as I get more into the k8s networking section of the docs, or if anyone can enlighten me.
I'm following this kubernetes tutorial to create a service https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service
I'm using minikube on my local environment. Everything works fine but I cannot curl my cluster IP. I have an operation timeout:
curl: (7) Failed to connect to 10.105.7.117 port 80: Operation timed out
My kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d17h
my-nginx ClusterIP 10.105.7.117 <none> 80/TCP 42h
It seems that I'm having the same issue as this person here, who did not find an answer to their problem: https://github.com/kubernetes/kubernetes/issues/86471
I have tried to do the same in my gcloud console, with the same result: I can only curl the service's external IP.
If I understood correctly, I'm supposed to already be inside my minikube local cluster when I start minikube, so I should be able to curl the service as mentioned in the tutorial.
What am I doing wrong?
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster. That is why you cannot access your service via ClusterIP from outside the cluster.
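As a quick check from inside the cluster, you can run a throwaway curl pod (this uses the busyboxplus:curl image from the same tutorial, and the ClusterIP from the question):

$ kubectl run curl --image=radial/busyboxplus:curl -i --tty --rm
# then, at the prompt inside the container:
$ curl http://10.105.7.117:80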
NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
kind: Service
apiVersion: v1
metadata:
  name: example
  namespace: example
spec:
  type: NodePort
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: ui
Then execute the command:
$ kubectl get svc --namespace=example
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example NodePort yy.zz.xx.xx <none> 8080:30960/TCP 1d
Get the minikube IP to use as the node IP:
$ minikube ip
aa.bb.cc.dd
then you can curl it on the NodePort (30960 in the example output above), not on the service port:

curl http://aa.bb.cc.dd:30960
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
kind: Service
apiVersion: v1
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
  type: LoadBalancer
  externalIPs:
    - <your minikube ip>
then you can curl it:
$ curl http://yourminikubeip:8080/
ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with that name. No proxy is used. This type requires v1.7 or higher of kube-dns. The service itself is only exposed within the cluster; however, the FQDN external-name is not handled or controlled by the cluster. It is likely a publicly accessible URL, so you can curl it from anywhere. You'll have to configure your domain in a way that restricts who can access it.
The service type externalName is external to the cluster and really only allows for a CNAME redirect from within your cluster to an external path.
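A minimal sketch of such a service (my.database.example.com is a made-up external hostname):

kind: Service
apiVersion: v1
metadata:
  name: example-external
spec:
  type: ExternalName
  externalName: my.database.example.com

Inside the cluster, example-external then resolves via a CNAME record to my.database.example.com.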
See more: exposing-services-kubernetes.
ClusterIP is only reachable inside the Kubernetes network.
If you want to be able to hit this from outside of the cluster, use a LoadBalancer to expose a public IP that you can then access externally.
Or..
kubectl port-forward <pod_name> 8080:80
then curl
curl http://localhost:8080
which will route through the port-forward to port 80 of the pod.
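You can also port-forward the service itself rather than a specific pod; for the my-nginx service from the question, that would be:

kubectl port-forward svc/my-nginx 8080:80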
I have created a VPC-native cluster on GKE, master authorized networks disabled on it.
I think I did everything correctly, but I still can't access the app externally.
Below is my service manifest.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
    - name: '3000'
      port: 80
      targetPort: 3000
      protocol: TCP
      nodePort: 30382
  selector:
    io.kompose.service: app
  type: NodePort
The app's container port is 3000, and I checked from the logs that it is working.
I added a firewall rule to open port 30382 in my VPC network too.
I still can't reach the node on the specified nodePort.
Is there anything I am missing?
kubectl get ep:
NAME ENDPOINTS AGE
app 10.20.0.10:3000 6h17m
kubernetes 34.69.50.167:443 29h
kubectl get svc:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app NodePort 10.24.6.14 <none> 80:30382/TCP 6h25m
kubernetes ClusterIP 10.24.0.1 <none> 443/TCP 29h
In Kubernetes, a Service is used to communicate with pods.
To expose pods outside the Kubernetes cluster, you will need a k8s Service of type NodePort.
The NodePort setting applies to Kubernetes services. By default, Kubernetes services are accessible at their ClusterIP, an internal IP address reachable only from inside the cluster. The ClusterIP enables the applications running within the pods to access the service. To make a service accessible from outside the cluster, you can create a service of type NodePort.
Please note that you need an external IP address assigned to one of the nodes in the cluster, and a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node (the one the external IP address is attached to) will proxy that port to the pods selected by the service.
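A sketch of checking both prerequisites on GKE (the firewall rule name test-node-port is arbitrary, and the rule here is deliberately broad; you may want to restrict it):

# find a node's external IP
kubectl get nodes --output wide

# allow ingress to the NodePort from the question
gcloud compute firewall-rules create test-node-port --allow tcp:30382

# then, from outside the cluster
curl http://<node-external-ip>:30382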
I built my own one-host Kubernetes cluster (1 host, 1 node, many namespaces, many pods and services) on a virtual machine running on an always-on server.
The applications running on the cluster are working fine (basically, a NodeJS backend and HTML frontend).
So far, I have a NodePort Service, which is exposing Port 30000:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik-ingress-service NodePort 10.109.211.16 <none> 443:30000/TCP 147d
So, now I can access the web interface by typing https://<server-alias>:30000 in my browser's address bar.
But I would like to access it without giving the port, by only typing https://<server-alias>.
I know this can be done with the kubectl port-forward command:
kubectl -n kube-system port-forward --address 0.0.0.0 svc/traefik-ingress-service 443:443
This works, but it does not seem to be a very professional thing to do.
Port forwarding also seems to keep disconnecting from time to time. Sometimes it throws an error and quits, but leaves the process open, which leaves the port open; I have to kill the process manually.
So, is there a way to do this access-my-application stuff professionally? How do the cluster providers (AWS, GCP...) do it?
Thank you!
Using Ingress Nginx you can access your website by the server name:
Step 1: Install the Nginx ingress controller in your cluster; you can follow this link.
After the installation is completed you will have a new pod:
NAME READY STATUS
nginx-ingress-xxxxx 1/1 Running
And a new Service
NAME TYPE CLUSTER-IP EXTERNAL-IP
nginx-ingress LoadBalancer 10.109.x.y a.b.c.d
Step 2: Create a new deployment for your application, but make sure that you use the same namespace as the nginx ingress svc/pod for your application, and that you set the svc type to ClusterIP.
Step 3: Create the Kubernetes Ingress object
Now you have to create the ingress object:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: <same namespace as above>
spec:
  rules:
    - host: <your DNS server-alias>
      http:
        paths:
          - backend:
              serviceName: <svc name>
              servicePort: <svc port>
Now you can access your website using the <server-alias>.
To create a DNS name for free you can use Freenom, or you can use /etc/hosts
and update it with:

a.b.c.d server-alias
Since the type of your Traefik ingress service is NodePort, you access it on the allocated port, which will have a value from the 30000-32767 range.
You can also configure it to be of type LoadBalancer and interface with a cloud-based load balancer (see the sketch after the links below).
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
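A minimal sketch of that change for the service above (the selector is an assumption; match whatever labels your Traefik deployment actually uses):

apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  type: LoadBalancer              # was NodePort
  selector:
    k8s-app: traefik-ingress-lb   # assumed label, adjust to your deployment
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443

Note that on a self-hosted cluster there is no cloud controller to provision the external load balancer, so the EXTERNAL-IP will stay pending unless you add something like MetalLB; managed providers (AWS, GCP...) implement this step with their own load balancers.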
Here's a very related question: Should I use NodePort in my Traefik deployment on Kubernetes?
I am learning k8s. My question is: how do I let k8s give me a service URL, the way the minikube command "minikube service xxx --url" does?
The reason I ask is that when a pod goes down and is created/initiated again, there should be no need to change the URL used to visit the service. When I deploy a pod as NodePort, I can access the pod with the host IP and port, but if it is re-initiated/created, the port changes.
My case is illustrated below. I have:
one master (172.16.100.91) and
one node (hostname node3, 172.16.100.96)
I create the pods and services as below: hellocomm deployed as NodePort, and helloext deployed as ClusterIP. hellocomm and helloext are both Spring Boot hello-world applications.
docker build -t jshenmaster2/hellocomm:0.0.2 .
kubectl run hellocomm --image=jshenmaster2/hellocomm:0.0.2 --port=8080
kubectl expose deployment hellocomm --type NodePort
docker build -t jshenmaster2/helloext:0.0.1 .
kubectl run helloext --image=jshenmaster2/helloext:0.0.1 --port=8080
kubectl expose deployment helloext --type ClusterIP
[root@master2 shell]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
hellocomm NodePort 10.108.175.143 <none> 8080:31666/TCP 8s run=hellocomm
helloext ClusterIP 10.102.5.44 <none> 8080/TCP 2m run=helloext
[root@master2 hello]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
hellocomm-54584f59c5-7nxp4 1/1 Running 0 18m 192.168.136.2 node3
helloext-c455859cc-5zz4s 1/1 Running 0 21m 192.168.136.1 node3
In the above, my pod is deployed on node3 (172.16.100.96), so I can access hellocomm via 172.16.100.96:31666/hello.
With this scenario, one can easily see that when node3 goes down and a new pod is created/initiated, the port changes as well, so my client loses the connection. I do not want this solution.
My current question: as helloext is deployed as ClusterIP, and it is also a service as shown above, does that mean ClusterIP 10.102.5.44 and port 8080 form the service URL, i.e. http://10.102.5.44:8080/hello?
Do I need to create the service again via a YAML file? What is the difference between a service created by command and one created from a YAML file? How would I write the following YAML file if I have to create the service by YAML?
Below is the YAML definition template I need to fill in. How do I fill it?
apiVersion: v1
kind: Service
metadata:
  name: string helloext
  namespace: string default
  labels:
    - name: string helloext
  annotations:
    - name: string hello world
spec:
  selector: [] ?
  type: string ?
  clusterIP: string anything I could give?
  sessionAffinity: string ? (yes or no)
  ports:
    - name: string helloext
      protocol: string tcp
      port: int 8081? (port used by host machine)
      targetPort: int 8080? (spring boot uses 8080)
      nodePort: int ?
status: since I am not using loadBalancer in deployment, I could forget this.
  loadBalancer:
    ingress:
      ip: string
      hostname: string
NodePort, as the name suggests, opens a port directly on the node (actually on all nodes in the cluster) so that you can access your service. By default a random one is allocated when the service is created - so deleting and re-creating the service gets you a new one. However, you can specify the port as well (3rd paragraph here) - and then you will be able to access on the same port even after the service has been re-created.
The clusterIP is only accessible inside the cluster, as it's a private IP. Meaning, in a default scenario you can access this service from another container / node inside the cluster. You can exec / ssh into any running container/node and try it out.
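For example, you could verify the ClusterIP URL directly from the running hellocomm pod (pod name and ClusterIP taken from the outputs above; this assumes the image ships curl):

kubectl exec -it hellocomm-54584f59c5-7nxp4 -- curl http://10.102.5.44:8080/hello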
Yaml files can be version controlled, documented, templatized (Helm), etc.
Check https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#servicespec-v1-core for details on each field.
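As a concrete sketch of the filled-in template for helloext: the selector run=helloext comes from the kubectl get service -o wide output above, while the nodePort value 30080 is an arbitrary pick from the default range:

apiVersion: v1
kind: Service
metadata:
  name: helloext
  namespace: default
  labels:
    app: helloext
spec:
  type: NodePort
  selector:
    run: helloext            # kubectl run puts this label on the pods
  sessionAffinity: None
  ports:
    - name: http
      protocol: TCP
      port: 8080             # port on the ClusterIP
      targetPort: 8080       # container port (Spring Boot default)
      nodePort: 30080        # pinned, so it stays stable across re-creation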
EDIT:
More detailed info on services here: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
What about creating an Ingress and pointing it at the service, to access it from outside the cluster?