My Spring app + Swagger (running on port 8080) works fine locally, but when I deploy the app on Kubernetes, the port information is missing from the Swagger base URL and it calls the host without a port, so the requests fail.
Locally, the UI shows the following:
[ Base URL: localhost:8080/customerservice/api ]
http://localhost:8080/customerservice/api/v2/api-docs
When deployed on Kubernetes, it shows:
[ Base URL: kubernetes-ip/customerservice/api ]
http://kubernetes-ip:32004/customerservice/api/v2/api-docs
but the base URL should be kubernetes-ip:32004/customerservice/api.
I created a Kubernetes NodePort service to expose the deployment; 32004 is the node port:
kind: Service
apiVersion: v1
metadata:
  name: customer-service
  namespace: altyapi
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 32004
  selector:
    app: customer
  type: NodePort
What am I missing here? Thanks for any help and suggestions.
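One way to address this is to set the host on the Docket explicitly, so the generated base URL carries the node port. This is a minimal sketch assuming Springfox 2.x (which matches the v2/api-docs path above); the host value "kubernetes-ip:32004" is a placeholder for your actual node address and would normally come from externalized configuration:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                // Explicit host so the Swagger base URL includes the NodePort;
                // "kubernetes-ip:32004" is a placeholder, ideally injected via config
                .host("kubernetes-ip:32004")
                .select()
                .apis(RequestHandlerSelectors.any())
                .paths(PathSelectors.any())
                .build();
    }
}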
Related
I am learning Kubernetes and have a simple deployment plus a NodePort service. I am not able to access my deployment through the NodePort. I tried the hyperkit, docker, and virtualbox drivers.
Context
My Java application is running on port 8080 (Tomcat server).
My service port is 8080.
My nodePort is 32000.
Here is the service file:
apiVersion: v1
kind: Service
metadata:
  name: file-process-service
  labels:
    app: file-process-service
spec:
  type: NodePort
  ports:
    - name: http
      port: 8080        # Service port
      targetPort: 8080  # Tomcat port
      nodePort: 32000   # NodePort
      protocol: TCP
  selector:
    app: file-process-app
The minikube URL is:
minikube service file-process-service --url
>> http://192.168.59.100:32000
Now, when I try to access it via Postman, I get connection refused. Can anyone tell me where I am going wrong, or how I can debug this further?
Thanks @DavidMaze - I checked the endpoints; they are None for my service.
As hinted by @DavidMaze: I found that the service was created, but no endpoints were. Debugging further, I found that I had specified the wrong pod selector in the service, hence no endpoints were created.
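For reference, this is the kind of check that surfaces the problem (a sketch using the service name from the question):

# "<none>" under ENDPOINTS means the selector matches no pods
kubectl get endpoints file-process-service

# Compare the service's selector with the labels actually on the pods
kubectl describe service file-process-service
kubectl get pods --show-labels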
I'm trying to achieve the following scenario:
My VM should be able to connect over a ClusterIP Service to the Pods behind it. The new beta feature here makes it look like this should be possible, but maybe I'm doing something wrong or have misunderstood something...
The full documentation is here:
https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-dns?hl=de#vpc_scope_dns
DNS is working: I get a service IP, and a route to the default network is available.
I can connect via the pod IP, but it seems the service IP is not routable from outside the cluster. I know that normally a ClusterIP is not reachable from outside the cluster, but then I don't understand why this feature exists, and it also does not match the diagram provided in the docs, since the feature seems to provide cross-cluster/VM communication via services.
apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
  namespace: default
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
Am I misunderstanding the feature, or am I missing some configuration?
This only works with headless Kubernetes services, so you'll need to modify your service spec to include clusterIP: None:
apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
  namespace: default
spec:
  clusterIP: None
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
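With the headless service in place, a VM in the same VPC should be able to resolve the service name straight to pod IPs (a sketch, assuming the default cluster.local DNS suffix and the port numbers from the question):

# Resolve the headless service; VPC-scope Cloud DNS should return the pod IPs
dig +short my-cip-service.default.svc.cluster.local

# Connect using the container port (8080), since the DNS answer is pod IPs,
# not a virtual service IP listening on port 80
curl http://my-cip-service.default.svc.cluster.local:8080/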
I have a legacy application we've started running in Kubernetes. The application listens on two different ports: one for the general web page and another for a web service. In the long run we may try to change some of this, but for the moment we're trying to get the legacy application to run as-is. The current configuration has a single service for both ports:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: my-app
  ports:
    - name: web
      port: 8080
      protocol: TCP
      targetPort: 8080
    - name: service
      port: 8081
      protocol: TCP
      targetPort: 8081
Then I'm using a single ingress to route traffic to the correct service port based on path:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
spec:
  rules:
    - host: myapp.test.com
      http:
        paths:
          - backend:
              serviceName: app
              servicePort: 8080
            path: /app
          - backend:
              serviceName: app
              servicePort: 8081
            path: /service
This works great for routing: requests coming into the ingress get routed to the correct service port based on path. However, for this legacy app to work, requests to both ports 8080 and 8081 need to be routed to the same pod for each client.

You can see I tried adding the upstream-hash-by annotation. This seemed to ensure that all requests to 8080 from one client went to the same pod, and all requests to 8081 from one client went to the same pod, but not that those are the same pod for any one client. When I run a single pod instance everything is great, but when I spin up additional pods, some clients get /app requests routed to one pod and /service requests routed to another, and in this application that does not currently work.

I have tried other annotations in the ingress, including nginx.ingress.kubernetes.io/affinity: "cookie" and nginx.ingress.kubernetes.io/affinity-mode: "persistent", as well as adding sessionAffinity: ClientIP to the service, but so far nothing has worked. The goal is that all requests to either path get routed to the same pod for any one client. Any help would be greatly appreciated.
Session persistence settings will only work if you configure kube-proxy to forward requests to the local pod only, not to random pods across the cluster.
You can do this by setting, at the service level:
service.spec.externalTrafficPolicy: Local
You can read more here:
https://kubernetes.io/docs/tutorials/services/source-ip/
After doing this, your ingress annotations should work. I have tested this with an external load balancer only, though, not with an ingress.
Keeping everything else the same, this service definition should work:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
    - name: web
      port: 8080
      protocol: TCP
      targetPort: 8080
    - name: service
      port: 8081
      protocol: TCP
      targetPort: 8081
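As a quick check that the change took effect (a sketch; the service name is taken from the question):

# Patch the existing service in place...
kubectl patch service app -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# ...and confirm the policy is set
kubectl get service app -o jsonpath='{.spec.externalTrafficPolicy}'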
If the Google Container Engine cluster has the service configured as LoadBalancer, it is available to the general public as expected. But if I change that to NodePort, it is not reachable at <nodeIp>:<nodePort>.
The service (web-service.yml) looks like this:
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
spec:
  type: NodePort
  # type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      nodePort: 30000
  selector:
    name: web
I would be very happy if someone could tell me why it isn't working.
Here is some background:
The cluster consists of a MongoDB deployment (db-deployment.yml) with a corresponding service (db-service.yml) and a Jetty deployment (web-deployment.yml) with a corresponding service (web-service.yml).
It can be found on GitHub as part of this project, with the corresponding Readme.md file.
Did you open the port (30000) in the firewall? Also make sure you use the public IP of your VM.
See this answer
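On Google Cloud, opening the port would look something like this (the firewall rule name is illustrative, and <node-external-ip> is a placeholder):

# Allow inbound traffic to the NodePort on the cluster's nodes
gcloud compute firewall-rules create allow-web-nodeport --allow tcp:30000

# Find a node's external IP, then test against it
kubectl get nodes -o wide
curl http://<node-external-ip>:30000/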
I'm trying to expose my Deployment on a port that I can access from my local computer via Minikube.
I have tried two YAML configurations (one a load balancer, one just a service exposing a port).
I: http://pastebin.com/gL5ZBZg7
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: LoadBalancer
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
  selector:
    app: bot
II: http://pastebin.com/sSuyhzC5
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
  selector:
    app: bot
The deployment and the Docker container image both expose port 8000, and the Pod is labeled app: bot.
The first results in a service whose provisioning never finishes: the external IP never gets assigned.
The second shows up as bot:8000 TCP, bot:0 TCP in my dashboard, and when I try "minikube service bot" nothing happens. The same happens if I run "kubectl expose service bot".
I am on Mac OS X.
How can I set this up properly?
The LoadBalancer service type is meant for cloud providers and is not really relevant for minikube.
From the documentation:
On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.
Using a Service of type NodePort (see documentation) as mentioned in the Networking part of the minikube documentation is the intended way of exposing services on minikube.
So your configuration should look like this:
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: NodePort
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30356
      protocol: TCP
  selector:
    app: bot
And access your application through:
> IP=$(minikube ip)
> curl "http://$IP:30356"
Hope that helps.
Minikube now has the service command to access a service.
Use minikube service <myservice>.
That will give you a URL which you can use to talk to the service.
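For example, with the bot service above (the printed address is illustrative):

# Print the service URL without opening a browser
minikube service bot --url
# e.g. http://192.168.99.100:30356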