I have built a custom tcserver image exposing ports 80, 8080, and 8443. Basically you have an Apache, and inside its configuration you have a ProxyPass to forward traffic to the tcserver Tomcat.
EXPOSE 80 8080 8443
After that, I created a Kubernetes YAML to create the pod, exposing only port 80.
apiVersion: v1
kind: Pod
metadata:
  name: tcserver
  namespace: default
spec:
  containers:
  - name: tcserver
    image: tcserver-test:v1
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
And the service along with it.
apiVersion: v1
kind: Service
metadata:
  name: tcserver-svc
  labels:
    app: tcserver
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: tcserver
But the problem is that I'm unable to access it.
If I log into the pod (kubectl exec -it tcserver -- /bin/bash), I'm able to do a curl -k -v http://localhost and it replies.
I believe I'm doing something wrong with the service, but I don't know what.
Any help will be appreciated.
SVC change
As suggested by sfgroups, I added targetPort: 80 to the svc, but it's still not working.
When I try to curl the IP, I get a No route to host error:
[root@testmaster tcserver]# curl -k -v http://172.30.62.162:30080/
* About to connect() to 172.30.62.162 port 30080 (#0)
* Trying 172.30.62.162...
* No route to host
* Failed connect to 172.30.62.162:30080; No route to host
* Closing connection 0
curl: (7) Failed connect to 172.30.62.162:30080; No route to host
This is the describe from the svc:
[root@testmaster tcserver]# kubectl describe svc tcserver-svc
Name: tcserver-svc
Namespace: default
Labels: app=tcserver
Annotations: <none>
Selector: app=tcserver
Type: NodePort
IP: 172.30.62.162
Port: <unset> 80/TCP
NodePort: <unset> 30080/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
When you look at the kubectl describe service output, you'll see it's not actually attached to any pods:
Endpoints: <none>
That's because you say in the service spec that the service will attach to pods labeled with app: tcserver:
spec:
  selector:
    app: tcserver
But in the pod spec's metadata, you don't specify any labels at all:
metadata:
  name: tcserver
  namespace: default
  # labels: {}
And so the fix here is to add the appropriate label to the pod spec:
metadata:
  labels:
    app: tcserver
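Once the label is in place, you can confirm the match from the command line; the names below come from the manifests above:
kubectl get pods -l app=tcserver     # should list the tcserver pod
kubectl get endpoints tcserver-svc   # should now show the pod's IP instead of <none>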
Also note that it's a little unusual in practice to deploy a bare pod. Usually pods are wrapped up in a higher-level controller, most often a deployment, that actually creates them. The deployment spec has a template pod spec, and it's the pod template's labels that matter.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcserver
  # Labels here are useful, but the service doesn't look for them
spec:
  selector:
    matchLabels:
      app: tcserver   # apps/v1 requires a selector matching the template labels
  template:
    metadata:
      labels:
        # These labels are what the service cares about
        app: tcserver
    spec:
      containers: [...]
I see targetPort is missing; can you add the target port and test?
apiVersion: v1
kind: Service
metadata:
  name: tcserver-svc
  labels:
    app: tcserver
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    targetPort: 80
  selector:
    app: tcserver
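With the label and targetPort fixes in place, a quick test might look like this (the node IP is a placeholder; any node's IP from kubectl get nodes -o wide will do):
kubectl get endpoints tcserver-svc   # should list the pod IP and port 80
curl -v http://<node-ip>:30080/      # a NodePort is served on node IPs, not the cluster IP
Note also that the failing curl above combined the cluster IP (172.30.62.162) with the node port (30080); the cluster IP answers on port 80, while 30080 is only open on the nodes themselves.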
Related
I have a running pod that was created with the following pod-definition.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
  - name: microservice-one
    image: vismarkjuarez1994/microserviceone
    ports:
    - containerPort: 2019
I then created a Service using the following service-definition.yaml:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 30008
    targetPort: 2019
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
I then ran kubectl describe node minikube to find the Node IP I should be connecting to -- which yielded:
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
But I get no response when I run the following curl command:
curl 192.168.49.2:30008
The request also times out when I try to access 192.168.49.2:30008 from a browser.
The pod logs show that the container is up and running. Why can't I access my Service?
The problem is that you are trying to access your service at port, which is the internal port the service is exposed on, even when using the NodePort type.
The parameter you were searching for is called nodePort, and it can optionally be specified together with port and targetPort. Quoting the documentation:
By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767).
Since you didn't specify the nodePort, one in the range was picked automatically. You can check which one by running:
kubectl get svc -owide
And then access your service externally at that port.
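For example (the service name comes from the manifest above; the allocated port shown is hypothetical and will vary):
kubectl get svc microserviceone-service            # PORT(S) shows e.g. 30008:31234/TCP; the second number is the nodePort
minikube service microserviceone-service --url    # on minikube, prints a ready-to-curl URL for the node port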
As an alternative, you can change your service definition to be something like:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 30008
    targetPort: 2019
    nodePort: 30008
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
But keep in mind that you may need to delete your service and create it again in order to change the allocated nodePort.
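For example, something like this (assuming the manifest lives in service-definition.yaml, as in the question):
kubectl delete service microserviceone-service
kubectl apply -f service-definition.yaml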
I think you mixed up the port in your service:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
  - name: microservice-one
    image: vismarkjuarez1994/microserviceone
    ports:
    - containerPort: 2019
and your service should be like this:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 2019
    targetPort: 2019
    nodePort: 30008
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
You can also access your app after enabling the Minikube ingress addon, if you want to try Ingress with Minikube:
minikube addons enable ingress
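If you go that route, a minimal Ingress pointing at the service above might look like this (a sketch; the Ingress name and path are assumptions, and the backend port matches the service's port: 2019):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microserviceone-ingress   # hypothetical name
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: microserviceone-service
            port:
              number: 2019
With the addon enabled, the app should then be reachable via the minikube IP on port 80.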
I am trying to expose a deployment I made on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-test
  labels:
    app: debian
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debian
  strategy: {}
  template:
    metadata:
      labels:
        app: debian
    spec:
      containers:
      - image: agracia10/debian_bash:latest
        name: debian
        ports:
        - containerPort: 8006
        resources: {}
      restartPolicy: Always
status: {}
I decided to follow what is written here.
I try to expose the deployment using the following command:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
but when I then try to access the service created by the expose command, using the URL provided by
minikube service deployment-test-8497d6f458-xxhgm --url
it throws an error when I use Packet Sender to try to connect to the service:
(screenshot: Packet Sender log)
I'm not really sure what the reason for this could be. I think it has something to do with the fact that when I get the services, it says <none> in the EXTERNAL-IP field. Also, when I retrieve the node IP using minikube ip it gives one address, but minikube service --url gives a 127.0.0.1 address. In any case, using either one does not work.
It's not working due to a port configuration mismatch.
Your deployment's container is listening on 8006, but you exposed port 8080, and your target port is --target-port=80. Due to this mismatch, it's not working.
The ideal flow of traffic goes like:
Service (NodePort, ClusterIP, or other) > Deployment > Pods
Below is an example deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app-server-instance
  labels:
    app: blog-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
      - name: blog-app-server
        image: agracia10/debian_bash:latest
        ports:
        - containerPort: 8006
---
apiVersion: v1
kind: Service
metadata:
  name: blog-app-service
  labels:
    app: blog-app
spec:
  selector:
    app: blog-app
  type: NodePort
  ports:
  - port: 80
    nodePort: 31364
    targetPort: 8006
    protocol: TCP
    name: http
So the things I have changed are the image and the target port.
Once your NodePort service is up and running, you will send requests on port 80 (the service port) or 31364 (the node port), and it will redirect them internally to the target port, which is 8006 on the container.
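Once the service is up, a quick test against the node port might look like this (the node IP is a placeholder; on minikube it comes from minikube ip):
curl http://<node-ip>:31364/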
With this command, you exposed your deployment on the wrong target port:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
Ideally it should be 8006.
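A corrected version of the expose command, matching the container port, might look like this (note it targets the deployment rather than a single pod):
kubectl expose deployment deployment-test --type=NodePort --port=8080 --target-port=8006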
As far as I know, the simplest way to expose the deployment as a service is to run the command below; you don't expose the pod, you expose the deployment.
kubectl expose deployment deployment-test --port 80
my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
my service yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
And then I curl 10.104.239.140, but get an error: curl: (7) Failed connect to 10.104.239.140:80; Connection timed out.
Can anyone tell me what's wrong?
Welcome to SO. The service you've deployed is of type ClusterIP, which means it can only be accessed from within the cluster. In your case, it seems you're trying to access it from outside the cluster, and thus the connection timed out.
What you can do is deploy a service of type NodePort or LoadBalancer to access it from outside the cluster. You can read more about the different service types here.
Your service would end up looking something like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort  ## or LoadBalancer (supported by cloud providers like AWS)
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    # Optional field
    # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
    nodePort: 30001
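After applying this, the service should be reachable on any node's IP at the fixed node port (the node IP is a placeholder; kubectl get nodes -o wide lists the real ones):
curl http://<node-ip>:30001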
I'm trying to access a deployment on our Kubernetes cluster on Azure. This is an Azure Kubernetes Service (AKS) cluster. Here are the configuration files for the deployment and the service that should expose the deployment.
Configurations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mira-api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mira-api
  template:
    metadata:
      labels:
        app: mira-api
    spec:
      containers:
      - name: backend
        image: registry.gitlab.com/izit/mira-backend
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      imagePullSecrets:
      - name: regcred
apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    run: mira-api
When I check the cluster after applying these configurations, I see the pod running correctly. The service is also created and has a public IP assigned.
After this deployment, I don't see any requests getting handled. I get an error message in my browser saying the site is inaccessible. Any ideas what I could have configured wrong?
Your service's selector labels and your pod's labels do not match.
You have the app: mira-api label in the deployment's pod template, but run: mira-api in the service's label selector.
Change your service selector label to match the pod label, as follows:
apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: mira-api
To check whether your service is selecting the backend pods, you can run the kubectl describe svc <svc name> command and see if it has any Endpoints listed.
# kubectl describe svc postgres
Name: postgres
Namespace: default
Labels: app=postgres
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"postgres"},"name":"postgres","namespace":"default"},"s...
Selector: app=postgres
Type: ClusterIP
IP: 10.106.7.183
Port: default 5432/TCP
TargetPort: 5432/TCP
Endpoints: 10.244.2.117:5432 <------- This line
Session Affinity: None
Events: <none>
I have a manifest as the following
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
      - name: my-redis
        image: redis
        ports:
        - name: redisport1
          containerPort: 6379
          hostPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    name: my-redis
  ports:
  - name: redisport1
    port: 6379
    targetPort: 6379
    nodePort: 30036
    protocol: TCP
This is a sample that reproduces my problem. My intention here is to create a simple cluster that has a pod with a redis container in it, and it should be exposed to my localhost. Still, kubectl get services gives me the following output:
redis-service NodePort 10.107.233.66 <none> 6379:30036/TCP 10s
If I swap NodePort for LoadBalancer, I get an external IP, but the port still doesn't work.
Can you help me identify why I'm failing to map the 6379 port to my localhost, please?
Thanks,
In order to access your app through the node port, you have to use this URL:
http://{node ip}:{node port}
If you are using minikube, the minikube VM's IP is the node IP; you can retrieve it using the minikube ip command.
You can also use the minikube service redis-service --url command to get the URL for accessing your application through the node port.
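Putting those together for this redis example (this assumes redis-cli is installed on your client machine):
minikube service redis-service --url        # prints the node URL for the service
redis-cli -h $(minikube ip) -p 30036 ping   # expect PONG once the service reaches the pod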
For anybody who's interested in the question, I found the problem. After Ijaz's fix, I also needed to change the selector to match the label in the pod; it was a typo on my end!
The pod has the app=my-redis label, but the Service selector had name=my-redis. Matching them fixed the access problem.
You don't need the hostPort:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
      - name: my-redis
        image: redis
        ports:
        - name: redisport1
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    name: my-redis
  ports:
  - name: redisport1
    port: 6379
    targetPort: 6379
    nodePort: 30036
    protocol: TCP
Now the nodePort 30036 can be used to access the service on any worker node.
If the cluster node is somewhere else and you want to make the port available on your local client, then just do kubectl port-forward:
kubectl port-forward svc/redis-service 6379:6379
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
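With the port-forward running, the service is reachable on localhost (again assuming redis-cli is installed locally):
redis-cli -h 127.0.0.1 -p 6379 ping   # should answer PONG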
Notes:
On-prem installs of k8s don't support services of type LoadBalancer out of the box (an external implementation such as MetalLB is needed)
The ClusterIP is a virtual IP on the cluster's internal network, reachable only from within the cluster
The node IP is the IP of a machine that is running as part of the k8s cluster