I have a local Minishift cluster and I configured a simple web app with a Service for it. The Service seems to be connected to the pod and sends traffic, but when I try to create a Route to expose the app, it fails with the error above. I have tried many different solutions, but nothing seems to work.
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 8080
Here is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: automationportal-service
  labels:
    {{- include "automation-portal.labels" . | nindent 4 }}
spec:
  type: clusterIP
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: hello-openshift
route.yaml:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: automationportal-route
  labels:
  annotations:
spec:
  host:
  port:
    targetPort: http
  to:
    kind: Service
    name: automationportal-service
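Two things stand out in these manifests, assuming they are applied directly with oc/kubectl rather than rendered by Helm: the Service's spec.type is case-sensitive (clusterIP is rejected; it must be ClusterIP), and the {{- include ... }} line under labels is Helm templating that is invalid in plain YAML. A minimal cleaned-up pair, as a sketch:

apiVersion: v1
kind: Service
metadata:
  name: automationportal-service
spec:
  type: ClusterIP          # the value is case-sensitive
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: hello-openshift
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: automationportal-route
spec:
  # host omitted: OpenShift generates one for the route
  port:
    targetPort: http       # refers to the Service port named "http"
  to:
    kind: Service
    name: automationportal-service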
Related
I have an application with Pods that are not part of a Deployment, and I expose them with NodePort Services. I access my application through ipv4:nodePorts/url-microservice. When I want to scale my pods, do I need a Deployment with replicas?
I tried using a Deployment with NodePorts, but accessing it that way (ipv4:nodePorts/url-microservice) no longer works.
I'll post my Deployments and Services so someone can see if I'm doing something wrong.
Deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
      - name: my-gateway
        image: rafaelribeirosouza86/shopping:api-gateway
        imagePullPolicy: Always
        ports:
        - containerPort: 31534
          protocol: TCP
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
      - name: my-adm-contact
        image: rafaelribeirosouza86/shopping:my-adm-contact
        imagePullPolicy: Always
        ports:
        - containerPort: 30001
          protocol: TCP
      imagePullSecrets:
      - name: regcred
Services:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
  namespace: default
spec:
  # clusterIP: 10.99.233.224
  ports:
  - port: 30001
    protocol: TCP
    targetPort: 30001
    nodePort: 30001
  # externalTrafficPolicy: Local
  selector:
    app: my-adm-contact
  # type: ClusterIP
  # type: LoadBalancer
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: my-gateway-service
  namespace: default
spec:
  # clusterIP: 10.99.233.224
  # protocol: ## The default is TCP
  # port: ## Exposes the service within the cluster; other Pods use this to access the Service
  # targetPort: ## The port on which the containers accept the traffic the service sends
  ports:
  - port: 31534
    protocol: TCP
    targetPort: 31534
    nodePort: 31534
  # externalTrafficPolicy: Local
  selector:
    app: my-gateway
  # type: ClusterIP
  # type: LoadBalancer
  type: NodePort
Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  ...
spec:
  ...
  template:
    metadata:
      labels:
        run: my-gateway  # <-- take note
...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  ...
spec:
  ...
  template:
    metadata:
      labels:
        run: my-adm-contact  # <-- take note
...
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
  ...
spec:
  selector:
    run: my-adm-contact  # <-- was 'app: my-adm-contact'; must match the pod label 'run'
---
apiVersion: v1
kind: Service
metadata:
  name: my-gateway-service
  ...
spec:
  selector:
    run: my-gateway  # <-- was 'app: my-gateway'; must match the pod label 'run'
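For concreteness, here is one of the two Services written out in full with the corrected selector (ports unchanged from the question). Once applied, kubectl get endpoints my-gateway-service should list the pod's IP, confirming the selector now matches:

apiVersion: v1
kind: Service
metadata:
  name: my-gateway-service
  namespace: default
spec:
  type: NodePort
  selector:
    run: my-gateway        # matches the pod template label
  ports:
  - port: 31534
    protocol: TCP
    targetPort: 31534
    nodePort: 31534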
I have two Kubernetes deployments: composite-app (1 pod) and product-app (2 pods), both listening on port 8080. The first one sometimes needs to call the second one.
However, the first deployment can't find the second one. When it tries to call it using the product-app host, it fails:
Exception: I/O error on GET request for "http://product-app:8080/product/123": product-app;
nested exception is UnknownHostException
Am I using the right host? So far I've tried (to no avail):
product
product-app.default.pod.cluster.local
product-app
Here's my YAML:
apiVersion: v1
kind: Service
metadata:
  name: composite-service
spec:
  type: NodePort
  selector:
    app: composite-app
  ports:
  - targetPort: 8080
    port: 8080
    nodePort: 30091
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: composite-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: composite-app
  template:
    metadata:
      labels:
        app: composite-app
    spec:
      containers:
      - name: composite-container
        image: 192.168.49.2:2376/composite-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You need to define a Service object for the product-deploy Deployment as well, so the other pod can connect to it. The Service can be of type ClusterIP if it does not need to be exposed to the outside world.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
  - targetPort: 8080
    port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You could connect to the other pod using its IP directly, without a Service, but that is not recommended since pod IPs change whenever pods are recreated.
With the Service in place, you can reach the product-app pods from the composite-app using the hostname product-service (or the fully qualified product-service.default.svc.cluster.local; note that Service DNS names use svc.cluster.local, not pod.cluster.local).
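If you would rather not hardcode the URL in the application, one option (a sketch; the variable name PRODUCT_URL is illustrative, and your app would have to read it) is to inject the Service address into the composite-deploy container spec as an environment variable:

      containers:
      - name: composite-container
        image: 192.168.49.2:2376/composite-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        env:
        - name: PRODUCT_URL            # illustrative name, read by the application
          value: "http://product-service:8080"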
After creating a Service and an Endpoints object:
---
apiVersion: v1
kind: Service
metadata:
  name: external-service
  namespace: default
spec:
  ports:
  - protocol: TCP
    port: 8200
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
- addresses:
  - ip: $EXTERNAL_ADDR
  ports:
  - port: 8200
How can I point to the service in the deployment.yaml file? I want to replace the hardcoded IP in the env variable.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devwebapp
  labels:
    app: devwebapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devwebapp
  template:
    metadata:
      labels:
        app: devwebapp
    spec:
      serviceAccountName: internal-app
      containers:
      - name: app
        image: app:k8s
        imagePullPolicy: Always
        env:
        - name: ADDRESS
          value: "http://$EXTERNAL_SERVICE:8200"
Simply changing the value to http://external-service didn't help.
Thank you in advance!
I had to set the value to http://external-service:8200. I had originally left the port off because it was already specified in the Endpoints, but without an explicit port the URL defaults to 80, which the Service does not expose.
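In manifest form, the working change is just the env value; everything else stays as in the question:

        env:
        - name: ADDRESS
          value: "http://external-service:8200"   # Service name resolves via cluster DNS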
You don't need to create the Endpoints separately: if you give the Service a selector in its spec, the matching endpoints are created automatically. (Note that this points the Service at the devwebapp pods themselves rather than at an external address.)
This one will work for you:
---
apiVersion: v1
kind: Service
metadata:
  name: external-service
  namespace: default
spec:
  selector:
    app: devwebapp
  ports:
  - protocol: TCP
    port: 8200
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devwebapp
  labels:
    app: devwebapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devwebapp
  template:
    metadata:
      labels:
        app: devwebapp
    spec:
      serviceAccountName: internal-app
      containers:
      - name: app
        image: app:k8s
        imagePullPolicy: Always
        ports:
        - containerPort: 8200
        env:
        - name: ADDRESS
          value: http://external-service:8200
I have created this Kubernetes file to deploy two APIs in a cluster on GCloud. I have tried two Service types: first I set the type to NodePort and couldn't connect to it; after that I tried LoadBalancer, but even with the external IP and the Endpoints I'm not able to access either API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxxxxxxxxxxxx
  labels:
    app: xxxxxxxx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xxxxxxxxx
  template:
    metadata:
      labels:
        app: xxxxxxxx
    spec:
      containers:
      - name: xxxxx
        image: xxxxxxxxx
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: xxxxxxxxx
spec:
  selector:
    app: xxxxxxxx
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 3000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yyyyyy
  labels:
    app: yyyyyy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yyyyyy
  template:
    metadata:
      labels:
        app: yyyyyy
    spec:
      containers:
      - name: yyyyyy
        image: yyyyyy
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: yyyyyy
spec:
  selector:
    app: yyyyyy
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: LoadBalancer
Could anyone help me with this issue?
Regards.
There are a lot of examples in the GKE documentation of deploying a Service (type: LoadBalancer) that redirects traffic to a Deployment.
Please follow these tutorials:
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk
Also, your question doesn't list any errors or events. Please take a look at the kubectl describe output for the Service. If the load balancer isn't coming up, there may be an error such as having run out of external IP addresses in your GCP project.
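For reference, once the load balancer has been provisioned, kubectl get service shows an external IP instead of <pending>, and the address appears in the Service's status, roughly like this (203.0.113.10 is a placeholder address):

status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10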
I am deploying the nginx image using the following deployment files in Google Cloud.
For the ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx-web
    spec:
      containers:
      - name: nginx-web
        image: nginx
        ports:
        - containerPort: 5000
          name: http
          protocol: TCP
For the Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  selector:
    name: nginx-web
  type: LoadBalancer
  ports:
  - port: 84
    targetPort: 5000
    protocol: TCP
But when I curl the external IP (which I got from the load balancer) on port 84, I get a connection refused error. What might be the issue?
The nginx image you are using in your replication controller listens on port 80 (that's how the image is built).
You need to fix your replication controller spec like this:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx-web
    spec:
      containers:
      - name: nginx-web
        image: nginx
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
And also adjust your service like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  selector:
    name: nginx-web
  type: LoadBalancer
  ports:
  - port: 84
    targetPort: 80
    protocol: TCP
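If you actually need the container to listen on port 5000 instead, one option (a sketch, not part of the original answer) is to override the stock nginx config with a ConfigMap, since the image reads its server blocks from /etc/nginx/conf.d:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-port-5000
data:
  default.conf: |
    server {
      listen 5000;                     # replaces the default listen 80
      location / {
        root /usr/share/nginx/html;
        index index.html;
      }
    }

Then mount it over the default config in the pod template and keep containerPort and targetPort at 5000:

      containers:
      - name: nginx-web
        image: nginx
        ports:
        - containerPort: 5000
          name: http
          protocol: TCP
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-port-5000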