I have an application with Pods that are not part of a Deployment, and I expose them with NodePort services. I access my application through ipv4:nodePort/url-microservice. When I want to scale my pods, do I need a Deployment with replicas?
I tried using a Deployment with NodePorts, but accessing it as ipv4:nodePort/url-microservice no longer works.
I'll post my Deployments and Services so someone can see if I'm wrong somewhere.
Deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
        - name: my-gateway
          image: rafaelribeirosouza86/shopping:api-gateway
          imagePullPolicy: Always
          ports:
            - containerPort: 31534
              protocol: TCP
      imagePullSecrets:
        - name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
        - name: my-adm-contact
          image: rafaelribeirosouza86/shopping:my-adm-contact
          imagePullPolicy: Always
          ports:
            - containerPort: 30001
              protocol: TCP
      imagePullSecrets:
        - name: regcred
Services:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
  namespace: default
spec:
  # clusterIP: 10.99.233.224
  ports:
    - port: 30001
      protocol: TCP
      targetPort: 30001
      nodePort: 30001
  # externalTrafficPolicy: Local
  selector:
    app: my-adm-contact
  # type: ClusterIP
  # type: LoadBalancer
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: my-gateway-service
  namespace: default
spec:
  # clusterIP: 10.99.233.224
  # protocol: ## The default is TCP
  # port: ## Exposes the Service within the cluster; other Pods use this port to reach it
  # targetPort: ## The port the containers accept traffic on; the Service forwards requests there
  ports:
    - port: 31534
      protocol: TCP
      targetPort: 31534
      nodePort: 31534
  # externalTrafficPolicy: Local
  selector:
    app: my-gateway
  # type: ClusterIP
  # type: LoadBalancer
  type: NodePort
Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  ...
spec:
  ...
  template:
    metadata:
      labels:
        run: my-gateway # <-- take note
    ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  ...
spec:
  ...
  template:
    metadata:
      labels:
        run: my-adm-contact # <-- take note
    ...
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
  ...
  selector:
    run: my-adm-contact # <-- wrong selector, changed from 'app' to 'run'
---
apiVersion: v1
kind: Service
metadata:
  name: my-gateway-service
  ...
  selector:
    run: my-gateway # <-- wrong selector, changed from 'app' to 'run'
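The root cause: the Services select on the app label, but the Pod template labels use run, so the Services never match any Pods. For completeness, a corrected my-gateway-service would look like this (a sketch assembled from the manifests above; my-adm-contact-service needs the same one-line change):
apiVersion: v1
kind: Service
metadata:
  name: my-gateway-service
  namespace: default
spec:
  ports:
    - port: 31534
      protocol: TCP
      targetPort: 31534
      nodePort: 31534
  selector:
    run: my-gateway # must match the Pod template labels, not the Deployment's metadata labels
  type: NodePort
And yes: to scale, you just raise replicas on the Deployment; the NodePort Service load-balances across all Pods its selector matches.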
I have a local Minishift cluster, and I configured a simple web app with a service for it.
The service seems to be connected to the pod and sends traffic, but when I try to create a route to expose the app, it fails with the error above. I tried many different solutions but nothing seems to work.
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
        - name: hello-openshift
          image: openshift/hello-openshift:latest
          ports:
            - containerPort: 8080
Here is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: automationportal-service
  labels:
    {{- include "automation-portal.labels" . | nindent 4 }}
spec:
  type: clusterIP
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: hello-openshift
route.yaml:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: automationportal-route
  labels:
  annotations:
spec:
  host:
  port:
    targetPort: http
  to:
    kind: Service
    name: automationportal-service
I have two Kubernetes deployments: composite-app (1 pod) and product-app (2 pods), both listening on port 8080. The first one sometimes needs to call the second one.
However, the first deployment can't find the second one. When it tries to call it using the product-app host, it fails:
Exception: I/O error on GET request for "http://product-app:8080/product/123": product-app;
nested exception is UnknownHostException
Am I using the right host? So far I've tried (to no avail):
product
product-app.default.pod.cluster.local
product-app
Here's my YAML:
apiVersion: v1
kind: Service
metadata:
  name: composite-service
spec:
  type: NodePort
  selector:
    app: composite-app
  ports:
    - targetPort: 8080
      port: 8080
      nodePort: 30091
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: composite-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: composite-app
  template:
    metadata:
      labels:
        app: composite-app
    spec:
      containers:
        - name: composite-container
          image: 192.168.49.2:2376/composite-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
        - name: product-container
          image: 192.168.49.2:2376/product-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
You need to define a Service object for the product-deploy Deployment as well, so that the other pod can connect to it. The Service can be of type ClusterIP if it does not need to be exposed to the external world.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
    - targetPort: 8080
      port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
        - name: product-container
          image: 192.168.49.2:2376/product-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
You can connect to the other pod using the pod's IP without a Service, but that is not recommended, since pod IPs change across pod updates.
With the Service in place, you can connect to the product-app pods from the composite-app using the hostname product-service.
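For illustration, here is one way to pass that hostname to the composite container instead of hardcoding it in the application (a sketch of just the pod spec of composite-deploy; the PRODUCT_SERVICE_URL variable name is made up, and reading it is up to your app code):
    spec:
      containers:
        - name: composite-container
          image: 192.168.49.2:2376/composite-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
          env:
            # Hypothetical variable; the Service DNS name is the important part
            - name: PRODUCT_SERVICE_URL
              value: http://product-service:8080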
After creating a Service and an Endpoints object:
---
apiVersion: v1
kind: Service
metadata:
  name: external-service
  namespace: default
spec:
  ports:
    - protocol: TCP
      port: 8200
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
  - addresses:
      - ip: $EXTERNAL_ADDR
    ports:
      - port: 8200
how can I point to the service in the deployment.yaml file? I want to remove the hardcoded IP from the env variable.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devwebapp
  labels:
    app: devwebapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devwebapp
  template:
    metadata:
      labels:
        app: devwebapp
    spec:
      serviceAccountName: internal-app
      containers:
        - name: app
          image: app:k8s
          imagePullPolicy: Always
          env:
            - name: ADDRESS
              value: "http://$EXTERNAL_SERVICE:8200"
Simply changing the value to http://external-service didn't help.
Thank you in advance!
I had to set the value to http://external-service:8200. The port was specified in the Endpoints, so I hadn't bothered to add it in the Deployment.
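So the working env block is just this (only the value changes from the manifest above):
          env:
            - name: ADDRESS
              # The Service name resolves through cluster DNS, but the port
              # still has to be spelled out; env values are plain strings
              value: http://external-service:8200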
You don't need to create the Endpoints separately; just use a selector in the Service spec, and it will automatically create the desired Endpoints.
This one will work for you:
---
apiVersion: v1
kind: Service
metadata:
  name: external-service
  namespace: default
spec:
  selector:
    app: devwebapp
  ports:
    - protocol: TCP
      port: 8200
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devwebapp
  labels:
    app: devwebapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devwebapp
  template:
    metadata:
      labels:
        app: devwebapp
    spec:
      serviceAccountName: internal-app
      containers:
        - name: app
          image: app:k8s
          imagePullPolicy: Always
          ports:
            - containerPort: 8200
          env:
            - name: ADDRESS
              value: http://external-service:8200
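Once the selector matches the devwebapp pod, Kubernetes maintains the Endpoints object for you; what it generates is conceptually equivalent to this (a sketch with an illustrative pod IP):
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
  - addresses:
      - ip: 10.244.0.12 # the pod's current IP, tracked automatically; illustrative value
    ports:
      - port: 8200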
I have a golang webapp pod running in a Kubernetes cluster, and I tried to deploy a Prometheus pod to monitor it.
I set the prometheus.io/port annotation to 2112 in the service.yaml file, which is the port the golang webapp listens on, but when I go to the Prometheus dashboard, it says that the 2112 endpoint is down.
I'm following this guide and tried the solution from this thread, but I still get a result saying the 2112 endpoint is down.
Below are my service.yaml and deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
spec:
  type: NodePort
  selector:
    app: golang
  ports:
    - name: main
      protocol: TCP
      port: 80
      targetPort: 2112
      nodePort: 30001
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: monitoring
  name: golang
spec:
  replicas: 1
  selector:
    matchLabels:
      app: golang
  template:
    metadata:
      labels:
        app: golang
    spec:
      containers:
        - name: gogo
          image: effy77/gogo2
          ports:
            - containerPort: 2112
I will try adding prometheus.io/port: 2112 to the Prometheus deployment part, as I suspect that might be the cause.
I was confused about where to put the annotations; this thread clarified it for me. They need to go under the metadata of the service that Prometheus should scrape, so in my case they belong in goapp's metadata:
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
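For context, these annotations are just a convention: they take effect only because the Prometheus scrape config rewrites them into scrape targets. A typical relabeling block (from a community-style prometheus.yml; the prometheus-server-conf ConfigMap in your cluster may differ) looks like:
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    # Keep only services annotated with prometheus.io/scrape: 'true'
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Use prometheus.io/path as the metrics path (defaults to /metrics)
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Use prometheus.io/port as the scrape port
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2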
I am deploying an nginx image in Google Cloud using the following manifests.
For the ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx-web
    spec:
      containers:
        - name: nginx-web
          image: nginx
          ports:
            - containerPort: 5000
              name: http
              protocol: TCP
For the Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  selector:
    name: nginx-web
  type: LoadBalancer
  ports:
    - port: 84
      targetPort: 5000
      protocol: TCP
But when I curl the external IP (obtained from the load balancer) on port 84, I get a connection refused error. What might be the issue?
The nginx image you are using in your replication controller listens on port 80 (that's how the image is built).
You need to fix your replication controller spec like this:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx-web
    spec:
      containers:
        - name: nginx-web
          image: nginx
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
And also adjust your service like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  selector:
    name: nginx-web
  type: LoadBalancer
  ports:
    - port: 84
      targetPort: 80
      protocol: TCP