I have a Kubernetes cluster with two worker nodes. I have four Docker containers, all based on a Tomcat image, and each exposes ports 8080 and 8443. When I put each container into a separate pod, I run into port conflicts, since I only have two worker nodes.
What would be the best strategy for my scenario?
The current error message is: 1 PodToleratesNodeTaints, 2 PodFitsHostPorts.
Should I put all containers into one pod? This is my current setup (times four):
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: myApp1
  namespace: appNS
  labels:
    app: myApp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myApp1
  template:
    metadata:
      labels:
        app: myApp1
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
      - image: myregistry:5000/myApp1:v1
        name: myApp1
        ports:
        - name: http-port
          containerPort: 8080
        - name: https-port
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
---
kind: Service
apiVersion: v1
metadata:
  name: myApp1-srv
  namespace: appNS
  labels:
    version: "v1"
    app: "myApp1"
spec:
  type: NodePort
  selector:
    app: "myApp1"
  ports:
  - protocol: TCP
    name: http-port
    port: 8080
  - protocol: TCP
    name: https-port
    port: 8443
You should not use hostNetwork unless absolutely necessary. Without host network you can have multiple pods listening on the same port number as each will have its own, dedicated network namespace.
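As a sketch of that change (only the pod template spec is shown; the NodePort Service can stay as it is), drop hostNetwork and the matching dnsPolicy so the four Deployments no longer compete for host ports 8080/8443:

```yaml
    spec:
      # hostNetwork and ClusterFirstWithHostNet removed: each pod now gets its
      # own network namespace, so all four apps can bind 8080/8443 concurrently
      containers:
      - image: myregistry:5000/myApp1:v1
        name: myApp1
        ports:
        - name: http-port
          containerPort: 8080
        - name: https-port
          containerPort: 8443
```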
Related
I have two pods, each created with a Deployment and a Service. My problem is as follows: the pod "my-gateway" accesses the URL "http://127.0.0.1:3000/adm-contact", which should reach another pod called "my-adm-contact". How can I make this work? I tried the following command: kubectl port-forward my-gateway-5b85498f7d-5rwnn 3000:3000 8879:8879 but it gives this error:
E0526 21:56:34.024296 12428 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 2d5811c20c3762c6c249a991babb71a107c5dd6b080c3c6d61b4a275b5747815, uid : exit status 1: 2022/05/27 00:56:35 socat[2494] E connect(16, AF=2 127.0.0.1:3000, 16): Connection refused
Note that the images are built from a Dockerfile with EXPOSE 3000 8879.
Here are my YAMLs:
Deployment my-adm-contact:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
      - name: my-adm-contact
        image: my-contact-adm
        imagePullPolicy: Never
        ports:
        - containerPort: 8879
          hostPort: 8879
          name: admcontact8879
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-adm-contact:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact
  labels:
    run: my-adm-contact
spec:
  selector:
    app: my-adm-contact
  ports:
  - name: 8879-my-adm-contact
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
Deployment my-gateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
      - name: my-gateway
        image: api-gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
          hostPort: 3000
          name: home
        #- containerPort: 8879
        #  hostPort: 8879
        #  name: adm
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
            path: /
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-gateway:
apiVersion: v1
kind: Service
metadata:
  name: my-gateway
  labels:
    run: my-gateway
spec:
  selector:
    app: my-gateway
  ports:
  - name: 3000-my-gateway
    port: 3000
    protocol: TCP
    targetPort: 3000
  - name: 8879-my-gateway
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
What k8s cluster environment are you running this in? I ask because a Service of type LoadBalancer is a special kind: when the Service is created, your cloud provider's controller will spot this and provision a load balancer for it. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
If you're not deploying this in a suitable cloud environment, your services won't do anything.
I had a quick look at your SO profile and - sorry if this is presumptuous, I don't mean to be - it looks like you're relatively new to k8s. You shouldn't need to do any port-forwarding/kubectl proxying, and this should be a lot simpler than you might think.
When you create a service k8s will 'create' a DNS entry for you which points to the pod(s) specified by your selector.
I think you're trying to reach a setup where code running in my-gateway pod can connect to http://adm-contact on port 3000 and reach a listening service on the adm-contact pod. Is that correct?
If so, the outline solution is to expose tcp/3000 in the adm-contact pod, and create a service called adm-contact that has a selector for adm-contact pod.
This is a sample manifest I've just created which runs nginx and then creates a service for it, allowing any pod on the cluster to connect to it e.g. curl http://nginx-service.default.svc In this example I'm exposing port 80 because I didn't want to have to modify the nginx config, but the principle is the same.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
The k8s docs on Services are pretty helpful if you want more https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
A service can be reached by its own name from pods in its namespace:
so a service foo in namespace bar can be reached at http://foo from a pod in namespace bar.
From other namespaces, that service is reachable at http://foo.bar.svc.cluster.local. Substitute the service name and namespace for your use case.
k8s dns is explained here in the docs:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
I have taken the YAML you provided and assembled it here.
From another comment I see the URL you're trying to connect to is: http://gateway-service.default.svc.cluster.local:3000/my-adm-contact-service
The ability to resolve service names to pods only functions inside the cluster: coredns (a k8s pod) is the part which recognises when a service has been created and what IP(s) it's available at.
So another pod in the cluster, e.g. one created by kubectl run bb --image=busybox -it -- sh, would be able to resolve gateway-service with ping gateway-service, but pinging gateway-service from your desktop will fail because your desktop does not use the cluster's DNS.
The api-gateway container will be able to make a connect to my-adm-contact-service on ports 3000 or 8879, and the my-adm-contact container will equally be able to connect to gateway-service on port 3000 - but only when those containers are running inside the cluster.
I think you're trying to access this from outside the cluster, so now that the port/service types are correct you could retry kubectl port-forward svc/gateway-service 3000:3000. This will let you connect to 127.0.0.1:3000 and the traffic will be routed to port 3000 on the api-gateway container.
If you need to proxy to the other my-adm-contact-service then you'll have to issue similar kubectl commands in other shells, one per service:port combination. For completeness, if you wanted to route traffic from your local machine to all three container/port sets, you'd run:
# format kubectl port-forward svc/name src:dest (both TCP)
kubectl port-forward svc/gateway-service 3000:3000
kubectl port-forward svc/my-adm-contact-service 8879:8879
kubectl port-forward svc/my-adm-contact-service 3001:3000 #NOTE the changed local port, because localhost:3000 is already used
You will need a new shell for each kubectl, or run it as a background job.
apiVersion: v1
kind: Pod
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  containers:
  - image: my-contact-adm
    imagePullPolicy: Never
    name: my-adm-contact
    ports:
    - containerPort: 8879
      protocol: TCP
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
spec:
  ports:
  - port: 8879
    protocol: TCP
    targetPort: 8879
    name: adm8879
  - port: 3000
    protocol: TCP
    targetPort: 3000
    name: adm3000
  selector:
    app: my-adm-contact
  type: ClusterIP
---
apiVersion: v1
kind: Pod
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  containers:
  - image: api-gateway
    imagePullPolicy: Never
    name: my-gateway
    ports:
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: my-gateway
  type: ClusterIP
I'm not sure if this is considered a best practice, but for ease of management I have created a Deployment that consists of two containers (an API event server and an API server). The API server can send events that need to be processed by the API event server and returned back. It is easier for me to manage these in one pod: the containers can reach each other over localhost, and I don't have to define ClusterIP services for all my environments.
One of my concerns is that if, say, the API event server exits with an error, the pod will still be active because the API server continues to run. Is there a way to tell Kubernetes to terminate a pod if one of its containers fails?
Here is my deployment; only port 8080 is exposed to the public via a LoadBalancer service. Perhaps I can somehow add liveness and readiness probes to both of these?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: development-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: development-api
  template:
    metadata:
      labels:
        app: development-api
    spec:
      containers:
      - name: development-api-server
        image: <my-server-image>
        ports:
        - containerPort: 8080
          protocol: TCP
      - name: development-events-server
        image: <my-events-image>
        ports:
        - containerPort: 3000
          protocol: TCP
Use liveness and readiness probes.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
In your case
apiVersion: apps/v1
kind: Deployment
metadata:
  name: development-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: development-api
  template:
    metadata:
      labels:
        app: development-api
    spec:
      containers:
      - name: development-api-server
        image: <my-server-image>
        ports:
        - containerPort: 8080
          protocol: TCP
        livenessProbe:
          tcpSocket:
            port: 8080
      - name: development-events-server
        image: <my-events-image>
        ports:
        - containerPort: 3000
          protocol: TCP
        livenessProbe:
          tcpSocket:
            port: 3000
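Readiness probes can be added to each container in the same way. As a sketch, for the API server (the /health path here is an assumption; substitute whatever endpoint your application actually serves):

```yaml
        readinessProbe:
          httpGet:
            path: /health   # assumed endpoint; replace with your server's health-check path
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
```

A failing readiness probe takes the pod out of Service endpoints without restarting it, while a failing liveness probe restarts the container.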
Part of my deployment looks like this
client -- main service __ service 1
                      |__ service 2
NOTE: Each of these 4 services is a container, and I'm trying to run each in its own Pod (without using a multi-container pod).
The main service must make a call to service 1, get the results, then send those results to service 2, get that result, and send it back to the web client.
The main service operates in this order:
receive a request from the web client on port 80
make a request to http://localhost:8000 (service 1)
make a request to http://localhost:8001 (service 2)
merge the results
respond to the web client with the result
My deployments for service 1 and 2 look like this
SERVICE 1
apiVersion: v1
kind: Service
metadata:
  name: serviceone
spec:
  selector:
    run: serviceone
  ports:
  - port: 80
    targetPort: 5050
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceone-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: serviceone
  template:
    metadata:
      labels:
        run: serviceone
    spec:
      containers:
      - name: serviceone
        image: test.azurecr.io/serviceone:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5050
SERVICE 2
apiVersion: v1
kind: Service
metadata:
  name: servicetwo
spec:
  selector:
    run: servicetwo
  ports:
  - port: 80
    targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: servicetwo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: servicetwo
  template:
    metadata:
      labels:
        run: servicetwo
    spec:
      containers:
      - name: servicetwo
        image: test.azurecr.io/servicetwo:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
But I don't know what the service and deployment would look like for the main service, which has to make requests to the two other services.
EDIT: This is my attempt at the service/deployment for main service
apiVersion: v1
kind: Service
metadata:
  name: mainservice
spec:
  selector:
    run: mainservice
  ports:
  - port: 80 # incoming traffic from web client pod
    targetPort: 80 # traffic goes to container port 80
  selector:
    run: serviceone
  ports:
  - port: ?
    targetPort: 8000 # the port the container is hardcoded to send traffic to service one
  selector:
    run: servicetwo
  ports:
  - port: ?
    targetPort: 8001 # the port the container is hardcoded to send traffic to service two
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mainservice-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: mainservice
  template:
    metadata:
      labels:
        run: mainservice
    spec:
      containers:
      - name: mainservice
        image: test.azurecr.io/mainservice:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
EDIT 2: alternate attempt at the service after finding this https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
apiVersion: v1
kind: Service
metadata:
  name: mainservice
spec:
  selector:
    run: mainservice
  ports:
  - name: incoming
    port: 80 # incoming traffic from web client pod
    targetPort: 80 # traffic goes to container port 80
  - name: s1
    port: 8080
    targetPort: 8000 # the port the container is hardcoded to send traffic to service one
  - name: s2
    port: 8081
    targetPort: 8001 # the port the container is hardcoded to send traffic to service two
The main service doesn't need to know anything about the services it calls other than their names. Simply access those services using the Service names, i.e. serviceone and servicetwo (http://serviceone:80), and the requests will be forwarded to the correct pods.
Reference: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
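Concretely, as a sketch: the mainservice Service only needs to expose mainservice's own port; the outgoing calls to serviceone and servicetwo don't appear in it at all:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mainservice
spec:
  selector:
    run: mainservice
  ports:
  - port: 80       # incoming traffic from the web client
    targetPort: 80 # the port the main container listens on
```

Inside the main container you would then call http://serviceone and http://servicetwo (port 80, as defined in their Services) instead of http://localhost:8000 and http://localhost:8001.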
I'm using Istio with Kubernetes and facing an issue with a simple RouteRule splitting traffic between two Deployments. Only one of the Deployments (howdy) returns results correctly. The other Deployment (hello) reports "no healthy upstream".
Here is the k8s manifest:
apiVersion: v1
kind: Service
metadata:
  name: greeting
  labels:
    name: greeting
spec:
  selector:
    app: greeting
  ports:
  - name: http
    protocol: TCP
    port: 8081
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: greeting-hello
  labels:
    name: greeting-hello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: greeting
        greeting: hello
    spec:
      containers:
      - name: greeting
        image: arungupta/greeting:hello
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
        readinessProbe:
          httpGet:
            path: /resources/greeting
            port: 8081
          initialDelaySeconds: 50
          periodSeconds: 5
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: greeting-howdy
  labels:
    name: greeting-howdy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: greeting
        greeting: howdy
    spec:
      containers:
      - name: greeting
        image: arungupta/greeting:howdy
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
        readinessProbe:
          httpGet:
            path: /resources/greeting
            port: 8081
          initialDelaySeconds: 50
          periodSeconds: 5
And the route is:
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: greeting-50-50
spec:
  destination:
    name: greeting
  route:
  - labels:
      app: greeting
      greeting: hello
    weight: 100
  - labels:
      app: greeting
      greeting: howdy
    weight: 0
Any idea?
This is also tracked at https://github.com/aws-samples/aws-microservices-deploy-options/issues/239
I've reproduced your deployment issue and found the following errors during the process:
Warning Unhealthy 48m (x7 over 49m) kubelet, gke-cluster-1-default-pool-e44042f6-kndh Readiness probe failed: Get http://10.0.2.30:8081/resources/greeting: dial tcp 10.0.2.30:8081: getsockopt: connection refused
I would recommend checking whether the Docker containers in this particular configuration actually expose an endpoint on the specified port (8081). I also found that the pods are not registered in any of the Service's endpoints:
$ kubectl describe service
[...]
IP:                10.3.247.97
Port:              http 8081/TCP
TargetPort:        8081/TCP
Endpoints:
Session Affinity:  None
Events:            <none>
The problem is that the application did not bind to the port that was set as the container port, so it never received any connections.
I have created a cluster using the Google Cloud Platform (Container Engine) and deployed a pod using the following YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 1
  template:
    metadata:
      name: pod-name
      labels:
        app: app-label
    spec:
      containers:
      - name: container-name
        image: gcr.io/project-id/image-name
        resources:
          requests:
            cpu: 1
        ports:
        - name: port80
          containerPort: 80
        - name: port443
          containerPort: 443
        - name: port6001
          containerPort: 6001
Then I want to create a service that enables the pod to listen on all these ports. I know that the following YAML file works to create a service that listens on one port:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: app-label
  type: LoadBalancer
However when I want the pod to listen on multiple ports like this, it doesn't work:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - port: 80
    targetPort: 80
  - port: 443
    targetPort: 443
  - port: 6001
    targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
How can I make my pod listen to multiple ports?
You have two options:
You could have multiple services, one for each port. As you pointed out, each service will end up with a different IP address
You could have a single service with multiple ports. In this particular case, you must give all ports a name.
In your case, the service becomes:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: something
    port: 6001
    targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
This is necessary so that endpoints can be disambiguated.