I am working on Kubernetes, creating two pods using Deployments:
The first Deployment's pod runs a beanstalkd container.
The second one runs a worker on PHP 7/nginx and holds the application codebase.
I am getting this exception:
"user_name":"anonymous","message":"exception 'Pheanstalk_Exception_ConnectionException' with message 'Socket error 0: php_network_getaddresses: getaddrinfo failed: Try again (connecting to test-beanstalkd:11300)' in /var/www/html/vendor/pda/pheanstalk/classes/Pheanstalk/Socket/NativeSocket.php:35\nStack trace:\n#0 "
How can they communicate with each other?
My beanstalkd.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-beanstalkd
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-beanstalkd
  template:
    metadata:
      labels:
        app: test-beanstalkd
    spec:
      containers:
        # The beanstalkd queue server
        - image: schickling/beanstalkd
          name: test-beanstalkd
          args:
            - -p
            - "11300"
            - -z
            - "1073741824"
---
apiVersion: v1
kind: Service
metadata:
  name: test-beanstalkd-svc
  namespace: test
  labels:
    run: test-beanstalkd
spec:
  ports:
    - port: 11300
      protocol: TCP
  selector:
    app: test-beanstalkd
  type: NodePort
Below is our worker.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-worker
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-worker
  template:
    metadata:
      labels:
        app: test-worker
    spec:
      volumes:
        # Create the shared files volume to be used in both pods
        - name: shared-files
          emptyDir: {}
      containers:
        # Our PHP-FPM application
        - image: test-worker:master
          name: worker
          env:
            - name: beanstalkd_host
              value: "test-beanstalkd"
          volumeMounts:
            - name: nginx-config-volume
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
---
apiVersion: v1
kind: Service
metadata:
  name: test-worker-svc
  namespace: test
  labels:
    run: test-worker
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: worker
  type: NodePort
The mistake is in the env of test-worker: the beanstalkd_host variable needs to be set to test-beanstalkd-svc, because that is the name of the Service.
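A minimal sketch of the corrected env block in worker.yaml:
env:
  - name: beanstalkd_host
    # must be the Service name, not the Deployment name
    value: "test-beanstalkd-svc"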
I have a Flask pod that connects to a MongoDB service through the environment variable SERVICE_HOST (DNS discovery didn't work for some reason). When I change something in the MongoDB Service and re-apply it, the Flask pod can no longer connect, because the service host changes, and I have to recreate the pod manually every time. Is there a way to automate this, something like docker-compose's depends_on directive?
Flask YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-api-deployment
  labels:
    app: proxy23-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy23-api
  template:
    metadata:
      labels:
        app: proxy23-api
    spec:
      containers:
        - name: proxy23-api
          image: my_image
          ports:
            - containerPort: 5000
          env:
            - name: DB_URI
              value: mongodb://$(PROXY23_DB_SERVICE_SERVICE_HOST):27017
            - name: DB_NAME
              value: db
            - name: PORT
              value: "5000"
      imagePullSecrets:
        - name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-api-service
spec:
  selector:
    app: proxy23-api
  type: NodePort
  ports:
    - port: 9002
      targetPort: 5000
      nodePort: 30002
MongoDB YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-db-deployment
  labels:
    app: proxy23-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy23-db
  template:
    metadata:
      labels:
        app: proxy23-db
    spec:
      containers:
        - name: proxy23-db
          image: mongo:bionic
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: proxy23-storage
              mountPath: /data/db
      volumes:
        - name: proxy23-storage
          persistentVolumeClaim:
            claimName: proxy23-db-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-db-service
spec:
  selector:
    app: proxy23-db
  type: NodePort
  ports:
    - port: 27017
      targetPort: 27017
      nodePort: 30003
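One possible way around the changing SERVICE_HOST, assuming cluster DNS can be made to work: point DB_URI at the Service's stable DNS name rather than the injected PROXY23_DB_SERVICE_SERVICE_HOST variable, so the value survives re-applying the Service. A minimal sketch:
env:
  - name: DB_URI
    # proxy23-db-service resolves via cluster DNS and does not change
    # when the Service object is re-applied
    value: mongodb://proxy23-db-service:27017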
I have two Kubernetes Deployments: composite-app (1 pod) and product-app (2 pods), both listening on port 8080. The first one needs to call the second one sometimes.
However, the first deployment can't find the second one. When it tries to call it using the product-app host, it fails:
Exception: I/O error on GET request for "http://product-app:8080/product/123": product-app;
nested exception is UnknownHostException
Am I using the right host? So far I've tried (to no avail):
product
product-app.default.pod.cluster.local
product-app
Here's my YAML:
apiVersion: v1
kind: Service
metadata:
  name: composite-service
spec:
  type: NodePort
  selector:
    app: composite-app
  ports:
    - targetPort: 8080
      port: 8080
      nodePort: 30091
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: composite-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: composite-app
  template:
    metadata:
      labels:
        app: composite-app
    spec:
      containers:
        - name: composite-container
          image: 192.168.49.2:2376/composite-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
        - name: product-container
          image: 192.168.49.2:2376/product-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
You need to define a Service object for the product-deploy Deployment as well, so that the other pod can connect to it. The Service can be of type ClusterIP if it does not need to be exposed to the outside world.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
    - targetPort: 8080
      port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
        - name: product-container
          image: 192.168.49.2:2376/product-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
You could connect to the other pod by its IP without a Service, but that is not recommended, since a pod's IP can change across pod updates.
With the Service in place, you can connect to the product-app pods from the composite-app using the hostname product-service.
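A minimal sketch of how the composite deployment could reference it through an environment variable (the PRODUCT_SERVICE_URL name is made up for illustration; hard-coding the hostname in application config works the same way):
env:
  - name: PRODUCT_SERVICE_URL
    # product-service resolves via cluster DNS to the Service fronting
    # the product-app pods
    value: http://product-service:8080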
After creating a Service and an Endpoints object:
---
apiVersion: v1
kind: Service
metadata:
  name: external-service
  namespace: default
spec:
  ports:
    - protocol: TCP
      port: 8200
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
  - addresses:
      - ip: $EXTERNAL_ADDR
    ports:
      - port: 8200
How can I point to the service in the deployment.yaml file? I want to remove the hardcoded IP from the env variable.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devwebapp
  labels:
    app: devwebapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devwebapp
  template:
    metadata:
      labels:
        app: devwebapp
    spec:
      serviceAccountName: internal-app
      containers:
        - name: app
          image: app:k8s
          imagePullPolicy: Always
          env:
            - name: ADDRESS
              value: "http://$EXTERNAL_SERVICE:8200"
Simply changing the value to http://external-service didn't help.
Thank you in advance!
I had to set the value to http://external-service:8200. The port was already specified in the Endpoints, so I hadn't bothered adding it in the Deployment.
You don't need to create Endpoints separately; just use a selector in the Service spec and it will automatically create the desired endpoints.
This will work for you:
---
apiVersion: v1
kind: Service
metadata:
  name: external-service
  namespace: default
spec:
  selector:
    app: devwebapp
  ports:
    - protocol: TCP
      port: 8200
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devwebapp
  labels:
    app: devwebapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devwebapp
  template:
    metadata:
      labels:
        app: devwebapp
    spec:
      serviceAccountName: internal-app
      containers:
        - name: app
          image: app:k8s
          imagePullPolicy: Always
          ports:
            - containerPort: 8200
          env:
            - name: ADDRESS
              value: http://external-service:8200
I'm well versed in Docker, but I must be doing something wrong here with K8s. I'm running Skaffold with minikube and trying to get DNS between containers working. Here's my deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      name: my-api
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api-postgres
          image: postgres:11.2-alpine
          env:
            - name: POSTGRES_USER
              value: "my-api"
            - name: POSTGRES_DB
              value: "my-api"
            - name: POSTGRES_PASSWORD
              value: "my-pass"
          ports:
            - containerPort: 5432
        - name: my-api-redis
          image: redis:5.0.4-alpine
          command: ["redis-server"]
          args: ["--appendonly", "yes"]
          ports:
            - containerPort: 6379
        - name: my-api-node
          image: my-api-node
          command: ["npm"]
          args: ["run", "start-docker-dev"]
          ports:
            - containerPort: 3000
However, in this scenario my-api-node can't contact my-api-postgres via the DNS hostname my-api-postgres. Any idea what I'm doing wrong?
You have defined all 3 containers as part of the same pod. Pods have a common network namespace so in your current setup (which is not correct, more on that in a second), you could talk to the other containers using localhost:<port>.
The 'correct' way of doing this would be to create a deployment for each application, and front those deployments with services.
Your example would roughly become (untested):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-node
  namespace: my-api
  labels:
    app: my-api-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-node
  template:
    metadata:
      name: my-api-node
      labels:
        app: my-api-node
    spec:
      containers:
        - name: my-api-node
          image: my-api-node
          command: ["npm"]
          args: ["run", "start-docker-dev"]
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-node
spec:
  selector:
    app: my-api-node
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-redis
  namespace: my-api
  labels:
    app: my-api-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-redis
  template:
    metadata:
      name: my-api-redis
      labels:
        app: my-api-redis
    spec:
      containers:
        - name: my-api-redis
          image: redis:5.0.4-alpine
          command: ["redis-server"]
          args: ["--appendonly", "yes"]
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-redis
spec:
  selector:
    app: my-api-redis
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-postgres
  namespace: my-api
  labels:
    app: my-api-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-postgres
  template:
    metadata:
      name: my-api-postgres
      labels:
        app: my-api-postgres
    spec:
      containers:
        - name: my-api-postgres
          image: postgres:11.2-alpine
          env:
            - name: POSTGRES_USER
              value: "my-api"
            - name: POSTGRES_DB
              value: "my-api"
            - name: POSTGRES_PASSWORD
              value: "my-pass"
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-postgres
spec:
  selector:
    app: my-api-postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
DNS records get registered for Services, so you connect to those and are forwarded to the pods behind them (simplified). If you need to reach your node app from the outside world, that is a whole separate concern; look at LoadBalancer-type Services or Ingress.
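As a rough sketch of what the node container could then use (the POSTGRES_HOST and REDIS_HOST variable names are made up for illustration; your app may read its connection settings differently):
env:
  - name: POSTGRES_HOST
    # the my-api-postgres Service name resolves via cluster DNS
    value: my-api-postgres
  - name: REDIS_HOST
    value: my-api-redis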
As an addition to johnharris85's answer on DNS: you should indeed separate your apps in this scenario.
Multi-container Pods are usually reserved for specific use cases, for example sidecar containers that help the main container with particular tasks, or proxies, bridges, and adapters that provide connectivity to some specific destination.
In your case you can easily separate them. Right now you have a Deployment with one Pod containing three containers, which talk to each other over localhost rather than DNS names, as already mentioned.
After separating them, I recommend reading about DNS inside Kubernetes and how communication works once Services come into play.
For Pods specifically, you can read more here.
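For reference, the in-cluster DNS names follow a fixed pattern (shown here with the my-api namespace from the manifests above; cluster.local is the default cluster domain):
# Services: <service-name>.<namespace>.svc.cluster.local
#   e.g. my-api-postgres.my-api.svc.cluster.local
# Pods:     <pod-ip-with-dashes>.<namespace>.pod.cluster.local
#   e.g. 10-244-1-17.my-api.pod.cluster.local for a pod with IP 10.244.1.17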
Where can I add socksParentProxy in a Kubernetes deployment file so that Polipo can talk to Tor? I have already created a tor service (tor:9150) and a tor deployment. Here is my YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: polipo-deployment
  labels:
    app: myauto
spec:
  replicas: 1
  selector:
    matchLabels:
      name: polipo-pod
      app: myauto
  template:
    metadata:
      name: polipo-deployment
      labels:
        name: polipo-pod
        app: myauto
    spec:
      containers:
        - env:
            - name: socksParentProxy
              value: tor:9150
          name: polipo
          image: 'clue/polipo'
          ports:
            - containerPort: 8123
As described in the documentation, you should use args instead of env:
containers:
  - name: polipo
    image: 'clue/polipo'
    args: ["socksParentProxy=tor:9150"]
    ports:
      - containerPort: 8123
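For context, the question's Deployment with that change applied would look roughly like this (an untested sketch; only the env block is replaced by args):
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: polipo-deployment
  labels:
    app: myauto
spec:
  replicas: 1
  selector:
    matchLabels:
      name: polipo-pod
      app: myauto
  template:
    metadata:
      name: polipo-deployment
      labels:
        name: polipo-pod
        app: myauto
    spec:
      containers:
        - name: polipo
          image: 'clue/polipo'
          # pass the proxy setting as a command-line argument instead of an env var
          args: ["socksParentProxy=tor:9150"]
          ports:
            - containerPort: 8123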