Communication between Polipo and Tor in a Kubernetes deployment

Where can I add socksParentProxy in a Kubernetes deployment file so that Polipo can communicate with Tor? I already created the tor service (tor:9150) and the tor deployment. Here is my YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: polipo-deployment
  labels:
    app: myauto
spec:
  selector:
    matchLabels:
      name: polipo-pod
      app: myauto
  template:
    metadata:
      name: polipo-deployment
      labels:
        name: polipo-pod
        app: myauto
    spec:
      containers:
      - env:
        - name: socksParentProxy
          value: tor:9150
        name: polipo
        image: 'clue/polipo'
        ports:
        - containerPort: 8123
  replicas: 1

As described in the documentation, you should use args:
containers:
- name: polipo
  image: 'clue/polipo'
  args: ["socksParentProxy=tor:9150"]
  ports:
  - containerPort: 8123
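If other pods need to reach Polipo by name, you would typically also put a Service in front of the deployment; here is a minimal sketch (the Service name polipo is my assumption and not part of the original question, the labels follow the Deployment above):
apiVersion: v1
kind: Service
metadata:
  name: polipo          # assumed name, adjust to your naming scheme
  labels:
    app: myauto
spec:
  selector:
    name: polipo-pod
    app: myauto
  ports:
  - port: 8123
    targetPort: 8123
Other pods in the same namespace could then use polipo:8123 as their HTTP proxy.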

Related

Restart pod when another service is recreated

I have a Flask pod that connects to a MongoDB service through the environment variable SERVICE_HOST (DNS discovery didn't work for some reason). When I change something in the MongoDB service and re-apply it, the Flask pod can no longer connect to the service, since the service host changes, and I have to recreate the pod manually every time. Is there a way to automate this, sort of like the docker-compose depends_on directive?
flask yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-api-deployment
  labels:
    app: proxy23-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy23-api
  template:
    metadata:
      labels:
        app: proxy23-api
    spec:
      containers:
      - name: proxy23-api
        image: my_image
        ports:
        - containerPort: 5000
        env:
        - name: DB_URI
          value: mongodb://$(PROXY23_DB_SERVICE_SERVICE_HOST):27017
        - name: DB_NAME
          value: db
        - name: PORT
          value: "5000"
      imagePullSecrets:
      - name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-api-service
spec:
  selector:
    app: proxy23-api
  type: NodePort
  ports:
  - port: 9002
    targetPort: 5000
    nodePort: 30002
mongodb yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-db-deployment
  labels:
    app: proxy23-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy23-db
  template:
    metadata:
      labels:
        app: proxy23-db
    spec:
      containers:
      - name: proxy23-db
        image: mongo:bionic
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: proxy23-storage
          mountPath: /data/db
      volumes:
      - name: proxy23-storage
        persistentVolumeClaim:
          claimName: proxy23-db-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-db-service
spec:
  selector:
    app: proxy23-db
  type: NodePort
  ports:
  - port: 27017
    targetPort: 27017
    nodePort: 30003
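The injected PROXY23_DB_SERVICE_SERVICE_HOST variable only captures the Service's IP at pod start, which is why the pod breaks when the Service is recreated. One possible alternative, assuming cluster DNS is actually available (the question notes it did not work, so this is only a sketch), is to point DB_URI at the Service's DNS name, which stays stable across re-creation:
env:
- name: DB_URI
  value: mongodb://proxy23-db-service:27017   # Service name resolves via cluster DNS in the same namespace
- name: DB_NAME
  value: db
- name: PORT
  value: "5000"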

Kubernetes two pods communication (one is Beanstalkd and the other is a worker)

I am working on Kubernetes to create two pods using deployments:
The first deployment's pods run a beanstalkd container.
The second one runs a worker on PHP 7/nginx and holds the application codebase.
I am getting this exception:
"user_name":"anonymous","message":"exception 'Pheanstalk_Exception_ConnectionException' with message 'Socket error 0: php_network_getaddresses: getaddrinfo failed: Try again (connecting to test-beanstalkd:11300)' in /var/www/html/vendor/pda/pheanstalk/classes/Pheanstalk/Socket/NativeSocket.php:35\nStack trace:\n#0 "
How do I make them communicate with each other?
My beanstalkd.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-beanstalkd
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-beanstalkd
  template:
    metadata:
      labels:
        app: test-beanstalkd
    spec:
      containers:
      # Our beanstalkd server
      - image: schickling/beanstalkd
        name: test-beanstalkd
        args:
        - -p
        - "11300"
        - -z
        - "1073741824"
---
apiVersion: v1
kind: Service
metadata:
  name: test-beanstalkd-svc
  namespace: test
  labels:
    run: test-beanstalkd
spec:
  ports:
  - port: 11300
    protocol: TCP
  selector:
    app: test-beanstalkd
  type: NodePort
Below is our worker.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-worker
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-worker
  template:
    metadata:
      labels:
        app: test-worker
    spec:
      volumes:
      # Create the shared files volume to be used in both pods
      - name: shared-files
        emptyDir: {}
      containers:
      # Our PHP-FPM application
      - image: test-worker:master
        name: worker
        env:
        - name: beanstalkd_host
          value: "test-beanstalkd"
        volumeMounts:
        - name: nginx-config-volume
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
---
apiVersion: v1
kind: Service
metadata:
  name: test-worker-svc
  namespace: test
  labels:
    run: test-worker
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: worker
  type: NodePort
The mistake is that in the env of test-worker, the beanstalkd_host variable needs to be set to test-beanstalkd-svc, because that is the name of the Service.
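In other words, only the value of that variable in the worker manifest above needs to change, roughly like this:
env:
- name: beanstalkd_host
  value: "test-beanstalkd-svc"   # must match the Service name, not the Deployment name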

Connecting to a Kafka Kubernetes cluster from WITHIN the cluster

I am new to Kubernetes deployment.
I CAN connect to the kafka cluster from outside, BUT am not able to do the same from WITHIN the cluster.
Here are my configurations:
kafka-broker:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: crd-instrument
  name: kafkaservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafkaservice
  template:
    metadata:
      labels:
        app: kafkaservice
    spec:
      hostname: kafkaservice
      containers:
      - name: kafkaservice
        imagePullPolicy: IfNotPresent
        image: wurstmeister/kafka
        env:
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "INTERNAL_PLAINTEXT:PLAINTEXT,EXTERNAL_PLAINTEXT:PLAINTEXT"
        - name: KAFKA_LISTENERS
          value: "INTERNAL_PLAINTEXT://0.0.0.0:9092,EXTERNAL_PLAINTEXT://0.0.0.0:9093"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "INTERNAL_PLAINTEXT://kafkaservice:9092,EXTERNAL_PLAINTEXT://127.0.0.1:30035"
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: "INTERNAL_PLAINTEXT"
        - name: KAFKA_CREATE_TOPICS
          value: "crd_instrument_req:1:1,crd_instrument_resp:1:1,crd_instrument_resol:1:1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: "zookeeperservice:2181"
        ports:
        - name: port9092
          containerPort: 9092
        - name: port9093
          containerPort: 9093
kafka-service:
---
apiVersion: v1
kind: Service
metadata:
  namespace: crd-instrument
  name: kafkaservice
  labels:
    app: kafkaservice
spec:
  selector:
    app: kafkaservice
  ports:
  - name: port9092
    port: 9092
    targetPort: 9092
    protocol: TCP
kafka-service-external:
---
apiVersion: v1
kind: Service
metadata:
  namespace: crd-instrument
  name: kafkaservice-external
  labels:
    app: kafkaservice-external
spec:
  selector:
    app: kafkaservice
  ports:
  - name: port9093
    port: 9093
    protocol: TCP
    nodePort: 30035
  type: NodePort
This is the client YAML that's started in the same namespace:
---
apiVersion: v1
kind: Pod
metadata:
  name: crd-instrument-client
  labels:
    app: crd-instrument-client
  namespace: crd-instrument
spec:
  containers:
  - name: crd-instrument-client
    image: crd_instrument_client:1.0
    imagePullPolicy: Never
The code inside it is trying to connect with BOOTSTRAP_SERVERS set to "kafkaservice:9092".
It's not connecting. Where am I going wrong? If someone can help point it out, please do.
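When a pod cannot reach kafkaservice:9092, a common first step is to check that the Service name actually resolves from inside the namespace; here is a minimal throwaway debug pod for that (the pod name and the busybox image tag are my assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: dns-debug              # assumed name
  namespace: crd-instrument
spec:
  restartPolicy: Never
  containers:
  - name: debug
    image: busybox:1.28        # assumed tag; its nslookup output is easy to read
    command: ["sh", "-c", "nslookup kafkaservice"]
kubectl logs dns-debug -n crd-instrument then shows whether the name resolves; if it does, the next thing to inspect is usually the advertised listener configuration.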

How to configure Axon Server in a pod for Kubernetes

I have 3 services: axon, command, and query. I am trying to run them via Kubernetes. With docker-compose and Swarm it works perfectly, but somehow it is not working via K8s.
I am getting the following error:
Connecting to AxonServer node axonserver:8124 failed: UNAVAILABLE: Unable to resolve host axonserver
Below are my config files.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  serviceName: axonserver
  replicas: 1
  selector:
    matchLabels:
      app: axonserver
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - name: axonserver
        image: axoniq/axonserver
        env:
        - name: AXONSERVER_HOSTNAME
          value: axonserver
        imagePullPolicy: Always
        ports:
        - name: grpc
          containerPort: 8124
          protocol: TCP
        - name: gui
          containerPort: 8024
          protocol: TCP
Here is the command-service YAML, which contains the Service as well.
apiVersion:
kind: Pod
metadata:
  name: command-service
  labels:
    name: peanuts
    app: axonserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - image: celcin/command-svc
        name: command-service
        ports:
        - containerPort: 8080
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: command-service
  labels:
    name: peanuts
    app: axonserver
spec:
  ports:
  - name: "8081"
    port: 8081
    targetPort: 8080
  selector:
    labels:
      app: axonserver
Here is the last one, the query-service YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: query-service
  labels:
    name: peanuts
    app: axonserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - image: celcin/query-svc
        name: query-service
        ports:
        - containerPort: 8080
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: query-service
  labels:
    name: peanuts
    app: axonserver
spec:
  ports:
  - name: "8082"
    port: 8082
    targetPort: 8080
  selector:
    labels:
      app: axonserver
Your YAML is somewhat mixed up. If I understood you correctly, you have three services:
command-service
query-service
axonserver
Your setup should be configured so that command-service and query-service expose their own ports, but both use the ports exposed by axonserver. Here is my attempt at your YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  serviceName: axonserver
  replicas: 1
  selector:
    matchLabels:
      app: axonserver
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - name: axonserver
        image: axoniq/axonserver
        imagePullPolicy: Always
        ports:
        - name: grpc
          containerPort: 8124
          protocol: TCP
        - name: gui
          containerPort: 8024
          protocol: TCP
The ports you defined in:
ports:
- name: command-srv
  containerPort: 8081
  protocol: TCP
- name: query-srv
  containerPort: 8082
  protocol: TCP
are not ports of Axon Server, but of your command-service and query-service and should be exposed in those containers.
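For example, command-service could become its own Deployment plus Service along these lines; this is an untested sketch, and the app: command-service label (instead of reusing app: axonserver) is my assumption:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: command-service
  labels:
    app: command-service      # assumed label, distinct from axonserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: command-service
  template:
    metadata:
      labels:
        app: command-service
    spec:
      containers:
      - name: command-service
        image: celcin/command-svc
        ports:
        - name: command-srv
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: command-service
spec:
  selector:
    app: command-service      # flat key/value pairs, no nested labels: block
  ports:
  - name: "8081"
    port: 8081
    targetPort: 8080
The selector here is a flat map of labels rather than the nested labels: block from the original manifests, which is what Kubernetes expects.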
Kind regards,
Simon

DNS in Kubernetes deployment not working as expected

I'm well versed in Docker, but must be doing something wrong here with K8. I'm running skaffold with minikube and trying to get DNS between containers working. Here's my deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      name: my-api
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api-postgres
        image: postgres:11.2-alpine
        env:
        - name: POSTGRES_USER
          value: "my-api"
        - name: POSTGRES_DB
          value: "my-api"
        - name: POSTGRES_PASSWORD
          value: "my-pass"
        ports:
        - containerPort: 5432
      - name: my-api-redis
        image: redis:5.0.4-alpine
        command: ["redis-server"]
        args: ["--appendonly", "yes"]
        ports:
        - containerPort: 6379
      - name: my-api-node
        image: my-api-node
        command: ["npm"]
        args: ["run", "start-docker-dev"]
        ports:
        - containerPort: 3000
However, in this scenario my-api-node can't contact my-api-postgres via the DNS hostname my-api-postgres. Any idea what I'm doing wrong?
You have defined all 3 containers as part of the same pod. Containers in a pod share a network namespace, so in your current setup (which is not correct; more on that in a second) you could talk to the other containers using localhost:<port>.
The 'correct' way of doing this would be to create a deployment for each application, and front those deployments with services.
Your example would roughly become (untested):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-node
  namespace: my-api
  labels:
    app: my-api-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-node
  template:
    metadata:
      name: my-api-node
      labels:
        app: my-api-node
    spec:
      containers:
      - name: my-api-node
        image: my-api-node
        command: ["npm"]
        args: ["run", "start-docker-dev"]
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-node
spec:
  selector:
    app: my-api-node
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-redis
  namespace: my-api
  labels:
    app: my-api-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-redis
  template:
    metadata:
      name: my-api-redis
      labels:
        app: my-api-redis
    spec:
      containers:
      - name: my-api-redis
        image: redis:5.0.4-alpine
        command: ["redis-server"]
        args: ["--appendonly", "yes"]
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-redis
spec:
  selector:
    app: my-api-redis
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-postgres
  namespace: my-api
  labels:
    app: my-api-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-postgres
  template:
    metadata:
      name: my-api-postgres
      labels:
        app: my-api-postgres
    spec:
      containers:
      - name: my-api-postgres
        image: postgres:11.2-alpine
        env:
        - name: POSTGRES_USER
          value: "my-api"
        - name: POSTGRES_DB
          value: "my-api"
        - name: POSTGRES_PASSWORD
          value: "my-pass"
        ports:
        - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-postgres
spec:
  selector:
    app: my-api-postgres
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
DNS records get registered for Services, so you connect to those and are forwarded to the pods behind them (simplified). If you need to get to your node app from the outside world, that's a whole additional deal, and you should look at LoadBalancer-type Services or Ingress.
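Inside the cluster, the node container would then reach the other services by their Service names rather than localhost; here is a sketch of what its connection settings might look like (the env variable names DATABASE_URL and REDIS_URL are assumptions, since the app's configuration is not shown in the question):
env:
- name: DATABASE_URL
  value: postgres://my-api:my-pass@my-api-postgres:5432/my-api   # Service name resolves within the my-api namespace
- name: REDIS_URL
  value: redis://my-api-redis:6379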
As an addition to johnharris85's answer on DNS: you should separate your apps, as your scenario calls for it.
Multi-container Pods are usually reserved for specific use cases, for example sidecar containers that help the main container with particular tasks, or proxies, bridges, and adapters that provide connectivity to a specific destination.
In your case you can easily separate them. Right now you have a Deployment with one Pod in which there are 3 containers that communicate with each other over localhost, not DNS names, as already mentioned.
After separating them, I recommend reading about DNS inside Kubernetes and how communication works once Services come into play.
For Pods, you can read more here.