k3d multi cluster communication - kubernetes

Assuming I have 2 separate k3d clusters (namely: vault and dev),
is there a way to have a distinct URL for each cluster (preferably with HTTPS), for example vault.cluster.internal and dev.cluster.internal,
and to allow apps deployed in dev.cluster.internal to look up or interact with apps in vault.cluster.internal?
The cluster definitions are as follows:
dev.yaml:
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: dev
servers: 1
agents: 3
network: k3d-cluster
kubeAPI:
  host: "dev.cluster.internal"
  hostIP: "127.0.0.1"
image: rancher/k3s:v1.24.3-k3s1
ports:
  - port: 3000:3000
    nodeFilters:
      - loadbalancer
options:
  k3d:
    wait: true
    timeout: "60s"
  k3s:
    extraArgs:
      - arg: --tls-san=dev.cluster.internal
        nodeFilters:
          - server:*
      - arg: --disable=metrics-server
        nodeFilters:
          - server:*
      - arg: --disable=traefik
        nodeFilters:
          - server:*
kubeconfig:
  updateDefaultKubeconfig: true
  switchCurrentContext: false
and the vault.yaml:
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: vault
servers: 1
agents: 3
network: k3d-cluster
kubeAPI:
  host: "vault.cluster.internal"
  hostIP: "127.0.0.1"
image: rancher/k3s:v1.24.3-k3s1
ports:
  - port: 8200:8200
    nodeFilters:
      - loadbalancer
options:
  k3d:
    wait: true
    timeout: "60s"
  k3s:
    extraArgs:
      - arg: --tls-san=vault.cluster.internal
        nodeFilters:
          - server:*
      - arg: --disable=metrics-server
        nodeFilters:
          - server:*
      - arg: --disable=traefik
        nodeFilters:
          - server:*
kubeconfig:
  updateDefaultKubeconfig: true
  switchCurrentContext: false
Can this be done without using a service mesh?
Can I update CoreDNS in the clusters to allow resolving the other cluster's host names, and how?
Can this be done with docker network configuration, and how?
This is basically to simulate real-world clusters (but for local development).

I found 3 solutions for the problem.
The first solution is to add a hostAliases section to the dev cluster definition and make it point to the external IP of the vault cluster's load balancer.
For example, you can run the following command against the vault cluster after initializing it:
$ kubectl --context k3d-vault --namespace vault get services
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   ...
...
vault   LoadBalancer   10.43.34.131   172.24.0.3    ...
                                      ^^^^^^^^^^
...
dev.yaml would be:
#...
ports:
  - port: 3000:3000
    nodeFilters:
      - loadbalancer
hostAliases:
  - ip: 172.24.0.3
    hostnames:
      - vault.cluster.internal
#...
(Alternatively, this can be automated with the following commands, without editing the `dev.yaml` file:)
$ KMS_IP=$(kubectl --context k3d-vault --namespace vault get services | grep LoadBalancer | awk -F " " '{ print $4 }')
$ k3d cluster create --config dev.yaml --host-alias $KMS_IP:vault.cluster.internal
This solution allows resolving the hostname (as you would expect in a production cluster)...
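To verify the alias from inside the dev cluster, a throwaway pod should do (the busybox image and pod name here are arbitrary choices):
$ kubectl --context k3d-dev run dns-test --rm -it --restart=Never --image=busybox -- \
    nslookup vault.cluster.internal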
The second solution works similarly, but uses docker network inspect k3d-cluster (where k3d-cluster is the docker network name from the cluster definitions). Run it and note down the IP that docker assigned to the vault cluster's load balancer:
...
"cad3f3XXXXXX": {
    "Name": "k3d-vault-serverlb",
    "EndpointID": "47d5XXXX",
    "MacAddress": "02:42:ac:18:00:04",
    "IPv4Address": "172.24.0.4/16", # <<< this IP can be used in the dev cluster's hostAliases
    "IPv6Address": ""
}
...
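If you prefer scripting this over eyeballing the JSON, something along these lines should work (assuming jq is installed; k3d names the load balancer container k3d-&lt;cluster&gt;-serverlb):
$ docker network inspect k3d-cluster \
    | jq -r '.[0].Containers[] | select(.Name == "k3d-vault-serverlb") | .IPv4Address' \
    | cut -d/ -f1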
The last solution is simpler but less flexible.
It uses host.k3d.internal as the name for the other cluster (k3d makes this name resolvable from inside every cluster), but you have to take care of the port mappings, since all clusters would resolve the same URL for their services (which isn't ideal, but is easy enough for testing multi-cluster communication/bugs/etc.).
In other words, configure the dev cluster's VAULT_ADDR to be host.k3d.internal:8200 instead of vault.cluster.internal:8200.
This is not flexible with TLS/HTTPS (AFAIK).
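For illustration, the third option boils down to an environment variable along these lines in whatever Deployment runs in the dev cluster (the app itself is hypothetical):
# excerpt of a hypothetical app Deployment in the dev cluster
env:
  - name: VAULT_ADDR
    value: "http://host.k3d.internal:8200"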

Related

Unable to see services with traefik

I'm a beginner and I'm a bit confused about how traefik works...
I want to use the app freqtrade (a trading bot) as a docker service and replicate it with different types of configuration; if you have 5 minutes you can go check this guy, I want to do the same thing...
But I don't understand why I can't see my app running with traefik.
What I did:
I configured my domain to point to my server like this:
(screenshot: server config)
On this machine I created a docker swarm and the traefik service following this tutorial, and then my docker compose file looks like this:
```
version: '3.3'
services:
  traefik:
    # Use the latest v2.2.x Traefik image available
    image: traefik:v2.2
    ports:
      # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
      - 80:80
      # Listen on port 443, default for HTTPS
      - 443:443
    networks:
      - traefik-public
    deploy:
      placement:
        constraints:
          # Make the traefik service run only on the node with this label
          # as the node with it has the volume for the certificates
          - node.labels.traefik-public.traefik-public-certificates == true
      labels:
        # Enable Traefik for this service, to make it available in the public network
        - traefik.enable=true
        # Use the traefik-public network (declared below)
        - traefik.docker.network=traefik-public
        # Use the custom label "traefik.constraint-label=traefik-public"
        # This public Traefik will only use services with this label
        # That way you can add other internal Traefik instances per stack if needed
        - traefik.constraint-label=traefik-public
        # admin-auth middleware with HTTP Basic auth
        # Using the environment variables USERNAME and HASHED_PASSWORD
        - traefik.http.middlewares.admin-auth.basicauth.users=${USERNAME?Variable not set}:${HASHED_PASSWORD?Variable not set}
        # https-redirect middleware to redirect HTTP to HTTPS
        # It can be re-used by other stacks in other Docker Compose files
        - traefik.http.middlewares.https-redirect.redirectscheme.scheme=https
        - traefik.http.middlewares.https-redirect.redirectscheme.permanent=true
        # traefik-http set up only to use the middleware to redirect to https
        # Uses the environment variable DOMAIN
        - traefik.http.routers.traefik-public-http.rule=Host(`${DOMAIN?Variable not set}`)
        - traefik.http.routers.traefik-public-http.entrypoints=http
        - traefik.http.routers.traefik-public-http.middlewares=https-redirect
        # traefik-https the actual router using HTTPS
        # Uses the environment variable DOMAIN
        - traefik.http.routers.traefik-public-https.rule=Host(`${DOMAIN?Variable not set}`)
        - traefik.http.routers.traefik-public-https.entrypoints=https
        - traefik.http.routers.traefik-public-https.tls=true
        # Use the special Traefik service api@internal with the web UI/Dashboard
        - traefik.http.routers.traefik-public-https.service=api@internal
        # Use the "le" (Let's Encrypt) resolver created below
        - traefik.http.routers.traefik-public-https.tls.certresolver=le
        # Enable HTTP Basic auth, using the middleware created above
        - traefik.http.routers.traefik-public-https.middlewares=admin-auth
        # Define the port inside of the Docker service to use
        - traefik.http.services.traefik-public.loadbalancer.server.port=8080
    volumes:
      # Add Docker as a mounted volume, so that Traefik can read the labels of other services
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Mount the volume to store the certificates
      - traefik-public-certificates:/certificates
    command:
      # Enable Docker in Traefik, so that it reads labels from Docker services
      - --providers.docker
      # Add a constraint to only use services with the label "traefik.constraint-label=traefik-public"
      - --providers.docker.constraints=Label(`traefik.constraint-label`, `traefik-public`)
      # Do not expose all Docker services, only the ones explicitly exposed
      - --providers.docker.exposedbydefault=false
      # Enable Docker Swarm mode
      - --providers.docker.swarmmode
      # Create an entrypoint "http" listening on port 80
      - --entrypoints.http.address=:80
      # Create an entrypoint "https" listening on port 443
      - --entrypoints.https.address=:443
      # Create the certificate resolver "le" for Let's Encrypt, uses the environment variable EMAIL
      - --certificatesresolvers.le.acme.email=${EMAIL?Variable not set}
      # Store the Let's Encrypt certificates in the mounted volume
      - --certificatesresolvers.le.acme.storage=/certificates/acme.json
      # Use the TLS Challenge for Let's Encrypt
      - --certificatesresolvers.le.acme.tlschallenge=true
      # Enable the access log, with HTTP requests
      - --accesslog
      # Enable the Traefik log, for configurations and errors
      - --log
      # Enable the Dashboard and API
      - --api
volumes:
  # Create a volume to store the certificates, there is a constraint to make sure
  # Traefik is always deployed to the same Docker node with the same volume containing
  # the HTTPS certificates
  traefik-public-certificates:
networks:
  traefik-public:
    driver: overlay
    attachable: true
```
And deploy it:
docker stack deploy -c traefik.yml traefik
After that, traefik works fine. But why can't I see port 8080 in my entrypoints? Is it important for the other services?
(screenshot: Traefik entrypoints)
I tried disabling the firewall in the server configuration and also ran ufw allow 8080, but nothing changed...
I then created my application the same way I created the traefik service, with this docker-compose file:
---
version: '3'
networks:
  traefik_traefik-public:
    external: true
services:
  freqtrade:
    image: freqtradeorg/freqtrade:stable
    # image: freqtradeorg/freqtrade:develop
    # Use plotting image
    # image: freqtradeorg/freqtrade:develop_plot
    # Build step - only needed when additional dependencies are needed
    # build:
    #   context: .
    #   dockerfile: "./docker/Dockerfile.custom"
    restart: unless-stopped
    container_name: freqtrade
    volumes:
      - "./user_data:/freqtrade/user_data"
    # Expose api on port 8080 (localhost only)
    # Please read the https://www.freqtrade.io/en/stable/rest-api/ documentation
    # before enabling this.
    networks:
      - traefik_traefik-public
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
        delay: 5s
    command: >
      trade
      --logfile /freqtrade/user_data/logs/freqtrade.log
      --db-url sqlite:////freqtrade/user_data/tradesv3.sqlite
      --config /freqtrade/user_data/config.json
      --strategy SampleStrategy
    labels:
      - traefik.http.routers.bot001.tls=true'
      - traefik.http.routers.bot001.rule=Host(`bot001.bots.lordgoliath.com`)'
      - traefik.http.services.bot001.loadbalancer.server.port=8080'
and this is part of the configuration file of the bot (to access the UI):
"api_server": {
    "enabled": true,
    "enable_openapi": true,
    "listen_ip_address": "0.0.0.0",
    "listen_port": 8080,
    "verbosity": "info",
    "jwt_secret_key": "somethingrandom",
    "CORS_origins": ["https://bots.lordgoliath.com"],
    "username": "api",
    "password": "api"
},
then:
docker stack deploy -c docker-compose.yml freqtrade
So I have this:
goliath@localhost:~/freqtrade_test/user_data$ docker service ls
ID             NAME                  MODE         REPLICAS   IMAGE                           PORTS
nkvpjjztjibg   freqtrade_freqtrade   replicated   1/1        freqtradeorg/freqtrade:stable
6qryu28ute9i   traefik_traefik       replicated   1/1        traefik:v2.2                    *:80->80/tcp, *:443->443/tcp
I can see the bot running with the command docker service logs freqtrade_freqtrade, but when I go to my domain I only see the Traefik dashboard and nothing else running.
(screenshot: traefik http)
(screenshot: traefik https)
How can I see my app freqtrade running? How can I access the bot UI via my domain?
Thanks!
Sorry for my bad English, I hope this is clear enough to understand my problem.
UPDATE
docker service inspect --pretty freqtrade_freqtrade

ID:             o6bpaso69i9n6etybtj09xsqi
Name:           ft1_freqtrade
Labels:
 com.docker.stack.image=freqtradeorg/freqtrade:stable
 com.docker.stack.namespace=ft1
Service Mode:   Replicated
 Replicas:      1
Placement:
 Constraints:   [node.role == manager]
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         freqtradeorg/freqtrade:stable@sha256:3b2f2acb5b9cfedaa7b07cf56af01d1a750bce4c3054bdbaf40ac27935c984eb
 Args:          trade --logfile /freqtrade/user_data/logs/freqtrade.log --db-url sqlite:////freqtrade/user_data/tradesv3.sqlite --config /freqtrade/user_data/config.json --strategy SampleStrategy
Mounts:
 Target:        /freqtrade/user_data
  Source:       /home/goliath/freqtrade_test/user_data
  ReadOnly:     false
  Type:         bind
Resources:
Networks:       traefik_traefik-public
Endpoint Mode:  vip
UPDATE: new docker-compose.yml
---
version: '3'
networks:
  traefik_traefik-public:
    external: true
services:
  freqtrade:
    image: freqtradeorg/freqtrade:stable
    # image: freqtradeorg/freqtrade:develop
    # Use plotting image
    # image: freqtradeorg/freqtrade:develop_plot
    # Build step - only needed when additional dependencies are needed
    # build:
    #   context: .
    #   dockerfile: "./docker/Dockerfile.custom"
    restart: unless-stopped
    container_name: freqtrade
    volumes:
      - "./user_data:/freqtrade/user_data"
    # Expose api on port 8080 (localhost only)
    # Please read the https://www.freqtrade.io/en/stable/rest-api/ documentation
    # before enabling this.
    networks:
      - traefik_traefik-public
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
        delay: 5s
      labels:
        - 'traefik.enabled=true'
        - 'traefik.http.routers.bot001.tls=true'
        - 'traefik.http.routers.bot001.rule=Host(`bot001.bots.lordgoliath.com`)'
        - 'traefik.http.services.bot001.loadbalancer.server.port=8080'
    command: >
      trade
      --logfile /freqtrade/user_data/logs/freqtrade.log
      --db-url sqlite:////freqtrade/user_data/tradesv3.sqlite
      --config /freqtrade/user_data/config.json
      --strategy SampleStrategy
UPDATE: docker network ls
goliath@localhost:~/freqtrade_test$ docker network ls
NETWORK ID     NAME                     DRIVER    SCOPE
003e00401b5d   bridge                   bridge    local
9f3d9a222928   docker_gwbridge          bridge    local
09a33afad0c9   host                     host      local
r4u268yenm5u   ingress                  overlay   swarm
bed40e4a5c62   none                     null      local
qo9w45gitke5   traefik_traefik-public   overlay   swarm
This is the minimal config you need in order to see the traefik dashboard on localhost:8080:
version: "3.9"
services:
traefik:
image: traefik:latest
command: |
--api.insecure=true
ports:
- 8080:8080
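To try it out, deploy the stack and open the dashboard (the stack and file names here are arbitrary):
$ docker stack deploy -c traefik-minimal.yml traefik
# then browse to http://localhost:8080/dashboard/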
Then, your minimal configuration to get traefik to route example.com to itself:
version: "3.9"
networks:
public:
attachable: true
name: traefik
services:
traefik:
image: traefik:latest
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
command: |
--api.insecure=true
--providers.docker.exposedbydefault=false
--providers.docker.swarmmode
--providers.docker.network=traefik
ports:
- 80:80
networks:
- public
deploy:
labels:
traefik.enable: "true"
traefik.http.routers.traefik.rule: Host(`example.com`)
traefik.http.services.traefik.loadbalancer.server.port: 8080
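Without real DNS for example.com, the router can be exercised by faking the Host header (assuming the stack runs on the local machine):
$ curl -H 'Host: example.com' http://localhost/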
Now, minimal HTTPS support, using Traefik's self-signed certs to start with. Note that we configure TLS on the https entrypoint, which means traefik implicitly creates http and https variants for each router.
version: "3.9"
networks:
public:
attachable: true
name: traefik
services:
traefik:
image: traefik:latest
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
command: |
--api.insecure=true
--providers.docker.exposedbydefault=false
--providers.docker.swarmmode
--providers.docker.network=traefik
--entrypoints.http.address=:80
--entrypoints.https.address=:443
--entrypoints.https.http.tls=true
deploy:
placement:
constraints:
- node.role == manager
ports:
# - 8080:8080
- 80:80
- 443:443
networks:
- public
deploy:
labels:
traefik.enable: "true"
traefik.http.routers.traefik.rule: Host(`example.com`)
traefik.http.services.traefik.loadbalancer.server.port: 8080
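The same check over HTTPS; -k is needed because traefik serves its default self-signed certificate here:
$ curl -k --resolve example.com:443:127.0.0.1 https://example.com/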
At this point, gluing in your Let's Encrypt ("le") config should be simple.
Your freqtrade stack compose would then need to be this. If this is a single-node swarm, just omit the placement constraints; but when the swarm is large enough to have workers, tasks that don't need to be on managers should explicitly be kept on workers.
Traefik needs to talk to the swarm API over the docker socket, which is available on manager nodes only, which is why traefik itself must be node.role == manager.
version: "3.9"
networks:
traefik:
external: true
services:
freqtrade:
image: freqtradeorg/freqtrade:stable
command: ...
volumes: ...
networks:
- traefik
deploy:
placement:
constraints:
- node.role == worker
restart_policy:
max_attempts: 5
labels:
traefik.enabled: "true"
traefik.http.routers.bot001.rule: Host(`bot001.bots.lordgoliath.com`)
traefik.http.services.bot001.loadbalancer.server.port: 8080

'No healthy upstream' error when Envoy proxy is set up manually

I have a very simple environment with a client, a server, and an envoy proxy, each running in a separate docker container, communicating over HTTP.
When I set it up using docker-compose, it works.
However, when I set up the containers and the network manually (with docker network create, setting the aliases, etc.), I get a "503 - no healthy upstream" message when the client tries to send requests to the server. curl to the network alias works from the envoy container. Any idea what the difference is between using docker-compose and setting up the network and containers manually?
envoy.yaml:
static_resources:
  listeners:
    - address:
        socket_address:
          address: 0.0.0.0
          port_value: 10000
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: backend
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: service }
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config: {}
  clusters:
    - name: service
      connect_timeout: 0.25s
      type: STRICT_DNS
      lb_policy: round_robin
      load_assignment:
        cluster_name: service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: server-stub
                      port_value: 5000
admin:
  access_log_path: "/tmp/envoy.log"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
The docker-compose file that worked (but I don't want to use docker-compose; I am using scripts that set up each container separately):
version: "3.8"
services:
envoy:
image: envoyproxy/envoy:v1.16-latest
ports:
- "10000:10000"
- "9901:9901"
volumes:
- ./envoy.yaml:/etc/envoy/envoy.yaml
server-stub:
build:
context: .
dockerfile: Dockerfile
ports:
- "5000:5000"
I can't reproduce this. It works fine with your docker-compose file, and it works fine manually. Here are the manual steps I took:
$ docker network create test-net
$ docker container run --network test-net --name envoy -p 10000:10000 -p 9901:9901 --mount type=bind,src=/home/john/projects/tester/envoy.yaml,dst=/etc/envoy/envoy.yaml envoyproxy/envoy:v1.16-latest
$ docker run --network test-net --name server-stub johnharris85/simple-hostname-reporter:3
My sample app also listens on port 5000. I used your exact envoy config. Using Docker 20.10.8 if relevant.
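A quick way to verify such a setup once both containers are up (plain curl against the published ports; /clusters is a standard envoy admin endpoint):
$ curl http://localhost:10000/          # proxied through envoy to server-stub:5000
$ curl http://localhost:9901/clusters   # envoy's view of the "service" cluster endpoints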

Unable to import local minikube cluster to rancher on Mac (cattle-cluster-agent fails on :8080/health)

I have installed rancher on a Mac, using custom port numbers; I am able to connect to https://localhost:981 and work through the rancher GUI.
docker run --privileged -d --restart=unless-stopped -p 980:80 -p 981:443 --name rancher rancher/rancher
I then created a minikube local cluster and tried to import it into rancher via the GUI.
minikube -p dxmcs3 start
As suggested by Rancher in the GUI, I ran the following to import it; since the minikube pod needs to be able to access the rancher endpoint on my host machine (the Mac), I updated the CATTLE_SERVER host to point to https://host.minikube.internal:981 before running it.
curl --insecure -sfL https://localhost:981/v3/import/zsh5mtnkkrtz7tj7scbpb59q5tsjzmhg7r5476z7gdnh4xgjczt7cd_c-6sfj7.yaml | sed 's/https:\/\/localhost:981/https:\/\/host.minikube.internal:981/g' | kubectl apply -f -
The deployments go well, but the cluster's import shows up as "pending" forever in the Rancher UI.
On digging, I noticed that the rancher/rancher-agent:v2.5.9 pod fails to start. More specifically, the "cluster-register" container fails its health check on startup: it tries http://(pod-ip):8080/health and fails.
containers:
  - env:
      - name: CATTLE_FEATURES
      - name: CATTLE_IS_RKE
        value: "false"
      - name: CATTLE_SERVER
        value: https://host.minikube.internal:981
      - name: CATTLE_CA_CHECKSUM
        value: d8e8de5d121fac709e414123ff792458931530569555a3d233d555deb62a9490
      - name: CATTLE_CLUSTER
        value: "true"
      - name: CATTLE_K8S_MANAGED
        value: "true"
      - name: CATTLE_CLUSTER_REGISTRY
    image: rancher/rancher-agent:v2.5.9
    imagePullPolicy: IfNotPresent
    name: cluster-register
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /health
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 2
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 1
I'd appreciate the community's help with resolving this, or any pointer to a page I can refer to. I really want to be able to have Rancher manage multiple minikube-based local k8s environments for my learning and experimentation.

How to setup ansible playbook that is able to execute kubectl (kubernetes) commands

I'm trying to write a simple ansible playbook that would be able to execute an arbitrary command against a pod (container) running in a kubernetes cluster.
I would like to utilise the kubectl connection plugin: https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html but I'm struggling to figure out how to actually do that.
A couple of questions:
Do I need to first have an inventory for k8s defined? Something like: https://docs.ansible.com/ansible/latest/plugins/inventory/k8s.html. My understanding is that I would define the kube config via the inventory, which would be used by the kubectl plugin to actually connect to the pods and perform the specific action.
If yes, is there any example of an arbitrary command executed via the kubectl plugin (not via the shell plugin invoking kubectl on some remote machine; that is not what I'm looking for)?
I'm assuming that, during the ansible-playbook invocation, I would point to the k8s inventory.
Thanks.
I would like to utilise the kubectl connection plugin: https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html but I'm struggling to figure out how to actually do that.
The fine manual describes how one uses connection plugins, and while it is possible to use it in tasks, that is unlikely to make any sense unless your inventory started with Pods.
The way I have seen that connection plugin used is to start by identifying the Pods against which you might want to take action, and then run a playbook against a unique group created for that purpose:
- hosts: all
  tasks:
    - set_fact:
        # this is *just an example for brevity*
        # in reality you would use `k8s:` or `kubectl get -o name pods -l my-selector=my-value`
        # to get the pod names
        pod_names:
          - nginx-12345
          - nginx-3456
    - add_host:
        name: '{{ item }}'
        groups:
          - my-pods
      with_items: '{{ pod_names }}'

- hosts: my-pods
  connection: kubectl
  tasks:
    # and now you are off to the races
    - command: ps -ef
    # watch out if the Pod doesn't have a working python installed,
    # as you will have to use raw: instead
    # (and, of course, disable fact gathering with "gather_facts: no")
    - raw: ps -ef
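As a side note, if the Pods don't live in the default namespace, the kubectl connection plugin reads per-host variables such as ansible_kubectl_namespace, ansible_kubectl_context and ansible_kubectl_kubeconfig, which can be set in the add_host step (values below are made up):
- add_host:
    name: '{{ item }}'
    groups:
      - my-pods
    # hypothetical values; adjust to your cluster
    ansible_kubectl_namespace: my-namespace
    ansible_kubectl_kubeconfig: ~/.kube/config
  with_items: '{{ pod_names }}'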
First, install the Kubernetes collection:
ansible-galaxy collection install community.kubernetes
And here is the playbook; it will list all pods in a namespace and run a command in every pod:
---
- hosts: localhost
  vars_files:
    - vars/main.yaml
  collections:
    - community.kubernetes
  tasks:
    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: '{{ k8s_kubeconfig }}'
        kind: Pod
        namespace: test
      register: pod_list

    - name: Print pod names
      debug:
        msg: "pod_list: {{ pod_list | json_query('resources[*].status.podIP') }}"

    - set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"

    - k8s_exec:
        kubeconfig: '{{ k8s_kubeconfig }}'
        namespace: "{{ namespace }}"
        pod: "{{ item.metadata.name }}"
        command: apt update
      with_items: "{{ pod_list.resources }}"
      register: exec
      loop_control:
        label: "{{ item.metadata.name }}"
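The playbook above reads k8s_kubeconfig (and, for k8s_exec, namespace) from vars/main.yaml, which could look like this (values are illustrative):
# vars/main.yaml
k8s_kubeconfig: ~/.kube/config
namespace: test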
Maybe you can use something like this...
- shell: |
    kubectl exec -i -n {{ namespace }} {{ pod_name }} -- bash -c 'clickhouse-client --query "INSERT INTO customer FORMAT CSV"
    --user=test --password=test < /mnt/azure/azure/test/test.tbl'
As per the latest documentation, you can use the kubernetes.core.k8s module.
The following are some examples:
- name: Create a k8s namespace
  kubernetes.core.k8s:
    name: testing
    api_version: v1
    kind: Namespace
    state: present

- name: Create a Service object from an inline definition
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: web
        namespace: testing
        labels:
          app: galaxy
          service: web
      spec:
        selector:
          app: galaxy
          service: web
        ports:
          - protocol: TCP
            targetPort: 8000
            name: port-8000-tcp
            port: 8000

- name: Remove an existing Service object
  kubernetes.core.k8s:
    state: absent
    api_version: v1
    kind: Service
    namespace: testing
    name: web

How to test if container is running postgres

I just deployed a docker container with Postgres on it on AWS EKS.
Below are the details.
How do I access or test whether postgres is working? I tried accessing both IPs with the port from within the VPC, from a worker node:
psql -h #IP -U #defaultuser -p 55432
Below is the deployment.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 55432
          # envFrom:
          #   - configMapRef:
          #       name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: efs
Surprisingly, I am able to connect with psql, but on 5432. :( Not sure what I am doing wrong, since I set containerPort to 55432.
In short, you need to run the following command to expose your database on port 55432:
kubectl expose deployment postgres --port=55432 --target-port=5432 --name internal-postgresql-svc
From now on, you can connect to it via port 55432 from inside your cluster, using the service name as a hostname, or via its ClusterIP address:
kubectl get svc internal-postgresql-svc
What you did in your deployment manifest file was merely attach additional (purely informational) data about the network port the container uses, and it is misleading, because your container exposes port 5432 only (you can verify it by yourself here). You should use a Kubernetes Service, an abstraction which enables access to your Pods and does the necessary port mapping behind the scenes.
Please also check the different Service types if you want to expose your postgresql database outside of the Kubernetes cluster.
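For reference, a declarative Service roughly equivalent to the kubectl expose command above could look like this (a sketch; the selector assumes the app: postgres label from the deployment):
apiVersion: v1
kind: Service
metadata:
  name: internal-postgresql-svc
spec:
  selector:
    app: postgres
  ports:
    - port: 55432
      targetPort: 5432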
To test if postgres is running fine inside the Pod's container:
kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' --namespace default --image bitnami/postgresql --env="PGPASSWORD=<HERE_YOUR_PASSWORD>" --command -- psql --host <HERE_HOSTNAME=SVC_OR_IP> -U <HERE_USERNAME>