kind - exposing service to host - kubernetes

I would like to run an application in a local cluster for development purposes with kind, using Docker. Based on the description at https://kind.sigs.k8s.io/docs/user/quick-start/ I defined the cluster:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 30000
    hostPort: 5432
    protocol: TCP
and the deployment with container:
containers:
- name: postgres
  image: postgres:14.0
  ports:
  - containerPort: 5432
and the service
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    name: app
  type: NodePort
  ports:
  - name: postgres
    port: 5432
    targetPort: 5432
    nodePort: 30000
which I assumed should allow me to connect with DBeaver from my Windows 11 host. This does not seem to work, so I would like to ask how I should configure it to be able to access it from the host. What I have already tried: localhost:30000 and 127.0.0.1:30000, and also 127.0.0.1:5432 and localhost:5432.
Also, the kubectl get services command tells me:
NAME: something, TYPE: NodePort, CLUSTER-IP: 10.96.211.69, EXTERNAL-IP: <none>, PORT(S): 5432:30000/TCP

I found a solution: it turned out that I had placed extraPortMappings inside the worker node instead of the control-plane node. It's weird that it doesn't fail, but after moving this part to the correct place it started to work!
So the solution is to change it to this:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 5432
    protocol: TCP
- role: worker
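A quick way to verify the mapping once the cluster is recreated is to look at the Docker port bindings on the control-plane node container and then try a client from the host. This is only a sketch: the container name kind-control-plane assumes the default cluster name, and the psql line assumes the deployment sets POSTGRES_PASSWORD for the default postgres user (in DBeaver the equivalent is host 127.0.0.1, port 5432).
# list the host port bindings created from extraPortMappings
docker port kind-control-plane
# should include something like: 30000/tcp -> 0.0.0.0:5432

# connect from the host; kind forwards host 5432 -> node 30000 -> service -> pod 5432
psql -h 127.0.0.1 -p 5432 -U postgres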

Related

How to allow a TCP service (not HTTP) on a custom port inside Kubernetes

I have a container running an OPC-server on port 4840. I am trying to configure my microk8s to allow my OPC-client to connect to port 4840. Here are examples of my deployment and service:
(No namespace is defined here, but they are deployed through Azure Pipelines, which is where the namespace is set; the namespace for the deployment and service is "jawcrusher".)
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jawcrusher
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jawcrusher
  strategy: {}
  template:
    metadata:
      labels:
        app: jawcrusher
    spec:
      volumes:
      - name: jawcrusher-config
        configMap:
          name: jawcrusher-config
      containers:
      - image: XXXmicrok8scontainerregistry.azurecr.io/jawcrusher:#{Version}#
        name: jawcrusher
        ports:
        - containerPort: 4840
        volumeMounts:
        - name: jawcrusher-config
          mountPath: "/jawcrusher/config/config.yml"
          subPath: "config.yml"
      imagePullSecrets:
      - name: acrsecret
service.yml
apiVersion: v1
kind: Service
metadata:
  name: jawcrusher-service
spec:
  ports:
  - name: 7070-4840
    port: 7070
    protocol: TCP
    targetPort: 4840
  selector:
    app: jawcrusher
  type: ClusterIP
status:
  loadBalancer: {}
I am using a k8s client called Lens, and in this client there is functionality to forward local ports to the service. If I do this, I can connect to the OPC-server with my OPC-client using the URL localhost:4840. To me that indicates that the service and deployment are set up correctly.
So now I want to tell microk8s to serve my OPC-server on port 4840 "externally". For example, if my DNS name for the server is microk8s.xxxx.internal, I would like to connect with my OPC-client to microk8s.xxxx.internal:4840.
I have followed this tutorial as much as I can: https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/.
It says to update the TCP configuration for the ingress; this is how it looks after I updated it:
nginx-ingress-tcp-microk8s-conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
  uid: a32690ac-34d2-4441-a5da-a00ec52d308a
  resourceVersion: '7649705'
  creationTimestamp: '2023-01-12T14:12:07Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-ingress-tcp-microk8s-conf","namespace":"ingress"}}
  managedFields:
  - manager: kubectl-client-side-apply
    operation: Update
    apiVersion: v1
    time: '2023-01-12T14:12:07Z'
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
  - manager: kubectl-patch
    operation: Update
    apiVersion: v1
    time: '2023-02-14T07:50:30Z'
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:4840: {}
  selfLink: /api/v1/namespaces/ingress/configmaps/nginx-ingress-tcp-microk8s-conf
data:
  '4840': jawcrusher/jawcrusher-service:7070
binaryData: {}
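For reference, stripped of the server-managed metadata (managedFields, uid, resourceVersion and so on), the object above boils down to this minimal ConfigMap; nothing here is new, it is just the declarative core of what the question already shows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
data:
  '4840': jawcrusher/jawcrusher-service:7070   # external port 4840 -> jawcrusher-service port 7070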
It also says to update a deployment called ingress-nginx-controller, but in microk8s it seems to be a daemonset called nginx-ingress-microk8s-controller. This is what it looks like after adding the new port:
nginx-ingress-microk8s-controller:
spec:
  containers:
  - name: nginx-ingress-microk8s
    image: registry.k8s.io/ingress-nginx/controller:v1.2.0
    args:
    - /nginx-ingress-controller
    - '--configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf'
    - >-
      --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
    - >-
      --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
    - '--ingress-class=public'
    - ' '
    - '--publish-status-address=127.0.0.1'
    ports:
    - name: http
      hostPort: 80
      containerPort: 80
      protocol: TCP
    - name: https
      hostPort: 443
      containerPort: 443
      protocol: TCP
    - name: health
      hostPort: 10254
      containerPort: 10254
      protocol: TCP
    #### THIS IS WHAT I ADDED ####
    - name: jawcrusher
      hostPort: 4840
      containerPort: 4840
      protocol: TCP
After I updated the daemonset it restarted all the pods. The port seems to be open; if I run this script it outputs:
Test-NetConnection -ComputerName microk8s.xxxx.internal -Port 4840
ComputerName : microk8s.xxxx.internal
RemoteAddress : 10.161.64.124
RemotePort : 4840
InterfaceAlias : Ethernet 2
SourceAddress : 10.53.226.55
TcpTestSucceeded : True
Before I did the changes it said TcpTestSucceeded: False.
But the OPC-client cannot connect. It just says:
Could not connect to server: BadCommunicationError.
Does anyone see if I made a mistake somewhere, or know how to do this in microk8s?
Update 1:
I see an error message in the ingress-daemonset-pod logs when I try to connect to the server with my OPC-Client:
2023/02/15 09:57:32 [error] 999#999: *63002 connect() failed (111: Connection refused) while connecting to upstream, client: 10.53.225.232, server: 0.0.0.0:4840, upstream: "10.1.98.125:4840", bytes from/to client:0/0, bytes from/to upstream:0/0
10.53.225.232 is the client machine's IP address and 10.1.98.125 is the IP of the pod running the OPC-server.
So it seems it has understood that external port 4840 should be proxied/forwarded to my service, which in turn points to the OPC-server pod. But why do I get an error...
Update 2:
Just to clarify: if I run the kubectl port-forward command and point to my service, it works. But it does not work if I try to connect directly to port 4840. So for example this works:
kubectl port-forward service/jawcrusher-service 5000:4840 -n jawcrusher --address='0.0.0.0'
This allows me to connect with my OPC-client to the server on port 5000.
You can simply do a port forward from your localhost port x to your service/deployment/pod on port y with a kubectl command.
Let's say you have a NATS Streaming server in your k8s cluster, using TCP over port 4222; your command in that case would be:
kubectl port-forward service/nats 4222:4222
In this case it will forward all traffic on localhost over port 4222 to the service named nats inside your cluster on port 4222. Instead of a service, you could forward to a specific pod or deployment...
Use kubectl port-forward -h to see all your options...
In case you are using k3d to set up k3s in Docker or Rancher Desktop, you could add the port parameter to your k3d command:
k3d cluster create k3s --registry-create registry:5000 -p 8080:80#loadbalancer -p 4222:4222#server:0
The problem was never with microk8s or the ingress configuration. The problem was that my server was bound to the loopback address (127.0.0.1).
When I changed the configuration so the server listened on 0.0.0.0 instead of 127.0.0.1, it started working.
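For anyone hitting the same thing: a quick way to confirm which address the server is bound to is to list the listening sockets inside the pod. This is a sketch, assuming the image ships netstat or ss and using the jawcrusher namespace from the question; 127.0.0.1:4840 means loopback only, while 0.0.0.0:4840 (or :::4840) means reachable from the pod network.
kubectl exec -n jawcrusher deploy/jawcrusher -- netstat -tln
# or, if the image has iproute2 instead of net-tools:
kubectl exec -n jawcrusher deploy/jawcrusher -- ss -tln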

Kubernetes networking: Python server in Tomcat container

First of all, I am sorry that the grammar may be incorrect because I used Google Translate.
1. Deploy pods and services in a Kubernetes environment.
apiVersion: v1
kind: Pod
metadata:
  name: testml
  labels:
    app: testml-pod
spec:
  containers:
  - name: testmlserver
    image: test_ml_server:2.8
    ports:
    - containerPort: 8080
    - containerPort: 5100
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: testserver-api
      mountPath: /app/test/api
    - name: testmlserver-csv
      mountPath: /app/test/csv
  - name: testmldb
    image: test_ml_db:1.4
    ports:
    - containerPort: 1433
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: estmldb
      mountPath: /var/opt/mssql/data
  volumes:
  - name: testmlserver-api
    hostPath:
      path: /usr/testhostpath/testmlserver/api
  - name: testmlserver-csv
    hostPath:
      path: /usr/testmlhostpath/testserver/csv
  - name: testmldb
    hostPath:
      path: /usr/testmlhostpath/testmldb
After the server container is deployed, run the python server in the container.
apiVersion: v1
kind: Service
metadata:
  name: testml-service
spec:
  type: NodePort
  ports:
  - name: testml-server-port
    port: 8080
    targetPort: 8080
    protocol: TCP
    nodePort: 30080
  - name: testml-python-port
    port: 5100
    targetPort: 5100
    protocol: TCP
    nodePort: 30051
  - name: testml-db-port
    port: 1433
    targetPort: 1433
    protocol: TCP
    nodePort: 30014
  selector:
    app: test-pod
In this way, both pods and services have been deployed.
Connect to the server (Tomcat) container and run the Python server file.
At this point, the address used when calling the Python server from the web is 'http://testml:5100'; I tried writing it that way and communicating with it.
However, Cross-Origin Read Blocking (CORB) occurred.
I also tried 'http://localhost:5100', since that is another way to communicate within one pod, but the connection was refused.
In the docker-compose environment I confirmed that the Python server is reachable via localhost, but I do not know the cause of the error in the Kubernetes environment.
After checking various things, including the ports inside the server (Tomcat) container, I confirmed that only the Python port is not bound to 0.0.0.0.
How can I call the Python server normally from the server container?
In the web server (Tomcat) I connect to the DB container by pod name, as shown below. POD NAME => testml
<property name="url" value="jdbc:log4jdbc:sqlserver://testml:1433;database=test_ml;autoReconnect=true" />
In the same way, I tried to connect to the Python server by pod name, but it fails.
<api.host.server=http://testml:5100>
I think you can't connect by a pod's name unless you have a headless service defined. You can connect via the pod's IP, but that is not a recommended approach since the pod's IP is dynamic and can change across updates.
However, as you have created a Service object as well, you can use that for communication, using its name as http://testml-service:port.
Further, as the Service object is of type NodePort, you can also connect via the IP of the nodes of the cluster.
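If pod-name-style DNS is really wanted, a headless Service is the piece referred to above. A minimal sketch, assuming the pod label app: testml-pod from the question and a hypothetical name testml-headless:
apiVersion: v1
kind: Service
metadata:
  name: testml-headless
spec:
  clusterIP: None        # headless: the name resolves directly to the pod IPs
  selector:
    app: testml-pod
  ports:
  - name: testml-python-port
    port: 5100
    targetPort: 5100
Inside the cluster the Python server would then be reachable as http://testml-headless:5100, or, sticking with the existing Service, as http://testml-service:5100.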

Allow two pods to communicate with each other

First time using Kubernetes. I have an API and a database, and I want the two pods to communicate with each other.
Based on the docs, I should create a service.
I have created a service for each of the two pods, though I am still not able to connect to the pod using the service's IP address.
For example if the MySQL service that is created has an IP address of 11.22.33.44, I can run the following command to try to connect to the pod of that service:
mysql -h11.22.33.44 -uuser -ppassword foo
...and it will hang and eventually the connection will time out.
I create the pod and service like so:
kubectl create -f ./mysql.yaml
mysql.yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3306
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: my-custom-mysql-image:latest
    ports:
    - containerPort: 3306
      protocol: TCP
      name: mysql
    env:
    - name: MYSQL_DATABASE
      value: "foo"
    - name: MYSQL_USER
      value: "user"
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    - name: MYSQL_HOST
      value: "127.0.0.1"
Your service has a selector defined:
selector:
  app: mysql
yet your Pod has no labels whatsoever, hence the service cannot identify it as its backend and has no endpoints to direct ClusterIP traffic to. You should also stick to the standard port number on the service, like this:
ports:
- protocol: TCP
  port: 3306
  targetPort: 3306
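Putting both points together, a minimal corrected mysql.yaml would look roughly like this; it is a sketch based on the manifests in the question, with a label added to the Pod and the service port changed to 3306 (env entries omitted for brevity):
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306          # standard MySQL port on the service
    targetPort: 3306
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql          # matches the service selector, so an endpoint is created
spec:
  containers:
  - name: mysql
    image: my-custom-mysql-image:latest
    ports:
    - containerPort: 3306
The API pod can then reach the database as mysql-service:3306, since the service name resolves through cluster DNS.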

Kubernetes cluster, two containers (different pods) running on the same port

Can I create two pods whose containers run on the same port in one Kubernetes cluster, considering that I will create a separate service for each?
Something like this:
-- Deployment 1
kind: Deployment
spec:
  containers:
  - name: <name>
    image: <image>
    imagePullPolicy: Always
    ports:
    - containerPort: 8080

-- Service 1
kind: Service
spec:
  type: LoadBalancer
  ports:
  - port: 8081
    targetPort: 8080

-- Deployment 2
kind: Deployment
spec:
  containers:
  - name: <name>
    image: <image>
    imagePullPolicy: Always
    ports:
    - containerPort: 8080

-- Service 2
kind: Service
spec:
  type: LoadBalancer
  ports:
  - port: 8082
    targetPort: 8080
but this approach is not working.
Sure you can. Every Pod (which is the basic workload unit in k8s) is isolated from the others in terms of networking (as long as you don't mess with advanced networking options), so you can have as many pods as you want that bind the same port. You can't have two containers inside the same Pod that bind the same port, though.
Yes, they are different containers in different pods, so there shouldn't be any conflict between them.
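To make the fragments above concrete, here is a minimal sketch of one complete Deployment/Service pair; the name app-one is a placeholder I picked, the image stays a placeholder as in the question, and the second pair would be identical apart from the name/label (say app-two) and the service port 8082:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-one
  template:
    metadata:
      labels:
        app: app-one
    spec:
      containers:
      - name: app-one
        image: <image>
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app-one
spec:
  type: LoadBalancer
  selector:
    app: app-one        # each service selects only its own pods
  ports:
  - port: 8081
    targetPort: 8080
Each pod has its own IP and network namespace, so the shared containerPort 8080 never conflicts; only the externally exposed service ports need to differ if they end up on the same address.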

Tunnelling via pod

I have multiple Kubernetes pods running on a server. One of the pods contains a database application that only accepts connections from a specific subnet (i.e. other Kubernetes pods).
I'm trying to connect to the DB application from the server itself, but the connection is refused because the server's IP is not part of the allowed subnet.
Is there a way to create a simple pod that accepts connections from the server and forwards them to the pod containing the DB app?
Unfortunately, the DB app cannot be reconfigured to accept other connections.
Thank you
The easiest solution is probably to add another container to your pod running socat or something similar and make it listen and connect to your local pod's IP (important: connect to the pod IP, not 127.0.0.1, if your database program is configured to only accept connections from the overlay network).
Then modify the service you have for these pods and add the extra port.
The example below assumes port 2000 is running your program and 2001 will be the port that is forwarded to 2000 inside the pod.
Example (the example is running netcat simulating your database program):
apiVersion: v1
kind: Pod
metadata:
  name: database
  labels:
    app: database
spec:
  containers:
  - name: alpine
    image: alpine
    command: ["nc","-v","-n","-l","-p","2000"]
    ports:
    - containerPort: 2000
  - name: socat
    image: toughiq/socat
    ports:
    - containerPort: 2001
    env:
    - name: LISTEN_PROTO
      value: "TCP4"
    - name: LISTEN_PORT
      value: "2001"
    - name: TARGET_PROTO
      value: "TCP4"
    - name: TARGET_HOST
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: TARGET_PORT
      value: "2000"
---
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    app: database
  ports:
  - name: myport
    port: 2000
    targetPort: 2000
    protocol: TCP
  - name: socat
    port: 2001
    targetPort: 2001
    protocol: TCP
  externalIPs: [xxxxxx]
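Once the extra port is in place, a quick check from the server itself (outside the pod network) could look like this; <external-ip> stands for whatever address is configured in externalIPs above:
# socat accepts the connection on 2001 and relays it to the database container via the pod IP
nc -v <external-ip> 2001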