Deploying a dhcpd server - Kubernetes

I'm trying to deploy a DHCP server in a pod on my Kubernetes cluster.
I've created the following resources:
$ cat dhcpd-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dhcpd
  namespace: kube-mngt
spec:
  selector:
    matchLabels:
      app: dhcpd
  replicas: 1
  template:
    metadata:
      labels:
        app: dhcpd
    spec:
      nodeSelector:
        kubernetes.io/hostname: neo1
      containers:
      - name: dhcpd
        image: 10.0.100.1:5000/dhcpd:latest
        volumeMounts:
        - name: dhcpd-config
          mountPath: /etc/dhcp
      volumes:
      - name: dhcpd-config
        persistentVolumeClaim:
          claimName: dhcpd-config-volume-claim
$ kubectl create -f dhcpd-deployment.yaml
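The deployment references a PersistentVolumeClaim named dhcpd-config-volume-claim that isn't shown in the question. For completeness, a minimal sketch of such a claim might look like this (access mode, size, and storage class are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dhcpd-config-volume-claim
  namespace: kube-mngt
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # assumed size; adjust to your cluster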
$ cat dhcpd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: dhcpd
  namespace: kube-mngt
spec:
  selector:
    app: dhcpd
  ports:
  - name: dhcp
    protocol: UDP
    port: 67
    targetPort: 67
$ kubectl create -f dhcpd-service.yaml
Both the pod and the service are created successfully, but unfortunately the dhcpd pod does not receive any packets on UDP port 67.
Did I miss something?

I've found the solution to make the dhcpd pod work properly.
The example below serves an external network outside of the k8s service network (ClusterIPs).
The dhcp configuration looks like the following:
include "/etc/dhcp/dhcpd-options.conf";
subnet 192.168.0.0 netmask 255.255.0.0 {}
# management network
subnet 10.0.0.0 netmask 255.255.0.0 {
option routers 10.0.255.254;
option broadcast-address 10.0.255.255;
next-server 10.0.100.6;
include "/etc/dhcp/lease-bmc.conf";
include "/etc/dhcp/lease-node.conf";
}
The k8s Service is as follows:
$ cat dhcpd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: dhcpd
  namespace: kube-mngt
spec:
  selector:
    app: dhcpd
  ports:
  - protocol: UDP
    port: 67
    targetPort: 67
  externalIPs:
  - 10.0.100.5
Then, configure the switch (interface VLAN X) to specify a helper-address that points to the DHCP server (in our case, 10.0.100.5):
interface Vlan1
 ip address 10.0.255.254 255.255.0.0 secondary
 ip address 10.0.0.1 255.255.0.0
 ip helper-address 10.0.100.5
!
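To sanity-check the setup, you can verify that the Service carries the external IP and watch for relayed DHCP requests arriving in the pod (a quick sketch; it assumes tcpdump is available in the dhcpd image):

# confirm the external IP is attached to the Service
$ kubectl -n kube-mngt get svc dhcpd
# watch for relayed DHCP requests inside the pod
$ kubectl -n kube-mngt exec deploy/dhcpd -- tcpdump -ni any udp port 67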

Related

How to allow for tcp service (not http) on custom port inside kubernetes

I have a container running an OPC server on port 4840. I am trying to configure my microk8s to allow my OPC client to connect to port 4840. Here are examples of my deployment and service:
(No namespace is defined here because they are deployed through Azure Pipelines, which is where the namespace is set; the namespace for the deployment and service is "jawcrusher".)
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jawcrusher
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jawcrusher
  strategy: {}
  template:
    metadata:
      labels:
        app: jawcrusher
    spec:
      volumes:
      - name: jawcrusher-config
        configMap:
          name: jawcrusher-config
      containers:
      - image: XXXmicrok8scontainerregistry.azurecr.io/jawcrusher:#{Version}#
        name: jawcrusher
        ports:
        - containerPort: 4840
        volumeMounts:
        - name: jawcrusher-config
          mountPath: "/jawcrusher/config/config.yml"
          subPath: "config.yml"
      imagePullSecrets:
      - name: acrsecret
service.yml
apiVersion: v1
kind: Service
metadata:
  name: jawcrusher-service
spec:
  ports:
  - name: 7070-4840
    port: 7070
    protocol: TCP
    targetPort: 4840
  selector:
    app: jawcrusher
  type: ClusterIP
status:
  loadBalancer: {}
I am using a k8s client called Lens, and in this client there is functionality to forward local ports to the service. If I do this, I can connect to the OPC server with my OPC client using the URL localhost:4840. To me that indicates that the service and deployment are set up correctly.
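The Lens port forward is equivalent to the following kubectl command (a sketch based on the service above: local port 4840 maps to service port 7070, which targets 4840 in the pod):

kubectl port-forward service/jawcrusher-service 4840:7070 -n jawcrusher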
So now I want to tell microk8s to serve my OPC server on port 4840 "externally". For example, if the DNS name of my server is microk8s.xxxx.internal, I would like to connect with my OPC client to microk8s.xxxx.internal:4840.
I have followed this tutorial as much as I can: https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/.
It says to update the TCP configuration for the ingress; this is how it looks after I updated it:
nginx-ingress-tcp-microk8s-conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
  uid: a32690ac-34d2-4441-a5da-a00ec52d308a
  resourceVersion: '7649705'
  creationTimestamp: '2023-01-12T14:12:07Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-ingress-tcp-microk8s-conf","namespace":"ingress"}}
  managedFields:
  - manager: kubectl-client-side-apply
    operation: Update
    apiVersion: v1
    time: '2023-01-12T14:12:07Z'
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
  - manager: kubectl-patch
    operation: Update
    apiVersion: v1
    time: '2023-02-14T07:50:30Z'
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:4840: {}
  selfLink: /api/v1/namespaces/ingress/configmaps/nginx-ingress-tcp-microk8s-conf
data:
  '4840': jawcrusher/jawcrusher-service:7070
binaryData: {}
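For reference, the same data entry can also be added without opening an editor by patching the ConfigMap directly (a sketch using standard kubectl patch syntax):

kubectl patch configmap nginx-ingress-tcp-microk8s-conf -n ingress \
  --type merge -p '{"data":{"4840":"jawcrusher/jawcrusher-service:7070"}}'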
It also says to update a deployment called ingress-nginx-controller, but in microk8s it seems to be a daemonset called nginx-ingress-microk8s-controller. This is what it looks like after adding the new port:
nginx-ingress-microk8s-controller:
spec:
  containers:
  - name: nginx-ingress-microk8s
    image: registry.k8s.io/ingress-nginx/controller:v1.2.0
    args:
    - /nginx-ingress-controller
    - '--configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf'
    - >-
      --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
    - >-
      --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
    - '--ingress-class=public'
    - ' '
    - '--publish-status-address=127.0.0.1'
    ports:
    - name: http
      hostPort: 80
      containerPort: 80
      protocol: TCP
    - name: https
      hostPort: 443
      containerPort: 443
      protocol: TCP
    - name: health
      hostPort: 10254
      containerPort: 10254
      protocol: TCP
    #### THIS IS WHAT I ADDED ####
    - name: jawcrusher
      hostPort: 4840
      containerPort: 4840
      protocol: TCP
After I update the daemonset, it restarts all the pods. The port seems to be open; if I run this script, it outputs:
Test-NetConnection -ComputerName microk8s.xxxx.internal -Port 4840

ComputerName     : microk8s.xxxx.internal
RemoteAddress    : 10.161.64.124
RemotePort       : 4840
InterfaceAlias   : Ethernet 2
SourceAddress    : 10.53.226.55
TcpTestSucceeded : True
Before I did the changes it said TcpTestSucceeded: False.
But the OPC client cannot connect. It just says:
Could not connect to server: BadCommunicationError.
Does anyone see if I made a mistake somewhere, or know how to do this in microk8s?
Update 1:
I see an error message in the ingress daemonset pod logs when I try to connect to the server with my OPC client:
2023/02/15 09:57:32 [error] 999#999: *63002 connect() failed (111:
Connection refused) while connecting to upstream, client:
10.53.225.232, server: 0.0.0.0:4840, upstream: "10.1.98.125:4840", bytes from/to client:0/0, bytes from/to upstream:0/0
10.53.225.232 is the client machine's IP address and 10.1.98.125 is the IP address of the pod running the OPC server.
So it seems like it has understood that the external port 4840 should be proxied/forwarded to my service, which in turn points to the OPC-server pod. But why do I get an error...
Update 2:
Just to clarify: if I run the kubectl port-forward command and point it at my service, it works. But not if I try to connect directly to port 4840. So, for example, this works:
kubectl port-forward service/jawcrusher-service 5000:4840 -n jawcrusher --address='0.0.0.0'
This allows me to connect with my OPC client to the server on port 5000.
You should simply do a port forward from your localhost port x to your service/deployment/pod on port y with the kubectl command.
Let's say you have a NATS Streaming server in your k8s cluster; it uses TCP over port 4222. Your command in that case would be:
kubectl port-forward service/nats 4222:4222
In this case it will forward all traffic on localhost port 4222 to the service named nats inside your cluster on port 4222. Instead of a service, you could forward to a specific pod or deployment...
Use kubectl port-forward -h to see all your options...
In case you are using k3d to set up k3s in Docker or Rancher Desktop, you can add the port parameter to your k3d command:
k3d cluster create k3s --registry-create registry:5000 -p 8080:80#loadbalancer -p 4222:4222#server:0
The problem was never with microk8s or the ingress configuration. The problem was that my server was bound to the loopback address (127.0.0.1).
When I changed the configuration so the server listened on 0.0.0.0 instead of 127.0.0.1, it started working.
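A quick way to spot this class of problem is to check the listen address inside the pod (a sketch; it assumes netstat exists in the image and that the deployment is named jawcrusher):

# 127.0.0.1:4840 means only loopback traffic can reach the server;
# 0.0.0.0:4840 means connections from the pod network are accepted too
kubectl exec -n jawcrusher deploy/jawcrusher -- netstat -tlnp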

pointing selenium tests through nodePort in a service

I have this in a selenium-hub-service.yml file:
apiVersion: v1
kind: Service
metadata:
  name: selenium-srv
spec:
  selector:
    app: selenium-hub
  ports:
  - port: 4444
    nodePort: 30001
  type: NodePort
  sessionAffinity: None
When I run kubectl describe service in the terminal, I get the endpoint of the kubernetes service as 192.168.49.2:8443. I then take that IP and point the browser to 192.168.49.2:30001, but the browser is not able to reach that endpoint. I was expecting to reach the selenium hub.
When I run minikube service selenium-srv --url, which gives me http://127.0.0.1:56498, and point the browser to it, I can reach the hub.
My question is: why am I not able to reach it through the nodePort?
I would like to do it the nodePort way because I know the port beforehand, and if the kubernetes service endpoint remains constant it is easy to point my tests to a known endpoint when I integrate them with Azure Pipelines.
EDIT: output of kubectl get service:
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP          4d
selenium-srv   NodePort    10.96.34.117   <none>        4444:30001/TCP   2d2h
Posted community wiki based on this GitHub topic. Feel free to expand it.
The information below assumes that you are using the default docker driver.
Minikube on macOS behaves a bit differently than on Linux. On Linux you have dedicated interfaces used for docker and for connecting to the minikube node port, like this one:
3: docker0:
...
inet 172.17.0.1/16
And this one:
4: br-42319e616ec5:
...
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-42319e616ec5
There is no such solution implemented on macOS. Check this:
This is a known issue, Docker Desktop networking doesn't support ports. You will have to use minikube tunnel.
Also:
there is no bridge0 on Macos, and it makes container IP unreachable from host.
That means you can't connect to your service using IP address 192.168.49.2.
Check also this article: Known limitations, use cases, and workarounds - Docker Desktop for Mac:
There is no docker0 bridge on macOS
Because of the way networking is implemented in Docker Desktop for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
I cannot ping my containers
Docker Desktop for Mac can’t route traffic to containers.
Per-container IP addressing is not possible
The docker (Linux) bridge network is not reachable from the macOS host.
There are a few ways to set up minikube to use NodePort at the localhost address on Mac, like this one:
minikube start --driver=docker --extra-config=apiserver.service-node-port-range=32760-32767 --ports=127.0.0.1:32760-32767:32760-32767
You can also use the minikube service command, which will return a URL to connect to a service.
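For the Service in this question, that would be:

minikube service selenium-srv --url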
Is your deployment running on port 4444?
Try this:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  replicas: 1
  selector:
    matchLabels:
      app: selenium-hub
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
      - name: selenium-hub
        image: selenium/hub:3.141
        ports:
        - containerPort: 4444
        resources:
          limits:
            memory: "1000Mi"
            cpu: ".5"
        livenessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /wd/hub/status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  ports:
  - port: 4444
    targetPort: 4444
    name: port0
  selector:
    app: selenium-hub
  type: NodePort
  sessionAffinity: None
If you want to use Chrome:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-chrome
  labels:
    app: selenium-node-chrome
spec:
  replicas: 2
  selector:
    matchLabels:
      app: selenium-node-chrome
  template:
    metadata:
      labels:
        app: selenium-node-chrome
    spec:
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory
      containers:
      - name: selenium-node-chrome
        image: selenium/node-chrome-debug:3.141
        ports:
        - containerPort: 5555
        volumeMounts:
        - mountPath: /dev/shm
          name: dshm
        env:
        - name: HUB_HOST
          value: "selenium-hub"
        - name: HUB_PORT
          value: "4444"
Testing python code:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def check_browser(browser):
    driver = webdriver.Remote(
        command_executor='http://<IP>:<PORT>/wd/hub',
        desired_capabilities=getattr(DesiredCapabilities, browser)
    )
    driver.get("http://google.com")
    assert "google" in driver.page_source
    driver.quit()
    print("Browser %s checks out!" % browser)

check_browser("CHROME")

Connecting to a Kubernetes service resulted in connection refused

I'm trying to deploy my web application using Kubernetes. I used Minikube to create the cluster and successfully exposed my frontend react app using ingress. Yet when I put the backend service's URL in the "env" field of the frontend's deployment.yaml, it did not work. When I tried to connect to the backend service from the frontend pod, the connection was refused.
frontend deployment yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: <image_name>
        imagePullPolicy: Never
        ports:
        - containerPort: 80
        env:
        - name: REACT_APP_API_V1_ENDPOINT
          value: http://backend-svc
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: frontend
Ingress for frontend
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-read-timeout: "12h"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: front-testk.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-svc
            port:
              number: 80
Backend deployment yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: backend
  labels:
    name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: <image_name>
        ports:
        - containerPort: 80
        imagePullPolicy: Never
      restartPolicy: Always
---
kind: Service
apiVersion: v1
metadata:
  name: backend-svc
  labels:
    app: backend
spec:
  selector:
    app: backend
  ports:
  - name: http
    port: 80
    targetPort: 80
% kubectl get svc backend-svc -o wide
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
backend-svc   ClusterIP   10.109.107.145   <none>        80/TCP    21h   app=backend
I connected to the frontend pod and tried to reach the backend using the env var created during deployment:
❯ kubectl exec frontend-75579c8499-x766s -it sh
/app # apk update && apk add curl
OK: 10 MiB in 20 packages
/app # env
REACT_APP_API_V1_ENDPOINT=http://backend-svc
/app # curl $REACT_APP_API_V1_ENDPOINT
curl: (7) Failed to connect to backend-svc port 80: Connection refused
/app # nslookup backend-svc
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: backend-svc.default.svc.cluster.local
Address: 10.109.107.145
** server can't find backend-svc.cluster.local: NXDOMAIN
Exec into my backend pod
# netstat -tulpn
Active Internet connections (only servers)
Proto  Recv-Q  Send-Q  Local Address  Foreign Address  State   PID/Program name
tcp    0       0       0.0.0.0:8080   0.0.0.0:*        LISTEN  1/node

# netstat -lnturp
Kernel IP routing table
Destination   Gateway      Genmask       Flags   MSS Window   irtt   Iface
0.0.0.0       172.17.0.1   0.0.0.0       UG      0 0          0      eth0
172.17.0.0    0.0.0.0      255.255.0.0   U       0 0          0      eth0
As I suspected, your application listens on port 8080. If you look closely at your netstat output, you will notice that the Local Address is 0.0.0.0:8080:
# netstat -tulpn
Proto  Recv-Q     Send-Q  Local Address  Foreign Address  State   PID/Program name
tcp    0       👉 0       0.0.0.0:8080   0.0.0.0:*        LISTEN  1/node
In order to fix that, you have to correct the targetPort in your service:
kind: Service
apiVersion: v1
metadata:
  name: backend-svc
  labels:
    app: backend
spec:
  selector:
    app: backend
  ports:
  - name: http
    port: 80
    targetPort: 8080 # 👈 change this to 8080
There is no need to change the port on the deployment side since the containerPort is primarily informational. Not specifying a port there does not prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible.
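After applying the change, the earlier test from the frontend pod should succeed (a sketch reusing the pod name and env var from the question; curl must be installed as shown above):

kubectl exec frontend-75579c8499-x766s -it -- sh -c 'curl -s $REACT_APP_API_V1_ENDPOINT'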

Access redis by service name in Kubernetes

I created a redis deployment and service in kubernetes. I can access redis from another pod by the service IP, but I can't access it by the service name.
The redis yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
  - port: 6379
    targetPort: 6379
I applied your file, and I am able to ping and telnet to the service both from within the same namespace and from a different namespace. To test this, I created pods in the same namespace and in a different namespace and installed telnet and ping. Then I exec'ed into them and did the below tests:
Same Namespace
kubectl exec -it <same-namespace-pod> /bin/bash
# ping redis
PING redis.<redis-namespace>.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
Different Namespace
kubectl exec -it <different-namespace-pod> /bin/bash
# ping redis.<redis-namespace>.svc.cluster.local
PING redis.test.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis.<redis-namespace>.svc.cluster.local 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
If you are not able to do that due to DNS resolution issues, you can look at /etc/resolv.conf in your pod to make sure it has the search prefixes svc.cluster.local and cluster.local.
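Inside a pod in myapp-ns, a healthy resolv.conf typically looks something like the following (the nameserver IP and options vary by cluster, so treat this as illustrative):

# cat /etc/resolv.conf
nameserver 10.96.0.10
search myapp-ns.svc.cluster.local svc.cluster.local cluster.local
options ndots:5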
I created a redis deployment and service in kubernetes, I can access redis from another pod by service ip, but I can't access it by service name
Keep in mind that you can use a Service's name to reach the backend Pods it exposes only from within the same namespace. Looking at your Deployment and Service yaml manifests, we can see they're deployed in the myapp-ns namespace, which means you can access your Service by name only from a Pod deployed in that namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns ### 👈
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns ### 👈
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
  - port: 6379
    targetPort: 6379
So if you deploy the following Pod:
apiVersion: v1
kind: Pod
metadata:
  name: redis-client
  namespace: myapp-ns ### 👈
spec:
  containers:
  - name: redis-client
    image: debian
you will be able to access your Service by its name, so the following commands (provided you've installed all required tools) will work:
redis-cli -h redis
telnet redis 6379
However, if your redis-client Pod is deployed to a completely different namespace, you will need to use the fully qualified domain name (FQDN), <service>.<namespace>.svc.cluster.local, which is built according to the rule described here:
redis-cli -h redis.myapp-ns.svc.cluster.local
telnet redis.myapp-ns.svc.cluster.local 6379

Can't access service in my local kubernetes cluster using NodePort

I have the following manifest:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
      - name: my-redis
        image: redis
        ports:
        - name: redisport1
          containerPort: 6379
          hostPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    name: my-redis
  ports:
  - name: redisport1
    port: 6379
    targetPort: 6379
    nodePort: 30036
    protocol: TCP
This is a sample that reproduces my problem. My intention here is to create a simple cluster that has a pod with a redis container in it, exposed to my localhost. Still, kubectl get services gives me the following output:
redis-service   NodePort   10.107.233.66   <none>   6379:30036/TCP   10s
If I swap NodePort for LoadBalancer, I get an external IP but the port still doesn't work.
Can you help me identify why I'm failing to map port 6379 to my localhost, please?
Thanks,
In order to access your app through the node port, you have to use this URL:
http://{node ip}:{node port}
If you are using minikube, the minikube IP is the node IP. You can retrieve it using the minikube ip command.
You can also use the minikube service redis-service --url command to get the URL to access your application through the node port.
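Putting the two together (commands only; the actual IP and URL differ per machine):

# node IP of the minikube VM/container
minikube ip
# full URL for the NodePort service
minikube service redis-service --url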
For anybody who's interested in the question: I found the problem. After Ijaz's fix, I also needed to change the selector to match the label on the pod; it was a typo on my end!
The pod has the app=my-redis label, but the Service selector had name=my-redis. Matching them fixed the access problem.
You don't need the hostPort:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
      - name: my-redis
        image: redis
        ports:
        - name: redisport1
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    app: my-redis # must match the pod label (the original "name: my-redis" was the typo noted above)
  ports:
  - name: redisport1
    port: 6379
    targetPort: 6379
    nodePort: 30036
    protocol: TCP
Now the nodePort 30036 can be used to access the service on any worker node.
If the cluster node is somewhere else and you want to make the port available on your local client, just use kubectl port-forward:
kubectl port-forward svc/redis-service 6379:6379
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
Notes:
On-prem installs of k8s don't support the LoadBalancer service type out of the box
ClusterIP is a virtual IP on the cluster's internal service network
Node IP is the IP of a machine that is part of the k8s cluster