Kubernetes HostPort with dnsmasq issue [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 1 year ago.
I'm trying to set up a dnsmasq pod via Kubernetes. The YAML file is below:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dnsmasq1
  labels:
    name: dnsmasq1
spec:
  serviceName: "dnsmasq1"
  replicas: 1
  selector:
    matchLabels:
      name: dnsmasq1
  volumeClaimTemplates:
  - metadata:
      name: dnsmasqconf-pv1
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: ceph-rbd-sc
  template:
    metadata:
      labels:
        name: dnsmasq1
    spec:
      hostNetwork: false
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                name: dnsmasq1
            topologyKey: "kubernetes.io/hostname"
      hostname: dnsmasq1
      containers:
      - name: dnsmasq1
        image: jpillora/dnsmasq
        ports:
        - containerPort: 8080
          hostPort: 8082
        imagePullPolicy: IfNotPresent
        env:
        - name: HTTP_USER
          value: "****"
        - name: HTTP_PASS
          value: "****"
        volumeMounts:
        - mountPath: /mnt/config
          name: dnsmasqconf-pv1
        resources:
          requests:
            memory: 250Mi
          limits:
            memory: 250Mi
      nodeSelector:
        etiket: worker
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 8.8.8.8
      volumes:
      - name: dnsmasqconf-pv1
        persistentVolumeClaim:
          claimName: dnsmasqconf-pv1
This works fine and I can reach the pod using the node's IP address. I decided to test the pod as a DNS server on a test machine, but the entries are not resolved. I think this is because I'm not using the pod's IP as the DNS server but the node's. How can I set this pod up to be used as a DNS server externally? I don't have a cloud provider, so I don't think I can use a LoadBalancer IP here.

I don't think you can use dnsmasq as an external DNS server, because dnsmasq is a lightweight DNS forwarder designed to provide DNS services to a small-scale network. It can serve the names of local machines which are not in the global DNS. dnsmasq makes it simple to specify the nameserver to use for a given domain, and it is well suited to managing name resolution inside a Kubernetes cluster.
In /etc/NetworkManager/NetworkManager.conf, add or uncomment the following line in the [main] section:
dns=dnsmasq
Create /etc/NetworkManager/dnsmasq.d/kube.conf with this line:
server=/cluster.local/10.90.0.10
This tells dnsmasq that queries for anything in the cluster.local domain should be forwarded to the DNS server at 10.90.0.10. This happens to be the default IP address of the kube-dns service in the kube-system namespace. If your cluster’s DNS service has a different IP address, you’ll need to specify it instead.
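If you are not sure which IP your cluster's DNS service uses, you can usually look it up with kubectl (this assumes the Service is named kube-dns, which is the common name even on CoreDNS-based clusters):
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'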
Now, after you run systemctl restart NetworkManager, your /etc/resolv.conf should look something like this:
#Generated by NetworkManager
search localdomain
nameserver 127.0.0.1
The important line is nameserver 127.0.0.1. This means DNS requests are sent to localhost, which is handled by dnsmasq.
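To verify the forwarding works, you can query the local dnsmasq directly for a cluster-internal name (the name below is just an example):
dig @127.0.0.1 kubernetes.default.svc.cluster.local +short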

Related

How to get kubernetes service external ip dynamically inside manifests file?

We are creating a Deployment whose command needs the IP of a pre-existing Service that points to a StatefulSet. Below is the manifest file for the Deployment. Currently, we enter the Service's external IP into this manifest manually, and we would like it to be populated automatically at runtime. Is there a way to achieve this dynamically, using environment variables or some other mechanism?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-api
  namespace: app-api
spec:
  selector:
    matchLabels:
      app: app-api
  replicas: 1
  template:
    metadata:
      labels:
        app: app-api
    spec:
      containers:
      - name: app-api
        image: asia-south2-docker.pkg.dev/rnd20/app-api/api:09
        command: ["java","-jar","-Dallow.only.apigateway.request=false","-Dserver.port=8084","-Ddedupe.searcher.url=http://10.10.0.6:80","-Dspring.cloud.zookeeper.connect-string=10.10.0.6:2181","-Dlogging$.file.path=/usr/src/app/logs/springboot","/usr/src/app/app_api/dedupe-engine-components.jar",">","/usr/src/app/out.log"]
        livenessProbe:
          httpGet:
            path: /health
            port: 8084
            httpHeaders:
            - name: Custom-Header
              value: ""
          initialDelaySeconds: 60
          periodSeconds: 60
        ports:
        - containerPort: 4016
        resources:
          limits:
            cpu: 1
            memory: "2Gi"
          requests:
            cpu: 1
            memory: "2Gi"
NOTE: The IP in question here is the internal load balancer IP, i.e. the external IP of the Service, and the Service is in a different namespace. Below is the manifest for it:
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: app
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: app
spec:
  selector:
    app: app
  type: LoadBalancer
  ports:
  - name: container
    port: 80
    targetPort: 8080
    protocol: TCP
You could use the following command instead:
command:
- /bin/bash
- -c
- |-
  set -exuo pipefail
  ip=$(dig +search +short servicename.namespacename)
  exec java -jar -Dallow.only.apigateway.request=false -Dserver.port=8084 -Ddedupe.searcher.url=http://$ip:80 -Dspring.cloud.zookeeper.connect-string=$ip:2181 -Dlogging$.file.path=/usr/src/app/logs/springboot /usr/src/app/app_api/dedupe-engine-components.jar > /usr/src/app/out.log
It first resolves the IP address using dig (if you don't have dig in your image, you need to substitute it with something else you have), then execs your original java command.
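If dig is not available in the image, one possible substitute on glibc-based images is getent, which uses the same resolver configuration (the service and namespace names are placeholders, as above):
ip=$(getent hosts servicename.namespacename | awk '{print $1}')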
As of today I'm not aware of any "native" kubernetes way to provide IP meta information directly to the pod.
If you are sure the Service already exists when the Pod starts, and you deploy into the same namespace, you can read its address from environment variables. This is documented here: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables.
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It adds {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores. It also supports variables (see makeLinkVariables) that are compatible with Docker Engine's "legacy container links" feature.
For example, the Service redis-master which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11, produces the following environment variables:
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
Note: those won't update after the container has started.
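As a sketch of how that could be used here, assuming the Service were deployed in the same namespace as the Deployment and named app (so the kubelet injects APP_SERVICE_HOST; note this is the cluster IP, not the load balancer IP), you could let the shell expand the variable at startup:
command:
- /bin/bash
- -c
- |-
  exec java -jar -Dserver.port=8084 \
    -Ddedupe.searcher.url=http://${APP_SERVICE_HOST}:80 \
    -Dspring.cloud.zookeeper.connect-string=${APP_SERVICE_HOST}:2181 \
    /usr/src/app/app_api/dedupe-engine-components.jar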

How to mount Kubernetes NFS share from external computer?

I'm able to create an NFS server using Docker and then mount it from an external computer on the network, my laptop for example.
Docker example:
docker run -itd --privileged \
  --restart unless-stopped \
  -e SHARED_DIRECTORY=/data \
  -v /home/user/nfstest/data:/data \
  -p 2049:2049 \
  itsthenetwork/nfs-server-alpine
I am able to mount the above server onto a local machine using
sudo mount -v 192.168.60.80:/ nfs
From my local laptop, I always get permission errors when trying to mount an NFS share that is hosted by Kubernetes. Can someone please explain why I get permission errors that prevent me from mounting the NFS share when the NFS server is created with Kubernetes?
As an example, I create my persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 80Gi
  accessModes:
  - ReadWriteMany
  storageClassName: local-nfs-storage
  local:
    path: "/home/rancher/nexus-data"
  persistentVolumeReclaimPolicy: Retain
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kubenode-02
Then create the persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: local-nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 80Gi
Last, I create the NFS server deployment. I use the external IP service (externalIPs) so that I can connect to the pod from anywhere outside of my Kubernetes network. I can Nmap the ports from my laptop and see that they are open.
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  selector:
    role: nfs
  ports:
  # Open the ports required by the NFS server
  # Port 2049 for TCP
  - name: tcp-2049
    targetPort: 2049
    port: 2049
    protocol: TCP
  # Port 111 for UDP
  - name: udp-111
    targetPort: 111
    port: 111
    protocol: UDP
  # Use host network with external IP service
  externalIPs:
  - 192.168.60.42
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs
  template:
    metadata:
      name: nfs-server
      labels:
        role: nfs
    spec:
      volumes:
      - name: nfs-pv-storage
        persistentVolumeClaim:
          claimName: nfs-pvc
      securityContext:
        fsGroup: 1000
      containers:
      - name: nfs-server-container
        image: itsthenetwork/nfs-server-alpine
        securityContext:
          privileged: true
        env:
        # Pass the path to share data
        - name: SHARED_DIRECTORY
          value: "/exports"
        volumeMounts:
        - mountPath: "/exports"
          name: nfs-pv-storage
I have tried playing with pod securityContext settings such as fsGroup, runAsUser, and runAsGroup. I have played with different permissions for the local folder I am mounting the NFS share onto. I have also tried different options for /etc/exports within the NFS pod. No matter what, I get permission denied when trying to mount the NFS share on a local computer.
I AM able to mount the Kubernetes NFS share from within a separate Kubernetes Pod.
Is there something I do not understand about how the Kubernetes network handles external traffic?

Kubernetes HA data across several workers [closed]

I have set up a Kubernetes cluster with 1 master, 3 worker nodes, and a load balancer, but at the moment I'm stuck because I'm struggling to find a solution for running a WordPress website whose data is replicated across all nodes. Everything is clear to me except one thing: how do I get all 3 workers (VPS servers in different countries) to hold the same data, so that pods can work and scale, and if one worker dies the second and third can continue providing all services? Is PVE the solution, or something else? Please point me in a direction to start searching.
Thanks.
You can create a PersistentVolumeClaim in ReadWriteMany mode, which provisions a PersistentVolume that holds your WordPress site data, and then create a Deployment with 3 replicas that mounts that PersistentVolume.
Example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-data
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  storageClassName: fast # update this to whatever persistent storage class is available on your cluster. See https://kubernetes.io/docs/concepts/storage/storage-classes/
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:latest
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        volumeMounts:
        - mountPath: "/var/www/html"
          name: wordpress-data
      volumes:
      - name: wordpress-data
        persistentVolumeClaim:
          claimName: wordpress-data # notice this is referencing the PersistentVolumeClaim we declared above
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort # or LoadBalancer
  selector:
    app: wordpress
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
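As a rough usage sketch (assuming the manifests above are saved to wordpress.yaml), you could apply them and check which nodes the replicas were scheduled on:
kubectl apply -f wordpress.yaml
kubectl get pods -l app=wordpress -o wide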

Trying to set up an ingress with tls and open to some IPs only on GKE

I'm having trouble setting up an Ingress that is open only to some specific IPs. I checked the docs and tried a lot of things, but an IP outside the source range can still access it. It's a Zabbix web interface on an Alpine image with nginx; I set up a Service on NodePort 80, then used an Ingress to set up a load balancer on GCP. It's all working and the web interface is fine, but how can I make it accessible only to the desired IPs?
My firewall rules are fine, and it's only accessible through the load balancer IP.
Also, I have a specific namespace for this deploy.
Cluster version 1.11.5-gke.5
EDIT: I'm using the standard GKE ingress controller (GLBC).
My template is configured as follows. Can someone help enlighten me on what is missing?
apiVersion: v1
kind: ReplicationController
metadata:
  name: zabbix-web
  namespace: zabbix-prod
  labels:
    app: zabbix
    tier: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zabbix-web
        app: zabbix
    spec:
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          defaultMode: 420
          secretName: cloudsql-instance-credentials
      containers:
      - command:
        - /cloud_sql_proxy
        - -instances=<conection>
        - -credential_file=/secrets/cloudsql/credentials.json
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        imagePullPolicy: IfNotPresent
        name: cloudsql-proxy
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 2
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /secrets/cloudsql
          name: credentials
          readOnly: true
      - name: zabbix-web
        image: zabbix/zabbix-web-nginx-mysql:alpine-3.2-latest
        ports:
        - containerPort: 80
        env:
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              key: <user>
              name: <user>
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              key: <pass>
              name: <pass>
        - name: DB_SERVER_HOST
          value: 127.0.0.1
        - name: MYSQL_DATABASE
          value: <db>
        - name: ZBX_SERVER_HOST
          value: <db>
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /index.php
            port: 80
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: "zabbix-web-service"
  namespace: "zabbix-prod"
  labels:
    app: zabbix
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: "zabbix-web"
  type: "NodePort"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zabbix-web-ingress
  namespace: zabbix-prod
  annotations:
    ingress.kubernetes.io/service.spec.externalTrafficPolicy: local
    ingress.kubernetes.io/whitelist-source-range: <xxx.xxx.xxx.xxx/32>
spec:
  tls:
  - secretName: <tls-cert>
  backend:
    serviceName: zabbix-web-service
    servicePort: 80
You can whitelist IPs by configuring Ingress and Cloud Armor:
Switch to project:
gcloud config set project $PROJECT
Create a policy:
gcloud compute security-policies create $POLICY_NAME --description "whitelisting"
Change default policy to deny:
gcloud compute security-policies rules update 2147483647 --action=deny-403 \
--security-policy $POLICY_NAME
At a lower priority value than the default rule (lower values are evaluated first), whitelist all IPs you want to allow:
gcloud compute security-policies rules create 2 \
--action allow \
--security-policy $POLICY_NAME \
--description "allow friends" \
--src-ip-ranges "93.184.17.0/24,151.101.1.69/32"
You can specify a maximum of ten IP ranges per rule.
Note that you need valid CIDR ranges; an online IP-range-to-CIDR converter can help.
View the policy as follows:
gcloud compute security-policies describe $POLICY_NAME
To delete a single rule:
gcloud compute security-policies rules delete $PRIORITY --security-policy $POLICY_NAME
or the full policy:
gcloud compute security-policies delete $POLICY_NAME
Create a BackendConfig for the policy:
# File backendconfig.yaml:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  namespace: <namespace>
  name: <name>
spec:
  securityPolicy:
    name: $POLICY_NAME
$ kubectl apply -f backendconfig.yaml
backendconfig.cloud.google.com/backendconfig-name created
Add the BackendConfig to the Service:
metadata:
  namespace: <namespace>
  name: <service-name>
  labels:
    app: my-app
  annotations:
    cloud.google.com/backend-config: '{"ports": {"80":"backendconfig-name"}}'
spec:
  type: NodePort
  selector:
    app: hello-app
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
Use the right selectors and point the receiving port of the Service to the BackendConfig created earlier.
Now Cloud Armor will apply the policy to the GKE service.
It is visible at https://console.cloud.google.com/net-security/securitypolicies (after selecting $PROJECT).
AFAIK, you can't restrict IP addresses through GLBC or on the GCP L7 Load Balancer itself. Note that GLBC is also a work in progress as of this writing.
ingress.kubernetes.io/whitelist-source-range works great, but only when you are using something like the nginx ingress controller, because nginx itself can restrict IP addresses.
The general way to restrict/whitelist IP addresses is using VPC firewall rules (which it seems you are doing already). Essentially you restrict/whitelist the IP addresses that may reach the network your K8s nodes are running on.
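As a rough sketch (the rule name, network, target tag, and CIDR below are placeholders), such a VPC firewall rule could be created like this:
gcloud compute firewall-rules create allow-office-http \
  --network default \
  --allow tcp:80,tcp:443 \
  --source-ranges 203.0.113.0/24 \
  --target-tags gke-my-cluster-node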
One of the best options to accomplish your goal is using firewall rules, since you can't restrict IP addresses through the global LB or on the GCP L7 LB itself. However, if you are using an Ingress on your Kubernetes cluster, another option is to restrict access to your application based on dedicated IP addresses.
One possible use case would be that you have a development setup and don't want to make all the fancy new features available to everyone, especially competitors. In such cases, IP whitelisting to restrict access can be used.
This can be done by specifying the allowed client IP source ranges through the ingress.kubernetes.io/whitelist-source-range annotation.
The value is a comma-separated list of CIDR blocks.
For example:
10.0.0.0/24, 1.1.1.1/32.
Please get more information here.
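For instance, with the nginx ingress controller the annotation could look like this (a minimal sketch reusing the names from the question; newer nginx ingress versions use the nginx.ingress.kubernetes.io/whitelist-source-range form):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zabbix-web-ingress
  namespace: zabbix-prod
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,1.1.1.1/32"
spec:
  backend:
    serviceName: zabbix-web-service
    servicePort: 80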
For anyone who stumbles on this question via Google like I did, there is now a solution. You can implement this via a BackendConfig from the cloud.google.com Kubernetes API in conjunction with a GCE CloudArmor policy.
https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-armor-backendconfig

Kubernetes multi-container pod [closed]

Hello, I'm trying to have a Pod with 2 containers: a C++ app and a MySQL database. I used to have MySQL deployed in its own service, but I got latency issues, so I want to try a multi-container pod.
But I've been struggling to connect my app to MySQL through localhost. It says:
Can't connect to local MySQL server through socket
'/var/run/mysqld/mysqld.sock'
Here is my kubernetes.yaml. Please, I need help :(
# Database setup
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storage-camera
  labels:
    group: camera
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: camera-pv
  labels:
    group: camera
spec:
  storageClassName: db-camera
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: storage-camera
---
# Service setup
apiVersion: v1
kind: Service
metadata:
  name: camera-service
  labels:
    group: camera
spec:
  ports:
  - port: 50052
    targetPort: 50052
  selector:
    group: camera
    tier: service
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: camera-service
  labels:
    group: camera
    tier: service
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 60
  template:
    metadata:
      labels:
        group: camera
        tier: service
    spec:
      containers:
      - image: asia.gcr.io/test/db-camera:latest
        name: db-camera
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: camera-persistent-storage
          mountPath: /var/lib/mysql
      - name: camera-service
        image: asia.gcr.io/test/camera-service:latest
        env:
        - name: DB_HOST
          value: "localhost"
        - name: DB_PORT
          value: "3306"
        - name: DB_NAME
          value: "camera"
        - name: DB_ROOT_PASS
          value: "password"
        ports:
        - name: http-cam
          containerPort: 50052
      volumes:
      - name: camera-persistent-storage
        persistentVolumeClaim:
          claimName: camera-pv
      restartPolicy: Always
Your MySQL client is configured to use a socket and not talk over the network stack, cf. the MySQL documentation:
On Unix, MySQL programs treat the host name localhost specially, in a way that is likely different from what you expect compared to other network-based programs. For connections to localhost, MySQL programs attempt to connect to the local server by using a Unix socket file. This occurs even if a --port or -P option is given to specify a port number. To ensure that the client makes a TCP/IP connection to the local server, use --host or -h to specify a host name value of 127.0.0.1, or the IP address or name of the local server. You can also specify the connection protocol explicitly, even for localhost, by using the --protocol=TCP option.
If you still want camera-service to talk over the file-system socket, you need to mount that file system into the camera-service container as well; currently you only mount it into db-camera.
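A minimal sketch of the TCP approach instead, assuming camera-service passes DB_HOST straight through to the MySQL client: set DB_HOST to 127.0.0.1 so the client makes a TCP/IP connection rather than looking for the socket file:
- name: camera-service
  image: asia.gcr.io/test/camera-service:latest
  env:
  - name: DB_HOST
    value: "127.0.0.1"   # forces a TCP connection to the MySQL container in the same pod
  - name: DB_PORT
    value: "3306"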