K3S OpenVPN install (Raspberry Pi) - kubernetes

I just started using K3s, so I'm an absolute noob. Now I'm wondering how I can create the .yaml files for pods on my own, or use a Docker image. (I couldn't find detailed information about that.)
I want an OpenVPN (or any other suggested VPN) server running so I can access my home devices from anywhere. It would save a lot of headache and time if someone could be so kind as to help me a little.
Before, I had an OpenVPN server running when I only had one Raspberry Pi, but it looks like everything from the install to the config changes with my K3s Kubernetes cluster.
How I made my K3s cluster with Rancher: https://youtu.be/X9fSMGkjtug
I tried for three hours to figure it out and found no real step-by-step guide for beginners...
I already have a Cloudflare DDNS script running to update my domain with the correct IP.
Thank you very much!

Here is an example of an OpenVPN client Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openvpn-client
spec:
  selector:
    matchLabels:
      app: openvpn-client
      vpn: vpn-id
  replicas: 1
  template:
    metadata:
      labels:
        app: openvpn-client
        vpn: vpn-id
    spec:
      volumes:
      - name: vpn-config
        secret:
          secretName: vpn-config
          items:
          - key: client.ovpn
            path: client.ovpn
      - name: vpn-auth
        secret:
          secretName: vpn-auth
          items:
          - key: auth.txt
            path: auth.txt
      - name: route-script
        configMap:
          name: route-script
          items:
          - key: route-override.sh
            path: route-override.sh
      - name: tmp
        emptyDir: {}
      initContainers:
      - name: vpn-route-init
        image: busybox
        command: ['/bin/sh', '-c', 'cp /vpn/route-override.sh /tmp/route/route-override.sh; chown root:root /tmp/route/route-override.sh; chmod o+x /tmp/route/route-override.sh;']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/route
        - name: route-script
          mountPath: /vpn/route-override.sh
          subPath: route-override.sh
      containers:
      - name: vpn
        image: dperson/openvpn-client
        command: ["/bin/sh", "-c"]
        args: ["openvpn --config 'vpn/client.ovpn' --auth-user-pass 'vpn/auth.txt' --script-security 3 --route-up /tmp/route/route-override.sh;"]
        stdin: true
        tty: true
        securityContext:
          privileged: true
          capabilities:
            add:
            - NET_ADMIN
        env:
        - name: TZ
          value: "Turkey"
        volumeMounts:
        - name: vpn-config
          mountPath: /vpn/client.ovpn
          subPath: client.ovpn
        - name: vpn-auth
          mountPath: /vpn/auth.txt
          subPath: auth.txt
        - name: tmp
          mountPath: /tmp/route
      - name: app1
        image: python:3.6-stretch
        command:
        - sleep
        - "100000"
        tty: true
      dnsConfig:
        nameservers:
        - 8.8.8.8
        - 8.8.4.4
You can read more about this deployment here: https://bugraoz93.medium.com/openvpn-client-in-a-pod-kubernetes-d3345c66b014
You can also use a Helm chart for the same, which makes it easy to set things up on Kubernetes via pre-made YAML templates: https://itnext.io/use-helm-to-deploy-openvpn-in-kubernetes-to-access-pods-and-services-217dec344f13
Docker OpenVPN client image: https://github.com/dperson/openvpn-client
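The Deployment above expects the vpn-config and vpn-auth Secrets and the route-script ConfigMap to already exist in the cluster. A minimal sketch of creating them with kubectl, assuming you have your exported OpenVPN profile, a credentials file, and the route-override script locally (the local file names are placeholders):

# Secret with the .ovpn profile exported from your VPN server/provider
kubectl create secret generic vpn-config --from-file=client.ovpn=./client.ovpn
# Secret with the username on the first line and the password on the second
kubectl create secret generic vpn-auth --from-file=auth.txt=./auth.txt
# ConfigMap with the script passed to OpenVPN's --route-up hook
kubectl create configmap route-script --from-file=route-override.sh=./route-override.sh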

Related

Mounting volume results in empty folder in Kubernetes (minikube)?

I have created a deployment and wanted to mount a host path into the container, but when I check the container I see only an empty folder.
Why is this happening? What could be the cause?
EDIT: I am using Windows OS.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservicepod6
  labels:
    app: servicepod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicepod
  template:
    metadata:
      labels:
        app: servicepod
    spec:
      containers:
      - name: php
        image: php:7.2-apache
        command: ["/bin/sh", "-c"]
        args: ["service apache2 start; sleep infinity"]
        ports:
        - name: serviceport
          containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/
          name: hostvolume
      volumes:
      - name: hostvolume
        hostPath:
          path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/objectmanagement/deployments/src/*
EDIT FOR THE ANSWER:
I start minikube with minikube start --mount-string="$HOME/test/src/code/file:/data"
Then I changed the deployment file as below (showing only the volume part):
spec:
  volumes:
  - name: hostvolume
    hostPath:
      path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/deployments/src
  containers:
  - name: php
    image: php:7.2-apache
    command: ["/bin/sh", "-c"]
    args: ["service apache2 start; sleep infinity"]
    ports:
    - name: serviceport
      containerPort: 80
    volumeMounts:
    - name: hostvolume
      mountPath: /test/src/code/file
When I logged into the pod and went to the directory (/test/src/code/file), I found it empty.
What am I missing?
After a detailed search and some trial and error, I found the way.
For minikube only:
First we need to mount the host folder into the target directory:
minikube mount src/:/var/www/html
Then we need to define both hostPath and mountPath as /var/www/html, because we have now mounted the folder onto the html folder.
volumes:
- name: hostvolume
  hostPath:
    path: /var/www/html
containers:
- name: php
  image: php:7.2-apache
  command: ["/bin/sh", "-c"]
  args: ["service apache2 start; sleep infinity"]
  workingDir: /var/www/html
  ports:
  - name: serviceport
    containerPort: 80
  volumeMounts:
  - name: hostvolume
    mountPath: /var/www/html
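To confirm the mount actually worked, you can exec into the pod and list the directory; the pod name below is a placeholder for whatever kubectl get pods shows:

kubectl get pods -l app=servicepod
kubectl exec -it <php-pod-name> -- ls -la /var/www/html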

Kubernetes deployment file: inject environment variables with a pre-run script

I have an Elixir app connecting to Postgres using the Cloud SQL proxy.
Here is my deployment.yaml; I deploy it on Kubernetes and it works well.
The Postgres connection username and password are taken in the image from the environment variables in the YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-app
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      containers:
      - name: my-app
        image: my-image:1.0.1
        volumeMounts:
        - name: secrets-volume
          mountPath: /secrets
          readOnly: true
        - name: config-volume
          mountPath: /beamconfig
        ports:
        - containerPort: 80
        args:
        - foreground
        env:
        - name: POSTGRES_HOSTNAME
          value: localhost
        - name: POSTGRES_USERNAME
          value: postgres
        - name: POSTGRES_PASSWORD
          value: "123456"
      # proxy_container
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=my-project:region:my-postgres-instance=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: cloudsql
          mountPath: /cloudsql
      # volumes
      volumes:
      - name: secrets-volume
        secret:
          secretName: gcloud-json
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - name: cloudsql
        emptyDir: {}
Now, due to security requirements, I'd like to store the sensitive environment variables encrypted and have a script decrypt them.
My YAML file would look like this:
env:
- name: POSTGRES_HOSTNAME
  value: localhost
- name: ENCRYPTED_POSTGRES_USERNAME
  value: hgkdhrkhgrk
- name: ENCRYPTED_POSTGRES_PASSWORD
  value: fkjeshfke
Then I have a script that runs over all environment variables with the ENCRYPTED_ prefix, decrypts them, and sets the decrypted value on the environment variable without the ENCRYPTED_ prefix.
Is there a way to do that?
The environment variables should be injected before the image starts running.
Another requirement is that the pod running the image decrypts the variables, since it is the only one with permission to do so (working with Workload Identity).
Something like:
- command:
  - sh
  - /decrypt_and_inject_environments.sh
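The mechanism described above is usually implemented as an entrypoint wrapper. A minimal sketch of such a /decrypt_and_inject_environments.sh, where decrypt-tool is a placeholder for whatever decryption command the pod's Workload Identity is actually allowed to call:

#!/bin/sh
# Decrypt every ENCRYPTED_* variable, export it without the prefix,
# then hand off to the container's real command.
for name in $(env | grep '^ENCRYPTED_' | cut -d= -f1); do
  encrypted_value=$(printenv "$name")
  decrypted_value=$(decrypt-tool "$encrypted_value")   # placeholder decryption call
  export "${name#ENCRYPTED_}=$decrypted_value"
done
exec "$@"

The container's command would then be something like ["sh", "/decrypt_and_inject_environments.sh", "<original entrypoint>"], so the decrypted variables are in place before the app starts.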

RabbitMQ configuration file is not copying in the Kubernetes deployment

I'm trying to deploy RabbitMQ on a Kubernetes cluster and am using an init container to copy a file from a ConfigMap. However, the file is not copied once the pod is in the running state.
Initially, I tried without an init container, but I was getting an error like "touch: cannot touch '/etc/rabbitmq/rabbitmq.conf': Read-only file system".
kind: Deployment
metadata:
  name: broker01
  namespace: s2sdocker
  labels:
    app: broker01
spec:
  replicas: 1
  selector:
    matchLabels:
      app: broker01
  template:
    metadata:
      name: broker01
      labels:
        app: broker01
    spec:
      initContainers:
      - name: configmap-copy
        image: busybox
        command: ['/bin/sh', '-c', 'cp /etc/rabbitmq/files/definitions.json /etc/rabbitmq/']
        volumeMounts:
        - name: broker01-definitions
          mountPath: /etc/rabbitmq/files
        - name: pre-install
          mountPath: /etc/rabbitmq
      containers:
      - name: broker01
        image: rabbitmq:3.7.17-management
        envFrom:
        - configMapRef:
            name: broker01-rabbitmqenv-cm
        ports:
        volumeMounts:
        - name: broker01-data
          mountPath: /var/lib/rabbitmq
        - name: broker01-log
          mountPath: /var/log/rabbitmq/log
        - name: broker01-definitions
          mountPath: /etc/rabbitmq/files
      volumes:
      - name: pre-install
        emptyDir: {}
      - name: broker01-data
        persistentVolumeClaim:
          claimName: broker01-data-pvc
      - name: broker01-log
        persistentVolumeClaim:
          claimName: broker01-log-pvc
      - name: broker01-definitions
        configMap:
          name: broker01-definitions-cm
The file "definitions.json" should be copied to /etc/reabbitmq folder. I have followed "Kubernetes deployment read-only filesystem error". But issue did not fix.
After making changes in the "containers volumeMount section," I was able to copy the file on to /etc/rabbitmq folder.
Please find a modified code here.
- name: broker01
  image: rabbitmq:3.7.17-management
  envFrom:
  - configMapRef:
      name: broker01-rabbitmqenv-cm
  ports:
  volumeMounts:
  - name: broker01-data
    mountPath: /var/lib/rabbitmq
  - name: broker01-log
    mountPath: /var/log/rabbitmq/log
  - name: pre-install
    mountPath: /etc/rabbitmq
Can you check the permissions on /etc/rabbitmq/?
Does the user have permission to copy the file to the above location?
- name: pre-install
  mountPath: /etc/rabbitmq
I see that /etc/rabbitmq is a mount point. It is a read-only filesystem, and hence the file copy failed.
Can you update the permissions on the 'pre-install' mount point?
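To verify whether the init container's copy actually landed, you can list the target directory inside the running pod (the pod name is a placeholder):

kubectl exec -it <broker01-pod-name> -- ls -l /etc/rabbitmq/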

How to allow a Kubernetes Job access to a file on host

I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens with even the simplest utility, so I have included a stripped-down example of my YAML config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log: ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like when the volume is mounted, the existing data can't be accessed.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # run through a shell so the redirection actually writes /data/config
    command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    hostPath:
      # hostPath requires an actual node path; /tmp matches the question above
      path: /tmp
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519
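If the file still appears to be missing, it can help to check which node the Job's pod was scheduled on, since hostPath refers to that node's filesystem, and to read the pod's logs; kio is the Job name from the manifest above:

kubectl get pods -l job-name=kio -o wide
kubectl logs job/kio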

Is there a way to get UID in pod spec

What I want to do is provide the pod with a unified log store, currently persisted to a hostPath, but I also want this path to include the pod UID so I can easily find it after the pod is destroyed.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-logging-support
spec:
  containers:
  - image: python:2.7
    name: web-server
    command:
    - "sh"
    - "-c"
    - "python -m SimpleHTTPServer > /logs/http.log 2>&1"
    volumeMounts:
    - mountPath: /logs
      name: log-dir
  volumes:
  - name: log-dir
    hostPath:
      path: /var/log/apps/{metadata.uid}
      type: DirectoryOrCreate
metadata.uid is what I want to fill in, but I do not know how to do it.
For logging, it's better to use another strategy.
I suggest you look at this link.
Your logs are best managed if they are streamed to stdout and grabbed by a logging agent.
Don't persist your logs on the filesystem; gather them with an agent and put them together for further analysis.
Fluentd is very popular and deserves to be known.
After searching the Kubernetes docs, I finally found a solution to my specific problem. This feature is exactly what I wanted.
So I can create the pod with:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-logging-support
spec:
  containers:
  - image: python:2.7
    name: web-server
    command:
    - "sh"
    - "-c"
    - "python -m SimpleHTTPServer > /logs/http.log 2>&1"
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.uid
    volumeMounts:
    - mountPath: /logs
      name: log-dir
      # subPathExpr (not plain subPath) is what expands $(POD_UID)
      subPathExpr: $(POD_UID)
  volumes:
  - name: log-dir
    hostPath:
      path: /var/log/apps/
      type: DirectoryOrCreate
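With this in place, the container's /logs is bound to /var/log/apps/<pod-uid>/ on the node, so the directory can still be located after the pod is gone. A small usage sketch (the pod name matches the manifest above):

# Print the UID that the volume subPathExpr expands to
kubectl get pod pod-with-logging-support -o jsonpath='{.metadata.uid}'
# On the node, the log then lives under /var/log/apps/<that-uid>/http.log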