How to connect to a Samba server from a container running in Kubernetes?

I created a Kubernetes cluster on Amazon, and I have a pod (container) and a volume running in this cluster. Now I want to run a Samba server against that volume and connect my pod to the Samba server. Is there a tutorial on how to solve this? By the way, I am working on Windows 10. Here is my deployment code with the volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
  labels:
    app: application
spec:
  replicas: 2
  selector:
    matchLabels:
      project: k8s
  template:
    metadata:
      labels:
        project: k8s
    spec:
      containers:
        - name: k8s-web
          image: mine/flask:latest
          volumeMounts:
            - mountPath: /test-ebs
              name: my-volume
          ports:
            - containerPort: 8080
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: pv0004
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0004
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: [my-Id-volume]

You can check out the Samba container Docker image at: https://github.com/dperson/samba
---
kind: Service
apiVersion: v1
metadata:
  name: smb-server
  labels:
    app: smb-server
spec:
  type: LoadBalancer
  selector:
    app: smb-server
  ports:
    - port: 445
      name: smb-server
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: smb-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-server
  template:
    metadata:
      name: smb-server
      labels:
        app: smb-server
    spec:
      containers:
        - name: smb-server
          image: dperson/samba
          env:
            - name: PERMISSIONS
              value: "0777"
          args: ["-u", "username;test","-s","share;/smbshare/;yes;no;no;all;none","-p"]
          volumeMounts:
            - mountPath: /smbshare
              name: data-volume
          ports:
            - containerPort: 445
      volumes:
        - name: data-volume
          hostPath:
            path: /smbshare
            type: DirectoryOrCreate
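
The flask pod can reach the share through the smb-server Service above. As a quick connectivity check (a sketch rather than a full mount setup; the share name share and the credentials username/test come from the args passed to dperson/samba above), you can run a throwaway pod with smbclient:

# Run a one-off Alpine pod, install smbclient and list the share exposed by the smb-server Service
kubectl run smbtest --rm -it --restart=Never --image=alpine -- \
  sh -c 'apk add --no-cache samba-client && smbclient //smb-server/share -U "username%test" -c ls'

To mount the share as a volume inside your application pods, the usual route is the SMB CSI driver (https://github.com/kubernetes-csi/csi-driver-smb), which lets a PersistentVolume point at //smb-server/share with the credentials stored in a Secret.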

Related

Traefik returning 404 for local deployment

I'm following this tutorial and making changes as necessary to set up a self-hosted instance of Ghost blog. I'm new to Kubernetes, and am self-hosting this locally on some Raspberry Pis. I applied all deployments, services, MySQL, secrets, PVCs etc., and added ghost to /etc/hosts. When I visit ghost/ in a browser, I get a 404 error, even though I'm targeting the service. Here are my YAMLs:
MySQL PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    type: longhorn
    app: example
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Ghost PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-pv-claim
  labels:
    type: longhorn
    app: ghost
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
MySQL Password Secret
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  password: <base_64_encoded_pwd>
Ghost SQL deployment
apiVersion: v1
kind: Service
metadata:
  name: ghost-mysql
  labels:
    app: ghost
spec:
  ports:
    - port: 3306
  selector:
    app: ghost
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: ghost-mysql
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: mysql
    spec:
      containers:
        - image: arm64v8/mysql:latest
          imagePullPolicy: Always
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            - name: MYSQL_USER
              value: ghost
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-vol
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-vol
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Ghost Blog Deployment
apiVersion: v1
kind: Service
metadata:
  name: ghost-svc
  labels:
    app: ghost
    tier: frontend
spec:
  selector:
    app: ghost
    tier: frontend
  ports:
    - protocol: TCP
      port: 2368
      targetPort: 2368
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      # securityContext:
      #   runAsUser: 1000
      #   runAsGroup: 50
      containers:
        - name: blog
          image: ghost
          imagePullPolicy: Always
          ports:
            - containerPort: 2368
          env:
            # - name: url
            #   value: https://www.myblog.com
            - name: database__client
              value: mysql
            - name: database__connection__host
              value: ghost-mysql
            - name: database__connection__user
              value: root
            - name: database__connection__password
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            - name: database__connection__database
              value: ghost
          volumeMounts:
            - mountPath: /var/lib/ghost/content
              name: ghost-vol
      volumes:
        - name: ghost-vol
          persistentVolumeClaim:
            claimName: ghost-pv-claim
Traefik Ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ghost-ingress
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`ghost`)
      kind: Rule
      services:
        - name: ghost-svc
          port: 80
I also added ghost to /etc/hosts (on the Mac).
Not sure what I'm doing wrong, but I imagine it's certs/ingress related. Any ideas?
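
One detail worth checking (an observation based on the YAML above, not a confirmed fix): ghost-svc only exposes port 2368, while the IngressRoute points at port 80, so Traefik has no matching service port to route to. A sketch with the port aligned to the service:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ghost-ingress
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`ghost`)
      kind: Rule
      services:
        - name: ghost-svc
          port: 2368   # must match a port exposed by ghost-svc, not the port Traefik listens on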

Kubernetes error while creating mount source path: file exists

After re-deploying my Kubernetes StatefulSet, the pod is now failing due to the following error while creating the mount source path:
'/var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount': mkdir /var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount: file exists
I'm assuming this is because the persistent volume/PVC already exists and so it cannot be created, but I thought that was the point of the StatefulSet: the data persists and you can just mount it again. How should I fix this?
Thanks.
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
  selector:
    app: foo-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo-statefulset
  namespace: foo
spec:
  selector:
    matchLabels:
      app: foo-app
  serviceName: foo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: foo-app
    spec:
      serviceAccountName: foo-service-account
      containers:
        - name: foo
          image: blahblah
          imagePullPolicy: Always
          volumeMounts:
            - name: foo-data
              mountPath: "foo"
            - name: stuff
              mountPath: "here"
            - name: config
              mountPath: "somedata"
      volumes:
        - name: stuff
          persistentVolumeClaim:
            claimName: stuff-pvc
        - name: config
          configMap:
            name: myconfig
  volumeClaimTemplates:
    - metadata:
        name: foo-data
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: "foo-storage"
        resources:
          requests:
            storage: 2Gi
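
The manifest alone does not show what is colliding on the kubelet's mount path; a few read-only checks usually narrow it down (a sketch, using the foo namespace and the pod name foo-statefulset-0 that the StatefulSet above would produce):

# Check that the PVC created by the volumeClaimTemplate is still Bound
kubectl get pvc -n foo

# The pod's events usually name the CSI driver/volume involved in the failed mount
kubectl describe pod foo-statefulset-0 -n foo

# Deleting only the pod (not the PVC) lets the StatefulSet recreate it and remount the existing volume;
# the data on the PersistentVolume is preserved
kubectl delete pod foo-statefulset-0 -n foo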

How to deploy phpMyAdmin in Azure Kubernetes?

I have deployed MySQL using this YAML file.
apiVersion: v1
kind: Service
metadata:
  name: mysqlsb
  labels:
    app: dataenv
spec:
  ports:
    - port: 3306
  selector:
    app: dataenv
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: dataenv
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dataenv-mysql
  labels:
    app: dataenv
spec:
  selector:
    matchLabels:
      app: dataenv
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dataenv
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The instance is running and I can create tables via command line.
How do I deploy phpMyAdmin to manage this pod?
You can use port forwarding:
kubectl port-forward service/<<svcname>> 3306:3306
based on your service name:
kubectl port-forward service/mysqlsb 3306:3306
Then you can access it from your desktop (via phpMyAdmin or any other GUI) using localhost as the server name and 3306 as the port.
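
If you would rather run phpMyAdmin inside the cluster instead of a local GUI, a minimal sketch could look like this (the phpmyadmin/phpmyadmin image and its PMA_HOST/PMA_PORT environment variables are taken from that image's documentation; mysqlsb is the Service name from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin/phpmyadmin
          ports:
            - containerPort: 80
          env:
            - name: PMA_HOST   # points phpMyAdmin at the MySQL Service defined above
              value: mysqlsb
            - name: PMA_PORT
              value: "3306"
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin
spec:
  type: LoadBalancer   # on AKS this provisions a public IP; use ClusterIP plus port-forward to keep it private
  selector:
    app: phpmyadmin
  ports:
    - port: 80
      targetPort: 80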

Kubernetes: how do I expose pods to things outside of the cluster machine?

I read the following Kubernetes docs, which resulted in the following YAMLs to run PostgreSQL and pgAdmin in a cluster:
--- pgadmin-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email#example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
--- pgadmin-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - port: 30000
      targetPort: 80
  selector:
    app: pgadmin-pod
--- postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
--- postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - port: 30001
      targetPort: 5432
  selector:
    app: postgres-pod
--- postgres-storage.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
I then run kubectl create -f ./, which results in the following:
[screenshot: the resulting Kubernetes pods and services]
Then I try to access pgAdmin on 10.43.225.170:30000 from outside of the cluster, but I get "10.43.225.170 took too long to respond." no matter what I try.
So how do I expose pgAdmin and Postgres to the outside world, and is there a way to give them static IPs so I don't have to update the IPs in connection strings each time I re-deploy on Kubernetes, or do I have to use a StatefulSet for this?
Problems here:
You are trying to reach the node's internal IP (10.43.225.170) instead of the external one.
The NodePort service is configured incorrectly, and in addition you are calling the wrong port.
You haven't specified which platform you use. I'm using GKE, so in my case it's easier because external IPs are automatically assigned when the cluster nodes are created. But I had to manually create an ingress firewall rule to allow access from outside to the nodes on the required ports (30000, 30001).
In any case, to be able to use a NodePort you need an external IP address assigned to at least one node in the cluster and a firewall rule that allows ingress traffic to that port.
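On GKE such a rule can be created with gcloud, for example (a sketch; the rule name and the default network are assumptions, adjust to your VPC):

# Allow external traffic to the two NodePorts used by the services below
gcloud compute firewall-rules create allow-nodeports \
    --network default \
    --allow tcp:30000,tcp:30001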
Next, you are trying to call <NodeIP>:spec.ports[*].port.
As per Type NodePort documentation:
Service is visible as <NodeIP>:spec.ports[*].nodePort
You need to explicitly specify nodePort.
I have changed your deployment a bit; after deploying it and opening the corresponding ports in the firewall, I can access pgAdmin.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email#example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - nodePort: 30000
      targetPort: 80
      port: 80
  selector:
    app: pgadmin-pod
--- postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - nodePort: 30001
      targetPort: 5432
      port: 5432
  selector:
    app: postgres-pod
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Check:
kubectl apply -f pg_my.yaml
deployment.apps/pgadmin-deployment created
service/pgadmin-service created
service/postgres-service created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
# In my case I take the node's external IP from the output of `kubectl get nodes -o wide`:
NAME                                       STATUS   ROLES    AGE   VERSION            INTERNAL-IP   EXTERNAL-IP
gke-cluster-1-default-pool-*******-*****   Ready    <none>   20d   v1.18.16-gke.502   10.186.0.7    *.*.*.*
curl *.*.*.*:30000
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: /login?next=%2F.
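
If you are unsure which nodePort was actually assigned, the Service itself shows the mapping (a small check, using the service name from the YAML above):

# Look at the PORT(S) column, e.g. 80:30000/TCP, to see the node port in use
kubectl get svc pgadmin-service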

Why am I getting "1 pod has unbound immediate PersistentVolumeClaims" when working with 2 deployments

I am trying to do a fairly simple red/green setup using Minikube, where I want one pod running a red container, one pod running a green container, and a service to hit each. To do this, my k8s file looks like...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Users/jackiegleason/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: main-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: express-app
  name: express-service-red
spec:
  type: LoadBalancer
  ports:
    - port: 3000
  selector:
    app: express-app-red
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: express-app
  name: express-service-green
spec:
  type: LoadBalancer
  ports:
    - port: 3000
  selector:
    app: express-app-green
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app-deployment-red
  labels:
    app: express-app-red
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: express-app-red
        tier: app
    spec:
      volumes:
        - name: express-app-storage
          persistentVolumeClaim:
            claimName: main-volume-claim
      containers:
        - name: express-app-container
          image: partyk1d24/hello-express:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/external"
              name: express-app-storage
          ports:
            - containerPort: 3000
              protocol: TCP
              name: express-endpnt
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app-deployment-green
  labels:
    app: express-app-green
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: express-app-green
        tier: app
    spec:
      volumes:
        - name: express-app-storage
          persistentVolumeClaim:
            claimName: main-volume-claim
      containers:
        - name: express-app-container
          image: partyk1d24/hello-express:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: DEPLOY_TYPE
              value: "Green"
          volumeMounts:
            - mountPath: "/var/external"
              name: express-app-storage
          ports:
            - containerPort: 3000
              protocol: TCP
              name: express-endpnt
But when I run I get...
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
What am I missing? It worked fine without the second deployment.
Thank you!
You cannot use the same PV with accessMode: ReadWriteOnce multiple times.
To do this you would need to use a volume that supports the ReadWriteMany access mode.
Check out the Kubernetes documentation for the list of volume plugins that support this feature.
Additionally, as David already mentioned, it's much better to log to STDOUT.
You can also check out solutions like Fluent Bit/Fluentd or the ELK stack.
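
For illustration, a ReadWriteMany setup backed by NFS could look roughly like this (a sketch only: the NFS server address and export path are placeholders, and the PV/PVC names are reused from the question above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-volume
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany          # NFS is one of the volume types that supports this mode
  nfs:
    server: 192.168.1.100    # placeholder NFS server address
    path: /exports/data      # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: main-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi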