How to deploy phpMyAdmin in Azure Kubernetes? - kubernetes

I have deployed MySQL using this YAML file.
apiVersion: v1
kind: Service
metadata:
  name: mysqlsb
  labels:
    app: dataenv
spec:
  ports:
    - port: 3306
  selector:
    app: dataenv
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: dataenv
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dataenv-mysql
  labels:
    app: dataenv
spec:
  selector:
    matchLabels:
      app: dataenv
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dataenv
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The instance is running and I can create tables via command line.
How do I deploy phpMyAdmin to manage this pod?

You can use port forwarding:
kubectl port-forward service/<svc-name> 3306:3306
Based on your service name:
kubectl port-forward service/mysqlsb 3306:3306
Then you can access it from your desktop (via phpMyAdmin or any other GUI) using localhost as the server name and 3306 as the port.
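If you would rather run phpMyAdmin inside the cluster instead of on your desktop, a minimal sketch could look like the manifest below. It uses the phpmyadmin/phpmyadmin image, which reads the target database host and port from the PMA_HOST and PMA_PORT environment variables; the Deployment and Service names here are made up for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin/phpmyadmin
          ports:
            - containerPort: 80
          env:
            # Point phpMyAdmin at the headless MySQL service from the question
            - name: PMA_HOST
              value: mysqlsb
            - name: PMA_PORT
              value: "3306"
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin
spec:
  selector:
    app: phpmyadmin
  ports:
    - port: 80
      targetPort: 80
You could then reach the UI with kubectl port-forward service/phpmyadmin 8080:80, or expose it via a LoadBalancer or Ingress.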

Related

Traefik returning 404 for local deployment

I'm following this tutorial and making changes as necessary to set up a self-hosted instance of the Ghost blog. I'm new to Kubernetes and am self-hosting this locally on some Raspberry Pis. I applied all the deployments, services, MySQL, secrets, PVCs etc., and added ghost to /etc/hosts. When I visit ghost/ in the browser, I get a 404 error, even though I'm targeting the service. Here are my YAMLs:
MySQL PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    type: longhorn
    app: example
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Ghost PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-pv-claim
  labels:
    type: longhorn
    app: ghost
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
MySQL Password Secret
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  password: <base_64_encoded_pwd>
Ghost SQL deployment
apiVersion: v1
kind: Service
metadata:
  name: ghost-mysql
  labels:
    app: ghost
spec:
  ports:
    - port: 3306
  selector:
    app: ghost
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: ghost-mysql
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: mysql
    spec:
      containers:
        - image: arm64v8/mysql:latest
          imagePullPolicy: Always
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            - name: MYSQL_USER
              value: ghost
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-vol
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-vol
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Ghost Blog Deployment
apiVersion: v1
kind: Service
metadata:
  name: ghost-svc
  labels:
    app: ghost
    tier: frontend
spec:
  selector:
    app: ghost
    tier: frontend
  ports:
    - protocol: TCP
      port: 2368
      targetPort: 2368
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      # securityContext:
      #   runAsUser: 1000
      #   runAsGroup: 50
      containers:
        - name: blog
          image: ghost
          imagePullPolicy: Always
          ports:
            - containerPort: 2368
          env:
            # - name: url
            #   value: https://www.myblog.com
            - name: database__client
              value: mysql
            - name: database__connection__host
              value: ghost-mysql
            - name: database__connection__user
              value: root
            - name: database__connection__password
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            - name: database__connection__database
              value: ghost
          volumeMounts:
            - mountPath: /var/lib/ghost/content
              name: ghost-vol
      volumes:
        - name: ghost-vol
          persistentVolumeClaim:
            claimName: ghost-pv-claim
Traefik Ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ghost-ingress
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`ghost`)
      kind: Rule
      services:
        - name: ghost-svc
          port: 80
I added ghost to /etc/hosts (Mac) as well.
Not sure what I'm doing wrong, but I imagine it's certs / ingress related. Any ideas?

Kubernetes: how do I expose pods to things outside of the cluster machine?

I read the following Kubernetes docs, which resulted in the following YAMLs to run PostgreSQL & pgAdmin in a cluster:
--- pgadmin-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
--- pgadmin-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - port: 30000
      targetPort: 80
  selector:
    app: pgadmin-pod
--- postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
--- postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - port: 30001
      targetPort: 5432
  selector:
    app: postgres-pod
--- postgres-storage.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
I then run kubectl create -f ./, which results in the following:
[screenshot of the created pods and services]
Then I try to access pgAdmin on 10.43.225.170:30000 from outside of the cluster, but I get "10.43.225.170 took too long to respond." no matter what I try.
So how do I expose pgAdmin & Postgres to the outside world, and is there a way to give them static IPs so I don't have to update the IPs in connection strings each time I re-deploy on Kubernetes, or do I have to use a StatefulSet for this?
Problems here:
You are trying to reach the node's internal IP 10.43.225.170 instead of the external one.
The NodePort services are configured incorrectly, and in addition you are calling the wrong port.
You haven't specified what platform you use. I'm using GKE, so in my case it's easier because external IPs are automatically assigned during cluster node creation, but I had to manually create an ingress firewall rule to allow outside access to the nodes on the required ports (30000, 30001).
In any case, to be able to use a NodePort you need an external IP address assigned to one of the nodes in the cluster and a firewall rule that allows ingress traffic to that port.
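On GKE, for example, such a firewall rule can be created with a gcloud command along these lines (the rule name and port range here are only illustrative):
gcloud compute firewall-rules create allow-nodeports \
    --allow tcp:30000-30001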
Going next: you are trying to call <NodeIP>:spec.ports[*].port.
As per the NodePort documentation, the Service is visible as <NodeIP>:spec.ports[*].nodePort, so you need to explicitly specify nodePort.
I have changed your deployment a bit; I can access pgAdmin after deploying it and opening the corresponding ports in the firewall.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - nodePort: 30000
      targetPort: 80
      port: 80
  selector:
    app: pgadmin-pod
--- postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - nodePort: 30001
      targetPort: 5432
      port: 5432
  selector:
    app: postgres-pod
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Check:
kubectl apply -f pg_my.yaml
deployment.apps/pgadmin-deployment created
service/pgadmin-service created
service/postgres-service created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
# In my case I take the node external IP of any node from the `kubectl get nodes -o wide` output:
NAME                                       STATUS   ROLES    AGE   VERSION            INTERNAL-IP   EXTERNAL-IP
gke-cluster-1-default-pool-*******-*****   Ready    <none>   20d   v1.18.16-gke.502   10.186.0.7    *.*.*.*
curl *.*.*.*:30000
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: /login?next=%2F.
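As an extra sanity check, you can confirm which nodePort each Service actually received; the PORT(S) column of kubectl get svc shows port:nodePort (service names taken from the manifests above):
kubectl get svc pgadmin-service postgres-service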

How to connect to a Samba server from a container running in Kubernetes?

I created a Kubernetes cluster in Amazon. Then I ran my pod (container) and volume in this cluster. Now I want to run a Samba server on the volume and connect my pod to the Samba server. Is there any tutorial on how I can solve this problem? By the way, I am working on Windows 10. Here is my deployment code with the volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
  labels:
    app: application
spec:
  replicas: 2
  selector:
    matchLabels:
      project: k8s
  template:
    metadata:
      labels:
        project: k8s
    spec:
      containers:
        - name: k8s-web
          image: mine/flask:latest
          volumeMounts:
            - mountPath: /test-ebs
              name: my-volume
          ports:
            - containerPort: 8080
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: pv0004
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0004
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: [my-Id-volume]
You can check out the Samba container docker image at: https://github.com/dperson/samba
---
kind: Service
apiVersion: v1
metadata:
  name: smb-server
  labels:
    app: smb-server
spec:
  type: LoadBalancer
  selector:
    app: smb-server
  ports:
    - port: 445
      name: smb-server
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: smb-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-server
  template:
    metadata:
      name: smb-server
      labels:
        app: smb-server
    spec:
      containers:
        - name: smb-server
          image: dperson/samba
          env:
            - name: PERMISSIONS
              value: "0777"
          args: ["-u", "username;test", "-s", "share;/smbshare/;yes;no;no;all;none", "-p"]
          volumeMounts:
            - mountPath: /smbshare
              name: data-volume
          ports:
            - containerPort: 445
      volumes:
        - name: data-volume
          hostPath:
            path: /smbshare
            type: DirectoryOrCreate
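Once the LoadBalancer has an external address, other pods can reach the share through the Service DNS name, and the Windows 10 machine can use the external IP. A rough sketch, assuming the user, password, and share name from the args above and whatever external IP kubectl get svc smb-server reports:
# from another pod in the cluster (the pod's image needs smbclient installed)
smbclient //smb-server/share -U username%test

# from Windows 10, map the share via the LoadBalancer's external IP
net use Z: \\<external-ip>\share /user:username test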

MongoDB in Kubernetes within GCP

I'm trying to deploy MongoDB on my k8s cluster, as MongoDB is my db of choice. To do that I have config files (very similar to what I did with Postgres a few weeks ago).
Here's mongo's deployment k8s object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: panel-admin-mongo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-mongo
  template:
    metadata:
      labels:
        component: panel-admin-mongo
    spec:
      volumes:
        - name: panel-admin-mongo-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: panel-admin-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: panel-admin-mongo-storage
              mountPath: /data/db
In order to get into the pod I made a service:
apiVersion: v1
kind: Service
metadata:
  name: panel-admin-mongo-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: panel-admin-mongo
  ports:
    - port: 27017
      targetPort: 27017
And of course I need a PVC as well:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
In order to get to the db from my server I used server deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: panel-admin-api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-api
  template:
    metadata:
      labels:
        component: panel-admin-api
    spec:
      containers:
        - name: panel-admin-api
          image: my-image
          ports:
            - containerPort: 3001
          env:
            - name: MONGO_URL
              value: panel-admin-mongo-cluster-ip-service # This is important
      imagePullSecrets:
        - name: gcr-json-key
But for some reason, when I boot up all the containers with the kubectl apply command, my server says:
MongoDB :: connection error: MongoParseError: Invalid connection string
Can I deploy it like that (as was possible with Postgres)? Or what am I missing here?
Use mongodb:// in front of your panel-admin-mongo-cluster-ip-service, so it should look like this:
mongodb://panel-admin-mongo-cluster-ip-service
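If your driver also expects a port and a database name in the connection string, the full value would presumably look something like this (the database name admin-panel is only a placeholder):
mongodb://panel-admin-mongo-cluster-ip-service:27017/admin-panel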

Unable to deploy Debezium on minikube

I am new to Kubernetes. I am trying to integrate Kafka with Debezium and MySQL.
I successfully deployed Kafka and MySQL on minikube, but once I deploy the Debezium YAML on minikube, it hangs and doesn't respond at all. I then restart minikube, and after all the pods are running, minikube hangs again.
Below is my code:
zookeeper service
apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: follower
      port: 2888
      protocol: TCP
    - name: leader
      port: 3888
      protocol: TCP
  selector:
    app: zookeeper-1
zookeeper deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-1
spec:
  template:
    metadata:
      labels:
        app: zookeeper-1
    spec:
      containers:
        - name: zoo1
          image: debezium/zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zoo1
kafka service:
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP
  selector:
    app: kafka
    id: "1"
  type: NodePort
kafka deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-broker1
spec:
  template:
    metadata:
      labels:
        selector: kafka
        app: kafka
        id: "1"
    spec:
      containers:
        - name: kafka
          image: debezium/kafka
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ADVERTISED_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: 192.168.39.47
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zoo1:2181
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_CREATE_TOPICS
              value: hello-topic:3:3
MySQL persistent volume:
# application/mysql/mysql-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
mysql deployment:
# application/mysql/mysql-deployment.yaml
# this command is for the mysql client:
#   kubectl run -it --rm --image=debezium/example-mysql --restart=Never mysql-client -- mysql -h mysql -pdebezium
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: extensions/v1beta1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: debezium/example-mysql
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: debezium
            - name: MYSQL_USER
              value: mysqluser
            - name: MYSQL_PASSWORD
              value: mysqlpw
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Debezium deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: debezium-connect-source
spec:
  selector:
    matchLabels:
      app: debezium-connect-source
  replicas: 1
  template:
    metadata:
      labels:
        app: debezium-connect-source
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: debezium-connect-source
          image: debezium/connect
          env:
            - name: BOOTSTRAP_SERVERS
              value: kafka-service:9092
            - name: GROUP_ID
              value: "1"
            - name: CONFIG_STORAGE_TOPIC
              value: debezium-connect-source_config
            - name: OFFSET_STORAGE_TOPIC
              value: debezium-connect-source_offset
          ports:
            - containerPort: 8083
              name: dm-c-source
When I deploy Debezium, the problem starts and minikube responds like this:
$ kubectl get pods
Unable to connect to the server: net/http: TLS handshake timeout
OS: CentOS
minikube version: v0.30.0
I believe this is happening because of a resource crunch on the VM started by minikube.
By default, minikube start gives the VM only 2 CPUs and 2 GB of RAM from your system, and looking at your deployments (Kafka + MySQL + Debezium) that might not be enough.
You can increase the CPU and memory allocated to the VM by passing the --cpus and --memory parameters to minikube start (the memory value is in MB).
For more info, run minikube start -h.
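For example, recreating the VM with more resources might look like this (the exact values are only a suggestion and depend on what your host machine can spare):
minikube delete
minikube start --cpus 4 --memory 8192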
I strongly suggest that, if you want to set up heavy deployments, you use machines with more resources.
Hope this helps.
You should also set memory limits for the Java-based pods. Older versions of Java see the whole guest memory as their own and will happily consume it completely, and there are at least three JVMs started here.
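As a sketch, a memory limit could be added to the containers section of the debezium-connect-source Deployment above (the values below are only examples; the Zookeeper and Kafka containers would get similar blocks):
      containers:
        - name: debezium-connect-source
          image: debezium/connect
          resources:
            requests:
              memory: "256Mi"   # example request; tune to your workload
            limits:
              memory: "512Mi"   # example limit; caps what the JVM can grab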