Kubernetes multi-container pod [closed]

Hello, I'm trying to have a Pod with two containers: a C++ app and a MySQL database. I used to have MySQL deployed behind its own Service, but I got latency issues, so I want to try a multi-container Pod.
But I've been struggling to connect my app to MySQL through localhost. It says:
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'
Here is my kubernetes.yaml. Please, I need help :(
# Database setup
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storage-camera
  labels:
    group: camera
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: camera-pv
  labels:
    group: camera
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: storage-camera
---
# Service setup
apiVersion: v1
kind: Service
metadata:
  name: camera-service
  labels:
    group: camera
spec:
  ports:
    - port: 50052
      targetPort: 50052
  selector:
    group: camera
    tier: service
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: camera-service
  labels:
    group: camera
    tier: service
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 60
  template:
    metadata:
      labels:
        group: camera
        tier: service
    spec:
      containers:
        - image: asia.gcr.io/test/db-camera:latest
          name: db-camera
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: camera-persistent-storage
              mountPath: /var/lib/mysql
        - name: camera-service
          image: asia.gcr.io/test/camera-service:latest
          env:
            - name: DB_HOST
              value: "localhost"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "camera"
            - name: DB_ROOT_PASS
              value: "password"
          ports:
            - name: http-cam
              containerPort: 50052
      volumes:
        - name: camera-persistent-storage
          persistentVolumeClaim:
            claimName: camera-pv
      restartPolicy: Always

Your MySQL client is configured to use a socket and not talk over the network stack, cf. the MySQL documentation:
On Unix, MySQL programs treat the host name localhost specially, in a
way that is likely different from what you expect compared to other
network-based programs. For connections to localhost, MySQL programs
attempt to connect to the local server by using a Unix socket file.
This occurs even if a --port or -P option is given to specify a port
number. To ensure that the client makes a TCP/IP connection to the
local server, use --host or -h to specify a host name value of
127.0.0.1, or the IP address or name of the local server. You can also specify the connection protocol explicitly, even for localhost, by
using the --protocol=TCP option.
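So in this setup the simplest fix is probably to point the client at 127.0.0.1 instead of localhost, so it connects over TCP. A minimal sketch against the Deployment from the question (assuming the C++ app passes DB_HOST straight through to the MySQL client library):

        - name: camera-service
          image: asia.gcr.io/test/camera-service:latest
          env:
            - name: DB_HOST
              value: "127.0.0.1"   # forces a TCP connection instead of the Unix socket lookup
            - name: DB_PORT
              value: "3306"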
If you still want camera-service to talk to MySQL over the file-system socket, you need to mount that file system into the camera-service container as well; currently you only mount it into db-camera.
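A rough sketch of that shared-socket approach, using an emptyDir volume mounted at /var/run/mysqld in both containers (the volume name here is made up, and this assumes the MySQL image writes its socket to that default path):

      containers:
        - name: db-camera
          image: asia.gcr.io/test/db-camera:latest
          volumeMounts:
            - name: mysql-socket              # hypothetical shared volume for mysqld.sock
              mountPath: /var/run/mysqld
            - name: camera-persistent-storage
              mountPath: /var/lib/mysql
        - name: camera-service
          image: asia.gcr.io/test/camera-service:latest
          volumeMounts:
            - name: mysql-socket              # the app now finds the socket at the expected path
              mountPath: /var/run/mysqld
      volumes:
        - name: mysql-socket
          emptyDir: {}
        - name: camera-persistent-storage
          persistentVolumeClaim:
            claimName: camera-pv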

Related

Kubernetes HostPort with dnsmasq issue [closed]

I'm trying to set up a dnsmasq pod via Kubernetes. The YAML file is below:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dnsmasq1
  labels:
    name: dnsmasq1
spec:
  serviceName: "dnsmasq1"
  replicas: 1
  selector:
    matchLabels:
      name: dnsmasq1
  volumeClaimTemplates:
    - metadata:
        name: dnsmasqconf-pv1
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: ceph-rbd-sc
  template:
    metadata:
      labels:
        name: dnsmasq1
    spec:
      hostNetwork: false
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  name: dnsmasq1
              topologyKey: "kubernetes.io/hostname"
      hostname: dnsmasq1
      containers:
        - name: dnsmasq1
          image: jpillora/dnsmasq
          ports:
            - containerPort: 8080
              hostPort: 8082
          imagePullPolicy: IfNotPresent
          env:
            - name: HTTP_USER
              value: "****"
            - name: HTTP_PASS
              value: "****"
          volumeMounts:
            - mountPath: /mnt/config
              name: dnsmasqconf-pv1
          resources:
            requests:
              memory: 250Mi
            limits:
              memory: 250Mi
      nodeSelector:
        etiket: worker
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 8.8.8.8
      volumes:
        - name: dnsmasqconf-pv1
          persistentVolumeClaim:
            claimName: dnsmasqconf-pv1
This works fine and I can reach the pod using the node's IP address. I decided to test the pod as a DNS server on a test machine, but the entries are not resolved. I think this is because I'm not using the Pod's IP as the DNS server but the node's. How can I set this pod up to be used as a DNS server externally? I don't have a cloud provider, so I don't think I can use a LoadBalancer IP here.
I don't think you can use dnsmasq as an external DNS server, because dnsmasq is a lightweight DNS forwarder designed to provide DNS services to a small-scale network. It can serve the names of local machines which are not in the global DNS. dnsmasq makes it simple to specify the nameserver to use for a given domain, and it is ideal for managing communication inside a Kubernetes cluster.
In /etc/NetworkManager/NetworkManager.conf, add or uncomment the following line in the [main] section:
dns=dnsmasq
Create /etc/NetworkManager/dnsmasq.d/kube.conf with this line:
server=/cluster.local/10.90.0.10
This tells dnsmasq that queries for anything in the cluster.local domain should be forwarded to the DNS server at 10.90.0.10. This happens to be the default IP address of the kube-dns service in the kube-system namespace. If your cluster’s DNS service has a different IP address, you’ll need to specify it instead.
Now, after you run systemctl restart NetworkManager, your /etc/resolv.conf should look something like this:
#Generated by NetworkManager
search localdomain
nameserver 127.0.0.1
The important line is nameserver 127.0.0.1. This means DNS requests are sent to localhost, which is handled by dnsmasq.

Kubernetes HA data across several workers [closed]

I have set up a Kubernetes cluster with 1 master and 3 worker nodes, plus a load balancer. At the moment my pipeline is stuck, as I'm struggling to find a solution for how to set up a WordPress website whose data is replicated across all nodes. Everything is clear to me except one thing: how do I get all 3 workers (VPS servers in different countries) to share the same data, so that the pods can work and scale, and if one worker dies the second and third can continue providing all services? Is PVE the solution, or something else? Please point me in a direction to start searching.
Thanks.
You can create a PersistentVolumeClaim in ReadWriteMany mode, which provisions a PersistentVolume that holds your WordPress site data, and then create a Deployment with 3 replicas that mounts that PersistentVolume.
Example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-data
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  storageClassName: fast # update this to whatever persistent storage class is available on your cluster. See https://kubernetes.io/docs/concepts/storage/storage-classes/
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
          volumeMounts:
            - mountPath: "/var/www/html"
              name: wordpress-data
      volumes:
        - name: wordpress-data
          persistentVolumeClaim:
            claimName: wordpress-data # notice this is referencing the PersistentVolumeClaim we declared above
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort # or LoadBalancer
  selector:
    app: wordpress
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80

PVCs and multiple namespaces

I have a small SaaS app that customers can sign up for, and they get their own instance completely separated from the rest of the clients. It's a simple REST API with a DB (Postgres) and Caddy, deployed using docker-compose.
This works fine, but it requires me to create the VPS and deploy the different services by hand, and it's essentially really hard to manage since most of the work is manual.
I have decided to use Kubernetes, and I have gotten to the point where I can create a separate instance of the system in its own isolated namespace for each client, fully automated. This creates the different deployments, services, and pods. I also create a PVC for each namespace/client.
The issue has to do with PersistentVolumeClaims and how they work across namespaces. As I want to keep the data completely separate from other instances, I wanted to create a PVC for each client, so that only that client's DB can access it (and the server, as it requires some data to be written to disk).
This works fine in minikube, but the issue comes with the hosting provider. I use DigitalOcean's managed cluster and they do not allow multiple PVCs to be created, therefore making it impossible to achieve the level of isolation that I want. They allow you to mount a block storage volume (whatever size you need) and then use it. This would mean that the data is all stored on the "same disk", and all namespaces can access it.
My question is: is there a way to achieve the same level of isolation, i.e. separate mount points for each of the DB instances, such that I can still achieve (or at least get close to) the level of separation that I require? The idea would be something along the lines of:
/pvc-root
  /client1
    /server
    /db
  /client2
    /server
    /db
  ...
This is what I have for now:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: database-claim
  name: database-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: database
  name: database
spec:
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
  selector:
    io.kompose.service: database
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: database
  name: database
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: database
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.service: database
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: db_name
            - name: POSTGRES_PASSWORD
              value: db_password
            - name: POSTGRES_USER
              value: db_user
          image: postgres:10
          imagePullPolicy: ""
          name: postgres
          ports:
            - containerPort: 5432
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: database-claim
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: database-claim
          persistentVolumeClaim:
            claimName: database-claim
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: server
  name: server
spec:
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
  selector:
    io.kompose.service: server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: server
  name: server
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: server
  template:
    metadata:
      labels:
        io.kompose.service: server
    spec:
      containers:
        - env:
            - name: DB_HOST
              value: database
            - name: DB_NAME
              value: db_name
            - name: DB_PASSWORD
              value: db_password
            - name: DB_PORT
              value: "5432"
            - name: DB_USERNAME
              value: db_user
          image: <REDACTED>
          name: server-image
          ports:
            - containerPort: 8080
      restartPolicy: Always
      volumes: null
EDIT Feb 02, 2021
I have been in contact with DO's customer support and they clarified a few things:
You need to manually attach a volume to the cluster, so the PVC deployment file is ignored. The volume is then mounted and available to the cluster, but NOT in a ReadWriteMany config, which could have served this case quite well.
They provide an API, so in theory I could create a volume for each client programmatically and then attach it for that specific client, keeping ReadWriteOnce.
This of course locks me into them as a vendor, and makes things a bit harder to configure and migrate.
I am still looking for suggestions whether this is the correct approach for my case. If you have a better way let me know!
In theory this should be achievable.
Is there a way to achieve the same level of isolation, i.e. separate the mount points for each of the DB instances, in such a way that I can still achieve (or at least get close) to the level of separation that I require?
Don't run a production database with a single volume. You want to run a database with some form of replication, in case a volume or node crashes.
Either run the database in a distributed setup, e.g. using Crunchy PostgreSQL for Kubernetes, or use a managed database, e.g. DigitalOcean Managed Databases.
Within that DBMS, create logical databases or schemas for each customer, if you really need that strong isolation. Hint: it is probably easier to maintain with less isolation, e.g. using multi-tenancy within the tables.
Late to the party, but here's a solution that may work for your use case, if you still haven't found one.
Create a highly available NFS/Ceph server, then export it in a way that your pods can attach to. Then create PVs and PVCs as you like and bypass all that DO blockage.
I support an application very similar to what you describe, and I went with a highly available NFS server using DRBD, Corosync, and Pacemaker. It all works as expected; no issues so far.
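For reference, once such an NFS export exists, per-client volumes can be declared as plain NFS-backed PersistentVolumes and bound from each client's namespace with a matching claim. A minimal sketch (the server address, export path, and namespace/volume names below are made up for illustration):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: client1-db-pv              # hypothetical per-client volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.0.0.50              # assumed address of the HA NFS server
    path: /exports/client1/db      # assumed per-client export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-claim
  namespace: client1               # claimed from that client's namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""             # bind to the pre-created PV instead of a dynamic provisioner
  volumeName: client1-db-pv
  resources:
    requests:
      storage: 5Gi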

Installing a local FTP Server with K8S / Minikube, and accessing it with Filezilla

I'm trying to install an FTP server on Kubernetes, based on this repo.
I also use Traefik as the Ingress controller.
Everything seems fine, and I can connect to the FTP server with the cluster IP, but I can't make it work with a local domain like ftp.local.
Here are my K8S files:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
reloader.stakater.com/auto: "true"
labels:
app: ftp-local
name: ftp-local
namespace: influx
spec:
selector:
matchLabels:
app: ftp-local
strategy:
type: Recreate
replicas: 1
template:
metadata:
labels:
app: ftp-local
spec:
containers:
- name: ftp-local
image: fauria/vsftpd
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 21
protocol: TCP
name: "ftp-server"
volumeMounts:
- mountPath: "/home/vsftpd"
name: task-pv-storage
env:
- name: FTP_USER
value: "sunchain"
- name: FTP_PASS
value: "sunchain"
#- name: PASV_ADDRESS
# value: "127.0.0.1"
#- name: PASV_MIN_PORT
# value: "21100"
#- name: PASV_MAX_PORT
# value: "21110"
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: task-pv-claim
namespace: influx
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: task-pv-volume
namespace: influx
labels:
type: local
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/data"
---
apiVersion: v1
kind: Service
metadata:
name: ftp-local
namespace: influx
labels:
app: ftp-local
spec:
ports:
- name: "21"
port: 21
targetPort: 21
selector:
app: ftp-local
type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ftp-ingress
namespace: influx
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: ftp.local
http:
paths:
- backend:
serviceName: ftp-local
servicePort: 21
I also have a line in /etc/hosts that looks like this:
127.0.0.1 ftp.local
What am I missing?
First of all, the repo you linked was created/updated 2-3 years ago.
Many newer Kubernetes features require SSL for communication. That's one reason why SFTP is easier to run on Kubernetes.
Another thing: you are using Minikube with --driver=none, which has some restrictions. All necessary information about the none driver is described here.
There is already a similar question regarding FTP and FileZilla in this thread.
As a workaround you can consider using hostPort or hostNetwork (see the sketch at the end of this answer).
Many configuration aspects depend on whether you want to use Active or Passive FTP.
FTP requires two TCP connections; the second (data) connection is established on a random port, which doesn't fit well with the Service concept (and an HTTP Ingress like the Traefik one above won't route plain FTP on port 21 at all). SFTP requires only one connection.
As another solution you could consider using SFTP. You can find many articles on the web arguing that it's better to use SFTP instead of FTP, for example the Tibco docs or this article.
You can check the information about an SFTP server using OpenSSH and try this GitHub SFTP example.
Here you can find information about using SFTP in FileZilla.
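For the hostNetwork workaround mentioned above, a rough sketch of the pod spec changes is below. PASV_ADDRESS should be the node's externally reachable IP; the address and port range here are illustrative only, and the passive ports must actually be reachable on that node:

    spec:
      hostNetwork: true              # the pod shares the node's network namespace
      containers:
        - name: ftp-local
          image: fauria/vsftpd
          ports:
            - containerPort: 21
              protocol: TCP
          env:
            - name: FTP_USER
              value: "sunchain"
            - name: FTP_PASS
              value: "sunchain"
            - name: PASV_ADDRESS
              value: "192.168.1.10"  # assumed: the node IP that FileZilla will connect to
            - name: PASV_MIN_PORT
              value: "21100"
            - name: PASV_MAX_PORT
              value: "21110"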

Kubernetes YAML Generator UI, YAML builder for Kubernetes [closed]

Is there any tool, online or self-hosted, that takes all the values as input in a UI and generates the full declarative YAML for the following Kubernetes objects:
Deployment, with init containers and imagepullsecrets and other options
Service
ConfigMap
Secret
Daemonset
StatefulSet
Namespaces and quotas
RBAC resources
Edit:
I have been using kubectl create and kubectl run, but they don't support all the possible configuration options, and you still need to remember all the options each of them supports. In a UI one would be able to select from the available options for each resource.
The closest is kubectl create .... and kubectl run ....... Run them with -o yaml --dry-run > output.yaml. This won't create the resource, but will write the resource description to the output.yaml file.
I found yipee.io, which supports all the options and resources:
# Generated 2018-10-18T11:07:27.621Z by Yipee.io
# Application: nginx
# Last Modified: 2018-10-18T11:07:27.621Z
apiVersion: v1
kind: Service
metadata:
  namespace: webprod
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 8080
      name: nginx-hhpt
      protocol: TCP
      nodePort: 30003
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
  namespace: webprod
  annotations:
    yipee.io.lastModelUpdate: '2018-10-18T11:07:27.595Z'
spec:
  selector:
    matchLabels:
      name: nginx
      component: nginx
      app: nginx
  rollbackTo:
    revision: 0
  template:
    spec:
      imagePullSecrets:
        - name: imagsecret
      containers:
        - volumeMounts:
            - mountPath: /data
              name: nginx-vol
          name: nginx
          ports:
            - containerPort: 80
              protocol: TCP
              name: http
          imagePullPolicy: IfNotPresent
          image: docker.io/nginx:latest
      volumes:
        - name: nginx-vol
          hostPath:
            path: /data
            type: Directory
      serviceAccountName: test
    metadata:
      labels:
        name: nginx
        component: nginx
        app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
  replicas: 1
  revisionHistoryLimit: 3
I have tried to address the same issue using a Java client based on the most popular Kubernetes Java Client:
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <version>4.1.3</version>
</dependency>
It allows you to set the most exotic options... but the API is not very fluent (or I have not yet found the way to use it fluently), so the code becomes quite verbose. Building a UI is a challenge because of the extreme complexity of the model.
yipee.io sounds promising though, but I didn't understand how to get a trial version.