Running TheHive4 as a cluster on Kubernetes

I'm trying to create a cluster of TheHive4 on K8s. According to the documentation here, the deployment is done on servers, which is a different setup. There is an unofficial Helm chart here, but it is not what I'm looking for.
I created this deployment manifest and successfully created TheHive nodes on Kubernetes, but they are still not acting as a cluster, since the application.conf file is not set up correctly.
If anyone has tried this before or has similar experience with it, please share any documentation or useful pointers to finish the setup.
Thank you in advance.
apiVersion: v1
kind: ConfigMap
metadata:
  name: thehive-config
data:
  application.conf: |
    # configuration example using servers, not k8s
    akka {
      cluster.enable = on
      actor {
        provider = cluster
      }
      remote.artery {
        canonical {
          hostname = "<My IP address>"
          port = 2551
        }
      }
      # seed node list contains at least one active node
      cluster.seed-nodes = [
        "akka://application@<Node 1 IP address>:2551",
        "akka://application@<Node 2 IP address>:2551",
        "akka://application@<Node 3 IP address>:2551"
      ]
    }
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: thehive-claim1
  name: thehive-claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: thehive-claim2
  name: thehive-claim2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: thehive
  name: thehive
spec:
  replicas: 3
  selector:
    matchLabels:
      io.kompose.service: thehive
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: thehive
    spec:
      containers:
        - args:
            - --no-config
            - --no-config-secret
          image: thehiveproject/thehive4:latest
          name: thehive4
          ports:
            - containerPort: 9000
          resources: {}
          volumeMounts:
            - mountPath: /etc/thehive/application.conf
              subPath: application.conf
              name: config-volume
            - mountPath: /opt/thp/thehive/data
              name: thehive-claim1
            - mountPath: /opt/thp/thehive/index
              name: thehive-claim2
      restartPolicy: Always
      volumes:
        - name: config-volume
          configMap:
            name: thehive-config
        - name: thehive-claim1
          persistentVolumeClaim:
            claimName: thehive-claim1
        - name: thehive-claim2
          persistentVolumeClaim:
            claimName: thehive-claim2
status: {}
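For what it's worth, the usual way to give each Akka node a stable, resolvable address on Kubernetes is a headless Service plus a StatefulSet instead of a Deployment, so the seed-node list can use predictable pod DNS names. The sketch below only illustrates that idea; the Service name thehive-headless, the default namespace, and wiring the hostname from the pod name are my assumptions, not something from the original manifest or TheHive's documentation.
# Hypothetical headless Service so each pod gets a stable DNS name
# (thehive-0.thehive-headless.default.svc.cluster.local, thehive-1..., thehive-2...).
apiVersion: v1
kind: Service
metadata:
  name: thehive-headless
spec:
  clusterIP: None          # headless: per-pod DNS records instead of one virtual IP
  selector:
    io.kompose.service: thehive
  ports:
    - name: akka-remote
      port: 2551
---
# Same container spec as above, but managed by a StatefulSet so the pod names
# (thehive-0, thehive-1, thehive-2) are stable and could be listed as Akka seed nodes:
#   cluster.seed-nodes = [
#     "akka://application@thehive-0.thehive-headless.default.svc.cluster.local:2551",
#     "akka://application@thehive-1.thehive-headless.default.svc.cluster.local:2551",
#     "akka://application@thehive-2.thehive-headless.default.svc.cluster.local:2551"
#   ]
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: thehive
spec:
  serviceName: thehive-headless
  replicas: 3
  selector:
    matchLabels:
      io.kompose.service: thehive
  template:
    metadata:
      labels:
        io.kompose.service: thehive
    spec:
      containers:
        - name: thehive4
          image: thehiveproject/thehive4:latest
          args: ["--no-config", "--no-config-secret"]
          ports:
            - containerPort: 9000
            - containerPort: 2551
          env:
            - name: POD_NAME          # Downward API: usable to build the canonical hostname
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - mountPath: /etc/thehive/application.conf
              subPath: application.conf
              name: config-volume
      volumes:
        - name: config-volume
          configMap:
            name: thehive-config
With stable names like these the three replicas can find each other again after restarts; with a plain Deployment the pod IPs change and the seed-node list cannot be pinned down.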

Related

Kubernetes error while creating mount source path : file exists

After re-deploying my Kubernetes StatefulSet, the pod is now failing due to an error while creating the mount source path:
'/var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount': mkdir /var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount: file exists
I'm assuming this is because the persistent volume/PVC already exists and so it cannot be created, but I thought that was the point of a StatefulSet: the data persists and you can just mount it again. How should I fix this?
Thanks.
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
  selector:
    app: foo-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo-statefulset
  namespace: foo
spec:
  selector:
    matchLabels:
      app: foo-app
  serviceName: foo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: foo-app
    spec:
      serviceAccountName: foo-service-account
      containers:
        - name: foo
          image: blahblah
          imagePullPolicy: Always
          volumeMounts:
            - name: foo-data
              mountPath: "foo"
            - name: stuff
              mountPath: "here"
            - name: config
              mountPath: "somedata"
      volumes:
        - name: stuff
          persistentVolumeClaim:
            claimName: stuff-pvc
        - name: config
          configMap:
            name: myconfig
  volumeClaimTemplates:
    - metadata:
        name: foo-data
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: "foo-storage"
        resources:
          requests:
            storage: 2Gi

Pods on different Nodes can't talk to each other - Kubernetes

Context:
I am building an application and am now at the infrastructure step.
The application is built with Java and the persistence layer is MongoDB.
Problem:
If the application runs on the same node as the persistence layer, everything works fine, but on different nodes the application cannot communicate with MongoDB.
Here is a screenshot of the Kubernetes Dashboard:
As you can see, two application (gateway) pods are running on the same node as Mongo, but the other two are not, and those two cannot find MongoDB.
Here is the mongo-db.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-data
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /home/vitor/seguranca/mongo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  volumeName: mongo-data
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mongo
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mongo
      name: mongo-service
    spec:
      volumes:
        - name: "deployment-storage"
          persistentVolumeClaim:
            claimName: "pvc"
      containers:
        - image: mongo
          name: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: "deployment-storage"
              mountPath: "/data/db"
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongo-service
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
  clusterIP: None
and here the application.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: gateway
  name: gateway
spec:
  replicas: 4
  selector:
    matchLabels:
      app: gateway
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: gateway
    spec:
      containers:
        - image: vitornilson1998/native-micro
          name: native-micro
          env:
            - name: MONGO_CONNECTION_STRING
              value: mongodb://mongo-service:27017 #HERE IS THE POINT THAT THE APPLICATION USES TO ACCESS MONGODB
            - name: MONGO_DB
              value: gateway
          resources: {}
          ports:
            - containerPort: 8080
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: gateway-service
  name: gateway-service
spec:
  ports:
    - name: 8080-8080
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: gateway
  type: NodePort
status:
  loadBalancer: {}
I can't see what is stopping the application from reaching MongoDB.
What should I do?
I was using Calico as the CNI.
I removed Calico and let kube-proxy take care of everything.
Now everything is working fine.
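This is not from the original thread, but when chasing this kind of cross-node problem it can help to run a throwaway pod pinned to the failing node and check DNS and connectivity to the service from inside it (e.g. nslookup mongo-service, or nc mongo-service 27017). A sketch; the node name "worker-2" is a placeholder:
# Hypothetical debug pod pinned to the node where the gateway pods fail.
apiVersion: v1
kind: Pod
metadata:
  name: net-debug
spec:
  nodeName: worker-2              # placeholder: pin to the problematic node
  containers:
    - name: net-debug
      image: busybox:1.36
      command: ["sleep", "3600"]  # keep the pod alive for interactive checks
  restartPolicy: Never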

Why am I getting "1 pod has unbound immediate PersistentVolumeClaims" when working with 2 deployments

I am trying to do a fairly simple red/green setup using Minikube where I want one pod running a red container, one pod running a green container, and a service to hit each. To do this, my k8s file looks like...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Users/jackiegleason/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: main-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: express-app
  name: express-service-red
spec:
  type: LoadBalancer
  ports:
    - port: 3000
  selector:
    app: express-app-red
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: express-app
  name: express-service-green
spec:
  type: LoadBalancer
  ports:
    - port: 3000
  selector:
    app: express-app-green
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app-deployment-red
  labels:
    app: express-app-red
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: express-app-red
        tier: app
    spec:
      volumes:
        - name: express-app-storage
          persistentVolumeClaim:
            claimName: main-volume-claim
      containers:
        - name: express-app-container
          image: partyk1d24/hello-express:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/external"
              name: express-app-storage
          ports:
            - containerPort: 3000
              protocol: TCP
              name: express-endpnt
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app-deployment-green
  labels:
    app: express-app-green
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: express-app-green
        tier: app
    spec:
      volumes:
        - name: express-app-storage
          persistentVolumeClaim:
            claimName: main-volume-claim
      containers:
        - name: express-app-container
          image: partyk1d24/hello-express:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: DEPLOY_TYPE
              value: "Green"
          volumeMounts:
            - mountPath: "/var/external"
              name: express-app-storage
          ports:
            - containerPort: 3000
              protocol: TCP
              name: express-endpnt
But when I run I get...
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
What am I missing? It worked fine without the second deployment.
Thank you!
You cannot use the same PV with accessMode: ReadWriteOnce multiple times.
To do this you would need to use a volume that supports the ReadWriteMany access mode.
Check out the k8s documentation for the list of volume plugins that support this feature.
Additionally, as David already mentioned, it's much better to log to STDOUT.
You can also check solutions like Fluent Bit/Fluentd or the ELK stack.
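As an illustration of the access-mode point above, a claim intended to be shared by both deployments would look roughly like this; the storage class name nfs-client is an assumption, and it only works with a provisioner that actually supports ReadWriteMany (hostPath does not):
# Hypothetical shared claim for the red and green deployments.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: main-volume-claim
spec:
  storageClassName: nfs-client   # assumed RWX-capable StorageClass
  accessModes:
    - ReadWriteMany              # both the red and the green pod may mount it
  resources:
    requests:
      storage: 1Gi
The alternative is to give each deployment its own ReadWriteOnce claim, which is usually simpler when the pods don't actually need to share files.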

After running Kompose I get: pod has unbound immediate PersistentVolumeClaims

What's the problem?
I can't get the pods that use a volume to run. In the Kubernetes Dashboard I get the following error:
running "VolumeBinding" filter plugin for pod "influxdb-6979bff6f9-hpf89": pod has unbound immediate PersistentVolumeClaims
What did I do?
After running kompose convert on my docker-compose.yml file, I tried to start the pods with microk8s kubectl apply -f . (I am using microk8s). I had to replace the apiVersion of the NetworkPolicy YAML files with networking.k8s.io/v1 (see here), but apart from this change I didn't change anything.
YAML Files
influxdb-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: influxdb
  name: influxdb
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: influxdb
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.network/cloud-net: "true"
        io.kompose.network/default: "true"
        io.kompose.service: influxdb
    spec:
      containers:
        - env:
            - name: INFLUXDB_HTTP_LOG_ENABLED
              value: "false"
          image: influxdb:1.8
          imagePullPolicy: ""
          name: influxdb
          ports:
            - containerPort: 8086
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/influxdb
              name: influx
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: influx
          persistentVolumeClaim:
            claimName: influx
status: {}
influxdb-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: influxdb
  name: influxdb
spec:
  ports:
    - name: "8087"
      port: 8087
      targetPort: 8086
  selector:
    io.kompose.service: influxdb
status:
  loadBalancer: {}
influx-persistenvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: influx
  name: influx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
The PersistentVolumeClaim will be unbound if either the cluster does not have a StorageClass that can dynamically provision a PersistentVolume, or it does not have a manually created PersistentVolume to satisfy the claim.
Here is a guide on how to configure a pod to use a PersistentVolume.
To solve the current scenario, you can manually create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Please note that hostPath is used only as an example here. It is not recommended for production use. Consider using external block or file storage from the supported types here.
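For that PV to actually bind, the kompose-generated claim also has to request the same storage class; with no storageClassName set and no default class in the cluster, the claim stays pending. A hedged tweak to influx-persistenvolumeclaim.yaml, assuming the manual PV above is used:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    io.kompose.service: influx
  name: influx
spec:
  storageClassName: manual     # must match the PV above for static binding
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi           # must fit within the PV's 100Mi capacity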

Kubernetes nfs define path in claim

I want to create a common persistent volume backed by NFS.
PV(nfs):
common-data-pv 1500Gi RWO Retain
192.168.0.24 /home/common-data-pv
I would like a claim, or a pod mounting that claim, bound to common-data-pv to be able to specify a sub-path, for example:
/home/common-data-pv/www-site-1 (50Gi)
/home/common-data-pv/www-site-2 (50Gi)
But I haven't found in the documentation how to define this.
My current conf for the PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: common-data-pv
  labels:
    type: common
spec:
  capacity:
    storage: 1500Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.122.1
    path: "/home/pv/common-data-pv"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: common-data-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: common
Example use:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web-1
  namespace: kube-system
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: common-data-pvc
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web-2
  namespace: kube-system
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: common-data-pvc
To use the claim you just need to add a volumeMounts section and volumes to your manifest. Here's an example replication controller for nginx that would use your claim. Note the very last line that uses the same PVC name.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
  namespace: kube-system
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: common-data-pvc
More examples can be found in the kubernetes repo under examples
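Not covered by the answer above, but the per-site directories asked about in the question are usually handled with subPath on the volume mount, so both controllers can share common-data-pvc while writing to different folders on the NFS export. A sketch for nfs-web-1; the directory names come from the question, and note that the 50Gi-per-site sizing cannot be enforced this way, since a single PVC has a single size:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web-1
  namespace: kube-system
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            - name: nfs
              mountPath: "/usr/share/nginx/html"
              subPath: www-site-1     # nfs-web-2 would use subPath: www-site-2
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: common-data-pvc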