I created a Kubernetes cluster on Amazon (AWS). Then I deployed my pod (container) and a volume into this cluster. Now I want to run a Samba server on that volume and connect my pod to the Samba server. Is there a tutorial on how to solve this problem? By the way, I am working on Windows 10. Here is my deployment code with the volume:
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
labels:
app : application
spec:
replicas: 2
selector:
matchLabels:
project: k8s
template:
metadata:
labels:
project: k8s
spec:
containers:
- name : k8s-web
image: mine/flask:latest
volumeMounts:
- mountPath: /test-ebs
name: my-volume
ports:
- containerPort: 8080
volumes:
- name: my-volume
persistentVolumeClaim:
claimName: pv0004
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0004
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: ext4
volumeID: [my-Id-volume]
You can check out the Samba container Docker image at https://github.com/dperson/samba:
---
kind: Service
apiVersion: v1
metadata:
name: smb-server
labels:
app: smb-server
spec:
type: LoadBalancer
selector:
app: smb-server
ports:
- port: 445
name: smb-server
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: smb-server
spec:
replicas: 1
selector:
matchLabels:
app: smb-server
template:
metadata:
name: smb-server
labels:
app: smb-server
spec:
containers:
- name: smb-server
image: dperson/samba
env:
- name: PERMISSIONS
value: "0777"
args: ["-u", "username;test","-s","share;/smbshare/;yes;no;no;all;none","-p"]
volumeMounts:
- mountPath: /smbshare
name: data-volume
ports:
- containerPort: 445
volumes:
- name: data-volume
hostPath:
path: /smbshare
type: DirectoryOrCreate
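To verify connectivity from your application pod, you can reach the share through the smb-server Service. This is a minimal sketch, assuming smbclient is installed in the client container, both workloads run in the same namespace, and the share name and credentials match the args above:

# from a shell inside the application pod: list the contents of the "share" defined above
smbclient //smb-server/share -U 'username%test' -c 'ls'

Mounting the share as a filesystem inside the pod is a different matter: plain CIFS mounts need elevated privileges, so an SMB-capable volume driver (such as a CIFS/SMB CSI driver) is usually the cleaner route.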
I have a Kubernetes template file that contains multiple templates inside, for example:
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-config
namespace: elastic-foo
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
# all input will come from filebeat, no local logs
input {
}
filter {
}
output {
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: elastic-foo
spec:
replicas: 1
selector:
matchLabels:
app: logstash
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
image: docker.elastic.co/logstash/logstash:7.10.1
env:
ports:
- containerPort: 5044
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-config
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-config
items:
- key: logstash.conf
path: logstash.conf
---
kind: Service
apiVersion: v1
metadata:
name: logstash-service
namespace: elastic-foo
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 5044
targetPort: 5044
type: ClusterIP
My question is: how can I apply only one of these templates, selected by name or kind? For example:
kubectl create -f Service or Deployment or ConfigMap -f my_config.yml
That is not a feature of kubectl. You can split the templates into separate files, or filter the multi-document file through a tool like yq and pipe the result to kubectl create -f -.
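For example, a sketch using mikefarah's yq v4 (the file name and the selected kind/name are assumptions based on your manifest):

# apply only the Deployment document from the multi-document file
yq 'select(.kind == "Deployment")' my_config.yml | kubectl create -f -

# or select by both kind and name
yq 'select(.kind == "Service" and .metadata.name == "logstash-service")' my_config.yml | kubectl create -f -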
I am using the ELK stack (Elasticsearch, Logstash, Kibana) for log processing and analysis in a Kubernetes environment, with Filebeat to capture the logs.
The service account, cluster role, and cluster role binding for Elasticsearch are in the YAML below:
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch
namespace: kube-system
labels:
k8s-app: elasticsearch
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: elasticsearch
labels:
k8s-app: elasticsearch
rules:
- apiGroups:
- ""
resources:
- "services"
- "namespaces"
- "endpoints"
verbs:
- "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: kube-system
name: elasticsearch
labels:
k8s-app: elasticsearch
subjects:
- kind: ServiceAccount
name: elasticsearch
namespace: kube-system
apiGroup: ""
roleRef:
kind: ClusterRole
name: elasticsearch
apiGroup: ""
Elasticsearch Service YAML:
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: kube-system
labels:
k8s-app: elasticsearch
spec:
ports:
- port: 9200
protocol: TCP
targetPort: db
selector:
k8s-app: elasticsearch
externalIPs:
- 10.10.0.82
The Elasticsearch StatefulSet YAML is below:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
namespace: kube-system
labels:
k8s-app: elasticsearch
spec:
serviceName: elasticsearch
replicas: 2
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
k8s-app: elasticsearch
template:
metadata:
labels:
k8s-app: elasticsearch
spec:
serviceAccountName: elasticsearch
containers:
- image: elasticsearch:6.8.4
name: elasticsearch
resources:
limits:
cpu: 1000m
memory: "2Gi"
requests:
cpu: 100m
memory: "1Gi"
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: data
mountPath: /data
env:
- name: "NAMESPACE"
valueFrom:
fieldRef:
fieldPath: metadata.namespace
initContainers:
- image: alpine:3.6
command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
name: elasticsearch-init
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data
labels:
k8s-app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
pv & pvc0 yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: elklogs-pv0
namespace: kube-system
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle
nfs:
server: 10.10.0.131
path: /opt/data/vol/0
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: data-elasticsearch-0
namespace: kube-system
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
pv_pvc1.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: elklogs-pv1
namespace: kube-system
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle
nfs:
server: 10.10.0.131
path: /opt/data/vol/1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: data-elasticsearch-1
namespace: kube-system
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
logstash_svc.yaml
kind: Service
apiVersion: v1
metadata:
name: logstash-service
namespace: kube-system
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 5044
targetPort: 5044
externalIPs:
- 10.10.0.82
logstash_config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: kube-system
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
beats {
port => 5044
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
geoip {
source => "clientip"
}
}
output {
elasticsearch {
hosts => ["http://10.10.0.82:9200"]
}
}
logstash deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: logstash
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
image: docker.elastic.co/logstash/logstash:6.3.0
ports:
- containerPort: 5044
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-configmap
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-configmap
items:
- key: logstash.conf
path: logstash.conf
filebeat.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: kube-system
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: kube-system
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.config:
prospectors:
# Mounted `filebeat-prospectors` configmap:
path: ${path.config}/prospectors.d/*.yml
# Reload prospectors configs as they change:
reload.enabled: false
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
output.logstash:
hosts: ["http://10.10.0.82:5044"]
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-prospectors
namespace: kube-system
labels:
k8s-app: filebeat
data:
kubernetes.yml: |-
- type: docker
containers.ids:
- "*"
processors:
- add_kubernetes_metadata:
in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:6.8.4
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
securityContext:
runAsUser: 0
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: prospectors
mountPath: /usr/share/filebeat/prospectors.d
readOnly: true
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: prospectors
configMap:
defaultMode: 0600
name: filebeat-prospectors
- name: data
emptyDir: {}
kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-app: kibana-logging
spec:
replicas: 3
selector:
matchLabels:
k8s-app: kibana-logging
template:
metadata:
labels:
k8s-app: kibana-logging
spec:
containers:
- name: kibana-logging
image: docker.elastic.co/kibana/kibana-oss:6.8.4
env:
- name: ELASTICSEARCH_URL
value: http://10.10.0.82:9200
ports:
- containerPort: 5601
name: ui
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-app: kibana-logging
kubernetes.io/name: "Kibana"
spec:
type: NodePort
ports:
- port: 5601
protocol: TCP
targetPort: ui
nodePort: 32010
selector:
k8s-app: kibana-logging
kubectl get svc -n kube-system
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
elasticsearch      ClusterIP   10.43.50.63    10.10.0.82    9200/TCP                 31m
kibana-logging     NodePort    10.43.58.127   10.10.0.82    5601:32010/TCP           4m4s
kube-dns           ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   23d
logstash-service   ClusterIP   10.43.130.36   10.10.0.82    5044/TCP                 30m
Filebeat pod logs:
2020-11-04T16:42:22.857Z INFO log/harvester.go:255 Harvester started for file: /var/lib/docker/containers/011d24d334bba573ffbb466b0f3f70ae5ddc986f233e683076eaae7394801203/011d24d334bba573ffbb466b0f3f70ae5ddc986f233e683076eaae7394801203-json.log
2020-11-04T16:42:22.983Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://logstash-service:9600))
2020-11-04T16:42:52.412Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":270,"time":{"ms":271}},"total":{"ticks":740,"time":{"ms":745},"value":740},"user":{"ticks":470,"time":{"ms":474}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":97},"info":{"ephemeral_id":"6584086a-eff4-46b5-9be0-93892dad9d97","uptime":{"ms":30191}},"memstats":{"gc_next":36421840,"memory_alloc":32140904,"memory_total":55133048,"rss":65593344}},"filebeat":{"events":{"active":4214,"added":4219,"done":5},"harvester":{"open_files":89,"running":88,"started":88}},"libbeat":{"config":{"module":{"running":0},"reloads":2},"output":{"type":"logstash"},"pipeline":{"clients":2,"events":{"active":4117,"filtered":88,"published":4116,"total":4205}}},"registrar":{"states":{"current":5,"update":5},"writes":{"success":6,"total":6}},"system":{"cpu":{"cores":8},"load":{"1":1.9,"15":0.61,"5":0.9,"norm":{"1":0.2375,"15":0.0763,"5":0.1125}}}}}}
2020-11-04T16:42:54.289Z ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://logstash-service:5044)): dial tcp 10.43.145.162:5044: i/o timeout
2020-11-04T16:42:54.289Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://logstash-service:5044)) with 1 reconnect attempt(s)
Logstash pod logs:
[WARN ] 2020-11-04 15:45:04.648 [Ruby-0-Thread-4: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:232] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
From what I understand of your architecture, you are using Filebeat >> Logstash >> Elasticsearch >> Kibana.
So, in filebeat.yml you have selected Logstash as the output, but you have given the wrong port for the Logstash output in filebeat.yml.
It should be:
output.logstash:
hosts: ['http://195.134.187.25:5044']
As you can see in logstash_config.yaml, 5044 is configured as the beats input port, so make that change in the output.logstash section of filebeat.yml.
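As a side note, here is a sketch of that section pointed at the in-cluster Service instead of an external IP; the DNS name is an assumption based on your logstash-service in kube-system, and the hosts entry is written as host:port, the form used in the Filebeat documentation:

output.logstash:
  # host:port of the Logstash beats input (port 5044 in logstash_config.yaml)
  hosts: ["logstash-service.kube-system.svc.cluster.local:5044"]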
I don't see an option to mount a ConfigMap as a volume in a StatefulSet. As per https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#statefulset-v1-apps, only PVCs can be associated with a StatefulSet, but a PVC does not have an option for ConfigMaps.
Here is a minimal example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: example
spec:
selector:
matchLabels:
app: example
serviceName: example
template:
metadata:
labels:
app: example
spec:
containers:
- name: example
image: nginx:stable-alpine
volumeMounts:
- mountPath: /config
name: example-config
volumes:
- name: example-config
configMap:
name: example-configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
name: example-configmap
data:
a: "1"
b: "2"
In the container, you can find the files a and b under /config, with the contents 1 and 2, respectively.
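To check this, you could read the files back from the first pod of the StatefulSet (the pod name example-0 follows the usual <statefulset-name>-<ordinal> convention):

# print both mounted keys from the running pod
kubectl exec example-0 -- cat /config/a /config/b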
Some explanation:
You do not need a PVC to mount the ConfigMap as a volume to your pods. PersistentVolumeClaims are persistent drives that you can read from and write to; an example of their usage is a database such as Postgres.
ConfigMaps, on the other hand, are read-only key-value structures stored inside Kubernetes (in its etcd store) that are meant to hold the configuration of your application. Their values can be mounted as environment variables or as files, either individually or altogether.
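As a sketch of the environment-variable style, reusing the example-configmap from above (the pod name, image, and variable name are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: example-env
spec:
  containers:
    - name: example
      image: busybox:1.36
      command: ["sh", "-c", "echo $A_VALUE $b && sleep 3600"]
      env:
        # expose a single key as one environment variable
        - name: A_VALUE
          valueFrom:
            configMapKeyRef:
              name: example-configmap
              key: a
      envFrom:
        # expose every key in the ConfigMap as an environment variable (here: a and b)
        - configMapRef:
            name: example-configmap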
I have done it this way:
apiVersion: v1
kind: ConfigMap
metadata:
name: rabbitmq-configmap
namespace: default
data:
enabled_plugins: |
[rabbitmq_management,rabbitmq_shovel,rabbitmq_shovel_management].
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rabbitmq
labels:
component: rabbitmq
spec:
serviceName: "rabbitmq"
replicas: 1
selector:
matchLabels:
component: rabbitmq
template:
metadata:
labels:
component: rabbitmq
spec:
initContainers:
- name: "rabbitmq-config"
image: busybox:1.32.0
volumeMounts:
- name: rabbitmq-config
mountPath: /tmp/rabbitmq
- name: rabbitmq-config-rw
mountPath: /etc/rabbitmq
command:
- sh
- -c
- cp /tmp/rabbitmq/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf && echo '' >> /etc/rabbitmq/rabbitmq.conf;
cp /tmp/rabbitmq/enabled_plugins /etc/rabbitmq/enabled_plugins
volumes:
- name: rabbitmq-config
configMap:
name: rabbitmq-configmap
optional: false
items:
- key: enabled_plugins
path: "enabled_plugins"
- name: rabbitmq-config-rw
emptyDir: {}
containers:
- name: rabbitmq
image: rabbitmq:3.8.5-management
env:
- name: RABBITMQ_DEFAULT_USER
value: "username"
- name: RABBITMQ_DEFAULT_PASS
value: "password"
- name: RABBITMQ_DEFAULT_VHOST
value: "vhost"
ports:
- containerPort: 15672
name: ui
- containerPort: 5672
name: api
volumeMounts:
- name: rabbitmq-data-pvc
mountPath: /var/lib/rabbitmq/mnesia
volumeClaimTemplates:
- metadata:
name: rabbitmq-data-pvc
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
name: rabbitmq
spec:
selector:
component: rabbitmq
ports:
- protocol: TCP
port: 15672
targetPort: 15672
name: ui
- protocol: TCP
port: 5672
targetPort: 5672
name: api
type: ClusterIP
I'm trying to implement the Elastic Stack in Kubernetes via Minikube. I've barely started, as I'm writing basically everything from scratch to get a better understanding of K8s, and because the provided YAMLs from Elastic don't offer any explanation of what is done and why, I'm doing my own thing.
The problem I've run into is that my Kibana pod cannot communicate with my Elasticsearch pod, although I've set up the necessary services and ports on my pods.
Where it gets weird is that
kubectl port-forward services/elastic-http 9200
works flawlessly and lets me get information from my ElasticSearch pod. However, when I enter a pod via
kubectl exec -it <pod-name> -- /bin/bash
and try to use curl to get the same information my browser just showed me, the connection is being refused and my pods won't talk to one another.
My configs look as follows.
Kibana.yml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: my-kb
namespace: default
spec:
selector:
matchLabels:
app: kibana
template:
metadata:
name: kibana
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.3.0
ports:
- containerPort: 5601
name: kibana-web
volumeMounts:
- name: kb-conf
mountPath: /usr/share/kibana/config/kibana.yml
subPath: kibana.yml
volumes:
- name: kb-conf
configMap:
name: kibana-config
items:
- key: kibana.yml
path: kibana.yml
---
kind: Service
apiVersion: v1
metadata:
name: kibana-http
namespace: default
spec:
selector:
app: kibana
ports:
- protocol: TCP
port: 5601
name: kibana-web
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kibana-config
namespace: default
data:
kibana.yml: |
elasticsearch.hosts: ["http://elastic-http.default.svc:9200"]
ElasticSearch.yml
kind: PersistentVolume
apiVersion: v1
metadata:
name: elastic-pv
namespace: default
spec:
capacity:
storage: 15Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: elastic-pv-claim
namespace: default
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 15Gi
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: elastic-deploy
namespace: default
spec:
selector:
matchLabels:
app: elasticsearch
template:
metadata:
name: elasticsearch
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
ports:
- containerPort: 9200
name: elastic-http
protocol: TCP
- containerPort: 9300
name: node-sniffer
protocol: TCP
#readinessProbe:
# httpGet:
# port: 9200
# periodSeconds: 5
volumeMounts:
- name: elastic-conf
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
subPath: elasticsearch.yml
- name: elastic-data
mountPath: /var/data
securityContext:
privileged: true
initContainers:
- name: sysctl-adj
image: busybox
command: ['sysctl', '-w', 'vm.max_map_count=262144']
securityContext:
privileged: true
volumes:
- name: elastic-data
persistentVolumeClaim:
claimName: elastic-pv-claim
- name: elastic-conf
configMap:
name: elastic-config
items:
- key: elasticsearch.yml
path: elasticsearch.yml
---
kind: Service
apiVersion: v1
metadata:
name: elastic-http
namespace: default
spec:
selector:
app: elasticsearch
ports:
- port: 9200
targetPort: elastic-http
name: elastic-http
- port: 9300
targetPort: node-sniffer
name: node-finder
---
kind: ConfigMap
apiVersion: v1
metadata:
name: elastic-config
namespace: default
data:
elasticsearch.yml: |
xpack.security.enabled: false
node.master: true
path.data: /var/data
http.port: 9200
I think you are using the ClusterIP service type; if you want to reach it from a browser, one option is to change the service type to NodePort.
You can see more details in the Kubernetes documentation on Services.
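A minimal sketch of what that could look like for the elastic-http Service; the nodePort value is an arbitrary assumption and must fall inside the cluster's NodePort range (30000-32767 by default):

kind: Service
apiVersion: v1
metadata:
  name: elastic-http
  namespace: default
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      targetPort: 9200
      nodePort: 30920
      name: elastic-http

With Minikube you can then print the reachable URL with minikube service elastic-http --url.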
I'm not sure about this part in the Service:
targetPort: elastic-http
targetPort: node-sniffer
Could you try to remove them and try again?
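After changing the Service, it may be worth retrying the in-pod check from the question against the Service DNS name rather than a pod IP (assuming curl is available in the Kibana image; the pod name is a placeholder):

# run from your workstation; replace <kibana-pod-name> with the actual pod name
kubectl exec -it <kibana-pod-name> -- curl -s http://elastic-http.default.svc:9200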