How to port-forward a service? - postgresql

I am using https://github.com/zalando/postgres-operator and I have created a database cluster. The following services have also been created:
databaker-users-db ClusterIP 10.245.227.1 <none> 5432/TCP 52d
databaker-users-db-config ClusterIP None <none> <none> 52d
databaker-users-db-repl ClusterIP 10.245.156.119 <none> 5432/TCP 52d
I would like to forward the service to localhost and I tried as follows:
kubectl port-forward service/databaker-users-db 5432:5432
and it shows me:
error: cannot attach to *v1.Service: invalid service 'databaker-users-db': Service is defined without a selector
The content of the yml file
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  annotations:
    meta.helm.sh/release-name: users
    meta.helm.sh/release-namespace: dev
  labels:
    app.kubernetes.io/managed-by: Helm
    team: databaker
  name: databaker-users-db
  namespace: dev
spec:
  databases:
    databaker_users_db: databaker
  numberOfInstances: 2
  postgresql:
    version: '12'
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: 100m
      memory: 100Mi
  teamId: databaker
  users:
    databaker:
      - superuser
      - createdb
  volume:
    size: 2Gi
What am I doing wrong?

It seems like your k8s service databaker-users-db doesn't have a selector specified.
apiVersion: v1
kind: Service
metadata:
  name: databaker-users-db
spec:
  ports:
  - ...
  - ...
  selector: <-- check here
When a Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where it's running by adding an Endpoints object yourself.
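For illustration, a minimal sketch of that manual mapping (the pod IP below is hypothetical and has to be replaced with the address of your Postgres pod; the Endpoints object must have the same name as the Service):
apiVersion: v1
kind: Service
metadata:
  name: databaker-users-db
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: databaker-users-db
subsets:
  - addresses:
      - ip: 10.244.1.23   # hypothetical IP of the Postgres master pod
    ports:
      - port: 5432
Alternatively, kubectl port-forward can target a pod directly, for example kubectl port-forward pod/databaker-users-db-0 5432:5432 (if that is what your master pod is called), which sidesteps the selector requirement entirely.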

Related

"Must specify limits.cpu" error during pod deployment even though cpu limit is specified

I am trying to run a test pod with OpenShift CLI:
$oc run nginx --image=nginx --limits=cpu=2,memory=4Gi
deploymentconfig.apps.openshift.io/nginx created
$oc describe deploymentconfig.apps.openshift.io/nginx
Name: nginx
Namespace: myproject
Created: 12 seconds ago
Labels: run=nginx
Annotations: <none>
Latest Version: 1
Selector: run=nginx
Replicas: 1
Triggers: Config
Strategy: Rolling
Template:
Pod Template:
Labels: run=nginx
Containers:
nginx:
Image: nginx
Port: <none>
Host Port: <none>
Limits:
cpu: 2
memory: 4Gi
Environment: <none>
Mounts: <none>
Volumes: <none>
Deployment #1 (latest):
Name: nginx-1
Created: 12 seconds ago
Status: New
Replicas: 0 current / 0 desired
Selector: deployment=nginx-1,deploymentconfig=nginx,run=nginx
Labels: openshift.io/deployment-config.name=nginx,run=nginx
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal DeploymentCreated 12s deploymentconfig-controller Created new replication controller "nginx-1" for version 1
Warning FailedCreate 1s (x12 over 12s) deployer-controller Error creating deployer pod: pods "nginx-1-deploy" is forbidden: failed quota: quota-svc-myproject: must specify limits.cpu,limits.memory
I get "must specify limits.cpu,limits.memory" error, despite both limits being present in the same describe output.
What might be the problem and how do I fix it?
I found a solution!
Part of the error message was "Error creating deployer pod". It means that the problem is not with my pod, but with the deployer pod which performs my pod deployment.
It seems the quota in my project affects deployer pods as well.
I couldn't find a way to set deployer pod limits with CLI, so I've made a DeploymentConfig.
kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
name: "test-app"
spec:
template:
metadata:
labels:
name: "test-app"
spec:
containers:
- name: "test-app"
image: "nginxinc/nginx-unprivileged"
resources:
limits:
cpu: "2000m"
memory: "20Gi"
ports:
- containerPort: 8080
protocol: "TCP"
replicas: 1
selector:
name: "test-app"
triggers:
- type: "ConfigChange"
- type: "ImageChange"
imageChangeParams:
automatic: true
containerNames:
- "test-app"
from:
kind: "ImageStreamTag"
name: "nginx-unprivileged:latest"
strategy:
type: "Rolling"
resources:
limits:
cpu: "2000m"
memory: "20Gi"
As you can see, two sets of limits are specified here: one for the container and one for the deployment strategy.
With this configuration it worked fine!
It looks like you have specified a resource quota, and the limits you specified are larger than it allows. Can you describe the resource quota (oc describe quota quota-svc-myproject) and adjust your configs accordingly?
A good reference could be https://docs.openshift.com/container-platform/3.11/dev_guide/compute_resources.html
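For context, a ResourceQuota that produces this kind of error typically looks something like the sketch below; the values here are made up, only the name comes from the error message:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-svc-myproject
  namespace: myproject
spec:
  hard:
    limits.cpu: "4"       # every pod in the namespace, deployer pods included, must declare CPU limits
    limits.memory: 8Gi    # ...and memory limits, and the namespace totals must stay below these values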

I have deployed a Drupal instance but I see that the instance endpoints are not visible although the containers deployed successfully

I have deployed a Drupal instance but I see that the instance endpoints are not visible although the containers deployed successfully.
Container logs don't point in any direction.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drupal
      type: frontend
  template:
    metadata:
      labels:
        app: drupal
    spec:
      containers:
        - name: drupal
          image: drupal
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
**********************
apiVersion: v1
kind: Service
metadata:
  name: drupal-service
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30010
  selector:
    app: drupal
    type: frontend
************************
root@ip-172-31-32-54:~# microk8s.kubectl get pods
NAME READY STATUS RESTARTS AGE
drupal-deployment-6fdd7975f-l4j2z 1/1 Running 0 9h
drupal-deployment-6fdd7975f-p7sfz 1/1 Running 0 9h
root@ip-172-31-32-54:~# microk8s.kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
drupal-service NodePort 10.152.183.6 <none> 80:30010/TCP 9h
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 34h
***********************
root@ip-172-31-32-54:~# microk8s.kubectl describe service drupal-service
Name: drupal-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=drupal,type=frontend
Type: NodePort
IP: 10.152.183.6
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30010/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Any direction is really helpful.
NOTE: This works perfectly when running a container using the command
docker run --name some-drupal -p 8080:80 -d drupal
Thank you,
Anish
Your service selector has two values:
Selector: app=drupal,type=frontend
but your pod has only one of these:
spec:
  template:
    metadata:
      labels:
        app: drupal
Just make sure that all labels required by the service actually exist on the pod.
Like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drupal
      type: frontend
  template:
    metadata:
      labels:
        app: drupal
        type: frontend # <--------- look here
    spec:
      containers:
        - name: drupal
          image: drupal
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
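After applying the corrected labels, one quick way to verify the fix (a suggestion, not part of the original answer) is to check that the service now has endpoints:
microk8s.kubectl get endpoints drupal-service
# With matching labels the ENDPOINTS column should show the pod IPs (something like 10.1.x.x:80) instead of <none>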

Exposing cassandra cluster on minikube to access externally

I'm trying to deploy a Cassandra multinode cluster in minikube. I followed the tutorial Example: Deploying Cassandra with Stateful Sets and made some modifications. The cluster is up and running, and with kubectl I can connect via cqlsh, but I want to connect externally. I tried to expose the service via NodePort and test the connection with DataStax Studio (192.168.99.100:32554), but with no success. Later I also want to connect from Spring Boot; I suppose I have to use the svc name or the node IP.
All host(s) tried for query failed (tried: /192.168.99.100:32554 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100:32554] Cannot connect))
[cassandra-0] /etc/cassandra/cassandra.yaml
rpc_port: 9160
broadcast_rpc_address: 172.17.0.5
listen_address: 172.17.0.5
# listen_interface: eth0
start_rpc: true
rpc_address: 0.0.0.0
# rpc_interface: eth1
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "cassandra-0.cassandra.default.svc.cluster.local"
Here is the minikube output for the svc and pods:
$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra NodePort 10.102.236.158 <none> 9042:32554/TCP 20m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cassandra-0 1/1 Running 0 20m 172.17.0.4 minikube <none> <none>
cassandra-1 1/1 Running 0 19m 172.17.0.5 minikube <none> <none>
cassandra-2 1/1 Running 1 19m 172.17.0.6 minikube <none> <none>
$ kubectl describe service cassandra
Name: cassandra
Namespace: default
Labels: app=cassandra
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"cassandra"},"name":"cassandra","namespace":"default"},"s...
Selector: app=cassandra
Type: NodePort
IP: 10.102.236.158
Port: <unset> 9042/TCP
TargetPort: 9042/TCP
NodePort: <unset> 32554/TCP
Endpoints: 172.17.0.4:9042,172.17.0.5:9042,172.17.0.6:9042
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ kubectl exec -it cassandra-0 -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.17.0.5 104.72 KiB 256 68.1% 680bfcb9-b374-40a6-ba1d-4bf7ee80a57b rack1
UN 172.17.0.4 69.9 KiB 256 66.5% 022009f8-112c-46c9-844b-ef062bac35aa rack1
UN 172.17.0.6 125.31 KiB 256 65.4% 48ae76fe-b37c-45c7-84f9-3e6207da4818 rack1
$ kubectl exec -it cassandra-0 -- cqlsh
Connected to K8Demo at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.4 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh>
cassandra-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  type: NodePort
  ports:
    - port: 9042
  selector:
    app: cassandra
cassandra-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
        - name: cassandra
          image: cassandra:3.11
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          resources:
            limits:
              cpu: "500m"
              memory: 1Gi
            requests:
              cpu: "500m"
              memory: 1Gi
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - nodetool drain
          env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_SEEDS
              value: "cassandra-0.cassandra.default.svc.cluster.local"
            - name: CASSANDRA_CLUSTER_NAME
              value: "K8Demo"
            - name: CASSANDRA_DC
              value: "DC1-K8Demo"
            - name: CASSANDRA_RACK
              value: "Rack1-K8Demo"
            - name: CASSANDRA_START_RPC
              value: "true"
            - name: CASSANDRA_RPC_ADDRESS
              value: "0.0.0.0"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          # These volume mounts are persistent. They are like inline claims,
          # but not exactly because the names need to match exactly one of
          # the stateful pod volumes.
          volumeMounts:
            - name: cassandra-data
              mountPath: /var/lib/cassandra
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other ssd pd
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: fast
        resources:
          requests:
            storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-standard
Just for anyone with this problem:
After reading the DataStax docs I realized that DataStax Studio is meant for use with DataStax Enterprise; for local development and the community edition of Cassandra I'm using DataStax DevCenter, and it works.
For spring boot (Cassandra cluster running on minikube):
spring.data.cassandra.keyspacename=mykeyspacename
spring.data.cassandra.contactpoints=cassandra-0.cassandra.default.svc.cluster.local
spring.data.cassandra.port=9042
spring.data.cassandra.schemaaction=create_if_not_exists
For DataStax DevCenter (Cassandra cluster running on minikube):
ContactHost = 192.168.99.100
NativeProtocolPort: 30042
Updated cassandra-service
# ------------------- Cassandra Service ------------------- #
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  type: NodePort
  ports:
    - port: 9042
      nodePort: 30042
  selector:
    app: cassandra
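As a quick sanity check of the NodePort itself (assuming cqlsh is installed on the host machine; this is not from the original answer), you can point it at the minikube IP and the fixed nodePort:
# 30042 is the nodePort defined above; minikube ip prints the node address (192.168.99.100 in the question)
cqlsh $(minikube ip) 30042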
If you just want to connect with cqlsh, what you need is the following command:
kubectl exec -it cassandra-0 -- cqlsh
On the other hand, if you want to connect from an external point, the following command can be used to get the Cassandra URL (I use DBeaver to connect to the Cassandra cluster):
minikube service cassandra --url

Two kubernetes deployments in the same namespace are not able to communicate

I'm deploying the ELK stack (OSS) to a Kubernetes cluster. The Elasticsearch deployment and service start correctly and the API is reachable. The Kibana deployment starts but can't access Elasticsearch:
From Kibana container logs:
{"type":"log","#timestamp":"2019-05-08T22:49:26Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
Both deployments are in the same namespace "observability". I also tried to reference the elasticsearch container as elasticsearch.observability.svc.cluster.local, but that doesn't work either.
What am I doing wrong? How do I reference the elasticsearch container from the kibana container?
More info:
kubectl --context=19team-observability-admin-context -n observability get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-9d495b84f-j2297 1/1 Running 0 15s
kibana-65bc7f9c4-s9cv4 1/1 Running 0 15s
kubectl --context=19team-observability-admin-context -n observability get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch NodePort 10.104.250.175 <none> 9200:30083/TCP,9300:30059/TCP 1m
kibana NodePort 10.102.124.171 <none> 5601:30124/TCP 1m
I start my containers with command
kubectl --context=19team-observability-admin-context -n observability apply -f .\elasticsearch.yaml -f .\kibana.yaml
elasticsearch.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: observability
spec:
  type: NodePort
  ports:
    - name: "9200"
      port: 9200
      targetPort: 9200
    - name: "9300"
      port: 9300
      targetPort: 9300
  selector:
    app: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
        - name: set-vm-max-map-count
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ['sysctl', '-w', 'vm.max_map_count=262144']
          securityContext:
            privileged: true
          resources:
            requests:
              memory: "512Mi"
              cpu: "1"
            limits:
              memory: "724Mi"
              cpu: "1"
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.1
          ports:
            - containerPort: 9200
            - containerPort: 9300
          resources:
            requests:
              memory: "3Gi"
              cpu: "1"
            limits:
              memory: "3Gi"
              cpu: "1"
kibana.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: observability
spec:
  type: NodePort
  ports:
    - name: "5601"
      port: 5601
      targetPort: 5601
  selector:
    app: observability_platform_kibana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: observability_platform_kibana
  name: kibana
  namespace: observability
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: observability_platform_kibana
    spec:
      containers:
        - env:
            # THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch:9200
            - name: SERVER_NAME
              value: kibana
          image: docker.elastic.co/kibana/kibana-oss:6.7.1
          name: kibana
          ports:
            - containerPort: 5601
          resources:
            requests:
              memory: "512Mi"
              cpu: "1"
            limits:
              memory: "724Mi"
              cpu: "1"
      restartPolicy: Always
UPDATE 1
As gonzalesraul proposed, I've created a second service for Elasticsearch with the ClusterIP type:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
    - port: 9200
      protocol: TCP
      targetPort: 9200
  selector:
    app: elasticsearch
Service is created:
kubectl --context=19team-observability-admin-context -n observability get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch NodePort 10.106.5.94 <none> 9200:31598/TCP,9300:32018/TCP 26s
elasticsearch-local ClusterIP 10.101.178.13 <none> 9200/TCP 26s
kibana NodePort 10.99.73.118 <none> 5601:30004/TCP 26s
And reference elastic as "http://elasticsearch-local:9200"
Unfortunately it does not work; in the kibana container:
{"type":"log","@timestamp":"2019-05-09T10:13:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-local:9200/"}
Do not use a NodePort service; instead use a ClusterIP. If you need to expose your service as a NodePort, create a second service alongside it, for instance:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
    - port: 9200
      protocol: TCP
      targetPort: 9200
  selector:
    app: elasticsearch
Then update the kibana manifest to point to the ClusterIP service:
# ...
# THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
- name: ELASTICSEARCH_HOSTS
  value: http://elasticsearch-local:9200
# ...
NodePort services do not create a 'DNS entry' (e.g. elasticsearch.observability.svc.cluster.local) on Kubernetes.
Edit the server name value in kibana.yaml and set it to kibana:5601.
I think if you don't do this, by default it is trying to go to port 80.
This is what kibana.yaml looks like now:
...
spec:
  containers:
    - env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: SERVER_NAME
          value: kibana:5601
      image: docker.elastic.co/kibana/kibana-oss:6.7.1
      imagePullPolicy: IfNotPresent
      name: kibana
...
And this is the output now:
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:console#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:interpreter#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:metrics#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:tile_map#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:timelion#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-05-09T10:37:17Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
UPDATE
I just tested it on a bare-metal cluster (bootstrapped through kubeadm), and it worked again.
This is the output:
{"type":"log","#timestamp":"2019-05-09T11:09:59Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["info","migrations"],"pid":1,"message":"Creating index .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Pointing alias .kibana to .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Finished in 2417ms."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
Note that it passed from "No Living Connections" to "Running". I am running the nodes on GCP. I had to open the firewalls for it to work. What's your environment?
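For completeness, on GCP opening the NodePort range usually means adding a firewall rule along these lines (the rule name and network here are placeholders; adjust them to your project):
# Allow the default Kubernetes NodePort range (30000-32767) to reach the nodes
gcloud compute firewall-rules create allow-k8s-nodeports \
    --network=default \
    --allow=tcp:30000-32767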

Running MongoDB on Kubernetes Minikube with local persistent storage

I am currently trying to reproduce this tutorial on Minikube:
http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html
I updated the configuration files to use a hostPath as persistent storage on the minikube node.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: myclaim
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
  volumeClaimTemplates:
    - metadata:
        name: myclaim
Which results in the following:
kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv0001 1Gi RWO Retain Available 17s
pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3 1Gi RWO Delete Bound default/myclaim 11s
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
myclaim Bound pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3 1Gi RWO 14s
kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 3d
mongo None <none> 27017/TCP 53s
kubectl get pod
No resources found.
kubectl describe service mongo
Name: mongo
Namespace: default
Labels: name=mongo
Selector: role=mongo
Type: ClusterIP
IP: None
Port: <unset> 27017/TCP
Endpoints: <none>
Session Affinity: None
No events.
kubectl get statefulsets
NAME DESIRED CURRENT AGE
mongo 3 0 4h
kubectl describe statefulsets mongo
Name: mongo
Namespace: default
Image(s): mongo,cvallance/mongo-k8s-sidecar
Selector: environment=test,role=mongo
Labels: environment=test,role=mongo
Replicas: 0 current / 3 desired
Annotations: <none>
CreationTimestamp: Thu, 30 Mar 2017 18:23:56 +0200
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------------- -------
1s 1s 4 {statefulset } WarningFailedCreate pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
1s 1s 4 {statefulset } WarningFailedCreate pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
1s 0s 4 {statefulset } WarningFailedCreate pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
kubectl get ev | grep mongo
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
kubectl describe pvc myclaim
Name: myclaim
Namespace: default
StorageClass: standard
Status: Bound
Volume: pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3
Labels: <none>
Capacity: 1Gi
Access Modes: RWO
No events.
minikube version: v0.17.1
It seems that the service is not able to find any pods, which makes it complicated to debug with kubectl logs.
Is there something wrong with the way I am creating a persistent volume on my node?
Thanks a lot
TL;DR
In the situation described in the question, the problem was that the Pods for the StatefulSet did not start up at all, therefore the Service had no targets. The reason for not starting up was:
WarningFailedCreate pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
And since the volume is by default defined as required, the Pod won't start without it. So edit the StatefulSet's volumeClaimTemplates to have:
volumeClaimTemplates:
  - metadata:
      name: myclaim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
(There is no need to create the PersistentVolumeClaim manually.)
More general solution
If you can't connect to a Service, try this command:
kubectl describe service myservicename
And if you see something like this in the output:
Endpoints: <none>
That means there are no targets (usually Pods) running or the targets are not ready. To find out which one is the case do:
kubectl describe endpoint myservicename
It will list all endpoints, ready or not. If not ready, investigate the readinessProbe in the Pod. If it doesn't exist, then try to find out why by looking at the StatefulSet (Deployment, ReplicaSet, ReplicationController, etc.) itself for messages (the Events section):
kubectl describe statefulset mystatefulsetname
This information is available if you do:
kubectl get ev | grep something
If you are sure they are running and ready then the labels on the Pods and the Service do not match up.
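A quick way to compare the two (a generic sketch; substitute your own resource names) is to print the Service selector next to the labels actually present on the Pods:
# Show the selector the Service is using
kubectl describe service myservicename | grep Selector
# Show the labels carried by the Pods
kubectl get pods --show-labels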