I've been trying to run a GlusterFS cluster on my Kubernetes cluster using these files:
glusterfs-service.json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "glusterfs-cluster"
},
"spec": {
"type": "NodePort",
"selector": {
"name": "gluster"
},
"ports": [
{
"port": 1
}
]
}
}
and glusterfs-server.json:
{
"apiVersion": "extensions/v1beta1",
"kind": "DaemonSet",
"metadata": {
"labels": {
"name": "gluster"
},
"name": "gluster"
},
"spec": {
"selector": {
"matchLabels": {
"name": "gluster"
}
},
"template": {
"metadata": {
"labels": {
"name": "gluster"
}
},
"spec": {
"containers": [
{
"name": "gluster",
"image": "gluster/gluster-centos",
"livenessProbe": {
"exec": {
"command": [
"/bin/bash",
"-c",
"systemctl status glusterd.service"
]
}
},
"readinessProbe": {
"exec": {
"command": [
"/bin/bash",
"-c",
"systemctl status glusterd.service"
]
}
},
"securityContext": {
"privileged": true
},
"volumeMounts": [
{
"mountPath": "/mnt/brick1",
"name": "gluster-brick"
},
{
"mountPath": "/etc/gluster",
"name": "gluster-etc"
},
{
"mountPath": "/var/log/gluster",
"name": "gluster-logs"
},
{
"mountPath": "/var/lib/glusterd",
"name": "gluster-config"
},
{
"mountPath": "/dev",
"name": "gluster-dev"
},
{
"mountPath": "/sys/fs/cgroup",
"name": "gluster-cgroup"
}
]
}
],
"dnsPolicy": "ClusterFirst",
"hostNetwork": true,
"volumes": [
{
"hostPath": {
"path": "/mnt/brick1"
},
"name": "gluster-brick"
},
{
"hostPath": {
"path": "/etc/gluster"
},
"name": "gluster-etc"
},
{
"hostPath": {
"path": "/var/log/gluster"
},
"name": "gluster-logs"
},
{
"hostPath": {
"path": "/var/lib/glusterd"
},
"name": "gluster-config"
},
{
"hostPath": {
"path": "/dev"
},
"name": "gluster-dev"
},
{
"hostPath": {
"path": "/sys/fs/cgroup"
},
"name": "gluster-cgroup"
}
]
}
}
}
}
Then in my pod definition, I'm doing:
"volumes": [
{
"name": "< volume name >",
"glusterfs": {
"endpoints": "glusterfs-cluster.default.svc.cluster.local",
"path": "< gluster path >",
"readOnly": false
}
}
]
But the pod creation is timing out because it can't mount the volume.
It also looks like only one of the GlusterFS pods is running.
Here are my logs:
http://imgur.com/a/j2I8r
I then tried to run my pod in the same namespace as my Gluster cluster, and I'm now getting this error:
Operation for "\"kubernetes.io/glusterfs/01a0834e-64ab-11e6-af52-42010a840072-ssl-certificates\" (\"01a0834e-64ab-11e6-af52-42010a840072\")" failed.
No retries permitted until 2016-08-17 18:51:20.61133778 +0000 UTC (durationBeforeRetry 2m0s).
Error: MountVolume.SetUp failed for volume "kubernetes.io/glusterfs/01a0834e-64ab-11e6-af52-42010a840072-ssl-certificates" (spec.Name: "ssl-certificates") pod "01a0834e-64ab-11e6-af52-42010a840072" (UID: "01a0834e-64ab-11e6-af52-42010a840072") with: glusterfs: mount failed:
mount failed: exit status 1
Mounting arguments:
10.132.0.7:ssl_certificates /var/lib/kubelet/pods/01a0834e-64ab-11e6-af52-42010a840072/volumes/kubernetes.io~glusterfs/ssl-certificates
glusterfs [log-level=ERROR log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/ssl-certificates/caddy-server-1648321103-epvdi-glusterfs.log]
Output: Mount failed. Please check the log file for more details. the following error information was pulled from the glusterfs log to help diagnose this issue:
[2016-08-17 18:49:20.583585] E [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:ssl_certificates)
[2016-08-17 18:49:20.610531] E [glusterfsd-mgmt.c:1494:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
The logs clearly say what's going on:
failed to get endpoints glusterfs-cluster [endpoints "glusterfs-cluster" not found]
because:
"ports": [
{
"port": 1
}
is bogus in a couple of ways. First, a port of "1" is very suspicious. Second, there is no matching containerPort: on the DaemonSet side to which Kubernetes could point that Service -- thus, it will not create Endpoints for the (podIP, protocol, port) tuple. Because glusterfs (reasonably) wants to contact the underlying Pods directly, without going through the Service, it is unable to discover the Pods and everything comes to an abrupt halt.
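For comparison, a minimal sketch of a Service that would actually get Endpoints, assuming the pods should be reached on glusterd's default management port 24007 (that port number is my assumption -- use whatever your containers really listen on):
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "glusterfs-cluster"
},
"spec": {
"selector": {
"name": "gluster"
},
"ports": [
{
"port": 24007,
"targetPort": 24007
}
]
}
}
together with the matching declaration on the DaemonSet's container, so the (podIP, protocol, port) tuple exists on both sides:
"ports": [
{
"name": "glusterd",
"containerPort": 24007
}
]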
Related
Today, when I created a PVC and PV in my Kubernetes v1.15.x cluster, the Kubernetes dashboard showed the PVC in a Lost state. This is the error message on the PVC:
this claim is in lost state.
This is my PVC definition:
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "zhuolian-report-mysql-pv-claim",
"namespace": "dabai-uat",
"selfLink": "/api/v1/namespaces/dabai-uat/persistentvolumeclaims/zhuolian-report-mysql-pv-claim",
"uid": "3ca3425b-b2dc-4bd7-876f-05f8cbcafcf8",
"resourceVersion": "106652242",
"creationTimestamp": "2021-09-26T02:58:32Z",
"annotations": {
"pv.kubernetes.io/bind-completed": "yes"
},
"finalizers": [
"kubernetes.io/pvc-protection"
]
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "8Gi"
}
},
"volumeName": "nfs-zhuolian-report-mysql-pv",
"volumeMode": "Filesystem"
},
"status": {
"phase": "Lost"
}
}
And this is my PV definition in the same namespace:
{
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "nfs-zhuolian-report-mysql-pv",
"selfLink": "/api/v1/persistentvolumes/nfs-zhuolian-report-mysql-pv",
"uid": "86291e89-8360-4d48-bae7-62c3c642e945",
"resourceVersion": "106652532",
"creationTimestamp": "2021-09-26T03:01:02Z",
"labels": {
"alicloud-pvname": "zhuolian-report-data"
},
"finalizers": [
"kubernetes.io/pv-protection"
]
},
"spec": {
"capacity": {
"storage": "8Gi"
},
"nfs": {
"server": "balabala.cn-hongkong.nas.balabala.com",
"path": "/docker/mysql_zhuolian_report_data"
},
"accessModes": [
"ReadWriteOnce"
],
"claimRef": {
"kind": "PersistentVolumeClaim",
"namespace": "dabai-uat",
"name": "zhuolian-report-mysql-pv-claim"
},
"persistentVolumeReclaimPolicy": "Retain",
"mountOptions": [
"vers=4.0",
"noresvport"
],
"volumeMode": "Filesystem"
},
"status": {
"phase": "Available"
}
}
What should I do to fix this problem? How can I avoid problems like this? What may cause it?
Deleting the PVC's annotation will make the PVC rebind:
"annotations": {
"pv.kubernetes.io/bind-completed": "yes"
},
I copied the PVC from another PVC and forgot to remove the annotation.
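If you prefer not to edit the object by hand, removing the annotation with kubectl should have the same effect; the trailing dash tells kubectl annotate to delete the key, and the claim name and namespace are the ones from the PVC above:
$ kubectl annotate pvc zhuolian-report-mysql-pv-claim pv.kubernetes.io/bind-completed- -n dabai-uat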
I can obtain my service by running
$ kubectl get service <service-name> --namespace <namespace name>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service name LoadBalancer ********* ********* port numbers 16h
Here is my service running in Kubernetes, but I can't access it through the public IP. Below are my service and deployment files. I am using Azure DevOps to build and release the container image to Azure Container Registry. As you can see in the service description above, I get an external IP and a cluster IP, but when I try that IP in a browser or with curl, I get no response.
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "service-name",
"namespace": "namespace-name",
"selfLink": "*******************",
"uid": "*******************",
"resourceVersion": "1686278",
"creationTimestamp": "2019-07-15T14:12:11Z",
"labels": {
"run": "service name"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": ****,
"nodePort": ****
}
],
"selector": {
"run": "profile-management-service"
},
"clusterIP": "**********",
"type": "LoadBalancer",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "*************"
}
]
}
}
}
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "deployment-name",
"namespace": "namespace-name",
"selfLink": "*************************",
"uid": "****************************",
"resourceVersion": "1686172",
"generation": 1,
"creationTimestamp": "2019-07-15T14:12:04Z",
"labels": {
"run": "deployment-name"
},
"annotations": {
"deployment.kubernetes.io/revision": "1"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"run": "deployment-name"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"run": "deployment-name"
}
},
"spec": {
"containers": [
{
"name": "deployment-name",
"image": "dev/containername:50",
"ports": [
{
"containerPort": ****,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": 1,
"maxSurge": 1
}
},
"revisionHistoryLimit": 2147483647,
"progressDeadlineSeconds": 2147483647
},
"status": {
"observedGeneration": 1,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2019-07-15T14:12:04Z",
"lastTransitionTime": "2019-07-15T14:12:04Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
Apparently there's a mismatch in label and selector:
Service selector
"selector": {
"run": "profile-management-service"
While the Deployment label is
"labels": {
"run": "deployment-name"
},
Also check the targetPort value of the Service; it should match the containerPort of your Deployment.
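For example, assuming you keep the Deployment's pod label as shown, the Service spec would need something along these lines (8080 is only a placeholder for your real containerPort, which is redacted as **** above):
"spec": {
"selector": {
"run": "deployment-name"
},
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 8080
}
]
}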
You need to add a readinessProbe and a livenessProbe to your Deployment, and after that check your firewall to make sure all the rules are correct.
Here you have some more info about liveness and readiness probes.
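As a rough sketch, with /healthz and 8080 as placeholders for whatever health endpoint and port your container actually exposes, the probes could be added to the container spec of the Deployment like this:
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 8080
},
"initialDelaySeconds": 15,
"periodSeconds": 20
},
"readinessProbe": {
"httpGet": {
"path": "/healthz",
"port": 8080
},
"initialDelaySeconds": 5,
"periodSeconds": 10
}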
I've run into a problem with the latest version of Kubernetes (1.5.1). I have a somewhat unusual setup composed of 5 Red Hat Enterprise Linux servers: 3 are nodes and 2 are masters. Both masters are part of an etcd cluster, and flannel has also been added on bare metal.
I have this looping log in the kube-dns container:
Failed to list *api.Endpoints: Get https://*.*.*.33:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
I have run a large number of tests concerning the certificate. curl works perfectly with the same credentials. The certificates were generated following the official Kubernetes recommendations.
These are my various configuration files (with the IPs and hostnames redacted where needed).
kube-apiserver.yml
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-apiserver",
"namespace": "kube-system",
"labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
},
"spec": {
"volumes": [
{
"name": "certs",
"hostPath": {
"path": "/etc/ssl/certs"
}
},
{
"name": "pki",
"hostPath": {
"path": "/etc/kubernetes"
}
}
],
"containers": [
{
"name": "kube-apiserver",
"image": "gcr.io/google_containers/kube-apiserver-amd64:v1.5.1",
"command": [
"/usr/local/bin/kube-apiserver",
"--v=0",
"--insecure-bind-address=127.0.0.1",
"--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
"--service-cluster-ip-range=100.64.0.0/12",
"--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--client-ca-file=/etc/kubernetes/pki/ca.pem",
"--tls-cert-file=/etc/kubernetes/pki/apiserver.pem",
"--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--secure-port=5443",
"--allow-privileged",
"--advertise-address=X.X.X.33",
"--etcd-servers=http://X.X.X.33:2379,http://X.X.X.37:2379",
"--kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP"
],
"resources": {
"requests": {
"cpu": "250m"
}
},
"volumeMounts": [
{
"name": "certs",
"mountPath": "/etc/ssl/certs"
},
{
"name": "pki",
"readOnly": true,
"mountPath": "/etc/kubernetes/"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 8080,
"host": "127.0.0.1"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
],
"hostNetwork": true
}
}
kube-controller-manager.yml
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-controller-manager",
"namespace": "kube-system",
"labels": {
"component": "kube-controller-manager",
"tier": "control-plane"
}
},
"spec": {
"volumes": [
{
"name": "pki",
"hostPath": {
"path": "/etc/kubernetes"
}
}
],
"containers": [
{
"name": "kube-controller-manager",
"image": "gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1",
"command": [
"/usr/local/bin/kube-controller-manager",
"--v=0",
"--address=127.0.0.1",
"--leader-elect=true",
"--master=https://X.X.X.33",
"--cluster-name= kubernetes",
"--kubeconfig=/etc/kubernetes/kubeadminconfig",
"--root-ca-file=/etc/kubernetes/pki/ca.pem",
"--service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem",
"--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem"
],
"resources": {
"requests": {
"cpu": "200m"
}
},
"volumeMounts": [
{
"name": "pki",
"readOnly": true,
"mountPath": "/etc/kubernetes/"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 10252,
"host": "127.0.0.1"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
],
"hostNetwork": true
}
}
kube-scheduler.yml
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-scheduler",
"namespace": "kube-system",
"labels": {
"component": "kube-scheduler",
"tier": "control-plane"
}
},
"spec": {
"volumes": [
{
"name": "pki",
"hostPath": {
"path": "/etc/kubernetes"
}
}
],
"containers": [
{
"name": "kube-scheduler",
"image": "gcr.io/google_containers/kube-scheduler-amd64:v1.5.1",
"command": [
"/usr/local/bin/kube-scheduler",
"--v=0",
"--address=127.0.0.1",
"--leader-elect=true",
"--kubeconfig=/etc/kubernetes/kubeadminconfig",
"--master=https://X.X.X.33"
],
"resources": {
"requests": {
"cpu": "100m"
}
},
"volumeMounts": [
{
"name": "pki",
"readOnly": true,
"mountPath": "/etc/kubernetes/"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 10251,
"host": "127.0.0.1"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
],
"hostNetwork": true
}
}
haproxy.yml
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "haproxy",
"namespace": "kube-system",
"labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
},
"spec": {
"volumes": [
{
"name": "vol",
"hostPath": {
"path": "/etc/haproxy/haproxy.cfg"
}
}
],
"containers": [
{
"name": "haproxy",
"image": "docker.io/haproxy:1.7",
"resources": {
"requests": {
"cpu": "250m"
}
},
"volumeMounts": [
{
"name": "vol",
"readOnly": true,
"mountPath": "/usr/local/etc/haproxy/haproxy.cfg"
}
]
}
],
"hostNetwork": true
}
}
kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBELET_ADDRESS \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS \
$KUBE_LOGTOSTDERR \
$KUBE_ALLOW_PRIV \
$KUBELET_NETWORK_ARGS \
$KUBELET_DNS_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
kubelet
KUBELET_ADDRESS="--address=0.0.0.0 --port=10250"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeadminconfig --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests"
KUBE_LOGTOSTDERR="--logtostderr=true --v=9"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBELET_DNS_ARGS="--cluster-dns=100.64.0.10 --cluster-domain=cluster.local"
kubeadminconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority: /etc/kubernetes/pki/ca.pem
server: https://X.X.X.33
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: admin
name: admin#kubernetes
- context:
cluster: kubernetes
user: kubelet
name: kubelet#kubernetes
current-context: admin#kubernetes
kind: Config
users:
- name: admin
user:
client-certificate: /etc/kubernetes/pki/admin.pem
client-key: /etc/kubernetes/pki/admin-key.pem
I have already seen most of the questions on the internet that are even remotely related to this one, so I hope someone will have a hint for debugging this.
I'm testing the new Openshift platform based on Docker and Kubernetes.
I've created a new project from scratch, and when I try to deploy a simple MongoDB service (along with a Python app), I get the following errors in the Monitoring section of the Web console:
Unable to mount volumes for pod "mongodb-1-sfg8t_rob1(e9e53040-ab59-11e6-a64c-0e3d364e19a5)": timeout expired waiting for volumes to attach/mount for pod "mongodb-1-sfg8t"/"rob1". list of unattached/unmounted volumes=[mongodb-data]
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "mongodb-1-sfg8t"/"rob1". list of unattached/unmounted volumes=[mongodb-data]
It seems to be a problem mounting the PVC in the container; however, the PVC is correctly created and bound:
oc get pvc
Returns:
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
mongodb-data Bound pv-aws-9dged 1Gi RWO 29m
I've deployed it with the following commands:
oc process -f openshift/templates/mongodb.json | oc create -f -
oc deploy mongodb --latest
The complete log from the Web console:
The content of the template that I used is:
{
"kind": "Template",
"apiVersion": "v1",
"metadata": {
"name": "mongo-example",
"annotations": {
"openshift.io/display-name": "Mongo example",
"tags": "quickstart,mongo"
}
},
"labels": {
"template": "mongo-example"
},
"message": "The following service(s) have been created in your project: ${NAME}.",
"objects": [
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_DATA_VOLUME}"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "${DB_VOLUME_CAPACITY}"
}
}
}
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"annotations": {
"description": "Exposes the database server"
}
},
"spec": {
"ports": [
{
"name": "mongodb",
"port": 27017,
"targetPort": 27017
}
],
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
}
}
},
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"annotations": {
"description": "Defines how to deploy the database"
}
},
"spec": {
"strategy": {
"type": "Recreate"
},
"triggers": [
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"mymongodb"
],
"from": {
"kind": "ImageStreamTag",
"namespace": "",
"name": "mongo:latest"
}
}
},
{
"type": "ConfigChange"
}
],
"replicas": 1,
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
},
"template": {
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"labels": {
"name": "${DATABASE_SERVICE_NAME}"
}
},
"spec": {
"volumes": [
{
"name": "${DATABASE_DATA_VOLUME}",
"persistentVolumeClaim": {
"claimName": "${DATABASE_DATA_VOLUME}"
}
}
],
"containers": [
{
"name": "mymongodb",
"image": "mongo:latest",
"ports": [
{
"containerPort": 27017
}
],
"env": [
{
"name": "MONGODB_USER",
"value": "${DATABASE_USER}"
},
{
"name": "MONGODB_PASSWORD",
"value": "${DATABASE_PASSWORD}"
},
{
"name": "MONGODB_DATABASE",
"value": "${DATABASE_NAME}"
}
],
"volumeMounts": [
{
"name": "${DATABASE_DATA_VOLUME}",
"mountPath": "/data/db"
}
],
"readinessProbe": {
"timeoutSeconds": 1,
"initialDelaySeconds": 5,
"exec": {
"command": [ "/bin/bash", "-c", "mongo --eval 'db.getName()'"]
}
},
"livenessProbe": {
"timeoutSeconds": 1,
"initialDelaySeconds": 30,
"tcpSocket": {
"port": 27017
}
},
"resources": {
"limits": {
"memory": "${MEMORY_MONGODB_LIMIT}"
}
}
}
]
}
}
}
}
],
"parameters": [
{
"name": "NAME",
"displayName": "Name",
"description": "The name",
"required": true,
"value": "mongo-example"
},
{
"name": "MEMORY_MONGODB_LIMIT",
"displayName": "Memory Limit (MONGODB)",
"required": true,
"description": "Maximum amount of memory the MONGODB container can use.",
"value": "512Mi"
},
{
"name": "DB_VOLUME_CAPACITY",
"displayName": "Volume Capacity",
"description": "Volume space available for data, e.g. 512Mi, 2Gi",
"value": "512Mi",
"required": true
},
{
"name": "DATABASE_DATA_VOLUME",
"displayName": "Volumne name for DB data",
"required": true,
"value": "mongodb-data"
},
{
"name": "DATABASE_SERVICE_NAME",
"displayName": "Database Service Name",
"required": true,
"value": "mongodb"
},
{
"name": "DATABASE_NAME",
"displayName": "Database Name",
"required": true,
"value": "test1"
},
{
"name": "DATABASE_USER",
"displayName": "Database Username",
"required": false
},
{
"name": "DATABASE_PASSWORD",
"displayName": "Database User Password",
"required": false
}
]
}
Is there any issue with my template? Is it an OpenShift issue? Where and how can I get further details about the mount problem in the OpenShift logs?
So, I think you're coming up against 2 different issues.
Your template is set up to pull the Mongo image from Docker Hub (specified by the blank "namespace" value). When trying to pull the mongo:latest image from Docker Hub in the Web UI, you are greeted by a friendly message notifying you that the Docker image is not usable because it runs as root.
OpenShift Online Dev Preview has been having some issues related to PVCs recently (http://status.preview.openshift.com/), specifically this reported bug at the moment: https://bugzilla.redhat.com/show_bug.cgi?id=1392650. This may be a cause of some of the issues, as the "official" Mongo image on OpenShift is also failing to build.
I would like to direct you to an OpenShift MongoDB template; it is not the exact one used in the Developer Preview, but it should hopefully provide some good direction going forward: https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_examples/files/examples/v1.4/db-templates/mongodb-persistent-template.json
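If you want to give that template a try, the workflow would mirror the commands you already used; roughly something like this, after downloading the template json from the link above (the parameters and their defaults live in the template itself, so check them against the version you download):
$ oc process -f mongodb-persistent-template.json | oc create -f -
$ oc get pvc,pods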
I am getting this error when I try to access the HTTP service exposed on the public IP.
RajRajen:lb4btest rajrajen$ curl http://104.154.84.143:8000
curl: (56) Recv failure: Connection reset by peer
RajRajen:lb4btest rajrajen$ telnet 104.154.84.143 8000
Trying 104.154.84.143...
Connected to 143.84.154.104.bc.googleusercontent.com.
Escape character is '^]'.
Connection closed by foreign host.
RajRajen:lb4btest rajrajen$
< Please note the above IP is just a representation; once I redeploy, the IP may change, but the problem does not. >
The controller and service are created as they are in my json files:
RajRajen:lb4btest rajrajen$ kubectl create -f middleware-service.json
services/lb4b-api-v9
And the RC (ReplicationController) json file:
{
"kind": "ReplicationController",
"apiVersion": "v1",
"metadata": {
"name": "lb4b-api-v9",
"labels": {
"app": "lb4bapi",
"tier": "middleware"
}
},
"spec": {
"replicas": 1,
"selector": {
"app": "lb4bapi",
"tier": "middleware"
},
"template": {
"metadata": {
"labels": {
"app": "lb4bapi",
"tier": "middleware"
}
},
"spec": {
"containers": [
{
"name": "lb4bapicontainer",
"image": "gcr.io/helloworldnodejs-1119/myproject",
"resources": {
"requests": {
"cpu": "500m",
"memory": "128Mi"
}
},
"env": [
{
"name": "GET_HOSTS_FROM",
"value": "dns"
},
{
"name": "PORT",
"value": "8000"
}
],
"ports": [
{
"name": "middleware",
"containerPort": 8000,
"hostPort": 8000
}
]
}
]
}
}
}
}
And here is the Service json file:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "lb4b-api-v9",
"labels": {
"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"
}
},
"spec": {
"type": "LoadBalancer",
"selector": {
"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"
},
"ports": [
{
"protocol": "TCP",
"port": 8000
}
]
}
}
The Docker container runs a Node application as a non-root user, as per the pm2 requirement:
ENTRYPOINT ["pm2"]
CMD ["start", "app.js", "--no-daemon"]
I am perfectly able to curl the pod's local IP (curl http://podIP:podPort) from inside the pod's Docker container as well as from the node.
But I am unable to curl http://serviceLocalIP:8000 from the node.
Can you please give some suggestions to make this work?
Thanks in advance.
This issue is resolved after following the troubleshooting comment about the Service endpoints, especially keeping the Service selector values the same as the pod's label values.
https://cloud.google.com/container-engine/docs/debugging/
Search for "My service is missing endpoints".
Solution in Controller.json
{
"kind": "ReplicationController",
"apiVersion": "v1",
"metadata": {
"name": "lb4b-api-v9",
"labels": {
"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"
}
},
"spec": {
"replicas": 1,
"selector": {
**"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"**
},
"template": {
"metadata": {
"labels": {
"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"
}
},
"spec": {
"containers": [
{
"name": "lb4b-api-v9",
"image": "gcr.io/myprojectid/myproect",
"resources": {
"requests": {
"cpu": "500m",
"memory": "128Mi"
}
},
"env": [
{
"name": "GET_HOSTS_FROM",
"value": "dns"
},
{
"name": "PORT",
"value": "8000"
}
],
"ports": [
{
"name": "middleware",
"containerPort": 8000,
"hostPort": 8000
}
]
}
]
}
}
}
}
And in Service.json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "lb4b-api-v9",
"labels": {
"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"
}
},
"spec": {
"type": "LoadBalancer",
"selector": {
**"name": "lb4b-api-v9",
"app": "lb4bmiddleware",
"tier": "middleware"**
},
"ports": [
{
"protocol": "TCP",
"port": 8000
}
]
}
}
That's all.
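For anyone else hitting this, a quick sanity check after fixing the selectors is to confirm that the Service now has endpoints; if the ENDPOINTS column comes back empty, the selector still does not match the pod labels:
$ kubectl get endpoints lb4b-api-v9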