I'm testing the new OpenShift platform, based on Docker and Kubernetes.
I created a new project from scratch; when I try to deploy a simple MongoDB service (and likewise a Python app), I get the following errors in the Monitoring section of the web console:
Unable to mount volumes for pod "mongodb-1-sfg8t_rob1(e9e53040-ab59-11e6-a64c-0e3d364e19a5)": timeout expired waiting for volumes to attach/mount for pod "mongodb-1-sfg8t"/"rob1". list of unattached/unmounted volumes=[mongodb-data]
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "mongodb-1-sfg8t"/"rob1". list of unattached/unmounted volumes=[mongodb-data]
It seems to be a problem mounting the PVC in the container; however, the PVC is correctly created and bound:
oc get pvc
Returns:
NAME           STATUS    VOLUME         CAPACITY   ACCESSMODES   AGE
mongodb-data   Bound     pv-aws-9dged   1Gi        RWO           29m
I've deployed it with the following commands:
oc process -f openshift/templates/mongodb.json | oc create -f -
oc deploy mongodb --latest
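To follow the rollout from the CLI, something like this can be used (a sketch, assuming the DeploymentConfig is named mongodb):
oc get pods -w
oc logs -f dc/mongodb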
The complete log from the web console is not reproduced here.
The content of the template that I used is:
{
"kind": "Template",
"apiVersion": "v1",
"metadata": {
"name": "mongo-example",
"annotations": {
"openshift.io/display-name": "Mongo example",
"tags": "quickstart,mongo"
}
},
"labels": {
"template": "mongo-example"
},
"message": "The following service(s) have been created in your project: ${NAME}.",
"objects": [
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_DATA_VOLUME}"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "${DB_VOLUME_CAPACITY}"
}
}
}
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"annotations": {
"description": "Exposes the database server"
}
},
"spec": {
"ports": [
{
"name": "mongodb",
"port": 27017,
"targetPort": 27017
}
],
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
}
}
},
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"annotations": {
"description": "Defines how to deploy the database"
}
},
"spec": {
"strategy": {
"type": "Recreate"
},
"triggers": [
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"mymongodb"
],
"from": {
"kind": "ImageStreamTag",
"namespace": "",
"name": "mongo:latest"
}
}
},
{
"type": "ConfigChange"
}
],
"replicas": 1,
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
},
"template": {
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"labels": {
"name": "${DATABASE_SERVICE_NAME}"
}
},
"spec": {
"volumes": [
{
"name": "${DATABASE_DATA_VOLUME}",
"persistentVolumeClaim": {
"claimName": "${DATABASE_DATA_VOLUME}"
}
}
],
"containers": [
{
"name": "mymongodb",
"image": "mongo:latest",
"ports": [
{
"containerPort": 27017
}
],
"env": [
{
"name": "MONGODB_USER",
"value": "${DATABASE_USER}"
},
{
"name": "MONGODB_PASSWORD",
"value": "${DATABASE_PASSWORD}"
},
{
"name": "MONGODB_DATABASE",
"value": "${DATABASE_NAME}"
}
],
"volumeMounts": [
{
"name": "${DATABASE_DATA_VOLUME}",
"mountPath": "/data/db"
}
],
"readinessProbe": {
"timeoutSeconds": 1,
"initialDelaySeconds": 5,
"exec": {
"command": [ "/bin/bash", "-c", "mongo --eval 'db.getName()'"]
}
},
"livenessProbe": {
"timeoutSeconds": 1,
"initialDelaySeconds": 30,
"tcpSocket": {
"port": 27017
}
},
"resources": {
"limits": {
"memory": "${MEMORY_MONGODB_LIMIT}"
}
}
}
]
}
}
}
}
],
"parameters": [
{
"name": "NAME",
"displayName": "Name",
"description": "The name",
"required": true,
"value": "mongo-example"
},
{
"name": "MEMORY_MONGODB_LIMIT",
"displayName": "Memory Limit (MONGODB)",
"required": true,
"description": "Maximum amount of memory the MONGODB container can use.",
"value": "512Mi"
},
{
"name": "DB_VOLUME_CAPACITY",
"displayName": "Volume Capacity",
"description": "Volume space available for data, e.g. 512Mi, 2Gi",
"value": "512Mi",
"required": true
},
{
"name": "DATABASE_DATA_VOLUME",
"displayName": "Volumne name for DB data",
"required": true,
"value": "mongodb-data"
},
{
"name": "DATABASE_SERVICE_NAME",
"displayName": "Database Service Name",
"required": true,
"value": "mongodb"
},
{
"name": "DATABASE_NAME",
"displayName": "Database Name",
"required": true,
"value": "test1"
},
{
"name": "DATABASE_USER",
"displayName": "Database Username",
"required": false
},
{
"name": "DATABASE_PASSWORD",
"displayName": "Database User Password",
"required": false
}
]
}
Is there any issue with my template? Is it an OpenShift issue? Where and how can I get further details about the mount problem in the OpenShift logs?
So, I think you're coming up against two different issues.
Your template is set up to pull the Mongo image from Docker Hub (specified by the blank "namespace" value). When trying to pull the mongo:latest image from Docker Hub in the web UI, you are greeted by a friendly message notifying you that the Docker image is not usable because it runs as root.
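On a cluster where you have admin rights (not OpenShift Online), one possible workaround is to let the project's service account run images as root via the anyuid SCC; a sketch, assuming the default service account in project rob1:
oc adm policy add-scc-to-user anyuid -z default -n rob1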
OpenShift Online Dev Preview has been having some PVC-related issues recently (http://status.preview.openshift.com/), specifically this reported bug at the moment: https://bugzilla.redhat.com/show_bug.cgi?id=1392650. This may be the cause of some of your issues, as the "official" Mongo image on OpenShift is also failing to build.
I would like to direct you to an OpenShift MongoDB template. It's not the exact one used in the Developer Preview, but it should hopefully provide some good direction going forward: https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_examples/files/examples/v1.4/db-templates/mongodb-persistent-template.json
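As for getting further details about the mount problem, the pod's event stream is usually the most informative place to look; a sketch, using the pod name from the error above:
oc describe pod mongodb-1-sfg8t
oc get events
On the node running the pod, the node service logs (e.g. via journalctl) typically contain the detailed attach/mount errors.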
Related
I can obtain my service by running
$ kubectl get service <service-name> --namespace <namespace name>
NAME           TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service name   LoadBalancer   *********    *********     port numbers   16h
Here is my service running on Kubernetes, but I can't access it through the public IP. My service and deployment files are below. I am using Azure DevOps to build and release the container image to Azure Container Registry. As you can see above in the service output, I got an external IP and a cluster IP, but when I try this IP in a browser or with curl, I get no response.
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "service-name",
"namespace": "namespace-name",
"selfLink": "*******************",
"uid": "*******************",
"resourceVersion": "1686278",
"creationTimestamp": "2019-07-15T14:12:11Z",
"labels": {
"run": "service name"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": ****,
"nodePort": ****
}
],
"selector": {
"run": "profile-management-service"
},
"clusterIP": "**********",
"type": "LoadBalancer",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "*************"
}
]
}
}
}
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "deployment-name",
"namespace": "namespace-name",
"selfLink": "*************************",
"uid": "****************************",
"resourceVersion": "1686172",
"generation": 1,
"creationTimestamp": "2019-07-15T14:12:04Z",
"labels": {
"run": "deployment-name"
},
"annotations": {
"deployment.kubernetes.io/revision": "1"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"run": "deployment-name"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"run": "deployment-name"
}
},
"spec": {
"containers": [
{
"name": "deployment-name",
"image": "dev/containername:50",
"ports": [
{
"containerPort": ****,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": 1,
"maxSurge": 1
}
},
"revisionHistoryLimit": 2147483647,
"progressDeadlineSeconds": 2147483647
},
"status": {
"observedGeneration": 1,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2019-07-15T14:12:04Z",
"lastTransitionTime": "2019-07-15T14:12:04Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
Apparently there's a mismatch between the Service's selector and the Deployment's labels:
The Service selector is
"selector": {
"run": "profile-management-service"
while the Deployment label is
"labels": {
"run": "deployment-name"
},
Also check the targetPort value of the Service; it should match the containerPort of your Deployment.
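A quick way to confirm whether the selector matches any pods is to check the Service's endpoints; an empty list means the selector and labels disagree (a sketch, using the names from the files above):
kubectl get endpoints service-name --namespace namespace-name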
You should also add a readinessProbe and a livenessProbe to your Deployment, and after that check your firewall to confirm all rules are correct.
Here is some more info about liveness and readiness probes.
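As a starting point, here is a minimal sketch of probes for the container in your Deployment, assuming the app answers HTTP on its containerPort (8080 here is a placeholder for your masked port):
"readinessProbe": {
    "httpGet": { "path": "/", "port": 8080 },
    "initialDelaySeconds": 5,
    "timeoutSeconds": 1
},
"livenessProbe": {
    "tcpSocket": { "port": 8080 },
    "initialDelaySeconds": 30,
    "timeoutSeconds": 1
}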
I'm running into an issue when I attempt to run the 'Azure Resource Group Deploy' release task to create/update a resource group and the resources within it via an ARM Template. In particular, I need to have the Virtual Machine created by the ARM template accessible via WinRM; This needs to be done so that I can copy files (specifically a ZIP file containing the results of a build) to the VM in a later step.
Currently, I have the 'Template' portion of this task set up as follows: https://i.imgur.com/mvZDIMK.jpg (I can't post images since I don't have reputation here yet...)
Unless I've misunderstood (which is definitely possible), the "Configure with WinRM" option should allow the release step to create a WinRM Listener on any Virtual Machines created by this step.
I currently have the following resources in the ARM Template:
{
"type": "Microsoft.Storage/storageAccounts",
"sku": {
"name": "Standard_LRS",
"tier": "Standard"
},
"kind": "Storage",
"name": "[variables('StorageAccountName')]",
"apiVersion": "2018-02-01",
"location": "[parameters('LocationPrimary')]",
"scale": null,
"tags": {},
"properties": {
"networkAcls": {
"bypass": "AzureServices",
"virtualNetworkRules": [],
"ipRules": [],
"defaultAction": "Allow"
},
"supportsHttpsTrafficOnly": false,
"encryption": {
"services": {
"file": {
"enabled": true
},
"blob": {
"enabled": true
}
},
"keySource": "Microsoft.Storage"
}
},
"dependsOn": []
},
{
"name": "[variables('NetworkInterfaceName')]",
"type": "Microsoft.Network/networkInterfaces",
"apiVersion": "2018-04-01",
"location": "[parameters('LocationPrimary')]",
"dependsOn": [
"[concat('Microsoft.Network/networkSecurityGroups/', variables('NetworkSecurityGroupName'))]",
"[concat('Microsoft.Network/virtualNetworks/', variables('VNetName'))]",
"[concat('Microsoft.Network/publicIpAddresses/', variables('PublicIPAddressName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"subnet": {
"id": "[variables('subnetRef')]"
},
"privateIPAllocationMethod": "Dynamic",
"publicIpAddress": {
"id": "[resourceId(resourceGroup().name, 'Microsoft.Network/publicIpAddresses', variables('PublicIPAddressName'))]"
}
}
}
],
"networkSecurityGroup": {
"id": "[variables('nsgId')]"
}
},
"tags": {}
},
{
"name": "[variables('NetworkSecurityGroupName')]",
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2018-08-01",
"location": "[parameters('LocationPrimary')]",
"properties": {
"securityRules": [
{
"name": "RDP",
"properties": {
"priority": 300,
"protocol": "TCP",
"access": "Allow",
"direction": "Inbound",
"sourceAddressPrefix": "*",
"sourcePortRange": "*",
"destinationAddressPrefix": "*",
"destinationPortRange": "3389"
}
}
]
},
"tags": {}
},
{
"name": "[variables('VNetName')]",
"type": "Microsoft.Network/virtualNetworks",
"apiVersion": "2018-08-01",
"location": "[parameters('LocationPrimary')]",
"properties": {
"addressSpace": {
"addressPrefixes": [ "10.0.0.0/24" ]
},
"subnets": [
{
"name": "default",
"properties": {
"addressPrefix": "10.0.0.0/24"
}
}
]
},
"tags": {}
},
{
"name": "[variables('PublicIPAddressName')]",
"type": "Microsoft.Network/publicIpAddresses",
"apiVersion": "2018-08-01",
"location": "[parameters('LocationPrimary')]",
"properties": {
"publicIpAllocationMethod": "Dynamic"
},
"sku": {
"name": "Basic"
},
"tags": {}
},
{
"name": "[variables('VMName')]",
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2018-06-01",
"location": "[parameters('LocationPrimary')]",
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', variables('NetworkInterfaceName'))]",
"[concat('Microsoft.Storage/storageAccounts/', variables('StorageAccountName'))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "Standard_A7"
},
"storageProfile": {
"osDisk": {
"createOption": "fromImage",
"managedDisk": {
"storageAccountType": "Standard_LRS"
}
},
"imageReference": {
"publisher": "MicrosoftWindowsDesktop",
"offer": "Windows-10",
"sku": "rs4-pro",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', variables('NetworkInterfaceName'))]"
}
]
},
"osProfile": {
"computerName": "[variables('VMName')]",
"adminUsername": "[parameters('AdminUsername')]",
"adminPassword": "[parameters('AdminPassword')]",
"windowsConfiguration": {
"enableAutomaticUpdates": true,
"provisionVmAgent": true
}
},
"licenseType": "Windows_Client",
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[concat('https://', variables('StorageAccountName'), '.blob.core.windows.net/')]"
}
}
},
"tags": {}
}
This ARM Template currently works if I do not attempt to configure the VM to have the WinRM Listener.
When I attempt to run the release, I get the following error message:
Error number: -2144108526 0x80338012
The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".
In all honesty, my problem is likely a lack of understanding, as this is my first time working with VM setup in any real capacity. Any insight and advice would be greatly appreciated.
You just need to add this to the "windowsConfiguration":
"winRM": {
"listeners": [
{
"protocol": "http"
},
{
"protocol": "https",
"certificateUrl": "<URL for the certificate you got in Step 4>"
}
]
}
You also need to provision certificates for the HTTPS listener.
reference: https://learn.microsoft.com/en-us/rest/api/compute/virtualmachines/createorupdate#winrmconfiguration
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/winrm
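For context, a sketch of how the snippet fits into the VM resource from the question, with just the HTTP listener (no certificate required); the HTTPS listener and certificate provisioning from the links above still apply for production use:
"osProfile": {
    "computerName": "[variables('VMName')]",
    "adminUsername": "[parameters('AdminUsername')]",
    "adminPassword": "[parameters('AdminPassword')]",
    "windowsConfiguration": {
        "enableAutomaticUpdates": true,
        "provisionVmAgent": true,
        "winRM": {
            "listeners": [
                { "protocol": "http" }
            ]
        }
    }
}
Also make sure the network security group allows inbound traffic on the WinRM ports (5985 for HTTP, 5986 for HTTPS); the NSG in the template above only opens 3389 for RDP.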
After a Service Fabric Mesh service has been deployed, how does one find the external-facing IP address? Things tried so far:
Looking at the properties and settings of the service in the Azure portal
Running the command az mesh app list - this shows a valid response but the IP Address is missing
Running the command az mesh app show - this shows a valid response but the IP Address is missing
Running the command az mesh service list - this shows a valid response but the IP Address is missing
Running the command az mesh service show - this shows a valid response but the IP Address is missing
Update 2018-12-10
The new ApiVersion (2018-09-01-preview) has been released, and the new way of exposing services is by using the Gateway resource. More information can be found in this GitHub thread, and a sample has already been added to the original answer.
Original Answer
What you are looking for is the network public IP address:
az mesh network show --resource-group myResourceGroup --name myAppNetwork
Public Network
When you deploy an application, you place it in a network resource; this network provides access to your application.
Here is an example of a network defined in the deployment template:
{
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"type": "string",
"metadata": {
"description": "Location of the resources."
}
}
},
"resources": [
{
"apiVersion": "2018-07-01-preview",
"name": "helloWorldNetwork",
"type": "Microsoft.ServiceFabricMesh/networks",
"location": "[parameters('location')]",
"dependsOn": [],
"properties": {
"addressPrefix": "10.0.0.4/22",
"ingressConfig": {
"layer4": [
{
"name": "helloWorldIngress",
"publicPort": "80",
"applicationName": "helloWorldApp",
"serviceName": "helloWorldService",
"endpointName": "helloWorldListener"
}
]
}
}
},
{
"apiVersion": "2018-07-01-preview",
"name": "helloWorldApp",
"type": "Microsoft.ServiceFabricMesh/applications",
"location": "[parameters('location')]",
"dependsOn": [
"Microsoft.ServiceFabricMesh/networks/helloWorldNetwork"
],
"properties": {
"description": "Service Fabric Mesh HelloWorld Application!",
"services": [
{
"type": "Microsoft.ServiceFabricMesh/services",
"location": "[parameters('location')]",
"name": "helloWorldService",
"properties": {
"description": "Service Fabric Mesh Hello World Service.",
"osType": "linux",
"codePackages": [
{
"name": "helloWorldCode",
"image": "seabreeze/azure-mesh-helloworld:1.1-alpine",
"endpoints": [
{
"name": "helloWorldListener",
"port": "80"
}
],
"resources": {
"requests": {
"cpu": "1",
"memoryInGB": "1"
}
}
},
{
"name": "helloWorldSideCar",
"image": "seabreeze/azure-mesh-helloworld-sidecar:1.0-alpine",
"resources": {
"requests": {
"cpu": "1",
"memoryInGB": "1"
}
}
}
],
"replicaCount": "1",
"networkRefs": [
{
"name": "[resourceId('Microsoft.ServiceFabricMesh/networks', 'helloWorldNetwork')]"
}
]
}
}
]
}
}
]
}
source
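To deploy a template like the one above from the CLI, something like this can be used (a sketch; the resource group, location, and file name are assumptions, and the az mesh CLI extension must be installed):
az mesh deployment create --resource-group myResourceGroup --template-file network.json --parameters "{'location': {'value': 'eastus'}}"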
Gateway (preview)
There are plans to provide a gateway that bridges external access to an internal network; it would work like an Ingress in Kubernetes. It is still in preview, and the solution would look something like this:
{
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location of the resources (e.g. westus, eastus, westeurope)."
}
},
"fileShareName": {
"type": "string",
"metadata": {
"description": "Name of the Azure Files file share that provides the volume for the container."
}
},
"storageAccountName": {
"type": "string",
"metadata": {
"description": "Name of the Azure storage account that contains the file share."
}
},
"storageAccountKey": {
"type": "securestring",
"metadata": {
"description": "Access key for the Azure storage account that contains the file share."
}
},
"stateFolderName": {
"type": "string",
"defaultValue": "CounterService",
"metadata": {
"description": "Folder in which to store the state. Provide a empty value to create a unique folder for each container to store the state. A non-empty value will retain the state across deployments, however if more than one applications are using the same folder, the counter may update more frequently."
}
}
},
"resources": [
{
"apiVersion": "2018-09-01-preview",
"name": "counterAzureFileShareAccountKey",
"type": "Microsoft.ServiceFabricMesh/secrets",
"location": "[parameters('location')]",
"dependsOn": [],
"properties": {
"kind": "inlinedValue",
"contentType": "text/plain",
"description": "Access key for the Azure storage account that contains the file share."
}
},
{
"apiVersion": "2018-09-01-preview",
"name": "counterAzureFileShareAccountKey/v1",
"type": "Microsoft.ServiceFabricMesh/secrets/values",
"location": "[parameters('location')]",
"dependsOn": [
"Microsoft.ServiceFabricMesh/secrets/counterAzureFileShareAccountKey"
],
"properties": {
"value": "[parameters('storageAccountKey')]"
}
},
{
"apiVersion": "2018-09-01-preview",
"name": "counterVolume",
"type": "Microsoft.ServiceFabricMesh/volumes",
"location": "[parameters('location')]",
"dependsOn": [
"Microsoft.ServiceFabricMesh/secrets/counterAzureFileShareAccountKey/values/v1"
],
"properties": {
"description": "Azure Files storage volume for counter App.",
"provider": "SFAzureFile",
"azureFileParameters": {
"shareName": "[parameters('fileShareName')]",
"accountName": "[parameters('storageAccountName')]",
"accountKey": "[resourceId('Microsoft.ServiceFabricMesh/secrets/values','counterAzureFileShareAccountKey','v1')]"
}
}
},
{
"apiVersion": "2018-09-01-preview",
"name": "counterNetwork",
"type": "Microsoft.ServiceFabricMesh/networks",
"location": "[parameters('location')]",
"dependsOn": [],
"properties": {
"kind": "Local",
"description": "Azure Service Fabric Mesh Counter Application network.",
"networkAddressPrefix": "10.0.0.0/24"
}
},
{
"apiVersion": "2018-09-01-preview",
"name": "counterApp",
"type": "Microsoft.ServiceFabricMesh/applications",
"location": "[parameters('location')]",
"dependsOn": [
"Microsoft.ServiceFabricMesh/networks/counterNetwork",
"Microsoft.ServiceFabricMesh/volumes/counterVolume"
],
"properties": {
"description": "Azure Service Fabric Mesh Counter Application.",
"services": [
{
"name": "counterService",
"properties": {
"description": "A web service that serves the counter value stored in the Azure Files volume.",
"osType": "linux",
"codePackages": [
{
"name": "counterCode",
"image": "seabreeze/azure-mesh-counter:0.1-alpine",
"volumeRefs": [
{
"name": "[resourceId('Microsoft.ServiceFabricMesh/volumes', 'counterVolume')]",
"destinationPath": "/app/data"
}
],
"endpoints": [
{
"name": "counterServiceListener",
"port": 80
}
],
"environmentVariables": [
{
"name": "STATE_FOLDER_NAME",
"value": "[parameters('stateFolderName')]"
}
],
"resources": {
"requests": {
"cpu": 0.5,
"memoryInGB": 0.5
}
}
}
],
"replicaCount": 1,
"networkRefs": [
{
"name": "[resourceId('Microsoft.ServiceFabricMesh/networks', 'counterNetwork')]",
"endpointRefs": [
{
"name": "counterServiceListener"
}
]
}
]
}
}
]
}
},
{
"apiVersion": "2018-09-01-preview",
"name": "counterGateway",
"type": "Microsoft.ServiceFabricMesh/gateways",
"location": "[parameters('location')]",
"dependsOn": [
"Microsoft.ServiceFabricMesh/networks/counterNetwork"
],
"properties": {
"description": "Service Fabric Mesh Gateway for counter sample.",
"sourceNetwork": {
"name": "Open"
},
"destinationNetwork": {
"name": "[resourceId('Microsoft.ServiceFabricMesh/networks', 'counterNetwork')]"
},
"tcp": [
{
"name": "web",
"port": 80,
"destination": {
"applicationName": "counterApp",
"serviceName": "counterService",
"endpointName": "counterServiceListener"
}
}
]
}
}
],
"outputs": {
"publicIPAddress": {
"value": "[reference('counterGateway').ipAddress]",
"type": "string"
}
}
}
source
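Once the gateway is deployed, the publicIPAddress output above surfaces its ipAddress property; it should also be retrievable from the CLI along these lines (a sketch, assuming the az mesh extension's gateway group):
az mesh gateway show --resource-group myResourceGroup --name counterGateway --query ipAddress --output tsv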
I've run into trouble with the latest version of Kubernetes (1.5.1). I have a rather unusual setup composed of five Red Hat Enterprise Linux servers: 3 are nodes and 2 are masters. Both masters are in an etcd cluster, and flannel has also been added, on bare metal.
I see this log message repeating in the kube-dns container:
Failed to list *api.Endpoints: Get https://*.*.*.33:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
I have run a large number of tests on the certificates. curl works perfectly with the same credentials. The certificates were generated following the official Kubernetes recommendations.
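One check worth doing (a sketch; the token secret name is a placeholder): verify that the service account secret mounted into the kube-dns pod actually contains a ca.crt entry, since that is typically where the pod reads its root CA from:
kubectl get secrets --namespace kube-system
kubectl get secret default-token-xxxxx --namespace kube-system -o jsonpath='{.data.ca\.crt}' | base64 -d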
These are my configuration files (with IPs and hostnames redacted where needed).
kube-apiserver.yml
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-apiserver",
"namespace": "kube-system",
"labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
},
"spec": {
"volumes": [
{
"name": "certs",
"hostPath": {
"path": "/etc/ssl/certs"
}
},
{
"name": "pki",
"hostPath": {
"path": "/etc/kubernetes"
}
}
],
"containers": [
{
"name": "kube-apiserver",
"image": "gcr.io/google_containers/kube-apiserver-amd64:v1.5.1",
"command": [
"/usr/local/bin/kube-apiserver",
"--v=0",
"--insecure-bind-address=127.0.0.1",
"--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
"--service-cluster-ip-range=100.64.0.0/12",
"--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--client-ca-file=/etc/kubernetes/pki/ca.pem",
"--tls-cert-file=/etc/kubernetes/pki/apiserver.pem",
"--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--secure-port=5443",
"--allow-privileged",
"--advertise-address=X.X.X.33",
"--etcd-servers=http://X.X.X.33:2379,http://X.X.X.37:2379",
"--kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP"
],
"resources": {
"requests": {
"cpu": "250m"
}
},
"volumeMounts": [
{
"name": "certs",
"mountPath": "/etc/ssl/certs"
},
{
"name": "pki",
"readOnly": true,
"mountPath": "/etc/kubernetes/"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 8080,
"host": "127.0.0.1"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
],
"hostNetwork": true
}
}
kube-controller-manager.yml
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-controller-manager",
"namespace": "kube-system",
"labels": {
"component": "kube-controller-manager",
"tier": "control-plane"
}
},
"spec": {
"volumes": [
{
"name": "pki",
"hostPath": {
"path": "/etc/kubernetes"
}
}
],
"containers": [
{
"name": "kube-controller-manager",
"image": "gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1",
"command": [
"/usr/local/bin/kube-controller-manager",
"--v=0",
"--address=127.0.0.1",
"--leader-elect=true",
"--master=https://X.X.X.33",
"--cluster-name= kubernetes",
"--kubeconfig=/etc/kubernetes/kubeadminconfig",
"--root-ca-file=/etc/kubernetes/pki/ca.pem",
"--service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem",
"--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem"
],
"resources": {
"requests": {
"cpu": "200m"
}
},
"volumeMounts": [
{
"name": "pki",
"readOnly": true,
"mountPath": "/etc/kubernetes/"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 10252,
"host": "127.0.0.1"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
],
"hostNetwork": true
}
}
kube-scheduler.yml
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-scheduler",
"namespace": "kube-system",
"labels": {
"component": "kube-scheduler",
"tier": "control-plane"
}
},
"spec": {
"volumes": [
{
"name": "pki",
"hostPath": {
"path": "/etc/kubernetes"
}
}
],
"containers": [
{
"name": "kube-scheduler",
"image": "gcr.io/google_containers/kube-scheduler-amd64:v1.5.1",
"command": [
"/usr/local/bin/kube-scheduler",
"--v=0",
"--address=127.0.0.1",
"--leader-elect=true",
"--kubeconfig=/etc/kubernetes/kubeadminconfig",
"--master=https://X.X.X.33"
],
"resources": {
"requests": {
"cpu": "100m"
}
},
"volumeMounts": [
{
"name": "pki",
"readOnly": true,
"mountPath": "/etc/kubernetes/"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 10251,
"host": "127.0.0.1"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
],
"hostNetwork": true
}
}
haproxy.yml
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "haproxy",
"namespace": "kube-system",
"labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
},
"spec": {
"volumes": [
{
"name": "vol",
"hostPath": {
"path": "/etc/haproxy/haproxy.cfg"
}
}
],
"containers": [
{
"name": "haproxy",
"image": "docker.io/haproxy:1.7",
"resources": {
"requests": {
"cpu": "250m"
}
},
"volumeMounts": [
{
"name": "vol",
"readOnly": true,
"mountPath": "/usr/local/etc/haproxy/haproxy.cfg"
}
]
}
],
"hostNetwork": true
}
}
kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBELET_ADDRESS \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS \
$KUBE_LOGTOSTDERR \
$KUBE_ALLOW_PRIV \
$KUBELET_NETWORK_ARGS \
$KUBELET_DNS_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
kubelet
KUBELET_ADDRESS="--address=0.0.0.0 --port=10250"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeadminconfig --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests"
KUBE_LOGTOSTDERR="--logtostderr=true --v=9"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBELET_DNS_ARGS="--cluster-dns=100.64.0.10 --cluster-domain=cluster.local"
kubeadminconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.pem
    server: https://X.X.X.33
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: admin#kubernetes
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet#kubernetes
current-context: admin#kubernetes
kind: Config
users:
- name: admin
  user:
    client-certificate: /etc/kubernetes/pki/admin.pem
    client-key: /etc/kubernetes/pki/admin-key.pem
I have already seen most of the questions on the internet that come anywhere close to this one, so I hope someone will have a hint for debugging it.
I've been trying to run a GlusterFS cluster on my Kubernetes cluster using these:
glusterfs-service.json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "glusterfs-cluster"
},
"spec": {
"type": "NodePort",
"selector": {
"name": "gluster"
},
"ports": [
{
"port": 1
}
]
}
}
and glusterfs-server.json:
{
"apiVersion": "extensions/v1beta1",
"kind": "DaemonSet",
"metadata": {
"labels": {
"name": "gluster"
},
"name": "gluster"
},
"spec": {
"selector": {
"matchLabels": {
"name": "gluster"
}
},
"template": {
"metadata": {
"labels": {
"name": "gluster"
}
},
"spec": {
"containers": [
{
"name": "gluster",
"image": "gluster/gluster-centos",
"livenessProbe": {
"exec": {
"command": [
"/bin/bash",
"-c",
"systemctl status glusterd.service"
]
}
},
"readinessProbe": {
"exec": {
"command": [
"/bin/bash",
"-c",
"systemctl status glusterd.service"
]
}
},
"securityContext": {
"privileged": true
},
"volumeMounts": [
{
"mountPath": "/mnt/brick1",
"name": "gluster-brick"
},
{
"mountPath": "/etc/gluster",
"name": "gluster-etc"
},
{
"mountPath": "/var/log/gluster",
"name": "gluster-logs"
},
{
"mountPath": "/var/lib/glusterd",
"name": "gluster-config"
},
{
"mountPath": "/dev",
"name": "gluster-dev"
},
{
"mountPath": "/sys/fs/cgroup",
"name": "gluster-cgroup"
}
]
}
],
"dnsPolicy": "ClusterFirst",
"hostNetwork": true,
"volumes": [
{
"hostPath": {
"path": "/mnt/brick1"
},
"name": "gluster-brick"
},
{
"hostPath": {
"path": "/etc/gluster"
},
"name": "gluster-etc"
},
{
"hostPath": {
"path": "/var/log/gluster"
},
"name": "gluster-logs"
},
{
"hostPath": {
"path": "/var/lib/glusterd"
},
"name": "gluster-config"
},
{
"hostPath": {
"path": "/dev"
},
"name": "gluster-dev"
},
{
"hostPath": {
"path": "/sys/fs/cgroup"
},
"name": "gluster-cgroup"
}
]
}
}
}
}
Then in my pod definition, I have:
"volumes": [
{
"name": "< volume name >",
"glusterfs": {
"endpoints": "glusterfs-cluster.default.svc.cluster.local",
"path": "< gluster path >",
"readOnly": false
}
}
]
But the pod creation is timing out because it can't mount the volume.
It also looks like only one of the glusterfs pods is running.
Here are my logs:
http://imgur.com/a/j2I8r
I then tried to run my pod in the same namespace where my gluster cluster is running, and I'm now getting this error:
Operation for "\"kubernetes.io/glusterfs/01a0834e-64ab-11e6-af52-42010a840072-ssl-certificates\" (\"01a0834e-64ab-11e6-af52-42010a840072\")" failed.
No retries permitted until 2016-08-17 18:51:20.61133778 +0000 UTC (durationBeforeRetry 2m0s).
Error: MountVolume.SetUp failed for volume "kubernetes.io/glusterfs/01a0834e-64ab-11e6-af52-42010a840072-ssl-certificates" (spec.Name: "ssl-certificates") pod "01a0834e-64ab-11e6-af52-42010a840072" (UID: "01a0834e-64ab-11e6-af52-42010a840072") with: glusterfs: mount failed:
mount failed: exit status 1
Mounting arguments:
10.132.0.7:ssl_certificates /var/lib/kubelet/pods/01a0834e-64ab-11e6-af52-42010a840072/volumes/kubernetes.io~glusterfs/ssl-certificates
glusterfs [log-level=ERROR log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/ssl-certificates/caddy-server-1648321103-epvdi-glusterfs.log]
Output: Mount failed. Please check the log file for more details. the following error information was pulled from the glusterfs log to help diagnose this issue:
[2016-08-17 18:49:20.583585] E [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:ssl_certificates)
[2016-08-17 18:49:20.610531] E [glusterfsd-mgmt.c:1494:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
The logs clearly say what's going on:
failed to get endpoints glusterfs-cluster [endpoints "glusterfs-cluster" not found]
because:
"ports": [
{
"port": 1
}
is bogus in a couple of ways. First, a port of "1" is very suspicious. Second, it has no matching containerPort: on the DaemonSet side to which Kubernetes could point that Service -- thus, it will not create Endpoints for the (podIP, protocol, port) tuple. Because glusterfs (reasonably) wants to contact the underlying Pods directly, without going through the Service, it is unable to discover the Pods, and everything comes to an abrupt halt.
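A minimal sketch of one way to line these up, assuming glusterd listens on its default port 24007 (an assumption; verify against the gluster/gluster-centos image): give the Service a real port, and give the DaemonSet's container a matching containerPort so that Endpoints get created:
"ports": [
    {
        "port": 24007,
        "targetPort": 24007
    }
]
and, in the DaemonSet's container spec:
"ports": [
    {
        "containerPort": 24007
    }
]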