When I try to access the message table through an API I get an error that the table doesn't exist, but it exists in the DB - LoopBack

LOOPBACK:
When I try to access the message table through an API, I get an error that the table doesn't exist, but it does exist in the DB.
I am using a MySQL DB.
Error: ER_NO_SUCH_TABLE: Table 'careuchoose.message' doesn't exist
at Query.Sequence._packetToError (/root/careuchoose/node_modules/mysql/lib/protocol/sequences/Sequence.js:47:14)
at Query.ErrorPacket (/root/careuchoose/node_modules/mysql/lib/protocol/sequences/Query.js:77:18)
at Protocol._parsePacket (/root/careuchoose/node_modules/mysql/lib/protocol/Protocol.js:291:23)
at Parser._parsePacket (/root/careuchoose/node_modules/mysql/lib/protocol/Parser.js:433:10)
at Parser.write (/root/careuchoose/node_modules/mysql/lib/protocol/Parser.js:43:10)
at Protocol.write (/root/careuchoose/node_modules/mysql/lib/protocol/Protocol.js:38:16)
at Socket.<anonymous> (/root/careuchoose/node_modules/mysql/lib/Connection.js:91:28)
at Socket.<anonymous> (/root/careuchoose/node_modules/mysql/lib/Connection.js:525:10)
at Socket.emit (events.js:189:13)
at Socket.EventEmitter.emit (domain.js:441:20)
at addChunk (_stream_readable.js:284:12)
at readableAddChunk (_stream_readable.js:265:11)
at Socket.Readable.push (_stream_readable.js:220:10)
at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)
--------------------
at Protocol._enqueue (/root/careuchoose/node_modules/mysql/lib/protocol/Protocol.js:144:48)
at Connection.query (/root/careuchoose/node_modules/mysql/lib/Connection.js:201:25)
at Socket.<anonymous> (/root/careuchoose/server/server.js:124:17)
at Socket.emit (events.js:189:13)
at Socket.EventEmitter.emit (domain.js:441:20)
at /root/careuchoose/node_modules/socket.io/lib/socket.js:528:12
at process._tickCallback (internal/process/next_tick.js:61:11)
[nodemon] app crashed - waiting for file changes before starting...
"name": "message",
"base": "PersistedModel",
"idInjection": true,
"options": {
"validateUpsert": true
},
"properties": {
"sender": {
"type": "string"
},
"sender_name": {
"type": "string"
},
"recipient": {
"type": "string"
},
"recipient_name": {
"type": "string"
},
"room_id": {
"type": "string"
},
"job_id": {
"type": "string"
},
"body": {
"type": "string",
"length": 10000
},
"seen": {
"type": "boolean",
"default": 0
},
"time": {
"type": "string"
},
"first_chat": {
"type": "string"
},
"sender_profile": {
"type": "string"
},
"type": {
"type": "string"
}
},
"validations": [],
"relations": {},
"acls": [
{
"accessType": "*",
"principalType": "ROLE",
"principalId": "$owner",
"permission": "ALLOW"
},
{
"accessType": "*",
"principalType": "ROLE",
"principalId": "$authenticated",
"permission": "ALLOW"
},
{
"accessType": "*",
"principalType": "ROLE",
"principalId": "admin",
"permission": "ALLOW"
}
],
"methods": {}
}
datasources.json
{
"db": {
"host":"localhost",
"port": 3306,
"url": "",
"database": "***",
"password": "*****",
"name": "db",
"user": "root",
"connector": "mysql",
"insecureAuth": "true"
},
"admin_db": {
"host": "localhost",
"port": 3306,
"url": "",
"database": "*****",
"password": "*****",
"name": "admin_db",
"user": "root",
"connector": "mysql",
"insecureAuth": "true"
}
}
It works locally; when I try this in the prod environment, it shows these errors.
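If the table genuinely does not exist in the prod schema, note that LoopBack does not create it on its own; running automigrate (or autoupdate) once against the prod datasource is the usual way to create it. A minimal sketch, assuming the datasource name "db" from the config above and a hypothetical script placed next to server/server.js; automigrate drops and recreates the table, while autoupdate alters it in place:
// create-message-table.js - hypothetical one-off script, run with: node server/create-message-table.js
var app = require('./server');      // boots the LoopBack app and its datasources
var ds = app.dataSources.db;        // the "db" datasource defined above

// Create (or recreate) only the table backing the "message" model.
ds.automigrate('message', function (err) {
  if (err) throw err;
  console.log('message table created');
  ds.disconnect();
});
Something else worth checking, since it works locally but not in prod: MySQL table names are case sensitive on Linux by default, so a table created as "Message" would not match "careuchoose.message" there.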

Related

How to map identity fields?

How do I set up a FieldValueMap for identityRef fields, AssignedTo for example? I am migrating from DevOps Services to DevOps Server 2020 Update 1.1.
// does not work
{
"$type": "FieldValueMapConfig",
"WorkItemTypeName": "*",
"sourceField": "System.AssignedTo.id",
"targetField": "System.AssignedTo.id",
"valueMapping": {
}
}
// migration.exe crashes
{
"$type": "FieldValueMapConfig",
"WorkItemTypeName": "*",
"sourceField": "System.AssignedTo",
"targetField": "System.AssignedTo",
"valueMapping": {
{
"displayName": "displayName",
"url": "url",
"_links": {
"avatar": {
"href": "href"
}
},
"id": "id",
"uniqueName": "uniqueName",
"imageUrl": "imageUrl",
"descriptor": "descriptor"
}: {
"displayName": "mapped_displayName",
"url": "mapped_url",
"_links": {
"avatar": {
"href": "mapped_href"
}
},
"id": "mapped_id",
"uniqueName": "mapped_uniqueName",
"imageUrl": "mapped_imageUrl",
"descriptor": "mapped_descriptor"
}
}
}
Can anybody help?
There is the same issue on GitHub.
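One thing stands out: JSON object keys must be plain strings, so the object-used-as-key in the second block above cannot even be parsed, which on its own would explain the migration.exe crash. Below is a hedged sketch of the usual string-to-string shape of a valueMapping; the identity strings are placeholders, and whether FieldValueMapConfig resolves identities mapped this way on the target server is not confirmed here:
{
"$type": "FieldValueMapConfig",
"WorkItemTypeName": "*",
"sourceField": "System.AssignedTo",
"targetField": "System.AssignedTo",
"valueMapping": {
"Old User <old.user@source.example>": "New User <new.user@target.example>"
}
}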

Notification throttling has bad behaviour

I have deployed an empty Orion and connected a device that sends updates every 10 seconds. I need to receive one notification every 10 minutes, so I have created a subscription with a throttling of 600 s.
The problem is that I am receiving two notifications in the same minute, or even a couple of seconds after the first one.
1) One instance of Orion, version 2.2.0
2) Orion subscriptions
[
{
"id": "5e9ec90d9e2ba22996168d78",
"description": "Notificaciones a Arcgis",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [
{
"id": "id1",
"type": "aqss"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 521026,
"lastNotification": "2020-05-21T10:52:53.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://test.eu/notify/"
},
"metadata": [
"dateCreated",
"dateModified"
],
"lastFailure": "2020-05-20T16:46:58.00Z",
"lastFailureReason": "Timeout was reached",
"lastSuccess": "2020-05-21T10:52:53.00Z",
"lastSuccessCode": 500
}
},
{
"id": "5e9ec96f9e2ba22996168d79",
"description": "Notificaciones a Arcgis",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [
{
"id": "id2",
"type": "aqss"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 526757,
"lastNotification": "2020-05-21T10:52:50.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://test.eu/notify/"
},
"metadata": [
"dateCreated",
"dateModified"
],
"lastFailure": "2020-05-21T09:32:34.00Z",
"lastFailureReason": "Timeout was reached",
"lastSuccess": "2020-05-21T10:52:50.00Z",
"lastSuccessCode": 500
}
},
{
"id": "5e9ec9899e2ba22996168d7a",
"description": "Notificaciones a Arcgis",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [
{
"id": "id3",
"type": "aqss"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 541814,
"lastNotification": "2020-05-21T10:52:47.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://test.eu/notify/"
},
"metadata": [
"dateCreated",
"dateModified"
],
"lastFailure": "2020-05-21T01:49:52.00Z",
"lastFailureReason": "Timeout was reached",
"lastSuccess": "2020-05-21T10:52:48.00Z",
"lastSuccessCode": 500
}
},
{
"id": "5e9ec9a69e2ba22996168d7b",
"description": "Notificaciones a Arcgis",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [
{
"id": "id3",
"type": "aqss"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 470859,
"lastNotification": "2020-05-21T10:52:47.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://test.eu/notify/"
},
"metadata": [
"dateCreated",
"dateModified"
],
"lastFailure": "2020-05-20T21:01:32.00Z",
"lastFailureReason": "Timeout was reached",
"lastSuccess": "2020-05-21T10:52:47.00Z",
"lastSuccessCode": 500
}
},
{
"id": "5e9ec9c09e2ba22996168d7c",
"description": "Notificaciones a Arcgis",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [
{
"id": "id4",
"type": "aqss"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 532901,
"lastNotification": "2020-05-21T10:52:44.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://test.eu/notify/"
},
"metadata": [
"dateCreated",
"dateModified"
],
"lastFailure": "2020-05-21T08:33:10.00Z",
"lastFailureReason": "Timeout was reached",
"lastSuccess": "2020-05-21T10:52:45.00Z",
"lastSuccessCode": 500
}
},
{
"id": "5e9ec9ff9e2ba22996168d7d",
"description": "Notificaciones a Arcgis",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [
{
"id": "id5",
"type": "aqss"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 520974,
"lastNotification": "2020-05-21T10:52:53.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://test.eu/notify/"
},
"metadata": [
"dateCreated",
"dateModified"
],
"lastFailure": "2020-05-21T05:30:58.00Z",
"lastFailureReason": "Timeout was reached",
"lastSuccess": "2020-05-21T10:52:53.00Z",
"lastSuccessCode": 500
}
}
]
3) Orion is running in a Docker container
[root@271386b095c6 /]# ps ax
PID TTY STAT TIME COMMAND
1 ? Ssl 320:12 /usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -dbhost mongo -corsOrigin __ALL -logLevel DEBUG
9248 pts/0 Ss 0:00 bash
9293 pts/0 R+ 0:00 ps ax
4) Pending...
5) Entities and subscriptions belong to the same service and service-path
Thanks
Looking at the info above, it seems you have a lot of subscriptions: six of them, if I have counted correctly.
I am not sure whether all of them cover the same entities/attributes, but my recommendation would be to simplify your scenario, leaving only one subscription, and test again.
You can delete a subscription using DELETE /v2/subscriptions/{subId}.
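As a sketch of that suggestion (the host, subscription id and entity id are taken from or modeled on the output above; add Fiware-Service / Fiware-ServicePath headers if your deployment is multi-tenant), you could remove the extra subscriptions and re-create a single one that sets throttling explicitly:
# delete one of the existing subscriptions
curl -X DELETE 'http://localhost:1026/v2/subscriptions/5e9ec90d9e2ba22996168d78'

# create a single subscription with a 600 s throttling
curl -X POST 'http://localhost:1026/v2/subscriptions' \
  -H 'Content-Type: application/json' \
  -d '{
    "description": "Notificaciones a Arcgis",
    "subject": { "entities": [ { "id": "id1", "type": "aqss" } ] },
    "notification": { "http": { "url": "http://test.eu/notify/" } },
    "throttling": 600
  }'
Note also that none of the subscriptions in the listing above include a throttling field, which would be consistent with throttling not actually being set on them.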

Error: Error response from daemon: invalid volume specification - Windows 8.1 Docker Toolbox

Error: Error response from daemon: invalid volume specification: 'C:/Users/Anthony/magento2-devbox:/C:/Users/Anthony/magento2-devbox'
I have googled around on this, but I can't see how this path has been assembled. Most paths exclude the colon, and I am also not sure why it has assembled this :/C:/, or whether that is just produced for the error message.
This is the definition for the ReplicaSet:
{
"kind": "ReplicaSet",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "magento2-monolith-54cdd5b4b7",
"namespace": "default",
"selfLink": "/apis/extensions/v1beta1/namespaces/default/replicasets/magento2-monolith-54cdd5b4b7",
"uid": "e819bfbd-8820-11e9-a613-080027316036",
"resourceVersion": "22855",
"generation": 1,
"creationTimestamp": "2019-06-06T06:04:12Z",
"labels": {
"app.kubernetes.io/instance": "magento2",
"app.kubernetes.io/name": "monolith",
"pod-template-hash": "54cdd5b4b7"
},
"annotations": {
"deployment.kubernetes.io/desired-replicas": "1",
"deployment.kubernetes.io/max-replicas": "1",
"deployment.kubernetes.io/revision": "1"
},
"ownerReferences": [
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"name": "magento2-monolith",
"uid": "9ec9d23e-8691-11e9-a3dd-080027316036",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app.kubernetes.io/instance": "magento2",
"app.kubernetes.io/name": "monolith",
"pod-template-hash": "54cdd5b4b7"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app.kubernetes.io/instance": "magento2",
"app.kubernetes.io/name": "monolith",
"pod-template-hash": "54cdd5b4b7"
}
},
"spec": {
"volumes": [
{
"name": "nginx-config-volume",
"configMap": {
"name": "magento2-monolith-nginx-config",
"defaultMode": 420
}
},
{
"name": "varnish-config-volume",
"configMap": {
"name": "magento2-monolith-varnish-config",
"defaultMode": 420
}
},
{
"name": "code",
"hostPath": {
"path": "C:/Users/Anthony/magento2-devbox",
"type": ""
}
}
],
"containers": [
{
"name": "monolith",
"image": "magento2-monolith:dev",
"ports": [
{
"containerPort": 8050,
"protocol": "TCP"
}
],
"env": [
{
"name": "DEVBOX_ROOT",
"value": "C:/Users/Anthony/magento2-devbox"
},
{
"name": "COMPOSER_HOME",
"value": "C:/Users/Anthony/magento2-devbox/.composer"
},
{
"name": "MAGENTO_ROOT",
"value": "C:/Users/Anthony/magento2-devbox/magento"
},
{
"name": "MAGENTO_ROOT_HOST",
"value": "C:/Users/Anthony/magento2-devbox/magento"
},
{
"name": "DEVBOX_ROOT_HOST",
"value": "C:/Users/Anthony/magento2-devbox"
},
{
"name": "IS_WINDOWS_HOST",
"value": "0"
}
],
"resources": {},
"volumeMounts": [
{
"name": "code",
"mountPath": "C:/Users/Anthony/magento2-devbox"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Never",
"securityContext": {
"privileged": true,
"procMount": "Default"
}
},
{
"name": "monolith-xdebug",
"image": "magento2-monolith:dev-xdebug",
"ports": [
{
"containerPort": 8002,
"protocol": "TCP"
}
],
"env": [
{
"name": "DEVBOX_ROOT",
"value": "C:/Users/Anthony/magento2-devbox"
},
{
"name": "COMPOSER_HOME",
"value": "C:/Users/Anthony/magento2-devbox/.composer"
},
{
"name": "MAGENTO_ROOT",
"value": "C:/Users/Anthony/magento2-devbox/magento"
},
{
"name": "MAGENTO_ROOT_HOST",
"value": "C:/Users/Anthony/magento2-devbox/magento"
},
{
"name": "DEVBOX_ROOT_HOST",
"value": "C:/Users/Anthony/magento2-devbox"
},
{
"name": "IS_WINDOWS_HOST",
"value": "0"
}
],
"resources": {},
"volumeMounts": [
{
"name": "code",
"mountPath": "C:/Users/Anthony/magento2-devbox"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Never",
"securityContext": {
"privileged": true,
"procMount": "Default"
}
},
{
"name": "nginx",
"image": "nginx:1.9",
"resources": {},
"volumeMounts": [
{
"name": "code",
"mountPath": "C:/Users/Anthony/magento2-devbox"
},
{
"name": "nginx-config-volume",
"mountPath": "/etc/nginx/nginx.conf",
"subPath": "nginx.conf"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent",
"securityContext": {
"privileged": true,
"procMount": "Default"
}
},
{
"name": "varnish",
"image": "million12/varnish",
"env": [
{
"name": "VCL_CONFIG",
"value": "/etc/varnish/magento.vcl"
},
{
"name": "VARNISHD_PARAMS",
"value": "-a 0.0.0.0:6081"
}
],
"resources": {},
"volumeMounts": [
{
"name": "varnish-config-volume",
"mountPath": "/etc/varnish/magento.vcl",
"subPath": "varnish.vcl"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
}
},
"status": {
"replicas": 1,
"fullyLabeledReplicas": 1,
"observedGeneration": 1
}
}
I am new to Docker/Kubernetes after coming over from Vagrant, so I do not know where to start. The information I have is from the web browser dashboard.
The path is probably not converted to Unix style. Since Docker 1.9.0, Windows paths are not automatically converted (e.g. C:\Users to /c/Users).
So your path should look like:
{
"name": "DEVBOX_ROOT",
"value": "/c/Users/Anthony/magento2-devbox"
}
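The invalid volume specification string appears to be assembled from the hostPath volume and its mountPath in the ReplicaSet above, so the same conversion is likely needed there as well. A sketch, where the in-container mount point /var/www/magento2-devbox is only an assumption (any Linux path inside the container will do, as long as the application config points at it):
"volumes": [
{
"name": "code",
"hostPath": {
"path": "/c/Users/Anthony/magento2-devbox",
"type": ""
}
}
],
and in each container:
"volumeMounts": [
{
"name": "code",
"mountPath": "/var/www/magento2-devbox"
}
],
Keep in mind that Docker Toolbox only shares C:\Users into its VM by default, so paths under /c/Users/... are reachable while other drives need to be shared manually.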

Running two containers in a single pod via kubectl run does not terminate the pod with two containers

We are trying to run two Docker containers in one Kubernetes pod. The problem we face is that the pod now has 2 images/containers, whereas it previously had 1 image and worked perfectly, shutting the pod down gracefully. Now, with 2 images/containers in the same pod, the two containers do not shut down gracefully at the same time.
We are doing integration testing of our container against a MongoDB database; our container terminates, but MongoDB fails to terminate and continues running.
while getopts n:c:l:i:d:t: option
do
case "${option}"
in
n) Company_NAMESPACE=$OPTARG;;
c) Company_CONFIG=$OPTARG;;
l) Company_LICENSE=$OPTARG;;
i) Company_IMAGE=$OPTARG;;
d) Company_CONTAINER_NAME=$OPTARG;;
t) Company_TOKEN=$OPTARG;;
esac
done
kubectl run $Company_CONTAINER_NAME -n $Company_NAMESPACE --restart=Never --overrides='
{
"apiVersion": "v1",
"spec": {
"imagePullSecrets": [
{
"name": "Company-regsecret"
}
],
"initContainers": [
{
"name": "copy-configs",
"image": "busybox",
"command": ["sh", "-c", "cp /tmp/Company-config-volume/server/* /tmp/ng-rt/config/server/ 2>/dev/null || true; cp /tmp/Company-license-volume/licenses/* /tmp/ng-rt/config/licenses 2>/dev/null || true"],
"volumeMounts": [
{
"name": "Company-config-volume",
"mountPath": "mount_path"
},
{
"name": "'$Company_CONFIG'",
"mountPath": "mount_path"
},
{
"name": "Company-license-volume",
"mountPath": "mount_path"
},
{
"name": "'$Company_LICENSE'",
"mountPath": "mount_path"
}
]
}
],
"containers": [
{
"name": "mongodb-test",
"image": "mongo:3.6.8",
"command": [
"numactl",
"--interleave=all",
"mongod",
"--wiredTigerCacheSizeGB",
"0.1",
"--replSet",
"MainRepSet",
"--bind_ip_all"
],
"ports": [{
"containerPort": 27017
}],
"readinessProbe": {
"exec": {
"command": ["mongo", "--eval", "rs.initiate()"]
}
},
"terminationGracePeriodSeconds": 10
},
{
"env": [
{
"name": "AWS_ACCESS_KEY_ID",
"valueFrom": {
"secretKeyRef": {
"key": "AWS_ACCESS_KEY_ID",
"name": "aws-secrets"
}
}
},
{
"name": "AWS_SECRET_ACCESS_KEY",
"valueFrom": {
"secretKeyRef": {
"key": "AWS_SECRET_ACCESS_KEY",
"name": "aws-secrets"
}
}
},
{
"name": "AWS_REGION",
"valueFrom": {
"secretKeyRef": {
"key": "AWS_REGION",
"name": "aws-secrets"
}
}
},
{
"name": "BUILD_ID",
"valueFrom": {
"configMapKeyRef": {
"key": "BUILD_ID",
"name": "config"
}
}
}
],
"command": [
"sh",
"-c",
"mkdir -p mount_path 2\u003e/dev/null && npm test --skipConnectivityTestRethinkDB"
],
"name": "'$Company_CONTAINER_NAME'",
"image": "'$Company_IMAGE'",
"volumeMounts": [
{
"mountPath": "mount_path",
"name": "'$Company_CONFIG'"
},
{
"mountPath": "mount_path",
"name": "'$Company_LICENSE'"
}
]
}
],
"volumes": [
{
"name": "Company-config-volume",
"configMap": {
"name": "'$Company_CONFIG'"
}
},
{
"name": "'$Company_CONFIG'",
"emptyDir": {}
},
{
"name": "Company-license-volume",
"configMap": {
"name": "'$Company_LICENSE'"
}
},
{
"name": "'$Company_LICENSE'",
"emptyDir": {}
}
]
}
}
' --image=$Company_IMAGE -ti --rm --token=$Company_TOKEN
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "test-dev-58663",
"namespace": "d-int-company-dev-v2",
"labels": {
"run": "test-dev-58663"
},
"annotations": {
"kubernetes.io/psp": "nfs-provisioner"
}
},
"spec": {
"volumes": [
{
"name": "company-config-volume",
"configMap": {
"name": "test-core",
"defaultMode": 420
}
},
{
"name": "test-core",
"emptyDir": {}
},
{
"name": "company-license-volume",
"configMap": {
"name": "company-license",
"defaultMode": 420
}
},
{
"name": "company-license",
"emptyDir": {}
},
{
"name": "default-token-wqp5x",
"secret": {
"secretName": "default-token-wqp5x",
"defaultMode": 420
}
}
],
"initContainers": [
{
"name": "copy-configs",
"image": "busybox",
"command": [
"sh",
"-c",
"cp mount_path* mount_path 2>/dev/null || true; cp mount_path* mount_path 2>mount_path|| true"
],
"resources": {},
"volumeMounts": [
{
"name": "company-config-volume",
"mountPath": "mount_path"
},
{
"name": "test-core",
"mountPath": "mount_path"
},
{
"name": "company-license-volume",
"mountPath": "mount_path"
},
{
"name": "company-license",
"mountPath": "mount_path"
},
{
"name": "default-token-wqp5x",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"containers": [
{
"name": "mongodb-test",
"image": "mongo:3.6.8",
"ports": [
{
"containerPort": 27017,
"protocol": "TCP"
}
],
"resources": {},
"volumeMounts": [
{
"name": "default-token-wqp5x",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"readinessProbe": {
"exec": {
"command": [
"mongo",
"--eval",
"rs.initiate()"
]
},
"timeoutSeconds": 1,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 3
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
},
{
"name": "test-dev-58663",
"image": "image_path",
"command": [
"sh",
"-c",
"mkdir -p mount_path 2>/dev/null && npm test --skipConnectivityTestRethinkDB"
],
"env": [
{
"name": "AWS_ACCESS_KEY_ID",
"valueFrom": {
"secretKeyRef": {
"name": "aws-secrets",
"key": "AWS_ACCESS_KEY_ID"
}
}
},
{
"name": "AWS_SECRET_ACCESS_KEY",
"valueFrom": {
"secretKeyRef": {
"name": "aws-secrets",
"key": "AWS_SECRET_ACCESS_KEY"
}
}
},
{
"name": "AWS_REGION",
"valueFrom": {
"secretKeyRef": {
"name": "aws-secrets",
"key": "AWS_REGION"
}
}
},
{
"name": "BUILD_ID",
"valueFrom": {
"configMapKeyRef": {
"name": "tbsp-config",
"key": "BUILD_ID"
}
}
}
],
"resources": {},
"volumeMounts": [
{
"name": "test-core",
"mountPath": "mount_path"
},
{
"name": "company-license",
"mountPath": "mount_path"
},
{
"name": "default-token-wqp5x",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Never",
"terminationGracePeriodSeconds": 10,
"dnsPolicy": "ClusterFirst",
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "pc212",
"securityContext": {},
"imagePullSecrets": [
{
"name": "company-regsecret"
}
],
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
]
},
"status": {
"phase": "Pending",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-05-10T12:50:39Z"
},
{
"type": "Ready",
"status": "False",
"lastProbeTime": null,
"lastTransitionTime": "2019-05-10T12:49:54Z",
"reason": "ContainersNotReady",
"message": "containers with unready status: [mongodb-test test-dev-58663]"
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-05-10T12:49:54Z"
}
],
"hostIP": "10.10.2.12",
"podIP": "172.16.4.22",
"startTime": "2019-05-10T12:49:54Z",
"initContainerStatuses": [
{
"name": "copy-configs",
"state": {
"terminated": {
"exitCode": 0,
"reason": "Completed",
"startedAt": "2019-05-10T12:50:39Z",
"finishedAt": "2019-05-10T12:50:39Z",
"containerID": "docker://1bcd12f5848e32e82f7dfde8e245223345e87f70061b789cbbabc0f798436b59"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker-pullable://busybox#sha256:0b184b74edc63924be0d7f67d16f5afbcdbe61caa1aca9312ed3b5c57792f6c1",
"containerID": "docker://1bcd12f5848e32e82f7dfde8e245223345e87f70061b789cbbabc0f798436b59"
}
],
"containerStatuses": [
{
"name": "mongodb-test",
"state": {
"waiting": {
"reason": "PodInitializing"
}
},
"lastState": {},
"ready": false,
"restartCount": 0,
"image": "mongo:3.6.8",
"imageID": ""
},
{
"name": "test-dev-58663",
"state": {
"waiting": {
"reason": "PodInitializing"
}
},
"lastState": {},
"ready": false,
"restartCount": 0,
"image": "image_path",
"imageID": ""
}
],
"qosClass": "BestEffort"
}
}
Both the containers and the hosting pod should terminate gracefully.
You have a pod consisting of two containers: one of them is supposed to run indefinitely, the other runs to completion. This is bad practice. You should split your pod into two separate things: a Pod with Mongo and a Job with your integration script. You then need logic that waits for the Job to finish and terminates both the Pod and the Job. You can do it like this:
kubectl apply -f integration-test.yaml
kubectl wait --for=condition=Complete --timeout=5m job/test
kubectl delete -f integration-test.yaml
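A minimal sketch of what integration-test.yaml could contain, matching the names used in the commands above (job/test); the test image and the Service for reaching Mongo are assumptions:
# integration-test.yaml - Mongo runs as a long-lived Pod, the tests run to completion as a Job
apiVersion: v1
kind: Pod
metadata:
  name: mongodb-test
  labels:
    app: mongodb-test
spec:
  containers:
  - name: mongodb
    image: mongo:3.6.8
    ports:
    - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-test        # tests reach Mongo at mongodb-test:27017
spec:
  selector:
    app: mongodb-test
  ports:
  - port: 27017
---
apiVersion: batch/v1
kind: Job
metadata:
  name: test                # matches "kubectl wait ... job/test"
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: your-test-image:dev   # placeholder
        command: ["sh", "-c", "npm test --skipConnectivityTestRethinkDB"]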

ContextBroker unexpected error

We have contextBroker version 1.0.0, and yesterday we had the unexpected error below.
log directory: '/var/log/contextBroker'
terminate called after throwing an instance of 'mongo::AssertionException'
what(): assertion src/mongo/bson/bsonelement.cpp:392
log directory:'/var/log/contextBroker'
Could someone please tell us why it could have happened?
The request that we make is the one below:
HttpUri=http://172.21.0.33:1026/v1/updateContext
HttpMethod=POST
Accept=application/json
Content-Type=application/json
Fiware-Service=tmp_015_adapter
Fiware-ServicePath=/Prueba/Planta_3
{
"contextElements": [{
"type": "device_reading",
"isPattern": "false",
"id": "xxxxx",
"attributes": [{
"name": "timestamp",
"type": "string",
"value": "2016-06-14T12:02:03.000Z"
}, {
"name": "location",
"type": "coords",
"value": "23.295132549930024, 2.1797946491494258"
}, {
"name": "mac",
"type": "string",
"value": "xxxxx"
}, {
"name": "densityPlans",
"type": "string",
"value": "R-B2"
}, {
"name": "floor",
"type": "string",
"value": "Prueba_planta3"
}, {
"name": "manufacturer",
"type": "string",
"value": "Xiaomi+Communications"
}, {
"name": "rssi",
"type": "string",
"value": "9"
}]
}],
"updateAction": "APPEND" }
The error that shows up in the application because of contextBroker's error is:
org.apache.http.NoHttpResponseException: 172.21.0.33:1026 failed to respond