I have some time data that I'd like to show with Chart.js. The graph is displayed correctly, but instead of individual days on the x-axis, I'd like to see months at most. I tried setting "unit"/"minUnit" like
{
"config": {
"type": "line",
"data": {
"labels": [
"2020-01-26",
[...]
"2022-01-02",
],
"datasets": [...]
},
"options": {
"elements": { "point": { "radius": 0 } },
"scales": { "xAxes": [{ "type": "time", "time": { "unit": "month" } }] }
}
}
}
but this still shows individual days on the x-axis. Any hints?
It turns out you don't need a list [] for the scale options: Chart.js 3.x configures scales as an object keyed by axis ID (x, y, ...) instead of the 2.x xAxes/yAxes arrays. The following works well:
"x": {
"type": "time",
"time": { "unit": "month" },
"grid": { "display": false }
}
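For reference, a minimal sketch of the whole options block in the 3.x style (assuming Chart.js 3.x, where the scales object keyed by axis ID replaced the xAxes array):
"options": {
  "elements": { "point": { "radius": 0 } },
  "scales": {
    "x": {
      "type": "time",
      "time": { "unit": "month" },
      "grid": { "display": false }
    }
  }
}
Note that when rendering in a plain browser setup, the Chart.js 3.x time scale also needs a date adapter such as chartjs-adapter-date-fns loaded alongside the library.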
I am working on API documentation using Redoc, and I have some conditionals (if-then) in the JSON Schema. I am trying to surface these in the API documentation, but Redoc does not show them.
As an example I have used the if-then-else method showcased here: https://json-schema.org/understanding-json-schema/reference/conditionals.html
by wrapping pairs of if and then inside an allOf. This is the JSON Schema after doing that:
{
"type": "object",
"properties": {
"street_address": {
"type": "string"
},
"country": {
"default": "United States of America",
"enum": ["United States of America", "Canada", "Netherlands"]
}
},
"allOf": [
{
"if": {
"properties": { "country": { "const": "United States of America" } }
},
"then": {
"properties": { "postal_code": { "pattern": "[0-9]{5}(-[0-9]{4})?" } }
}
},
{
"if": {
"properties": { "country": { "const": "Canada" } },
"required": ["country"]
},
"then": {
"properties": { "postal_code": { "pattern": "[A-Z][0-9][A-Z] [0-9][A-Z][0-9]" } }
}
},
{
"if": {
"properties": { "country": { "const": "Netherlands" } },
"required": ["country"]
},
"then": {
"properties": { "postal_code": { "pattern": "[0-9]{4} [A-Z]{2}" } }
}
}
]
}
Redoc just displays the base properties (screenshot not reproduced here), with no mention of the if-then statements. Can that be achieved?
I have created a connector to Azure Event Hubs, and it works fine pulling the data into the topic. My use case is to filter the messages I am pulling based on the message type. Example:
{
"messageType": "Transformed",
"timeStamp": 1652113146105,
"payload": {
"externalId": "24323",
"equipmentType": "TemperatureSensor",
"measureType": "Temperature",
"normalizedData": {
"equipmentData": [
{
"key": "ReadingValue",
"value": 23,
"valueType": "number",
"measurementUnit": "celsius",
"measurementDateTime": "2022-05-09T16:18:34.0000000Z"
}
]
},
"dataProviderName": "LineMetrics"
}
},
{
"messageType": "IntegratorGenericDataStream",
"timeStamp": 1652113146103,
"payload": {
"dataSource": {
"type": "sensor",
},
"dataPoints": [
{
"type": "Motion",
"value": 0,
"valueType": "number",
"dateTime": "2022-05-09T16:18:37.0000000Z",
"unit": "count"
}
],
"dataProvider": {
"id": "ba84cbdb-cbf8-4d4f-9a55-93b43f671b5a",
"name": "LineMetrics",
"displayName": "Line Metrics"
}
}
}
I wanted to apply a filter on that messageType value, as shown in the first screenshot (not reproduced here), but the filter fails with the error shown in the second screenshot.
I have an existing datasource in Druid. I am trying to delete some records by reindexing the data with filter and overwriting the existing data.
If the dataSource within ioConfig is my_datasource and the dataSource within the dataSchema is other_datasource, it works just fine and other_datasource shows the expected result.
But when both the dataSources (ioConfig and dataSchema) are the same, the existing data does not change according to the filters applied.
Here is the configuration sample:
{
"type": "index_parallel",
"spec": {
"dataSchema": {
"dataSource": "my_datasource",
"timestampSpec": {
"column": "RecordDate",
"format": "YYYY-MM-DD"
},
"dimensionsSpec": {
"dimensions":["RecordDate", "Column1", "Column2"]
},
"metricsSpec": [
],
"granularitySpec": {
"type": "uniform",
"queryGranularity": "none",
"segmentGranularity": "day",
"rollup": "false"
},
"transformSpec" : {
"filter" :{"type":"not", "field":{"type":"expression", "expression":"RecordDate >='1997-02-01' && RecordDate<='1997-02-28'"}},
"transforms" : [ ]
}
},
"ioConfig": {
"type": "index_parallel",
"inputSource": {
"type": "druid",
"dataSource": "my_datasource",
"interval": "1970-01-01/2021-12-26"
},
"appendToExisting":"false"
},
"tuningConfig": {
"type": "index_parallel",
"partitionsSpec": {
"type": "dynamic"
},
"maxNumConcurrentSubTasks": 4
}
}
}
What am I missing here? Is there a better way of achieving what I am trying to do?
Appreciate your help. Thank you.
I made it work by adding dropExisting=true to the ioConfig. I also moved the filter into the druid inputSource within the ioConfig block. With appendToExisting=false and an explicit interval in the granularitySpec, dropExisting=true drops (marks unused) the existing segments covered by that interval when the new segments are published, so the filtered-out rows actually disappear.
Here is the complete config.
{
"type": "index_parallel",
"spec": {
"dataSchema": {
"dataSource": "my_datasource",
"timestampSpec": {
"column": "RecordDate",
"format": "YYYY-MM-DD"
},
"dimensionsSpec": {
"dimensions":["RecordDate", "Column1", "Column2"]
},
"metricsSpec": [
],
"granularitySpec": {
"type": "uniform",
"queryGranularity": "none",
"segmentGranularity": "day",
"rollup": "false",
"intervals":["1970-01-01/2021-12-27"]
},
"transformSpec" : {
"transforms" : [ ]
}
},
"ioConfig": {
"type": "index_parallel",
"inputSource": {
"type": "druid",
"dataSource": "my_datasource",
"interval": "1970-01-01/2021-12-26",
"filter" :{"type":"not", "field":{"type":"expression", "expression":"RecordDate >='1997-02-01' && RecordDate<='1997-02-28'"}},
},
"appendToExisting":false,
"dropExisting":true
},
"tuningConfig": {
"type": "index_parallel",
"partitionsSpec": {
"type": "dynamic"
},
"maxNumConcurrentSubTasks": 4
}
}
}
I have an index with the following mappings (a standard date format). In the second record below, the time specified is actually a local time, but ES treats it as UTC.
Even though ES internally converts all parsed datetimes to UTC, it must obviously store the original string as well.
My question is whether (and how) it might be possible to query all records for which the scheduleDT value doesn't have the timezone explicitly specified.
{
"curator_v3": {
"mappings": {
"published": {
"analyzer": "classic",
"numeric_detection": true,
"properties": {
"Id": {
"type": "string",
"index": "not_analyzed",
"include_in_all": false
},
"createDT": {
"type": "date",
"format": "dateOptionalTime",
"include_in_all": false
},
"scheduleDT": {
"type": "date",
"format": "dateOptionalTime",
"include_in_all": false
},
"title": {
"type": "string",
"fields": {
"english": {
"type": "string",
"analyzer": "english"
},
"raw": {
"type": "string",
"index": "not_analyzed"
},
"shingle": {
"type": "string",
"analyzer": "shingle"
},
"spanish": {
"type": "string",
"analyzer": "spanish"
}
},
"include_in_all": false
}
}
}
}
}
}
We use .NET as our client to ElasticSearch and haven't been consistent in specifying a timezone for the scheduleDT field.
{
"took": 2,
"timed_out": false,
"_shards": {
"total": 12,
"successful": 12,
"failed": 0
},
"hits": {
"total": 32,
"max_score": null,
"hits": [
{
"_index": "curator_v3",
"_type": "published",
"_id": "29651227",
"_score": null,
"fields": {
"Id": [
"29651227"
],
"scheduleDT": [
"2015-11-21T22:17:51.0946798-06:00"
],
"title": [
"97 Year-Old Woman Cries Tears Of Joy After Finally Getting Her High School Diploma"
],
"createDT": [
"2015-11-21T22:13:32.3597142-06:00"
]
},
"sort": [
1448165871094
]
},
{
"_index": "curator_v3",
"_type": "published",
"_id": "210466413",
"_score": null,
"fields": {
"Id": [
"210466413"
],
"scheduleDT": [
"2015-11-22T12:00:00"
],
"title": [
"6 KC treats to bring to Thanksgiving"
],
"createDT": [
"2015-11-20T15:08:25.4282-06:00"
]
},
"sort": [
1448193600000
]
}
]
},
"aggregations": {
"ScheduleDT": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 27,
"buckets": [
{
"key": 1448165871094,
"key_as_string": "2015-11-22T04:17:51.094Z",
"doc_count": 1
},
{
"key": 1448193600000,
"key_as_string": "2015-11-22T12:00:00.000Z",
"doc_count": 4
}
]
}
}
}
You can do this by querying for documents whose scheduleDT field is less than 20 characters long (e.g. 2015-11-22T12:00:00 is exactly 19). Any date string with an explicitly specified time zone would be longer.
Something like this should do:
{
"query": {
"filtered": {
"filter": {
"script": {
"script": "doc.scheduleDT.value.size() < 20"
}
}
}
}
}
Note, however, that in order to keep your queries easy to create, you should always try to convert all your timestamps to UTC before indexing your documents.
Finally, also make sure that you have dynamic scripting enabled in order to run the above query.
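Given the mapping above (string types, not_analyzed, dateOptionalTime) this looks like Elasticsearch 1.x, where dynamic scripting has been disabled by default since 1.2. A minimal sketch of enabling it there (newer releases use fine-grained settings such as script.inline instead):
# elasticsearch.yml, on every node, followed by a node restart
script.disable_dynamic: false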
UPDATE
Actually, it is better to reference _source directly in the script: doc values hold the parsed (UTC) date rather than the original string, whereas _source returns the real value as it was when the document was indexed:
{
"query": {
"filtered": {
"filter": {
"script": {
"script": "_source.scheduleDT.size() < 20"
}
}
}
}
}
I've got two CloudKit data objects that look somewhat like this:
Parent Object:
{
"records": [
{
"recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
"recordType": "ParentObject",
"fields": {
"fsYear": {
"value": "2015",
"type": "STRING"
},
"displayOrder": {
"value": 2015221153856287200,
"type": "INT64"
},
"fjpFSGuidForReference": {
"value": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
"type": "STRING"
},
"fsDateSearch": {
"value": "2015221153856287158",
"type": "STRING"
}
},
"recordChangeTag": "id4w7ivn",
"created": {
"timestamp": 1439149087571,
"userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
"deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
},
"modified": {
"timestamp": 1439149087571,
"userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
"deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
}
}
],
"total":
}
Child Object:
{
"records": [
{
"recordName": "2015221153856287168",
"recordType": "ChildObject",
"fields": {
"District": {
"value": "002",
"type": "STRING"
},
"ZipCode": {
"value": "12345",
"type": "STRING"
},
"InspecReference": {
"value": {
"recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
"action": "NONE",
"zoneID": {
"zoneName": "_defaultZone"
}
},
"type": "REFERENCE"
}
},
"recordChangeTag": "id4w7lew",
"created": {
"timestamp": 1439149090856,
"userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
"deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
},
"modified": {
"timestamp": 1439149090856,
"userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
"deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
}
}
],
"total": 1
}
I'm trying to write a query to directly access the CloudKit web service and return the Child Object based on the reference of the parent object.
My test JSON looks something like this:
{"query":{"recordType":"ChildObject","filterBy":{"fieldName":"InspecReference","fieldValue":{ "value" : "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57", "type" : "string" },"comparator":"EQUALS"}},"zoneID":{"zoneName":"_defaultZone"}}
However, I'm getting the following error from CloudKit:
{"uuid":"33db91f3-b768-4a68-9056-216ecc033e9e","serverErrorCode":"BAD_REQUEST","reason":"BadRequestException:
Unexpected input"}
I'm guessing I have the record field dictionary in the query wrong. However, the documentation isn't clear on what this should look like for a reference field.
You have to re-create the actual reference object as the fieldValue, with type REFERENCE. In this particular case, the JSON looks like this:
{
"query": {
"recordType": "ChildObject",
"filterBy": {
"fieldName": "InspecReference",
"fieldValue": {
"value": {
"recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
"action": "NONE"
},
"type": "REFERENCE"
},
"comparator": "EQUALS"
}
},
"zoneID": {
"zoneName": "_defaultZone"
}
}
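In case it's useful, here is a sketch of where this body gets POSTed, assuming the standard CloudKit web-services URL shape (the container, environment, and API token are placeholders you would substitute):
POST https://api.apple-cloudkit.com/database/1/[container]/[environment]/public/records/query?ckAPIToken=[token]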