How do you load jsTree using a variable for data

I have an AJAX call to a WCF module that returns a menu tree in this format (truncated):
{ "data" : [
{
"data" : "Home",
"attr" : {"webpageid" : "1", "url" : "/Intranet/index.html", "appcode" : "Intranet Home", "parent" : "0", "enabled" : "1", "visible" : "1", "target" : "_self ", "order" : "0", "title" : "Home", "itmname" : "index", "submenuclass" : "", "htmlid" : "homenav", "opennewtab" : "true", "externalsite" : "0"},
"children" :[
{
"data" : "Site Administration",
"attr" : {"webpageid" : "64", "url" : "", "appcode" : "Mgmt/S0001", "parent" : "1", "enabled" : "1", "visible" : "1", "target" : "_self ", "order" : "1", "title" : "Site Administration", "itmname" : "SiteAdmin", "submenuclass" : "", "htmlid" : "", "opennewtab" : "false", "externalsite" : "0"},
"children" :[
{
"data" : "Add Web Page",
"attr" : {"webpageid" : "65", "url" : "/Intranet/admin/mgmt/addwebpage.html", "appcode" : "Mgmt/S0002", "parent" : "64", "enabled" : "1", "visible" : "1", "target" : "_self ", "order" : "1", "title" : "Add Web Page", "itmname" : "AddPage", "submenuclass" : "", "htmlid" : "", "opennewtab" : "false", "externalsite" : "0"}
}
]
}
]
}, ... more data ...
]}
Here is the callback function from my AJAX call:
function fSucc(data) {
    $(function () {
        $('#webpagetree').jstree({
            "json_data": (function () { return data; })(),
            "ui": { "select_limit": 1 },
            "plugins": ["themes", "json_data", "ui", "themeroller", "dnd", "crrm"]
        }).bind("select_node.jstree", function (e, data) {
            alert(jQuery.data(data.rslt.obj[0], "jstree").id);
        });
    });
}
But it doesn't create or load an instance of the jsTree; instead I get the error "Neither data nor ajax settings supplied."
I've also tried putting the data into a global variable and returning that.
Thanks for any help.

Wouldn't you just want
$('#webpagetree').jstree({
    "json_data": data,
    ...
in your fSucc function?

Adding an eval() around the data will work.
"json_data": { "data" : eval(data) },

It works well if you pass the variable containing the JSON through eval():
$('#webpagetree').jstree({
    'core': {
        'data': eval(myJson)
    }
});
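Note that in jsTree 3.x, 'core' → 'data' also accepts an already-parsed array or object directly, so eval() is only needed when myJson is still a string; in that case JSON.parse(myJson) is the safer choice.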

Related

Find a nested object field inside an array in mongodb aggregate

I have this object as below.
{
"_id" : ObjectId("5ec80a981e89a84b19934039"),
"status" : "active",
"organizationId" : "1",
"productId" : "1947",
"name" : "BOOKEND & PAPER WEIGHT SET – ZODIAC PIG – RED COPPER + PLATINUM",
"description" : "This global exclusive Zodiac bookend and paperweight set from Zuny will stand auspiciously on your bookcase and table, spreading good luck and fortune throughout your home just in time for the Year of the Pig.",
"brand" : "ZUNY",
"created" : "2018-09-28 00:00:00",
"updated" : "2020-05-22 09:19:07",
"mainImage" : "https://",
"availableOnline" : true,
"colors" : [
{
"images" : [
{
"type" : "studio",
"url" : "https://"
},
{
"type" : "studio",
"url" : "https://"
},
{
"type" : "studio",
"url" : "https://"
}
],
"extraInfo" : [
{
"type" : "text-tag",
"title" : "CATEGORY",
"tags" : [
"HOME FURNISHING & DÉCOR",
"LIFESTYLE"
]
},
{
"type" : "text-tag",
"title" : "BRAND",
"tags" : [
"ZUNY"
]
},
{
"type" : "text-tag",
"title" : "COLOUR",
"tags" : [
"GOLD",
"ROSE GOLD"
]
},
{
"type" : "text-tag",
"title" : "SEASON",
"tags" : [
"AW(2018)"
]
},
{
"type" : "text-tag",
"title" : "HASHTAG",
"tags" : [
"BOOKCASES",
"BOOKEND",
"COLOUR",
"EXCLUSIVE",
"GLOBAL EXCLUSIVE",
"HOME",
"LEATHER",
"MOTIF",
"OBJECTS",
"PAPER",
"PAPERWEIGHT",
"PLATINUM",
"SET",
"SYNTHETIC",
"ZODIAC",
"HANDMADE",
"time"
]
}
],
"_id" : ObjectId("5ec80a981e89a84b1993403a"),
"colorId" : "1",
"color" : "ROSE GOLD",
"status" : "active",
"sizes" : [
{
"extraInfo" : [
{
"type" : "text-block",
"title" : "Size And Fit",
"text" : ""
},
{
"type" : "text-block",
"title" : "Information",
"text" : "Global exclusive. Colour: Copper/Platinum. Set includes: Zodiac Pig bookend (x 1), Zodiac Pig paperweight (x 1). Metallic copper- and platinum-tone synthetic leather. Pig motif. Iron pellet filling. Handmade"
}
],
"_id" : ObjectId("5ec80a981e89a84b1993403b"),
"sizeId" : "1",
"neo" : "0210111790664",
"size" : "*",
"originalPrice" : "1060.00",
"sellingPrice" : "1060.00",
"discountPercent" : "0.00",
"url" : "https://",
"status" : "active",
"currency" : "HK$",
"stores" : [
{
"storeId" : "1",
"quantity" : 70,
"_id" : ObjectId("5ec80a981e89a84b1993403c"),
"available" : 70,
"reserved" : 0,
"name" : "Park Street",
"status" : "active"
},
{
"storeId" : "2",
"quantity" : 95,
"_id" : ObjectId("5ec80a981e89a84b1993403d"),
"name" : "Rashbehari",
"status" : "active"
}
]
}
]
}
],
"__v" : 0
}
I want the output as follows:
{
"name": "Mock Collection",
"collectionId": "92",
"products": [
{
"title": "GLOBAL EXCLUSIVE OFF-SHOULDER SHIRT DRESS",
"imageUrl": "https://",
"productId": "21174",
"currency": "" // This should be this.colors[0].sizes[0].currency
},
]
}
How do I get the nested field? I tried using $arrayElemAt, which got me to colors[0], but I am confused about how to get inside the nested sizes array from there. Also, the currency field should contain the exact value; right now it comes out like currency: {currency: value}, which I don't want.
Please help!
Not sure how you got that output, but to extract currency from the first object of sizes, try this:
db.collection.aggregate([
{
$project: {
currency: {
$arrayElemAt: [
{
$arrayElemAt: [ "$colors.sizes.currency", 0 ] // gives an array of currency values, in your case since you've only one object just an array of one value
},
0
]
}
}
}
])
Test : mongoplayground
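If you also need the other fields from the desired output, the same double $arrayElemAt trick extends to a fuller projection. A sketch against the document shown above (the name → title and mainImage → imageUrl mappings are assumptions based on the desired output):
db.collection.aggregate([
  {
    $project: {
      _id: 0,
      productId: 1,
      title: "$name",
      imageUrl: "$mainImage",
      currency: {
        $arrayElemAt: [
          // "$colors.sizes.currency" is one array of currency values per color
          { $arrayElemAt: [ "$colors.sizes.currency", 0 ] },
          0
        ]
      }
    }
  }
])
Wrapping the projected documents in the outer { name, collectionId, products } shape would be a further $group stage; the question doesn't show where those collection-level fields come from, so that part is left out here.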

Need find query for dynamic multiple nested collections in mongodb

I need to find documents based on nested values, but my collection has dynamic keys. See the code below, in which the image-name keys are dynamic (_DSC9691.jpg, _DSC9514.JPG) and "key1" is a dynamic item as well. I need to find documents based on component, material, and sub_type.
{
"_id" : ObjectId("5ce2df8498f10b276cb466c4"),
"num" : "1",
"lat" : "39.941436099965",
"lon" : "-86.0691700063581",
"images" : {
"_DSC9691.jpg" : {
"key1" : {
"component" : "Sleeve",
"condition" : "",
"sub_type" : {
"Auto Sleeve" : true
},
"material" : "",
"misc" : ""
}
}
}}
{
"_id" : ObjectId("5ce2df8498f10b276cb466c7"),
"num" : "4",
"lat" : "39.9413828961847",
"lon" : "-86.0715084495015",
"images" : {
"_DSC9554.JPG" : {
},
"_DSC9514.JPG" : {
},
"_DSC9622.JPG" : {
}
}}
@Nagendran, you won't be able to perform those operations because the nested documents you want to match aren't named consistently. I suggest you rename those fields to a common name and try the code below. Also remember not to use blank spaces in field names like "Auto Sleeve".
Object:
{
"_id" : ObjectId("5ce2df8498f10b276cb466c7"),
"num" : "4",
"lat" : "39.9413828961847",
"lon" : "-86.0715084495015",
"images" : [
{
"name" : "some name",
"key":
{
"component" : "Sleeve",
"condition" : "",
"sub_type" : {
"Auto_Sleeve" : true
},
"material" : "",
"misc" : ""
}
},
{
"name" : "some name 2",
"key":
{
"component" : "Sleeve 2",
"condition" : "",
"sub_type" : {
"Auto_Sleeve" : true
},
"material" : "",
"misc" : ""
}
}
]
}
Query:
db.collection.find({
"images.key.sub_type.Auto_Sleeve": true
})
If you want, you can also use the aggregation framework to filter inside the "images" nested document, even without renaming anything.
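For example, $objectToArray (MongoDB 3.4.4+) turns the dynamic keys into arrays of { k, v } pairs that can be matched on fixed paths; a sketch against the original schema:
db.collection.aggregate([
  // Turn the dynamic image-name keys into an array of { k, v } pairs
  { $project: { num: 1, lat: 1, lon: 1, images: { $objectToArray: "$images" } } },
  { $unwind: "$images" },
  // Do the same for the dynamic inner keys ("key1", ...)
  { $project: { num: 1, lat: 1, lon: 1, image: "$images.k", keys: { $objectToArray: "$images.v" } } },
  { $unwind: "$keys" },
  // The paths are now fixed and can be matched normally
  { $match: { "keys.v.component": "Sleeve", "keys.v.sub_type.Auto Sleeve": true } }
])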
To get a little bit more information, you can read:
https://docs.mongodb.com/manual/aggregation/
https://docs.mongodb.com/manual/tutorial/query-documents/
https://www.mongodb.com/blog/post/6-rules-of-thumb-for-mongodb-schema-design-part-1
https://university.mongodb.com/

Unable to load data in druid

I am a newbie with Druid, trying to load very simple JSON data into it. The data contains just one dimension, one metric, and a timestamp. I have successfully loaded a different dataset into Druid, but somehow I am getting errors for this dataset.
This is my index file:
{
"type" : "index",
"spec" : {
"dataSchema" : {
"dataSource" : "datatemplate",
"parser" : {
"type" : "string",
"parseSpec" : {
"format" : "json",
"dimensionsSpec" : {
"dimensions" : [
"Loc"
]
},
"timestampSpec" : {
"format" : "auto",
"column" : "Timestamp"
}
}
},
"metricsSpec" : [{"name" : "Qty","type" : "doubleSum","fieldName" : "Qty"}],
"granularitySpec" : {
"type" : "uniform",
"segmentGranularity" : "day",
"queryGranularity" : "none",
"intervals" : ["2016-01-01T00:00:00Z/2030-06-30T00:00:00Z"],
"rollup" : true
}
},
"ioConfig" : {
"type" : "index",
"firehose" : {
"type" : "local",
"baseDir" : "datatemplate/",
"filter" : "datatemplate.json"
},
"appendToExisting" : false
},
"tuningConfig" : {
"type" : "index",
"targetPartitionSize" : 10000000,
"maxRowsInMemory" : 40000,
"forceExtendableShardSpecs" : true
}
}
}
Also here is my dataset in JSON format:
{"Loc": "A", "Qty": "1", "Timestamp": "2017-12-01T00:00:00Z"}
{"Loc": "A", "Qty": "1", "Timestamp": "2017-12-01T00:00:00Z"}
{"Loc": "B", "Qty": "2", "Timestamp": "2017-12-01T00:00:00Z"}
{"Loc": "B", "Qty": "1", "Timestamp": "2017-12-01T00:00:00Z"}

swagger file for all possible 'properties'

I need to call an API to load data on an ongoing basis. The API returns different properties for each event. When I create the Swagger file, it contains the properties from a sample return, but in the long run the source system can add more properties, and they will not be in the Swagger file.
Is there any way to dynamically recreate the Swagger file with the additional properties before each data load?
The Swagger file is generated by Informatica Cloud from a sample return while testing the connection.
The properties list has a different number of entries depending on the event type.
swagger file:
{"swagger" : "2.0",
"info" : {
"description" : null,
"version" : "1.0.0",
"title" : null,
"termsOfService" : null,
"contact" : null,
"license" : null
},
"host" : "<host>.com",
"basePath" : "/api",
"schemes" : [ "https" ],
"paths" : {
"/2.0" : {
"post" : {
"tags" : [ "events" ],
"summary" : null,
"description" : null,
"operationId" : "events",
"produces" : [ "application/json" ],
"consumes" : [ "application/json" ],
"parameters" : [ {
"name" : "script",
"in" : "query",
"description" : null,
"required" : false,
"type" : "string"
}, {
"name" : "Authorization",
"in" : "header",
"description" : null,
"required" : false,
"type" : "string"
} ],
"responses" : {
"200" : {
"description" : "successful operation",
"schema" : {
"$ref" : "#/definitions/events"
}
}
}
}
}
},
"definitions" : {
"events##properties" : {
"properties" : {
"$app_build_number" : {
"type" : "string"
},
"$app_version_string" : {
"type" : "string"
},
"$carrier" : {
"type" : "string"
},
"$lib_version" : {
"type" : "string"
},
"$manufacturer" : {
"type" : "string"
},
"$model" : {
"type" : "string"
},
"$os" : {
"type" : "string"
},
"$os_version" : {
"type" : "string"
},
"$radio" : {
"type" : "string"
},
"$region" : {
"type" : "string"
},
"$screen_height" : {
"type" : "number",
"format" : "int32"
},
"$screen_width" : {
"type" : "number",
"format" : "int32"
},
"Home Step Enabled" : {
"type" : "string"
},
"Number Of Lifetime Logins" : {
"type" : "number",
"format" : "int32"
},
"Sessions" : {
"type" : "number",
"format" : "int32"
},
"mp_country_code" : {
"type" : "string"
},
"mp_lib" : {
"type" : "string"
}
}
},
"events" : {
"properties" : {
"name" : {
"type" : "string"
},
"distinct_id" : {
"type" : "string"
},
"labels" : {
"type" : "string"
},
"time" : {
"type" : "number",
"format" : "int64"
},
"sampling_factor" : {
"type" : "number",
"format" : "int32"
},
"dataset" : {
"type" : "string"
},
"properties" : {
"$ref" : "#/definitions/events##properties"
}
}
}
}
}
sample return:
"name": "Session",
"distinct_id": "1234567890",
"labels": [],
"time": 1520072505000,
"sampling_factor": 1,
"dataset": "$event_data_set",
"properties": {
"$app_build_number": "900",
"$app_version_string": "1.9",
"$carrier": "AT&T",
"$lib_version": "2.0.1",
"$manufacturer": "Apple",
"$model": "iPhone10,6",
"$os": "iOS",
"$os_version": "11.2.6",
"$radio": "LTE",
"$region": "Florida",
"$screen_height": 667,
"$screen_width": 375,
"Number Of Lifetime Logins": 2,
"Session Length": "00h:00m:08s",
"Sessions": 43,
"mp_country_code": "US",
"mp_lib": "swift"
}
}
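One thing Swagger 2.0 itself offers for open-ended objects is additionalProperties. A sketch of how the events##properties definition could be relaxed so unknown keys don't violate the schema (whether Informatica Cloud honors or preserves this when it regenerates the file is an assumption to verify):
"events##properties" : {
  "properties" : {
    ... the known properties from above ...
  },
  "additionalProperties" : true
}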

Druid: how to add numeric data to a metric without an aggregation function

The scenario is that I want to set up a stock quote server and save the quote data into Druid.
My requirement is to get the latest price of all the stocks with a single query.
But I notice that Druid's query interfaces, such as timeseries, only work on metric fields, not on dimension fields.
So I am considering making the price field one of the metrics, but without aggregating it.
How can I do that?
Any suggestions?
Here is my Tranquility config file:
{
"dataSources" : {
"stock-index-topic" : {
"spec" : {
"dataSchema" : {
"dataSource" : "stock-index-topic",
"parser" : {
"type" : "string",
"parseSpec" : {
"timestampSpec" : {
"column" : "timestamp",
"format" : "auto"
},
"dimensionsSpec" : {
"dimensions" : ["code","name","acronym","market","tradeVolume","totalValueTraded","preClosePx","openPrice","highPrice","lowPrice","latestPrice","closePx"],
"dimensionExclusions" : [
"timestamp",
"value"
]
},
"format" : "json"
}
},
"granularitySpec" : {
"type" : "uniform",
"segmentGranularity" : "HOUR",
"queryGranularity" : "SECOND",
},
"metricsSpec" : [
{
"name" : "firstPrice",
"type" : "doubleFirst",
"fieldName" : "tradePrice"
},{
"name" : "lastPrice",
"type" : "doubleLast",
"fieldName" : "tradePrice"
}, {
"name" : "minPrice",
"type" : "doubleMin",
"fieldName" : "tradePrice"
}, {
"name" : "maxPrice",
"type" : "doubleMax",
"fieldName" : "tradePrice"
}
]
},
"ioConfig" : {
"type" : "realtime"
},
"tuningConfig" : {
"type" : "realtime",
"maxRowsInMemory" : "100000",
"intermediatePersistPeriod" : "PT10M",
"windowPeriod" : "PT10M"
}
},
"properties" : {
"task.partitions" : "1",
"task.replicants" : "1",
"topicPattern" : "stock-index-topic"
}
}
},
"properties" : {
"zookeeper.connect" : "localhost:2181",
"druid.discovery.curator.path" : "/druid/discovery",
"druid.selectors.indexing.serviceName" : "druid/overlord",
"commit.periodMillis" : "15000",
"consumer.numThreads" : "2",
"kafka.zookeeper.connect" : "localhost:2181",
"kafka.group.id" : "tranquility-kafka"
}
}
I think you should make latestPrice a new numeric dimension; that would be much better from a performance and querying standpoint, considering how Druid works.
Metrics are meant to feed aggregation functions at their core, so they won't be helpful in your use case.
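A sketch of what that could look like in the dimensionsSpec above, assuming a Druid version recent enough to support typed (numeric) dimension schemas:
"dimensionsSpec" : {
  "dimensions" : [
    "code", "name", "acronym", "market", "tradeVolume", "totalValueTraded",
    "preClosePx", "openPrice", "highPrice", "lowPrice",
    { "type" : "float", "name" : "latestPrice" },
    "closePx"
  ],
  "dimensionExclusions" : [ "timestamp", "value" ]
}
Scan or groupBy queries can then read latestPrice like any other dimension, without going through an aggregator.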