I have a JSON document and I need to write its values into different tables. I can get the data out of the JSON, but I need to insert it into the right places. It's like a form: the form has n sections, each section has n steps, and each step can have n questions. How can I loop over this and write into different tables? Basically, I need to know how to find how many sections, steps and questions there are in the JSON. I tried array_length, but it is not working.
Here is a small sample of my JSON.
{ "functionId" : "2","subFunctionId" : "6","groupId" : "11","formId" : "","formName":"BladeInseption","submittedBy" : "200021669","createdDate" : "2015-08-06",
"updatedBy" : "","updatedDate" : "","comments" : "","formStatusId" :"11","formStatus" :"Draft","formLanguage" : "English","isFormConfigured" : "N","formChange":"Yes",
"sectionLevelChange":"Yes","isActive" : "Y","formVersionNo" : "1.0","formFooterDetails" : "","formHeaderDetails" : "","images" : [
{"imageId" : "","imageTempId" : "","imageTempUrl" : "","imageName" : "","imageUrl" : "","isDeleted" : "","imagesDesc" : ""} ],
"imagesDescLevel" : "","sectionElements" : [{"sectionElement":[{"sectionId" : "","sectionTempId":"sectionId+DDMMHHSSSS","sectionName":"section1",
"sectionChange":"Yes","stepLevelChange":"Yes","sectionLabel" : "","sectionOrder" : "1","outOfScopeSection" : "false",
"punchListSection" : "false","images" : [{"imageId" : "","imageTempId" : "","imageTempUrl" : "","imageName" : "","imageUrl" : "","isDeleted" : "",
"imagesDesc" : ""}],"imagesDescLevel" : "","isDeleted" : "","stepElements" : [{"stepElement":[{"stepId" : "","stepTempId":"stepId+DDMMHHSSSS",
"stepName":"section1step1","stepLabel" : "","stepOrder" : "1","stepChange":"Yes","questionLevelChange":"Yes","images" : [{"imageId" : "",
"imageTempId" : "","imageTempUrl" : "","imageName" : "","imageUrl" : "","isDeleted" : "","imagesDesc" : ""}],"imagesDescLevel" : "","isDeleted" : "",
"questionAnswerElements" : [{"questionAnswerElement":[{"questionId" : "","questionClientUid" : "","questionDescription" : "step1question1",
"questionAccessibility" : "","isPunchListQuestion" : "","questionChange":"Yes","questionOrder" : "1","isDeleted" : "","images" : [{
"imageId" : "","imageTempId" : "","imageTempUrl" : "","imageName" : "","imageUrl" : "","isDeleted" : "","imagesDesc" : ""}],"imagesDescLevel" : "",
"answerId" : "","answerClientUid" : "","elements" :[{"element" :[{"elementId": "2","elementMapId" : "12","clientUid" : "","clientClass" : "","imageTempId" : "",
"imageTempUrl" : "","elementType":"Question","elementOrder" : "1","elementArributuesProp": [{"attributeId" : "1","attributeName" : "","defaultValue" : ""}],
"elementArributuesVal":[{"value1" : "item1"}],"rule" : [{"ruleId" : "1","ruleName" : "Mandatory","formula" : "i>a","formulaData" : "i>50","isDeleted" : "",
...
}
If you know all the paths to the JSON arrays in your code, you can use some of the functions introduced in PostgreSQL 9.4, such as:
SELECT json_array_length('{"array":[{"a":1},{"b":2},{"c":3}]}'::json->'array')
If you need to iterate through a JSON array, there is another useful function:
SELECT json_array_elements('{"array":[{"a":1},{"b":2},{"c":3}]}'::json->'array')
SELECT json_array_elements('[{"a":1},{"b":2},{"c":3}]'::json)
or, if the JSON is stored in a table:
SELECT json_array_elements(tbl.json_value->'array') FROM jsontable AS tbl
It returns a set of JSON values unwrapped from the array, ready for processing.
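Applied to the structure in the question, here is a minimal sketch that unwraps every section, step and question in one pass. It assumes, as in the example above, that the form JSON is stored in jsontable.json_value; the field names are taken from the sample JSON, and each call to json_array_elements acts as an implicit LATERAL join on the previous level. Every resulting row can then drive an INSERT into the corresponding section, step or question table:
SELECT sec ->> 'sectionName'          AS section_name,
       stp ->> 'stepName'             AS step_name,
       qa  ->> 'questionDescription'  AS question_description
FROM jsontable AS tbl
   , json_array_elements(tbl.json_value -> 'sectionElements')  AS se
   , json_array_elements(se  -> 'sectionElement')              AS sec
   , json_array_elements(sec -> 'stepElements')                AS ste
   , json_array_elements(ste -> 'stepElement')                 AS stp
   , json_array_elements(stp -> 'questionAnswerElements')      AS qae
   , json_array_elements(qae -> 'questionAnswerElement')       AS qa;
The counts you asked about come from json_array_length on the same paths; for example, the number of sections (note that sectionElements wraps the real array in one object):
SELECT json_array_length(tbl.json_value -> 'sectionElements' -> 0 -> 'sectionElement') AS section_count
FROM jsontable AS tbl;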
http://www.postgresql.org/docs/9.4/static/functions-json.html
More information about JSON parsing can be found here: How do I query using fields inside the new PostgreSQL JSON datatype?
I am completely new to Gatling/Scala.
I have a scenario to execute. Here it goes:
-->Change the shift timings of the employees.
For the above, I am able to script/code the flow. However, I have a challenge:
-> I need to extract the "new" time values from the response and check whether they match the "new" time values passed in through the parameter (CSV) file.
Approach/logic: extract the date values from the response body and compare them with the date values provided in the CSV file.
Sample Response:
{
  "employeeId": "xxxxxx",
  "schedules": [
    {
      "date": "2019-11-25",
      "schedules": [
        {
          "employeeId": "xxxxxx",
          "laborWeekStartDate": "2019-11-25",   // New edited time
          "laborWeekEndDate": "2019-12-01",     // New edited time
          "schedules": {
            "startTime": "2019-11-25T18:15:00.000Z",
            "endTime": "2019-11-25T23:45:00.000Z",
            "departmentId": xxxxx,
            "departmentName": "abc",
            "lastModifiedTimestamp": "2019-12-11T09:22:44.000Z",
            "breakDetails": [
              {
                "startTime": "2019-11-25T21:00:00.000Z",
                "endTime": "2019-11-25T21:15:00.000Z",
                "type": "break"
              }
            ]
          }
        }
      ]
    }
  ]
}
Here, in the lines below, the right-hand-side values need to be extracted and compared with the values provided in the CSV file:
"startTime":"2019-11-25T18:15:00.000Z",
"endTime":"2019-11-25T23:45:00.000Z",
Please help in performing the above. A step-wise, detailed explanation would be much appreciated, considering I am totally new to this.
Thanks!
Disclaimer: I will provide some useful links that should help you achieve the task. If you encounter any problems along the way, just post a new question.
In order to get a value from a JSON response, you can use a jsonPath check on the HTTP response body. There is an example of how a value can be extracted and saved using this method here: JSON Path Usage for Gatling Tests.
Reading values from a CSV file is possible using the built-in feeder functionality: CSV feeders. Once you have the feeder added, you can reference a value using ${columnName}. There is an example here: Step 03: Use dynamic data with Feeders and Checks.
After this step you have both values in the session. Then, using the Scala language, you should be able to compare those values. Getting a value out of the session is done with session("variableName").as[String].
For example, you could do a String comparison if you first substring the value from the CSV: Scala String comparison. Another option is described here, which is really close to your requirement: How to compare responses from http calls in gatling?
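Putting those pieces together, here is a minimal sketch of what the simulation could look like. The base URL, endpoint, JSON path, CSV file name and column names are all assumptions; adjust them to your actual request and data:
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ShiftTimingSimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://your-host")   // assumed base URL

  // assumed CSV columns: employeeId, startTime, endTime
  val shiftFeeder = csv("shift_timings.csv").circular

  val scn = scenario("Verify edited shift timings")
    .feed(shiftFeeder)
    .exec(
      http("Get schedules")
        .get("/schedules/${employeeId}")                 // assumed endpoint
        // extract the start time from the response and keep it in the session
        .check(jsonPath("$..schedules.startTime").saveAs("actualStartTime"))
    )
    .exec { session =>
      // compare the extracted value with the expected value from the CSV feeder
      val actual   = session("actualStartTime").as[String]
      val expected = session("startTime").as[String]
      if (actual != expected)
        println(s"startTime mismatch: expected $expected, got $actual")
      session
    }

  setUp(scn.inject(atOnceUsers(1)).protocols(httpProtocol))
}
If you only need the assertion itself, the extraction and comparison can also be collapsed into the check, e.g. .check(jsonPath("$..schedules.startTime").is("${startTime}")).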
Good luck! :)
I have this array of documents. I would like to put the fields of "table" on the same level as mastil_antena and the other variables. How can I do that with aggregate?
I'm trying with the aggregate $project stage, but I can't get the result.
Example of Data
[ {
"mastil_antena" : "1",
"nro_platf" : "1",
"antmarcmast" : "ANDREW",
"antmodelmast" : "HWXXX6516DSA3M",
"retmarcmast" : "Ericsson",
"retmodelmast" : "ATM200-A20",
"distmast" : "1.50",
"altncramast" : "41.30",
"ORIENTMAG" : "73.00",
"incelecmast" : "RET",
"incmecmast" : "1.00",
"Feedertypemast" : "Fibra Optica",
"longjumpmast" : "5.00",
"longfo" : "100",
"calibrecablefuerza" : "10 mm",
"longcablefuerza" : "65.00",
"modelorruantena" : "32B66A",
"tiltmecfoto" : "https://secure.appenate.com/Files/FormEntry/47929-92cdf219-3128-4903-8324-a81000602b9d171017114934746000.jpg",
"tiltmecfoto_fh" : "2017-10-18T05:51:22Z",
"az0foto" : "https://secure.appenate.com/Files/FormEntry/47929-92cdf219-3128-4903-8324-a81000602b9d171017115012727000.jpg",
"az0foto_fh" : "2017-10-18T05:55:21Z",
"azneg60foto" : "https://secure.appenate.com/Files/FormEntry/47929-92cdf219-3128-4903-8324-a81000602b9d171017115016199000.jpg",
"azneg60foto_fh" : "2017-10-18T05:55:36Z",
"azpos60foto" : "https://secure.appenate.com/Files/FormEntry/47929-92cdf219-3128-4903-8324-a81000602b9d171017115020147000.jpg",
"azpos60foto_fh" : "2017-10-18T05:55:49Z",
"etiqantenafoto" : "https://secure.appenate.com/Files/FormEntry/47929-92cdf219-3128-4903-8324-a81000602b9d171017114920853000.jpg",
"etiqantenafoto_fh" : "2017-10-18T05:56:01Z",
"tiltelectfoto" : "https://secure.appenate.com/Files/FormEntry/47929-92cdf219-3128-4903-8324-a81000602b9d171017114914236000.jpg",
"tiltelectfoto_fh" : "2017-10-18T05:56:13Z",
"idcablefoto" : "https://secure.appenate.com/Files/FormEntry/47929-92cdf219-3128-4903-8324-a81000602b9d171017114900279000.jpg",
"idcablefoto_fh" : "2017-10-18T05:56:38Z",
"rrutmafoto" : "https://secure.appenate.com/Files/FormEntry/47929-92cdf219-3128-4903-8324-a81000602b9d171017114947279000.jpg",
"rrutmafoto_fh" : "2017-10-18T05:56:49Z",
"etiquetarrufoto" : "https://secure.appenate.com/Files/FormEntry/47929-92cdf219-3128-4903-8324-a81000602b9d171017114954648000.jpg",
"etiquetarrufoto_fh" : "2017-10-18T05:57:02Z",
"rrutmafoto1" : "https://secure.appenate.com/Files/FormEntry/47929-92cdf219-3128-4903-8324-a81000602b9d171017114959738000.jpg",
"rrutmafoto1_fh" : "2017-10-18T05:57:12Z",
"etiquetarrufoto1" : "https://secure.appenate.com/Files/FormEntry/47929-92cdf219-3128-4903-8324-a81000602b9d171017115005545000.jpg",
"etiquetarrufoto1_fh" : "2017-10-18T05:57:27Z",
"botontorre4" : "sstelcel3",
"table" : { /* put all varibles one level up*/
"tecmast" : "LTE",
"frecmast" : "2100",
"secmast" : "1",
"untitled440" : "Salir"
},
"comentmast" : "",
"longfeedmast" : "",
"numtmasmast" : "",
"otra_marca_antena" : "",
"otro_modelo_antena" : ""
}]
Starting from MongoDB version 3.4, you can use $addFields to do this.
//replace "products" with whatever collection name makes sense in your database
db.getCollection('products').aggregate(
    [
        { //1 copy the properties of the "table" subdocument onto the document itself
            $addFields: {
                "tecmast"     : "$table.tecmast",
                "frecmast"    : "$table.frecmast",
                "secmast"     : "$table.secmast",
                "untitled440" : "$table.untitled440"
            }
        },
        {
            //(optional) 2 remove the now-redundant "table" property
            $project: { "table": 0 }
        }
    ]
)
Step 1: use $addFields to copy the properties from the table subdocument onto the top level of each document. Note the $ prefix on the right-hand side values: it makes them references to the table fields rather than literal strings.
Step 2: (optional) remove the "table" property from the documents.
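If you would rather not list every key inside table by hand, here is a sketch of an alternative; it needs MongoDB 3.6+ for $mergeObjects, and the collection name is again only a placeholder:
db.getCollection('products').aggregate(
    [
        // merge the "table" subdocument into the root document; root fields win on duplicate keys
        { $replaceRoot: { newRoot: { $mergeObjects: [ "$table", "$$ROOT" ] } } },
        // drop the original subdocument
        { $project: { "table": 0 } }
    ]
)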
I hope this helps!!!
I have this output from the urlread function:
// [
{
"id": "22144"
,"t" : "AAPL"
,"e" : "NASDAQ"
,"l" : "148.59"
,"l_fix" : "148.59"
,"l_cur" : "148.59"
,"s": "0"
,"ltt":"1:13PM EDT"
,"lt" : "May 5, 1:13PM EDT"
,"lt_dts" : "2017-05-05T13:13:23Z"
,"c" : "+2.06"
,"c_fix" : "2.06"
,"cp" : "1.41"
,"cp_fix" : "1.41"
,"ccol" : "chg"
,"pcls_fix" : "146.53"
,"eo" : ""
,"delay": ""
,"op" : "146.76"
,"hi" : "148.91"
,"lo" : "146.76"
,"vo" : "-"
,"avvo" : "-"
,"hi52" : "148.91"
,"lo52" : "89.47"
,"mc" : "771.93B"
,"pe" : "17.38"
,"fwpe" : ""
,"beta" : "1.21"
,"eps" : "8.55"
,"shares" : "5.21B"
,"inst_own" : "63%"
,"name" : "Apple Inc."
,"type" : "Company"
}
]
My question is: how can I convert this to a two-column cell array? Or, even better, create a structure called AAPL so that, for example, AAPL.l gives me the price?
Use the jsondecode function to convert JSON-formatted text to a MATLAB struct. Typically the text should start with a '[' or '{'. You can try the code on a simpler subset, as below.
jsondecode('{"id": "22144","t" : "AAPL","e" : "NASDAQ","l" : "148.59"}')
This produces a struct with the following fields.
id: '22144'
t: 'AAPL'
e: 'NASDAQ'
l: '148.59'
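For the full urlread output shown above, here is a minimal sketch. Note that the leading "//" has to be stripped before decoding; the variable names are only examples, and jsondecode requires R2016b or newer:
url   = '...';                        % the quote URL you already pass to urlread
raw   = urlread(url);                 % webread is the newer alternative
raw   = regexprep(raw, '^\s*//', '');% drop the leading "//" so the text starts with '['
data  = jsondecode(raw);              % struct array, one element per quoted symbol
AAPL  = data(1);                      % the structure you asked for
price = str2double(AAPL.l);           % fields come back as strings, so convert to a number

% two-column cell array: field names next to their values
fn     = fieldnames(AAPL);
twoCol = [fn, struct2cell(AAPL)];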
{
"_id" : ObjectId("586aac4c8231ee0b98458045"),
"store_code" : NumberInt(10800),
"counter_name" : "R.N.Electric",
"address" : "314 khatipura road",
"locality" : "Khatipura Road (Jhotwara)",
"pincode" : NumberInt(302012),
"town" : "JAIPUR",
"gtm_city" : "JAIPUR",
"sales_office" : "URAJ",
"owner_name" : "Rajeev",
"owner_mobile" : "9828024073",
"division_mapping" : [//this contains only 1 element in every doc
{
"dvcode" : "cfc",
"dc" : "trade",
"beatcode" : "govindpura",
"fos" : {
"_id" : ObjectId("586ab8318231ee0b98458843"),
"loginid" : "9928483483",
"name" : "Arpit Gupta",
"division" : [
"cfc",
"iron"
],
"sales_office" : "URAJ", //office
"gtm_city" : "JAIPUR" //city
},
"beat" : {
"_id" : ObjectId("586d372b39f64316b9c3cbd7"),
"division" : {
"_id" : ObjectId("5869f8b639f6430fe4edee2a"),
"clientdvcode" : NumberInt(40),
"code" : "cfc",
"name" : "Cooking & Fabric Care",
"project_code" : "usha-fos",
"client_code" : "usha",
"agent_code" : "v5global"
},
"beatcode" : "govindpura",
"sales_office" : "URAJ",
"gtm_city" : "JAIPUR",
"active" : true,
"agency_code" : "v5global",
"client_code" : "USHA_FOS",
"proj_code" : "usha-fos",
"fos" : {
"_id" : ObjectId("586ab8318231ee0b98458843"),
"loginid" : "9928483483",
"name" : "Arpit Gupta",
"division" : [
"cfc",
"iron"
],
"sales_office" : "URAJ",
"gtm_city" : "JAIPUR"
}
}
}
],
"distributor_mail" : "sunil.todi#yahoo.in",
"project_code" : "usha-fos",
"client_code" : "usha",
"agent_code" : "v5global",
"distributor_name" : "Sundeep Electrical"
}
There is only one element in division_mapping's array in every document, and I want to find the documents whose dc in division_mapping is "trade".
I have tried the following:
"division_mapping":{$elemMatch:{$eq:{"dc":"trade"}}}})
Don't know what I am doing wrong.
Maybe I have to unwind the array, but is there any other way?
According to the MongoDB documentation:
The $elemMatch operator matches documents that contain an array field with at least one element that matches all the specified query criteria.
Based on that description, to retrieve only the documents whose dc in division_mapping is "trade", please try executing the query below:
db.collection.find({division_mapping:{$elemMatch:{dc:'trade'}}})
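The original attempt fails because $eq:{"dc":"trade"} compares each whole array element against the exact document {dc:"trade"}, and your elements contain many more fields than dc. Since only one condition is involved here, plain dot notation gives the same result and may read more simply:
db.collection.find({ "division_mapping.dc": "trade" })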
I'm using Groovy with MongoDB. I have a result set but need a value from a different grouping of documents. How do I pull that value into the result set I need?
MAIN: Network data
"resource_metadata" : {
"name" : "tapd2e75adf-71",
"parameters" : { },
"fref" : null,
"instance_id" : "9f170531-79d0-48ee-b0f7-9bd2788b1cc5"}
I need the display_name for the network data result set which is contained in the compute data.
CPU data
"resource_id" : "9f170531-79d0-48ee-b0f7-9bd2788b1cc5",
"resource_metadata" : {
"ramdisk_id" : "",
"display_name" : "testinstance0001"}
You can see that the resource_id and the instance_id are the same values. I know there is no join I can do, but I'm reaching out to see if anyone has come across this. I'm using the table model to retrieve data for reporting. A Hashtable has been suggested to me, but I'm not seeing how that would work. Somehow, inside the hasNext loop, I need to include the display_name value in the networking data, so that the valid name from the compute data shows instead of only the GUID.
def docs = meter.find(query).sort(sort).limit(50)
while (docs.hasNext()) {
    def doc = docs.next()
    model.addRow([ doc.get("counter_name"),
                   doc.get("counter_volume"),
                   doc.get("timestamp"),
                   doc.get("resource_metadata").getString("mac"),
                   doc.get("resource_metadata").getString("instance_id"),
                   doc.get("counter_unit") ] as Object[]);
}
Full document:
1st set, where I need the network data measure; it has no name, only an id (resource_metadata.instance_id):
{
"_id" : ObjectId("528812f8be09a32281e137d0"),
"counter_name" : "network.outgoing.packets",
"user_id" : "4d4e43ec79c5497491b23b13644c2a3b",
"timestamp" : ISODate("2013-11-17T00:51:00Z"),
"resource_metadata" : {
"name" : "tap6baab24e-8f",
"parameters" : { },
"fref" : null,
"instance_id" : "a8727a1d-4661-4565-9c0a-511279024a97",
"instance_type" : "50",
"mac" : "fa:16:3e:a3:bf:fc"
},
"source" : "openstack",
"counter_unit" : "packet",
"counter_volume" : 4611911,
"project_id" : "97dc4ca962b040608e7e707dd03f2574",
"message_id" : "54039238-4f22-11e3-8e68-e4115b99a59d",
"counter_type" : "cumulative"
}
2nd set, where I want to grab the name as I get the values (resource_id):
"_id" : ObjectId("5287bc3ebe09a32281dd2594"),
"counter_name" : "cpu",
"user_id" : "4d4e43ec79c5497491b23b13644c2a3b",
"message_signature" :
"timestamp" : ISODate("2013-11-16T18:40:58Z"),
"resource_id" : "a8727a1d-4661-4565-9c0a-511279024a97",
"resource_metadata" : {
"ramdisk_id" : "",
"display_name" : "vmsapng01",
"name" : "instance-000014d4",
"disk_gb" : "",
"availability_zone" : "",
"kernel_id" : "",
"ephemeral_gb" : "",
"host" : "3746d148a76f4e1a8203d7e2378ef48ccad8a714a47e7481ab37bcb6",
"memory_mb" : "",
"instance_type" : "50",
"vcpus" : "",
"root_gb" : "",
"image_ref" : "869be2c0-9480-4239-97ad-df383c6d09bf",
"architecture" : "",
"os_type" : "",
"reservation_id" : ""
},
"source" : "openstack",
"counter_unit" : "ns",
"counter_volume" : NumberLong("724574640000000"),
"project_id" : "97dc4ca962b040608e7e707dd03f2574",
"message_id" : "a240fa5a-4eee-11e3-8e68-e4115b99a59d",
"counter_type" : "cumulative"
}
This is another collection that contains the same value; I just thought it might be easier to grab it from the same collection:
"_id" : "a8727a1d-4661-4565-9c0a-511279024a97",
"metadata" : {
"ramdisk_id" : "",
"display_name" : "vmsapng01",
"name" : "instance-000014d4",
"disk_gb" : "",
"availability_zone" : "",
"kernel_id" : "",
"ephemeral_gb" : "",
"host" : "3746d148a76f4e1a8203d7e2378ef48ccad8a714a47e7481ab37bcb6",
"memory_mb" : "",
"instance_type" : "50",
"vcpus" : "",
"root_gb" : "",
"image_ref" : "869be2c0-9480-4239-97ad-df383c6d09bf",
"architecture" : "",
"os_type" : "",
"reservation_id" : "",
}
Mike
It looks like these data are in 2 different collections, is this correct?
Would you be able to query the CPU data for each "instance_id" ("resource_id")?
Or, if that would cause too many queries to the database (it looks like you limit to 50...), you could use $in with the list of all "instance_id"s:
http://docs.mongodb.org/manual/reference/operator/query/in/
Either way, you will need to query each collection separately.
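A minimal sketch of that idea, assuming the old DBCollection/DBObject driver API used in your snippet and that both document sets live in the same meter collection (adjust the names to your setup): collect the instance ids from the network documents, fetch the matching compute documents in a single $in query, build an id-to-name lookup map, and use it while filling the table model.
import com.mongodb.BasicDBObject

// 1. pull the network documents as before
def networkDocs = meter.find(query).sort(sort).limit(50).toArray()

// 2. gather their instance ids
def instanceIds = networkDocs.collect { it.get("resource_metadata")?.get("instance_id") }.findAll { it }

// 3. one $in query against the compute ("cpu") documents to map id -> display_name
def nameById = [:]
def cpuQuery = new BasicDBObject("counter_name", "cpu")
        .append("resource_id", new BasicDBObject('$in', instanceIds))
meter.find(cpuQuery).each { cpuDoc ->
    nameById[cpuDoc.get("resource_id")] = cpuDoc.get("resource_metadata")?.get("display_name")
}

// 4. fill the table model, showing the display_name when one was found
networkDocs.each { doc ->
    def instanceId = doc.get("resource_metadata")?.get("instance_id")
    model.addRow([ doc.get("counter_name"),
                   doc.get("counter_volume"),
                   doc.get("timestamp"),
                   doc.get("resource_metadata")?.get("mac"),
                   nameById[instanceId] ?: instanceId,
                   doc.get("counter_unit") ] as Object[])
}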