MongoDB insert operation using Groovy script for JMeter

I am trying to load test MongoDB using JMeter. I am using a JSR223 Sampler with Groovy; I am able to connect, but for some reason the insert part is not working.
I need to insert the document below:
"cart" : {
"schema" : "http://dell.com/dcp/schemas/cart/3.0.0#",
"_id" : "s5ChQonvAUGKM6s2Yq8Z31",
"createdOn" : {
"DateTime" : ISODate("2018-03-07T06:54:01.242Z"),
"Ticks" : NumberLong(636560222412422269),
"Offset" : 330
},
"lastModifiedOn" : {
"DateTime" : ISODate("2018-03-07T06:54:01.245Z"),
"Ticks" : NumberLong(636560222412452266),
"Offset" : 330
},
"expiresOn" : {
"DateTime" : ISODate("2019-04-10T08:21:43.984Z"),
"Ticks" : NumberLong(636904813039840000),
"Offset" : 0
},
"commerceContext" : {
"region" : "us",
"country" : "US",
"language" : "en",
"currency" : "USD",
"segment" : "bsd",
"customerSet" : "rc1005388",
"accessGroup" : "DSA",
"companyNumber" : "08",
"businessUnitId" : "11",
"classCode" : "string",
"sourceApplicationName" : "OLRGCOMM"
},
"items" : [],
"shipments" : [],
"price" : {
"couponCodes" : []
},
"references" : [
{
"referenceId" : "8TOOOrdEJUeiGPTqWA226Q",
"referenceType" : "New Cart",
"referencedOn" : {
"DateTime" : ISODate("2018-03-07T06:54:01.239Z"),
"Ticks" : NumberLong(636560222412392112),
"Offset" : 330
},
"referenceCreatedBy" : "DCQO",
"targetSystem" : "DSP",
"target" : "string"
}
],
"validation" : {},
"properties" : {}
}

First of all, you need to get a MongoDB connection from the MongoDB Source Config element. It can be done as follows:
import com.mongodb.DB;
import org.apache.jmeter.protocol.mongodb.config.MongoDBHolder;
DB db = MongoDBHolder.getDBFromSource("mongodb source name", "database name");
Next, you just need to call the DBCollection.insert() function with your document as a DBObject, like:
db.getCollection('your collection name').insert(your DBObject payload here)
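A minimal sketch putting these together (the source name, database name, and collection name are placeholders; note that ISODate(...) and NumberLong(...) from the shell dump are not valid JSON, so either express them as extended JSON or set the dates in code):

import com.mongodb.BasicDBObject
import com.mongodb.DB
import com.mongodb.DBObject
import com.mongodb.util.JSON
import org.apache.jmeter.protocol.mongodb.config.MongoDBHolder

DB db = MongoDBHolder.getDBFromSource('mongodb source name', 'database name')

// Parse the document from a JSON string (shortened here); shell helpers such as
// ISODate(...) must be replaced with plain values or $date/$numberLong extended JSON
String payload = '''{
    "cart" : {
        "schema" : "http://dell.com/dcp/schemas/cart/3.0.0#",
        "_id" : "s5ChQonvAUGKM6s2Yq8Z31",
        "items" : [],
        "shipments" : [],
        "price" : { "couponCodes" : [] }
    }
}'''
DBObject doc = (DBObject) JSON.parse(payload)

// Dates can also be set programmatically instead of via JSON
DBObject cart = (DBObject) doc.get('cart')
cart.put('createdOn', new BasicDBObject('DateTime', new Date()).append('Offset', 330))

db.getCollection('your collection name').insert(doc)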
More information: How to Load Test MongoDB with JMeter

Related

Mongoose Return One Object from Array

I use Mongoose and I have this document in my MongoDB:
"_id" : ObjectId("5f13e1edf7c56a896987c191"),
"deleted" : false,
"email" : "klinikkoding#gmail.com",
"firstName" : "Klinik",
"lastName" : "Koding",
"resetToken" : null,
"workspaces" : [
{
"status" : "active",
"_id" : ObjectId("5f13e124f7c56a896987c18e"),
"code" : "kk",
"name" : "Klinik Koding",
"_roleId" : ObjectId("5f13de3eb33fa33ce2a3b0dd")
},
{
"status" : "invited",
"_id" : ObjectId("5f13e13ff7c56a896987c190"),
"code" : "rm",
"name" : "The Manage",
"_roleId" : ObjectId("5f13de3eb33fa33ce2a3b0dd")
}
],
How can I return only one workspace? The output that I need is like this:
"_id" : ObjectId("5f13e1edf7c56a896987c191"),
"deleted" : false,
"email" : "klinikkoding#gmail.com",
"firstName" : "Klinik",
"lastName" : "Koding",
"resetToken" : null,
"workspaces" : [
{
"status" : "active",
"_id" : ObjectId("5f13e124f7c56a896987c18e"),
"code" : "kk",
"name" : "Klinik Koding",
"_roleId" : ObjectId("5f13de3eb33fa33ce2a3b0dd")
},
],
Can anyone help me with this query?
Try this:
async function retrieve() {
  // You can retrieve it by any field; I used email because it is unique.
  let data = await yourModelName.findOne({ email: "klinikkoding@gmail.com" })
  // Remove the second element so only the first workspace remains
  data.workspaces.splice(1, 1)
  console.log("your final result", data)
}
retrieve()
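Note that splice(1, 1) removes an element by position, so it only works while the workspace you want happens to come first. A position-independent sketch (still assuming yourModelName) filters the array by status instead:

async function retrieve() {
  let data = await yourModelName.findOne({ email: "klinikkoding@gmail.com" })
  // Keep only the matching workspaces, regardless of where they sit in the array
  data.workspaces = data.workspaces.filter(w => w.status === "active")
  console.log("your final result", data)
}
retrieve()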

Upload Data to druid Incrementally

I need to upload data to an existing model, and this has to be done on a daily basis. I guess some changes need to be made in the index file, but I am not able to figure out what. I tried pushing the data with the same model name, but the parent data was removed.
Any help would be appreciated.
Here is the ingestion JSON file:
{
"type" : "index",
"spec" : {
"dataSchema" : {
"dataSource" : "mksales",
"parser" : {
"type" : "string",
"parseSpec" : {
"format" : "json",
"dimensionsSpec" : {
"dimensions" : ["Address",
"City",
"Contract Name",
"Contract Sub Type",
"Contract Type",
"Customer Name",
"Domain",
"Nation",
"Contract Start End Date",
"Zip",
"Sales Rep Name"
]
},
"timestampSpec" : {
"format" : "auto",
"column" : "time"
}
}
},
"metricsSpec" : [
{ "type" : "count", "name" : "count", "type" : "count" },
{"name" : "Price","type" : "doubleSum","fieldName" : "Price"},
{"name" : "Sales","type" : "doubleSum","fieldName" : "Sales"},
{"name" : "Units","type" : "longSum","fieldName" : "Units"}],
"granularitySpec" : {
"type" : "uniform",
"segmentGranularity" : "day",
"queryGranularity" : "none",
"intervals" : ["2000-12-01T00:00:00Z/2030-06-30T00:00:00Z"],
"rollup" : true
}
},
"ioConfig" : {
"type" : "index",
"firehose" : {
"type" : "local",
"baseDir" : "mksales/",
"filter" : "mksales.json"
},
"appendToExisting" : false
},
"tuningConfig" : {
"type" : "index",
"targetPartitionSize" : 10000000,
"maxRowsInMemory" : 40000,
"forceExtendableShardSpecs" : true
}
}
}
There are two ways you can append/update data in an existing segment: reindexing and delta ingestion.
You need to reindex your data every time new data comes in for a particular segment (in your case, each day). For reindexing, you need to supply all the files containing data for that day.
For delta ingestion, you need to use an inputSpec of type "multi", as sketched below.
You can refer to the documentation for more details - http://druid.io/docs/latest/ingestion/update-existing-data.html
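A minimal sketch of the delta-ingestion ioConfig (note that the "multi" inputSpec belongs to the Hadoop batch indexing task, "index_hadoop", rather than the plain "index" task; the interval and path below are placeholders):

"ioConfig" : {
  "type" : "hadoop",
  "inputSpec" : {
    "type" : "multi",
    "children" : [
      {
        "type" : "dataSource",
        "ingestionSpec" : {
          "dataSource" : "mksales",
          "intervals" : ["2018-01-01/2018-01-02"]
        }
      },
      {
        "type" : "static",
        "paths" : "/path/to/new/data/for/that/day.json"
      }
    ]
  }
}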

MongoDB aggregate query to Groovy script

I am new to Groovy scripting. I want to convert a MongoDB aggregate query to a Groovy script that queries MongoDB. Below is the MongoDB query I want to convert:
db.VERISK_METADATA_COLLECTION.aggregate([
{ "$match":{contentType:"FRM"}},
{ "$project":{ "appCount": {$size:"$applicability"}}},
{ "$group": { "_id":null, Total: {$sum:"$appCount" } }},
{ "$project": {Total:1,"_id":0}}
])
Below is one sample record I am writing this query for:
{
"_id" : "04c64247-7030-4b0e-a2f9-da77d27b8667",
"_class" : "com.mongodb.BasicDBObject",
"documentType" : "doc",
"stateType" : "AZ",
"availabilityDate" : "",
"multiState" : "false",
"language" : "E",
"title" : "ARIZONA LIMITED EXCLUSION OF ACTS OF TERRORISM (OTHER THAN CERTIFIED ACTS OF TERRORISM); CAP ON LOSSES FROM CERTIFIED ACTS OF TERRORISM; COVERAGE FOR CERTAIN FIRE LOSSES",
"displayFormNumber" : "FP 10 53 09 05",
"mandatory" : "N",
"client_id" : "VERISK_001",
"ignored_record_type" : "new_publish_insert",
"formLobType" : "FR",
"formNumber" : "FP10530905",
"job_queue_id" : "f8ac839f-24ee-4063-bc3c-7ed00eb91f11",
"contentType" : "FRM",
"directoryName" : "FRFORMS",
"formsType" : "E",
"objectTypeCode" : "13",
"documentName" : "FP10539O",
"doc_id" : [
"f3f528e2-32ca-49f8-b287-b436befae779"
],
"update_date" : "2017-12-27 21:05:21.189 EST",
"earliestEffectiveDate_dt" : "2005-09-01T00:00:00Z",
"formStatus" : "H",
"applicability" : [
{
"filingId_dbValue_str" : "CL-2005-OFOTR",
"jurisdiction" : "AZ",
"withDrawnDate" : "2008-10-01T00:00:00Z",
"filingId" : "CL-2005-OFOTR",
"derivedFrom" : "",
"id" : "0ef65042-6b35-460e-b12a-46766c64ff27",
"circularNumber" : "LI-FR-2005-121",
"lob" : "FR",
"effectiveDate" : "2005-09-01T00:00:00Z",
"circularDate" : "2005-07-15T00:00:00Z"
}
],
"created_date" : "2017-12-27 21:05:21.106 EST",
"uri" : "VERISK_001/FRM/FP10530905"
}
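For reference, a minimal sketch of that pipeline in Groovy using the MongoDB Java driver (the host, port, and database name "mydb" are assumptions):

import com.mongodb.MongoClient
import org.bson.Document

def client = new MongoClient('localhost', 27017)
def collection = client.getDatabase('mydb').getCollection('VERISK_METADATA_COLLECTION')

// Each shell pipeline stage maps to an org.bson.Document
def pipeline = [
    new Document('$match', new Document('contentType', 'FRM')),
    new Document('$project', new Document('appCount', new Document('$size', '$applicability'))),
    new Document('$group', new Document('_id', null).append('Total', new Document('$sum', '$appCount'))),
    new Document('$project', new Document('Total', 1).append('_id', 0))
]

collection.aggregate(pipeline).each { println it }
client.close()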

Druid: how to add numeric data to a metric without an aggregation function

The scenario is that I want to set up a stock quote server and save the quote data into Druid.
My requirement is to get the latest price of all the stocks in one query.
But I notice that the Druid query interfaces, such as timeseries, only work on metric fields, not on dimension fields.
So I considered making the price field one of the metrics, but without aggregation.
How can I do it? Any suggestions?
Here is my Tranquility config file:
{
"dataSources" : {
"stock-index-topic" : {
"spec" : {
"dataSchema" : {
"dataSource" : "stock-index-topic",
"parser" : {
"type" : "string",
"parseSpec" : {
"timestampSpec" : {
"column" : "timestamp",
"format" : "auto"
},
"dimensionsSpec" : {
"dimensions" : ["code","name","acronym","market","tradeVolume","totalValueTraded","preClosePx","openPrice","highPrice","lowPrice","latestPrice","closePx"],
"dimensionExclusions" : [
"timestamp",
"value"
]
},
"format" : "json"
}
},
"granularitySpec" : {
"type" : "uniform",
"segmentGranularity" : "HOUR",
"queryGranularity" : "SECOND",
},
"metricsSpec" : [
{
"name" : "firstPrice",
"type" : "doubleFirst",
"fieldName" : "tradePrice"
},{
"name" : "lastPrice",
"type" : "doubleLast",
"fieldName" : "tradePrice"
}, {
"name" : "minPrice",
"type" : "doubleMin",
"fieldName" : "tradePrice"
}, {
"name" : "maxPrice",
"type" : "doubleMax",
"fieldName" : "tradePrice"
}
]
},
"ioConfig" : {
"type" : "realtime"
},
"tuningConfig" : {
"type" : "realtime",
"maxRowsInMemory" : "100000",
"intermediatePersistPeriod" : "PT10M",
"windowPeriod" : "PT10M"
}
},
"properties" : {
"task.partitions" : "1",
"task.replicants" : "1",
"topicPattern" : "stock-index-topic"
}
}
},
"properties" : {
"zookeeper.connect" : "localhost:2181",
"druid.discovery.curator.path" : "/druid/discovery",
"druid.selectors.indexing.serviceName" : "druid/overlord",
"commit.periodMillis" : "15000",
"consumer.numThreads" : "2",
"kafka.zookeeper.connect" : "localhost:2181",
"kafka.group.id" : "tranquility-kafka"
}
}
I think you should make latestPrice a new numeric dimension; it would be much better from a performance and querying standpoint, considering how Druid works.
Metrics are meant to feed aggregation functions at their core, so they won't be helpful in your use case. See the sketch below.
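A sketch of how the dimensionsSpec could declare the price as a typed numeric dimension (this assumes a Druid version with numeric dimension schema support; older versions may only support "long"/"float"):

"dimensionsSpec" : {
  "dimensions" : [
    "code",
    "name",
    "market",
    { "name" : "latestPrice", "type" : "double" }
  ],
  "dimensionExclusions" : [ "timestamp", "value" ]
}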

Inconsistent query results with embedded documents on MongoDB

I've got a collection called payments; an example document is shown below:
{
"_id" : ObjectId("579b5ee817e3aaac2f0aebc1"),
"updatedAt" : ISODate("2016-07-29T11:04:01.209-03:00"),
"createdAt" : ISODate("2016-07-29T10:49:28.113-03:00"),
"createdBy" : ObjectId("5763f56010cd7b03008147d4"),
"contract" : ObjectId("578cb907f1575f0300d84d09"),
"recurrence" : [
{
"when" : ISODate("2016-05-29T11:03:45.606-03:00"),
"_id" : ObjectId("579b6241ea945e3631f64e2d"),
"transaction" : {
"createdAt" : ISODate("2016-05-29T11:03:45.608-03:00"),
"tid" : "9999999999999999B01A",
"status" : 4,
"code" : "00",
"message" : "Transação autorizada"
},
"status" : "PAGO"
},
{
"when" : ISODate("2016-06-29T11:03:45.608-03:00"),
"_id" : ObjectId("579b6241ea945e3631f64e2c"),
"transaction" : {
"createdAt" : ISODate("2016-06-29T11:03:45.608-03:00"),
"tid" : "9999999999999999B01A",
"status" : 4,
"code" : "00",
"message" : "Transação autorizada"
},
"status" : "PAGO"
},
{
"when" : ISODate("2016-07-29T11:03:45.608-03:00"),
"_id" : ObjectId("579b6241ea945e3631f64e2b"),
"status" : "ERRO",
"transaction" : {
"code" : "56",
"createdAt" : ISODate("2016-07-29T11:04:01.196-03:00"),
"message" : "Autorização negada",
"status" : 5,
"tid" : "1006993069000730B88A"
}
},
{
"when" : ISODate("2016-07-30T11:03:45.608-03:00"),
"_id" : ObjectId("579b6241ea945e3631f64e2a"),
"status" : "PENDENTE"
},
{
"when" : ISODate("2016-07-31T11:03:45.608-03:00"),
"_id" : ObjectId("579b6241ea945e3631f64e29"),
"status" : "PENDENTE"
},
{
"when" : ISODate("2016-08-01T11:03:45.608-03:00"),
"_id" : ObjectId("579b6241ea945e3631f64e28"),
"status" : "PENDENTE"
}
],
"status" : "PAGO",
"conditions" : {
"originalValue" : 7406.64,
"totalValue" : 7400,
"upfrontValue" : 1500,
"upfrontInstallments" : 3,
"balanceInstallments" : 9
},
"__v" : 0,
"transaction" : {
"code" : "00",
"createdAt" : ISODate("2016-07-29T10:49:46.610-03:00"),
"message" : "Transação autorizada",
"status" : 6,
"tid" : "1006993069000730AF5A"
}
}
If I run the query below, I get the desired document shown above:
db.payments.find({ "recurrence.transaction.tid": "1006993069000730B88A" })
However, if I run this other query, MongoDB returns my entire collection (presumably because it didn't match the subdocument's id):
db.payments.find({ "recurrence._id": ObjectId("579b6241ea945e3631f64e2b") })
Both queries should return the same result! I also checked some other questions, including this one, so unless I'm going crazy, I'm doing the same thing. I'm not sure why the results are inconsistent, though.
Try this:
db.payments.find({ recurrence : { $elemMatch: { "transaction.tid": "1006993069000730B88A"} } }).pretty()
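If you want the lookup by the embedded _id as well, the same $elemMatch form should apply (using the ObjectId from the question):

db.payments.find({ recurrence : { $elemMatch: { "_id": ObjectId("579b6241ea945e3631f64e2b") } } }).pretty()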