Timestamp in my trace event .json file does not match after loading in chrome://tracing/ - google-chrome-devtools

Here is an excerpt from my trace_file.json:
[
  {
    "args": {
      "core": 0,
      "opKind": "aicconvolutiond32",
      "opMemory": "TCM",
      "opName": "Conv_323__2_fc__2_tile_0__1",
      "opOperand": [
        "Conv_323__2_fc__2_tile_0__1_res#ui8",
        ...
        " Conv_323__2_fc__2_tile_0__1_vtcmin8_res#i8"
      ],
      "opResource": "HMX",
      "opShape": [
        "Dest [1 x 1 x 1 x 5 x 2048]",
        ...
        " Unnamed10 [1 x 5 x 4096]"
      ]
    },
    "cat": "Conv_323__2_fc__2_tile_0__1",
    "dur": 1.041667,
    "name": "Conv_323",
    "ph": "X",
    "pid": 103,
    "tid": 2,
    "ts": 617480883719.4166
  },
  {
    "args": {
      "core": 0,
      "opKind": "aicconvolutiond32",
      "opMemory": "TCM",
      "opName": "Conv_323__2_fc__2_tile_0__1",
      "opOperand": [
        "Conv_323__2_fc__2_tile_0__1_res#ui8",
        ...
        " Conv_323__2_fc__2_tile_0__1_vtcmin8_res#i8"
      ],
      "opResource": "HMX",
      "opShape": [
        "Dest [1 x 1 x 1 x 5 x 2048]",
        ...
        " Unnamed10 [1 x 5 x 4096]"
      ]
    },
    "cat": "Conv_323__2_fc__2_tile_0__1",
    "dur": 1.041667,
    "name": "Conv_323",
    "ph": "X",
    "pid": 104,
    "tid": 2,
    "ts": 617480883719.4166
  },
  {
    "args": {
      "core": 0,
      "opKind": "aicconvolutiond32",
      "opMemory": "TCM",
      "opName": "Conv_323__2_fc__2_tile_1__1",
      "opOperand": [
        "Conv_323__2_fc__2_tile_1__1_res#ui8",
        ...
        " Conv_323__2_fc__2_tile_1__1_vtcmin8_res#i8"
      ],
      "opResource": "HMX",
      "opShape": [
        "Dest [1 x 1 x 1 x 5 x 2048]",
        ...
        " Unnamed10 [1 x 5 x 4096]"
      ]
    },
    "cat": "Conv_323__2_fc__2_tile_1__1",
    "dur": 0.260417,
    "name": "Conv_323",
    "ph": "X",
    "pid": 103,
    "tid": 2,
    "ts": 617480883720.7188
  },
  {
    "args": {
      "core": 0,
      "opKind": "aicconvolutiond32",
      "opMemory": "TCM",
      "opName": "Conv_323__2_fc__2_tile_1__1",
      "opOperand": [
        "Conv_323__2_fc__2_tile_1__1_res#ui8",
        ...
        " Conv_323__2_fc__2_tile_1__1_vtcmin8_res#i8"
      ],
      "opResource": "HMX",
      "opShape": [
        "Dest [1 x 1 x 1 x 5 x 2048]",
        ...
        " Unnamed10 [1 x 5 x 4096]"
      ]
    },
    "cat": "Conv_323__2_fc__2_tile_1__1",
    "dur": 0.260417,
    "name": "Conv_323",
    "ph": "X",
    "pid": 104,
    "tid": 2,
    "ts": 617480883720.7188
  }
]
Now when I load the same trace file in chrome://tracing/, I see that the start time does not coincide with the value of ts in the trace file:
"ts": 617480883719.4166 in the .json file, but
Start in the snapshot is 2,908,417 ns.
Can someone explain how Chrome tracing normalizes/scales that "ts" value?
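The Trace Event Format specifies ts in microseconds, and the chrome://tracing UI shows each event relative to the earliest timestamp in the loaded trace rather than the absolute ts value (converted as needed, here apparently to nanoseconds). A minimal sketch of that normalization in Python, using the two ts values from the excerpt (the real origin depends on the earliest event in the full file, which is not shown):

```python
# chrome://tracing interprets "ts" as microseconds since an arbitrary origin
# and displays each event relative to the earliest timestamp in the trace,
# so the absolute ts values never appear in the UI.
events = [
    {"name": "Conv_323 tile_0", "ts": 617480883719.4166},  # microseconds
    {"name": "Conv_323 tile_1", "ts": 617480883720.7188},
]

origin_us = min(e["ts"] for e in events)  # earliest event becomes t = 0

for e in events:
    e["rel_us"] = e["ts"] - origin_us   # time shown on the timeline (us)
    e["rel_ns"] = e["rel_us"] * 1000    # same value in nanoseconds
```

With the full trace loaded, the earliest event across all pids defines the origin, which is presumably how a ts of 617480883719.4166 µs ends up displayed as a Start of 2,908,417 ns, i.e. about 2.9 ms after the first event in the file.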

Related

Golang scan db rows to json of string and array

I am trying to get output from the DB using an inner join across three tables — say table A, table B, and a temp table.
Output structs:
type C struct {
    A       A      `json:"A"`
    B       B      `json:"B"`
    SecID   int64  `json:"section_id"`
    SecName string `json:"section_name"`
}
type A struct {
    AID   int64  `json:"aid"`
    Name  string `json:"name"`
    Des   string `json:"des"`
    Price string `json:"price"`
}
type B struct {
    BID    int64  `json:"bid"`
    Answer string `json:"answer"`
    Score  int16  `json:"score"`
}
DB query:
var cs []C
rows, err := db.Query(sqlStatement, RequestBody.tID)
if err != nil {
    log.Fatal(err) // handle the error appropriately
}
defer rows.Close()
for rows.Next() {
    var c C
    err = rows.Scan(&c.A.AID, &c.A.Name, &c.A.Des, &c.A.Price, &c.B.BID, &c.B.Answer, &c.B.Score, &c.SecID, &c.SecName)
    cs = append(cs, c)
}
The above code results in the following output:
[
  {
    "a": {
      "aid": 1,
      "name": "XXXXXX",
      "description": "addd kdjd a jdljljlad",
      "price": "10"
    },
    "section_id": 1,
    "section_name": "personal details",
    "b": {
      "bid": 1,
      "answer": "adfdf d fd d f",
      "score": 0
    }
  },
  {
    "a": {
      "aid": 1,
      "name": "XXXXXX",
      "description": "addd kdjd a jdljljlad",
      "price": "10"
    },
    "section_id": 1,
    "section_name": "personal details",
    "b": {
      "bid": 2,
      "answer": "adfdf d fd d f",
      "score": 10
    }
  }
]
But I want to merge the "b" values into a single field holding a list of objects, writing the "a" field only once, since its values are repeated:
[
  {
    "a": {
      "aid": 1,
      "name": "XXXXXX",
      "description": "addd kdjd a jdljljlad",
      "price": "10"
    },
    "b": [
      {
        "section_id": 1,
        "section_name": "personal details",
        "bid": 1,
        "answer": "adfdf d fd d f",
        "score": 0
      },
      {
        "section_id": 1,
        "section_name": "personal details",
        "bid": 2,
        "answer": "adfdf d fd d f",
        "score": 10
      }
    ]
  }
]
I tried changing the structs, but that doesn't seem to work.
DB details:
Table A (AID, Name, Des, Price)
Table B (BID, Answer, Score)
Query:
select * from A a
inner join temp_table tt on tt.aid = a.aid
inner join B b on b.bid = tt.bid
where a.aid=1;
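The merge being asked for is a grouping step over the scanned rows: key by the "a" columns and accumulate the "b"/section columns into a list. A rough sketch of that step, shown here in Python with stand-in row values (in Go, the same shape falls out of a map keyed by aid):

```python
# Each flat row from the join repeats the "a" columns; group the rows by
# a's id and collect the b/section columns into a list, mirroring the
# desired nested JSON. The row tuples below are stand-ins.
rows = [  # (aid, name, des, price, bid, answer, score, sec_id, sec_name)
    (1, "XXXXXX", "addd", "10", 1, "adfdf", 0, 1, "personal details"),
    (1, "XXXXXX", "addd", "10", 2, "adfdf", 10, 1, "personal details"),
]

grouped = {}
for aid, name, des, price, bid, answer, score, sec_id, sec_name in rows:
    # First row for this aid creates the "a" part with an empty "b" list.
    entry = grouped.setdefault(aid, {
        "a": {"aid": aid, "name": name, "des": des, "price": price},
        "b": [],
    })
    entry["b"].append({
        "section_id": sec_id, "section_name": sec_name,
        "bid": bid, "answer": answer, "score": score,
    })

result = list(grouped.values())  # one object per distinct "a"
```

Emitting `list(grouped.values())` preserves the order in which each distinct "a" first appeared in the scanned rows.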

Remove numeric key from mongodb collection

I have the document below in a collection, and I want to remove the data stored under numeric keys — for example "0" in the document below. I have tried the following code:
$manager = Core_BP_BaseTableMongo::connection();
$bulk = new MongoDB\Driver\BulkWrite;
$bulk->update($wh, ['$unset' => array("0" => true)]);
$result = $manager->executeBulkWrite(Core_BP_BaseTableMongo::getDatabaseName().'.'.$collection_name, $bulk);
Document
{
  "0": {
    "datetimestamp_id": "1613044151UyZKNjrs7l",
    "product_id": ObjectId("5fe155e4045a855ae211e2a4"),
    "image": "",
    "squence": 0,
    "slug": "ks1215",
    "name": "KS1215",
    "dimensions": "",
    "brand_id": ObjectId("5fe12bbbab6506780bd925a3"),
    "brand": "Scihome",
    "brand_slug": "scihome",
    "country": "China",
    "description": "<p>The backrest has a front and rear pushable design that opens to give a bed. The middle of the armrest and the feet are made of stainless steel for a modern look. The two color combinations are more compatible with the style of different living rooms. The activity is on the head to satisfy more sitting and crowd use.</p>\r\n\r\n<p><strong>Dimensions :</strong></p>\r\n\r\n<ul>\r\n\t<li>3 left: L - 1860 mm x W - 1130/1370 mm x H - 835 mm</li>\r\n\t<li>1 no : L - 770 mm x W - 1130/1370 mm x H - 835 mm</li>\r\n\t<li>lying right : L - 1090 mm x W - 1800/2040 mm x H - 835 mm</li>\r\n\t<li>footstool : L - 1080 mm x W - 770 mm x H - 410 mm</li>\r\n\t<li>coffee table : L - 350 mm x W - 1070 mm x H - 645 mm</li>\r\n</ul>",
    "qty": 1,
    "uom": "Each",
    "area": "Drawing hall",
    "notes": "No leather"
  },
  "_id": ObjectId("602519b69613e33c1576866d"),
  "title": "XYZ",
  "project_type": "Duplex Apartments",
  "customer": {
    "customer_id": ObjectId("5fe31123045a855ae22a29e8"),
    "name": "Jay",
    "email": "bd02@xyz.com"
  },
  "status": "Pending",
  "request_date": "2021-02-11 17:19:10",
  "rfq": false,
  "products": [
    {
      "datetimestamp_id": "1613044151UyZKNjrs7l",
      "product_id": ObjectId("5fe155e4045a855ae211e2a4"),
      "image": "",
      "squence": 0,
      "slug": "ks1215",
      "name": "KS1215",
      "dimensions": "",
      "brand_id": ObjectId("5fe12bbbab6506780bd925a3"),
      "brand": "Scihome",
      "brand_slug": "scihome",
      "country": "China",
      "description": "<p>The backrest has a front and rear pushable design that opens to give a bed. The middle of the armrest and the feet are made of stainless steel for a modern look. The two color combinations are more compatible with the style of different living rooms. The activity is on the head to satisfy more sitting and crowd use.</p>\r\n\r\n<p><strong>Dimensions :</strong></p>\r\n\r\n<ul>\r\n\t<li>3 left: L - 1860 mm x W - 1130/1370 mm x H - 835 mm</li>\r\n\t<li>1 no : L - 770 mm x W - 1130/1370 mm x H - 835 mm</li>\r\n\t<li>lying right : L - 1090 mm x W - 1800/2040 mm x H - 835 mm</li>\r\n\t<li>footstool : L - 1080 mm x W - 770 mm x H - 410 mm</li>\r\n\t<li>coffee table : L - 350 mm x W - 1070 mm x H - 645 mm</li>\r\n</ul>",
      "qty": 1,
      "uom": "Each",
      "area": "Drawing hall",
      "notes": "No leather"
    }
  ],
  "shared": [
    {
      "customer_id": ObjectId("5fe31123045a855ae22a29e8"),
      "name": "Jay",
      "email": "bd02@xyz.com",
      "permission": "Owner"
    }
  ],
  "audit_created_by": "Jay",
  "audit_created_date": {
    "sec": 1613044150
  },
  "audit_ip": "172.18.0.1",
  "audit_updated_by": null,
  "audit_updated_date": {
    "sec": 1618221803
  },
  "is_deleted": false
}
I tried $unset as well, but it gives me the error: "Modifiers operate on fields but we found type array instead. For example: {$mod: {<field>: ...}} not {$unset: [ true ]}".
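One likely cause of that error: in PHP, array("0" => true) has a single sequential integer key, so the driver serializes it as a BSON array ([true]) rather than a document, which matches the {$unset: [ true ]} in the error text; casting the operand to an object, e.g. ['$unset' => (object)['0' => true]], should avoid that. For illustration, here is the transformation the update is meant to achieve, sketched in Python over a plain dict (the field values are stand-ins for the real document):

```python
# Illustrative only: drop every top-level key that is all digits
# ("0", "1", ...), mirroring what the $unset update should achieve.
doc = {
    "0": {"slug": "ks1215"},            # unwanted numeric key
    "_id": "602519b69613e33c1576866d",  # stand-in for the ObjectId
    "title": "XYZ",
    "products": [{"slug": "ks1215"}],
}

cleaned = {k: v for k, v in doc.items() if not k.isdigit()}
```

In pymongo, where dicts always serialize as BSON documents, the equivalent update would be roughly collection.update_one(filter, {"$unset": {"0": ""}}), and the array-vs-document pitfall does not arise.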

How to force plotly R to plot missing values with category axis

I want to plot a simple bar chart in plotly R, and by default it skips the NA observations. In ggplot we have a parameter to disable the default NA removal, or we can customize the axis limits; however, I cannot get this done in plotly.
dt_plot <- data.frame(categories = letters[1:10], values = c(rep(NA_integer_, 3), 1:5, rep(NA_integer_, 2)))
plot_ly(data = dt_plot) %>%
add_bars(x = ~categories, y = ~values)
I want to always show the full letters[1:10] on the x axis, because I'm including this plot in a Shiny app with dynamic data selection. Some data have values for all x categories, some only for a subset. I want the plot to stay consistent and always show the complete x values.
There is a similar question here, but the answer doesn't apply to my case because I'm using a category-type x axis:
https://plot.ly/r/reference/#layout-xaxis
If the axis type is "category", it should be numbers, using the scale where each category is assigned a serial number from zero in the order it appears.
I tried different range combinations, but they didn't work; plotly seemed to always remove the NA observations first.
There is a related issue where Carson Sievert suggested a hack, but it didn't really work for me either:
# this does show all the x values, but the plot is wrong
layout(xaxis = list(type = "category", tickvals = 1:10/10, ticktext = letters[1:10], range = c(0, 1)))
By inspecting the plot object, it looks like the NA data is removed before constructing the plot:
{
  "visdat": {
    "7abc7354f619": ["function () ", "plotlyVisDat"]
  },
  "cur_data": "7abc7354f619",
  "attrs": {
    "7abc7354f619": {
      "alpha_stroke": 1,
      "sizes": [10, 100],
      "spans": [1, 20],
      "x": {},
      "y": {},
      "type": "bar",
      "inherit": true
    }
  },
  "layout": {
    "margin": {
      "b": 40,
      "l": 60,
      "t": 25,
      "r": 10
    },
    "xaxis": {
      "domain": [0, 1],
      "automargin": true,
      "range": [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
      "title": "categories",
      "type": "category",
      "categoryorder": "array",
      "categoryarray": ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]
    },
    "yaxis": {
      "domain": [0, 1],
      "automargin": true,
      "title": "values"
    },
    "hovermode": "closest",
    "showlegend": false
  },
  "source": "A",
  "config": {
    "showSendToCloud": false
  },
  "data": [
    {
      "x": ["d", "e", "f", "g", "h"],
      "y": [1, 2, 3, 4, 5],
      "type": "bar",
      "marker": {
        "color": "rgba(31,119,180,1)",
        "line": {
          "color": "rgba(31,119,180,1)"
        }
      },
      "error_y": {
        "color": "rgba(31,119,180,1)"
      },
      "error_x": {
        "color": "rgba(31,119,180,1)"
      },
      "xaxis": "x",
      "yaxis": "y",
      "frame": null
    }
  ],
  ...
There is info in the reference (https://plotly.com/r/reference/layout/xaxis/#layout-xaxis-range), but it is not so obvious.
First of all, the categorical variable needs to be a factor.
Second, you need to specify the range from 0 to the number of levels minus 1; for better spacing, use ±0.5 at the ends:
dt_plot <- data.frame(categories = letters[1:10], values = c(rep(NA_integer_, 3), 1:5, rep(NA_integer_, 2)))
dt_plot$categories <- as.factor(dt_plot$categories)
plot_ly(data = dt_plot) %>%
add_bars(x = ~categories, y = ~values)%>%
layout(xaxis = list(range = list(-0.5, 9.5)))
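The arithmetic behind that range is simple: on a plotly category axis, each factor level is assigned a serial index starting at zero, so forcing all levels to appear means spanning from -0.5 to n - 0.5. A small sketch of that calculation (in Python purely for illustration):

```python
# On a plotly category axis each level gets a serial index 0..n-1,
# so a range of [-0.5, n - 0.5] shows every level with half-bar padding.
import string

categories = list(string.ascii_lowercase[:10])  # "a".."j", as in letters[1:10]
n = len(categories)
axis_range = [-0.5, n - 0.5]                    # [-0.5, 9.5]
```

This matches why range = list(-0.5, 9.5) works for ten letters: indices 0 through 9 plus half a slot of padding on each side.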

How to get GitHub data from various URLs and store it in a single MongoDB

I'm trying to get GitHub data using Talend Big Data. The thing is, I have multiple URLs, because I use each URL to take some values and store them into a single MongoDB. I'm going to fetch the information in the order below:
https://api.github.com/users/sample/repos
https://api.github.com/repos/sample/awesome-ciandcd/commits
https://api.github.com/repos/sample/awesome-ciandcd/contributors
Each URL returns a single JSON array with a different data format. Please give some suggestions on how to do this. I've already tried the sub-job component, but did not get a clear job design.
My output should look like:
{
  "gitname": "sample",
  "gitType": "User",
  "password": "password",
  "repoCount": 3,
  "repositoryDetails": [
    {
      "repoName": "MergeCheckRepository",
      "fileCount": 10,
      "branchCount": 6,
      "releaseCount": 2,
      "commitsCount": 10,
      "contributorsCount": 3,
      "totalPulls": 1,
      "mergeCount": 1,
      "totalIssues": 12,
      "closedIssueCount": 3,
      "watchersCount": 1,
      "stargazersCount": 4,
      "contributorsDetails": [
        {
          "login": "sample",
          "avatarURL": "https://avatars2.githubusercontent.com/u/30261572?v=4",
          "contributions": 3
        }
      ],
      "commitDetails": [
        {
          "name": "sample",
          "email": "sampletest@test.com",
          "date": "2017-07-20T09:09:09Z"
        }
      ]
    },
    {
      "repoName": "Dashboard",
      "fileCount": 19,
      "branchCount": 4,
      "releaseCount": 2,
      "commitsCount": 5,
      "contributorsCount": 3,
      "totalPulls": 1,
      "totalIssues": 2,
      "closedIssueCount": 3,
      "watchersCount": 1,
      "stargazersCount": 4,
      "contributorsDetails": [
        {
          "login": "sample",
          "avatarURL": "https://avatars2.githubusercontent.com/u/30261572?v=4",
          "contributions": 3
        },
        {
          "login": "Dashboard",
          "avatarURL": "https://avatars2.githubusercontent.com/u/30261572?v=4",
          "contributions": 3
        }
      ],
      "commitDetails": [
        {
          "name": "sample",
          "email": "sampletest@test.com",
          "date": "2017-07-14T09:09:09Z"
        },
        {
          "name": "Dashboard",
          "email": "prakash.thangasamy@test.com",
          "date": "2017-07-19T09:09:09Z"
        },
        {
          "name": "testrepo",
          "email": "test.dashboard@test.com",
          "date": "2017-07-20T09:09:09Z"
        }
      ]
    }
  ]
}
How can I achieve this with sub-jobs? Is there any other way to do this?
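Whatever tool drives the HTTP requests, the assembly step is just merging the three responses by repository before a single MongoDB write. A rough sketch of that merge under that assumption (in Python for illustration; the field names are copied from the desired output above, and the data is a stand-in, not a real API payload):

```python
# Stand-ins for the three endpoint responses, keyed so they can be joined
# per repository. In the real job these come from the GitHub API calls.
repos = [{"name": "MergeCheckRepository"}, {"name": "Dashboard"}]
commits = {
    "MergeCheckRepository": [{"name": "sample", "date": "2017-07-20T09:09:09Z"}],
}
contributors = {
    "MergeCheckRepository": [{"login": "sample", "contributions": 3}],
}

# Assemble one document per user in the target shape, ready for one insert.
document = {
    "gitname": "sample",
    "repoCount": len(repos),
    "repositoryDetails": [
        {
            "repoName": r["name"],
            "commitDetails": commits.get(r["name"], []),
            "contributorsDetails": contributors.get(r["name"], []),
        }
        for r in repos
    ],
}
```

Keying the per-repo responses by repository name is what lets the three separate URL fetches collapse into the single nested document, regardless of whether the fetching happens in Talend sub-jobs or a small script.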

PostgreSQL: I have to return 0

I'm trying to write a PostgreSQL 9.3 query. The problem is that I have to count, per cleaner, all ratings below 4. Here is my query:
SELECT count(ratings.score) as below, avg(ratings.score) as avg_rating, cleaners.first_name, cleaners.last_name, cleaners.id, cleaners.created_at
FROM "cleaners"
LEFT JOIN ratings ON ratings.cleaner_id = cleaners.id
GROUP BY cleaners.first_name, cleaners.last_name, cleaners.id, cleaners.created_at
It returns the following result:
{
  "HTTP_CODE": 200,
  "cleaners": [
    {
      "id": 29,
      "rating_below_3_stars": 1,
      "avg_rating": "5.0",
      "first_name": "asen",
      "last_name": "asenov"
    },
    {
      "id": 35,
      "rating_below_3_stars": 2,
      "avg_rating": "2.5",
      "first_name": "Simepl",
      "last_name": "cleaner"
    }
  ]
}
For the cleaner with id 29, rating_below_3_stars has to be set to 0.
What I want is:
{
  "HTTP_CODE": 200,
  "cleaners": [
    {
      "id": 29,
      "rating_below_3_stars": 0,
      "avg_rating": "5.0",
      "first_name": "asen",
      "last_name": "asenov"
    },
    {
      "id": 35,
      "rating_below_3_stars": 2,
      "avg_rating": "2.5",
      "first_name": "Simepl",
      "last_name": "cleaner"
    }
  ]
}
You should count only the ratings below 4. Note that sum(case when ratings.score <= 3 then 1 else null end) would return NULL for a cleaner with no low ratings; count ignores NULLs and returns 0, which is what you want:
SELECT count(case when ratings.score <= 3 then 1 end) as below,
       avg(ratings.score) as avg_rating,
       cleaners.first_name, cleaners.last_name, cleaners.id,
       cleaners.created_at
FROM "cleaners"
LEFT JOIN ratings ON ratings.cleaner_id = cleaners.id
GROUP BY cleaners.first_name, cleaners.last_name, cleaners.id, cleaners.created_at
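The key subtlety here is that SQL COUNT(expr) skips NULLs and returns 0 for a group with no matches, while SUM over all-NULL input yields NULL. A small sketch of those semantics outside SQL (in Python; the score lists are stand-ins consistent with the example output):

```python
# SQL semantics sketch: COUNT(expr) skips NULLs and returns 0 when nothing
# matches, while SUM over all-NULL input returns NULL (None here).
ratings_by_cleaner = {29: [5, 5], 35: [2, 3]}  # stand-in scores per cleaner

def count_below(scores, threshold=4):
    # count(case when score < threshold then 1 end)
    return sum(1 for s in scores if s < threshold)

def sum_case_below(scores, threshold=4):
    # sum(case when score < threshold then 1 else null end)
    matched = [1 for s in scores if s < threshold]
    return sum(matched) if matched else None  # SUM of all NULLs is NULL

assert count_below(ratings_by_cleaner[29]) == 0        # 0, as desired
assert sum_case_below(ratings_by_cleaner[29]) is None  # NULL, not 0
assert count_below(ratings_by_cleaner[35]) == 2
```

This is why the count(case ... end) form produces the desired 0 for cleaner 29 without needing a COALESCE around a SUM.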