I am new to database indexing. My application has the following "find" and "update" queries, which search by single and multiple fields:
| query                | reference | timestamp | phone | username | key | address |
|----------------------|-----------|-----------|-------|----------|-----|---------|
| update               | x         |           |       |          |     |         |
| findOne              |           | x         | x     |          |     |         |
| find/limit:16        |           | x         | x     | x        |     |         |
| find/limit:11        |           | x         |       |          | x   | x       |
| find/limit:1/sort:-1 |           | x         | x     |          | x   | x       |
| find                 |           | x         |       |          |     |         |
1)update({"reference":"f0d3dba-278de4a-79a6cb-1284a5a85cde"}, ……….
2)findOne({"timestamp":"1466595571", "phone":"9112345678900"})
3)find({"timestamp":"1466595571", "phone":"9112345678900", "username":"a0001a"}).limit(16)
4)find({"timestamp":"1466595571", "key":"443447644g5fff", "address":"abc road, mumbai, india"}).limit(11)
5)find({"timestamp":"1466595571", "phone":"9112345678900", "key":"443447644g5fff", "address":"abc road, mumbai, india"}).sort({"_id":-1}).limit(1)
6)find({"timestamp":"1466595571"})
I am creating these indexes:
db.coll.createIndex( { "reference": 1 } ) //for 1st, 6th query
db.coll.createIndex( { "timestamp": 1, "phone": 1, "username": 1 } ) //for 2nd, 3rd query
db.coll.createIndex( { "timestamp": 1, "key": 1, "address": 1, phone: 1 } ) //for 4th, 5th query
Is this the correct way?
Please help me
Thank you
I think what you have done looks fine. One way to check whether a query is using an index, which index is being used, and whether that index is effective is to use the explain() function alongside your find().
For example:
db.coll.find({"timestamp":"1466595571"}).explain()
will return a JSON document detailing which index (if any) was used. In addition to this, you can ask explain() to return "executionStats",
eg.
db.coll.find({"timestamp":"1466595571"}).explain("executionStats")
This will tell you how many index keys were examined to find the result set as well as the execution time and other useful metrics.
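For illustration, here is a trimmed example of the kind of output explain("executionStats") produces (the field names below are from the standard explain() format; the plan shape and the numbers depend entirely on your data, indexes and MongoDB version):
"winningPlan" : {
    "stage" : "FETCH",
    "inputStage" : {
        "stage" : "IXSCAN",
        "indexName" : "timestamp_1_phone_1_username_1",
        ...
    }
},
"executionStats" : {
    "nReturned" : 2,
    "totalKeysExamined" : 2,
    "totalDocsExamined" : 2,
    "executionTimeMillis" : 0,
    ...
}
A "COLLSCAN" stage instead of "IXSCAN", or totalDocsExamined far larger than nReturned, are the usual signs that an index is missing or not being used.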
I have a column with values counting occurrences.
I am trying to continue the series in Power Query.
I am thus trying to add 1 to the max of the given column.
The ID column has rows tagged with the letters AB or BE. Each tag is followed by a number, and for both AB and BE the numbers run from 0000 to 3000 and from 3001 to 6000.
I thus have the following possibilities:
From AB0000 to AB3000
From AB3001 to AB6000
From BE0000 to BE3000
From BE3001 to BE6000
Each category matches a specific item in my geography column, from the other workbook:
From AB0000 to AB3000, it is ItalyZ
From AB3001 to AB6000, it is ItalyB
From BE0000 to BE3000, it is UKY
From BE3001 to BE6000, it is UKM
I am thus trying to find the highest number associated with the first AB category, the second AB category, the first BE category, and the second BE category.
My issue is that for some values there is simply "nothing" yet in the source file.
This means that there is no occurrence yet of UKM, for example.
Here is an example with no UKM or UKY:
| Max  | Geography |
|------|-----------|
| 0562 | ItalyZ    |
| 0563 | ItalyZ    |
Hence, I have the following result:
| Increment | Place  |
|-----------|--------|
| 0564      | ItalyZ |
| 0565      | ItalyZ |
| 0565      | ItalyZ |
| null      | UKM    |
Here is the Power Query code used:
let
    Source = #table({"Prefix", "Seq_Start", "Seq_End", "GeoLocation"}, {{"AB", 0, 2999, "ItalyZ"}, {"AB", 3000, 6000, "ItalyB"}, {"BE", 0, 2999, "UKY"}, {"BE", 3000, 6000, "UKM"}}),
    #"Changed Type" = Table.TransformColumnTypes(Source, {{"Seq_Start", Int64.Type}, {"Seq_End", Int64.Type}}),
    #"Merged Queries" = Table.NestedJoin(#"Changed Type", {"Prefix"}, HighestID, {"Prefix"}, "HighestID", JoinKind.LeftOuter),
    #"Expanded HighestID" = Table.ExpandTableColumn(#"Merged Queries", "HighestID", {"Number"}, {"Number"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded HighestID", each [Number] >= [Seq_Start] and [Number] <= [Seq_End]),
    #"Grouped Rows" = Table.Group(#"Filtered Rows", {"Prefix", "Seq_Start", "Seq_End", "GeoLocation"}, {{"NextSeq", each List.Max([Number]) + 1, type number}})
in
    #"Grouped Rows"
I would like to know how I can ensure that, for the first occurrence of a value, I get "0000" (or 0) rather than "null", and then the normal increments for the following occurrences.
For example, if there were 0 occurrences of UKY before, then (for reasons I do not understand) the end result is as follows:
| Increment | Place |
|-----------|-------|
| 1         | UKM   |
| 2         | UKM   |
This is not ideal, because UKM should start at 3001. And because no values were recorded before, it starts with "null" and then 1, 2... rather than 3001 and 3002.
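A minimal sketch of one way to handle this, assuming the step names from the query above: let rows with a null Number survive the filter, then fall back to the category's Seq_Start when it has no existing IDs yet (use Seq_Start + 1 if the first issued number should be 3001 rather than 3000).
#"Filtered Rows" = Table.SelectRows(#"Expanded HighestID",
    each [Number] = null or ([Number] >= [Seq_Start] and [Number] <= [Seq_End])),
#"Grouped Rows" = Table.Group(#"Filtered Rows",
    {"Prefix", "Seq_Start", "Seq_End", "GeoLocation"},
    {{"NextSeq", each
        let existing = List.RemoveNulls([Number])
        in if List.IsEmpty(existing) then [Seq_Start]{0} else List.Max(existing) + 1,
      type number}})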
I'm using Power BI to visualize my data saved in a MongoDB database.
My records look like this:
{
    "_id": 0,
    "code_zone": "ABCD",
    "type_zone": "Beautiful",
    "all_coordinates": [{
        "type": "Feature",
        "geometry": {
            "type": "Point",
            "one_coordinates": [10.11, 40.44]
        },
        "properties": {
            "limite_vertical_min": "L0",
            "limite_vertical_max": "L100"
        }
    }]
}
When I import the data into Power BI, it splits my records into 3 "tables":
my_collection
my_collection.all_coordinates
my_collection.all_coordinates.one_coordinates
Because I didn't know how to fix this issue, I selected these 3 tables and linked them using the id.
Currently, I can visualize this:
_id | code_zone | index_all_coordinates | index_one_coordinate | value
----------------------------------------------------------------------
id0 | ABCD | 1 | 0 | 10.11
----------------------------------------------------------------------
id0 | ABCD | 1 | 1 | 40.44
I'm expecting to have this:
_id | code_zone | index_all_coordinates | value_x | value_y
------------------------------------------------------------
id0 | ABCD | 1 | 10.11 | 40.44
------------------------------------------------------------
Is this the right approach, or do I have to refactor my data before importing it into Power BI?
How can I merge these two lines into one with Power BI?
To get from the first table to the second, you can pivot on the index_one_coordinate column and then relabel those new columns 0 and 1 to value_x and value_y.
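A minimal sketch in Power Query M of that pivot, assuming the merged table is loaded as a query named Coordinates with the column names shown above (both the query name and the column names are placeholders for whatever your model actually uses):
let
    Source = Coordinates,
    // the pivoted values become column names, so they must be text
    Typed = Table.TransformColumnTypes(Source, {{"index_one_coordinate", type text}}),
    // one new column per distinct index_one_coordinate value, filled from the "value" column
    Pivoted = Table.Pivot(Typed, List.Distinct(Typed[index_one_coordinate]), "index_one_coordinate", "value"),
    // relabel the new 0/1 columns
    Renamed = Table.RenameColumns(Pivoted, {{"0", "value_x"}, {"1", "value_y"}})
in
    Renamed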
I need a SQL query in Postgres that produces a JSON value with grouped/nested data,
see example below.
I have a table "issues" with the following example data:
+--------------------------------------+-------+------------+-----------------------+
| product_id | level | typology | comment |
+--------------------------------------+-------+------------+-----------------------+
| e1227f18-0c1f-4ebb-8cbf-a09c74ba14f5 | 1 | electronic | LED broken |
| e1227f18-0c1f-4ebb-8cbf-a09c74ba14f5 | 1 | mechanical | missing gear |
| e1227f18-0c1f-4ebb-8cbf-a09c74ba14f5 | 1 | mechanical | cover damaged |
| e1227f18-0c1f-4ebb-8cbf-a09c74ba14f5 | 2 | electric | switch wrong color |
| e1227f18-0c1f-4ebb-8cbf-a09c74ba14f5 | 2 | mechanical | missing o-ring |
| e1227f18-0c1f-4ebb-8cbf-a09c74ba14f5 | 2 | electric | plug wrong type |
| 3567ae01-c7b3-4cd7-9e4f-85730aab89ee | 1 | mechanical | gear wrong dimensions |
+--------------------------------------+-------+------------+-----------------------+
product_id, typology and comment are strings.
level is an integer.
I want to obtain this JSON:
{
"e1227f18-0c1f-4ebb-8cbf-a09c74ba14f5": {
"1": {
"electronic": [ "LED broken" ],
"mechanical": [ "missing gear", "cover damaged"]
},
"2": {
"electronic": [ "switch wrong color", "plug wrong type" ],
"mechanical": [ "missing o-ring" ]
}
},
"3567ae01-c7b3-4cd7-9e4f-85730aab89ee": {
"1": {
"mechanical": [ "gear wrong dimensions"]
}
}
}
So I began to write a query like this:
SELECT array_to_json(array_agg(json_build_object(
product_id, json_build_object(
level, json_build_object(
typology, comment
)
)
))) FROM issues
but I couldn't figure out how to group/aggregate to obtain the wanted JSON.
step-by-step demo:db<>fiddle
SELECT
jsonb_object_agg(key, value)
FROM (
SELECT
jsonb_build_object(product_id, jsonb_object_agg(key, value)) as products
FROM (
SELECT
product_id,
jsonb_build_object(level, jsonb_object_agg(key, value)) AS level
FROM (
SELECT
product_id,
level,
jsonb_build_object(typology, jsonb_agg(comment)) AS typology
FROM
issues
GROUP BY product_id, level, typology
) s,
jsonb_each(typology)
GROUP BY product_id, level
) s,
jsonb_each(level)
GROUP BY product_id
) s,
jsonb_each(products)
jsonb_agg() aggregates values into one JSON array. This is what happens with the comments.
After that comes a more complicated step. To aggregate two different JSON objects into one object, you need to do the following:
simplified demo:db<>fiddle
First you need to expand the elements into a key and a value column using jsonb_each(). Then you can aggregate these two columns using the aggregate function jsonb_object_agg().
This is why the steps look somewhat difficult. Every level of aggregation (level and product_id) needs these steps, because you want to merge the elements into single, non-array JSON objects.
Because every single aggregation needs separate GROUP BY clauses, every step is done in its own subquery.
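For illustration only (this is separate from the answer's query), a minimal standalone example of that expand-and-reaggregate pattern:
-- merge two single-key jsonb objects into one object:
-- expand each object into (key, value) rows, then re-aggregate
SELECT jsonb_object_agg(key, value)
FROM (VALUES ('{"a": 1}'::jsonb), ('{"b": 2}'::jsonb)) AS t(obj),
     jsonb_each(obj);
-- result: {"a": 1, "b": 2}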
Situation
Using Python 3, Django 1.9, Cubes 1.1, and Postgres 9.5.
These are my data tables, in text format:
Store table
------------------------------
| id | code | address |
|-----|------|---------------|
| 1 | S1 | Kings Row |
| 2 | S2 | Queens Street |
| 3 | S3 | Jacks Place |
| 4 | S4 | Diamonds Alley|
| 5 | S5 | Hearts Road |
------------------------------
Product table
------------------------------
| id | code | name |
|-----|------|---------------|
| 1 | P1 | Saucer 12 |
| 2 | P2 | Plate 15 |
| 3 | P3 | Saucer 13 |
| 4 | P4 | Saucer 14 |
| 5 | P5 | Plate 16 |
| and many more .... |
|1000 |P1000 | Bowl 25 |
|----------------------------|
Sales table
----------------------------------------
| id | product_id | store_id | amount |
|-----|------------|----------|--------|
| 1 | 1 | 1 |7.05 |
| 2 | 1 | 2 |9.00 |
| 3 | 2 | 3 |1.00 |
| 4 | 2 | 3 |1.00 |
| 5 | 2 | 5 |1.00 |
| and many more .... |
| 1000| 20 | 4 |1.00 |
|--------------------------------------|
The relationships are:
Sales belongs to Store
Sales belongs to Product
Store has many Sales
Product has many Sales
What I want to achieve
I want to use Cubes to produce a paginated display in the following manner:
Given the stores S1-S3:
-------------------------
| product | S1 | S2 | S3 |
|---------|----|----|----|
|Saucer 12|7.05|9 | 0 |
|Plate 15 |0 |0 | 2 |
| and many more .... |
|------------------------|
Note the following:
Even though there were no records in sales for Saucer 12 under Store S3, I displayed 0 instead of null or none.
I want to be able to sort by store, say in descending order for S3.
The cells indicate the SUM total of that particular product spent in that particular store.
I also want to have pagination.
What I tried
This is the configuration I used:
"cubes": [
{
"name": "sales",
"dimensions": ["product", "store"],
"joins": [
{"master":"product_id", "detail":"product.id"},
{"master":"store_id", "detail":"store.id"}
]
}
],
"dimensions": [
{ "name": "product", "attributes": ["code", "name"] },
{ "name": "store", "attributes": ["code", "address"] }
]
This is the code I used:
result = browser.aggregate(drilldown=['Store','Product'],
order=[("Product.name","asc"), ("Store.name","desc"), ("total_products_sale", "desc")])
I didn't get what I wanted.
I got this instead:
----------------------------------------------
| product_id | store_id | total_products_sale |
|------------|----------|---------------------|
| 1 | 1 | 7.05 |
| 1 | 2 | 9 |
| 2 | 3 | 2.00 |
| and many more .... |
|---------------------------------------------|
which is the whole table with no pagination, and products not sold in a store do not show up as zero.
My question
How do I get what I want?
Do I need to create another data table that aggregates everything by store and product before I use cubes to run the query?
Update
I have read more. I realised that what I want is called dicing as I needed to go across 2 dimensions. See: https://en.wikipedia.org/wiki/OLAP_cube#Operations
Cross-posted at Cubes GitHub issues to get more attention.
This is a pure SQL solution using crosstab() from the additional tablefunc module to pivot the aggregated data. It typically performs better than any client-side alternative. If you are not familiar with crosstab(), read this first:
PostgreSQL Crosstab Query
And this about the "extra" column in the crosstab() output:
Pivot on Multiple Columns using Tablefunc
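If the tablefunc extension is not installed in your database yet (it is not enabled by default), enable it once per database:
CREATE EXTENSION IF NOT EXISTS tablefunc;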
SELECT product_id, product
, COALESCE(s1, 0) AS s1 -- 1. ... displayed 0 instead of null
, COALESCE(s2, 0) AS s2
, COALESCE(s3, 0) AS s3
, COALESCE(s4, 0) AS s4
, COALESCE(s5, 0) AS s5
FROM crosstab(
'SELECT s.product_id, p.name, s.store_id, s.sum_amount
FROM product p
JOIN (
SELECT product_id, store_id
, sum(amount) AS sum_amount -- 3. SUM total of product spent in store
FROM sales
GROUP BY product_id, store_id
) s ON p.id = s.product_id
ORDER BY s.product_id, s.store_id;'
, 'VALUES (1),(2),(3),(4),(5)' -- desired store_id's
) AS ct (product_id int, product text -- "extra" column
, s1 numeric, s2 numeric, s3 numeric, s4 numeric, s5 numeric)
ORDER BY s3 DESC; -- 2. ... descending order for S3
Produces your desired result exactly (plus product_id).
To include products that have never been sold replace [INNER] JOIN with LEFT [OUTER] JOIN.
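For illustration, the source query inside crosstab() would then look roughly like this (the row name switches to p.id, since s.product_id is NULL for products without sales):
SELECT p.id, p.name, s.store_id, s.sum_amount
FROM   product p
LEFT   JOIN (
   SELECT product_id, store_id
        , sum(amount) AS sum_amount
   FROM   sales
   GROUP  BY product_id, store_id
   ) s ON p.id = s.product_id
ORDER  BY p.id, s.store_id;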
SQL Fiddle with base query.
The tablefunc module is not installed on sqlfiddle.
Major points
Read the basic explanation in the reference answer for crosstab().
I am including product_id because product.name is hardly unique. This might otherwise lead to sneaky errors conflating two different products.
You don't need the store table in the query if referential integrity is guaranteed.
ORDER BY s3 DESC works, because s3 references the output column where NULL values have been replaced with COALESCE. Else we would need DESC NULLS LAST to sort NULL values last:
PostgreSQL sort by datetime asc, null first?
For building crosstab() queries dynamically consider:
Dynamic alternative to pivot with CASE and GROUP BY
I also want to have pagination.
That last item is fuzzy. Simple pagination can be had with LIMIT and OFFSET:
Displaying data in grid view page by page
I would consider a MATERIALIZED VIEW to materialize results before pagination. If you have a stable page size I would add page numbers to the MV for easy and fast results.
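For illustration, a minimal pagination sketch, assuming the crosstab result has been materialized as product_sales_mv (a hypothetical name) and a page size of 16:
SELECT *
FROM   product_sales_mv
ORDER  BY s3 DESC, product_id   -- deterministic order keeps pages stable
LIMIT  16
OFFSET 32;                      -- (page_number - 1) * page_size, here page 3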
To optimize performance for big result sets, consider:
SQL syntax term for 'WHERE (col1, col2) < (val1, val2)'
Optimize query with OFFSET on large table
Given that I have this data in my mongo collection:
product_id | original_id | text
1 | "A00149" | "1280 x 1024"
1 | "A00373" | "Black"
2 | "A00149" | "1280 x 1024"
2 | "A00373" | "White"
3 | "A00149" | "1980 x 1200"
3 | "A00373" | "Black"
(I have added quotes around the values here - they are not in the real collection.)
With the following query, I'm getting 0 results, though I was expecting 1.
product_id = 1 should match the query.
Can somebody explain to me what I'm doing wrong?
In SQL the WHERE clause would look like this:
WHERE
(original_id = "A00149" AND text = "1280 x 1024")
AND
(original_id = "A00373" AND text = "Black")
And the mongo query
db.Filter.find({
"find":true,
"query":{
"$and":[
{
"original_id":"A00149",
"text":"1280 x 1024"
},
{
"original_id":"A00373",
"text":"Black"
}
]
},
"fields":{
"product_id":1
}
});
If your collection is called 'Filter' and you want a query to return the document with product_id = 1, then it's simple:
db.Filter.find({"product_id" : 1})
Maybe I misunderstood your question though?
Edit:
try:
db.Filter.find(
  { $and: [
      { "original_id": "A00149", "text": "1280 x 1024" },
      { "original_id": "A00373", "text": "Black" }
  ]},
  { "product_id": 1 }
)
see http://docs.mongodb.org/manual/reference/operator/query/and/#op._S_and