Is it possible to query by a child value if it's a string, in alphabetical order?
It doesn't matter whether it is descending or ascending.
E.g. under the key, each reference has the assigned name of the follower, and I want to order all the followers alphabetically.
Unfortunately, I have only managed to query ordered by an integer (including pagination).
If this doesn't work, is there a way to query ordered by key? E.g. I have key 1 "-edasMmaed", key 2 "-deLkdnw", etc., and when I paginate I start after the last value?
I haven't found anything useful unfortunately.
Kind regards
Edit: This is for the first part of the question
EDIT 2:
var query = Ref().databaseFollowingForUser(uid: userId, type: type).queryOrderedByKey()

if let latestUserFollowers = uid, latestUserFollowers != 0 {
    query = query.queryEnding(atValue: latestUserFollowers).queryLimited(toLast: limit)
} else {
    query = query.queryLimited(toLast: limit)
}

query.observeSingleEvent(of: .value, with: { (snapshot) in
With this code I receive the first 10 results (limit is defined as 10):
everybody from ID 276 through ID 18 (starting at holgerhagerson and ending at manni85).
Now I want to paginate and load more, which I am not able to do yet.
The passed uid is the uid of the last fetched user, which is "18" (manni85).
BIG EDIT: I managed to order it by keys. After reading your answers explaining that keys are always saved as strings, I realized my mistake and am now able to do it properly.
Big thank you!
Keys in the Firebase Realtime Database are stored (and sorted) as strings. Even if they look like numbers to you, Firebase will store (and sort) them as strings.
This means that the 2, 3, 4, etc. in your screenshot are actually "2", "3", "4", etc. This affects how they are ordered: strings are ordered lexicographically, and in that order these keys will show up as "3", "30", "4", "44", "5", etc.
If you want to use keys that you can sort numerically, take these steps:
Prefix keys with a short non-numeric prefix, to prevent Firebase interpreting them as an array.
Use a fixed length for the numbers in all keys, prefixing the value with zeroes or spaces as needed.
When combined, your keys would show up as:
"key_003": ...,
"key_004": ...,
...
"key_008": ...,
"key_016": ...,
"key_018": ...,
"key_030": ...,
"key_044": ...
Which you can now reliably sort when you query /FOLLOW/Follower/2 by calling queryOrderedByKey().
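If you also want to paginate on those keys, a minimal sketch of one way to do it (reusing the Ref().databaseFollowingForUser(uid:type:) helper from your snippet, and assuming lastFetchedKey: String? holds the key of the last user from the previous page and limit is a UInt):

var query = Ref().databaseFollowingForUser(uid: userId, type: type).queryOrderedByKey()

if let lastFetchedKey = lastFetchedKey {
    // end at the boundary key and over-fetch by one, because the boundary item comes back again
    query = query.queryEnding(atValue: lastFetchedKey).queryLimited(toLast: limit + 1)
} else {
    query = query.queryLimited(toLast: limit)
}

query.observeSingleEvent(of: .value) { snapshot in
    var children = snapshot.children.allObjects as? [DataSnapshot] ?? []
    // drop the duplicated boundary item when paginating
    if lastFetchedKey != nil, !children.isEmpty {
        children.removeLast()
    }
    // children now holds the next page, still ordered by key
}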
I have a two-part question.
We have a PostgreSQL table with a jsonb column. The values in the column are valid JSON, but for some rows a given node comes in as an array, whereas for others it comes in as an object.
For example, the JSON we receive could be like this (node4 is just an object):
"node1": {
"node2": {
"node3": {
"node4": {
"attr1": "7d181b05-9c9b-4759-9368-aa7a38b0dc69",
"attr2": "S1",
"UserID": "WebServices",
"attr3": "S&P 500*",
"attr4": "EI",
"attr5": "0"
}
}
}
}
Or like this (node4 is an array):
"node1": {
"node2": {
"node3": {
"node4": [
{
"attr1": "7d181b05-9c9b-4759-9368-aa7a38b0dc69",
"attr2": "S1",
"UserID": "WebServices",
"attr3": "S&P 500*",
"attr4": "EI",
"attr5": "0"
},
{
"attr1": "7d181b05-9c9b-4759-9368-aa7a38b0dc69",
"attr2": "S1",
"UserID": "WebServices",
"attr3": "S&P 500*",
"attr4": "EI",
"attr5": "0"
}
]
}
}
}
And I have to write a jsonpath query to extract, for example, attr1 for each PostgreSQL row containing this JSON. I would like to have just one jsonpath query that always works, irrespective of whether the node is an object or an array. So I want to use a path like the one below, assuming that, if it is an array, it will return the value for all indices in that array.
jsonb_path_query(payload, '$.node1.node2.node3.node4[*].attr1')#>> '{}' AS "ATTR1"
I would like to avoid checking whether the type is an array or an object and then running a separate query for each and doing a union.
Is it possible?
A sub-question related to the above: since I needed the output as text without the quotes, I saw somewhere to use #>> '{}'. I tried that and it works, but can someone explain how it works?
The second part of the question: the incoming JSON can have multiple sets of nested arrays, and the JSON and the number of nodes are huge. So the other thing I would like to do is flatten the JSON into multiple rows. In the examples I found, one has to identify each level and either use a cross join or unnest. What I was hoping for is a way to flatten a node that is an array, including all of the parent information, without knowing which, if any, of its parents are arrays or simple objects. Is this possible as well?
Update
I looked at the documentation to understand the #>> '{}' construct and realised that '{}' is the right-hand operand of the #>> operator, which takes a path; in my case the path points at the current value itself, hence the empty {}. Looking at examples with a non-empty single-attribute path helped me realise that.
Thank you
You can use a "recursive term" in the JSON path expression:
select t.some_column,
       p.attr1
from the_table t
  cross join jsonb_path_query(payload, 'strict $.**.attr1') as p(attr1)
Note that the strict modifier is required, otherwise, each value will be returned multiple times.
This will return one row for each key attr1 found in any level of the JSON structure.
For the given sample data, this would return:
attr1
--------------------------------------
"7d181b05-9c9b-4759-9368-aa7a38b0dc69"
"7d181b05-9c9b-4759-9368-aa7a38b0dc69"
"7d181b05-9c9b-4759-9368-aa7a38b0dc69"
"I would like to avoid checking whether the type in array or object and then run a separate query for each and do a union. Is it possible?"
Yes, it is, and your jsonpath query works fine in both cases, whether node4 is a jsonb object or a jsonb array, because the jsonpath wildcard array accessor [*] also works on a jsonb object in lax mode, which is the default behaviour (but not in strict mode; see the manual). See the test results in dbfiddle.
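You can see that behaviour with two self-contained calls (literal values rather than your actual table):

select jsonb_path_query('{"node4": {"attr1": "a"}}'::jsonb, '$.node4[*].attr1');
-- in lax mode the object is auto-wrapped in an array, so this returns "a"

select jsonb_path_query('{"node4": [{"attr1": "a"}, {"attr1": "b"}]}'::jsonb, '$.node4[*].attr1');
-- returns "a" and "b", one row per array element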
"I saw to use #>> '{}' - so I tried that and it is working, but can someone explain, how that works?"
The output of the jsonb_path_query function is of type jsonb, and when the result is a jsonb string it is displayed with double quotes " in the query results. The #>> operator converts the output to the text type, which is displayed without quotes, and the associated empty text-array path '{}' simply points at the root of the passed jsonb value.
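For illustration, with a literal value instead of your column:

select jsonb_path_query('{"attr1": "S1"}'::jsonb, '$.attr1');           -- "S1" (jsonb, shown with quotes)
select jsonb_path_query('{"attr1": "S1"}'::jsonb, '$.attr1') #>> '{}';  -- S1   (text, shown without quotes)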
" the incoming json can have multiple sets of nested arrays and the json and the number of nodes is huge. So other part I would like to do is flatten the json into multiple rows"
You can refer to the answer of a_horse_with_no_name, which uses the recursive wildcard member accessor .**.
I have the below JSON string in my table column, which is of type jsonb:
{
    "abc": 1,
    "def": 2
}
I want to remove the "abc" key from it and insert "mno" with some default value. I followed the below approach for it:
UPDATE books SET books_desc = books_desc - 'abc';
UPDATE books SET books_desc = jsonb_set(books_desc, '{mno}', '5');
and it works.
Now I have another table with JSON as below:
{
    "a": {
        "abc": 1,
        "def": 2
    },
    "b": {
        "abc": 1,
        "def": 2
    }
}
Even in this JSON, I want to do the same thing: take out "abc" and introduce "mno" with some default value. Please help me achieve this.
The keys "a" and "b" are dynamic and can change, but the values under "a" and "b" will always have the same keys (only the values may change).
I need generic logic.
Requirement 2:
abc:true should get converted to xyz:1.
abc:false should get converted to xyz:0.
demo:db<>fiddle
Because of the possible variety of your JSON keys, it might be complicated to generate a common query: you need to give the concrete path within the jsonb_set() function, and without knowing the actual keys that is hard.
A simple work-around is using the regexp_replace() function on the text representation of the JSON string to replace the relevant objects.
UPDATE my_table
SET my_data =
regexp_replace(my_data::text, '"abc"\s*:\s*\d+', '"mno":5', 'g')::jsonb
For added requirement 2:
I wrote the below queries based on the already given solution:
UPDATE books
SET book_info =
regexp_replace(book_info::text, '"abc"\s*:\s*true', '"xyz":1', 'g')::jsonb;
UPDATE books
SET book_info =
regexp_replace(book_info::text, '"abc"\s*:\s*false', '"xyz":0', 'g')::jsonb;
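As an aside: if the objects under the dynamic keys are always flat, as in your example, a jsonb-based alternative that avoids touching the text representation is also possible. This is only a sketch (using the same my_table / my_data names as above) and it assumes exactly one level of nesting:

UPDATE my_table t
SET my_data = (
    -- split the top-level object into one row per dynamic key,
    -- remove "abc" from each inner object, merge in "mno", and reassemble
    SELECT jsonb_object_agg(key, (value - 'abc') || '{"mno": 5}'::jsonb)
    FROM jsonb_each(t.my_data)
);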
Hello, I tried to print dictionary items in Xcode 9, like this:
var items = ["Bear": "0", "Glass": "1", "Car": "2"]
for (key, value) in items {
    print("\(key) : \(value)")
}
output:
Glass : 1
Bear : 0
Car : 2
Why is the output not like this: Bear : 0, Glass : 1, Car : 2?
I don't understand the reason for this output.
Dictionary :
A dictionary stores associations between keys of the same type and values of the same type in a collection with no defined ordering.
Each value is associated with a unique key, which acts as an identifier for that value within the dictionary. Unlike items in an array, items in a dictionary do not have a specified order.
Array - An array stores values of the same type in an ordered list.
Sets - A set stores distinct values of the same type in a collection with no defined ordering.
From Apple documentation
Dictionary in Swift is implemented as a hash map. There is no guarantee that items in a Dictionary will be in the same order you added them.
The only container that retains the order of items is Array. You can use it to store tuples, like so:
var items: [(key: String, value: String)] = [(key: "Bear", value: "0"), (key: "Glass", value: "1"), (key: "Car", value: "2")]
Your iteration will work as expected, but you will lose the Dictionary's ability to look up items by subscript.
Arrays are ordered collections of values.
Sets are unordered collections of unique values.
Dictionaries are unordered collections of key-value associations.
So you cannot expect the same order when you iterate the values of a Dictionary.
Reference to Collection Type
A Dictionary collection isn't ordered; that is, it simply doesn't guarantee any order, but an Array is ordered. An Array stores its values at contiguous indices, whereas a Dictionary doesn't use indices at all.
You simply can't have an Array like this:
["a", "b", ... , "d", ... , ... , "g"] // gaps in the indices aren't allowed; you can't skip an index in between
Instead you have to have the above array like this:
["a", "b", "d", "g"]
To get rid of this restriction, Dictionary (where you don't have to maintain previous indexes to store values) was introduced. So you can insert values as you like; it won't bother you with maintaining any index.
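If all you need is a stable, alphabetical traversal of the dictionary from the question, one option (standard library only; just a sketch) is to sort the key/value pairs at iteration time:

let items = ["Bear": "0", "Glass": "1", "Car": "2"]

// sorted(by:) turns the dictionary into an array of (key, value) tuples
// in a deterministic order, here ascending by key
for (key, value) in items.sorted(by: { $0.key < $1.key }) {
    print("\(key) : \(value)")
}
// Bear : 0
// Car : 2
// Glass : 1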
I am working on developing a web application feature that suggests prices for users based on previous orders in the database. I am using the MongoDB NoSQL database. Before I begin, I am trying to figure out the best way to set up the order object to return the correct results.
When a user places an order such as the following: 1 cheeseburger + 1 fry, McDonalds, 12345 E. Street, MyTown, USA... it should only return objects that are EXACT matches from the database.
For example, I would not want to receive an order that contained 1 cheeseburger + 1 fry + 1 shake. I will be keeping running averages of the prices and counts for that exact order.
{
    restaurantAddress: "12345 E. Street, MyTown, USA",
    restaurantName: "McDonald's",
    orders: {
        { cheeseburger: 1, fries: 2 }: {
            sumPaid: 1444.55,
            numTimesOrdered: 167,
            avgPaid: 8.65 (gets recomputed w/ each new order)
        },
        { // repeat for each unique item config },
        { // another unique item (or items) }
    }
}
Do you think this is a valid and efficient way to set up the document in MongoDB? Or should I be using multiple documents?
If this is valid, how can I query it to only return exact orders? I looked into $eq but it did not seem to be exactly what I was looking for.
So I believe we have solved the problem. The solution is to create, on the server side, a string that is unique for the order. For example, we will write a function that transforms 1 cheeseburger + 2 fries into burger1fries2. To keep consistency in the database, we first sort the entries alphabetically, so we always hit what we intended with the query. A similar order of 2 fries + 1 cheeseburger would generate the string burger1fries2 as well.
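For reference, a minimal sketch of such a canonicalising function (plain JavaScript; orderKey is a hypothetical name, and any mapping of long item names to short tokens such as burger is left out):

// { fries: 2, cheeseburger: 1 }  ->  "cheeseburger1fries2"
function orderKey(items) {
    return Object.keys(items)
        .sort()                           // alphabetical order keeps the key consistent
        .map(name => name + items[name])  // e.g. "cheeseburger" + 1
        .join('');
}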
UPDATE: I need to add that the point of this question is to allow me to define schemas for Json Rest Stores. The user can search by any one key, or several keys. So, I cannot easily predict what the users will search by -- it could be 1, 2, 5 fields (this is especially true for data-rich fields like people, bookings, etc.)
Imagine that I have an index as such:
{ "item": 1, "location": 1, "stock": 1 }
Following the MongoDb manual on indexes:
MongoDB can use this index to support queries that include:
the item field,
the item field and the location field,
the item field and the location field and the stock field, or
only the item and stock fields; however, this index would be less efficient than an index on only item and stock.
MongoDB cannot use this index to support queries that include:
only the location field,
only the stock field, or
only the location and stock fields.
Now, suppose I have a schema with exactly these fields:
item: String
location: String
stock: String
qty: number
And imagine I want to make sure every query is indeed indexed. I would do:
For item:
item, location, stock, qty
item, location, qty, stock
item, stock, qty, location
item, stock, location, qty
item, qty, location, stock
item, qty, stock, location
For location:
...you know the gist
Now... this seems a little insane. If you have a database where you have TEN searchable fields, this becomes clearly unworkable as the number of indexes grows exponentially.
Am I missing something? My idea was to define a schema, define which fields were searchable, and write a function that makes up all of the needed indexes regardless of what fields were present and what fields weren't. However, I am thinking about it, and... well, I must be missing something.
Am I?
I will try to explain what this means by example. B-tree based indexes are not something MongoDB-specific; they are a rather common concept.
So when you create an index, you show the database an easier way to find something. But this index is stored somewhere, with a pointer pointing to the location of the original document. This information is ordered, and you can look at it as a binary tree with a really nice property: the search is reduced from O(n) (a linear scan) to O(log(n)), which is much, much faster, because each time we cut our search space in half (potentially reducing around 10^6 comparisons to about 20 lookups). For example, we have a big collection with fields {a : some int, b: 'some other things'}, and if we index it by a, we end up with another data structure which is sorted by a. It looks this way (by this I do not mean that it is another collection; this is just for demonstration):
{a : 1, pointer: to the field with a = 1}, // if a is the smallest number in the starting collection
...
{a : 999, pointer: to the field with a = 999} // assuming that 999 is the biggest value
So right now we are searching for the field a = 18. Instead of going one by one through all elements, we take something in the middle, and if it is bigger than 18, we divide the lower part in half and check the element there. We continue until we find a = 18. Then we look at the pointer and, knowing it, we extract the original document.
The situation with compound index is similar (instead of ordering by one element we order by many). For example you have a collection:
{ "item": 5, "location": 1, "stock": 3, 'a lot of other fields' } // was stored at position 5 on the disk
{ "item": 1, "location": 3, "stock": 1, 'a lot of other fields' } // position 1 on the disk
{ "item": 2, "location": 5, "stock": 7, 'a lot of other fields' } // position 3 on the disk
... huge amount of other data
{ "item": 1, "location": 1, "stock": 1, 'a lot of other fields' } // position 9 on the disk
{ "item": 1, "location": 1, "stock": 2, 'a lot of other fields' } // position 7 on the disk
and want an index { "item": 1, "location": 1, "stock": 1 }. The lookup table would look like this (one more time - this is not another collection, this is just for demonstration):
{ "item": 1, "location": 1, "stock": 1, pointer = 9 }
{ "item": 1, "location": 1, "stock": 2, pointer = 7 }
{ "item": 1, "location": 3, "stock": 1, pointer = 1 }
{ "item": 2, "location": 5, "stock": 7, pointer = 3 }
.. huge amount of other data (but not necessarily here; if item were 1, it would be somewhere next to the other item = 1 entries)
{ "item": 5, "location": 1, "stock": 3, pointer = 5 }
See that here everything is basically sorted by item, then by location, and then by stock.
In the same way as with a single index, we do not need to scan everything. If we have a query which looks for item = 2, location = 5 and stock = 7, we can quickly identify where the documents with item = 2 are, and then, in the same way, quickly identify which of those have location = 5, and so on.
And now the interesting part: although we created just one index (it is a compound index, but it is still one index), we can use it to quickly find elements
by the item alone. Really, all we need to do is the first step. So there is no point in creating another index {item : 1}, because it is already covered by the compound index.
We can also quickly find by item and location together (we need only the first two steps).
Cool: one index, but it helps us in three different ways. But wait a minute: what if we want to find by item and stock? It looks like we can speed up this query as well: we can find all elements with a specific item in log(n), and ... here we have to stop, the magic is finished. We need to iterate through all of those from there. But it is still pretty good.
But maybe it can help us with other queries. Let's look at a query by location only, which at first glance looks as if it were already ordered. But if you look at the lookup table, you see that it is a mess: a location of 1 at the beginning and another one near the end. It cannot help you at all.
I hope this clarifies a few things:
why indexes are good (they reduce lookup time from O(n) to potentially O(log(n)))
why a compound index can help with some queries even though we have not created a separate index on that particular field, and can partially help with others
which single-field indexes are covered by a compound index
why indexes can hurt (each one creates an additional data structure which has to be maintained)
And this should tell you another valid thing: an index is not a silver bullet. You cannot speed up all of your queries, so it would be silly to think that by creating indexes on all fields EVERYTHING will be super fast.
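To check the prefix behaviour described above on your own data, you can create the compound index and look at the explain output (mongo shell; inventory is a hypothetical collection name):

db.inventory.createIndex({ item: 1, location: 1, stock: 1 })

// uses the index: the query fields form a prefix of the index
db.inventory.find({ item: 2, location: 5 }).explain("executionStats")

// also uses the index: { item } alone is a prefix
db.inventory.find({ item: 2 }).explain("executionStats")

// cannot use the index: location alone is not a prefix, so this ends up as a collection scan
db.inventory.find({ location: 5 }).explain("executionStats")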
What are your real query patterns? It's very unlikely that you would need to create all of these possible index combinations. I also doubt that including qty in the index would be of much use. Do you need to search for things where qty == 4 independent of location and item type?
An index doesn't need to identify every single record, it just needs to be specific enough to make any final scan small. Given an item code or a stock value are there really that many locations that you'd also need to index on them?
I suspect in this case an index on item, an index on location and an index on stock would be sufficient to answer most likely queries with sufficient speed. (But we'd need to know more about what these field names mean and what the count and distribution of values within them is.)
Use explain with your queries and you can see how well they are performing. Add indices as necessary, don't create every possible ordering.