With the FeathersJS REST client, how can I query a single field for multiple values?
E.g. if I have a Books service and I want to retrieve all books written in 1990, 1991, or 1992, I'd assume I'd call:
/books?year[]=1990&year[]=1991&year[]=1992
However, this doesn't work.
Have a look at the documentation for the common database adapter querying syntax. The correct way to query for what you are looking for is:
year: {
$in: [ 1990, 1991, 1992 ]
}
The corresponding query would be
/books?year[$in]=1990&year[$in]=1991&year[$in]=1992
Parsing this query string requires the extended query parser qs, which you can enable with
app.set('query parser', 'extended');
Since query string values arrive as strings, if you need actual number values for the year you may also have to convert them in a before hook (although most databases accept both).
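As a sketch of such a conversion (the hook name is made up for this example; the context shape `{ params: { query } }` follows the Feathers hooks convention):

```javascript
// Sketch of a Feathers-style "before" hook that coerces the string
// values produced by the query parser into numbers.
function coerceYearToNumber(context) {
  const query = context.params.query || {};
  if (query.year && Array.isArray(query.year.$in)) {
    query.year.$in = query.year.$in.map(Number);
  }
  return context;
}

// What the extended parser produces, before and after the hook:
const context = { params: { query: { year: { $in: ['1990', '1991', '1992'] } } } };
coerceYearToNumber(context);
// context.params.query.year.$in is now [1990, 1991, 1992]
```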
var dt= new Date();
var datestring = dt.getFullYear()+"-"+("0"+(dt.getMonth()+1)).slice(-2)+"-"+("0"+dt.getDate()).slice(-2);
db.getCollection('Profile Section').count({Last_Active : {$regex : datestring}})
I have written this query in MongoDB and it gives the correct count value, so the query itself is fine. I am using Spring Boot for the backend and call the REST API from Postman. How can I write the equivalent of this query in Java?
Can you suggest how to write the equivalent Spring Boot backend for this query? I also have to insert the resulting count value into another collection called ActiveUsers in MongoDB.
@GetMapping("/Activeusers")
is the endpoint in my Java application.
You have to use the $gte & $lt query operators.
db.getCollection('Profile').find({
"lastActive" : {
$gte: new Date('2020-05-19'),
$lt: new Date('2020-05-20'),
}
}).pretty()
In Spring,
@Query("{'lastActive' : { $gte: ?0, $lt: ?1 } }")
public List<Profile> getProfileByLastActive(LocalDate today, LocalDate nextDay);
Use either LocalDate or Date as per your convenience.
If you are going to implement this in Node.js, the best option is to use moment.js. I have been working with moment.js for calendar-related activities, which is why I suggest it.
Your code is accurate. The problem is that it's too accurate.
If you were to update those records every millisecond, you would be able to fetch them this way. But that is overkill for most usages, and this is where the architecture of your system matters.
Querying a date field with an $eq operator fetches results with an exact match, at millisecond precision. The "active users" logic might tempt us to check for users who are active right now, and intuition suggests an $eq operator. While this is correct, we would miss many users who are active but whose records in the db are not updated at a millisecond rate (this depends on how you update your db records).
As implied above, one solution would be to update the db with dozens of updates just to keep an accurate, near-real-time picture of active users. This might be too much for many systems.
Another solution would be to query for active users within an interval / gap of a few seconds (or more). Compared to millisecond precision, every added second increases the window for catching active users by a factor of 1,000. You can see this here:
db.getCollection('Profile').find({"lastActive" : {$gte: new Date(ISODate().getTime() - 1000 * 60 * 2)}}).pretty() // fetches records from the last 2 minutes
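The same cutoff can be computed before issuing the query; a minimal sketch (the 2-minute window is just the example value from above):

```javascript
// Compute a cutoff Date for "active within the last 2 minutes".
const windowMs = 1000 * 60 * 2; // 2 minutes in milliseconds
const cutoff = new Date(Date.now() - windowMs);

// The corresponding find() filter object:
const filter = { lastActive: { $gte: cutoff } };
```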
I have a simple json:
{
"id": 1,
"name": "John",
"login": "2019-02-13"
}
Documents like this are stored in Couchbase, and I would like to create an index (or some other efficient mechanism) that filters all documents whose login is older than 30 days. How should I create it in Couchbase and query it from Scala?
For now I fetch all documents from the database and filter them in the API, but I don't think that is a good approach. I would like to filter on the database side and retrieve only the documents whose login is older than 30 days.
At the moment, the only method I have in Scala fetches docs by id:
bucket.get(id, classOf[RawJsonDocument])
I would recommend taking a look at N1QL (which is just SQL for JSON). Here's an example:
SELECT u.*
FROM mybucket u
WHERE DATE_DIFF_STR(NOW_STR(), login, 'day') > 30;
You'll also need an index, something like:
CREATE INDEX ix_login_date ON mybucket (login);
Though I can't promise that's the best index, it will at least get you started.
I used DATE_DIFF_STR and NOW_STR, but there are other ways to manipulate dates. Check out Date Functions in the documentation. And since you are new to N1QL, I'd recommend checking out the interactive N1QL tutorial.
The following query is more efficient because the predicate can be pushed down to the IndexScan when the index key matches one side of the predicate's relational operator. If you instead use an expression derived from the index key, the query engine must fetch all the values and filter them itself.
CREATE INDEX ix_login_date ON mybucket (login);
SELECT u.*
FROM mybucket AS u
WHERE u.login < DATE_ADD_STR(NOW_STR(), 'day', -30) ;
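The cutoff that DATE_ADD_STR(NOW_STR(), 'day', -30) produces can be sketched in plain JavaScript (shown here only to illustrate the arithmetic; in Scala you would submit the N1QL string through the SDK's query API):

```javascript
// Sketch: compute the "30 days ago" cutoff as a "YYYY-MM-DD" string.
// ISO date strings compare correctly with plain string comparison,
// which is why the N1QL predicate u.login < cutoff works.
const days = 30;
const cutoff = new Date(Date.now() - days * 24 * 60 * 60 * 1000)
  .toISOString()
  .slice(0, 10); // e.g. "2019-01-14"

// A login is "older than 30 days" when it sorts before the cutoff:
const isOld = login => login < cutoff;
```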
I need to monitor when records are created, for later querying and modification.
The first thing that flashed into my mind was to give the document a "createDateTime" field with a default value of "new Date()". But MongoDB's document _id already has a timestamp embedded in it, and the id is generated when the document is created, so it seems redundant to add a new field for that.
Many times I have seen people set a "createDateTime" on their data, and I don't know whether they are aware of the details of MongoDB's _id.
Should I use the _id as a "createDateTime" field? What is the best practice,
and what are the pros and cons?
Thanks for any tips.
I'd actually say it depends on how you want to use the date.
For example, it's not actionable using the aggregation framework Date operators.
This will fail for example:
db.test.aggregate( { $group : { _id: { $year: "$_id" } } })
The following error occurs:
"errmsg" : "exception: can't convert from BSON type OID to Date"
(The date cannot be extracted from the ObjectId.)
So, operations that would normally be simple date operations become much more complex if you want to do any sort of date math in an aggregation. It would be far easier to have a createDateTime stamp. Counting the number of documents created in a particular year and month, for example, would be simple with aggregation over a dedicated createDateTime field.
You can sort on an ObjectId to some degree: the first 4 bytes are a creation timestamp, but the remaining 8 bytes aren't sortable in a meaningful way. Also, most MongoDB drivers default to creating the ObjectId within the driver and not on the database server. So, if you've got multiple clients (like web servers, for example) creating new documents (and new ObjectIds), the timestamps will only be as accurate as the clocks of those servers.
Also, depending on the precision you need, an ISODate value is stored using 8 bytes, rather than the 4 used for the timestamp portion of an ObjectId.
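For completeness, the embedded creation time can be read back without a dedicated field; a minimal sketch in plain JavaScript, equivalent to the shell's ObjectId.getTimestamp() (the ObjectId value here is just an illustrative example):

```javascript
// Sketch: recover the embedded creation time from an ObjectId hex
// string. The first 4 bytes (8 hex chars) are seconds since the epoch.
function objectIdToDate(hex) {
  const seconds = parseInt(hex.slice(0, 8), 16);
  return new Date(seconds * 1000);
}

// Example with an illustrative ObjectId:
objectIdToDate('5349b4ddd2781d08c09890f3'); // → a Date in April 2014
```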
Yes, you should. There is no reason not to, apart from human readability when looking directly into the database. See also here and here.
If you want to use the aggregation framework to group by the date within _id, this is not possible yet, as WiredPrairie correctly said. There is an open Jira ticket for that, which you might want to watch. But of course you can do this with map-reduce and ObjectId.getTimestamp(). An example of that can be found here.
I am looking for a way to query Mongo for documents where values computed from two fields satisfy a condition against user-supplied variables.
For example, overlapping date ranges. I have a document with the following schema:
{startDate : someDate, endDate : otherDate, restrictions : {daysBefore : 5, daysAfter : 5}}
My users will supply their own date range, like:
var userInfo = { from : Date, to : Date}
I need the documents that satisfy this condition:
startDate - restrictions.daysBefore <= userInfo.to && endDate + restrictions.daysAfter >= userInfo.from;
I tried using a $where clause, but I lose the context of to and from since they are defined outside the scope of the where function.
I would like to do this without pulling down all of the results, or creating another field upon insert.
Is there a simple way this query can be done?
The aggregation framework (AF) will do what you want. As an added bonus, the AF backend is written in C++ and is therefore much faster than JavaScript. Beyond speed, there are a number of reasons we discourage the use of $where, some of which can be found in the $where docs.
The AF docs (i.e. the good stuff to use):
http://docs.mongodb.org/manual/reference/aggregation/
I am uncertain of the format of the data you are storing, and this will also affect performance. For instance, if the date is a standard date in milliseconds since Jan 1st 1970 (Unix epoch) and daysBefore is stored as (milliseconds per day) * (number of days), you can use simple math, as the example below does. This is very fast. If not, there are date conversions available in the AF, but doing the conversions in addition to computing the differences is of course more expensive.
In Python (your profile mentions Django), datetime.timedelta can be used for daysBefore. For instance, for 5 days:
import datetime
daysBefore=datetime.timedelta(5)
There are two main ways to go about this in the AF: do the calculation directly and match on it, or project a new field and match against that. Your specific use case and testing against it will be necessary for complicated or large-scale deployments. An aggregate command from the shell that projects the adjusted start date and matches against it:
fromDate=<program provided>
db.collection.aggregate([
    {"$project": {"startDate": 1, "endDate": 1,
                  "adjustedStart": {"$subtract": ["$startDate", "$restrictions.daysBefore"]}}},
    {"$match": {"adjustedStart": {"$lt": fromDate}}}
])
If you want to run multiple calculations in the same $match use $and:[{}, {}, …, {}]. I omitted that for clarity.
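Assuming dates are millisecond timestamps and daysBefore/daysAfter are plain day counts (as in the question's schema, rather than premultiplied milliseconds), the overlap condition the pipeline has to express is just this arithmetic; a minimal in-memory sketch:

```javascript
// Sketch of the overlap predicate from the question, with dates as
// millisecond timestamps and the restrictions as plain day counts.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function overlaps(doc, userInfo) {
  const earliest = doc.startDate - doc.restrictions.daysBefore * MS_PER_DAY;
  const latest = doc.endDate + doc.restrictions.daysAfter * MS_PER_DAY;
  return earliest <= userInfo.to && latest >= userInfo.from;
}
```

If the restrictions are stored premultiplied into milliseconds, as suggested above, drop the MS_PER_DAY factor.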
Further aggregation documentation for the AF can be found at:
http://api.mongodb.org/python/current/examples/aggregation.html#aggregation-framework
Note that "aggregation" in Mongo also includes map-reduce, but in this case the AF should be able to do it all (and much more quickly).
If you need any further information about the AF or if there is anything the docs don’t make clear, please don’t hesitate to ask.
Best,
Charlie
I am using the Rally bulk query API to pull data from multiple tables. My issue happens when I try to use a placeholder for the Iteration's StartDate and pass it along to a subsequent query in the same bulk request, i.e.
"iteration": "/Iteration?fetch=ObjectID,StartDate&query=(Name = \"Sprint 1\")",
"started": "${iteration.StartDate}",
"other_queries": "...?query=(CreatedDate > ${iteration.StartDate})"
The bulk service seems to convert this field to a formatted string. Is there a way to prevent this from happening? I am attempting to use the placeholder to limit other queries by date without making several requests.
It looks like the iteration object comes back with the date correctly, but when it is used as a placeholder it is automatically converted to a string.
"started": ["Wed Jan 16 22:00:00 MST 2013"],
"iteration": {
"Results": [
....
"StartDate": "2013-01-17T05:00:00.000Z",
]}
Unfortunately no; as this functionality is currently implemented, this is expected behavior. The placeholder is converted to a formatted String server-side, so it will be necessary to formulate a similar followup request if the same data is needed in another query.