I need to build a system that will track our users across all of our websites.
Each new user coming to one of our websites will get an ID that will be stored in a cookie.
On every activity on the site, we would like to save the relevant data.
For example, when a user registers, we will expose an API for adding the activity to the database. Later, we will build a reporting back-end on top of the data.
We haven't decided on the technology yet, but we assume we will go for Node.js + Express + Mongoose.
We believe that the first collection (see below) will have around 6 million rows per month; the other collections might have half of that.
I don't know if the following data structure will work well in MongoDB.
SessionCollection
Id - Mongo ObjectId; generated, will eventually be the cookie Id.
Referer - string (the length of a full URI with query string)
LandingUrl - string (the length of a full URI with query string)
DateTime
Params - key/value data; the parsed query string from LandingUrl, supposed to be a nested JSON tree.
For example, if the LandingUrl was http://s.com?a=1&b=2&c=3, then the params will be:
params : {a:'1', b:'2', c:'3'}
ActivityCollection
Id - Mongo ObjectId
SessionId - "foreign key" to SessionCollection
ActivityType - short free string
DateTime
ActivityData - free key/value data (similar to the explanation above).
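Roughly, this is what I imagine the schemas would look like in Mongoose (just a sketch; field names as above, Mixed used for the free key/value parts):

const mongoose = require('mongoose');

// Session: one doc per new visitor; the generated _id (an ObjectId) doubles as the cookie id.
const SessionSchema = new mongoose.Schema({
  referer: String,
  landingUrl: String,
  dateTime: { type: Date, default: Date.now },
  params: mongoose.Schema.Types.Mixed // parsed query string as a nested key/value tree
});

// Activity: one doc per user action, pointing back to its session.
const ActivitySchema = new mongoose.Schema({
  sessionId: { type: mongoose.Schema.Types.ObjectId, ref: 'Session' },
  activityType: String,
  dateTime: { type: Date, default: Date.now },
  activityData: mongoose.Schema.Types.Mixed // free key/value data
});

const Session = mongoose.model('Session', SessionSchema);
const Activity = mongoose.model('Activity', ActivitySchema);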
Both of the collections will be searchable on all fields - and when I say all, I mean all.
1. Is this a good structure for Mongo?
2. Do you recognize a bad pattern here?
3. Do you have suggestions to make it better?
4. Can a full URL be indexed in MongoDB?
Thanks!
I will answer #4, since it is an interesting question with a non-obvious answer.
Can a full URL be indexed in MongoDB?
The answer is: most of the time, but not always.
Explanation: a URL cannot always be indexed in MongoDB, because MongoDB has a limit on the length of indexed values (1024 bytes). If a value is longer than that, it will either not be indexed or an error will be raised (depending on the version and the case). A full URL may exceed this limit, since almost all browsers support URLs of at least 2000 characters. If you have the possibility of such long URLs, a solution would be to use a hash approach for the index.
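A minimal sketch of that hash approach in Node.js (field and model names are made up; note a hashed field only supports exact-match lookups, not prefix or range queries):

const crypto = require('crypto');

const sha256 = s => crypto.createHash('sha256').update(s).digest('hex'); // always 64 hex chars

// On write: store a fixed-length digest next to the raw URL,
// and put the index on landingUrlHash instead of landingUrl.
await Session.create({ landingUrl: url, landingUrlHash: sha256(url) });

// On read: hash the value you are searching for.
const session = await Session.findOne({ landingUrlHash: sha256(url) });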
For more info on MongoDB limits, and on how it handles indexing of values larger than 1024 bytes (the behavior changed significantly from 2.6 onward), see https://docs.mongodb.org/manual/reference/limits/
For URL lengths see What is the maximum length of a URL in different browsers?
I'm developing a front-end app where I want to support searching data by id, so I'm going to have an "object id" field. I want to validate the object id, to make sure it's a valid MongoDB ObjectId, before sending it to the API.
So I searched for how to do it and found this thread, where all the answers suggest using an implementation provided by a MongoDB driver or ORM such as mongodb or mongoose. However, I don't want to go that way, because I don't want to install an entire database driver/ORM in my front-end app just to use some id validation - I'd rather implement the validation myself.
Unfortunately, I couldn't find an existing implementation. Then I tried checking the ObjectId spec and implementing the validation myself, but that didn't work out either.
The specification says...
The ObjectID BSON type is a 12-byte value consisting of three different portions (fields):
a 4-byte value representing the seconds since the Unix epoch in the highest order bytes,
a 5-byte random number unique to a machine and process,
a 3-byte counter, starting with a random value.
Which doesn't make much sense to me. When it says the ObjectId has 12 bytes, it makes me think that the string representation is going to have 12 characters (1 byte = 1 char), but it doesn't. Most object ids have 24 characters.
Finally, I searched mongodb's and mongoose's source code, but I didn't have much luck with that either. The best I could do was find this line of code, but I don't know where to go from there.
TL;DR: What is the actual algorithm to check if a given string is a valid MongoDB Object Id?
Your find is correct, you just stopped too early. The isValid comes from the underlying bson library: https://github.com/mongodb/js-bson/blob/a2a81bc1bc63fa5bf3f918fbcaafef25aca2df9d/src/objectid.ts#L297
And yes, you got it right - there is not much to validate. Any 12 bytes can be an ObjectId. The reason you see 24 characters is that not all 256 byte values are printable/readable, so an ObjectId is usually presented in hex format - 2 characters per byte. A regexp to validate the 24-character hex representation would be /^[0-9a-f]{24}$/i (anchored, so that longer strings merely containing a valid id don't slip through).
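So the frontend check reduces to a one-liner; a self-contained sketch:

// Validates the usual 24-character hex string form of an ObjectId.
function isValidObjectIdString(value) {
  return typeof value === 'string' && /^[0-9a-f]{24}$/i.test(value);
}

isValidObjectIdString('4cf8ce8a8aad6957ff00005b'); // true
isValidObjectIdString('hello world'); // false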
TL;DR: check the constructor of ObjectId in the bson library for the official validation algorithm
Hint: you don't need most of it, as you are limited to string input on the frontend.
I would like to create two queries, with a pagination option. With the first one I would like to get the first ten records, and with the second one I would like to get all the remaining records:
.startAt(0)
.limit(10)
.startAt(9)
.limit(null)
Can anyone confirm that the above code is correct for both conditions?
Firestore does not support index- or offset-based pagination. Your query will not work with these values.
Please read the documentation on pagination carefully. Pagination requires that you provide a document reference (or field values in that document) that defines the next page to query. This means that your pagination will typically start at the beginning of the query results, then progress through them using the last document you see in the prior page.
From CollectionReference (note that this offset() method exists only in the server-side client libraries, not in the web or mobile SDKs):
offset(offset) → {Query}
Specifies the offset of the returned results.
As Doug mentioned, Firestore does not support index/offset - BUT you can get similar effects using combinations of what it does support.
Firestore has its own internal sort order (usually the document.id), but any query can be sorted with .orderBy(), and the first document will be relative to that sorting - only an orderBy() query has a real concept of a "0" position.
Firestore also allows you to limit the number of documents returned with .limit(n).
.endAt(), .endBefore(), .startAt(), .startAfter() all need either values of the same fields as the orderBy, or a DocumentSnapshot - NOT an index.
What I would do is create a Query:
const MyOrderedQuery = FirebaseInstance.collection('items').orderBy('createdAt') // collection/field names are placeholders
Then first execute
MyOrderedQuery.limit(n).get()
or
MyOrderedQuery.limit(n).onSnapshot(querySnapshot => { /* ... */ }) // note: onSnapshot goes on the query itself, not on .get()
either of which will, one way or the other, hand you a QuerySnapshot, which contains an array of DocumentSnapshots. Let's save that array:
let ArrayOfDocumentSnapshots = QuerySnapshot.docs;
Warning, Will Robinson! JavaScript assignment is usually by reference, and even the spread operator copies pretty shallowly - make sure your code actually copies the full deep structure, or that the reference is kept around!
Then to get the "rest" of the documents as you ask above, I would do:
MyOrderedQuery.startAfter(ArrayOfDocumentSnapshots[n-1]).get()
or
MyOrderedQuery.startAfter(ArrayOfDocumentSnapshots[n-1]).onSnapshot(querySnapshot => { /* ... */ })
which will start AFTER the last returned document snapshot of the FIRST query. Note the re-use of MyOrderedQuery.
You can get something like a "pagination" by saving the ordered Query as above, then repeatedly use the returned Snapshot and the original query
MyOrderedQuery.startAfter(ArrayOfDocumentSnapshots[n-1]).limit(n).get() // page forward
MyOrderedQuery.endBefore(ArrayOfDocumentSnapshots[0]).limitToLast(n).get() // page back - note limitToLast (newer SDKs), so you get the n docs just before the cursor rather than the first n of the whole query
This does make your state management more complex - you have to hold onto the ordered Query, and the last returned QuerySnapshot - but hey, now you're paginating.
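Putting it all together, a minimal sketch (namespaced web-SDK style; the collection and field names are placeholders):

const query = FirebaseInstance.collection('items').orderBy('createdAt'); // placeholders
const n = 10;
let pageDocs = []; // DocumentSnapshots of the page currently shown

async function firstPage() {
  const snap = await query.limit(n).get();
  pageDocs = snap.docs;
}

async function nextPage() {
  const snap = await query.startAfter(pageDocs[pageDocs.length - 1]).limit(n).get();
  if (!snap.empty) pageDocs = snap.docs;
}

async function prevPage() {
  // limitToLast keeps the n docs just before the cursor (needs a newer SDK version).
  const snap = await query.endBefore(pageDocs[0]).limitToLast(n).get();
  if (!snap.empty) pageDocs = snap.docs;
}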
BIG NOTE
This is not terribly efficient - setting up a listener is fairly "expensive" for Firestore, so you don't want to do it often. Depending on your document size(s), you may want to "listen" to larger sections of your collections and handle more of the paging locally (Redux or whatever) - the Firestore documentation indicates you want your listeners around for at least 30 seconds for efficiency. For some applications even pages of 10 can be efficient; for others you may need 500 or more stored locally and paged in smaller chunks.
I'm pretty confused by this hip thing called NoSQL, especially CloudantDB by Bluemix. As you know, this DB doesn't store the values chronologically. It's the programmer's task to sort the entries in case he wants the data to... well... be sorted.
What I'm trying to achieve is simply to get, let's say, the last 100 values a sensor has sent to Watson IoT (which saves everything in the connected CloudantDB), in an ORDERED way. In the end it would be nice to show them in a D3.js-style kind of graph, but that's another task. I first need the values in an ordered array.
What I tried so far: I used curl to get the data via PHP from https://averylongID-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_all_docs?limit=20&include_docs=true
What I get is an unsorted array of 20 row entries with random timestamps - the last 20 entries in the DB, but not in terms of timestamps.
My question now is: do you know of a way to get the "last" 20 entries, sorted by timestamp? I did a POST request with a JSON string where I asked for the data to be sorted by the timestamp, but that doesn't work, maybe because of the ISO timestamp string.
Do I really have to write a JavaScript or PHP script to get ALL the database entries, and then look for the last 20 or 100 entries by parsing the timestamp, sorting the array again, and then getting the (now really) last entries? I can't believe that.
Many thanks in advance!
I finally found out how to get the data in a nicely ordered way. The key is to use the _design API together with the _view API.
So a curl request with the following URL / attributes and a query string did the job:
https://alphanumerical_something-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_design/iotp/_view/by-date?limit=120&q=name:%27timestamp%27
The curl request gets me the first (in terms of time) 120 entries. I just have to find out how to get the last entries, but that's already a pretty good result. I can now pass the data on to a nice JS chart and display it.
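For reference, the view behind that endpoint is just a design document with a map function keyed on the timestamp - something like this sketch (the exact document layout in the IoT database may differ):

{
  "_id": "_design/iotp",
  "views": {
    "by-date": {
      "map": "function (doc) { if (doc.timestamp) { emit(doc.timestamp, doc); } }"
    }
  }
}

And since view results are sorted by key, the "last" entries can be requested by reversing the order with descending=true:

https://alphanumerical_something-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_design/iotp/_view/by-date?limit=20&descending=true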
One option may be to include the timestamp as part of the ID. The _all_docs query returns documents in order by id.
If that approach does not work for you, you could look at creating a secondary index based on the timestamp field. One type of index is Cloudant Query:
https://console.bluemix.net/docs/services/Cloudant/api/cloudant_query.html#query
Cloudant Query allows you to specify a sort argument:
https://console.bluemix.net/docs/services/Cloudant/api/cloudant_query.html#sort-syntax
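For example, assuming each document has a timestamp field and a Cloudant Query (type "json") index exists on that field, a POST to the database's _find endpoint returns the 20 newest docs (host and database names are placeholders):

curl -X POST 'https://myaccount.cloudant.com/mydb/_find' -H 'Content-Type: application/json' -d '{
  "selector": { "timestamp": { "$gt": null } },
  "sort": [ { "timestamp": "desc" } ],
  "limit": 20
}'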
Another approach that may be useful for you is the _changes API:
https://console.bluemix.net/docs/services/Cloudant/api/database.html#get-changes
The changes API allows you to receive a continuous feed of changes in your database. You could feed these changes into a D3 chart for example.
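A minimal example of that feed (again, host and database names are placeholders):

curl 'https://myaccount.cloudant.com/mydb/_changes?feed=continuous&include_docs=true&since=now'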
My understanding of REST is that anything that does not change the state of the underlying system (e.g. a query) should be a GET request. This also means that query parameters have to be put into the URI, like so:
api/SomeMethod/Parameter1/{P1:double}/Parameter2/{P2:double}
or as query strings as discussed here:
REST API Best practice: How to accept list of parameter values as input
Sometimes the query may require a lengthy vector (a number of x/y points). How do I overcome the URI length limit here? Should I just use a POST? Thanks.
If the vector really is big enough to start worrying about, you should consider moving it out of the query params and representing it as a RESTful resource.
For example, create a collection at:
api/Vector
Then your API clients can POST their large vectors and, in another request, refer to them by a single id number.
This reduces the query length drastically, abides by REST, and allows these vectors to be easily reused. If you are worried about storage, you can expire vectors after 30 minutes or longer.
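A sketch of the client side of that flow (the endpoint names follow the question; the response shape is made up):

// 1) Upload the vector once, as its own resource.
const res = await fetch('/api/Vector', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ points: [[0, 1.5], [1, 2.7], [2, 3.1]] })
});
const { id } = await res.json(); // e.g. { "id": "123" }

// 2) Later queries stay short: refer to the stored vector by its id.
const result = await fetch(`/api/SomeMethod?vectorId=${id}`);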
Another option is to go down the JSON-LD road, which is similar except that you don't host the vectors. You just provide an @context object, and API clients host the vector on their own server and reference it to your API by URL in a query parameter.
Is it OK to expose the internal MongoDB id of a document in a URL, like this?
document/show?id=4cf8ce8a8aad6957ff00005b
Generally, I think you should be cautious about exposing internals (such as DB ids) to the client. The URL can easily be manipulated, and the user possibly has access to objects you don't want him to have.
For MongoDB in particular, the object id might even reveal some additional internals (see here), i.e. the ids aren't completely random. That might be an issue too.
Besides that, I think there's no reason not to use the id.
I generally agree with @MartinStettner's reply. I wanted to add a few points, mostly elaborating on what he said. Yes, a small amount of information is decodable from the ObjectId. This is trivially accessible if someone recognizes it as a MongoDB ObjectId. The two downsides are:
It might allow someone to guess a different valid ObjectId, and request that object.
It might reveal info about the record (such as its creation date) or the server that you didn't want someone to have.
The "right" fix for the first item is to implement some sort of real access control: 1) a user has to login with a username and password, 2) the object is associated with that username, 3) the app only serves objects to a user that are associated with that username.
MongoDB doesn't do that itself; you'll have to rely on other means. Perhaps your web-app framework, and/or some ad-hoc access control list (which itself could be in MongoDB).
But here is a "quick fix" that mostly solves both problems: create some other "id" for the record, based on a large, high-quality random number.
How large does "large" need to be? A 128-bit random number has 3.4 * 10^38 possible values. So if you have 10,000,000 objects in your database, someone guessing a valid value is a vanishingly small probability: 1 in 3.4 * 10^31. Not good enough? Use a 256-bit random number... or higher!
How do you represent this number in the document? You could use a string (encoding the number as hex or base64) or MongoDB's binary type. (Consult your driver's API docs to figure out how to create a binary object as part of a document.)
While you could add a new field to your document to hold this, you'd then probably also want an index on it. So the document size is bigger, and you spend more memory on that index. Here's what you might not have thought of: simply USE that "truly random id" as your document's "_id" field. Thus the per-document size is only a little higher, and you use the index that you [probably] had there anyway.
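A minimal sketch of that approach with the Node.js driver (the collection name is made up):

const crypto = require('crypto');

// 128 bits of randomness, hex-encoded: 32 characters, ~3.4 * 10^38 possible values.
const id = crypto.randomBytes(16).toString('hex');

// Use it directly as _id, so the automatic unique index on _id does the work.
await db.collection('documents').insertOne({ _id: id, title: 'example' });

// Lookups then use the random id instead of an ObjectId:
const doc = await db.collection('documents').findOne({ _id: id });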
I can set both the 128-character session string and the other collection document object ids as cookies, and when the user visits, do one asynchronous fetch where I fetch the session, user and account all at once, instead of fetching the session first and only then the user and account. If the session document is valid, I'll share the user and account documents.
If I do this, every single request for a user or account document will also require the 128-character session cookie to be fetched, thus making exposing the user and account object ids safer. It means that if anyone is guessing a user ID or account ID, they also have to guess the 128-character string to get any answer from the system.
Another security measure you could take is to wrap the id in some salt whose positioning only you know, such as:
XXX4cf8ce8XXXXa8aad6957fXXXXXXXf00005bXXXX
Now you know exactly how to slice that up to get the ID back.
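A sketch of that slicing (the offsets follow the example above; random hex is used as filler so the padding blends in - and note this is obfuscation, not a substitute for real access control):

const crypto = require('crypto');

const filler = n => crypto.randomBytes(n).toString('hex').slice(0, n);

// Layout: 3 filler + id[0..7) + 4 filler + id[7..17) + 7 filler + id[17..24) + 4 filler
function wrapId(id) {
  return filler(3) + id.slice(0, 7) + filler(4) + id.slice(7, 17) +
         filler(7) + id.slice(17, 24) + filler(4);
}

// Only the server knows these offsets.
function unwrapId(wrapped) {
  return wrapped.slice(3, 10) + wrapped.slice(14, 24) + wrapped.slice(31, 38);
}

unwrapId(wrapId('4cf8ce8a8aad6957ff00005b')); // '4cf8ce8a8aad6957ff00005b'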