How to know how many API calls are being made to MongoDB

I want to find out how many API calls are being made to my MongoDB. I want to log them to a file and then count them with wc -l. Do you know how to do that, or do you have any other suggestion?
Thanks

I believe you are after the following command:
db.serverStatus()
The docs show what the output looks like. You could write that output to a file if you want.
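If what you're really after is a count of operations, the opcounters section of that output is the relevant part. A minimal mongo shell sketch (the exact fields can vary a little between server versions):
var status = db.serverStatus();
// operation counts since the server last started
printjson(status.opcounters);  // { insert: ..., query: ..., update: ..., delete: ..., getmore: ..., command: ... }
Another option, closer to your log-and-count idea, is the database profiler: db.setProfilingLevel(2) records every operation in the system.profile collection, which you can then count with db.system.profile.count().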

Assuming you have some client/abstraction layer between api calls and your DB, you could maintain a separate collection to store API usage data.
Create a document for every resource with the name of that resource, and increment ($inc) a counter every time it is called. You could add any other relevant data as well.
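A minimal mongo shell sketch of that counter pattern (the collection and field names here are just illustrative):
// one document per API resource; upsert creates it on first use
db.apiUsage.updateOne(
  { resource: "/customers" },
  { $inc: { calls: 1 }, $set: { lastCalledAt: new Date() } },
  { upsert: true }
);
Summing the calls field across the collection then gives you the total number of API calls.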

Related

How to get a list of all documents in a Firebase Cloud Firestore database using the API?

I need to query my Cloud Firestore database periodically for maintenance. I'm new to REST and I've spent way too long trying to solve this myself, so I figured I could use some advice.
My Firestore is set up like so:
users > {uid} > uploaded-files > {file-hash}
{file-hash} is a document that contains several fields such as filename, source, and size
All I'm trying to do is get a list of every single filename from every single uploaded-file, including from multiple {uid}'s.
I've managed to send a successful request and get a single filename using the firestore.projects.databases.documents.get method using the API explorer, but I can't seem to get any other methods to work, namely firestore.projects.databases.documents.list
This is the successful request using firestore.projects.databases.documents.get:
GET https://firestore.googleapis.com/v1beta1/projects/{project-id}/databases/(default)/documents/users/7eGfdgGfaG0HSXdfmxMN2/uploaded-files/WGtcJBX9fdGdhdtjB?mask.fieldPaths=filename&key={YOUR_API_KEY}
Part of my issue is that I can't figure out how to get requests to work without hard-coding document names - in other words, I don't know how to replace {uid}, or any other collection, with a wildcard so that the request returns documents from all uid's.
Really any help is greatly appreciated.
The "Use the Cloud Firestore REST API" documentation has instructions on getting started, and the Firestore REST API documentation shows how to fetch documents.
In your case you would need to list the user {uid} documents, then go after the uploaded-files for each one, iterating over the results of each list call.
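A sketch of that two-step approach, following the same URL pattern as your working request (the list method is a GET on the collection path; the exact paths here are assumptions based on your structure):
GET https://firestore.googleapis.com/v1beta1/projects/{project-id}/databases/(default)/documents/users?key={YOUR_API_KEY}
GET https://firestore.googleapis.com/v1beta1/projects/{project-id}/databases/(default)/documents/users/{uid}/uploaded-files?mask.fieldPaths=filename&key={YOUR_API_KEY}
The first call lists the {uid} documents; you then repeat the second call for each {uid} it returns, collecting the filename field from every uploaded-files document.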

IBM Cloudant DB - get historical data - best way?

I'm pretty confused by this hip thing called NoSQL, especially CloudantDB by Bluemix. As you know, this DB doesn't store the values chronologically; it's the programmer's task to sort the entries in case he wants the data to... well... be sorted.
What I'm trying to achieve is simply to get, say, the last 100 values a sensor has sent to Watson IoT (which saves everything in the connected CloudantDB) in an ORDERED way. In the end it would be nice to show them in a D3.js-style graph, but that's another task. I first need the values in an ordered array.
What I tried so far: I used curl to get the data via PHP from https://averylongID-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_all_docs?limit=20&include_docs=true
What I get is an unsorted array of 20 row entries with random timestamps. The last 20 entries in the DB. But not in terms of timestamps.
My question is now: do you know of a way to get the "last" 20 entries, sorted by timestamp? I did a POST request with a JSON string where I asked for the data to be sorted by the timestamp, but that doesn't work, maybe because of the ISO timestamp string.
Do I really have to write a JavaScript or PHP script to get ALL the database entries, then find the last 20 or 100 by parsing the timestamps, sort the array again and only then get the (now really) last entries? I can't believe that.
Many thanks in advance!
I finally found out how to get the data in a nice, ordered way. The key is to use the _design API together with the _view API.
So a curl request with the following URL / attributes and a query string did the job:
https://alphanumerical_something-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_design/iotp/_view/by-date?limit=120&q=name:%27timestamp%27
The curl result gets me the first (in terms of time) 120 entries. I just have to find out how to get the last entries, but that's already a pretty good result. I can now pass the data on to a nice JS chart and display it.
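For reference, a view like by-date is normally backed by a map function in the design document that emits the timestamp as the key, which is what makes the rows come back in time order. A minimal sketch of such a design document (the timestamp field name is an assumption about the IoT documents):
{
  "_id": "_design/iotp",
  "views": {
    "by-date": {
      "map": "function (doc) { if (doc.timestamp) { emit(doc.timestamp, doc.data); } }"
    }
  }
}
To get the last entries rather than the first, adding descending=true to the _view request should return the newest keys first.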
One option may be to include the timestamp as part of the ID. The _all_docs query returns documents in order by id.
If that approach does not work for you, you could look at creating a secondary index based on the timestamp field. One type of index is Cloudant Query:
https://console.bluemix.net/docs/services/Cloudant/api/cloudant_query.html#query
Cloudant Query allows you to specify a sort argument:
https://console.bluemix.net/docs/services/Cloudant/api/cloudant_query.html#sort-syntax
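A sketch of such a query, assuming the documents have a timestamp field and a JSON index on it has been created via POST /{db}/_index; POSTing this body to /{db}/_find should return the 20 newest documents, newest first:
{
  "selector": { "timestamp": { "$gt": null } },
  "sort": [ { "timestamp": "desc" } ],
  "limit": 20
}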
Another approach that may be useful for you is the _changes API:
https://console.bluemix.net/docs/services/Cloudant/api/database.html#get-changes
The changes API allows you to receive a continuous feed of changes in your database. You could feed these changes into a D3 chart for example.

XMLHTTPRequest from mongo shell

I am replicating a collection (I only have access to the mongo shell on the server). In the current collection all documents have a field called jsonURL; the value of this field is a URL like http://www.something.com/api/abc.json. I want to copy each document from oldCollection to newCollection, but I also want to fetch data from that URL and add it to each new document created.
The last time I heard, XMLHttpRequest was on mongo's list, but as a low-priority feature (I can understand why). And as I found nothing in the documentation, I am guessing it's still in the queue. I am hoping I can get something working in forEach(function(eachDoc){});
Is there any other way of achieving this? Thanks.
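For what it's worth, the copy part on its own is easy from the shell; it's the HTTP fetch that the shell can't do for you, so that piece is left as a placeholder in this sketch (you'd fetch the URLs from a driver script or an external tool instead):
db.oldCollection.find().forEach(function (eachDoc) {
  // merge the data fetched from eachDoc.jsonURL here (the fetch itself has to happen outside the shell)
  db.newCollection.insert(eachDoc);
});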

REST parameters vs URI

I'm just learning REST and trying to figure out how to apply it in practice. I have a sampling of data that I want to query, but I'm not sure how the URLs are meant to be formed, i.e. where I put the query. For example, for querying the most recent 100 data records:
GET http://data.com/data/latest/100
GET http://data.com/data?amount=100
Which of the two URLs above is better, and why? And the same for the following:
GET http://data.com/data/latest-days/2
GET http://data.com/data?days=2
GET http://data.com/data?fromDate=01-01-2000
Thanks in advance.
Personally, I would use the query-string format in this case. If your /data path returns all of the data and you would like to perform this type of query, I believe it makes the most sense. You could also pass query-string parameters such as ?since=01-01-2000 to get entries after a specified date, or column names such as ?category=clothing to retrieve all entries whose category equals clothing.
Additionally, you would want paths such as /data/{id} to be available to retrieve certain entries given their unique id.
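As a rough illustration of those two patterns (using Express here purely as an example framework, not something from your question):
const express = require('express');
const app = express();
// query-string style: GET /data?amount=100&fromDate=01-01-2000
app.get('/data', (req, res) => {
  const amount = parseInt(req.query.amount, 10) || 100;
  const fromDate = req.query.fromDate;  // undefined if not supplied
  res.json({ amount, fromDate });       // placeholder: return the latest `amount` records here
});
// path-segment style for a single resource: GET /data/42
app.get('/data/:id', (req, res) => {
  res.json({ id: req.params.id });      // placeholder: look up one record by its id here
});
app.listen(3000);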
It really depends on a lot of things. If you're using any sort of MVC framework, you'd use the URI segments to define your GET request to your API, which I personally prefer.
It's not a big deal either way; it's all based on preference and on how predictable you want the URL to be for your users. In some cases I'd say go with the REST parameters, but more often than not a URI-based GET is quite clean if your setup supports it.

How to get list of aggregates using JOliviers's CommonDomain and EventStore?

The repository in the CommonDomain only exposes GetById(). So what do I do if my handler needs a list of Customers, for example?
Taking your question at face value: if you needed to perform operations on multiple aggregates, you would just provide the IDs of each aggregate in your command (which the client would obtain from the query side), then get each aggregate from the repository.
However, looking at one of your comments in response to another answer I see what you are actually referring to is set based validation.
This very question has raised quite a lot of debate about how to do this, and Greg Young has written a blog post on it.
The classic question is: how do I check that the username hasn't already been used when processing my CreateUserCommand? I believe the suggested approach is to assume that the client has already done this check by asking the query side before issuing the command. When the user aggregate is created, the UserCreatedEvent will be raised and handled by the query side. There, the insert query will fail (either because of a check or a unique constraint in the DB), and a compensating command would be issued, which would delete the newly created aggregate and perhaps email the user telling them the username is already taken.
The main point is, you assume that the client has done the check. I know this approach is difficult to grasp at first - but it's the nature of eventual consistency.
Also you might want to read this other question which is similar, and contains some wise words from Udi Dahan.
In the classic event sourcing model, queries like "get all customers" would be carried out by a separate query handler which listens to all events in the domain and builds a query model to answer the relevant questions.
If you need to query customers by last name, for instance, you could listen to all customer created and customer name change events and just update one table of last-name to customer-id pairs. You could hold other information relevant to the UI that is showing the data, or you could simply hold IDs and go to the repository for the relevant customers in order to work further with them.
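As a rough illustration (sketched in JavaScript only because this thread has no code of its own - the event and field names are hypothetical), such a projection might look like this:
// read-model projection: keeps a customerId -> lastName table up to date from the event stream
const lastNameById = new Map();
function project(event) {
  if (event.type === 'CustomerCreated' || event.type === 'CustomerNameChanged') {
    lastNameById.set(event.customerId, event.lastName);
  }
}
// the query side answers "customers with this last name" from the table alone,
// without loading any aggregates
function customersByLastName(lastName) {
  return [...lastNameById].filter(([, name]) => name === lastName).map(([id]) => id);
}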
You don't need a list of customers in your handler. Each aggregate MUST be processed in its own transaction. If you want to show this list to the user, just build an appropriate view.
Your command needs to contain the id of the aggregate root it should operate on.
This id is looked up by the client sending the command, using a view in your read model. That view is populated with data from the events that your aggregate root (AR) emits.