What is more recommended to use in the C driver: mongoc_collection_command with "insert", or mongoc_collection_insert?

After working for a while with the C driver and reading the tutorials and the API docs, I am a little confused.
According to this tutorial: http://api.mongodb.org/c/current/executing-command.html
I can execute database and collection commands, which also include the CRUD commands.
I can even get a document cursor if I don't use the "_simple" variant of the command API.
So why would I need to use, for example, the mongoc_collection_insert() API function?
What are the differences? What is recommended?
Thanks

This question is probably similar to asking what the difference is between the insert command and db.collection.insert() in the mongo shell.
mongoc_collection_insert() is a specific function written to insert a document into a collection, while mongoc_collection_command() executes any valid database command on a collection.
I would recommend using the specific API function (mongoc_collection_insert) whenever possible, for the following reasons:
The API functions were written as an abstraction layer with a specific purpose, so you don't have to deal with details unrelated to the operation.
For example, mongoc_collection_insert exposes exactly the parameters relevant to inserting, i.e. mongoc_write_concern_t and mongoc_insert_flags_t, with sensible defaults. mongoc_collection_command, on the other hand, takes a broad range of parameters such as mongoc_read_prefs_t, skip, or limit, which are not relevant when inserting a document.
Any future changes to mongoc_collection_insert are more likely to be considered in the correct context for inserts.
Especially for CRUD, try to avoid using command: the MongoDB wire protocol specifies different request opcodes, with commands sent as queries against the special $cmd collection (OP_QUERY: 2004) while inserts use a dedicated opcode (OP_INSERT: 2002).
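For illustration, here is a minimal sketch of an insert using the specific API function (the connection string, database, and collection names are placeholders):

#include <stdio.h>
#include <mongoc.h>

int main (void)
{
   bson_error_t error;

   mongoc_init ();

   /* Placeholder connection string and namespace. */
   mongoc_client_t *client = mongoc_client_new ("mongodb://localhost:27017/");
   mongoc_collection_t *collection =
      mongoc_client_get_collection (client, "test", "people");

   /* Build the document to insert. */
   bson_t *doc = BCON_NEW ("name", BCON_UTF8 ("Alice"));

   /* NULL write concern means the default; the function exposes exactly
      the parameters that matter for an insert. */
   if (!mongoc_collection_insert (collection, MONGOC_INSERT_NONE,
                                  doc, NULL, &error)) {
      fprintf (stderr, "Insert failed: %s\n", error.message);
   }

   bson_destroy (doc);
   mongoc_collection_destroy (collection);
   mongoc_client_destroy (client);
   mongoc_cleanup ();
   return 0;
}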

Related

CQRS pattern: using command for external api call

I am aware of the CQRS pattern, wherein a Query is used for reading data and a Command is used for updating data.
Consider the peculiar case where the REST API is a POST but does not update data directly; instead it calls an external POST API of another system and passes the details along.
In such a case, which one holds true: use a Query or a Command?
Update:
The system involves multiple databases, though not necessarily separate databases for query and command.
Super simple. If you know for sure the call will not update or modify state or data, then it's a query; if it does (or might), then it's a command.
However, CQRS is often more about the physical structure of your system. You might have separate command and query databases... and that complicates the answer. It can have both logical and physical aspects.

Row level security using prisma and postgres

I am using prisma and yoga graphql servers with a postgres DB.
I want to implement authorization for my graphql queries. I saw solutions like graphql-shield that handle column-level security nicely, meaning I can define a permission and accordingly block or allow a specific table or column of data (or, in graphql terms, block a whole entity or a specific field).
The part I am stuck on is row-level security, i.e. filtering rows by the data they contain: say I want to allow a logged-in user to view only the data that is related to them, so depending on the value in a user_id column I would allow or block access to that row (the logged-in user is one example, but there are other use cases in this genre).
This type of security requires running a query to check which rows the current user has access to and I can't find a way (that is not horrible) to implement this with prisma.
If I was working without prisma, I would implement this in the level of each resolver but since I am forwarding my queries to prisma I do not control the internal resolvers on a nested query.
But I do want to work with prisma, so one idea we had was handling this at the DB level using a postgres policy. This could work as follows:
Every query we run will be surrounded with "begin transaction" and "commit transaction"
Before the query I want to run "set local context.user_id to 5"
Then I want to run the query (and the policy will filter results according to current_setting('context.user_id'))
For this to work I would need prisma to allow me to either add pre/post queries to each query that runs or let me set a context for the db.
But these options are not available in prisma.
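To make the intended flow concrete, here is roughly what those steps look like at the SQL level, sketched in C with libpq rather than prisma (the connection string and the posts table are made-up placeholders):

#include <stdio.h>
#include <libpq-fe.h>

int main (void)
{
   /* Placeholder connection string. */
   PGconn *conn = PQconnectdb ("dbname=mydb");
   if (PQstatus (conn) != CONNECTION_OK) {
      fprintf (stderr, "%s", PQerrorMessage (conn));
      return 1;
   }

   /* Step 1: wrap everything in a transaction. */
   PQclear (PQexec (conn, "BEGIN"));

   /* Step 2: expose the current user's id to row-level policies;
      SET LOCAL only lasts until the end of the transaction. */
   PQclear (PQexec (conn, "SET LOCAL context.user_id = '5'"));

   /* Step 3: run the query; a policy can now filter rows via
      current_setting('context.user_id'). */
   PGresult *res = PQexec (conn, "SELECT * FROM posts");
   printf ("visible rows: %d\n", PQntuples (res));
   PQclear (res);

   PQclear (PQexec (conn, "COMMIT"));
   PQfinish (conn);
   return 0;
}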
Any ideas?
You can use prisma-client instead of prisma-binding.
With prisma-binding, you define the top-level resolver and then delegate to prisma for all the nesting.
prisma-client, on the other hand, only returns the scalar values of a type, and you need to define the resolvers for the relations yourself. This means you have complete control over what you return, even for nested queries (see the documentation for an example).
I would suggest you use prisma-client to apply your security filters on the fields.
With the approach you're looking to take, I'd definitely recommend a look at Graphile. It approaches row-level security essentially the same way that you're thinking of. Unfortunately, it seems like Prisma doesn't help you move away from writing traditional REST-style controller methods in this regard.

MarkLogic REST API search for latest document version

We need to restrict a MarkLogic search to the latest version of managed documents, using MarkLogic's REST API. We're using MarkLogic 6.
Using straight XQuery, you can use dls:documents-query() as an additional-query option (see Is there any way to restrict marklogic search on specific version of the document).
But the REST API requires XML, not arbitrary XQuery. You can turn ordinary cts queries into XML easily enough (execute <some-element>{cts:word-query("hello world")}</some-element> in QConsole).
If I try that with dls:documents-query() I get this:
<cts:properties-query xmlns:cts="http://marklogic.com/cts">
  <cts:registered-query>
    <cts:id>17524193535823153377</cts:id>
  </cts:registered-query>
</cts:properties-query>
Apart from being less than totally transparent... how safe is that number? We'll need to put it in our query options, so it's not something we can regenerate every time we need it. I've looked on two different installations here and the number's the same, but is it guaranteed to be the same, and will it ever change? On, for example, a MarkLogic upgrade?
Also, assuming the number is safe, will the registered-query always be there? The documentation says that registered queries may be cleared by the system at various times, but it's talking about user-defined registered queries, and I'm not sure how much of that applies to internal queries.
Is this even the right approach? If we can't do this we can always set up collections and restrict the search that way, but we'd rather use dls:documents-query if possible.
The number is a registered query id, and is deterministic. That is, it will be the same every time the query is registered. That behavior has been invariant across a couple of major releases, but is not guaranteed. And as you already know, the server can unregister a query at any time. If that happens, any query using that id will throw an XDMP-UNREGISTERED error. So it's best to regenerate the query when you need it, perhaps by calling dls:documents-query again. It's safest to do this in the same request as the subsequent search.
So I'd suggest extending the REST API with your own version of the search endpoint. Your new endpoint could add dls:documents-query to the input query. That way the registered query would be generated in the same request with the subsequent search. For ML6, http://docs.marklogic.com/6.0/guide/rest-dev/extensions explains how to do this.
The call to dls:documents-query() makes sure the query is actually registered (on the fly if necessary), but that won't work from the REST API. You could extend the REST API with a custom extension as suggested by Mike, but you could also use the following:
cts:properties-query(
  cts:and-not-query(
    cts:element-value-query(
      xs:QName("dls:latest"),
      "true",
      (),
      0
    ),
    cts:element-query(
      xs:QName("dls:version-id"),
      cts:and-query(())
    )
  )
)
That is the query that is registered by dls:documents-query(). It might not be future-proof though, so check it at each upgrade. You can find the definition of the function in /Modules/MarkLogic/dls.xqy.
HTH!

When should I use AQL?

In the context of ArangoDB, there are different ways to query data:
arangosh: the JavaScript-based console
AQL: the ArangoDB Query Language, see http://www.arangodb.org/2012/06/20/querying-a-nosql-database-the-elegant-way
MRuby: embedded Ruby
Although I understand the use of JavaScript and MRuby, I am not sure why I would learn AQL, or where I would use it. Is there any information on this? Is the idea to POST AQL directly to the database server?
AQL is ArangoDB's query language. It has a lot of ways to query, filter, sort, limit and modify the result that will be returned. It should be noted that AQL only reads data.
(Update: This answer was targeting an older version of ArangoDB. Since version 2.2, the features have been expanded and data modification on the database is also possible with AQL. For more information on that, visit the documentation link at the end of the answer.)
You cannot store data in the database with AQL.
In contrast to AQL, JavaScript or MRuby can both read and store data in the database.
However, their querying capabilities are very basic and limited compared to the possibilities that open up with AQL.
It is possible, though, to send AQL queries from JavaScript.
Within the arangosh JavaScript shell you would issue an AQL query like this:
arangosh> db._query('FOR user IN example FILTER user.age > 30 RETURN user').toArray()
[
  {
    _id : "4538791/6308263",
    _rev : "6308263",
    age : 31,
    name : "Musterfrau"
  }
]
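As for the question of POSTing AQL directly to the server: ArangoDB also exposes an HTTP cursor API (/_api/cursor) that accepts an AQL query as JSON. A rough sketch in C with libcurl, where the host, port, and query are assumptions, and the default write handler prints the JSON response to stdout:

#include <curl/curl.h>

int main (void)
{
   curl_global_init (CURL_GLOBAL_ALL);
   CURL *curl = curl_easy_init ();

   /* The same example query as above, wrapped in the JSON body the
      cursor API expects. */
   const char *body =
      "{\"query\": \"FOR user IN example FILTER user.age > 30 RETURN user\"}";

   struct curl_slist *headers = NULL;
   headers = curl_slist_append (headers, "Content-Type: application/json");

   curl_easy_setopt (curl, CURLOPT_URL, "http://localhost:8529/_api/cursor");
   curl_easy_setopt (curl, CURLOPT_HTTPHEADER, headers);
   curl_easy_setopt (curl, CURLOPT_POSTFIELDS, body);
   curl_easy_perform (curl);

   curl_slist_free_all (headers);
   curl_easy_cleanup (curl);
   curl_global_cleanup ();
   return 0;
}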
You can find more info on AQL here:
http://www.arangodb.org/manuals/current/Aql.html

How to write server-side JS in MongoDB

Can I write a procedure server side which later on gets stored in the DB and used for further transactions?
If yes, can you provide sample code showing how to write such server-side JS from Java?
Can I write a procedure server side which later on gets stored in the DB and used for further transactions?
No, but as @Philipp states, you can write a block of JavaScript which will be evaled within the built-in JavaScript engine in MongoDB (SpiderMonkey at the moment, unless you compile with V8).
I should be clear: this IS NOT A STORED PROCEDURE AND IT DOES NOT RUN "SERVER SIDE" as SQL procedures do.
You must also note that the JS engine is single-threaded, that eval (when used) locks, and that there are a tonne of other problems.
Really, the whole point of the ability to store functions in the system collection is to store repeating code for tasks such as MR (map/reduce).
It is possible to do that, but 10gen advises that you shouldn't. JavaScript functions can be stored in the special collection system.js and invoked through the eval command.
The rest of this post is copied from the official documentation: http://www.mongodb.org/display/DOCS/Server-side+Code+Execution#Server-sideCodeExecution-Storingfunctionsserverside
Note: we recommend not using server-side stored functions when possible. As these are code, it is likely best to store them with the rest of your code in a version control system.
There is a special system collection called system.js that can store JavaScript functions to be reused. To store a function, you would do:
db.system.js.save( { _id : "foo" , value : function( x , y ){ return x + y; } } );
_id is the name of the function, and is unique per database.
Once you do that, you can use foo from any JavaScript context (db.eval, $where, map/reduce)
Here is an example from the shell:
> db.system.js.save({ "_id" : "echo", "value" : function(x){return x;} })
> db.eval("echo('test')")
test
See http://github.com/mongodb/mongo/tree/master/jstests/storefunc.js for a full example.
In MongoDB 2.1 you will also be able to load all the scripts saved in db.system.js into the shell using db.loadServerScripts()
> db.loadServerScripts()
> echo(3)
3
Well, you can use Morphia, which is an ODM for Mongo in Java. I know it is not the PL/SQL kind of solution you want, but Mongo has no extensive support for constraints or triggers as in Oracle or SQL Server; validations and the like need to be done in code. ODMs like Morphia in Java, Mongoose in Node.js, or MongoEngine in Python are pretty decent emerging libraries you can use for such tasks, but these constraints are on the app side.