Inserting a password in MongoDB

Is there a way to hash a string from the mongo shell when inserting data? I want to insert account details by hand and create as many users as needed.

If I understood your problem correctly, you want to hash a value from the shell. I'm not sure why you need this, but I'll suggest a couple of ways to handle the scenario.
Custom solution:
MongoDB lets you store JavaScript functions on the server, which you can write in JS and call from any context. This gives you full control: write your own function and call it from the shell. In this case I'm calling a fromStringToHash function that I created previously...
db.loadServerScripts();
db.PasswordTest.insert({"name":"Daniele", "password": "notSecure"});
db.PasswordTest.insert({"name":"Daniele", "password": fromStringToHash("theSecretPassword")});
https://docs.mongodb.com/manual/tutorial/store-javascript-function-on-server/
MD5
If you just need a way to compute a hash, take a look:
db.PasswordTest.insert({"name":"Daniele", "password": hex_md5("theSecretPassword")});
You can also combine both solutions (a stored function that internally calls hex_md5, so you're free to change the hashing in the future), as sketched below. The hex_md5 function is available from the MongoDB shell context.
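For completeness, the stored function might be saved like this (a minimal sketch: the fromStringToHash name comes from the example above, the MD5 body is only illustrative since MD5 is not a strong password hash, and newer shells use db.system.js.insertOne instead of save):
db.system.js.save({
    _id: "fromStringToHash",
    value: function (input) {
        // delegate to the built-in helper; swap the algorithm here later if needed
        return hex_md5(input);
    }
});
db.loadServerScripts(); // reload so the function is callable in this session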

Related

CQRS - How to handle if a command requires data from db (query)

I am trying to wrap my head around the best way to approach this problem.
I am importing a file that contains a bunch of users, so I created a handler called
ImportUsersCommandHandler and my command is ImportUsersCommand that has List<User> as one of the parameters.
In the handler, for each user that I need to import, I have to make sure that the UserType is valid; this is where the confusion comes in. I need to query the database to get a list of all possible user types, and then for each user I am importing, verify that the user type id in the import matches one in the db.
I have 3 options.
Create a GetUserTypesQuery, get its results, and then pass them to the ImportUsersCommand as a list and verify inside the command handler
Call the GetUserTypesQuery from the command handler itself rather than passing the data in (a command handler calling a query)
Do not create a GetUserTypesQuery and just run the query inside the command handler (still a query, but with no query/handler involved)
I feel like all these are dirty solutions and not the correct way to apply CQRS.
I agree option 1 sounds the best, but I would suggest adding a pre-handler to validate your input.
So ImportUsersCommandHandler deals with importing your data (and only that), and you add a handler that runs before it that validates (in your example, checks the user types and maybe other things) and bails out if validation does not pass. So it queries the db, checks the user types, and does whatever it needs to on failure; otherwise it just passes down to your business handler (ImportUsersCommandHandler). See the sketch below.
I am used to using MediatR in .NET Core, and this pattern works well (this is what we do), so sorry if this does not fit your environment/setup!
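For illustration, that pre-handler could be a MediatR pipeline behavior along these lines (a sketch only: the ImportUsersResult type, IUserTypeRepository abstraction, and property names are hypothetical, and the Handle parameter order differs slightly between MediatR versions):
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Hypothetical behavior that runs before ImportUsersCommandHandler (MediatR 12-style signature).
public class ImportUsersValidationBehavior : IPipelineBehavior<ImportUsersCommand, ImportUsersResult>
{
    private readonly IUserTypeRepository _userTypes; // assumed data-access abstraction

    public ImportUsersValidationBehavior(IUserTypeRepository userTypes) => _userTypes = userTypes;

    public async Task<ImportUsersResult> Handle(
        ImportUsersCommand request,
        RequestHandlerDelegate<ImportUsersResult> next,
        CancellationToken cancellationToken)
    {
        // Query the db once for the set of valid user type ids.
        var validTypeIds = await _userTypes.GetAllTypeIdsAsync(cancellationToken);

        // Bail out if any imported user references an unknown type.
        if (request.Users.Any(u => !validTypeIds.Contains(u.UserTypeId)))
            throw new InvalidOperationException("Import contains unknown user types.");

        // Otherwise pass down to the business handler.
        return await next();
    }
}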

How to deal with complex permissions in Hasura

Basics: I need to return data from columns based on some variables from a different table (I return either the column or null if access is not allowed).
I have already done what I need via a custom function in Postgres, but the problem is that in Hasura, functions share the permissions of the table/view they return SETOF on.
So I have to allow access to the table itself, and as a result the permissions in my function are kind of meaningless, because anyone will be able to access the data simply by querying the original table directly.
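For reference, the kind of function meant here might look roughly like this (a sketch only; the articles/acl tables and their columns are made up for illustration):
-- hypothetical: return article rows, nulling the restricted column
-- unless the caller has an acl entry granting access
CREATE OR REPLACE FUNCTION public.articles_for_user(caller_id integer)
RETURNS SETOF public.articles AS $$
  SELECT a.id,
         CASE WHEN acl.can_read THEN a.secret_column ELSE NULL END
  FROM public.articles a
  LEFT JOIN public.acl
         ON acl.article_id = a.id AND acl.user_id = caller_id;
$$ LANGUAGE sql STABLE;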
My current line of thinking is that the only way to do what I need is to create a remote schema and remove access to the original table.
But maybe there is a way to not expose some of the tables as a graphql query? If I could do something like this - I'd just hide my table and expose only a function.
The remote schema seems like it would work.
Another option would be the allow-queries option.
It's possible to limit queries. It seems a bit tricky: you need an exact copy of every query that should be allowed (with the fields in exactly the correct order), but if you do that, then only your explicitly whitelisted queries will be accepted. More info in the docs.
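For example, if clients should only ever run one particular query against that function, the whitelisted entry would be that exact text (a hypothetical query building on the sketch above; see the Hasura allow-list docs for how entries are registered):
query GetVisibleArticles {
  articles_for_user(args: {caller_id: 42}) {
    id
    secret_column
  }
}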
I'm not familiar enough with postgres permissions to offer any better ideas...

Using luigi to update Postgres table

I've just started using the luigi library. I am regularly scraping a website and inserting any new records into a Postgres database. As I'm trying to rewrite parts of my scripts to use luigi, it's not clear to me how the "marker table" is supposed to be used.
Workflow:
Scrape data
Query DB to check if new data differs from old data.
If so, store the new data in the same table.
However, using luigi's postgres.CopyToTable, if the table already exists, no new data will be inserted. I guess I should be using the inserted column in the table_updates table to figure out what new data should be inserted, but it's unclear to me what that process looks like and I can't find any clear examples online.
You don't have to worry about the marker table much: it's an internal table luigi uses to track which tasks have already been successfully executed. To do so, luigi uses the update_id property of your task. If you didn't declare one, then luigi will use the task_id, as shown here. That task_id is a concatenation of the task family name and the first three parameters of your task.
The key here is to override the update_id property of your task and return a custom string that you know will be unique for each run of your task. Usually you should use the significant parameters of your task, something like:
@property
def update_id(self):
    return ":".join([self.param1, self.param2, self.param3])
By significant I mean parameters that change the output of your task. I imagine parameters like the website url or id, and the scraping date. Parameters like the hostname, port, username, or password of your database will be the same for all of these tasks, so they shouldn't be considered significant.
Notice that without details about your tables and the data you're trying to save, it's pretty hard to say how you should build that update_id string, so please be careful.
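Putting that together, a task might look roughly like this (a sketch under assumptions: the task name, parameters, columns, and connection details are all made up):
import luigi
from luigi.contrib import postgres

class ScrapeToPostgres(postgres.CopyToTable):
    # significant parameters: they change what the task produces
    site_url = luigi.Parameter()
    scrape_date = luigi.DateParameter()

    # connection details: identical for every run, so not significant
    host = "localhost"
    database = "scraping"
    user = "postgres"
    password = "secret"  # illustrative only
    table = "scraped_records"
    columns = [("url", "TEXT"), ("scraped_at", "TIMESTAMP"), ("payload", "TEXT")]

    @property
    def update_id(self):
        # unique per site and date, so each day's scrape is inserted exactly once
        return ":".join([self.site_url, str(self.scrape_date)])

    def rows(self):
        # yield one tuple per record to insert, matching `columns`
        yield (self.site_url, str(self.scrape_date), "scraped data goes here")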

What is more recommended to use in the C driver: mongoc_collection_command with "insert" or mongoc_collection_insert?

After working for a while with the C driver and reading the tutorials and the API,
I'm a little confused.
According to this tutorial: http://api.mongodb.org/c/current/executing-command.html
I can execute DB and collection commands, which also include the CRUD commands.
And I can even get the document cursor if I don't use "_simple" in the command API.
So why do I need to use, for example, the mongoc_collection_insert() API command?
What are the differences? What is recommended?
Thanks
This question is probably similar to what's the difference between using insert command or db.collection.insert() via the mongo shell.
mongoc_collection_insert() is a specific function written to insert a document into a collection, while mongoc_collection_command() is for executing any valid database command on a collection.
I would recommend using the API function (mongoc_collection_insert) whenever possible, for the following reasons:
The API functions have been written as an abstraction layer with a specific purpose, so you don't have to deal with other details related to the command.
For example, mongoc_collection_insert exposes the right parameters for inserting, i.e. mongoc_write_concern_t and mongoc_insert_flags_t, with sensible defaults. On the other hand, mongoc_collection_command has a broad range of parameters, such as mongoc_read_prefs_t, skip, or limit, which are not relevant for inserting a document.
Any future changes to mongoc_collection_insert will more likely be considered with the correct context for inserts.
Especially for CRUD, try to avoid using command, because the MongoDB wire protocol uses different request opcodes for the two: a command is sent as a query against the special $cmd collection (OP_QUERY: 2004), while an insert has its own opcode (OP_INSERT: 2002).
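For reference, the insert helper from that era of the driver is used like this (a minimal sketch; the connection string, database, and collection names are assumptions, and newer driver versions replace this with mongoc_collection_insert_one):
#include <stdio.h>
#include <mongoc.h>

int main (void)
{
   mongoc_client_t *client;
   mongoc_collection_t *collection;
   bson_t *doc;
   bson_error_t error;

   mongoc_init ();
   client = mongoc_client_new ("mongodb://localhost:27017");
   collection = mongoc_client_get_collection (client, "test", "PasswordTest");

   /* the helper only asks for what an insert needs: flags, document, write concern */
   doc = BCON_NEW ("name", BCON_UTF8 ("Daniele"));
   if (!mongoc_collection_insert (collection, MONGOC_INSERT_NONE, doc, NULL, &error)) {
      fprintf (stderr, "Insert failed: %s\n", error.message);
   }

   bson_destroy (doc);
   mongoc_collection_destroy (collection);
   mongoc_client_destroy (client);
   mongoc_cleanup ();
   return 0;
}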

Node.js Postgres Get Last Executed Query

I'm using Node.js and Postgres (using this module) and I would like to view the SQL that has been executed after I have queried the database (as I'm using parameterised statements). Is there an easy way to do this?
For example, I execute code as follows:
var first = client.query("UPDATE settings SET json=$1 WHERE source_name=$2", [JSON.stringify(settings), 'website']);
first.on('end', function(result){
console.log(result);
client.end();
});
Is there a method like result.lastQuery() that I can utilise as I can't find anything like this in the docs? I'm having trouble getting my query to work and I'd like to debug it further.
There appears to be no direct way to do this (if Postgres is like most database servers, the query, with parameter markers, is compiled into intermediate code and the actual parameters are bound later on, so there's never any actual SQL text with the parameter values interpolated into it).
This blog post might or might not be helpful.
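As a workaround, you can log the query text and parameters yourself at the call site, for example with a small wrapper (a sketch against the event-emitter style API used above; loggedQuery is a made-up helper, not part of the module):
// stash and print the text/values before handing off to client.query
function loggedQuery(client, text, values) {
  client.lastQuery = { text: text, values: values }; // inspect this later while debugging
  console.log('SQL:', text, 'params:', values);
  return client.query(text, values);
}

var first = loggedQuery(client, "UPDATE settings SET json=$1 WHERE source_name=$2",
  [JSON.stringify(settings), 'website']);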