Sign S3 URL in PostgreSQL RDS/Amazon Aurora

There are a lot of image files whose URLs are returned by the DB (either PostgreSQL RDS or Amazon Aurora), and we need to sign those URLs. Currently a user-defined function or a view returns the records.
I am looking for a way to sign the S3 URL directly in SQL as a user-defined function. Unfortunately, there does not seem to be a way other than using Python inside a user-defined function, and Python is not supported as a procedural language in RDS PostgreSQL/Aurora.
Does anyone know of a way to sign the URL directly as part of a SQL query in PostgreSQL RDS/Amazon Aurora?

The database is not the place to perform such an operation.
You should consider either storing an already-signed URL in the database, or rethinking whether that part of your application should be rearchitected.
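If the signing moves to the application tier, it is only a few lines per URL. A minimal sketch using boto3 (the bucket name, key, and expiry below are placeholders, not values from the question):

import boto3

# boto3 signs with whatever credentials the environment / instance profile provides.
s3 = boto3.client("s3")

def sign_url(key, bucket="my-image-bucket", expires=3600):  # placeholder bucket name
    # Generate a time-limited pre-signed GET URL for a private S3 object.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )

# Decorate the rows after the SQL query returns, e.g.:
# rows = [{"id": r[0], "url": sign_url(r[1])} for r in cur.fetchall()]

No round trip to AWS is needed per URL; generate_presigned_url signs locally, so looping over a large result set is cheap.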

Related

How to avoid Mongo DB NoSQL blind (sleep) injection

While scanning my application for vulnerabilities, I got one high-risk error, i.e.
Blind MongoDB NoSQL Injection
I checked exactly what request the scanning tool sent to the database, and found that it had added the line below to a GET request:
{"$where":"sleep(181000);return 1;"}
The scan received a "Time Out" response, which indicates that the injected "sleep" command succeeded.
I need help fixing this vulnerability. Can anyone help me out here? I just want to understand what I need to add to my code to perform this check before connecting to the database.
Thanks,
Anshu
As with SQL injection, or any other type of code injection: don't copy untrusted content into a string that will be executed as a MongoDB query.
You apparently have some code in your app that naively accepts user input or some other content and runs it as a MongoDB query.
Sorry, it's hard to give a more specific answer, because you haven't shown that code, or described what you intended it to do.
But generally, in every place where you use external content, you have to imagine how it could be misused if the content doesn't contain the format you assume it does.
You must instead validate the content, so it can only be in the format you intend, or else reject the content if it's not in a valid format.
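As an illustration, here is a minimal sketch in Python/pymongo (the collection and field names are made up for the example). The point is to validate the input and pass it as a structured equality match, which the driver sends as data, so it is never evaluated as JavaScript:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
users = client.mydb.users  # hypothetical collection

def find_user(username):
    # Validate: accept only the format you intend, reject everything else.
    if not isinstance(username, str) or not username.isalnum():
        raise ValueError("invalid username")
    # Structured query: the value is matched literally, never executed,
    # so a payload like {"$where": "sleep(181000);return 1;"} cannot run.
    return users.find_one({"username": username})

As defence in depth, server-side JavaScript (which is what $where relies on) can be disabled entirely by setting security.javascriptEnabled: false in mongod.conf.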

How to deal with complex permissions in Hasura

Basics - I need to return data from columns based on some variables from a different table (I return either the column or null if access is not allowed).
I have already done what I need via a custom function in Postgres, but the problem is that in Hasura a function shares its permissions with the table/view it returns a SETOF of.
So I have to allow access to the table itself, and as a result the permissions in my function are effectively meaningless, because anyone can get the data simply by querying the original table directly.
My current line of thinking is that the only way to do what I need is to create a remote schema and remove access to the original table.
But maybe there is a way to not expose some of the tables as a GraphQL query? If I could do that, I'd just hide my table and expose only the function.
The remote schema seems like it would work.
Another option would be the allow-queries option.
It's possible to limit queries, though it seems a bit tricky: you need an exact copy of every query that should be allowed (with the fields in exactly the right order), but if you do that, only your explicitly whitelisted queries will be accepted. More info in the docs, and see the sketch below.
I'm not familiar enough with postgres permissions to offer any better ideas...
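For what it's worth, a rough sketch of setting up such an allow list through Hasura's metadata API (the endpoint shape is from Hasura v2; the URL, admin secret, and query text are placeholders):

import requests

HASURA = "http://localhost:8080/v1/metadata"      # placeholder endpoint
HEADERS = {"x-hasura-admin-secret": "mysecret"}   # placeholder secret

# 1. Create a query collection holding the exact queries to allow.
requests.post(HASURA, headers=HEADERS, json={
    "type": "create_query_collection",
    "args": {
        "name": "allowed-queries",
        "definition": {"queries": [{
            "name": "get_visible_columns",
            # Placeholder query - must match the client's query exactly.
            "query": "query { my_function { id visible_column } }",
        }]},
    },
})

# 2. Add the collection to the allow list. Enforcement is switched on
#    separately via HASURA_GRAPHQL_ENABLE_ALLOWLIST=true.
requests.post(HASURA, headers=HEADERS, json={
    "type": "add_collection_to_allowlist",
    "args": {"collection": "allowed-queries"},
})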

Copying data from S3 to Redshift

I feel like this should be a lot easier than it has been for me.
copy table
from 's3://s3-us-west-1.amazonaws.com/bucketname/filename.csv'
CREDENTIALS 'aws_access_key_id=my-access;aws_secret_access_key=my-secret'
REGION 'us-west-1';
Note: I added the REGION clause after running into the problem, but it made no difference.
What confuses me is that the bucket properties only show https://path/to/the/file.csv. All the documentation I have read calls for the path to start with s3://, so I can only assume I should just change https to s3, as shown in my example.
However I get this error:
"Error : ERROR: S3ServiceException:
The bucket you are attempting to access must be addressed using the specified endpoint.
Please send all future requests to this endpoint.,Status 301,Error PermanentRedirect,Rid"
I am using Navicat for PostgreSQL to connect to Redshift, and I'm running on a Mac.
The S3 path should be 's3://bucketname/filename.csv'. Try this.
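For reference, a minimal sketch of the corrected command, here run from Python with psycopg2 (Redshift speaks the PostgreSQL wire protocol; the connection details are placeholders, and the trailing CSV option is an assumption based on the .csv file):

import psycopg2

conn = psycopg2.connect(
    host="mycluster.abc123.us-west-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="mydb", user="admin", password="...",
)
# The connection context manager commits on successful exit.
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY mytable
        FROM 's3://bucketname/filename.csv'
        CREDENTIALS 'aws_access_key_id=my-access;aws_secret_access_key=my-secret'
        REGION 'us-west-1'
        CSV;
    """)

Note that the FROM clause names only the bucket and key; the bucket's location is what the REGION clause is for.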
Yes, it should be a lot easier :-)
I have only seen this error when your S3 bucket is not in US Standard. In such cases you need to use an endpoint-based address, e.g. http://s3-eu-west-1.amazonaws.com/mybucket/myfile.txt.
You can find the endpoints for your region on this documentation page:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

Creating a SHA-256 hash in OrientDB Function

I need to store a password's SHA-256 hash in an OrientDB REST function, so I can use it to authenticate the user. The incoming call to the REST function will contain the password (over HTTPS), but I want to generate a hash and store that instead of the password itself.
However, OrientDB does not expose any helpers to do this, and plain JavaScript does not have helpers for it either... is there any way I can make this happen?
(One obvious option is to SHA-256 it in the middle tier and pass that to OrientDB, but I'd rather keep this in the database tier.)
You can use OSecurityManager from JavaScript functions, like this:
return com.orientechnologies.orient.core.security.OSecurityManager.instance().digest2String("password");

What kind of int storage is this?

We have a Firebird database for a (very crappy) application, and the app's front end, but nothing in between (i.e. no source code).
There is a field in the database that is stored as -2086008209 but is represented in the front end as 63997.
Examples:
Database Front-End
758038959 44093
1532056691 61409
28401112 65866
-712038758 40712
936488434 43872
-688079579 48567
1796491935 39437
1178382500 30006
1419373703 66069
1996421588 48454
890825339 46313
-820234748 45206
What kind of storage is this? Our aim is to access the application's back-end data and bypass the front-end GUI altogether, so I need to know how to decode this field in order to get appropriate values from it. It is stored as an int in Firebird (I don't know if Firebird has signed/unsigned ints, but this one shows as signed when we select it).
It is not, as far as I can tell, de-normalised. The generator GEN_CONTACTS_ID has 66241 against it, which at a glance looks accurate.
I work with an application that stores bitmaps in integers (just don't ask). If you express your values in that form (binary/hex), do you get anything useful or consistent? See the sketch below.
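For example, a quick exploratory sketch in Python using the pairs from the question, printing each stored value as an unsigned 32-bit quantity in hex and binary next to the front-end value, to eyeball for embedded bit fields:

pairs = [
    (758038959, 44093), (1532056691, 61409), (28401112, 65866),
    (-712038758, 40712), (936488434, 43872), (-688079579, 48567),
    (1796491935, 39437), (1178382500, 30006), (1419373703, 66069),
    (1996421588, 48454), (890825339, 46313), (-820234748, 45206),
]
for stored, shown in pairs:
    u = stored & 0xFFFFFFFF  # reinterpret the signed 32-bit int as unsigned
    print(f"{stored:>12}  {u:08X}  {u:032b}  -> {shown} ({shown:04X})")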
My impression is that the problem is in the front end. If what is stored in the DB is -2086008209, then what is stored in the DB is -2086008209. To understand better how the application is manipulating the data, try storing other numbers in the DB and see how they are displayed.
Did you come to this realization by logging SQL? If you haven't, you may serve yourself well by using the Firebird Trace API to capture that SQL: http://www.firebirdfaq.org/faq95/. An easier tool for parsing the Trace API output is this commercial product: http://www.upscene.com/products.fbtm.index.php.
I've used these tools and other techniques (triggers etc,.) to find what an application is using/changing in the Database.
Of course, if the SQL statement is select * from table, then these tools would not help much.