Make Google Cloud Storage Object Public in Elixir - google-cloud-storage

I'm using google_api_storage v0.33.0 with Goth to create objects in a bucket. However, the created objects don't have public access (I disabled public access on the bucket because I want to use object-level access). To make them public, I followed the guidelines for GoogleApi.Storage.V1.Api.ObjectAccessControls and tried to update the ACL as follows:
GoogleApi.Storage.V1.Api.ObjectAccessControls.storage_object_access_controls_insert(
  conn,
  "example",
  "example.txt",
  "allUsers"
)
When executing this logic, I'm getting a 400 error. I also tried insert(), but that didn't work either.
Is there anything I'm missing to update the object's permissions? Any help would be much appreciated. Thanks

Related

Read-Only Postgraphile-CLI API

I am currently implementing a public API for an Open Data Platform, with Postgraphile creating the needed API for me. The API should be completely public, with no authentication whatsoever, and because of that it should only offer read-only queries. Has anyone found a way to use Postgraphile-CLI to create read-only functionality only?
So far I have successfully set up a Postgraphile-CLI API for my Postgres databases, with a user that only has SELECT granted for the schemas in Postgres. However, this doesn't seem to work for my use case, since I can still use the mutations in GraphQL and insert or delete data from my schemas.
Since I don't know too much about Postgres database administration, I therefore wonder if it is possible to simply not expose mutations with Postgraphile-CLI.
Kind regards
Grigorios
EDIT0: I have found the mistake in my Postgres database rights. That may solve the read-only problem, but if anybody knows an answer to the initial question, I would be curious to know anyway.
You have a number of options:
Use permissions, as you suggest, along with the --no-ignore-rbac option - you will have to ensure your database permissions are and remain correct (no default grants to the public role, for example) for this to work
Use PostGraphile's --disable-default-mutations (-M) option; this will stop the CRUD mutations being generated but won't prevent custom mutation functions from being exposed, if you have any
Skip the MutationPlugin via --skip-plugins graphile-build:MutationPlugin - this will prevent the Mutation type from being added to the schema in the first place, so no mutations can be added.
For a real belt-and-braces approach, why not all three?
postgraphile \
--no-ignore-rbac \
--disable-default-mutations \
--skip-plugins graphile-build:MutationPlugin
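If you are using PostGraphile as a library rather than the CLI, the same combination should be expressible through options. The sketch below is untested and assumes PostGraphile v4, an Express app, a DATABASE_URL environment variable, and that graphile-build exports MutationPlugin (which is how the CLI flag refers to it):
const express = require("express");
const { postgraphile } = require("postgraphile");
const { MutationPlugin } = require("graphile-build");

const app = express();

app.use(
  postgraphile(process.env.DATABASE_URL, "public", {
    ignoreRBAC: false,             // library counterpart of --no-ignore-rbac
    disableDefaultMutations: true, // library counterpart of --disable-default-mutations (-M)
    skipPlugins: [MutationPlugin], // library counterpart of --skip-plugins graphile-build:MutationPlugin
  })
);

app.listen(5000);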

Using Mirth Connect Destination Mappings for AWS Access Key Id results in Error

We use Vault to store our credentials. I've successfully grabbed the S3 Access Key ID and Secret Access Key using the Vault API, and used channelMap.put to create the mappings ${access_key} and ${secret_key}.
However when I use these in the S3 file writer I get the error:
"The AWS Access Key Id you provided does not exist in our records."
I know the Access Key Id is valid, it works if I plug it in directly in the S3 file writer destination.
I'd appreciate any help on this. Thank you.
UPDATE: I had to convert the results to a string; that fixed it.
You can try promoting the variable to a higher-scoped map. You can use globalChannelMap, globalMap or configurationMap. I would use the last one, since it can store passwords in a form that is not plain text. You are currently using a channelMap, whose scope applies only to the current message while it travels through the channel.
You can read more about variable maps and their scopes in the Mirth User Guide, section "Variable Maps", page 393. I think that part of the manual is really important to understand.
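For example, a transformer step along these lines (an untested sketch; vaultResponse and the field names are placeholders for however your Vault secret actually comes back) coerces the values to plain strings and stores them in a wider-scoped map:
// Mirth transformers run Rhino JavaScript.
// "vaultResponse" is a placeholder for the raw Vault API result.
var json = JSON.parse(vaultResponse);

// Force plain strings; values pulled out of Java/JSON objects are not
// necessarily strings, which is what the UPDATE above ran into.
var accessKey = String(json.data.access_key);
var secretKey = String(json.data.secret_key);

// A wider-scoped map makes the values visible beyond the current message.
globalChannelMap.put('access_key', accessKey);
globalChannelMap.put('secret_key', secretKey);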
See my comment, it was a race condition between Vault, Mirth and AWS.

Using Firebase Security rules, how can I check for available usernames without making the entire node public?

In my Firebase database, I have a section for storing usernames that are taken.
There is a “usernames” node, where the username is the key, and the user’s ID is stored in a “userId” attribute.
usernames
{
  username1
    userId: "exampleId1"
  username2
    userId: "exampleId2"
  username3
    userId: "exampleId3"
  ...
}
When a user is signing up, before they create an account and are Authenticated, the app must check that the username is not taken.
In order for this to work, the “usernames” node has been set to public in the Firebase Security Rules:
"usernames": {
".read": true
}
Unfortunately, this will make every taken username and internal user ID visible, which is a security concern and not something that should be done.
(for those that don’t know, public nodes can be accessed through a browser like so):
https://mydatabasename.firebaseio.com/usernames.json
There are other nodes for banned usernames and emails that work in a similar way; they have to be checked before a user is Authenticated, and should not be fully exposed to the public.
My question is: When a user is signing up, how can I check for available usernames without making the entire node public?
To know whether a specific username is already taken, the user doesn't need read permission on /usernames; it suffices to give them read access to /usernames/$username. So:
"usernames": {
"$username": {
".read": true
}
}
With these rules, your code can check whether the specific username the user wants to claim is already taken (by someone else), but it can't request a list of all usernames.
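For example, with the namespaced Firebase JS SDK the check could look something like this untested sketch (desiredUsername is whatever the user typed into your sign-up form):
firebase.database()
  .ref('usernames/' + desiredUsername)
  .once('value')
  .then(function (snapshot) {
    if (snapshot.exists()) {
      // taken by someone else
    } else {
      // available
    }
  });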
Two options come to mind: the first allows public read access to part of your database, while the second is what I would do in a real project:
Method 1: Maintain a Separate "Usernames" Node
With this method you create a secondary node, let's say it's called usernamesInUse and this would be world readable. Its structure would look like this:
{
  "usernamesInUse": {
    "username1": true,
    "username2": true,
    "username3": true
  }
}
Checking if a username exists is as simple as:
db().ref('usernamesInUse/username2')
  .once('value', (snapshot) => { if (snapshot.exists()) { /* ... */ } })
The downside to this method is that you have to have a process in place to update this node whenever a user is added, modified or deleted. However, this gives secure read access to the usernames and nothing else.
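One possible way to keep that node in sync is a Realtime Database trigger; this untested sketch uses the firebase-functions v1 API, and syncUsername is just a placeholder name:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.syncUsername = functions.database
  .ref('/usernames/{username}')
  .onWrite((change, context) => {
    // Mirror creations and deletions into the world-readable node,
    // without exposing the private userId.
    return admin.database()
      .ref('/usernamesInUse/' + context.params.username)
      .set(change.after.exists() ? true : null);
  });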
Method 2: Create a Cloud Function (How I would do it)
Create a simple Cloud Function with an HTTPS endpoint that checks for the existence of the username and returns a 200 or 404 status code. Your database would not need any world readable permissions.
This avoids the need to duplicate data, prevents users from downloading a full list of every user in your system and prevents the world from unmetered access to your database. You also have the opportunity to block access to abusive anonymous visitors.
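An untested sketch of such an endpoint, using the firebase-functions v1 HTTPS API (checkUsername and the query-parameter shape are placeholders):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.checkUsername = functions.https.onRequest(async (req, res) => {
  const username = req.query.username;
  if (!username) {
    res.status(400).send('username query parameter is required');
    return;
  }
  const snapshot = await admin.database()
    .ref('/usernames/' + username)
    .once('value');
  // One way to map the status codes: 200 if the username exists (taken),
  // 404 if it is available.
  res.status(snapshot.exists() ? 200 : 404).end();
});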
Tell me if you like the idea. :)
I would create a singleton with all functions and variables private, which only takes a username and returns a Bool.
This way no one can access the data or functions from this part, and no injection is possible.
Inside the singleton you can check all usernames and do whatever you want.
Return only a Bool.

GCloud Bucket Storage Public Permission Exception

I'm trying to add the allUsers member to one of my buckets with a condition, but it throws an exception and I don't know what the problem is.
My bucket uses uniform bucket-level access control. I follow these steps:
Click Permission
Add Members
New Members (allUsers)
Select a role Storage Object Viewer
Add condition resource.name.startswith('{my pattern}')
Click Save
And I get IAM policy update failed - backend error.
Do you have any idea why I'm getting this exception?
For more detail:
I have 4 different folders (virtual folders, that is) in my root directory.
One of them starts with my pattern. The bucket has a lot of files, but I tried the same change on an empty bucket and nothing changed!
I just reproduced this and, in effect, I wasn't able to set a condition on allUsers in my bucket, but I was able to set conditions for specific users. Setting conditions on allUsers is not supported; you must specify a valid user or group.

Copying data from S3 to Redshift

I feel like this should be a lot easier than it's been on me.
copy table
from 's3://s3-us-west-1.amazonaws.com/bucketname/filename.csv'
CREDENTIALS 'aws_access_key_id=my-access;aws_secret_access_key=my-secret'
REGION 'us-west-1';
Note: I added the REGION clause after having a problem, but it did nothing.
What confuses me is that the bucket properties only show the https://path/to/the/file.csv link. Since all the documentation I have read calls for the path to start with s3://, I can only assume I should just change https to s3, as shown in my example.
However I get this error:
"Error : ERROR: S3ServiceException:
The bucket you are attempting to access must be addressed using the specified endpoint.
Please send all future requests to this endpoint.,Status 301,Error PermanentRedirect,Rid"
I am using Navicat for PostgreSQL to connect to Redshift, and I'm running on a Mac.
The S3 path should be 's3://bucketname/filename.csv'. Try this.
Yes, it should be a lot easier :-)
I have only seen this error when the S3 bucket is not in US Standard. In such cases you need to use an endpoint-based address, e.g. http://s3-eu-west-1.amazonaws.com/mybucket/myfile.txt.
You can find the endpoints for your region on this documentation page:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region