I have a production database with a specific master password and a specific user. My DB is AWS RDS with Postgres.
I'm automatically cloning it every day to a dev environment, where multiple developers need access to it, but they should not have access to the production environment.
How can I give the clone a new, non-production password during cloning? I could obviously write some automated tool, but I'd prefer something simpler, or optionally to use the AWS API.
You can use IAM to provide access to any RDS instance without having to share any passwords.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
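For a Postgres clone, a minimal sketch of that flow looks like the following; the instance identifier, hostname, and role name are placeholders, and it assumes each developer's IAM identity is allowed the rds-db:connect action for that database role:

aws rds modify-db-instance \
  --db-instance-identifier dev-clone \
  --enable-iam-database-authentication \
  --apply-immediately     # enable IAM auth on the freshly restored clone
# inside Postgres, as the master user, create a role that authenticates via IAM:
#   CREATE USER dev_user WITH LOGIN;
#   GRANT rds_iam TO dev_user;
# each developer then fetches a short-lived token and uses it as the password
PGPASSWORD=$(aws rds generate-db-auth-token \
  --hostname dev-clone.abc123.eu-west-1.rds.amazonaws.com \
  --port 5432 --username dev_user) \
psql "host=dev-clone.abc123.eu-west-1.rds.amazonaws.com port=5432 user=dev_user dbname=postgres sslmode=require"

This way the production master password never has to be handed out; developers authenticate with credentials scoped to the dev clone only.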
In our organization we have common credentials for accessing the Postgres databases, which every developer knows, as they are hardcoded in the application's connection string. Because of this, whenever a DML/DDL change happens on a database it is hard for us to trace, since developers tend to make changes on their own. We can't have individual logins for each developer, as that is tedious to manage.
Note: we also can't ensure that the credentials won't be shared with peer developers.
To address this, we thought of integrating Postgres with Azure Active Directory for authentication.
If we can map Azure AD groups/users to Postgres, security will be tightened and maintenance overhead will be reduced.
However, I couldn't find an article on implementing this, since most articles describe the integration of Azure-managed PostgreSQL with Azure AD, not Postgres running on VMs.
Can anyone guide me or share a detailed article on implementing Azure AD integration for Postgres running on a VM (IaaS)?
In the Azure portal, go to the PostgreSQL database, select Authentication, and set the Active Directory admin.
You can specify an Azure AD group instead of an individual user to have multiple administrators.
Connecting to PostgreSQL:
1. Log in to your Azure subscription.
2. Get an access token for the PostgreSQL server using the command below:
az account get-access-token --resource https://ossrdbms-aad.database.windows.net
3. Use that token as the password when logging in to the PostgreSQL server (see the sketch after these steps).
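Putting steps 2 and 3 together, a minimal sketch of the connection (the server name is a placeholder, and on Single Server the user name additionally needs @<servername> appended):

export PGPASSWORD=$(az account get-access-token \
  --resource https://ossrdbms-aad.database.windows.net \
  --query accessToken --output tsv)
psql "host=yourserver.postgres.database.azure.com port=5432 dbname=postgres user=user1@yourtenant.onmicrosoft.com sslmode=require"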
Creating a user:
CREATE USER "user1@yourtenant.onmicrosoft.com" IN ROLE azure_ad_user;
Token validation:
Token is signed by Azure AD and has not been tampered with
Token was issued by Azure AD for the tenant associated with the server
Token has not expired
Token is for the Azure Database for PostgreSQL resource (and not another Azure resource)
Reference Link: Use Azure Active Directory - Azure Database for PostgreSQL - Single Server | Microsoft Learn
Using Azure Active Directory is a great idea for the reasons you specified, but unfortunately there's no native support for connecting Azure Active Directory to a local Postgres installation (which is essentially what you have with Postgres in a VM). It can be done through the LDAP protocol, however.
FULL DISCLOSURE: I haven't actually done this part myself (or used the steps in the tutorial link), but this is my understanding from working with system operators. Use LDAP to connect to Azure AD, then configure Postgres to authenticate via LDAP (a sketch follows below). More information on LDAP authentication in Postgres can be found here.
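To make the idea concrete, here is a hedged sketch of what the Postgres side can look like. Azure AD itself does not speak LDAP directly, so this assumes Azure AD Domain Services (or another AD/LDAP endpoint) with secure LDAP enabled; the hostname, DNs, bind account, and network range are all placeholders, and a matching login role still has to exist in Postgres for each user:

# pg_hba.conf entry using LDAP search+bind against the directory
host  all  all  10.0.0.0/8  ldap ldapserver=ldaps.contoso.com ldapport=636 ldapscheme=ldaps ldapbasedn="OU=Users,DC=contoso,DC=com" ldapbinddn="CN=pg-bind,OU=ServiceAccounts,DC=contoso,DC=com" ldapbindpasswd="bind-secret" ldapsearchattribute="sAMAccountName"

With this in place, Postgres validates each login password against the directory, so the actual credentials live in Azure AD rather than in the database.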
Bhavani's answer is about Azure Database for PostgreSQL, which is an Azure-native database service. This part I have used and I highly recommend it; you get Azure AD integration and can manage database performance and connectivity directly without also having to manage VM performance. Note that their steps describe the Flexible Server experience while the reference link says 'Single Server'; I recommend Flexible Server.
I'd like to limit the privileges afforded to any given user that I create via the Google Terraform provider. By default, any user created is placed in the cloudsqlsuperuser group, and any new database created has that role/group as owner. This gives any user created via the GCP console or google_sql_user Terraform resource total control over any database that is (or was) created in a similar fashion.
So far, the best we've been able to come up with is creating and altering a user via a single-run k8s job. This seems circuitous, at best, especially given that that resource must then be manually imported later if we want to manage it via Terraform.
Is there a better way to create a user that has privileges limited to a single, application-specific database?
I was puzzled by this behaviour too. It's probably not the answer you want, but if you can use GCP IAM accounts, the user gets created in the PostgreSQL instance with NO roles.
There are three types of account you can create with "gcloud sql users create" or the Terraform resource "google_sql_user":
"CLOUD_IAM_USER", "CLOUD_IAM_SERVICE_ACCOUNT" or "BUILT_IN"
The default is the BUILT_IN type if not specified.
CLOUD_IAM_USER and CLOUD_IAM_SERVICE_ACCOUNT accounts get created with NO roles; for example, see the sketch below.
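For illustration, a minimal sketch using the gcloud CLI (names are placeholders, and it assumes the instance has the cloudsql.iam_authentication flag enabled):

# create an IAM-backed user; it starts out with no roles at all
gcloud sql users create dev1@example.com \
  --instance=my-instance \
  --type=CLOUD_IAM_USER
# then, connected as a privileged user, grant only what the app database needs, e.g.
#   GRANT CONNECT ON DATABASE appdb TO "dev1@example.com";
#   GRANT USAGE ON SCHEMA public TO "dev1@example.com";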
We are using these, as the integration with IAM is useful in lots of ways (not managing passwords at the database level is a major plus, especially when used in conjunction with the SQL Auth Proxy).
BUILT_IN accounts (i.e. old-school accounts that need a Postgres username and password) are, for some reason, granted the "cloudsqlsuperuser" role.
Since GCP doesn't allow the real superuser role, this is about as privileged as you can get, so to me (and you) it seems a bizarre default.
I want to give my developer access to my MongoDB, which is hosted on an EC2 instance on AWS.
He should be able to run mongodump, upload the new backend, and make some changes in our control panel.
I created an IAM user with EC2FullAccess permissions, and I noticed that he was able to add his own IP to the security group so he could connect.
I don't feel comfortable with that. What should I do to make sure he has just enough access to do the necessary work:
Upload new code to server
Do MongoDB dump
I don't want him to be able to switch off or delete my instance, or to be able to delete my database at all.
Looking at your use case, you do not need to give any EC2 permissions; your developer does not even need an IAM user. He can simply have the IP of the instance and the login credentials for it, which should suffice to log in and make the required changes. No need for an IAM user or AWS Console access.
IAM roles are for accessing one service on behalf of another. Say you want to access AWS DynamoDB or S3 from an EC2 instance; in that case, an IAM role with the required permissions attached to the EC2 instance will serve the purpose.
An IAM user is for people who need access to AWS services, either through the Console or through the API (programmatically). AWS credentials are required to access those services.
In your case, MongoDB is installed on EC2, and your developer needs access to "the server on which MongoDB is installed", not to the AWS EC2 service itself.
As correctly pointed out in the answer by @X-Men, neither an IAM role nor an IAM user is required. What is required is for your developer to have the IP of the server and credentials to log in to that server: username-password or username-key.
The restrictions you need on the developer with respect to MongoDB are configured in MongoDB itself, not at the EC2 level; see the sketch below.
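As a hedged illustration of that MongoDB-side configuration (user, password, and database names are placeholders, and it assumes authentication is enabled on the mongod), something like this, run from a mongo shell as an administrator, limits the developer to dumps plus one application database:

// run in mongosh on the instance, authenticated as an admin user
use admin
db.createUser({
  user: "devuser",
  pwd: "change-me",                              // replace with a real password
  roles: [
    { role: "backup", db: "admin" },             // sufficient to run mongodump
    { role: "readWrite", db: "controlpanel" }    // the application's own database
  ]
})

Combined with a plain OS login (no sudo) for uploading code, the developer can do the required work without being able to administer or drop other databases, or touch the EC2 instance itself.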
Is it possible to use AWS KMS and a tool like credstash without the use of EC2 or equivalent or does it rely solely on IAM roles?
I've got a server elsewhere where I am testing some things out, and ultimately I will be looking at migrating an app to EC2 etc. to make use of scaling. But for now, whilst I'm setting up my deployment pipeline, I wondered whether it is still possible to make use of KMS on my non-AWS-provisioned server?
The only possible way I can think of is installing the AWS CLI tools on the server in question. Does this sound like the right approach?
What @Viccari said is correct (in the comments). In terms of what you want to do (store passwords), the AWS Parameter Store would be a good fit for you. See https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html for more information. The guide explicitly calls out your use case:
Parameter Store offers the following benefits and features.
Use a secure, scalable, hosted secrets management service (No servers to manage).
In the end, if you end up using Parameter Store or KMS, you will need some sort of credentials stored somewhere to grab an AWS STS token to use to call the underlying AWS services. If working outside of AWS EC2, you will need the AWS Access Key and AWS Secret Key from an IAM user. If you are in EC2, the IAM instance role will magically provide you the credentials and use that role to call those AWS services. The AWS SDK does this for you behind the scenes.
But, as you state, you don't want to run this in EC2 (to save money, or for other reasons). The quickest way to store these credentials is to keep them in an untracked file (added to your .gitignore) that you can source as environment variables, which your program will then read. This allows you to do local testing and to run it in EC2 with zero code changes; see the sketch below. See https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html for which variables to set. Note that this doc talks about the CLI; the SDKs follow the same behavior.
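A minimal sketch of that setup, using the Parameter Store as the password store (the key values, region, and parameter name are placeholders, and the IAM user behind the keys should be scoped down to just the ssm and kms permissions it needs):

# e.g. in an untracked creds.env that you source before running the app
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_DEFAULT_REGION=us-east-1
# store a secret as a SecureString (encrypted with KMS behind the scenes)
aws ssm put-parameter --name /myapp/db-password --type SecureString --value 's3cr3t'
# read it back, decrypted
aws ssm get-parameter --name /myapp/db-password --with-decryption \
  --query Parameter.Value --output text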
In the time since this question was answered, AWS Tools for PowerShell has been released, and I basically have the same problem: I have an RDS snapshot in one AWS account that I would like to transfer to another.
So far I've been able to select the snapshot that I want with the Get-RDSDBSnapshot cmdlet, and I'd like to take that Amazon.RDS.Model.DBSnapshot object and use it in the other account.
I've been looking around and I think the Restore-RDSDBInstanceFromDBSnapshot cmdlet (maps to rds-restore-db-instance-from-db-snapshot) might be what I'm looking for, but I'm not confident that I understand its behavior -- can this cmdlet be used to take my snapshot from my first account, and restore it to an instance in the second account?
I'm specifically concerned if there are any account-specific details in a Snapshot object or the handling of the cmdlet that would prevent that data from moving across accounts. I would be okay with a more general solution than powershell, if one exists.
Update 2015/10/29:
AWS has added native support for this functionality since my original posting (link to announcement). This is supported for unencrypted MySQL, Oracle, SQL Server, and PostgreSQL.
You are given the option to share your RDS snapshot publicly, or privately (by managing specific AWS Account IDs with permission to view your snapshot). By default, snapshots can be privately shared with up to 20 accounts.
This can be managed from the RDS console by clicking 'Snapshots (left navigation bar) > Share Snapshot (top toolbar)', which opens the snapshot-sharing UI.
This is also available in the RDS API and CLI.
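For reference, a sketch of the equivalent AWS CLI calls; the snapshot name, account ID, and region are placeholders, with the first command run in the source account and the second in the target account:

aws rds modify-db-snapshot-attribute \
  --db-snapshot-identifier my-db-snapshot \
  --attribute-name restore \
  --values-to-add 123456789012
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier my-restored-db \
  --db-snapshot-identifier arn:aws:rds:us-east-1:111111111111:snapshot:my-db-snapshot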
Original Answer:
I also posted this to the AWS Developer Forums and got a response from PhilP@AWS. It seems we can't do this at all, via PowerShell or any other means. He did have a couple of alternative suggestions, though:
It's not possible to directly share an RDS Snapshot from one account to another. However, I can make a couple of suggestions here (depending on your current configuration):
If your RDS Instance is publicly accessible:
Launch a new RDS DB onto your second account
Install the appropriate DB management tools onto a PC, and give this PC network access to both RDS instances (security groups and DB user access for read and write)
Use the database management tools to copy the data from one DB to the other DB
Copy data through an EC2 instance as an intermediary:
Launch an EC2 instance configured with appropriate DB server software
Copy the RDS DB Data from your RDS instance up to your EC2 instance
Then launch your new RDS instance into the second account
Configure appropriate access (security groups and DB user access for read and write)
Copy the database data from your EC2 instance to your newly created RDS instance
My RDS instance isn't publicly accessible, and of his suggestions the EC2 solution would be preferable. Alternatively, we could go back to using a mysqldump, per the Server Fault solution.
Edit: I wanted to update that I've successfully been able to implement the EC2 intermediary suggestion. This can be automated in several ways, but the solution I chose involved passing a bash script to the (Linux AMI) EC2 instance as user-data, with the details of the data transfer handled in the script; a sketch is included below.
This solution ended up being fairly cost-effective, with the caveat that you want the RDS instance and the EC2 instance to be in the same availability zone. This is in large part because data transfer between RDS and EC2 in the same availability zone is free when using a private IP address.
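For illustration, a minimal sketch of the kind of user-data script involved, assuming a MySQL engine (per the mysqldump approach above); endpoints, database name, and credentials are all placeholders:

#!/bin/bash
SRC=source-db.abc123.us-east-1.rds.amazonaws.com
DST=target-db.def456.us-east-1.rds.amazonaws.com
# stream the dump straight from the source RDS instance into the target one,
# so nothing needs to be staged on the EC2 intermediary's disk
mysqldump -h "$SRC" -u src_user -p'src-password' --single-transaction mydb \
  | mysql -h "$DST" -u dst_user -p'dst-password' mydb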
Amazon finally made it possible to accomplish this. You can share the snapshot with another account using the Edit-RDSDBSnapshotAttribute cmdlet (example here), and then restore it in an account the snapshot was shared with using the Restore-RDSDBInstanceFromDBSnapshot cmdlet.
You can even share encrypted snapshots now. Here's a good walkthrough on how to do that.