Can RDS snapshots be transferred across AWS accounts? - powershell

In the time since this question was answered, AWS Tools for PowerShell has been released, and I have essentially the same problem: I have an RDS snapshot on one AWS account that I would like to transfer to another.
So far I've been able to select the snapshot that I want with the Get-RDSDBSnapshot cmdlet, and I'd like to take that Amazon.RDS.Model.DBSnapshot object and use it in the other account.
I've been looking around and I think the Restore-RDSDBInstanceFromDBSnapshot cmdlet (maps to rds-restore-db-instance-from-db-snapshot) might be what I'm looking for, but I'm not confident that I understand its behavior -- can this cmdlet be used to take my snapshot from my first account and restore it to an instance in the second account?
I'm specifically concerned about whether there are any account-specific details in a Snapshot object, or in the handling of the cmdlet, that would prevent that data from moving across accounts. I would be okay with a more general solution than PowerShell, if one exists.

Update 2015/10/29:
AWS has added native support for this functionality since my original posting (link to announcement). Sharing is supported for unencrypted snapshots of MySQL, Oracle, SQL Server, and PostgreSQL databases.
You are given the option to share your RDS snapshot publicly or privately (by managing the specific AWS account IDs with permission to view your snapshot). By default, snapshots can be privately shared with up to 20 accounts.
This can be managed from the RDS console via Snapshots (left navigation bar) > Share Snapshot (top toolbar), which opens the sharing UI.
This is also available in the RDS API and CLI.
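With the AWS CLI, for example, the private share list is managed through the snapshot's "restore" attribute (the identifiers are placeholders):

    # Share with a specific account; use --values-to-add all to make the snapshot public
    aws rds modify-db-snapshot-attribute --db-snapshot-identifier my-snapshot --attribute-name restore --values-to-add 210987654321
    # Verify which accounts the snapshot is shared with
    aws rds describe-db-snapshot-attributes --db-snapshot-identifier my-snapshot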
Original Answer:
I also posted this to the AWS Developer Forums, and got a response from PhilP@AWS. It seems we can't do this at all, via PowerShell or any other means. He did have a couple of alternative suggestions, though:
It's not possible to directly share an RDS Snapshot from one account to another. However, I can make a couple of suggestions here (depending on your current configuration):
If your RDS Instance is publicly accessible:
Launch a new RDS DB in your second account
Install the appropriate DB management tools on a PC, and give this PC network access to both RDS instances (security groups and DB user access for read and write)
Use the database management tools to copy the data from one DB to the other
Copy data through an EC2 instance as an intermediary:
Launch an EC2 instance configured with appropriate DB server software
Copy the RDS DB data from your RDS instance up to your EC2 instance
Then launch your new RDS instance in the second account
Configure appropriate access (security groups and DB user access for read and write)
Copy the database data from your EC2 instance to your newly created RDS instance
My RDS instance isn't publicly accessible, and of his suggestions the EC2 solution would be preferable. We could fall back to using a mysqldump, per the Server Fault solution.
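For a MySQL engine, the copy itself amounts to something like the following, run from a machine that can reach both endpoints (the hostnames, user, and database name are placeholders):

    # Dump from the source instance, then load into a database created on the target
    mysqldump -h source-db.abc123.us-east-1.rds.amazonaws.com -u admin -p mydb > mydb.sql
    mysql -h target-db.def456.us-east-1.rds.amazonaws.com -u admin -p mydb < mydb.sql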
Edit: I wanted to update that I've successfully been able to implement the EC2 intermediary suggestion. This can be automated in several ways, but the solution I chose involved passing a bash script to the (Linux AMI) EC2 instance as user-data, with the details of the data transfer handled in the script.
This solution ended up being fairly cost-effective, with the caveat that you want the RDS instance and the EC2 instance to be in the same Availability Zone. This is in large part because data transfer between RDS and EC2 in the same Availability Zone is free when using a private IP address.
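A minimal sketch of that approach with AWS Tools for PowerShell (the AMI ID, instance type, key name, endpoints, credentials, and database name are all placeholders, not my exact script):

    # Build the transfer script that will run once at instance launch
    $lines = @(
        '#!/bin/bash',
        '# placeholders: replace the endpoints, credentials, and database name',
        'mysqldump -h SOURCE_ENDPOINT -u admin -pSECRET mydb > /tmp/mydb.sql',
        'mysql -h TARGET_ENDPOINT -u admin -pSECRET mydb < /tmp/mydb.sql'
    )
    $transferScript = $lines -join "`n"
    # New-EC2Instance expects the user-data string to be base64-encoded
    $userData = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($transferScript))
    New-EC2Instance -ImageId 'ami-xxxxxxxx' -InstanceType 't2.micro' -KeyName 'my-key' -UserData $userData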

Amazon finally made it possible to accomplish this. You can share the snapshot with another account using the Edit-RDSDBSnapshotAttribute cmdlet (example here), then you can restore it to an account the snapshot was shared with using the Restore-RDSDBInstanceFromDBSnapshot cmdlet.
You can even share encrypted snapshots now. Here's a good walkthrough on how to do that.
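A minimal sketch of both steps with those cmdlets (the account IDs, identifiers, and region are placeholders):

    # In the source account: add the target account ID to the snapshot's 'restore' attribute
    Edit-RDSDBSnapshotAttribute -DBSnapshotIdentifier 'my-snapshot' -AttributeName 'restore' -ValuesToAdd '210987654321'
    # In the target account: restore a new instance from the shared snapshot, referenced by its ARN
    Restore-RDSDBInstanceFromDBSnapshot -DBInstanceIdentifier 'restored-db' -DBSnapshotIdentifier 'arn:aws:rds:us-east-1:123456789012:snapshot:my-snapshot'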

Related

AWS Go SDK assume role given to EC2 instance

I am running a small Go application inside an EC2 instance. It accesses Amazon SQS as a consumer. I have configured keys in the ~/.aws/credentials file. The EC2 instance has been assigned an IAM role.
Can my Go application use the IAM role assigned to the EC2 instance?
If yes, how can that be done through configuration, without a code change?
If a role is configured, do I still need to provide keys somewhere?
If you use github.com/aws/aws-sdk-go-v2/config and its config.LoadDefaultConfig() method to retrieve AWS credentials, then yes: your application will retrieve temporary credentials for the IAM role you assigned.
aws-sdk-go-v2 will retrieve credentials from the instance metadata. The retrieval process is described in the official AWS docs here; the "How do roles for EC2 instances work" section describes the process as follows:
When the application runs, it obtains temporary security credentials from Amazon EC2 instance metadata, as described in Retrieving Security Credentials from Instance Metadata. These are temporary security credentials that represent the role and are valid for a limited period of time.
With some AWS SDKs, the developer can use a provider that manages the temporary security credentials transparently. (The documentation for individual AWS SDKs describes the features supported by that SDK for managing credentials.)
Alternatively, the application can get the temporary credentials directly from the instance metadata of the EC2 instance. Credentials and related values are available from the iam/security-credentials/role-name category (in this case, iam/security-credentials/Get-pics) of the metadata. If the application gets the credentials from the instance metadata, it can cache the credentials.
You can also refer here for aws-sdk-go-v2's credential retrieval order.
You don't have to provide keys; aws-sdk-go-v2 will retrieve them from the EC2 instance metadata.
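As a quick sanity check from the instance itself, you can query the metadata endpoint directly (the role name is a placeholder, and this is the older IMDSv1 style; curl works just as well):

    # List the role attached to this instance, then fetch its temporary credentials
    Invoke-RestMethod 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'
    Invoke-RestMethod 'http://169.254.169.254/latest/meta-data/iam/security-credentials/my-app-role'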

Change password of AWS RDS during cloning

I have my production database with a specific master password and a specific user. My DB is AWS RDS with Postgres.
I'm automatically cloning it every day to a dev environment, where multiple developers need access to it, but they should not have access to the production environment.
How can I give it a new, non-production password during cloning? I could obviously write some automated tool, but I'd prefer something simpler, ideally using the AWS API.
You can use IAM to provide access to any RDS instance without having to share any passwords.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
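If you do still want the clone to have its own non-production master password, that is a single API call once the clone is up; a sketch with AWS Tools for PowerShell (the identifier and password are placeholders):

    # Reset the master password on the dev clone only; production is untouched
    Edit-RDSDBInstance -DBInstanceIdentifier 'dev-clone' -MasterUserPassword 'new-dev-password' -ApplyImmediately $true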

Using AWS KMS and/or credstash with non AWS server

Is it possible to use AWS KMS and a tool like credstash without the use of EC2 or equivalent or does it rely solely on IAM roles?
I've got a server elsewhere where I am testing some things out, and ultimately I will be looking at migrating an app to EC2 etc. to make use of scaling. But for now, whilst I'm setting up my deployment pipeline etc., I wondered if it was still possible to make use of KMS on my non-AWS server?
The only possible way I can think of is by installing the AWS CLI tools on the server in question. Does this sounds like the right approach?
What @Viccari said is correct (in the comments). In terms of what you want to do (store passwords), the AWS Parameter Store would be a good fit for you. See https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html for more information. The guide explicitly calls out your use case:
Parameter Store offers the following benefits and features.
Use a secure, scalable, hosted secrets management service (No servers to manage).
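As an illustration, writing and reading a secret is only a couple of calls; a sketch with AWS Tools for PowerShell (the parameter name and value are hypothetical):

    # Store the secret encrypted as a SecureString, then read it back decrypted
    Write-SSMParameter -Name '/myapp/db-password' -Type SecureString -Value 'p@ssw0rd'
    (Get-SSMParameter -Name '/myapp/db-password' -WithDecryption $true).Value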
In the end, if you end up using Parameter Store or KMS, you will need some sort of credentials stored somewhere to grab an AWS STS token to use to call the underlying AWS services. If working outside of AWS EC2, you will need the AWS Access Key and AWS Secret Key from an IAM user. If you are in EC2, the IAM instance role will magically provide you the credentials and use that role to call those AWS services. The AWS SDK does this for you behind the scenes.
But, as you state, you don't want to run this in EC2 (to save money, or for other reasons). The quickest way to store these credentials is to keep them in an un-tracked file (added to your .gitignore) that you can source as environment variables, which your program will then read. This allows you to do local testing, and easily run it in EC2 with zero code changes. See https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html for the variables to set. Note that this doc talks about the CLI; the SDKs follow the same behavior.
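For example, in a PowerShell session (the values below are AWS's documented example keys, i.e. placeholders; any shell's equivalent works):

    # Export IAM user credentials as environment variables; both the CLI and SDKs read these
    $env:AWS_ACCESS_KEY_ID     = 'AKIAIOSFODNN7EXAMPLE'
    $env:AWS_SECRET_ACCESS_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
    $env:AWS_DEFAULT_REGION    = 'us-east-1'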

How to access Mysql installed on my google cloud instance via Mysql workbench

I have MySQL installed on a Google Cloud instance and it's running fine.
Earlier I had a separate Google Cloud SQL instance, but due to performance issues I installed MySQL on my Google Cloud instance. I am currently running the database from that instance.
The issue is that when it was a separate Cloud SQL instance I could access the database from MySQL Workbench.
But now that I have it installed on my Google Cloud instance, I cannot access it from Workbench.
Is there a way I can access it from my Workbench?
Please advise and help.
I assume that you have created a user in the MySQL instance that is allowed to connect from your current public IP. Once that is done, go to MySQL Workbench and click the little plus icon to create a new connection. You can give the connection any name. For the hostname you must provide the address of your MySQL instance. Then give a username, and to enter the password click on Store in Vault. Once complete, click Test Connection. If it gives a success message, your connection is done. If not, recheck your inputs, most importantly your public IP, because this can change even after an hour or two. There is no need to fill in the Default Schema field. I hope this helps with your work.

Accessing Cloud SQL from a Compute Engine VM instance: do I have to copy the access token from my personal computer to the VM instance?

I'm trying to use Cloud SQL from my VM instance.
When creating the VM Instance I activated Cloud SQL Option for it.
The Cloud SQL instance authorizes my Compute Engine Project to access it.
At first I was expecting to have some tools like google_sql.sh installed on my VM since I had activated Cloud SQL on it, but no :-/
The Cloud SQL docs say that I should copy my local access token to my VM instance.
My local machine is Mac OS X, so the tokens are stored in:
~user/Library/Preferences/com.google.cloud.plist
but on my Linux VM it's stored in:
~user/.java/.userPrefs/com/google/cloud/sqlservice/oauth2/prefs.xml.
Do I have to create a prefs.xml and copy it on my VM? (but I guess the XML schema is not the same between com.google.cloud.plist and prefs.xml?)
Does someone have a prefs.xml example I could use as a template (unless the schema is exactly the same as com.google.cloud.plist, which I doubt)?
Thanks all for your help.
The simplest thing is actually to include service account scopes when you create your instance. This page in the compute engine docs describes how to do it. This maintains an access token in the compute engine instance's metadata server which the Cloud SQL tools can then access when they need to authenticate. A similar technique works for cloud storage and other products.
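With the current gcloud CLI this looks roughly like the following (the instance name and zone are placeholders; the scope shown is the Cloud SQL admin scope):

    # Create the VM with the Cloud SQL scope so its metadata server can vend access tokens
    gcloud compute instances create my-vm --zone us-central1-a --scopes https://www.googleapis.com/auth/sqlservice.admin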