There is an internal server error while creating a database instance in IBM Cloud (screenshot: https://i.stack.imgur.com/I5tty.png).
Raise a support ticket with IBM.
Operational issues on commercial cloud services are not for Stack Overflow.
Additionally, if you are on the "lite" plan (as your screenshot suggests), you cannot create a dedicated database instance. Instead, your account can only connect to a shared, multi-tenant database, where you get a schema named after your account.
This question is about infosec and data privacy, specifically HIPAA compliance on GCP.
Are there any advantages to self-managing a Postgres server (built on GCP Compute Engine instances using, let's say, Terraform) versus using the managed offering, i.e. Cloud SQL?
Thanks in advance.
Google Cloud SQL Postgres is a fully managed option for deploying PostgreSQL to Google Cloud. The fully managed option is convenient, but is mainly suitable for cloud-native applications, or applications rebuilt for the cloud.
It has built-in encryption for database tables, temporary files, backups, and any data transferred over Google's internal networks, as well as secure connections via SSL/TLS or the Cloud SQL Proxy.
Update 1
As you are referring to HIPAA, you can check this guide for HIPAA compliance on Google Cloud. Cloud SQL encrypts data at rest using the 256-bit Advanced Encryption Standard (AES-256), or better, with symmetric keys: the same key is used to encrypt the data when it is stored and to decrypt it when it is used. You can also use your own keys with customer-managed encryption keys (CMEK) for Cloud SQL.
You also mentioned infosec. I have not completely understood what you mean by that term; I assume you are referring to protecting information from vulnerabilities. You can use Cloud Armor, a network security service that provides defenses against DDoS and application attacks such as cross-site scripting (XSS) and SQL injection (SQLi).
Self-hosted Postgres gives you full control over your PostgreSQL database on GCP, letting you fine-tune server parameters, modify the database configuration, and tune performance, just as in a local deployment.
Update 2
As per this thread, it seems that PostgreSQL is not HIPAA compliant.
For encryption at rest on PostgreSQL, you can use PostgreSQL TDE and pgcrypto, as discussed in this similar thread; a minimal sketch follows below.
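For illustration only, here is a minimal pgcrypto sketch run from Python via psycopg2. Note that pgcrypto gives column-level encryption rather than full TDE, and the connection details, table name, and passphrase below are placeholders (in practice the key should come from a proper secret store):

```python
# Minimal pgcrypto sketch: column-level encryption with a symmetric key.
# Connection details, table name, and passphrase are placeholders.
import psycopg2

conn = psycopg2.connect(host="127.0.0.1", dbname="mydb", user="myuser", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pgcrypto;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS patient_records (
            id serial PRIMARY KEY,
            payload bytea  -- ciphertext, not plaintext
        );
    """)
    # Encrypt on write ...
    cur.execute(
        "INSERT INTO patient_records (payload) VALUES (pgp_sym_encrypt(%s, %s));",
        ("sensitive text", "my-passphrase"),
    )
    # ... and decrypt on read.
    cur.execute(
        "SELECT pgp_sym_decrypt(payload, %s) FROM patient_records;",
        ("my-passphrase",),
    )
    print(cur.fetchall())
```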
For self-hosted Postgres, you can also use Shielded VMs, which help protect enterprise workloads from threats such as remote attacks, privilege escalation, and malicious insiders.
I am not sure about your application requirements, but based on my understanding of both Cloud SQL and self-hosted Postgres, I would recommend Cloud SQL as the better option, since it is fully managed by Google and supports HIPAA compliance and encryption.
For more information about the pros and cons of Google Cloud SQL Postgres versus self-hosted Postgres, check this document.
I would like to build a Google Cloud PostgreSQL database using the instructions here
I was able to successfully create the Postgres databases with appropriate tables and views locally.
What do I need to do in order to get the data on Google Cloud PostgreSQL? My goal is to have remote access to this data.
You have two options. The first is to use the Cloud SQL Proxy, as described here. As the shared links say, the Cloud SQL Proxy provides secure access to your instances without the need for authorized networks or for configuring SSL.
The second option is to configure access to your instance under Authorized networks, with or without SSL. The complete steps are listed here, and a rough connection sketch follows below.
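For illustration, a minimal Python sketch of that second option, assuming your client's public IP has been added under Authorized networks and you have downloaded the instance's server CA and client certificate/key; the IP address, credentials, and file names are placeholders:

```python
# Sketch: direct connection over SSL to a Cloud SQL instance whose
# Authorized networks include this client's public IP.
# The instance IP, database, user, password, and certificate paths are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="203.0.113.10",          # Cloud SQL instance public IP
    port=5432,
    dbname="postgres",
    user="postgres",
    password="your-password",
    sslmode="verify-ca",          # require SSL and verify the server CA
    sslrootcert="server-ca.pem",
    sslcert="client-cert.pem",
    sslkey="client-key.pem",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
```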
You can connect to Cloud SQL from a local test environment using the Cloud SQL Proxy. See quickstart-proxy-test.
The workflow is:
Your application (running locally) => Cloud SQL Proxy (running locally) => remote GCP Cloud SQL service
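As a small sketch of that workflow: start the proxy locally, then connect to it as if it were a local PostgreSQL server. The proxy invocation in the comment assumes the v1 proxy binary, and the project, region, instance, and credentials are placeholders:

```python
# Sketch: connect through a locally running Cloud SQL Proxy.
# First start the proxy in another terminal, for example:
#   ./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432
# The project, region, instance, and credentials below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1",   # the proxy listens locally
    port=5432,
    dbname="postgres",
    user="postgres",
    password="your-password",
)
with conn.cursor() as cur:
    cur.execute("SELECT now();")
    print(cur.fetchone())
```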
Watson OpenScale provides a free internal database in the tutorial. However, if I want to monitor my own models, do I need to provision a paid database in IBM Cloud to do the payload logging?
If not, does the free internal database also support the payload data analytics?
You can freely use the internal database for monitoring of any model type, with the limitations listed at: https://cloud.ibm.com/docs/services/ai-openscale?topic=ai-openscale-connect-db#cdb-lite.
In summary:
The free database is hosted, and is not directly accessible to you.
The database capacity is limited to 1GB.
IBM® Watson OpenScale will have full access to your database, and thus will have full access to your data.
The database is not GDPR-compliant. If your model processes personally-identifiable information (PII), you cannot use the free database.
Is there a way to get the server info for my VSO account and access it using SQL Server?
I've tried logging in using the URL
{account}.visualstudio.com
But I got a "server not found" error.
No, the back-end databases are SQL Azure instances, different from the TFS on-premise databases. I cannot see MS ever giving you access to the database - maybe the data, but not the database.
You can only use the API (old and new REST) and Power BI tools to perform queries.
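As a rough illustration of the REST route (the account name and personal access token are placeholders, and the API version may differ for your account):

```python
# Sketch: query projects through the VSO/VSTS REST API instead of the database.
# The account name and personal access token (PAT) are placeholders.
import requests

account = "myaccount"
pat = "your-personal-access-token"

resp = requests.get(
    f"https://{account}.visualstudio.com/DefaultCollection/_apis/projects",
    params={"api-version": "1.0"},
    auth=("", pat),   # basic auth with an empty username and the PAT
)
resp.raise_for_status()
for project in resp.json()["value"]:
    print(project["name"])
```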
If you have a specific problem you are trying to solve, post it as a new question because it may be possible without database access.
In the time since this question was answered, AWS Tools for PowerShell has been released, and I basically have the same problem: I have an RDS snapshot in one AWS account that I would like to transfer to another.
So far I've been able to select the snapshot that I want with the Get-RDSDBSnapshot cmdlet, and I'd like to take that Amazon.RDS.Model.DBSnapshot object and use it in the other account.
I've been looking around and I think the Restore-RDSDBInstanceFromDBSnapshot cmdlet (maps to rds-restore-db-instance-from-db-snapshot) might be what I'm looking for, but I'm not confident that I understand its behavior -- can this cmdlet be used to take my snapshot from my first account, and restore it to an instance in the second account?
I'm specifically concerned if there are any account-specific details in a Snapshot object or the handling of the cmdlet that would prevent that data from moving across accounts. I would be okay with a more general solution than powershell, if one exists.
Update 2015/10/29:
AWS has added native support for this functionality since my original posting (link to announcement). This is supported for unencrypted MySQL, Oracle, SQL Server, and PostgreSQL.
You are given the option to share your RDS snapshot publicly, or privately (by managing specific AWS Account IDs with permission to view your snapshot). By default, snapshots can be privately shared with up to 20 accounts.
This can be managed from the RDS console by clicking Snapshots (left navigation bar) > Share Snapshot (top toolbar), which opens the snapshot-sharing UI.
This is also available in the RDS API and CLI.
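For example, a minimal boto3 sketch of the private-sharing step via the API; the region, snapshot identifier, and target account ID are placeholders:

```python
# Sketch: privately share an RDS snapshot with another AWS account via the API.
# The region, snapshot identifier, and target account ID are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="my-snapshot",
    AttributeName="restore",
    ValuesToAdd=["123456789012"],   # AWS account ID allowed to restore the snapshot
)
```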
Original Answer:
I also posted this to the AWS Developer Forums and got a response from PhilP@AWS. It seems we can't do this at all, via PowerShell or any other means. He did have a couple of alternative suggestions, though:
It's not possible to directly share an RDS Snapshot from one account to another. However, I can make a couple of suggestions here (depending on your current configuration):
If your RDS Instance is publicly accessible:
Launch a new RDS DB onto your second account
Install the appropriate DB management tools onto a PC, and give this PC network access to both RDS instances (security groups and DB user access for read and write)
Use the database management tools to copy the data from one DB to the other
Copy data through an EC2 instance as an intermediary:
Launch an EC2 instance configured with appropriate DB server software
Copy the RDS DB Data from your RDS instance up to your EC2 instance
Then launch your new RDS instance into the second account
Configure appropriate access (security groups and DB user access for read and write)
Copy the database data from your EC2 instance to your newly created RDS instance
My RDS instance isn't publicly accessible, and of his suggestions the EC2 solution would be preferable. Alternatively, we could fall back to using a mysqldump, per the Server Fault solution.
Edit: I wanted to update that I've successfully implemented the EC2 intermediary suggestion. This can be automated several ways, but the solution I chose involved passing a bash script to the (Linux AMI) EC2 instance as user data; the details of the data transfer were handled in the script.
This solution ended up being fairly cost-effective, with the caveat that you want the RDS instance and the EC2 instance to be in the same availability zone. This is in large part because data transfer between RDS-EC2 in the same availability zone is free with a private IP address.
Amazon finally made it possible to accomplish this. You can share the snapshot with another account using the Edit-RDSDBSnapshotAttribute cmdlet (example here), then you can restore it to an account the snapshot was shared with using the Restore-RDSDBInstanceFromDBSnapshot cmdlet.
You can even share encrypted snapshots now. Here's a good walkthrough on how to do that.
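Since the original question allowed for a more general solution than PowerShell, here is a rough boto3 equivalent of the restore step, run with credentials for the account the snapshot was shared with. Shared snapshots are referenced by their ARN, and the region, identifiers, and instance class below are placeholders:

```python
# Sketch: from the *target* account, restore a new instance from a snapshot
# that was shared with it. Shared snapshots are referenced by ARN.
# The region, ARN, instance identifier, and instance class are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="restored-from-shared-snapshot",
    DBSnapshotIdentifier="arn:aws:rds:us-east-1:123456789012:snapshot:my-snapshot",
    DBInstanceClass="db.t3.micro",
)
```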