Cloud SQL Server on GCP is recreated when changing the root password in Terraform

I'm using Terraform to create a Cloud SQL Server instance and specify its root password. When I later use Terraform to change the root password for the same instance, it first deletes the instance and then creates a new one with the new root password, which is not how it should work.
However, when I use the console or the gcloud CLI to perform the same action, it simply modifies the existing instance's password.
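For comparison, the in-place change outside Terraform can be done with gcloud. A minimal sketch, assuming the instance is named sqlserver-instance and the built-in admin user is sqlserver (both are placeholders to adapt):
gcloud sql users set-password sqlserver \
--instance=sqlserver-instance \
--password='NewStr0ngPassword'
This changes only that user's password; the instance itself is left untouched.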

Related

Spring Data JPA app connection to Google Cloud Run Postgres

Google has an example for connecting to Cloud SQL MySQL from a Spring Data JPA/Boot app (commit 9ecdc1111e3da388a750ace41a125287d9620534 is used). The example uses Spring Data and works fine with MySQL, but it does not work when the profile is changed to postgres (after starting the corresponding Postgres database in the same account and with the same steps as in #2):
spring.profiles.active=postgres
and replacing
<artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
with
<artifactId>spring-cloud-gcp-starter-sql-postgresql</artifactId>
and
replacing src/main/resources/application-mysql.properties
with
src/main/resources/application-postgres.properties
but the application still fails with:
Error creating bean with name 'entityManagerFactory' defined in class
path resource
[org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaConfiguration.class]:
Invocation of init method failed; nested exception is
org.hibernate.service.spi.ServiceException: Unable to create requested
service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
I could not find any sample.
application-postgres.properties was appended to contain:
spring.profiles.active=postgres
spring.cloud.gcp.sql.instance-connection-name= xyzprj:us-central1:postgres-instance
spring.datasource.username=xyzuser
spring.datasource.password=password
application-postgres.properties was then replaced as follows:
spring.datasource.username=xyzuser
spring.datasource.password=passord
spring.sql.init.mode=always
spring.cloud.gcp.sql.database-name=petclinic
spring.cloud.gcp.sql.instance-connection-name=xyzprj:us-central1:postgres-instance
Later, both of these properties files were also changed so that
spring.datasource.username=root
and
spring.datasource.password=root
but the same issue persists.
The sample is run in Cloud Shell within Google Cloud:
gcloud auth application-default login
You are running on a Google Compute Engine virtual machine. The
service credentials associated with this virtual machine will
automatically be used by Application Default Credentials, so it is not
necessary to use this command.
If you decide to proceed anyway, your user credentials may be visible
to others with access to this virtual machine. Are you sure you want
to authenticate with your personal account?
Do you want to continue (Y/n)? n
ERROR: (gcloud.auth.application-default.login) Aborted by user.
I tried to reproduce the issue on my side, but I was able to deploy the application successfully.
Here are the steps I followed:
Step 1: Created the PostgreSQL instance using the command below:
gcloud sql instances create postgres-instance \
--database-version=POSTGRES_13 \
--cpu=1 \
--memory=4GB \
--region=us-central1 \
--root-password=root
Step 2: Created the database using:
gcloud sql databases create petclinic --instance postgres-instance
Step 3: Connected to the PostgreSQL instance to verify that the connection can be established:
gcloud sql connect postgres-instance
Step 4: Replaced the following, as you did:
In application.properties
spring.profiles.active=postgres
and replacing
<artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
with
<artifactId>spring-cloud-gcp-starter-sql-postgresql</artifactId>
and
replacing src/main/resources/application-mysql.properties
with
src/main/resources/application-postgres.properties
Step 5: In addition to the above changes:
In application.properties, replaced:
spring.cloud.gcp.sql.instance-connection-name= POSTGRESQL_CONNECTION_NAME
In src/main/resources/application-postgres.properties added
spring.datasource.username=USERNAME
spring.datasource.password=PASSWORD
In the pom.xml file, added the following dependency:
<dependency>
<groupId>com.google.cloud.sql</groupId>
<artifactId>postgres-socket-factory</artifactId>
<version>1.1.0</version>
</dependency>
In the build.gradle file, add:
dependencies {
compile 'com.google.cloud.sql:postgres-socket-factory:1.1.0'
}
Note: run gcloud auth application-default login to obtain default credentials for communicating with the Cloud SQL API.
For more information, check this document.
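If you are unsure of the value to use for POSTGRESQL_CONNECTION_NAME in Step 5, it can be looked up with gcloud; a small sketch, assuming the instance created in Step 1:
gcloud sql instances describe postgres-instance \
--format='value(connectionName)'
The output has the form PROJECT:REGION:INSTANCE, for example xyzprj:us-central1:postgres-instance.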

Access remote database federation DB2

I have two systems, system A and system B, and both are Db2 servers. I want to be able to access system B's database from system A. Both have a database called TESTDB. I am trying to run the following commands to create a server.
CREATE WRAPPER "drdawrapper"
LIBRARY 'libdb2drda.so'
OPTIONS (DB2_FENCED 'Y'
);
db2 "CREATE SERVER "PRD_SERVER_SSL_FLEX" TYPE DB2/UDB VERSION '11' WRAPPER "drdawrapper" AUTHORIZATION "xyz" PASSWORD "xyz" OPTIONS (DB2_CONCAT_NULL_NULL 'Y',DB2_VARCHAR_BLANKPADDED_COMPARISON 'Y',DBNAME 'TESTDB',HOST '169.62.253.230',NO_EMPTY_STRING 'N',PORT '50001',SECURITY 'SSL',STRING_UNITS 'S');"
But I keep getting:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL1101N Remote database "TESTDB" on node "<unknown>" could not be accessed
with the specified authorization id and password. SQLSTATE=08004
Node directory:
db2 list node directory
Node Directory
Number of entries in the directory = 1
Node 1 entry:
Node name = TESTNODE
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = 123.21.23.12
Service name = 50001
The credentials are correct. I am not sure what node it is looking for. Any pointers?
Your question is more about configuration than programming.
As you appear to be encrypting the federated connection, it can be wise to first verify that the encrypted connection works at the command line, separately from federation. This irons out a lot of the detail and is easier to troubleshoot. After you get that working, you can then move on to encrypting the federated connection.
Please follow the detailed instructions here (choose the correct Db2-version):
You have to know in advance which kind of SSL/TLS trust verification you want (i.e. either a single cert, where the client trusts the server - simplest and easiest, or multiple certs, where both sides trust each other - more setup, arguably more secure), because this determines the configuration.
Ensure both of your Db2 instances and databases are properly configured for SSL.
Catalog the remote-node locally with security SSL (db2 catalog tcpip node ... remote ... server ...security ssl)
Catalog the remote database locally on the new node name (db2 catalog database ... at node ...), followed by db2 terminate.
Verify a command-line connect to the remote database using the federated credentials, using the configured db2dsdriver.cfg if using the SSLSERVERCERTIFICATE method, or using the keystore/stash configuration (db2 connect to remotedb user ... using ...). Use the same userid/password that you will use later in the create server command.
Once that command-line connect works, you can proceed with the encrypted federation link, via db2 create wrapper... and db2 create server....
There's no need to use quotes around the wrapper name; just let it fold. Otherwise the quotes are just redundant noise, although they are not a mistake.
Inside the script, in the create server command options, instead of AUTHORIZATION "xyz" PASSWORD "xyz" use AUTHORIZATION \"xyz\" PASSWORD \"xyz\" (i.e. escape the quotes).
For one-sided trust, use SSL_SERVERCERTIFICATE in the create server options clause and ensure the value is accurate (fully qualified path to the remote-db2instance-certificate-file), and that the file/directory permissions are valid.
For mutual trusts, use both SSL_KEYSTORE and SSL_KEYSTASH keywords with correct values, in the create server options clause (having previously ensured your keystores are properly populated, as verified by a command-line connect above).
You may also want to consider create user mapping depending on the requirements.
Finally, you can create your nicknames and test out the federated link by querying those nicknames; a condensed sketch of the full command sequence is shown below.
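A minimal sketch of that sequence, assuming one-sided (server-only) trust, the host and port from the question, an illustrative local node name REMNODE and database alias REMDB, and a placeholder certificate path (for Db2-family sources, DBNAME in the server options typically refers to the locally cataloged alias):
db2 catalog tcpip node REMNODE remote 169.62.253.230 server 50001 security ssl
db2 catalog database TESTDB as REMDB at node REMNODE
db2 terminate
db2 connect to REMDB user xyz using xyz
db2 "CREATE WRAPPER drdawrapper LIBRARY 'libdb2drda.so' OPTIONS (DB2_FENCED 'Y')"
db2 "CREATE SERVER PRD_SERVER_SSL_FLEX TYPE DB2/UDB VERSION '11' WRAPPER drdawrapper AUTHORIZATION \"xyz\" PASSWORD \"xyz\" OPTIONS (DBNAME 'REMDB', HOST '169.62.253.230', PORT '50001', SECURITY 'SSL', SSL_SERVERCERTIFICATE '/path/to/remote_server_cert.arm')"
db2 "CREATE USER MAPPING FOR USER SERVER PRD_SERVER_SSL_FLEX OPTIONS (REMOTE_AUTHID 'xyz', REMOTE_PASSWORD 'xyz')"
db2 "CREATE NICKNAME REMOTE_TAB FOR PRD_SERVER_SSL_FLEX.SOMESCHEMA.SOMETABLE"
db2 "SELECT COUNT(*) FROM REMOTE_TAB"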

How to set up MySQLi connection to Google Cloud SQL

I need to use MySQLi to send queries to a Google Cloud SQL database I have set up. I already have an instance created and a user, and I am able to access the database through the Cloud Shell. I can't seem to find the credentials to log into the database (host name, username, password, port and socket), and I'm not sure how to access them through the shell.
You can find the available methods to connect to your Cloud SQL instance here.
Connecting from an IP address without SSL is probably the easiest one:
In the Cloud Console, go to the Cloud SQL instances page and click on your instance's name.
In the Overview tab, take note of the Primary IP Address; you'll use it instead of a hostname.
In the Users tab, you can create a new user or reset the password of an existing one, including the root user.
In the Authorization tab, add the IP or IP range you are attempting the connection from, so Cloud SQL accepts connections from your client (more on this here).
Start your mysql client as follows (note the port is not necessary, as the default one is used):
mysql --host=[INSTANCE_IP_ADDR] --user=[USER_NAME] --password
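If you prefer the command line over the console for the first two steps, the primary IP can also be retrieved with gcloud; a small sketch, assuming an instance named my-instance (a placeholder):
gcloud sql instances describe my-instance \
--format='value(ipAddresses[0].ipAddress)'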

PostgreSQL in OpenShift won't execute the entrypoint and cannot start the database

We have a read-only PostgreSQL database that should run in an OpenShift cluster.
We are using RHEL as the underlying operating system.
Our Dockerfile installs the Postgres software, creates the database instance, loads the data into it, then shuts the database down and saves the image.
We are using only bash and SQL scripts, and we deploy the database using Flyway.
When the container starts, the entrypoint script simply starts the database instance using the "pg_ctl" command and then performs an endless loop to keep the container running.
The last command in the Dockerfile is USER 26, where 26 is the id of the postgres user. The entrypoint script can be started as the postgres user or by a sudo user.
Everything is working well in Docker.
In OpenShift the container is started by a different user belonging to the root group, but neither the root user nor user 26. OpenShift effectively ignores the USER 26 clause in the Dockerfile.
The user starting the container (we'll call it containeruser) has no rights to start the Postgres instance, so when running the entrypoint it gets permission denied on the PostgreSQL data folder.
I have tried different options, such as adding containeruser to the wheel group and modifying the sudoers file to allow it to use sudo and start the entrypoint as the postgres user, but with no success.
So I have my database image ready but cannot start it in OpenShift.
On the OpenShift configuration side we are not allowed to make changes like allowing sudo usage, or starting the container as the root or postgres user.
Any idea or help to this problem?
I am not an Openshift expert.
Thank you!
Best regards,
rimetnac
You have two choices.
The preferred choice is to fix your image so that it can run as any user. For this, do not use the existing postgres user. Create a new user, where that user has group root. Then ensure that all directories/files that PostgreSQL needs to write to are owned by that user, but also have group root and are writable by the group. When the container is then started up, it will run as an assigned user ID that is not in /etc/passwd, and so will fall back to using group root. Because the directories/files are writable by group root, everything will still work. For more information see:
https://docs.openshift.org/latest/creating_images/guidelines.html
Specifically, section 'Support Arbitrary User IDs'.
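A minimal sketch of that group-permission fix at image build time, assuming the data directory is /var/lib/pgsql/data (adjust to your layout):
# run during the image build, after the database files have been created
chgrp -R 0 /var/lib/pgsql/data
chmod -R g=u /var/lib/pgsql/data
With that, whatever arbitrary UID OpenShift assigns at runtime can read and write the data directory through its root group membership.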
The second option, if you have admin control of the cluster and your security team does not object to overriding the default security model, is to allow your image to run as the user ID it wants to.
First create a new service account:
oc create serviceaccount runasnonroot
Next, grant that service account the ability to run as a non-root user ID of its choosing:
oc adm policy add-scc-to-user nonroot -z runasnonroot --as system:admin
Then patch the deployment config to use that service account.
oc patch dc/mydatabase --patch '{"spec":{"template":{"spec":{"serviceAccountName": "runasnonroot"}}}}'
Note that this still requires you to use USER in the image with an integer user ID, not postgres. Otherwise it can't verify that the image will run as a non-root user, because if you use a user name instead of a user ID, that name could be maliciously mapped to root.
I spent days figuring this out and found one good solution.
OpenShift Origin runs an image as a user created by it, as explained in this OpenShift blog post. This prevents programs from being able to access needed files and directories. To successfully run a program on OpenShift Origin, the blog post provides two solutions; however, the first will not work for PostgreSQL and the second has two disadvantages (explained in the notes):
Grant group write access to the directories used by the main program.
This will not solve the problem because, although the PostgreSQL files will be accessible by any program, they must be owned by the owner of the PostgreSQL process.
Ensure that when operating system libraries are used to look up a system user, one is returned for the ID of the user OpenShift Origin runs the image as. The following are two methods for doing this:
Use a package called nss_wrapper, "which intercepts any calls which look up details of a user and returns a valid entry."
Make the UNIX password database file (/etc/passwd) have global write permissions in the image build so that the OpenShift user can be added to it in the S2I run script.
Each option has a disadvantage: the first requires installing an extra package, and the second makes user accounts insecure. A minimal sketch of the nss_wrapper approach is shown below.
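This is roughly what the nss_wrapper method looks like when run from the entrypoint before starting PostgreSQL; the library path, home directory and user name are assumptions that depend on the base image:
# fabricate a passwd entry for the arbitrary UID OpenShift assigned to this container
export NSS_WRAPPER_PASSWD=/tmp/passwd
export NSS_WRAPPER_GROUP=/etc/group
echo "postgres:x:$(id -u):0:PostgreSQL:/var/lib/pgsql:/bin/bash" > "$NSS_WRAPPER_PASSWD"
export LD_PRELOAD=/usr/lib64/libnss_wrapper.so
# from here on, user lookups resolve the current UID to "postgres"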
The best solution is to build the Docker image to run as the user OpenShift Origin will run the image as. I built this instructional image using that approach.
One additional problem to note is that, as the owner of the PostgreSQL process must be the owner of the files and directories accessed by PostgreSQL, PostgreSQL must be set up (i.e. initdb, roles, databases, etc.) during the image build. This is because file ownership can only be changed during the image build and the ownership of the files must be changed after PostgreSQL has been set up for the reason explained in #2 below.
Here are the complete steps, with notes, for setting up PostgreSQL in the image build (a condensed command sketch follows the steps):
Manually create the PostgreSQL data directory and change its ownership to a non-root user that will be used to initialize PostgreSQL and set up the components (e.g. roles and databases) required to run the server on OpenShift Origin.
This is required because the "initdb" executable must be executed by a user other than root and will need access to the data directory. Additionally, this user cannot be the user OpenShift Origin will run the image as because it is not in the system.
Switch to the non-root user.
This is required because the initdb executable must be executed "as the user that will own the server process, because the server needs to have access to the files and directories that initdb creates" (PostgreSQL documentation) and because the PostgreSQL server will be started to set up components (e.g. roles and databases) required to run the server on OpenShift Origin.
Run the "initdb" executable.
Start the PostgreSQL server, set up the required components (roles, databases, etc.) and stop the PostgreSQL server.
Switch back to the root user.
Change the ownership of the PostgreSQL files and directories to the user OpenShift Origin will run the image as.
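A condensed sketch of those build steps as shell commands, with placeholder names (a build-time user builduser created earlier with useradd, runtime UID 1001, data directory /var/lib/pgsql/data) that must be adapted to your image:
# run as root during the image build
mkdir -p /var/lib/pgsql/data
chown -R builduser /var/lib/pgsql/data
su builduser -c "initdb -D /var/lib/pgsql/data"
su builduser -c "pg_ctl -D /var/lib/pgsql/data -w start"
su builduser -c "psql -d postgres -c \"CREATE ROLE app LOGIN PASSWORD 'app';\""
su builduser -c "createdb -O app appdb"
su builduser -c "pg_ctl -D /var/lib/pgsql/data -w stop"
# finally hand the data files to the UID the image is expected to run as
chown -R 1001:0 /var/lib/pgsql/data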
Edit (06/20/18): I have found that there is a solution to set up PostgreSQL after the image is built. The user OpenShift Origin will run the image as can be added to the system at the start of the build. This will allow PostgreSQL to be set up and the ownership of its files and directories to be changed after the image build.
After gathering the comments from all contributors, I can answer my question as follows:
Option 1
When you create the postgres database during the image build, you must configure OpenShift policies to allow starting your container as the user that created the database during the image build. Use this option when the database must be filled with data and that operation takes too much time to be appropriate at container start; the entrypoint will then only start the already prepared database.
Option 2
Create your database when starting the container using the entrypoint script. Use this option when the database creation is fast enough to be done at container start.
Option 3
See the last comment from Adrian, which seems to answer all the problems, although I didn't get the time to test it.
Thank you all for your contributions.

Rename the Amazon RDS master username

Changing the password is easily done through the console. Is there any way to change the master username after creation on RDS for PostgreSQL? If so, how?
You can't change the username. You can check the following links, which describe how to change the master password; if Amazon adds the ability to change the username, you will find it there:
From the AWS CLI for RDS:
modify-db-instance --db-instance-identifier <value> --master-user-password (string)
--master-user-password (string)
The new password for the DB instance master user. Can be any printable
ASCII character except "/", """, or "@".
Changing this parameter does not result in an outage and the change is
asynchronously applied as soon as possible. Between the time of the
request and the completion of the request, the MasterUserPassword
element exists in the PendingModifiedValues element of the operation
response. Default: Uses existing setting
Constraints: Must be 8 to 41 alphanumeric characters (MySQL, MariaDB,
and Amazon Aurora), 8 to 30 alphanumeric characters (Oracle), or 8 to
128 alphanumeric characters (SQL Server).
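With the current AWS CLI, the equivalent call looks roughly like this (the instance identifier and password are placeholders):
aws rds modify-db-instance \
--db-instance-identifier mydbinstance \
--master-user-password 'NewMasterPassw0rd' \
--apply-immediately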
The Amazon RDS Command Line Interface (CLI) has been deprecated. Instead, use the AWS CLI for RDS.
Via the AWS Management Console, choose the instance you need to reset the password for, click ‘Modify’ then choose a new master password.
If you don't want to use the AWS Console, you can use the rds-modify-db-instance command (as per Amazon's documentation for RDS) to reset it directly, given the AWS command line tools:
rds-modify-db-instance instance-name --master-user-password examplepassword
No. As of April 2019 one cannot reset the 'master username'.
You cannot do it directly. However, you can use the database migration service (DMS) from AWS:
https://aws.amazon.com/dms/
Essentially, you define the current database instance as the source and a new database instance with the correct username as the target of the migration.
This way you migrate the data from one database instance to another, and as such you can change all properties, including the username.
This approach has some drawbacks:
You need to configure the migration, which takes a bit of time.
The data is migrated, which may lead to unexpected behavior since not everything is migrated (e.g. views).
Depending on how you set everything up, you may experience downtime.
Though this may not be ideal for every use-case, I did find a workaround that allows for changing the username of the master user of an AWS RDS DB.
I am using PgAdmin4 with PostgreSQL 14 at the time of writing this answer.
Log in with the master user whose name you want to change
Create a new user with the following privileges and membership
Privileges and Membership
Can login - yes
Superuser - no (not possible with a managed AWS RDS DB instance; if you need complete superuser access, DO NOT use a managed AWS RDS DB)
Create roles - yes
Create databases - yes
Inherit rights from the parent roles - yes
Can initiate streaming replication and backups - no (again, not possible directly without superuser permission)
Be sure to note the password used, as you will need to access this new account at least once to complete the name change
Register a server with the credentials created in step 2. Disconnect from the server but do NOT remove it! Connect to the newly created server
Expand Login/Group Roles and click on the master user whose name you are changing
Click the edit icon, edit the name, and save.
Right click the server with the master username, select Properties
Update the name under the General tab if desired
Update the username under the Connection tab to whatever you changed the master username to above
Save and reconnect to the server with the master user
You have successfully updated the master user's name on a managed AWS RDS DB instance, proud of you!
As #tdubs's answer states, it is possible to change the master username for a Postgres DB instance in AWS RDS. Whether it is advisable – probably not.
Here are the SQL commands you need to issue:
Create a temporary user with the CREATEROLE privilege (while being logged in with the old master user)
CREATE ROLE temp_master PASSWORD '<temporary password>' LOGIN CREATEROLE;
Now connect to the database with the temp_master user
ALTER ROLE "<old_master_username>" RENAME TO "<new_master_username>";
-- NOTICE: MD5 password cleared because of role rename
ALTER ROLE "<new_master_username>" PASSWORD '<new password>';
Now connect to the database with the <new_master_username> user in order to clean up the temporary role
DROP ROLE temp_master;
And you're done!
Warning
AWS RDS does not know that the master username has been changed, so it will keep displaying the old one and assumes that is still the master username.
This means that if you use the AWS CLI or website to update the master password, it will have no effect.
And when connecting to the database with psql you'll see:
WARNING: role "<old_master_username>" does not exist