I'm trying to upgrade Semarchy xDM from version 5.2.x to 5.3.15a. I expected Semarchy to upgrade the existing repository, but it is attempting to create a new one instead. What am I missing?
Steps performed so far:
Replicated the existing 5.2.x repository (and data location) schemas in the Postgres DB (we are attempting an out-of-place upgrade). The new schema name is semarchy_repository_5315.
Created user semarchy_repository_5315, provided required grants on schema semarchy_repository_5315
Created repository read-only user semarchy_repository_5315_ro as required by the new version, provided required grants on objects in repository schema
Created a new EC2 instance using Semarchy AMI for version 5.3.15a
Modified config.properties file to point to repository and repository read-only schemas
Configured SEMARCHY_SETUP_TOKEN
Pointing the browser to <IP of EC2 server>/semarchy prompts for the setup token. Upon entering the setup token, the EULA screen is displayed, followed by the Semarchy xDM Install screen, which prompts for a repository name (prefilled as specified in config.properties) and Admin username, password, and confirm password fields. Per the upgrade instructions, it should instead display a prompt to confirm a backup of the existing schemas and then proceed to upgrade the current repository. If I click "Install" after entering all the values, the error "Unable to retrieve data from the server" is displayed.
EDIT: Undeployed and redeployed Semarchy within Tomcat. Did not work. Checked semarchy.log; the error indicates relation mta_repository already exists, which means it's trying to create a repository, not upgrade the existing one.
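As a hedged sanity check (the host and database names below are assumptions, not taken from the original setup), it can help to confirm that the replicated repository tables are actually visible to the user configured in config.properties; the installer offering a fresh install and then colliding with the existing mta_repository relation would be consistent with a schema/search_path mismatch:

psql -h localhost -U semarchy_repository_5315 -d semarchy -c "SHOW search_path;"
psql -h localhost -U semarchy_repository_5315 -d semarchy -c "\dt semarchy_repository_5315.*"

If the MTA_* tables are not listed for that user, the installer would not detect the existing repository.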
I've installed Bareos 20.0.1 on Ubuntu 20.04.3 according to their documentation here.
I'm trying to back up a remote PostgreSQL database and, apparently, there are three possible scenarios; the pros of the PostgreSQL Plugin (the third solution) make it the obvious choice.
Following the PostgreSQL Plugin documentation, in the Prerequisites for the PostgreSQL Plugin section, there is a line saying:
The plugin must be installed on the same host where the PostgreSQL database runs.
Now what I'm failing to understand is: if I'm supposed to install the plugin on my database node, how will the Bareos machine and the plugin on the DB machine communicate?
Furthermore, I've checked out the source code for this module on their GitHub, and I see that the plugin source code tries to find files locally, which supports the aforementioned statement.
In a desperate act, I tried installing the plugin and its dependencies on the Bareos node, and I keep getting the error Error: python3-fd-mod: Could not read Label File /var/lib/postgresql/13/main/backup_label, which shows it is actually trying to find the backup_label file on the Bareos node.
Here is the configuration for my fileset:
FileSet {
  Name = "psql"
  Include {
    Options {
      compression = GZIP
      signature = MD5
    }
    Plugin = "python"
             ":module_path=/usr/lib/bareos/plugins"
             ":module_name=bareos-fd-postgres"
             ":postgresDataDir=/var/lib/postgresql/13/main"
             ":walArchive=/var/lib/postgresql/13/wal_archive/"
             ":dbHost=DATABASE_DNS"
             ":dbuser=DATABASE_USER"
  }
}
Note that the plugin documentation describes the dbHost parameter as:
useful, if socket is not in default location. Specify socket-directory with a leading / here
However, since I'm targeting a remote database, I'm using the DNS address of the remote database. I verified the Bareos connection to the database and made sure the backup_label file gets created while the PostgreSQL backup job runs.
I'll be happy to provide more details if necessary. Appreciate any help or even guesses :-D
I'm trying to import a local PostgreSQL database to Heroku and I'm following these steps: https://devcenter.heroku.com/articles/heroku-postgres-import-export#import-to-heroku-postgres.
I have successfully:
created a dump
uploaded it to an S3 Bucket
created from AWS CLI a signed link
ran the command heroku pg:backups:restore '<SIGNED URL>' DATABASE_URL (adding -a with my app name).
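For reference, the sequence above looks roughly like this (the database, bucket, and app names are placeholders, not the actual ones):

pg_dump -Fc --no-acl --no-owner -h localhost -U myuser mydb > mydb.dump
aws s3 cp mydb.dump s3://my-bucket/mydb.dump
aws s3 presign s3://my-bucket/mydb.dump --expires-in 3600
heroku pg:backups:restore '<SIGNED URL>' DATABASE_URL -a my-app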
The process to restore a backup starts correctly but then exits with this code:
! An error occurred and the backup did not finish.
!
! Could not initialize transfer
!
! Run heroku pg:backups:info r011 for more details.
Opening the log shows:
Database: BACKUP
Finished at: 2020-01-09 18:49:30 +0000
Status: Failed
Type: Manual
Backup Size: 0.00B (0% compression)
=== Backup Logs
2020-01-09 18:49:30 +0000 Could not initialize transfer
I've tried:
re-uploading the file to the bucket,
generating a new signed link,
putting the app in maintenance mode,
I've created a user in my IAM management service with full S3 access and saved the credentials in the app environment, as described in https://devcenter.heroku.com/articles/s3
Not sure where to go from here but would appreciate any help. (I'm on the hobby plan, so I can't ask Heroku support for help.)
Edit: I also tried:
deleting and recreating the S3 Bucket
installing version 1 of the AWS CLI to see if by chance the structure of a presigned link had changed
Edit 2: Since I could not find a solution, I've opted to migrate the hosting entirely to AWS for the moment.
Make sure that the default credentials stored on your machine in ~/.aws/ are set to the credentials you created for your Heroku config. Then also make sure the signed URL is created with those credentials and that config. I had to set my default credentials to the credentials I put in my Heroku config, and I also had to set my default region in ~/.aws/config to match the bucket's location. It should work after that.
Here are some instructions if you are on Mac or Linux. Sorry, Windows people; I would assume it is something similar.
Create a new access key ID and secret key in IAM on AWS.
Set your Heroku config to use those credentials: heroku config:set AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy
Optional (you may have to set the bucket name in the Heroku config too).
On your machine, set the credentials you just created as the default in ~/.aws/credentials (see the sketch after this list).
On your machine, set the default region in ~/.aws/config to the one that corresponds to your bucket.
Create the signed URL: aws s3 presign s3://your-bucket-address/your-object
Run the restore: heroku pg:backups:restore '<SIGNED URL>' DATABASE_URL
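The two "On your machine" steps can also be done from the command line with aws configure set; a minimal sketch, assuming the default profile and a bucket in us-east-1 (substitute your own values):

aws configure set aws_access_key_id xxx
aws configure set aws_secret_access_key yyy
aws configure set region us-east-1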
Had the exact same error and made these 2 adjustments. In the S3 console, click on the file you want to use for the backup. You should see the name of your file followed by 4 tabs. In the General information tab, do the following:
Click on Make public to make the file available for download.
Get the URL for that object where it says URL of object
(It should be something like https://mybucket.s3.amazonaws.com/my.file; you can test whether it works by pasting that URL into a new browser tab, which should trigger the download of your file.)
Once the previous check is working you can proceed to
heroku pg:backups restore 'https://mybucket.s3.amazonaws.com/my.file' DATABASE_URL
I ran into the same issue and discovered that I had my bucket's region set as us-east rather than us-east-1.
I built and deployed a Node.js Postgres app to Heroku and cannot reach any of my endpoints via the Heroku site except the root GET route. Curiously, when I run heroku local web, ALL my endpoints behave exactly as they should, and I can successfully perform CRUD on the app running locally. However, when I try, for instance, to create a user using the Heroku URL, it returns an empty error message; yet, when I check the associated database, I find that the user was indeed created. Other than returning an empty error message when I try to either create a user or sign in, the app correctly responds with the different errors I programmed. For example, when I tweak my login details or try to register the same user I earlier tried to register, it correctly says the user already exists. Still, when I try to log in that same existing user, I get a blank error message. Note that I created both the Heroku PostgreSQL database and my local PostgreSQL database from exactly the same queries. Please, can you help me through this bottleneck? I am using Postman to test my APIs.
Test to sign in user on Heroku app running on the local machine: success!
Same exact test with Heroku URL: cryptic error.
OK, so after a lot of research and fiddling around I discovered the solution: I had not added the keys from my .env file to Heroku as config vars, found under the Settings tab of the Heroku dashboard. Manually adding my environment variables resolved the matter. Now my app works both on my local machine and via the Heroku URL.
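The same thing can also be done from the CLI; a minimal sketch with hypothetical variable names (use whatever your .env file actually defines):

heroku config:set JWT_SECRET=supersecret DB_SSL=true -a your-app-name
heroku config -a your-app-name    # verify that the vars are set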
We have a read-only PostgreSQL database that should run in an OpenShift cluster.
We are using RHEL as the underlying operating system.
Our Dockerfile installs the PostgreSQL software, creates the database instance, loads the data into it, then shuts the database down and saves the image.
We use only bash and SQL scripts and deploy the database using Flyway.
When the container starts, the entrypoint script simply starts up the database instance using the "pg_ctl" command, then enters an endless loop to keep the container running.
The last command in the Dockerfile is USER 26, where 26 is the ID of the postgres user. The entrypoint script can be started as the postgres user or by a sudo user.
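A minimal sketch of such an entrypoint, assuming the data directory is /var/lib/pgsql/data (the path and the sleep interval are assumptions):

#!/bin/bash
# start the already-initialized instance
pg_ctl -D /var/lib/pgsql/data -w start
# keep the container alive
while true; do sleep 3600; done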
Everything is working well in Docker.
In OpenShift the container is started by a different user belonging to the root group, which is neither the root user nor user 26. In effect, OpenShift ignores the USER 26 clause in the Dockerfile.
The user starting the container (we'll call it containeruser) has no rights to start the postgres instance, so when running the entrypoint it gets "permission denied" on the PostgreSQL data folder.
I have tried different options, such as adding containeruser to the wheel group and modifying the sudoers file to allow it to use sudo and start the entrypoint as the postgres user, but with no success.
So I have my database image ready but cannot start it in OpenShift.
On the OpenShift configuration side we are not allowed to make changes such as allowing sudo usage or starting the container as the root or postgres user.
Any idea or help to this problem?
I am not an OpenShift expert.
Thank you!
Best regards,
rimetnac
You have two choices.
The preferred choice is to fix your image so that it can run as any user. For this, do not use the existing postgres user. Create a new user, where that user has group root. Then ensure that all directories/files that PostgreSQL needs to write to are owned by that user, but also have group root and are writable by the group. When the container is then started up, it will run as an assigned user ID that is not in /etc/passwd, and so will fall back to using group root. Because the directories/files are writable by group root, everything will still work. For more information see:
https://docs.openshift.org/latest/creating_images/guidelines.html
Specifically, section 'Support Arbitrary User IDs'.
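A hedged sketch of the ownership/permission fix at image-build time (the data directory path and the numeric UID are assumptions):

# run during the image build, as root
chgrp -R 0 /var/lib/pgsql/data
chmod -R g=u /var/lib/pgsql/data
# the Dockerfile can then end with a numeric, non-root user, e.g. USER 1001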
The second option, if you have admin control of the cluster and your security team does not object to you overriding the default security model, is to allow your image to be run as the user ID it wants.
First create a new service account:
oc create serviceaccount runasnonroot
Next, grant that service account the ability to run as a non-root user ID of its choosing.
oc adm policy add-scc-to-user nonroot -z runasnonroot --as system:admin
Then patch the deployment config to use that service account.
oc patch dc/mydatabase --patch '{"spec":{"template":{"spec":{"serviceAccountName": "runasnonroot"}}}}'
Note that this still requires that USER in the image be set to an integer user ID and not postgres. Otherwise OpenShift can't verify that the image will run as a non-root user: if you use a user name instead of a user ID, that name could maliciously be mapped to root.
I spent days figuring this out and found one good solution.
OpenShift Origin runs an image as a user created by it, as explained in this OpenShift blog post. This prevents programs from being able to access needed files and directories. To successfully run a program on OpenShift Origin, the blog post provides two solutions; however, the first will not work for PostgreSQL and the second has two disadvantages (explained in the notes):
Grant group write access to the directories used by the main program.
This will not solve the problem because, although the PostgreSQL files will be accessible by any program, they must be owned by the owner of the PostgreSQL process.
Ensure that when operating system libraries are used to look up a system user, one is returned for the ID of the user OpenShift Origin runs the image as. The following are two methods for doing this:
Use a package called nss_wrapper, "which intercepts any calls which look up details of a user and returns a valid entry."
Make the UNIX password database file (/etc/passwd) have global write permissions in the image build so that the OpenShift user can be added to it in the S2I run script.
Each option has a disadvantage: the first requires installing an extra package, and the second makes user accounts insecure.
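For reference, the second method is usually done roughly like this (the user name, home directory, and the exact script where it runs are assumptions):

# at image-build time, as root: make /etc/passwd group-writable
chmod g+w /etc/passwd
# at container start (e.g. in the S2I run script): register the assigned UID
if ! whoami &> /dev/null; then
  echo "postgres:x:$(id -u):0:PostgreSQL user:/var/lib/pgsql:/bin/bash" >> /etc/passwd
fi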
The best solution is to build the docker image to run as the user OpenShift Origin will run the image as. I built this instructional image with it.
One additional problem to note is that, as the owner of the PostgreSQL process must be the owner of the files and directories accessed by PostgreSQL, PostgreSQL must be set up (i.e. initdb, roles, databases, etc.) during the image build. This is because file ownership can only be changed during the image build and the ownership of the files must be changed after PostgreSQL has been set up for the reason explained in #2 below.
Here are the complete steps with notes for setting up PostgreSQL in the image build:
Manually create the PostgreSQL data directory and change its ownership to a non-root user that will be used to initialize PostgreSQL and set up the components (e.g. roles and databases) required to run the server on OpenShift Origin.
This is required because the "initdb" executable must be executed by a user other than root and will need access to the data directory. Additionally, this user cannot be the user OpenShift Origin will run the image as because it is not in the system.
Switch to the non-root user.
This is required because the initdb executable must be executed "as the user that will own the server process, because the server needs to have access to the files and directories that initdb creates" (PostgreSQL documentation) and because the PostgreSQL server will be started to set up components (e.g. roles and databases) required to run the server on OpenShift Origin.
Run the "initdb" executable.
Start the PostgreSQL server, set up the required components (roles, databases, etc.) and stop the PostgreSQL server.
Switch back to the root user.
Change the ownership of the PostgreSQL files and directories to the user OpenShift Origin will run the image as.
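A rough shell sketch of the steps above as they might appear in the image build (the paths, role/database names, and the final UID are all assumptions):

# 1. create the data directory and hand it to the build-time user
mkdir -p /var/lib/pgsql/data && chown postgres:postgres /var/lib/pgsql/data
# 2-5. initialize and set up the cluster as that user
su postgres -c 'initdb -D /var/lib/pgsql/data'
su postgres -c 'pg_ctl -D /var/lib/pgsql/data -w start'
su postgres -c "psql -c \"CREATE ROLE app LOGIN PASSWORD 'app'\" -c 'CREATE DATABASE appdb OWNER app'"
su postgres -c 'pg_ctl -D /var/lib/pgsql/data -w stop'
# 6-7. back as root: hand ownership to the user ID OpenShift Origin will run the image as (1000050000 is only an example)
chown -R 1000050000:0 /var/lib/pgsql/data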
Edit (06/20/18): I have found that there is a solution to set up PostgreSQL after the image is built. The user OpenShift Origin will run the image as can be added to the system at the start of the build. This will allow PostgreSQL to be set up and the ownership of its files and directories to be changed after the image build.
After gathering the comments from all contributors, I can answer my question as follows:
Option 1
When you create the Postgres database during the image build, you must configure OpenShift policies to allow starting your container as the user that created the database during the image build. Use this option when the database must be filled with data and this operation takes too much time to be done at container start; the entrypoint will then only start the already prepared database.
Option 2
Create your database when starting the container using the entrypoint script. Use this option when the database creation is fast enough to be done at container start.
Option 3
See the last comment from Adrian, which seems to address all the problems; however, I haven't had the time to test it.
Thank you all for your contributions.
Changing the password is easily done through the console. Is there any way to change the master username after creation on RDS for PostgreSQL? If so, how?
You can't change the username. You can check the following links, which describe how to change the master password; if Amazon adds the ability to change the username, you will find it there:
Try the AWS CLI for RDS:
aws rds modify-db-instance --db-instance-identifier <value> --master-user-password <value>
--master-user-password (string)
The new password for the DB instance master user. Can be any printable ASCII character except "/", """, or "@".
Changing this parameter does not result in an outage and the change is asynchronously applied as soon as possible. Between the time of the request and the completion of the request, the MasterUserPassword element exists in the PendingModifiedValues element of the operation response.
Default: Uses existing setting
Constraints: Must be 8 to 41 alphanumeric characters (MySQL, MariaDB, and Amazon Aurora), 8 to 30 alphanumeric characters (Oracle), or 8 to 128 alphanumeric characters (SQL Server).
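For completeness, a hedged example of the full command with the current AWS CLI (the instance identifier and password are placeholders):

aws rds modify-db-instance --db-instance-identifier mydbinstance --master-user-password 'NewStrongPassword1' --apply-immediately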
The Amazon RDS Command Line Interface (CLI) has been deprecated. Instead, use the AWS CLI for RDS.
Via the AWS Management Console, choose the instance you need to reset the password for, click ‘Modify’ then choose a new master password.
If you don’t want to use the AWS Console, you can use the rds-modify-db-instance command (as per Amazon’s documentation for RDS) to reset it directly, given the AWS command line tools:
rds-modify-db-instance instance-name --master-user-password examplepassword
No. As of April 2019 one cannot reset the 'master username'.
You cannot do it directly. However you can use the database migration service from AWS:
https://aws.amazon.com/dms/
Essentially you define the current database instance as your source and the new database with the correct username as your target of the migration.
This way you migrate the data from one database instance to another. As such you can change all properties, including the username.
This approach has some drawbacks:
You need to configure the migration, which takes a bit of time.
The data is migrated. This may lead to unexpected behavior, since not everything is necessarily migrated (e.g. views).
Depending on how you set everything up, you may experience downtime.
Though this may not be ideal for every use-case, I did find a workaround that allows for changing the username of the master user of an AWS RDS DB.
I am using PgAdmin4 with PostgreSQL 14 at the time of writing this answer.
Login with the master user you want to change the name of
Create a new user with the following privileges and membership
Privileges and Membership
Can login - yes
Superuser - no (not possible with a managed AWS RDS DB instance; if you need complete superuser access, DO NOT use a managed AWS RDS DB)
Create roles - yes
Create databases - yes
Inherit rights from the parent roles - yes
Can initiate streaming replication and backups - no (again, not possible directly without superuser permission)
Be sure to note the password used, as you will need to access this new account at least once to complete the name change.
Register a server with the credentials created in step 2. Disconnect from the original server but do NOT remove it! Connect to the newly created server.
Expand Login/Group Roles and click on the master user whose name you are changing.
Click the edit icon, edit the name, and save.
Right click the server with the master username, select Properties
Update the name under the General tab if desired
Update the username under the Connection tab to whatever you changed the master username to above.
Save and reconnect to the server with the master user
You have successfully updated the master user's name on a managed AWS RDS DB instance, proud of you!
As #tdubs's answer states, it is possible to change the master username for a Postgres DB instance in AWS RDS. Whether it is advisable – probably not.
Here are the SQL commands you need to issue:
Create a temporary user with the CREATEROLE privilege (while being logged in with the old master user)
CREATE ROLE temp_master PASSWORD '<temporary password>' LOGIN CREATEROLE;
Now connect to the database with the temp_master user
ALTER ROLE "<old_master_username>" RENAME TO "<new_master_username>";
-- NOTICE: MD5 password cleared because of role rename
ALTER ROLE "<new_master_username>" PASSWORD '<new password>';
Now connect to the database with the <new_master_username> user in order to clean up the temporary role
DROP ROLE temp_master;
And you're done!
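For completeness, connecting for each step is just an ordinary psql connection; a sketch with a placeholder endpoint (substitute your own host and database):

psql -h mydb.abc123.us-east-1.rds.amazonaws.com -U temp_master -d postgres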
Warning
AWS RDS does not know that the master username has been changed, so it will keep displaying the old one and assume that it is still the master username.
This means that if you use the AWS CLI or website to update the master password, it will have no effect.
And when connecting to the database with psql you'll see:
WARNING: role "<old_master_username>" does not exist