I am running ThingsBoard (Internet of Things) on an Ubuntu 18.04 LTS VPS, using a PostgreSQL DB. Since the IoT devices send a lot of data to the DB, I need to regularly clean it up.
For this, I would like to use pg_cron.
I followed the steps described at https://github.com/citusdata/pg_cron:
1. I installed postgresql-10-cron
2. I modified postgresql.conf:
shared_preload_libraries = 'pg_cron'
cron.database_name = 'postgres'
3. I restarted the DB:
service postgresql restart
4. I logged into my postgres DB with Adminer and executed:
CREATE EXTENSION pg_cron;
=> Success
However, I cannot see any tables in postgres that I could configure...
I tried step 4 again, with the result
ERROR: extension "pg_cron" already exists
What am I missing?
The metadata tables are created in the cron schema:
\dt cron.*
To find all objects that belong to the extension, run
\dx+ pg_cron
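Once the extension is in place, the scheduled jobs themselves are rows in cron.job. A minimal sketch of scheduling a nightly cleanup, where the table name and retention condition are placeholders you would adapt to your ThingsBoard schema (and note that jobs run in the database named by cron.database_name):
-- list the configured jobs
SELECT * FROM cron.job;
-- schedule a cleanup every night at 03:00 (placeholder table and condition)
SELECT cron.schedule('0 3 * * *', $$DELETE FROM device_telemetry WHERE created_at < now() - interval '30 days'$$);
-- remove a job again by its jobid
SELECT cron.unschedule(1);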
I have Apache Airflow implemented on an Ubuntu 18.04.3 server. When I set it up, I used the generic SQLite database, which uses the sequential executor. I did this just to play around and get used to the system. Now I'm trying to use the Local Executor, and will need to transition my database from SQLite to the recommended PostgreSQL.
Does anybody know how to make this transition? All of the tutorials I've found entail setting up Airflow with PostgreSQL from the beginning. I know there are a ton of moving parts and I'm scared of messing up what I currently have running. Anybody who knows how to do this or can point me at where to look is much appreciated. Thanks!
Just to complete @lalligood's answer with some commands:
In the airflow.cfg file, look for sql_alchemy_conn and update it to point to your PostgreSQL server:
sql_alchemy_conn = postgresql+psycopg2://user:pass@hostaddress:port/database
For instance:
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@localhost:5432/airflow
As indicated in the line above, you need both a user and a database called airflow, so you need to create them. To do so, open the psql command line and type the following commands to create a user and database called airflow and grant all privileges over the airflow database to the airflow user:
CREATE USER airflow WITH PASSWORD 'airflow'; -- password must match the one in sql_alchemy_conn
CREATE DATABASE airflow;
GRANT ALL PRIVILEGES ON DATABASE airflow TO airflow;
Now you are ready to init the airflow application using postgres:
airflow initdb
If everything was right, access the psql command line again, enter the airflow database with the \c airflow command and type \dt to list all tables of that database. You should see a list of Airflow tables; currently there are 23.
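For example, the verification in psql looks roughly like this (the exact tables depend on your Airflow version):
\c airflow
\dt
-- expect tables such as dag, dag_run, task_instance, ...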
Another option, other than editing the airflow.cfg file,
is to set the environment variable AIRFLOW__CORE__SQL_ALCHEMY_CONN to the PostgreSQL connection string you want.
Example: export AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://airflow:airflow@localhost:5432/airflow
(There is also an AIRFLOW__CORE__SQL_ALCHEMY_CONN_SECRET variant that fetches the value from a configured secrets backend.)
Or you can set it as an ENV instruction in your Dockerfile.
See the Airflow documentation for details.
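For instance, a Dockerfile line could look like this (the host name postgres is just an assumed service name for illustration):
ENV AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://airflow:airflow@postgres:5432/airflow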
I was able to get it working by doing the following 4 steps (a consolidated command sketch follows the list):
1. Assuming that you are starting from scratch, initialize your airflow environment with the SQLite database. The key takeaway here is for it to generate the airflow.cfg file.
2. Update the sql_alchemy_conn line in airflow.cfg to point to your PostgreSQL server.
3. Create the airflow role + database in PostgreSQL. (Revoke all permissions from public to the airflow database & ensure the airflow role owns the airflow database!)
4. (Re)Initialize airflow (airflow initdb) & confirm that you see ~19 tables in the airflow database.
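A rough sketch of those steps as commands, assuming the role, database, and password are all called airflow and PostgreSQL runs locally (adjust to your setup; newer Airflow versions use airflow db init instead of airflow initdb):
# 1. first run against SQLite just to generate ~/airflow/airflow.cfg
airflow initdb
# 2. edit ~/airflow/airflow.cfg:
#    sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@localhost:5432/airflow
# 3. create the role and database in PostgreSQL
sudo -u postgres psql -c "CREATE USER airflow WITH PASSWORD 'airflow';"
sudo -u postgres psql -c "CREATE DATABASE airflow OWNER airflow;"
sudo -u postgres psql -c "REVOKE ALL ON DATABASE airflow FROM PUBLIC;"
# 4. re-initialize, this time against PostgreSQL
airflow initdb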
I'm trying to update my OpenProject from v7.0 to v8.0 using the new Docker image. Everything went well until I tried to import the database. The new version uses PostgreSQL v9.6, which is incompatible with the former PostgreSQL v9.4.
There is a good guide on the OpenProject website on how to migrate to PostgreSQL v9.6: https://www.openproject.org/operations/upgrading/openproject-postgresql-migration-guide-9-6/ . But it covers only the Linux installation.
How is it possible to migrate the OpenProject database from PostgreSQL v9.4 to v9.6 within Docker?
Finally a solution was found. Here are the general steps for the migration:
Create the database backup of the current installation (a command sketch follows these steps):
Connect to the old container v7.0
Stop the running services, except postgres, via supervisorctl stop <service_name>
Create the dump of the database with the name "openproject"
Exit the container and copy the created dump outside it
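A minimal sketch of that backup side, assuming the old container is named openproject7 and the bundled database user is postgres (container and service names here are illustrative):
# on the Docker host
docker exec -it openproject7 bash
# inside the container: stop everything except postgres (service names depend on the image)
supervisorctl status
supervisorctl stop <service_name>
# dump the openproject database in custom format
su - postgres -c "pg_dump -Fc openproject -f /tmp/openproject.dump"
exit
# back on the host: copy the dump out of the container
docker cp openproject7:/tmp/openproject.dump ./openproject.dump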
Restore the database into the new installation (again, a sketch follows):
Copy the previously created dump inside the new container v8.0
Connect to the new container v8.0
Stop the running services, except postgres, via supervisorctl stop <service_name>
Connect to the postgresql database server and delete the database "openproject"
Create a new "openproject" database and assign it to the user "openproject"
Restore the openproject database using the dump file
Exit and restart the container
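And the restore side, under the same naming assumptions (new container called openproject8, database and user both called openproject):
# on the Docker host: copy the dump into the new container
docker cp ./openproject.dump openproject8:/tmp/openproject.dump
docker exec -it openproject8 bash
# inside the container: stop everything except postgres
supervisorctl stop <service_name>
# recreate the database and restore the dump
su - postgres -c "dropdb openproject"
su - postgres -c "createdb -O openproject openproject"
su - postgres -c "pg_restore -d openproject /tmp/openproject.dump"
exit
# back on the host
docker restart openproject8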
The OpenProject system will automatically recognize the old database structure and will migrate the tables to match the new version.
I hope this helps someone.
Currently we have a single all-in-one Docker container for our production GitLab, where we are using the bundled postgres and redis. So everything is in the same container. We want to use an external postgres db and a separate container for redis as well, to follow production standards.
How can I migrate from the internal postgres db to an external postgres db? If anyone can provide the process and steps, that would be really helpful. We are new to this process.
Thank you everyone for your inputs,
PRS
You can follow the article "Migrating GitLab from internal to external PostgreSQL", which involves:
a database dump/reload, using pg_dumpall
sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_dumpall \
--username=gitlab-psql --host=/var/opt/gitlab/postgresql > /var/lib/pgsql/database.sql
sudo -u postgres psql -f /var/lib/pgsql/database.sql
Note: you can also use a backup of the database, but only if the external PostgreSQL version exactly matches the embedded one.
setting its password
sudo -u postgres psql -c "ALTER USER gitlab ENCRYPTED PASSWORD '***' VALID UNTIL 'infinity';"
and modifying the GitLab configuration:
That is:
# Disable the built-in Postgres
postgresql['enable'] = false
# Fill in the connection details
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'utf8'
gitlab_rails['db_host'] = '127.0.0.1'
gitlab_rails['db_port'] = 5432
gitlab_rails['db_database'] = "gitlabhq_production"
gitlab_rails['db_username'] = 'gitlab'
gitlab_rails['db_password'] = '***'
apply your changes:
gitlab-ctl reconfigure && gitlab-ctl restart
@VonC
Hi, let me know about the process I have done below.
We currently have a single all-in-one Docker GitLab container which is using the bundled postgres and redis. To follow production standards we are looking to maintain separate postgres and redis instances for our prod GitLab. We already have data in the bundled db, so we took a backup of the current GitLab with the bundled postgres, which generated a .tar file. Next we changed gitlab.rb to point to the external postgres db [same version]. Then we were able to connect to GitLab but didn't see any data, because nothing was there as it was a fresh db. Later we did the restore using the external postgres db, and now we can see all the data. Can we do it this way? Now our GitLab is attached to the external postgres and I can see all the restored data. Will this process work? Any downfalls?
How is this process different from pg_dump and import?
I have installed Postgres 9.5 on my Linux (16.04) machine. I started the service using the command below:
sudo service postgresql start
This starts the postgres service as the postgres user.
But I want to run postgres as a different user (my own user).
How can I do that? Please help!
You have to recursively change the ownership of the database directory to the new user.
If the WAL directory or tablespaces are outside the data directory, you have to change their ownership too.
Then you will have to configure the startup script so that it starts PostgreSQL as the new user. Watch out: if you installed the startup script with an installation package, any changes to it will probably be lost after an update.
I recommend that you don't do all that and continue running PostgreSQL as postgres.
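If you do decide to go ahead, a rough sketch of the ownership part, assuming the default Ubuntu 16.04 data directory for 9.5 and a new user called myuser (both are assumptions; check your actual data_directory setting first):
sudo service postgresql stop
# data directory ownership (default location for 9.5 on Ubuntu; verify with SHOW data_directory;)
sudo chown -R myuser:myuser /var/lib/postgresql/9.5/main
# repeat for any WAL directory or tablespaces located outside the data directory
# finally, adjust the init/systemd startup script so it launches PostgreSQL as myuser
# (such edits may be overwritten when the postgresql package is updated)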
I am in the process of configuring the backup gem (http://backup.github.io/backup/v4/) to run on my EC2 instance, copy the PostgreSQL database in RDS, and store the backup in a new S3 bucket.
The backup gem runs the pg_dump command, however AWS doesn't allow for the same version of Postgres to be installed on both EC2 and RDS, resulting in the following error:
pg_dump: server version: 9.4.7; pg_dump version: 9.2.13
pg_dump: aborting because of server version mismatch
This is because the EC2 instance has version:
$ pg_dump --version
pg_dump (PostgreSQL) 9.2.13
And the RDS instance has version:
9.4.7-R1 (with the only other version option of 9.5.2-R1)
On EC2, running yum list postgres* only offers Available Packages up to PostgreSQL 9.3.
So it seems like I am unable to either downgrade RDS or upgrade EC2 to a matching version.
Here is my Backup gem model config if it helps: https://gist.github.com/anonymous/35f6f9e81846f53693fb03662c2192ad
Before too many people start reminding me that RDS has built-in backups, I am aware. My use-case: instead of only having full database fallbacks, I would also like the ability to roll back individual users' data to different time periods without affecting the whole database. I planned on keeping these manual backups and eventually writing a script to pull previous user specific data from them.
My friend recommended another option: If a user wants to roll back, I could spin up a new RDS from the automated snapshots, clone my EC2 instance, connect them to each other, collect the user specific data from that snapshot, and then merge those changes back into the main EC2 instance.
Set up PostgreSQL’s YUM repository on your EC2 instance:
https://yum.postgresql.org/
and install a newer PostgreSQL client version.
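For example, on the EC2 instance it could look roughly like this (the repository RPM URL placeholder and package name are illustrative; pick the ones matching your OS and the RDS server version from https://yum.postgresql.org/):
# add the PGDG repository RPM for PostgreSQL 9.4 (choose the correct one for your OS/architecture)
sudo yum install -y <pgdg-9.4-repo-rpm-url>
# install a client that matches the RDS server major version (9.4 here)
sudo yum install -y postgresql94
pg_dump --version   # should now report 9.4.x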