I have problems fetching MariaDB. Since I don't need this package, I'm trying to remove it. First, I tried to find out what includes it:
$ grep -nrw ../layers/ -e mariadb
Binary file ../layers/meta-openembedded/.git/index matches
../layers/meta-openembedded/meta-oe/recipes-core/packagegroups/packagegroup-meta-oe.bb:99: leveldb libdbi mariadb mariadb-native \
Looking into packagegroup-meta-oe.bb I found:
RDEPENDS_packagegroup-meta-oe-dbs = "\
    leveldb libdbi mariadb mariadb-native \
    mysql-python postgresql psqlodbc rocksdb soci \
    sqlite \
    ${@bb.utils.contains("DISTRO_FEATURES", "bluez4", "mongodb", "", d)} \
"
Hence I tried to remove packagegroup-meta-oe-dbs in my <image>.bb:
IMAGE_INSTALL_remove = "packagegroup-meta-oe-dbs"
But it still insists on building it.
Where is my mistake?
Since mariadb is a runtime dependency of packagegroup-meta-oe-dbs, you cannot remove it without removing packagegroup-meta-oe-dbs entirely.
What you need to do is create a bbappend for the packagegroup-meta-oe recipe and add the following line to it:
RDEPENDS_packagegroup-meta-oe-dbs_remove = "mariadb"
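For example, assuming a custom layer named meta-mylayer that mirrors the original recipe's path (both the layer name and the path are assumptions), the append file must be named after the packagegroup-meta-oe recipe:
# meta-mylayer/recipes-core/packagegroups/packagegroup-meta-oe.bbappend
RDEPENDS_packagegroup-meta-oe-dbs_remove = "mariadb"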
In order to track changes for a database, I thought that the best solution is to generate a snapshot of the database, make the changes, and then get a changelog by comparing the snapshot with the db.
For the snapshot I use the following command:
$ liquibase --url=jdbc:postgresql://localhost:5432/test \
    --outputFile=sdk/workspace//output.json --snapshotFormat=json snapshot
Now that I have the snapshot, I made some changes in the database (adding 2 columns to 2 different tables). I try to run the following command to compare the snapshot with the db:
$ liquibase --url=jdbc:postgresql://localhost:5432/test \
    --username=postgres --password=new_password \
    --referenceUrl=offline:postgresql?snapshot=sdk/testsnapshot.json \
    diffChangeLog
but I get the following error:
Unexpected error running Liquibase: Cannot parse snapshot
offline:postgresql?snapshot=sdk/testsnapshot.json
Any idea on how to fix this?
I managed to solve the problem, and in case someone has the same issue in the future, this is the correct way to do it.
Generate the snapshot:
liquibase \
--driver=org.postgresql.Driver \
--classpath=lib/postgresql42.2.5.jre6.jar \
--url=jdbc:postgresql://localhost:5432/test \
--outputFile=sdk/snaptest.json \
--username=postgres \
--password=new_password \
snapshot
Generate the diffChangeLog between the snapshot and the database:
liquibase \
--driver=org.postgresql.Driver \
--changeLogFile=sdk/workspace/difference_log.xml \
--url=jdbc:postgresql://localhost:5432/test \
--username=postgres \
--password=new_password \
--referenceUrl=offline:postgresql?snapshot=sdk/snaptest.json \
diffChangeLog
Cheers
What I'm trying to do is convert this install script for WebODM (https://gist.github.com/lkpanganiban/5226cc8dd59cb39cdc1946259c3fea6e), written in bash, to be used in the tcsh shell under a FreeNAS jail.
I have now come to a part where I can't find a solution, and my hope is that someone can enlighten me about what to do next.
The line that triggers the problem is:
su - postgres -c "psql -d webodm_dev -c "\""CREATE EXTENSION postgis;"\"" "
The whole error line:
ERROR: could not load library "/usr/local/lib/postgresql/plpgsql.so": dlopen (/usr/local/lib/postgresql/plpgsql.so) failed: /usr/local/lib/postgresql/plpgsql.so: Undefined symbol "MakeExpandedObjectReadOnly"
pkg info gives:
postgis24-2.4.5_1 Geographic objects support for PostgreSQL databases
postgresql95-client-9.5.15_2 PostgreSQL database (client)
postgresql95-contrib-9.5.15_2 The contrib utilities from the PostgreSQL distribution
postgresql95-server-9.5.15_2 PostgreSQL is the most advanced open-source database available anywhere
And yes, the file exists:
root@webodm2:~ # ls -l /usr/local/lib/postgresql/plpgsql.so
-rwxr-xr-x  1 root  wheel  195119 Feb  7 18:16 /usr/local/lib/postgresql/plpgsql.so
root@webodm2:~ #
So, does anyone have an idea?
I faced this issue after upgrading from Postgres 11 to 12. Here is how to fix it for Linux and Mac (without brew):
$ sudo su postgres
$ /usr/lib/postgresql/12/bin/pg_upgrade \
--old-datadir=/var/lib/postgresql/11/main \
--new-datadir=/var/lib/postgresql/12/main \
--old-bindir=/usr/lib/postgresql/11/bin \
--new-bindir=/usr/lib/postgresql/12/bin \
--old-options '-c config_file=/etc/postgresql/11/main/postgresql.conf' \
--new-options '-c config_file=/etc/postgresql/12/main/postgresql.conf'
You can add --check to do a dry run of the upgrade without changing anything in your Postgres installation.
For Mac users with a brew installation: after upgrading the formula, run the following command:
$ brew postgresql-upgrade-database
That error message means that you have a plpgsql.so built for PostgreSQL 9.6 or later and are trying to load it into a PostgreSQL 9.5 or earlier server: the symbol MakeExpandedObjectReadOnly is only exported by server binaries from 9.6 on.
Either you are picking up the wrong library, or you copied files around.
Anyway, the problem has nothing to do with PostGIS.
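If you want to verify which side is out of date, a quick sketch (nm and pg_config are standard tools; the library path is the one from the question):
# Server version actually running
psql -c 'SELECT version();'
# Library directory the installed server loads extensions from
pg_config --pkglibdir
# If the symbol shows up as 'U' (undefined), this plpgsql.so was built for 9.6+
nm -D /usr/local/lib/postgresql/plpgsql.so | grep MakeExpandedObjectReadOnly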
It might be that your database files are on an outdated version; try to run the checks below before running brew postgresql-upgrade-database, or try restarting your service: brew services restart postgres.
psql --version # 11.4 <--- psql cli version
psql -c 'select version();' postgres # 10.2 <--- db version in storage
brew info postgres # check pg info <--- found solution
brew postgresql-upgrade-database # upgrade db version in storage and fixed the issue
We are using a server I created on Google Cloud Platform to create and manage the other servers there. But when trying to create a new server from the Linux command line with the gcloud compute instances create command, we receive the following error:
marco@ans-mgmt-01:~/gcloud$ ./create_gcloud_instance.sh app-tst-04 tst,backend-server,bootstrap home-tst 10.20.22.104
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- The resource 'projects/REMOVED_OUR_PROJECTID/global/images/family/debian-8' was not found
Our script looks like this:
#!/bin/bash
if [ "$#" -ne 4 ]; then
echo "Usage: create_gcloud_instance <instance_name> <tags> <subnet_name> <server_ip>"
exit 1
fi
set -e
INSTANCE_NAME=$1
TAGS=$2
SERVER_SUBNET=$3
SERVER_IP=$4
gcloud compute --project "REMOVED OUR PROJECT ID" instances create "$INSTANCE_NAME" \
--zone "europe-west1-c" \
--machine-type "f1-micro" \
--network "cloudnet" \
--subnet "$SERVER_SUBNET" \
--no-address \
--private-network-ip="$SERVER_IP" \
--maintenance-policy "MIGRATE" \
--scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
--service-account "default" \
--tags "$TAGS" \
--image-family "debian-8" \
--boot-disk-size "10" \
--boot-disk-type "pd-ssd" \
--boot-disk-device-name "bootdisk-$INSTANCE_NAME"

./clean_known_hosts.sh $INSTANCE_NAME
On the Google Cloud console (console.cloud.google.com) I enabled the Cloud API access scopes for the ans-mgmt-01 server, and I also tried creating a server from there. That works without problems.
The problem is that gcloud is looking for the image family in your project and not in the debian-cloud project, where it really exists.
This can be fixed by simply using --image-project debian-cloud.
This way instead of looking for projects/{yourID}/global/images/family/debian-8, it will look for projects/debian-cloud/global/images/family/debian-8.
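A minimal sketch of the fixed call, keeping only the flags relevant to the fix (instance name and zone taken from the question):
gcloud compute instances create "app-tst-04" \
    --zone "europe-west1-c" \
    --image-family "debian-8" \
    --image-project "debian-cloud"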
For me the problem was that debian-8 (and now debian-9) reached end of life and is no longer supported. Updating to debian-10 or debian-11 fixed the issue.
For me the problem was that debian-9 had reached its end of life after so much time; updating to debian-10 fixed the issue.
You could run the command below to see whether the image is available:
gcloud compute images list | grep debian
Below is the result of the command:
NAME: debian-10-buster-v20221206
PROJECT: debian-cloud
FAMILY: debian-10
NAME: debian-11-bullseye-arm64-v20221102
PROJECT: debian-cloud
FAMILY: debian-11-arm64
NAME: debian-11-bullseye-v20221206
PROJECT: debian-cloud
FAMILY: debian-11
Your own output will show which image families are currently available to use.
I'm trying to port a database from MySQL to PostgreSQL. I've rebuilt the schema in Postgres, so all I need to do is get the data across, without recreating the tables.
I could do this with code that iterates all the records and inserts them one at a time, but I tried that and it's waaayyyy too slow for our database size, so I'm trying to use mysqldump and a pipe into psql instead (once per table, which I may parallelize once I get it working).
I've had to jump through various hoops to get this far, turning on and off various flags to get a dump that is vaguely sane. Again, this only dumps the INSERT INTO, since I've already prepared the empty schema to get the data into:
/usr/bin/env \
PGPASSWORD=mypassword \
mysqldump \
-h mysql-server \
-u mysql-username \
--password=mysql-password \
mysql-database-name \
table-name \
--compatible=postgresql \
--compact \
-e -c -t \
--default-character-set=utf8 \
| sed "s/\\\\\\'/\\'\\'/g" \
| psql \
-h postgresql-server \
--username=postgresql-username \
postgresql-database-name
Everything except that ugly sed command is manageable. I'm doing that sed to try to convert MySQL's approach to quoting single quotes inside strings ('O\'Connor') to PostgreSQL's quoting requirements ('O''Connor'). It works until there are strings like this in the dump: 'String ending with a backslash \\'... and yes, it seems there is some user input in our database that has this format, which is perfectly valid but doesn't pass my sed command. I could add a lookbehind to the sed command, but I feel like I'm crawling into a rabbit hole. Is there a way to either:
a) Tell mysqldump to quote single quotes by doubling them up
b) Tell psql to expect backslashes to be interpreted as quoting escapes?
I have another issue with BINARY and bytea differences, but I've worked around that with a base64 encoding/decoding phase.
EDIT | Looks like I can do (b) with set backslash_quote = on; set standard_conforming_strings = off;, though I'm not sure how to inject that into the start of the piped output.
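One way to prepend those settings to the piped output is a brace group, so psql sees the SET statements before the INSERTs (a sketch reusing the placeholder names from my command above; note backslash_quote can normally only be set by a superuser):
{
  echo "set backslash_quote = on; set standard_conforming_strings = off;"
  mysqldump -h mysql-server -u mysql-username --password=mysql-password \
    mysql-database-name table-name \
    --compatible=postgresql --compact -e -c -t \
    --default-character-set=utf8
} | psql -h postgresql-server --username=postgresql-username postgresql-database-name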
Dump the tables to TSV using mysqldump's --tab option and then import using psql's COPY method.
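A minimal sketch of that approach, reusing the placeholder names from the question (the dump directory is an assumption; --tab writes the files on the MySQL server host and requires the FILE privilege there):
# Writes table-name.sql (schema) and table-name.txt (tab-separated data) into /tmp/dump
mysqldump --tab=/tmp/dump -h mysql-server -u mysql-username \
    --password=mysql-password mysql-database-name table-name
# \copy's default text format matches the TSV that --tab produces
psql -h postgresql-server --username=postgresql-username \
    postgresql-database-name \
    -c "\copy \"table-name\" FROM '/tmp/dump/table-name.txt'"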
The files psqlrc (system-wide) and ~/.psqlrc (per user) may contain SQL commands to be executed when the client starts. You can put these three lines, or any other settings you would like, in that file:
SET standard_conforming_strings = 'off';
SET backslash_quote = 'on';
SET escape_string_warning = 'off';
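To verify the file is being picked up, you can check one of the settings from a fresh client session (psql reads ~/.psqlrc unless started with -X):
psql -c 'SHOW standard_conforming_strings;'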
These settings for psql, combined with the following mysqldump command, successfully migrated the data only (not the schema) from MySQL 5.1 to PostgreSQL 9.1, including UTF-8 text (Chinese in my case). This method may be the only reasonable way to migrate a large database if creating an intermediate file would be too large or too time-consuming. It requires you to migrate the schema manually, since the two databases' data types are vastly different. Plan on typing out some DDL to get it right.
mysqldump \
--host=<hostname> \
--user=<username> \
--password=<password> \
--default-character-set=utf8 \
--compatible=postgresql \
--complete-insert \
--extended-insert \
--no-create-info \
--skip-quote-names \
--skip-comments \
--skip-lock-tables \
--skip-add-locks \
--verbose \
<database> <table> | psql -n -d <database>
Try this:
sed -e "s/\\\\'/\\\\\\'/g" -e "s/\([^\\]\)\\\\'/\1\\'\\'/g"
Yeah, "Leaning Toothpick Syndrome", I know.
We're working on a website, and when we develop locally (one of us from Windows), we use sqlite3, but on the server (Linux) we use postgres. We'd like to be able to import the production database into our development process, so I'm wondering if there is a way to convert from a postgres database dump to something sqlite3 can understand (just feeding it the dumped postgres SQL gave many, many errors). Or would it be easier just to install postgres on Windows? Thanks.
I found this blog entry, which guides you through these steps:
Create a dump of the PostgreSQL database.
ssh -C username@hostname.com pg_dump --data-only --inserts YOUR_DB_NAME > dump.sql
Remove/modify the dump (these edits can also be scripted; see the sed sketch after the steps):
Remove the lines starting with SET
Remove the lines starting with SELECT pg_catalog.setval
Replace true with 't'
Replace false with 'f'
Add BEGIN; as first line and END; as last line
Recreate an empty development database: bundle exec rake db:migrate
Import the dump.
sqlite3 db/development.sqlite3
sqlite> delete from schema_migrations;
sqlite> .read dump.sql
Of course, connecting via ssh and creating a new db using rake are optional.
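For reference, the dump edits above can be scripted; a rough sketch assuming GNU sed (the true/false rewrite is naive and can also touch string contents, so review the result):
# Drop SET lines and sequence updates, rewrite booleans, then wrap in a transaction
sed -i -e '/^SET /d' \
       -e '/^SELECT pg_catalog.setval/d' \
       -e "s/\btrue\b/'t'/g" \
       -e "s/\bfalse\b/'f'/g" dump.sql
sed -i '1i BEGIN;' dump.sql
echo 'END;' >> dump.sql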
STEP1: make a dump of your database structure and data (-W only makes pg_dump prompt for the password; it does not take the password as an argument)
pg_dump --create --inserts -f myPgDump.sql \
    -d myDatabaseName -U myUserName -W
STEP2: delete everything except the CREATE TABLE and INSERT statements from myPgDump.sql (using a text editor)
STEP3: initialize your SQLite database, passing in the structure and data of your Postgres dump
sqlite3 myNewSQLiteDB.db -init myPgDump.sql
STEP4: use your database ;)
Taken from https://stackoverflow.com/a/31521432/1680728 (upvote there):
The sequel gem makes this a very relaxing procedure:
First install Ruby, then install the gem by running gem install sequel.
In case of sqlite, it would be like this: sequel -C postgres://user@localhost/db sqlite://db/production.sqlite3
Credits to @lulalala.
You can use pg2sqlite for converting pg_dump output to sqlite.
# Making dump
pg_dump -h host -U user -f database.dump database
# Making sqlite database
pg2sqlite -d database.dump -o sqlite.db
Schemas are not supported by pg2sqlite, and if your dump contains a schema, then you need to remove it. You can use this script:
# sed 's/<schema name>\.//' -i database.dump
sed 's/public\.//' -i database.dump
pg2sqlite -d database.dump -o sqlite.db
Even though there are many very good helpful answers here, I just want to mark this as answered. We ended up going with the advice of the comments:
I'd just switch your development environment to PostgreSQL, developing on top of one database (especially one as loose and forgiving as SQLite) but deploying on another (especially one as strict as PostgreSQL) is generally a recipe for aggravation and swearing. –
@mu is too short
To echo mu's response, DON'T DO THIS..DON'T DO THIS..DON'T DO THIS. Develop and deploy on the same thing. It's bad engineering practice to do otherwise. – @Kuberchaun
So we just installed postgres on our dev machines. It was easy to get going and worked very smoothly.
In case one needs a more automated solution, here's a head start:
#!/bin/bash
table_name=TABLENAMEHERE
PGPASSWORD="PASSWORD" /usr/bin/pg_dump --file "results_dump.sql" --host "yourhost.com" --username "username" --no-password --verbose --format=p --create --clean --disable-dollar-quoting --inserts --column-inserts --table "public.${table_name}" "memseq"
# Some clean ups
perl -0777 -i.original -pe "s/.+?(INSERT)/\1/is" results_dump.sql
perl -0777 -i.original -pe "s/--.+//is" results_dump.sql
# Remove public. prefix from table name
sed -i "s/public.${table_name}/${table_name}/g" results_dump.sql
# fix binary blobs
sed -i "s/'\\\\x/x'/g" results_dump.sql
# use transactions to make it faster
echo 'BEGIN;' | cat - results_dump.sql > temp && mv temp results_dump.sql
echo 'END;' >> results_dump.sql
# clean the current table
sqlite3 results.sqlite3 "DELETE FROM ${table_name};"
# finally apply changes
sqlite3 results.sqlite3 < results_dump.sql && \
rm results_dump.sql && \
rm results_dump.sql.original
When I faced the same issue, I did not find any useful advice on the Internet. My source PostgreSQL db had a very complicated schema.
You just need to manually remove everything from your dump file besides the table creation and the data.
More details - here
It was VERY easy for me to do using the taps gem as described here:
http://railscasts.com/episodes/342-migrating-to-postgresql
And I've started using the Postgres.app on my Mac (no install needed; drop the app in your Applications directory, although you might have to add one line to your PATH environment variable as described in the documentation), with Induction.app as a GUI tool to view/query the database.
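The PATH line referred to is along these lines (the exact path depends on your Postgres.app version; check its documentation):
# Assumed Postgres.app binary location; adjust the version segment as needed
export PATH="/Applications/Postgres.app/Contents/Versions/latest/bin:$PATH"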