MySQL Workbench Importing - mysql-workbench

I was trying to import a dump file, but I am encountering some errors.
This is the error:
08:49:13 PM Restoring dbDB (contact)
Running: mysql --defaults-extra-file="/tmp/tmpdwf14l/extraparams.cnf" --host=127.0.0.1 --user=root --port=3306 --default-character-set=utf8 --comments
ERROR 1046 (3D000) at line 22: No database selected
Operation failed with exitcode 1
08:49:13 PM Restoring dbDBB (course)
Running: mysql --defaults-extra-file="/tmp/tmpMW20Fb/extraparams.cnf" --host=127.0.0.1 --user=root --port=3306 --default-character-set=utf8
ERROR 1046 (3D000) at line 22: No database selected

Error: You have not selected the default target schema into which to import the data from the dump.
Create a schema/database in MySQL and select that database in MySQL Workbench while importing data from the dump.
Or
You can edit the dump file and add SQL statements at the start, something like this:
create database test;
use test;
Solution as per the user's dump file:
--
-- Table structure for table `course`
--
Write the code as:
create database test1;
use test1;
--
-- Table structure for table `course`
--
That should do it.

The error is because you haven't selected any database. In the dump, right below CREATE SCHEMA `database_name` (or CREATE DATABASE `database_name`), add this: USE `database_name`;
Replace database_name with your DB name.
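For instance, a minimal header you could place at the very top of the dump (my_database is just a placeholder for your own schema name) would be:
-- my_database is a placeholder; replace it with your actual schema name
CREATE DATABASE IF NOT EXISTS `my_database`;
USE `my_database`;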

Related

ERROR: syntax error at or near "FUNCTION" while restoring a DB

I get an error while restoring a DB from a dump. What does it mean?
ERROR: syntax error at or near "FUNCTION"
LINE 1: ...LETE ON public.currency_rate FOR EACH ROW EXECUTE FUNCTION p...
--
-- Name: currency_rate currency_rate_bt_delete; Type: TRIGGER; Schema: public; Owner: -
--
CREATE TRIGGER currency_rate_bt_delete
INSTEAD OF DELETE ON public.currency_rate
FOR EACH ROW
EXECUTE FUNCTION public.currency_rate_bt_delete();
The problem with your dump/restore is that you created the dump with PostgreSQL v13, which generates the dump you have shown.
But then you try to restore this dump on PostgreSQL v10, which does not understand it.
You have to use PROCEDURE instead of FUNCTION before public.currency_rate_bt_delete().
Your trigger query should look like this:
CREATE TRIGGER currency_rate_bt_delete
INSTEAD OF DELETE ON public.currency_rate
FOR EACH ROW
EXECUTE PROCEDURE public.currency_rate_bt_delete();
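If the dump contains many triggers written this way, one possible shortcut (assuming a plain-text SQL dump; dump.sql and dump_v10.sql are placeholder file names) is to rewrite them all in one pass before restoring:
# dump.sql and dump_v10.sql are placeholder file names
sed 's/EXECUTE FUNCTION/EXECUTE PROCEDURE/g' dump.sql > dump_v10.sql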
NOTE: This answer is limited to the error mentioned in the question.

What should I do when system tables of PostgreSQL are damaged?

My computer shut down because of a power failure, and an error appeared in the log after I restarted the database:
ERROR: invalid page header in block 27073 of relation base/263742/11768.
I found out that the damaged relation is pg_class by executing this command:
oid2name -H 127.0.0.1 -p 5432 -U postgres -f 11768
From database "postgres":
Filenode Table Name
----------------------
11768 pg_class
So, what should I do to recover my database as soon as possible? Thank you.

Many syntax and permissions errors on postgres sql dump import

I am trying to import a .sql database dump into my Postgres 9.6.1. I've tried on the command line as well as in the Postico GUI, but I get a ton of errors (thousands of lines of them) on import.
The SQL dump is from a coworker running postgres 9.4.5 and the SQL looks valid.
My Postgres Version:
PostgreSQL 9.6.1 on x86_64-apple-darwin, compiled by
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc.
build 5658) (LLVM build 2336.11.00), 64-bit
I've tried the following imports:
psql -U postgres dbname < ~/Desktop/dbname_local_db_20161122.sql
Then logging in and trying it:
psql -U postgres dbname
dbname=# \i ~/Desktop/dbname_local_db_20161122.sql
Errors: (there are way more than this)
Password for user postgres:
SET
SET
SET
SET
SET
SET
CREATE EXTENSION
COMMENT
ERROR: schema "public" does not exist
ERROR: extension "citext" does not exist
ERROR: schema "public" does not exist
ERROR: extension "pg_trgm" does not exist
SET
ERROR: function "add_session_metric" already exists with same argument types
ERROR: role "myrole" does not exist
ERROR: function "session_metrics_partition_creation" already exists with same argument types
ERROR: role "myrole" does not exist
ERROR: function "session_metrics_partition_function" already exists with same argument types
ERROR: role "myrole" does not exist
SET
SET
ERROR: permission denied to create "pg_catalog.messages"
DETAIL: System catalog modifications are currently disallowed.
ERROR: relation "messages" does not exist
ERROR: permission denied to create "pg_catalog.Message_id_seq"
DETAIL: System catalog modifications are currently disallowed.
ERROR: relation "Message_id_seq" does not exist
ERROR: relation "Message_id_seq" does not exist
ERROR: permission denied to create "pg_catalog.sessions_users"
invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \.
ERROR: syntax error at or near "2"
LINE 1: 2 hello 3 1 2015-11-12 09:25:14.646-07 2015-11-12 09:25:14.64...
ERROR: syntax error at or near "1"
LINE 1: 1
^
ERROR: relation "external_session_info_sessions" does not exist
invalid command \.
ERROR: syntax error at or near "2528"
LINE 1: 2528 1
^
invalid command \.
ERROR: relation "feedback_id_seq" does not exist
LINE 1: SELECT pg_catalog.setval('feedback_id_seq', 1, false);
Like I said, the SQL file looks valid. I've checked for compatibility issues from 9.4.5 to 9.6.1 but don't see any.
I do see PERMISSION DENIED errors, but I am running the command as user postgres, which has superuser permissions.
First, the recommended way is to use pg_dump from the higher (target) database version to create the dump, because that version of pg_dump knows about incompatible changes that happened since and can create a dump that will restore correctly.
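As a sketch (old-server and dbname are placeholders, and it assumes the 9.6 machine can reach the 9.4 server over the network), that means running the 9.6 pg_dump against the old server:
# run from the 9.6 machine; old-server and dbname are placeholders
pg_dump -h old-server -U postgres dbname > dbname.sql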
Some of the errors are normal if you restore a dump into a database that already has objects with the same names in it; often that is a sign that the dump should actually have been created with pg_dump -C to include a CREATE DATABASE statement.
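For example (again a sketch, with placeholder names), a dump created with -C can be restored into a fresh cluster like this:
# -C adds CREATE DATABASE (and a \connect) to the script
pg_dump -C -U postgres dbname > dbname.sql
# connect to an existing database such as postgres; the script creates dbname itself
psql -U postgres -d postgres -f dbname.sql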
However, your SQL script seems seriously messed up, and I doubt that it is an unmodified dump of a 9.4.5 database.
pg_dump will never dump any objects in pg_catalog. This schema can only contain system objects which are not included in a dump (they are created by CREATE DATABASE), and as you have seen, not even a superuser may create an object in that schema (unless allow_system_table_mods is on, which it really shouldn't be).
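If you want to double-check that setting on your server, you can run:
-- the default, and recommended, value is off
SHOW allow_system_table_mods;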

How to make a physical copy of a database?

I am looking for a solution to make a copy of a DB2 database using Toad.
I have tried the db2move command like this:
db2move sample COPY -sn BASESAT -co target_db schema_map "((BASESAT,BASESAT4))" -u SATURNE
BASESAT is my database and BASESAT4 is the copy I want to create.
I get an error (shown in a screenshot that is not included here). I got an error when I tried it in command mode as well.
Maybe this could help you.
To create a sample database:
user#host:/home/db2inst1:>db2 "create db sampledb"
DB20000I The CREATE DATABASE command completed successfully.
user#host:/home/db2inst1:>db2 connect to sampledb
Database Connection Information
Database server = DB2
SQL authorization ID = DB2INST1
Local database alias = SAMPLEDB
A sample table:
user#host:/home/db2inst1:>db2 "CREATE TABLE SAMPLETABLE (COL1 CHAR(6) NOT NULL, COL2 VARCHAR(24) NOT NULL)"
DB20000I The SQL command completed successfully.
Insert a dummy row:
user#host:/home/db2inst1:>db2 "insert into SAMPLETABLE VALUES ('test1','test2')"
DB20000I The SQL command completed successfully.
This is the export:
user#host:/home/db2inst1:>mkdir data
user#host:/home/db2inst1:>cd data/
user#host:/home/db2inst1/data:>db2move sampledb export
Application code page not determined, using ANSI codepage 819
***** DB2MOVE *****
Action: EXPORT
Start time: Mon Jul 18 17:49:49 2016
Connecting to database SAMPLEDB ... successful! Server : DB2 Common Server V10.5.7
EXPORT: 147 rows from table "SYSTOOLS"."HMON_ATM_INFO"
EXPORT: 0 rows from table "SYSTOOLS"."HMON_COLLECTION"
EXPORT: 5 rows from table "SYSTOOLS"."POLICY"
EXPORT: 1 rows from table "DB2INST1"."SAMPLETABLE"
Disconnecting from database ... successful!
End time: Mon Jul 18 17:49:49 2016
To generate the DDLs:
user#host:/home/db2inst1/data:>db2look -d sampledb -e -a -o db2look.sql
-- Generate statistics for all creators
-- Creating DDL for table(s)
-- Output is sent to file: db2look.sql
-- Binding package automatically ...
-- Bind is successful
-- Binding package automatically ...
-- Bind is successful
user#host:/home/db2inst1/data:>db2 terminate
DB20000I The TERMINATE command completed successfully.
This is the second database:
user#host:/home/db2inst1/data:>db2 "create db copydb"
DB20000I The CREATE DATABASE command completed successfully.
user#host:/home/db2inst1/data:>db2 "connect to copydb"
Database Connection Information
Database server = DB2
SQL authorization ID = DB2INST1
Local database alias = COPYDB
Change the database name in db2look.sql as below:
CONNECT TO COPYDB;
user#host:/home/db2inst1/data:>db2 -tvf db2look.sql
CONNECT TO COPYDB
Database Connection Information
Database server = DB2
SQL authorization ID = DB2INST1
Local database alias = COPYDB
CREATE SCHEMA "DB2INST1"
DB20000I The SQL command completed successfully.
CREATE TABLE "DB2INST1"."SAMPLETABLE" ( "COL1" CHAR(6 OCTETS) NOT NULL , "COL2" VARCHAR(24 OCTETS) NOT NULL ) IN "USERSPACE1" ORGANIZE BY ROW
DB20000I The SQL command completed successfully.
COMMIT WORK
DB20000I The SQL command completed successfully.
CONNECT RESET
DB20000I The SQL command completed successfully.
TERMINATE
DB20000I The TERMINATE command completed successfully.
Now load the data (you can also use import instead of load):
user#host:/home/db2inst1/data:>db2move copydb load
Application code page not determined, using ANSI codepage 819
***** DB2MOVE *****
Action: LOAD
Start time: Mon Jul 18 17:57:41 2016
Connecting to database COPYDB ... successful! Server : DB2 Common Server V10.5.7
Binding package automatically ... /home/db2inst1/sqllib/bnd/db2common.bnd ... successful!
Binding package automatically ... /home/db2inst1/sqllib/bnd/db2move.bnd ... successful!
* LOAD: table "SYSTOOLS"."HMON_ATM_INFO"
*** ERROR -3304. Check message file tab1.msg!
*** SQLCODE: -3304 - SQLSTATE:
*** SQL3304N The table does not exist.
* LOAD: table "SYSTOOLS"."HMON_COLLECTION"
*** ERROR -3304. Check message file tab2.msg!
*** SQLCODE: -3304 - SQLSTATE:
*** SQL3304N The table does not exist.
* LOAD: table "SYSTOOLS"."POLICY"
*** ERROR -3304. Check message file tab3.msg!
*** SQLCODE: -3304 - SQLSTATE:
*** SQL3304N The table does not exist.
* LOAD: table "DB2INST1"."SAMPLETABLE"
-Rows read: 1
-Loaded: 1
-Rejected: 0
-Deleted: 0
-Committed: 1
**Error occured -1
Disconnecting from database ... successful!
End time: Mon Jul 18 17:57:43 2016
user#host:/home/db2inst1/data:>db2 "connect to copydb"
Database Connection Information
Database server = DB2
SQL authorization ID = DB2INST1
Local database alias = COPYDB
user#host:/home/db2inst1/data:>db2 "select * from SAMPLETABLE"
COL1 COL2
------ ------------------------
test1 test2
1 record(s) selected.
I got the solution thanks to your help.
Here are the steps if someone else has the same problem:
1. Create the database into which to copy (in my case BASESAT2).
2. Use db2move in command mode like this:
db2move dbname COPY -sn SCHEMA_OF_YOUR_DB -co TARGET_DB dbname_copy USER user_name USING password
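For example, with the names from the question (your_password is a placeholder), the two steps might look like:
# names taken from the question; your_password is a placeholder
db2 "create db BASESAT4"
db2move BASESAT COPY -sn BASESAT -co TARGET_DB BASESAT4 USER saturne USING your_password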

pg_restore complains about integrity errors on a dump. Is that even possible?

I have dumped an OpenERP DB like this:
pg_dump -Fc -xO -f o7db.dump o7db
The source machine has:
$ pg_dump --version
pg_dump (PostgreSQL) 9.3.5
Then I scp the dump to a target machine (an OpenVZ container), where pg_restore is:
$ pg_restore --version
pg_restore (PostgreSQL) 9.3.5
I run pg_restore like this:
pg_restore -d o7db -xO -j3 o7db.dump
The only difference I can see is that the postgres user is not the same on both machines (but that is supposed to be dealt with by -O). pg_restore complains about:
pg_restore: [archiver (db)] Error from TOC entry 8561; 0 1161831 TABLE DATA account_move_line manu
pg_restore: [archiver (db)] COPY failed for table "account_move_line": ERROR: value too long for type character varying(64)
CONTEXT: COPY account_move_line, line 172, column name: "<MASKED DATA HERE....>"
This error is issued several times for several tables. After that, many such errors about missing tuples follow:
pg_restore: [archiver (db)] Error from TOC entry 6784; 2606 1182924 FK CONSTRAINT account_account_currency_id_fkey manu
pg_restore: [archiver (db)] could not execute query: ERROR: insert or update on table "account_account" violates foreign key constraint "account_account_currency_id_fkey"
DETAIL: Key (currency_id)=(1) is not present in table "res_currency".
Command was: ALTER TABLE ONLY account_account
ADD CONSTRAINT account_account_currency_id_fkey FOREIGN KEY (currency_id) REFERENCES re..
I don't see how this is possible, since the source DB seems to be OK.
The restored DB has many empty tables (each one whose COPY failed because of too-long values):
$ psql -d o7db -Ac "select * from account_move_line" | tail -1
(0 rows)
Furthermore, when I do the pg_restore on the source machine itself:
pg_restore -d o7db_restore -xO -j3 o7db.dump
Everything works as expected. Not a single warning.
What should I do? What am I doing wrong?
The answer is actually given in Moving PostgreSQL database fails on non-ascii characters with 'value too long'.
It seems the target server creates the DB with a different encoding, so creating the database with UTF8 encoding before restoring solves the problem.
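As a sketch (o7db is the database name from the question; locale settings may also need to match the source), the target database can be recreated with UTF8 explicitly before running pg_restore:
# template0 is required when the encoding differs from the template database
createdb -E UTF8 -T template0 o7db
pg_restore -d o7db -xO -j3 o7db.dump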
Credit goes to @habe (https://stackoverflow.com/users/216458/habe).
So, I have voted for my question to be closed.