How to copy a table between two models in MySQL Workbench? - mysql-workbench

I am doing some database work and I need to copy one table from one model to another, but I have tried many ways with no effect.
Is there any way for doing this?

If you just want to copy a single table, you can do it through MySQL Workbench.
In MySQL Workbench:
Connect to a MySQL Server
Expand a Database
Right Click on a table
Select Copy To Clipboard
Select Create Statement
A create statement for the table will be copied to your clipboard similar to the below:
CREATE TABLE `cache` (
`cid` varchar(255) NOT NULL DEFAULT '',
`data` longblob,
`expire` int(11) NOT NULL DEFAULT '0',
`created` int(11) NOT NULL DEFAULT '0',
`headers` text,
`serialized` smallint(6) NOT NULL DEFAULT '0',
PRIMARY KEY (`cid`),
KEY `expire` (`expire`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Create the table in the new database
Open a new SQL tab for executing queries (File->New Query Tab)
Edit the CREATE TABLE statement to include the database in which to create the table.
CREATE TABLE `databaseName`.`cache` (
`cid` varchar(255) NOT NULL DEFAULT '',
`data` longblob,
`expire` int(11) NOT NULL DEFAULT '0',
`created` int(11) NOT NULL DEFAULT '0',
`headers` text,
`serialized` smallint(6) NOT NULL DEFAULT '0',
PRIMARY KEY (`cid`),
KEY `expire` (`expire`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Then click the Execute button (it looks like a lightning bolt).
That will copy the table schema from one db to another using MySQL Workbench. Just refresh the tables in the database and you should see your newly added table.
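Note that this copies only the structure. If the source and target databases are on the same server, the rows can be copied with a plain INSERT ... SELECT; a minimal sketch, assuming the source database is named sourceDatabaseName (a placeholder):
INSERT INTO `databaseName`.`cache`
SELECT * FROM `sourceDatabaseName`.`cache`;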

Select tab with source database
In menu: Server->Data Export
Select Schema and the Table as Schema Object
Select option Export to Self-Contained File and check Create Dump in a Single Transaction (self-contained only)
Copy full file path to clipboard
Start Export
Select tab with target database
In menu: Server->Data Import. Make sure your target database name is at the top left corner of the Data Import view
Select Import from self contained file and paste full file path from clipboard
Select Default Target Schema
Select Dump Content (Dump Structure and Data etc…)
Start Import

Your best option is probably to create a stripped down version of the model that contains the objects you want to carry over. Then open the target model and run File -> Include Model.... Select the stripped down source model and there you go.

You can just use a SELECT statement. Here I am creating a duplicate of the "original_table" table from the "original_schema" schema/database in the "new_schema" schema:
CREATE TABLE new_schema.duplicate_table AS
Select * from original_schema.original_table;
You can use any SELECT statement you need; add a condition and select specific columns:
CREATE TABLE new_schema.duplicate_table AS
SELECT column1, column2
FROM original_schema.original_table
WHERE column2 < 11000000;

I think it is worth mentioning that:
a copied table may reference fields in tables of the original schema that do not exist in the schema it is being copied to. It might be a good idea to inspect the table for these discrepancies before adding it to the other schema.
it's probably a good idea to check engine compatibility (e.g. InnoDB vs MyISAM) and the character set.
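Both points can be checked quickly against information_schema; a sketch, with the table name as a placeholder:
SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE, TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_NAME = 'cache';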

Step 1: Right click on the table > Copy to Clipboard > Create Statement
Step 2: Paste the clipboard contents into the query field of Workbench.
Step 3: Remove the backticks (``) from the table name and prefix it with the name of the target model (schema) followed by a dot.
e.g.: `cusine_menus` -> schema_name.cusine_menus
Then execute.

If you already have your table created and just want to copy the data, I'd recommend using the "Export Data Wizard" and "Import Data Wizard". They basically walk you through choosing options for exporting and then importing the data, and are easy to use.
MySQL has an article on the wizards here: Table Data Export and Import Wizard
To copy data using the wizards, do the following:
Find the table in the list from which you want to copy data from.
Right click and choose "Table Data Export Wizard."
Choose the columns you wish to copy.
Choose a location to save a *.csv or *.json file with the copied data.
Find the table to insert the copied data to.
Right click and choose "Table data import wizard".
Choose the file you just exported.
Map the columns from the table you copied from to the table you insert to.
Press "Finish". The data is inserted as you chose.

In this post, we are going to show you how to copy a table in MySQL.
First, this query will copy the data and structure, but the indexes are not included:
CREATE TABLE new_table SELECT * FROM old_table;
Second, this query will copy the table structure and indexes, but not data:
CREATE TABLE new_table LIKE old_table;
So, to copy the structure (including indexes and the primary key) together with the data, run these queries. Note that CREATE TABLE ... LIKE does not copy foreign key definitions or triggers, so those would have to be recreated separately:
CREATE TABLE new_table LIKE old_table;
INSERT INTO new_table SELECT * FROM old_table;
If you want to copy a table from one database to another database:
CREATE TABLE destination_db.new_table LIKE source_db.old_table;
INSERT INTO destination_db.new_table
SELECT * FROM source_db.old_table;

With placeholder schema names:
create table target_schema.m_property_nature like source_schema.m_property_nature;
INSERT INTO target_schema.m_property_nature SELECT * FROM source_schema.m_property_nature;

You can get the create table query from the table info and use the same query on a different database instance.
Run show create table schema_name.table_name; and copy the generated query.
Run the generated query on the other DB instance you are connected to.
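For example, using the cache table from earlier (the schema name source_db is a placeholder):
SHOW CREATE TABLE source_db.cache;
-- copy the printed CREATE TABLE statement, then execute it on the target instance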

Related

add a column to a table which just references an existing column

Is there a way to add a column alias to an existing table, which just references another existing column in the table, such that reads and writes to the new column name go to the existing column? Sort of like how a view in Postgres can act as a read/write alias:
create view temp_order_contacts as (select * from order_emails)
This will make read / write possible to order_emails table but by calling temp_order_contacts instead.
Is there something similar but for columns?
Assuming this is for backwards compatibility: you want to rename a column, but you also want existing queries to still work.
You can rename the table and create a view with the original name.
-- Move the existing table out of the way.
alter table some_table rename to _some_table;
-- Create a view in its place.
create view some_table as (
select
*,
-- provide a column alias
some_column as some_other_column
from _some_table
);
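Reads then work under either name; a small usage sketch against the view defined above:
-- both select the same underlying column
select some_column from some_table;
select some_other_column from some_table;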

migrating to agensgraph create foreign table error

So, I'm taking a first look at migrating a PostgreSQL db to agensgraph db.
I'm using the manual https://bitnine.net/wp-content/uploads/2016/11/AgensGraph_Quick_Guide.pdf
First, export as CSV:
SET CLIENT_ENCODING TO 'utf8';
\COPY samples.samples TO
'C:\Users\garyn\Documents\graph_migration\pg_csv\samples_samples.csv'
WITH DELIMITER E'\t' CSV;
And on page 20 I follow the first steps, creating the foreign table:
CREATE EXTENSION file_fdw;
CREATE SERVER import_server FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE vlabel_profile ( id graphid, properties text) SERVER import_server
OPTIONS( FORMAT 'csv', HEADER 'false',
FILENAME 'C:\Users\garyn\Documents\graph_migration\pg_csv\samples_samples.csv',
delimiter E'\t');
ERROR: cannot create table in graph schema
SQL state: XX000
Now, I haven't set any column names (as header=false) and I haven't changed "id graphid, properties text" since the manual says it is setting up the table, but it does state the file directory. Any ideas how to get past this error? I'm back to being a noob.
The next steps will be:
CREATE FOREIGN TABLE elabel_profile ( id graphid, start graphid, "end" graphid, properties text) SERVER import_server OPTIONS( FORMAT 'csv', HEADER 'false', FILENAME '/path/file.csv', delimiter E'\t');
Then execute the import
CREATE VLABEL test_vlabel; LOAD FROM vlabel_profile AS profile_name CREATE (a:test_vlabel =row_to_json(profile_name)::jsonb);
CREATE ELABEL test_elabel; LOAD FROM elabel_profile AS profile_name MATCH (a:test_vlabel), (b:test_vlabel) WHERE (a).id::graphid = (profile_name).start AND (b).id::graphid = (profile_name).end CREATE (a)-[:test_elabel]->(b);
------------ UPDATE ------------
I'm now trying with the northwind dataset, again following the agens tutorial: https://bitnine.net/tutorial/english-tutorial.html
DROP GRAPH northwind CASCADE;
CREATE GRAPH northwind;
SET graph_path = northwind;
DROP SERVER northwind;
CREATE SERVER northwind FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE categories (
CategoryID int,
CategoryName varchar(15),
Description text,
Picture bytea
)
SERVER northwind
OPTIONS (FORMAT 'csv', HEADER 'true', FILENAME 'D:\northwind\categories.csv', delimiter ',', quote '"', null '');
Same error
I tried to create a foreign table with the northwind dataset you mentioned and it works just fine for me.
I installed AgensGraph and tried the sample with its latest version, 2.1.0, since I didn't have AgensGraph on my Windows machine.
If you let me know the version of AgensGraph you are currently using and how you are accessing it, I would be able to help you out more.
re: cannot create table in graph schema
This is an error you will get when your schema is the same as the name of a graph - or there is some other problem related to the default schema.
The default schema is called public. To check your current schema enter
select current_schema();
If it's not public you can set it with
set schema 'public';
then try to create a table
create table mytable(id int);

DBeaver does not keep primary keys on import/export

I'm using DBeaver to migrate data from Postgres to Derby. When I use the wizard in DBeaver to go directly from one table to another, the primary key in Derby is being generated instead of inserted. This causes issues on foreign keys for subsequent tables.
If I generate the SQL, the primary key is part of the SQL statement and is properly inserted. However there are too many rows to handle in this way.
Is there a way to have DBeaver insert the primary key instead of letting it be generated when importing / exporting directly to database tables?
Schema of target table
CREATE TABLE APP.THREE_PHASE_MOTOR (
ID BIGINT NOT NULL GENERATED BY DEFAULT AS IDENTITY,
VERSION INTEGER NOT NULL,
CONSTRAINT SQL130812103636700 PRIMARY KEY (ID)
);
CREATE INDEX SQL160416184259290 ON APP.THREE_PHASE_MOTOR (ID);
Schema of source table
CREATE TABLE public.three_phase_motor (
id int8 NOT NULL DEFAULT nextval('three_phase_motor_id_seq'::regclass),
"version" int4 NOT NULL,
CONSTRAINT three_phase_motor_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
I found a trick working with version 6.0.5; do these steps:
double click a table name
then select Data tab
then click the gray table corner (the one on top of row order numbers) in order to select all rows
then right click the same gray table corner
then select Generate SQL -> INSERT menu
A window with the INSERT statements, including the id (primary key), will pop up.
PS: when selecting a subset of rows, the same menu works for only those rows too.
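The generated statements include the id column explicitly, e.g. (the values here are illustrative):
INSERT INTO APP.THREE_PHASE_MOTOR (ID, VERSION) VALUES (1, 0);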
When you go to export, check the Include generated column option, and the primary key (auto-incremented) will be included in the export.
See this for more details: https://github.com/dbeaver/dbeaver/commit/d1f74ec88183d78c7c6620690ced217a52555262
Personally I think this needs to be clearer, and excluding it in the first place was not good for data integrity.
As of DBeaver version 22.0.5, you have to set Include generated columns to true; that will export the primary/generated columns.

Delete column in hive table

I am working with Hive version 0.9 and I need to delete columns of a Hive table. I have searched several manuals of Hive commands but have only found commands for version 0.14. Is it possible to delete a column of a Hive table in Hive version 0.9? What is the command?
Thanks.
We can't simply drop a table column from a Hive table using a statement like the one below, as in SQL.
ALTER TABLE tbl_name drop column column_name ---- it will not work.
So there is a shortcut to drop columns from a hive table.
Let's say we have a Hive table with the columns ID, NAME, AGE, and Dob.
From this table I want to drop the column Dob. You can use the ALTER TABLE REPLACE statement to drop a column:
ALTER TABLE test_tbl REPLACE COLUMNS(ID STRING, NAME STRING, AGE STRING);
You have to give the column names which you want to keep in the table.
There isn't a drop column or delete column in Hive.
A SELECT statement can take regex-based column specification in Hive releases prior to 0.13.0, or in 0.13.0 and later releases if the configuration property hive.support.quoted.identifiers is set to none.
That being said you could create a new table or view using the following:
drop table if exists database.table_name;
create table if not exists database.table_name as
select `(column_to_remove_1|...|column_to_remove_N)?+.+`
from database.some_table
where
...
;
This will create a table that has all the columns from some_table except the columns named column_to_remove_1, ... , to column_to_remove_N. You can also choose to create a view instead.
ALTER TABLE table_name REPLACE COLUMNS ( c1 int, c2 String);
NOTE: omit the column from the column list. It will keep the listed columns and remove unmentioned columns from the table schema.
We cannot delete a column from a Hive table. But dropping a table (if it's external) and recreating the table (with the column excluded) won't delete your data.
So what you can do (if you don't have the table structure) is run this command:
show create table database_name.table_name;
Then you can copy the output and edit it (with the column eliminated), and run the edited statement, as sketched below.
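A sketch with hypothetical names, relying on the fact that dropping an EXTERNAL table leaves the data files in place:
-- save the output of SHOW CREATE TABLE first, then drop the table
DROP TABLE mydb.my_table;
-- re-run the saved CREATE statement with the unwanted column removed
CREATE EXTERNAL TABLE mydb.my_table (
id STRING,
name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/path/to/existing/data';
-- with delimited files this only lines up if the removed column was the last one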
The table's columns are empid, name, dept, salary, and address; I want to remove the address column. Just write REPLACE COLUMNS like the below query:
jdbc:hive2://> alter table employee replace columns(empid int, name string,dept string,salary int);
As mentioned before, you can't drop a column using an ALTER statement.
ALTER ... REPLACE is not guaranteed to work in all cases.
I found the best answer for this here:
https://stackoverflow.com/a/48921280/4385453

How to clone or copy records in same table in postgres?

How can I clone or copy records in the same table in PostgreSQL by creating a temporary table?
I am trying to create clones of records from one table into the same table with a changed name (which is basically the composite key in that table).
You can do it all in one INSERT combined with a SELECT.
i.e. say you have the following table definition and data populated in it:
create table original
(
id serial,
name text,
location text
);
INSERT INTO original (name, location)
VALUES ('joe', 'London'),
('james', 'Munich');
And then you can INSERT doing the kind of switch you're talking about without using a TEMP TABLE, like this:
INSERT INTO original (name, location)
SELECT 'john', location
FROM original
WHERE name = 'joe';
Here's an sqlfiddle.
This should also be faster (although for tiny data sets probably not hugely so in absolute time terms), since it's doing only one INSERT and SELECT as opposed to an extra SELECT and CREATE TABLE plus an UPDATE.
Did a bit of research and came up with this logic:
Create temp table
Copy records into it
Update the records in temp table
Copy it back to original table
CREATE TEMP TABLE temporary AS SELECT * FROM original WHERE name='joe';
UPDATE temporary SET name='john' WHERE name='joe';
INSERT INTO original SELECT * FROM temporary WHERE name='john';
Was wondering if there was any shorter way to do it.