How can I rename a table / move it to a different schema in DB2?

I am trying to rename a table in DB2 like so:
rename table schema1.mytable to schema2.mytable
but I am getting the following error message:
The name "mytable" has the wrong number of qualifiers. SQLCODE=-108, SQLSTATE=42601
What is the problem here? I am using the exact syntax from the IBM publib documentation.

You cannot change the schema of a given object; you have to recreate it.
There are several ways to do that:
If you have only one table, you can export and import/load the table. If you use the IXF format, the DDL will be included in the generated file. If you use another format, the table has to be created first.
You can recreate the table by using:
Create table schema2.mytable like schema1.mytable
You can extract the DDL with the db2look tool
If you are changing the schema name for a given schema, you can use ADMIN_COPY_SCHEMA (see the sketches after this list)
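For illustration, hedged sketches of those last two options; the database name mydb is an assumption, and the error-table names in the ADMIN_COPY_SCHEMA call are hypothetical:
db2look -d mydb -z schema1 -t mytable -e -o mytable.ddl
CALL SYSPROC.ADMIN_COPY_SCHEMA('SCHEMA1', 'SCHEMA2', 'COPY', NULL, NULL, NULL, 'ERRSCHEMA', 'ERRTABLE')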
These last two options only create the table structure, and you still need to copy the data. After creating the table, you can insert the data in different ways:
Inserting directly
insert into schema2.mytable select * from schema1.mytable
Via a load from cursor (see the sketch after this list)
Via a load or import from file (the file exported in the first option above)
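A hedged sketch of the load-from-cursor approach, reusing the names from the question (both statements must run in the same CLP session):
DECLARE mycurs CURSOR FOR SELECT * FROM schema1.mytable
LOAD FROM mycurs OF CURSOR INSERT INTO schema2.mytable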
The remaining problem is foreign key constraints, because they have to be recreated as well.
Finally, you can create an alias instead. It is easier, and you do not have to deal with those relations.
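A minimal sketch of the alias approach, reusing the names from the question:
CREATE ALIAS schema2.mytable FOR schema1.mytable;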

You can easily rename a table with this statement:
RENAME TABLE SCHEMA.TABLENAME TO NEWTABLENAME;

You're not renaming the table in the provided example; you're trying to move it to a different schema, which is not the same thing. Look into the db2move tool for this.
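A hedged sketch of copying all objects from one schema to another with db2move; the database name mydb is an assumption:
db2move mydb COPY -sn schema1 -co TARGET_SCHEMA schema2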

If you want to rename a table within the same schema, you can do it like this:
RENAME TABLE schema.table_name TO "new_table_name";
Otherwise, you can use tools like DBeaver to rename or copy tables in a DB2 database.

What if you leave it as is and create an alias with the new name and schema?

Renaming a table only works within the same schema. To make a table visible under another schema, DB2 uses an alias:
db2 "create alias schema2.mytable for schema1.mytable"

Related

How to dump data into a temporary table (without actually creating the temporary table) from an external table in a Hive script at run time

In SQL Server stored procedures, we have the option of creating a temporary table "#temp" whose structure is the same as that of the table it refers to. Here we don't explicitly create and specify the structure of the "#temp" table.
Is there a similar option in an HQL Hive script to create a temp table at run time without actually defining the table structure, so that I can dump data into the temp table and use it? The code below shows an example of a #temp table in SQL Server.
SELECT name, age, gender
INTO #MaleStudents
FROM student
WHERE gender = 'Male'
Hive has the concept of temporary tables, which are local to a user's session. These tables behave just like any other table, and can be created using CTAS commands too. Hive automatically deletes all temporary tables at the end of the Hive session in which they are created.
Read more about them here: Hive Documentation, DWGEEK.
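For instance, a minimal CTAS sketch mirroring the SQL Server example above, assuming a student table exists and Hive 0.14 or later:
CREATE TEMPORARY TABLE male_students AS
SELECT name, age, gender
FROM student
WHERE gender = 'Male';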
You can create a simple temporary table and perform any operation on it.
Once you are done with your work and log out of your session, it will be deleted automatically.
The syntax for a temporary table is:
CREATE TEMPORARY TABLE TABLE_NAME_HERE (key string, value string)

Postgres "Like" Keyword

I am trying to create a temporary table in postgres by copying the column names and types from an existing table.
CREATE TEMPORARY TABLE temporary_table LIKE grades;
Typing the query into Postgres, it tells me about an error near LIKE. Is the "LIKE" keyword not usable in Postgres, or am I doing something wrong?
You need to wrap the LIKE clause in parentheses:
CREATE TEMPORARY TABLE temporary_table (LIKE grades);
If you want defaults or indexes included as well, you need to request that explicitly:
CREATE TEMPORARY TABLE temporary_table
(LIKE grades INCLUDING INDEXES INCLUDING DEFAULTS);
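As an aside, recent PostgreSQL versions also accept INCLUDING ALL as a shorthand for pulling in all such properties at once (a hedged note; check your server version):
CREATE TEMPORARY TABLE temporary_table (LIKE grades INCLUDING ALL);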

Can I import CSV data into a table without knowing the columns of the CSV?

I have a CSV file file.csv.
In Postgres, I have made a table named grants:
CREATE TABLE grants
(
)
WITH (
OIDS=FALSE
);
ALTER TABLE grants
OWNER TO postgres;
I want to import file.csv data without having to specify columns in Postgres.
But if I run COPY grants FROM '/PATH/TO/grants.csv' CSV HEADER;, I get this error: ERROR: extra data after last expected column.
How do I import the CSV data without having to specify columns and types?
The error is normal.
You created a table with no columns, and the COPY command tries to import data into a table that already has the matching structure.
So you have to create the table corresponding to your CSV file before executing the COPY command, for example as sketched below.
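A hedged sketch; the column names and types here are hypothetical and must match your actual CSV header:
CREATE TABLE grants
(
    grant_id integer,
    title    text,
    amount   numeric
);
COPY grants FROM '/PATH/TO/grants.csv' CSV HEADER;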
I discovered pgfutter :
"Import CSV and JSON into PostgreSQL the easy way. This small tool abstract all the hassles and swearing you normally have to deal with when you just want to dump some data into the database"
Perhaps that is a solution...
The best method for me was to convert the CSV to a dataframe and then follow
https://github.com/sp-anna-jones/data_science/wiki/Importing-pandas-dataframe-to-postgres
No, it is not possible using the COPY command. The documentation says:
If a list of columns is specified, COPY will only copy the data in the specified columns to or from the file. If there are any columns in the table that are not in the column list, COPY FROM will insert the default values for those columns.
COPY does not create columns for you.

Using IMPORT instead of LOAD in DB2

I wanted to prepare a load utility to load data into a DB2 table. The table has columns with the GENERATED ALWAYS attribute set, so I am not able to load the previously exported data back into the table.
Is it possible to use IMPORT for tables having columns with GENERATED ALWAYS set?
Steps I did:
1. db2 "export to tbl.txt of del modified by coldel| select * from <schema.table> where col=value"
2. db2 "delete from <schema.table> where col=value"
3. db2 "import from tbl.txt of del modified by coldel| allow write access warningcount1 insert into <schema.table>"
The columns with GENERATED ALWAYS have new values after the import. Is it possible to use IMPORT so that the GENERATED ALWAYS columns keep their old values?
Appreciate the assistance.
Thanks,
Mathew Liju
What you are asking is not possible. With IMPORT you can't override columns that have GENERATED ALWAYS. As @Peter Miehle suggests, you could alter the table to specify that the column is GENERATED BY DEFAULT, but this may break other applications.
Your question's title implies that you don't want to use the LOAD utility (but you don't mention anything about it in the actual question). However, LOAD is the only way to write data into the table while maintaining the values for the generated column as they exist in the file:
db2 "load from tbl.txt of del modified by generatedoverride insert into schema.table"
If you do this, be aware that:
DB2 does not check whether there are conflicts with existing rows in the table. You would need to define a unique index on the column(s) in question to resolve this; that would cause DB2 to delete the rows you just loaded during the DELETE phase of the load.
If your generated column(s) use IDENTITY, make sure that you alter the column so that future generated values do not conflict with the rows you just inserted into the table (see the sketch below).
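A hedged sketch of restarting the identity sequence; the column name id and the restart value are hypothetical:
ALTER TABLE schema.mytable ALTER COLUMN id RESTART WITH 1000;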
Maybe you can drop the "generation" from the column and add it back after importing with the appropriate values.
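For an identity column, that could look like the following hedged sketch; the column name id is hypothetical and the exact clauses may vary by DB2 version:
ALTER TABLE schema.mytable ALTER COLUMN id SET GENERATED BY DEFAULT;
-- run the IMPORT here, supplying explicit values for id
ALTER TABLE schema.mytable ALTER COLUMN id SET GENERATED ALWAYS;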
@Ian Bjorhovde has given you the options.
IMPORT actually does INSERTs in the background: it first prepares an INSERT statement with parameter markers and then uses the values in the input file for those markers.
In your SQL snapshot you will see the INSERT statement that is used.
Anything that is not possible in an INSERT statement isn't possible with IMPORT (more or less).

PostgreSQL Creating an Insert Trigger which Remaps Columns

I'm wondering if I can use a trigger on a table to "ignore" columns that are in a COPY statement from STDIN but which are not in the target table. Sorry if the wording/syntax of the question is off, but here is an explanation of what I'm trying to say. I'm new to triggers, so any advice is helpful.
I'm using the PostGIS Shapefile importer to copy shapefiles to the spatial tables in my PostgreSQL database.
This creates a COPY statement which contains all the fields in the shapefile, something like:
COPY "public"."stations" ("column1","column2","column3","column4", geom) FROM stdin;
column1 and column2 are in the file but not in the target table, so the COPY fails.
Is there a way to create a trigger to create something that would have the same result as:
COPY "public"."stations" ("column3","column4", geom) FROM stdin;
No, you cannot skip columns that are present in the input file. The COPY will error out before triggers are even invoked. And you cannot use rules either. I quote the manual:
COPY FROM will invoke any triggers and check constraints on the
destination table. However, it will not invoke rules.
You can either edit the file or use a temporary staging table, as sketched below:
COPY to a temporary table with matching columns.
Use INSERT to write the desired columns to the final target table(s) - or use the whole range of SQL DML commands for more sophisticated matters.
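A hedged sketch of the staging-table approach, using the column names from the question (the column types are assumptions):
CREATE TEMPORARY TABLE stations_staging (
    column1 text,
    column2 text,
    column3 text,
    column4 text,
    geom    geometry
);

COPY stations_staging ("column1","column2","column3","column4", geom) FROM stdin;

INSERT INTO "public"."stations" ("column3","column4", geom)
SELECT column3, column4, geom
FROM stations_staging;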