I am using SQL Server 2008 R2. I am creating a table using
SELECT ... INTO ... FROM
where the FROM is actually two joined tables.
Will the resultant table have the indexes of the original tables?
Well, you could try this yourself and see, but the answer is no. It will have a table schema consisting of the columns and types in the SELECT clause, but no indexes, foreign keys, or anything else from the source table(s).
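If you need indexes on the new table, you have to create them afterwards. A minimal sketch, using hypothetical table and column names:

SELECT a.OrderId, a.CustomerId, b.Total
INTO dbo.NewTable
FROM dbo.Orders AS a
JOIN dbo.OrderTotals AS b ON b.OrderId = a.OrderId;

-- The new table comes back as a plain heap; add keys/indexes explicitly
-- (assumes OrderId came through as NOT NULL from the source)
ALTER TABLE dbo.NewTable ADD CONSTRAINT PK_NewTable PRIMARY KEY (OrderId);
CREATE INDEX IX_NewTable_CustomerId ON dbo.NewTable (CustomerId);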
I've got a PostgreSQL DB with multiple schemas and tables in those schemas. Every row in every table has a primary UUID key, like "Ref_Key" => "41bf3b1e-91f0-491c-a6bd-c48a17e7c252".
Is it possible to find a row only by its UUID, without specifying the schema and table?
No, that is not possible. You can only query tables that explicitly appear in the FROM clause of a SELECT statement.
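The closest workaround is to enumerate the candidate tables yourself and query each one dynamically. A minimal PL/pgSQL sketch, assuming every table stores the UUID in a uuid column named "Ref_Key":

DO $$
DECLARE
    r record;
    n bigint;
BEGIN
    -- Find every table that has a "Ref_Key" column
    FOR r IN
        SELECT table_schema, table_name
        FROM information_schema.columns
        WHERE column_name = 'Ref_Key'
          AND table_schema NOT IN ('pg_catalog', 'information_schema')
    LOOP
        -- Probe each table for the UUID we are looking for
        EXECUTE format('SELECT count(*) FROM %I.%I WHERE "Ref_Key" = $1',
                       r.table_schema, r.table_name)
        INTO n
        USING '41bf3b1e-91f0-491c-a6bd-c48a17e7c252'::uuid;
        IF n > 0 THEN
            RAISE NOTICE 'Found in %.%', r.table_schema, r.table_name;
        END IF;
    END LOOP;
END
$$;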
I have a Postgres database with 30 tables. The main table "general_data" has a column "building_code", which all the other tables use as a foreign key.
What I need is to synchronise the "building_code" columns in all tables, meaning that if a row is added to "general_data", a row is created in ALL the other tables with the same value in "building_code" (the other columns remain empty).
Is there an SQL function that does that?
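There is no single built-in function for this, but an AFTER INSERT trigger on "general_data" is the usual way to get that fan-out. A minimal sketch, using a hypothetical dependent table "rooms" (repeat the INSERT for each of the other 29 tables):

CREATE OR REPLACE FUNCTION propagate_building_code() RETURNS trigger AS $$
BEGIN
    -- One INSERT per dependent table; "rooms" is a hypothetical example
    INSERT INTO rooms (building_code) VALUES (NEW.building_code);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- EXECUTE FUNCTION needs PostgreSQL 11+; older versions use EXECUTE PROCEDURE
CREATE TRIGGER general_data_fanout
AFTER INSERT ON general_data
FOR EACH ROW EXECUTE FUNCTION propagate_building_code();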
How do I get exported keys (database metadata)? Even though Redshift does not enforce foreign keys and primary keys, I am able to see them in the system tables.
The problem here is that in the system table the multiple columns of a foreign key exist as an array in one column (even though Redshift doesn't support arrays). Is it possible to extract them in one query?
Use the information_schema.table_constraints view:
SELECT * FROM information_schema.table_constraints;
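If the related information_schema views are populated in your cluster (Redshift inherits them from PostgreSQL, but support is limited), a sketch of a query that returns foreign-key columns one row per column instead of as an array might look like:

SELECT tc.constraint_name,
       kcu.table_schema,
       kcu.table_name,
       kcu.column_name,
       kcu.ordinal_position
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
  ON kcu.constraint_name = tc.constraint_name
 AND kcu.table_schema = tc.table_schema
WHERE tc.constraint_type = 'FOREIGN KEY'
ORDER BY tc.constraint_name, kcu.ordinal_position;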
My database has several tables with some columns of type 'money'. I would like to alter all of these columns (in different tables) in a single statement rather than change the type column by column, to avoid omissions.
You'll have to repeat the altering query for every column.
You might want to write some code to do that for you. You know, with loops.
In order for the database to alter all the tables atomically you should enclose all the altering queries in a transaction (PostgreSQL supports transactional DDL).
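A minimal sketch of such a loop in PL/pgSQL, run as a single DO block so all the ALTERs happen atomically; the target type numeric(19,2) is an assumption, pick whatever you actually need:

DO $$
DECLARE
    col record;
BEGIN
    -- Find every money column outside the system schemas
    FOR col IN
        SELECT table_schema, table_name, column_name
        FROM information_schema.columns
        WHERE data_type = 'money'
          AND table_schema NOT IN ('pg_catalog', 'information_schema')
    LOOP
        EXECUTE format(
            'ALTER TABLE %I.%I ALTER COLUMN %I TYPE numeric(19,2) USING %I::numeric',
            col.table_schema, col.table_name, col.column_name, col.column_name);
    END LOOP;
END
$$;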
I am migrating a large quantity of mostly empty tables into SQL Server 2008.
The tables are vertical partitions of one big logical table.
Problem is this logical table has more than 1024 columns.
Given that most of the fields are null, I plan to use a sparse table.
For all of my tables so far I have been using SELECT...INTO, which has been working really well.
However, now I get the error "CREATE TABLE failed because column 'xyz' in table 'MyBigTable' exceeds the maximum of 1024 columns."
Is there any way I can do SELECT...INTO so that it creates the new table with sparse support?
What you probably want to do is create the table manually and populate it with an INSERT ... SELECT statement.
To create the table, I would recommend scripting the different component tables and merging their definitions, marking the columns SPARSE as necessary. Then just run your single CREATE TABLE statement.
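A minimal sketch of that approach, with hypothetical table and column names (the real CREATE TABLE would list all of the merged columns):

CREATE TABLE dbo.MyBigTable
(
    Id   int         NOT NULL PRIMARY KEY,
    ColA varchar(50) SPARSE NULL,  -- mostly-NULL columns declared SPARSE
    ColB int         SPARSE NULL
    -- ... remaining columns from the other vertical partitions
);

INSERT INTO dbo.MyBigTable (Id, ColA, ColB)
SELECT p1.Id, p1.ColA, p2.ColB
FROM dbo.Partition1 AS p1
JOIN dbo.Partition2 AS p2 ON p2.Id = p1.Id;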
You cannot (and probably don't want to anyway). See INTO Clause (TSQL) for the MSDN documentation.
The problem is that sparse tables are a physical storage characteristic and not a logical characteristic, so there is no way the DBMS engine would know to copy over that characteristic. Moreover, it is a table-wide property and the SELECT can have multiple underlying source tables. See the Remarks section of the page I linked where it discusses how you can only use default organization details.