Does the 128 character limit for table names include the database name and schema name? - tsql

Microsoft's Database Objects documentation states that table names can be at most 128 characters. Does this include the schema name? What about the database name?
For example, if I needed to run the following sql statement that copies all data in a source table to a destination table in a different database, I'd write:
SELECT *
INTO DestinationDatabase.DestinationSchema.DestinationTable
FROM SourceDatabase.SourceSchema.SourceTable
Now say I have a table that stores the database name, schema name, and table name for both the source and the destination tables, what size limit should I put on the columns storing these names?
Is it a 128 character limit for each part (database name, schema name, table name) or should the entire identifier (like DestinationDatabase.DestinationSchema.DestinationTable) only be up to 128 characters long?

It's the length of the sysname data type, which is nvarchar(128). The limit applies per identifier part, so the database name, schema name, and table name can each be up to 128 characters.
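A minimal T-SQL sketch of a table for storing the name parts, using sysname (equivalent to nvarchar(128)) for each part; the table and column names here are illustrative:

```sql
-- Each identifier part gets its own sysname (nvarchar(128)) column.
CREATE TABLE dbo.TableCopyConfig (
    SourceDatabase       sysname NOT NULL,
    SourceSchema         sysname NOT NULL,
    SourceTable          sysname NOT NULL,
    DestinationDatabase  sysname NOT NULL,
    DestinationSchema    sysname NOT NULL,
    DestinationTable     sysname NOT NULL
);

-- When building the three-part name dynamically, QUOTENAME guards
-- against identifiers that need bracketing:
SELECT QUOTENAME(SourceDatabase) + N'.'
     + QUOTENAME(SourceSchema) + N'.'
     + QUOTENAME(SourceTable) AS SourceName
FROM dbo.TableCopyConfig;
```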

Related

pg_largeobject huge, but no tables have OID column type

PostgreSQL noob, PG 9.4.x, no access to application code, developers, or anyone knowledgeable about it.
User database CT has 427GB pg_largeobject (PGLOB) table, next largest table is 500ish MB.
Per this post (Does Postgresql use PGLOB internally?) a very reputable member said postgresql does not use PGLOB internally.
I have reviewed the schema of all user tables in the database, and none of them have a column of type OID (or lo) - which is the value used for PGLOB rows to tie the collection of blob chunks back to a referencing table row. I think this means I cannot use the vacuumlo utility to delete orphaned PGLOB rows, because that utility searches user tables for those two data types.
I HAVE identified a table with an integer field type that has int values that match LOID values in PGLOB. This seems to indicate that the developers somehow got their blobs into PGLOB using the integer value stored in a user table row.
QUESTION: Is that last statement possible?
A) If it is not, what could be adding all this data to PGLOB table?
B) If it is possible, is there a way I can programmatically search ALL tables for integer values that might represent rows in PGLOB?
NEED: I DESPERATELY need to reduce the size of the PGLOB table, as we are running out of disk space. And no, we cannot add space to existing disk per admin. So I somehow need to determine if there are LOID values in PGLOB that do NOT exist in ANY user tables as integer-type fields and then run lo_unlink to remove the rows. This could get me more usable 8K pages in the table.
BTW, I have also run pg_freespace on PGLOB, and it identified that most of the pages in PGLOB did not contain enough space in which to insert another blob chunk.
THANKS FOR THE ASSISTANCE!
Not really an answer but thinking out loud:
As you found, all large objects are stored in a single table. The oid field you refer to is something you add to a table so you can have a pointer to a particular LO oid in pg_largeobject. That said, there is nothing compelling you to store that info in a table; you can just create LOs in pg_largeobject. From the looks of it, and just a guess, the developers stored the oids as integers with the intent of casting integer::oid to get a particular LO back as needed. I would look at what other information is stored in that table to see if it helps determine what the LOs are for.
Also, you might join the integer::oid values to the loid values in pg_largeobject to see if that table accounts for all of them.
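A sketch of that check, assuming the candidate table and column are called notes and noteid (hypothetical names):

```sql
-- loids accounted for by the suspected integer column
SELECT DISTINCT l.loid
FROM pg_largeobject l
JOIN notes n ON n.noteid::oid = l.loid;

-- loids NOT accounted for by that column (candidates for orphans)
SELECT DISTINCT l.loid
FROM pg_largeobject l
WHERE NOT EXISTS (SELECT 1 FROM notes n WHERE n.noteid::oid = l.loid);
```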
I was able to do a detailed analysis of all user tables in the database, find all columns that contained numeric data with no decimals, and then run a query against pg_largeobject with a NOT EXISTS clause for every matching table, comparing pg_largeobject.loid against the appropriate field(s) in the user tables.
I found 25794 LOIDs that could be DELETEd from the PGLOB table, totaling 3.4M rows.
SELECT DISTINCT loid
INTO OrphanedBLOBs
FROM pg_largeobject l
WHERE NOT EXISTS (SELECT * FROM tbl1 cn WHERE cn.noteid = l.loid)
  AND NOT EXISTS (SELECT * FROM tbl1 d WHERE d.document = l.loid)
  AND NOT EXISTS (SELECT * FROM tbl1 d WHERE d.reportid = l.loid)
I used that table to execute lo_unlink(loid) for each of the LOIDs.
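The unlink step can be done in a single statement; a sketch, assuming the OrphanedBLOBs table built by the query above:

```sql
-- lo_unlink removes the large object with the given loid.
-- Run VACUUM on pg_largeobject afterwards to make the freed
-- pages reusable.
SELECT lo_unlink(loid) FROM OrphanedBLOBs;
```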

Integration/metadata layer on top of two databases - Oracle SQL Developer

I am supposed to create metadata layer on top of two inventory databases, local DB1 and local DB2.
For each object (table name, column name, etc.) that is present in each local DB, there should be three representations (in the same row of the metadata table):
a canonical representation (global level) for the object. This is a representation that identifies the object globally (see the first column in the example table below).
a local representation for local DB1: it refers to the name of the column in local DB1 that represents the same object. We also create another column in the metadata table to store the data type of that column in local DB1 (see columns 2 and 3 in the example table below).
a local representation for local DB2: it refers to the name of the column in local DB2 that represents the same object. In addition, we need to store its data type in local DB2 (see columns 4 and 5 in the example table below).
The metadata table contains the following columns:
Column 1: Contains the name of a field (canonical representation).
Column 2: Contains the corresponding name of the same field in DB1 (local DB1 name).
Column 3: Contains the name of the data type of that field in DB1.
Column 4: Contains a function, stored as a string, that maps the canonical name to the DB1 name (if applicable).
Column 5: Contains the corresponding name (local DB2 name) of the same field in DB2.
Column 6: Contains the data type of that field in DB2.
Column 7: Contains a function to map the canonical name to the DB2 name (if applicable).
How can I use a SELECT query to display the data of these local databases using this metadata table?
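One way to sketch this in Oracle (all table, column, and database link names below are hypothetical): store the mappings in the metadata table, then look up the local name and build the query dynamically in PL/SQL:

```sql
-- Hypothetical metadata table matching the columns described above
CREATE TABLE metadata_map (
    canonical_name  VARCHAR2(128),
    db1_name        VARCHAR2(128),
    db1_type        VARCHAR2(30),
    db1_map_fn      VARCHAR2(4000),
    db2_name        VARCHAR2(128),
    db2_type        VARCHAR2(30),
    db2_map_fn      VARCHAR2(4000)
);

-- Look up the DB1 column name for a canonical field and query it
-- via dynamic SQL (assumes a database link db1_link to local DB1).
DECLARE
    v_col  metadata_map.db1_name%TYPE;
    v_sql  VARCHAR2(4000);
BEGIN
    SELECT db1_name INTO v_col
    FROM metadata_map
    WHERE canonical_name = 'CUSTOMER_ID';

    v_sql := 'SELECT ' || v_col || ' FROM inventory@db1_link';
    -- EXECUTE IMMEDIATE v_sql BULK COLLECT INTO ... as needed
END;
/
```

The same lookup against db2_name (and db2_map_fn, when populated) gives the DB2 side of the query.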

db2: correlate tablespace file to database object

Using DB2 v9.7 (windows), with an SMS tablespace.
Inside the tablespace folder are files for the various db objects.
Ex) SQL00003.IN1, SQL00003.DAT, etc..
How do I determine which database object corresponds to which file?
(for both indexes and tables)
The digits in the file name (i.e. 00003 = 3) correspond to the TABLEID column from SYSCAT.TABLES. Please note that TABLEID is unique only within a single tablespace, so you need to know what tablespace's container path you are looking at to make this correlation.
All table data is stored in the .DAT file.
All index data (for all indexes) is stored in the .INX file, regardless of how many indexes there are. (Note that it appears you have a typo in the filename SQL00003.IN1 above, this should be SQL00003.INX)
If your table has LOBs, then there will be 2 additional files with the same SQLxxxxx name: a .LBA and a .LB file.
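The mapping can be sketched with a catalog query; the tablespace name below is a placeholder for the one whose container path you are looking at:

```sql
-- Map data files (SQLnnnnn.DAT / SQLnnnnn.INX) back to tables:
-- the nnnnn digits equal TABLEID within the tablespace.
SELECT TABSCHEMA, TABNAME, TABLEID, TBSPACE
FROM SYSCAT.TABLES
WHERE TBSPACE = 'USERSPACE1'   -- placeholder tablespace name
ORDER BY TABLEID;
```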

How can I copy an IDENTITY field?

I'd like to update some parameters for a table, such as the dist and sort key. In order to do so, I've renamed the old version of the table and recreated the table with the new parameters (these cannot be changed once a table has been created).
I need to preserve the id field from the old table, which is an IDENTITY field. If I try the following query however, I get an error:
insert into edw.my_table_new select * from edw.my_table_old;
ERROR: cannot set an identity column to a value [SQL State=0A000]
How can I keep the same id from the old table?
You can't INSERT data into IDENTITY columns directly, but you can load data from S3 using the COPY command.
First you will need to create a dump of source table with UNLOAD.
Then simply use COPY with EXPLICIT_IDS parameter as described in Loading default column values:
If an IDENTITY column is included in the column list, the EXPLICIT_IDS
option must also be specified in the COPY command, or the COPY command
will fail. Similarly, if an IDENTITY column is omitted from the column
list, and the EXPLICIT_IDS option is specified, the COPY operation
will fail.
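A sketch of the two steps, with placeholder bucket path and IAM role:

```sql
-- 1. Dump the old table to S3 (bucket/prefix and role are placeholders)
UNLOAD ('SELECT * FROM edw.my_table_old')
TO 's3://my-bucket/my_table_old_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole';

-- 2. Load into the new table, keeping the original identity values
COPY edw.my_table_new
FROM 's3://my-bucket/my_table_old_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
EXPLICIT_IDS;
```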
You can explicitly specify the columns, and ignore the identity column:
insert into existing_table (col1, col2) select col1, col2 from another_table;
Use ALTER TABLE APPEND twice, first time with IGNOREEXTRA and the second time with FILLTARGET.
If the target table contains columns that don't exist in the source
table, include FILLTARGET. The command fills the extra columns in the
source table with either the default column value or IDENTITY value,
if one was defined, or NULL.
It moves the data from one table to another extremely quickly; it took me 4 s for a 1 GB table on a dc1.large node.
Appends rows to a target table by moving data from an existing source
table.
...
ALTER TABLE APPEND is usually much faster than a similar CREATE TABLE
AS or INSERT INTO operation because data is moved, not duplicated.
Faster and simpler than UNLOAD + COPY with EXPLICIT_IDS.
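A sketch of the append step, using the renamed old table and the recreated new table from the question:

```sql
-- Moves rows (does not copy them) from the old table into the new one;
-- FILLTARGET fills columns missing from the source with the default,
-- the IDENTITY value if one is defined, or NULL.
-- The source table is left empty afterwards.
ALTER TABLE edw.my_table_new
APPEND FROM edw.my_table_old
FILLTARGET;
```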

Number of tables in a tablespace in db2

I tried getting the number of tables of a particular tablespace and database from SYSIBM.SYSTABLES using the select query. This number is more than the number of tables for the same tablespace and database stored in the SYSIBM.SYSTABLESPACE table under the NTABLES column. Why is this so?
It could be that SYSTABLES stores entries for each table, view, or alias; in fact, it covers a large number of object types that may not necessarily be included within a tablespace.
You could confirm this by only listing those where type = 'T' (or some other combination of the allowed values).
If you select count(*) from systables (for a given tablespace) and group it by type, you may find it reasonably easy to assign some of those types to the tablespace.
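A sketch of that check (the tablespace and database names are placeholders):

```sql
-- Compare per-type counts against NTABLES in SYSIBM.SYSTABLESPACE;
-- TYPE = 'T' counts base tables only.
SELECT TYPE, COUNT(*) AS OBJ_COUNT
FROM SYSIBM.SYSTABLES
WHERE TSNAME = 'MYTBSP'     -- placeholder tablespace name
  AND DBNAME = 'MYDB'       -- placeholder database name
GROUP BY TYPE;
```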