DB2 tablespaces: "partition-by-range" or "partition-by-growth"

During the upgrade from DB2 9 to DB2 10 on z/OS, the previous (now retired) DBA converted all tablespaces from "simple" to "universal". How can I determine if they are partition-by-range or partition-by-growth?
Using RC/Query in CA/Tools from Computer Associates, I was able to reverse-engineer the CREATE TABLESPACE statement, but it's not obvious from the code which type of tablespace this is.
CREATE TABLESPACE SNF101
IN DNF1
USING STOGROUP GNF2
PRIQTY 48
SECQTY 48
ERASE NO
BUFFERPOOL BP1
CLOSE NO
LOCKMAX SYSTEM
SEGSIZE 4
FREEPAGE 0
PCTFREE 5
GBPCACHE CHANGED
DEFINE YES
LOGGED
TRACKMOD YES
COMPRESS NO
LOCKSIZE ANY
MAXROWS 255
CCSID EBCDIC
;
Given that CREATE TABLESPACE statement, how can I determine whether this is partition-by-range or partition-by-growth?
Thanks!

Check whether your version of the CA Tools can recognize the tablespace types and generate the matching DDL.
Check the TYPE column of SYSIBM.SYSTABLESPACE: the value G indicates partition-by-growth and R indicates partition-by-range.
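For example, a minimal catalog query (using the database and tablespace names from the DDL above; verify the column names against your catalog level) would be:
SELECT DBNAME, NAME, TYPE, MAXPARTITIONS, PARTITIONS
FROM SYSIBM.SYSTABLESPACE
WHERE DBNAME = 'DNF1'
AND NAME = 'SNF101';
A TYPE of 'G' means partition-by-growth and 'R' means partition-by-range; a blank generally means the tablespace is not a universal tablespace. For what it's worth, in reverse-engineered DDL a partition-by-growth UTS is normally recognizable by a MAXPARTITIONS clause, and a partition-by-range UTS by NUMPARTS with PARTITION clauses.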

Related

DB2 Z/OS - Rename Tablespace

I use Db2 for z/OS 11.01.
Is it possible to rename a tablespace that is already defined on Db2 for z/OS?
From the documentation I don't see any restriction, but I can't get it to work.
Below I show you my test case:
I create a TS, a table and an index.
--
CREATE TABLESPACE TSFOOT
IN DBTEST01
USING STOGROUP SGTEST01
PRIQTY 48 SECQTY 48
ERASE NO
FREEPAGE 0 PCTFREE 5
GBPCACHE CHANGED
TRACKMOD YES
MAXPARTITIONS 1
LOGGED
DSSIZE 4 G
SEGSIZE 32
BUFFERPOOL BP1
LOCKSIZE ANY
LOCKMAX SYSTEM
CLOSE YES
COMPRESS NO
CCSID EBCDIC
DEFINE YES
MAXROWS 255;
--
COMMIT;
--
CREATE TABLE TEST01.TBFOOT
(FIELD1 CHAR(2) FOR SBCS DATA NOT NULL,
FIELD2 DATE NOT NULL,
FIELD3 DECIMAL(7, 0) NOT NULL,
FIELD4 DECIMAL(7, 0) NOT NULL,
FIELD5 DECIMAL(3, 0) NOT NULL,
CONSTRAINT PK_TBFOOT
PRIMARY KEY (FIELD1,
FIELD2))
IN DBTEST01.TSFOOT
PARTITION BY SIZE
AUDIT NONE
DATA CAPTURE NONE
CCSID EBCDIC
NOT VOLATILE
APPEND NO ;
--
COMMIT;
--
CREATE UNIQUE INDEX TEST01.IXFOOTP
ON TEST01.TBFOOT
(FIELD1 ASC,
FIELD2 ASC)
USING STOGROUP SGTEST01
PRIQTY 48 SECQTY 48
ERASE NO
FREEPAGE 0 PCTFREE 10
GBPCACHE CHANGED
CLUSTER
COMPRESS NO
INCLUDE NULL KEYS
BUFFERPOOL BP2
CLOSE NO
COPY NO
DEFER NO
DEFINE YES
PIECESIZE 2 G;
--
COMMIT;
--
CREATE SYNONYM TBFOOT FOR TEST01.TBFOOT;
--
COMMIT;
--
I run the statement to rename the tablespace.
RENAME TABLESPACE TSFOOT TO TSFOOT_NEW;
I get the following error:
ILLEGAL SYMBOL "TSFOOT". SOME SYMBOLS THAT MIGHT BE LEGAL ARE: . TO. SQLCODE=-104, SQLSTATE=42601, DRIVER=4.19.56
Can I get some help here, please?
Thank you very much.
With Db2 you have to be careful when looking up the documentation.
There are two different platforms:
Db2 (formerly Db2 for Linux, UNIX and Windows)
Db2 for z/OS
Basically they have the same syntax, but there are slight differences when you dig deeper. Rule of thumb: z/OS supports fewer statements than Linux/UNIX/Windows.
In your case, look at the correct documentation (and create a bookmark): https://www.ibm.com/support/knowledgecenter/SSEPEK_11.0.0/sqlref/src/tpc/db2z_sql_rename.html
There you can see that renaming table spaces on z/OS is not possible; RENAME applies only to tables and indexes.
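For comparison, renaming the table or the index from your test case would be accepted (a sketch; the new names are just placeholders), while RENAME TABLESPACE is simply not part of the z/OS syntax:
RENAME TABLE TEST01.TBFOOT TO TBFOOT_NEW;
RENAME INDEX TEST01.IXFOOTP TO IXFOOTP_NEW;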

IBM DB2 - Do not unload DECIMAL rows exceeding a certain length

I have an IBM DB2 table where I want to UNLOAD data from so that I can load it into another DB2 table.
Both tables have the same columns (and types), except one decimal field.
It is DECIMAL(6) in the source table and DECIMAL(5) in the destination table.
There are many entries in the source table which only use up to 5 digits in the DECIMAL field and only some use up all 6 of them.
What I am going to do is only copy the entries which go up to 5 digits in the source table and drop all others.
Can I do this using only the UNLOAD statement? That is, is there an option which tells the system: "unload column 'id' as DECIMAL(5) (although it is DECIMAL(6) in the table itself), and if an entry of that column uses all 6 digits (>99999), do not unload that row"?
Also, how would you handle the case if it were the other way around, e.g. unload DECIMAL(5) from the source and LOAD it as DECIMAL(6) in the destination?
Why am I doing this? Because the destination table is an older version of the table used by older versions of the applications. We will drop support in 6 months, but until then we need to refresh the datasets in it.
With UNLOAD and LOAD I am talking about the UNLOAD and LOAD utilities for z/OS (?) described under e.g.
https://www.ibm.com/support/knowledgecenter/SSEPEK_11.0.0/ugref/src/tpc/db2z_utl_unload.html
Without knowing your source environment: for Db2 on Linux, UNIX and Windows, the EXPORT command is used with a SELECT statement. So you'd do whatever logic makes sense to turn a DECIMAL(6) into a DECIMAL(5), including skipping rows that need all 6 digits, since there's no way to make them fit into a 5-digit allocation. The actual preferred method in this environment is now to use external tables, but they work in a similar fashion.
export to 'myfile.csv' OF del select col1, dec_col6 from thetable where dec_col6 < 100000
OR
create external table 'myfile.csv' using (DELIMITER ',') as select col1, case when dec_col6 > 99999 then 99999 else dec_col6 end from thetable
Documentation: External Tables, Export
Since you used the word "unload" I suppose you're either using a tool such as High Performance Unload or you are on a different flavor of Db2.
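If it is the Db2 for z/OS UNLOAD utility from the link in the question, a rough sketch of filtering at unload time would use a WHEN selection condition on FROM TABLE (the database, tablespace, table and column names below are placeholders; check the utility reference for the exact control statement syntax):
UNLOAD TABLESPACE MYDB.MYTS
FROM TABLE MYSCHEMA.SRC_TABLE
WHEN (ID_COL < 100000)
Rows whose DECIMAL(6) value no longer fits into DECIMAL(5) would then be skipped. The reverse direction (DECIMAL(5) into DECIMAL(6)) needs no filtering, since every 5-digit value fits.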

Does Db2 support “accent insensitive” collations?

In Microsoft SQL Server, it's possible to specify an "accent insensitive" collation (for a database, table or column). Is this possible in Db2?
Look at the Unicode Collation Algorithm based collations article.
The collating sequence is specified at database creation time and can't be changed afterwards.
See the 'COLLATE USING locale-sensitive-collation' clause of the CREATE DATABASE command.
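For example, a minimal sketch of creating a database with a locale-sensitive collation (MYDB is a placeholder name; this is a command, not an SQL statement):
CREATE DATABASE MYDB COLLATE USING CLDR181_EO_S1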
There is no way to specify collation sequence at the table or column level, but you can use the COLLATION_KEY_BIT function to compare string expressions.
select
case when c1=c2 then 1 else 0 end r1
, case when COLLATION_KEY_BIT(c1, 'CLDR181_EO_S1')=COLLATION_KEY_BIT(c2, 'CLDR181_EO_S1') then 1 else 0 end r2
from table(values ('Café', 'Cafe')) t(c1, c2);
R1 R2
-- --
0 1
If your database had been created with the CLDR181_EO_S1 collation, the result in the first column would also be 1.
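COLLATION_KEY_BIT can also be used for ordering, not just comparison. A sketch, assuming a table mytable with a string column c1:
select c1
from mytable
order by COLLATION_KEY_BIT(c1, 'CLDR181_EO_S1');
This sorts by the binary collation key, so at strength S1 values like 'Café' and 'Cafe' sort together regardless of accents and case.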

Postgresql order by - danish characters is expanded

I'm trying to make an ORDER BY clause in a SQL query work, but for some reason the Danish special characters are expanded instead of being evaluated by their own value.
SELECT roadname FROM mytable ORDER BY roadname
The result:
Abildlunden
Æblerosestien
Agern Alle 1
The entry in the middle should be last.
The locale is set to Danish, so it should know the value of the Danish special characters.
What is the collation of your database? Use "\l" from psql to see. (You might also want to give the PostgreSQL version you are using.)
Compare and contrast:
steve@steve[local] =# select * from (values('Abildlunden'),('Æblerosestien'),('Agern Alle 1')) x(word)
order by word collate "en_GB";
word
---------------
Abildlunden
Æblerosestien
Agern Alle 1
(3 rows)
steve@steve[local] =# select * from (values('Abildlunden'),('Æblerosestien'),('Agern Alle 1')) x(word)
order by word collate "da_DK";
word
---------------
Abildlunden
Agern Alle 1
Æblerosestien
(3 rows)
The database collation is set when you create the database cluster, from the locale you have set at the time. If you installed PostgreSQL through a package manager (e.g. apt-get) then it is likely taken from the system-default locale.
You can override the collation used in a particular column, or even in a particular expression (as done in the examples above). However if you're not specifying anything (likely) then the database default will be used (which itself is inherited from the template database when the database is created, and the template database collation is fixed when the cluster is created).
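For example, a per-column override might look like this (a sketch, assuming roadname is a text/varchar column and the "da_DK" collation exists in your pg_collation catalog):
ALTER TABLE mytable ALTER COLUMN roadname TYPE text COLLATE "da_DK";
After that, a plain ORDER BY roadname uses the Danish rules without an explicit COLLATE clause in the query.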
If you want to use da_DK as your default collation throughout, and it's not currently your database default, your simplest option might be to dump the database, then drop and re-create the cluster, specifying the collation to initdb (or pg_createcluster or whatever tool you use to create the server).
BTW the question isn't well-phrased. PostgreSQL is very much not ignoring the "special" characters; it is correctly expanding "Æ" into "AE", which is a correct rule for English. Collating "Æ" at the end is actually more like the unlocalised behaviour.
Collation documentation: http://www.postgresql.org/docs/current/static/collation.html

How to change the Postgres table field name limit?

I want to create a table whose field name is 100 characters long, but the Postgres limit on the number of characters is 64. How do I change that limit to 100?
example:
Create table Test
(
PatientFirstnameLastNameSSNPolicyInsuraceTicketDetailEMRquestionEMR varchar(10)
)
This table creation fails because the name exceeds 64 characters.
Actually, the limit on a name is NAMEDATALEN - 1 bytes (not necessarily characters); the default value of NAMEDATALEN is 64.
NAMEDATALEN is determined at compile time (in src/include/pg_config_manual.h). You have to recompile PostgreSQL with a new NAMEDATALEN value to raise the limit.
However, think about the design and about compatibility with other servers that have the standard 63-byte limit. It's not common practice to use such long names.
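You can check the limit your server was compiled with directly; max_identifier_length is a read-only setting that reflects NAMEDATALEN - 1:
SHOW max_identifier_length;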
It's because of the special name type (see table 8.5 in the data type documentation), which is used in pg_catalog. It won't accept anything longer than 63 bytes (plus terminator). There is no workaround.