How to create a larger temporary tablespace? - db2

I installed a DB2 Express-C instance on my Windows machine and use it in JUnit tests for testing some code.
One statement fails with the following error code:
DB2 SQL Error: SQLCODE=-1585, SQLSTATE=54048, SQLERRMC=null, DRIVER=4.15.134
I learned that this is probably because the page size of my temporary tablespace is too small.
I confirmed this by estimating the row size at about 16k and discovering, using IBM Data Studio, that my temporary tablespace has a page size of 8k. I therefore want to create a new temporary tablespace with a page size of 32k.
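For reference, the page size of each tablespace can also be checked with a catalog query (SYSCAT.TABLESPACES is the DB2 LUW catalog view; DATATYPE 'T' marks system temporary tablespaces):
-- list every tablespace with its page size and type
SELECT TBSPACE, PAGESIZE, DATATYPE FROM SYSCAT.TABLESPACES;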
I tried doing this with IBM Data Studio, but the page size field always shows 8 KB and can't be edited.
I came a little closer to my goal by using the "Befehlszeilenprozessor" (the DB2 command line processor). I executed the following command:
CREATE SYSTEM TEMPORARY TABLESPACE tmp_tbsp PAGESIZE 32K MANAGED BY SYSTEM USING ('C:\DB2\NODE0000\SAMPLE\TNEWTEMP')
And got the following result:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing the following
was returned:
SQL1582N The page size (PAGESIZE) of the table space "TMP_TBSP" does not
match the page size of the buffer pool "IBMDEFAULTBP" that is associated
with this table space. SQLSTATE=428CB
(The output was in German; the relevant part is the SQL1582N message: the page size of the new tablespace does not match the page size of the buffer pool assigned to it.)
So how can I make a temporary tablespace matching my requirements?

There must be a bufferpool with the matching page size for each tablespace. Use the CREATE BUFFERPOOL statement to create one.

First, create a buffer pool with a 32 KB page size, for example named MyBF.
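For example (the SIZE value, in pages, is just an illustrative number; adjust it for your system):
CREATE BUFFERPOOL MyBF SIZE 1000 PAGESIZE 32K;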
Then use the following statement:
CREATE SYSTEM TEMPORARY TABLESPACE tmp_tbsp PAGESIZE 32K MANAGED BY SYSTEM USING ('C:\DB2\NODE0000\SAMPLE\TNEWTEMP') BUFFERPOOL MyBF;

Related

How to query parquet data files from Azure Synapse when data may be structured and exceed 8000 bytes in length

I am having trouble reading, querying, and creating external tables from Parquet files stored in Data Lake Storage Gen2 using Azure Synapse.
Specifically I see this error while trying to create an external table through the UI:
"Error details
New external table
Previewing the file data failed. Details: Failed to execute query. Error: Column 'members' of type 'NVARCHAR' is not compatible with external data type 'JSON string. (underlying parquet nested/repeatable column must be read as VARCHAR or CHAR)'. File/External table name: [DELETED] Total size of data scanned is 1 megabytes, total size of data moved is 0 megabytes, total size of data written is 0 megabytes.
. If the issue persists, contact support and provide the following id :"
My main hunch is that, since a couple of columns were originally JSON types and some of the rows are quite long (up to 9000 characters right now, which could increase at any point during my ETL), this is some kind of conflict with the default length limits I have seen referenced in the documentation. The data looks like the following example; bear in mind it is sometimes much longer:
["100.001", "100.002", "100.003", "100.004", "100.005", "100.006", "100.023"]
If I try to manually create the external table (which has worked every other time I have tried), following code similar to this:
CREATE EXTERNAL TABLE example1(
[id] bigint,
[column1] nvarchar(4000),
[column2] nvarchar(4000),
[column3] datetime2(7)
)
WITH (
LOCATION = 'location/**',
DATA_SOURCE = [datasource],
FILE_FORMAT = [SynapseParquetFormat]
)
GO
the table is created with no errors or warnings, but when I run a very simple select
SELECT TOP (100) [id],
[column1],
[column2],
[column3]
FROM [schema1].[example1]
The following error is shown:
"External table 'dbo' is not accessible because content of directory cannot be listed."
It can also show the equivalent:
"External table 'schema1' is not accessible because content of directory cannot be listed."
This error persists even when creating the external table with the "max" length argument, as it appears in this doc.
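For reference, that "max" variant would look roughly like this (the table name example2 is a placeholder; the column names are reused from the definition above):
CREATE EXTERNAL TABLE example2(
[id] bigint,
[column1] nvarchar(max),
[column2] nvarchar(max),
[column3] datetime2(7)
)
WITH (
LOCATION = 'location/**',
DATA_SOURCE = [datasource],
FILE_FORMAT = [SynapseParquetFormat]
)
GO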
Summary: how can I create an external table from Parquet files with fields exceeding 4000 or 8000 bytes, or even up to 2 GB, which would be the maximum size according to this?
Thank you all in advance

DB2 tablespaces: "partition-by-range" or "partition-by-growth"

During the upgrade from DB2 9 to DB2 10 on z/OS, the previous (now retired) DBA converted all tablespaces from "simple" to "universal". How can I determine if they are partition-by-range or partition-by-growth?
Using RC/Query in CA/Tools from Computer Associates, I was able to reverse-engineer the CREATE TABLESPACE statement, but it's not obvious from the code which type of tablespace this is.
CREATE TABLESPACE SNF101
IN DNF1
USING STOGROUP GNF2
PRIQTY 48
SECQTY 48
ERASE NO
BUFFERPOOL BP1
CLOSE NO
LOCKMAX SYSTEM
SEGSIZE 4
FREEPAGE 0
PCTFREE 5
GBPCACHE CHANGED
DEFINE YES
LOGGED
TRACKMOD YES
COMPRESS NO
LOCKSIZE ANY
MAXROWS 255
CCSID EBCDIC
;
Given that CREATE TABLESPACE statement, how can I determine whether this is partition-by-range or partition-by-growth?
Thanks!
Check if your version of the CA/Tools is capable of recognizing the tablespace types and also generating the matching DDL.
Check the SYSIBM.SYSTABLESPACE column TYPE: the value G indicates partition-by-growth, the value R indicates partition-by-range.
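A minimal query sketch using the names from the DDL above:
-- TYPE = 'G' means partition-by-growth, 'R' means partition-by-range
SELECT NAME, DBNAME, TYPE
FROM SYSIBM.SYSTABLESPACE
WHERE NAME = 'SNF101' AND DBNAME = 'DNF1';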

Why does pg_dump create a gigantic file?

I am currently trying to back up a postgres 10.x database. If I check the size of the database using the snippet here: https://wiki.postgresql.org/wiki/Disk_Usage#Finding_the_largest_databases_in_your_cluster , the database is 180MB. However, if I use
pg_dump my_database > my_database_backup
to back up the data, it creates a file over 2 GB in size. Any thoughts on why pg_dump would create a backup file over 10x the size of the raw data? I assume the inline SQL commands might cause some increase in file size, but 10x seems a bit extreme to me.
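For what it's worth, a plain-text dump is uncompressed SQL text, while the on-disk data may be partly compressed (TOAST); the custom format compresses the dump by default and is usually far smaller:
pg_dump -Fc my_database > my_database_backup.dump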
Edit: this is the specific query used to check the DB size (there are only 2 databases on this server, so they were within the LIMIT 20):
SELECT d.datname AS Name, pg_catalog.pg_get_userbyid(d.datdba) AS Owner,
CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT')
THEN pg_catalog.pg_size_pretty(pg_catalog.pg_database_size(d.datname))
ELSE 'No Access'
END AS SIZE
FROM pg_catalog.pg_database d
ORDER BY
CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT')
THEN pg_catalog.pg_database_size(d.datname)
ELSE NULL
END DESC -- nulls first
LIMIT 20

Where is oid in pg_tblspc error message

Recently I got a "could not read block" error that displayed the following path:
pg_tblspc/16010/PG_9.3_201306121/16301/689225.365
After this error, I tried the query below, assuming a few of the numbers are OIDs, but the result is empty:
select oid,relname from pg_class where oid=16010 or oid=16301;
Now my question is: what are the numbers in that pg_tblspc path? I have gone through the link and I believe I might have missed the main point from there too!
Update: much more detailed write-up at http://blog.2ndquadrant.com/postgresql-filename-to-table/
The following info doesn't consider relfilenode changes due to vacuum full etc.
In:
pg_tblspc/16010/PG_9.3_201306121/16301/689225.365
we have:
pg_tblspc: Indicates that it's a relation in a tablespace other than the default or global tablespaces
16010: the tablespace oid from pg_tablespace.oid,
PG_9.3_201306121: A version-specific, catversion-specific string to allow different Pg versions to co-exist in a tablespace,
16301: the database oid from pg_database.oid
689225: the relation oid from pg_class.oid
365: The segment number. PostgreSQL splits big tables up into extents (segments) of 1GB each.
There may also be a fork number, but there isn't one in this path.
It took a fair bit of source code digging for me to be sure about this. The macro you want is relpathbackend in src/include/common/relpath.h, for anyone else looking, and it calls GetRelationPath in src/common/relpath.c.
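A sketch of how those numbers can be mapped back to names (the pg_class lookup has to be run while connected to the database with OID 16301, and matches on relfilenode to allow for the rewrites mentioned above):
-- the tablespace and database can be resolved from any database
SELECT spcname FROM pg_tablespace WHERE oid = 16010;
SELECT datname FROM pg_database WHERE oid = 16301;
-- inside that database, map the file number to a relation
-- (relfilenode usually equals pg_class.oid, but can differ after VACUUM FULL etc.)
SELECT relname FROM pg_class WHERE relfilenode = 689225;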

ORA-01652 Unable to extend temp segment by in tablespace

I am creating a table like
create table tablename
as
select * from table2
I am getting the error
ORA-01652 Unable to extend temp segment by in tablespace
When I googled, I usually found the ORA-01652 error showing some value, like
Unable to extend temp segment by 32 in tablespace
I am not getting any such value. I ran this query:
select
fs.tablespace_name "Tablespace",
(df.totalspace - fs.freespace) "Used MB",
fs.freespace "Free MB",
df.totalspace "Total MB",
round(100 * (fs.freespace / df.totalspace)) "Pct. Free"
from
(select
tablespace_name,
round(sum(bytes) / 1048576) TotalSpace
from
dba_data_files
group by
tablespace_name
) df,
(select
tablespace_name,
round(sum(bytes) / 1048576) FreeSpace
from
dba_free_space
group by
tablespace_name
) fs
where
df.tablespace_name = fs.tablespace_name;
Taken from: Find out free space on tablespace
and I found that the tablespace I am currently using has around 32 GB of free space. I even tried creating the table like
create table tablename tablespace tablespacename
as select * from table2
but I am getting the same error again. Can anyone give me an idea where the problem is and how to solve it? For your information, the select statement would fetch about 40,000,000 records.
I found the solution to this. There is a temporary tablespace called TEMP which is used internally by the database for operations like DISTINCT, joins, etc. Since my query (which has 4 joins) fetches almost 50 million records, the TEMP tablespace did not have enough room to hold all that data, so the query failed even though my own tablespace had free space. After increasing the size of the TEMP tablespace the issue was resolved. Hope this helps someone with the same issue. Thanks :)
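For reference, "increasing the size of TEMP" can look roughly like this (the file names and sizes are just examples; check V$TEMPFILE for the actual tempfile paths):
-- grow an existing tempfile
ALTER DATABASE TEMPFILE 'D:\oracle\Oradata\TEMP01.DBF' RESIZE 4096M;
-- or add another tempfile to the TEMP tablespace
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\oracle\Oradata\TEMP02.DBF' SIZE 2048M AUTOEXTEND ON;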
Create a new datafile by running the following command (for a temporary tablespace, use ADD TEMPFILE rather than ADD DATAFILE):
alter tablespace TABLE_SPACE_NAME add datafile 'D:\oracle\Oradata\TEMP04.dbf'
size 2000M autoextend on;
You don't need to create a new datafile; you can extend your existing tablespace data files.
Execute the following to determine the filename for the existing tablespace:
SELECT * FROM DBA_DATA_FILES;
Then extend the size of the datafile as follows (replace the filename with the one from the previous query):
ALTER DATABASE DATAFILE 'D:\ORACLEXE\ORADATA\XE\SYSTEM.DBF' RESIZE 2048M;
I encountered the same error message but I don't have access to views like dba_free_space because I am not a DBA. I used some of the previous answers to check the available space and I still had plenty of space left. However, after reducing the full table scans as much as possible, the problem was solved. My guess is that Oracle uses temp space to store the data from full table scans, and if the data size exceeds the limit, it shows this error. Hope this helps someone with the same issue.