I created a database called mapdata in which I will create a table called school. One of the columns uses the data type db2gse.ST_Point. When I tried creating the table school with that column, I got an error saying db2gse.ST_Point is an undefined name. So then I figured I had to enable spatial support using this statement:
db2se enable_db mapdata
But that gives me an error as well. It says a temporary table space could not be created because there is no available system temporary table space that has a compatible page size.
How can I resolve this problem?
If you take a look at the db2se enable_db page in the manual, you will probably notice this, among other things:
Usage notes
Ensure that you have a system temporary table space with a page size of 8 KB or larger and with a minimum size of 500 pages. This is a requirement to run the db2se enable_db command successfully.
The error message tells you that there is no such tablespace. I suspect that your database also does not have a matching bufferpool.
To create a system temporary tablespace you might use the following commands (assuming your database is configured with automatic storage):
db2 "create bufferpool bp8k pagesize 8 k"
db2 "create system temporary tablespace tmpsys8k pagesize 8 k bufferpool bp8k"
I'm facing a weird issue with Postgres 11.
I'm creating a bunch of users and then assigning them some roles, but also letting them connect to a certain database.
After successfully creating 2478 roles, when I try to create a new user I get this error:
db=# create user foo;
CREATE ROLE
db=# grant connect on database db to foo;
ERROR: row is too big: size 8168, maximum size 8160
The same error shows up in the db log.
I checked whether the db volume is running out of space; there is still 1 TB to spare there...
I can't imagine Postgres trying to insert more than 8 KB when running a simple GRANT...?
edit:
It seems a similar question was asked already (usage privileges on schema):
ERROR: row is too big: size 8168, maximum size 8164
So the solution would be to create one role, say connect_to_my_db, grant CONNECT on the database to that role, and then instead of running GRANT CONNECT for each user, run GRANT connect_to_my_db.
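A minimal sketch of that approach, using the database db and user foo from the example above:
CREATE ROLE connect_to_my_db;
GRANT CONNECT ON DATABASE db TO connect_to_my_db;
-- role membership is one row in pg_auth_members per user,
-- instead of one more ACL entry in pg_database per user:
GRANT connect_to_my_db TO foo;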
You found the solution yourself; let me add an explanation of the cause of the error:
Each table row is stored in one of the 8KB blocks of the table, so that is its size limit.
Normal tables have a TOAST table, where long attributes can be stored out-of-line. This allows PostgreSQL to store very long rows.
System catalog tables, however, do not have TOAST tables, so their rows are limited to 8KB.
The access control list of an object is stored in that object's catalog row, so many individual permissions on a single object can exceed the limit.
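You can watch the ACL grow as grants accumulate; datacl is the ACL column of the pg_database catalog (database name from the question):
SELECT datname, pg_column_size(datacl) AS acl_bytes
FROM pg_database
WHERE datname = 'db';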
Be glad — if you had to manage permissions for thousands of users individually, you'd end up in DBA hell anyway.
I'm getting the following error message when trying to create temporary tables in DB2 (11.1) on Ubuntu 16.04:
SQL Error [42727]: A table space could not be found with a page size of at least "4096" that authorization ID "DB2INST1" is authorized to use.. SQLCODE=-286, SQLSTATE=42727, DRIVER=4.24.92
This is the query I am trying to run (minimal example to demonstrate behaviour):
CREATE GLOBAL TEMPORARY TABLE testTbl (col1 INT NOT NULL)
I have tried creating an 8KB tablespace with an 8KB bufferpool and granting access for the db2inst1 user to it as described in this question: DB2- Getting A default table space could not be found with a page size of at least "8192" that authorization ID "***" is authorized to use, but this didn't seem to help.
If anyone could give me any insight into why this is happening and how to resolve it, it would be much appreciated.
Could this be a permissions-based issue? db2inst1 is the default user created with the installation, so I would assume it has admin privileges over the database.
A CGTT (created global temporary table) can only be created in a user temporary tablespace, which is a different type from a regular tablespace.
Use the syntax create user temporary tablespace ... while running as the db2inst1 user, and ensure it completes successfully before retrying the CGTT.
If db2inst1 is the instance owner as you suggest, then it will have the rights to do this. However, if an account other than db2inst1 wants to run the create global temporary table, that account may need to be granted USE access to the user temporary tablespace.
If you plan to use DGTT and CGTT objects, it is wise to ensure at build time, per database, that user temporary tablespaces get created for each of the page sizes 4K, 8K, 16K and 32K (after ensuring that bufferpools already exist per page size), then ensure that the relevant accounts and roles have USE access, and consider revoking public access to them; a grant sketch follows the DDL below.
For example, this will create a 4K user temporary tablespace in a Db2-LUW V11.1 database and reuse the default 4K bufferpool. Many of these options can be omitted, but this shows what db2look would produce and lets you see what can be changed:
CREATE USER TEMPORARY TABLESPACE "UTMP4K"
PAGESIZE 4096 MANAGED BY AUTOMATIC STORAGE
USING STOGROUP "IBMSTOGROUP"
EXTENTSIZE 4
PREFETCHSIZE AUTOMATIC
BUFFERPOOL "IBMDEFAULTBP"
OVERHEAD INHERIT
TRANSFERRATE INHERIT
FILE SYSTEM CACHING
DROPPED TABLE RECOVERY OFF;
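And the matching grants, as mentioned above (someuser is a placeholder for whichever account needs to create temporary tables):
REVOKE USE OF TABLESPACE UTMP4K FROM PUBLIC;
GRANT USE OF TABLESPACE UTMP4K TO USER someuser;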
I am approaching the 10 GB limit that Express has on the primary database file.
The main problem appears to be some fixed-length char(500) columns that are never near that length.
I have two tables with about 2 million rows between them. These two tables add up to about 8 GB of data with the remainder being spread over another 20 tables or so. These two tables each have 2 char(500) columns.
I am testing a way to convert these columns to varchar(500) and recover the trailing spaces.
I tried this:
Alter Table Test_MAILBACKUP_RECIPIENTS
Alter Column SMTP_address varchar(500)
GO
Alter Table Test_MAILBACKUP_RECIPIENTS
Alter Column EXDN_address varchar(500)
This quickly changed the column type but obviously didn’t recover the space.
The only way I can see to do this successfully is to:
Create a new table in tempdb with the varchar(500) columns,
Copy the information into the temp table trimming off the trailing spaces,
Drop the real table,
Recreate the real table with the new varchar(500) columns,
Copy the information back.
I’m open to other ideas here, as I’ll have to take my application offline while this process completes.
Another thing I’m curious about is the primary key identity column.
This table has a Primary Key field set as an identity.
I know I have to use Set Identity_Insert on to allow the records to be inserted into the table and turn it off when I’m finished.
How will recreating the table affect new records being inserted after I’m finished? Or is this just “Microsoft Magic” that I don’t need to worry about?
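For reference, this is the pattern I mean (assuming an identity column named ID and a staging table #tmp; both names are placeholders):
SET IDENTITY_INSERT Test_MAILBACKUP_RECIPIENTS ON;
INSERT INTO Test_MAILBACKUP_RECIPIENTS (ID, SMTP_address, EXDN_address)
SELECT ID, SMTP_address, EXDN_address FROM #tmp;
SET IDENTITY_INSERT Test_MAILBACKUP_RECIPIENTS OFF;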
The problem with your initial approach was that you converted the columns to varchar but didn't trim the existing whitespace (which is preserved after the conversion). After changing the data type of the columns, you should run:
update Test_MAILBACKUP_RECIPIENTS set
SMTP_address=rtrim(SMTP_address), EXDN_address=rtrim(EXDN_address)
This will eliminate all trailing spaces from your table, but note that the actual disk size will stay the same: SQL Server doesn't shrink database files automatically; it just marks that space as unused and available for other data.
You can use this script from another question to see the actual space used by data in the DB files:
Get size of all tables in database
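For a one-off check of a single table, the built-in procedure also works (table name from your example):
EXEC sp_spaceused N'Test_MAILBACKUP_RECIPIENTS';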
Shrinking a database is usually not recommended, but when there is a big difference between used space and file size you can do it with dbcc shrinkdatabase:
dbcc shrinkdatabase (YourDatabase, 10) -- leaving 10% of free space for new data
OK, I did a SQL backup, disabled the application, and tried my script anyway.
I was shocked that it ran in under 2 minutes on my slow old server.
I re-enabled my application and it still works. (Yay)
Looking at the reported size of the table now, it went from 1.4 GB to 126 MB! So at least that has bought me some time.
(Before and after screenshots, with the Data size in KB circled.)
My next problem is the MailBackup table which also has two char(500) columns.
It is shown as 6.7GB.
I can't use the same approach, as this table contains a FILESTREAM column with around 190 GB of data, and tempdb does not support FILESTREAM as far as I know.
Looks like this might be worth a new question.
For one of my issues, I am trying to understand the functionality of the DB2 entities below:
System Temporary table space.
Page Size.
Table Space.
Buffer Pool.
And below are my observations:
Each table is linked to a tablespace in the DB2 catalog view syscat.tables.
Tablespaces are linked to bufferpools, with the relation defined in syscat.tablespaces.
A system temporary tablespace is a tablespace that the DB might use while executing a query (for example, for sorts and joins that spill to disk).
Page size is the unit that defines the limits of a tablespace and determines how much data a tablespace can hold.
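For example, I can see these links with a query like this (DATATYPE distinguishes regular, system temporary and user temporary tablespaces):
db2 "SELECT tbspace, pagesize, bufferpoolid, datatype FROM syscat.tablespaces"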
Is there something wrong in my understanding above? And when I execute a query, how does the DB choose which tablespace to use?
I am running PostgreSQL on Windows 8 using the OpenGeo Suite. I'm running out of disk space on a large join. How can I change the temporary directory where the "hash-join temporary file" gets stored?
I am looking at the PostgreSQL configuration file and I don't see a tmp file directory.
Note: I am merging two tables with 10 million rows using a variable text field which is set to a primary key.
This is my query:
UPDATE blocks
SET "PctBlack1" = race_blocks."PctBlack1"
FROM race_blocks
WHERE race_blocks.esriid = blocks.geoid10
First, make sure you have an index on these columns (in both tables); this would make PostgreSQL use fewer temporary files. Also, set the GUC work_mem as high as possible, to make PostgreSQL use more memory for operations like this.
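A sketch of both suggestions; the index names are left for PostgreSQL to generate, and the work_mem value is only illustrative (it must fit your available RAM):
CREATE INDEX ON blocks (geoid10);
CREATE INDEX ON race_blocks (esriid);
SET work_mem = '512MB';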
Now, if you still need to change the temporary path, you first need to create a tablespace (if you didn't do it already):
CREATE TABLESPACE temp_disk LOCATION 'F:\pgtemp';
Then, you have to set the GUC temp_tablespaces. You can set it per database, per user, at postgresql.conf or inside the current session (before your query):
SET temp_tablespaces TO 'temp_disk';
UPDATE blocks
SET "PctBlack1" = race_blocks."PctBlack1"
FROM race_blocks
WHERE race_blocks.esriid = blocks.geoid10
One more thing: the user must have the CREATE privilege on the tablespace to use it:
GRANT CREATE ON TABLESPACE temp_disk TO app_user;
I was unable to point PostgreSQL at the F:\pgtemp directory directly due to a lack of permissions.
So I created a symlink to it using the Windows command line "mklink /D" (a soft link). Now PostgreSQL writes its temporary files to c:\Users\Administrator.opengeo\pgdata\Administrator\base\pgsql_tmp, but they actually get stored on the F: drive.
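For reference, a sketch of that command (paths copied from above; run from an elevated prompt with the server stopped, after removing or renaming the original pgsql_tmp directory, since mklink will not overwrite an existing one):
mklink /D "c:\Users\Administrator.opengeo\pgdata\Administrator\base\pgsql_tmp" "F:\pgtemp"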