SQL Anywhere table length - sqlanywhere

How can I see the size of a table (in bytes) in SQL Anywhere? Is that possible?
Thank you

To find the number of bytes taken up by the data in a table:
select db_property('pagesize')*(stab.table_page_count+stab.ext_page_count)
from sys.systab stab join sys.sysuser suser on stab.creator=suser.user_id
where stab.table_name='table_name' and suser.user_name='user_name'
This does not include the size of any indexes or triggers on the table.
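As a variation (a sketch built from the same catalog columns, not part of the original answer), the same expression can be used to list every table in the database ordered by approximate size; note that system tables are included unless you filter on user_name:
select suser.user_name, stab.table_name,
       db_property('PageSize') * (stab.table_page_count + stab.ext_page_count) as approx_bytes
from sys.systab stab join sys.sysuser suser on stab.creator = suser.user_id
order by approx_bytes desc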

Related

PostgreSQL Database size is not equal to sum of size of all tables

I am using an AWS RDS PostgreSQL instance. I am using the query below to get the size of all databases.
SELECT datname, pg_size_pretty(pg_database_size(datname))
from pg_database
order by pg_database_size(datname) desc
One database's size is 23 GB, but when I ran the query below to get the sum of the sizes of all individual tables in that database, the result was around 8 GB.
select pg_size_pretty(sum(pg_total_relation_size(table_schema || '.' || table_name)))
from information_schema.tables
As it is an AWS RDS instance, I don't have rights on pg_toast schema.
How can I find out which database objects are consuming the space?
Thanks in advance.
The documentation says:
pg_total_relation_size ( regclass ) → bigint
Computes the total disk space used by the specified table, including all indexes and TOAST data. The result is equivalent to pg_table_size + pg_indexes_size.
So TOAST tables are covered, and so are indexes.
One simple explanation could be that you are connected to a different database than the one that is shown to be 23GB in size.
Another likely explanation would be materialized views, which consume space, but do not show up in information_schema.tables.
Yet another explanation could be that there have been crashes that left some garbage files behind, for example after an out-of-space condition during the rewrite of a table or index.
This is of course harder to debug on a hosted platform, where you don't have shell access...
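Two of those possibilities are easy to rule out from SQL alone. A minimal check using standard catalog views (not part of the original answer): confirm which database the session is connected to, and list the sizes of materialized views, which the query over information_schema.tables misses:
select current_database();
select schemaname, matviewname,
       pg_size_pretty(pg_total_relation_size(format('%I.%I', schemaname, matviewname)::regclass)) as size
from pg_matviews
order by pg_total_relation_size(format('%I.%I', schemaname, matviewname)::regclass) desc;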

Most efficient way to DECODE multiple columns -- DB2

I am fairly new to DB2 (and SQL in general) and I am having trouble finding an efficient method to DECODE columns.
Currently, the database has a number of tables, most of which have a significant number of columns stored as numbers; these numbers correspond to entries in a lookup table holding the real values. We are talking about 9,500 different values (e.g. '502 = yes' or '1413 = Graduate Student').
Normally I would just add a WHERE clause and match on equality, but since there are 20-30 columns that need to be decoded per table, I can't really do this (that I know of).
Is there a way to effectively just display the corresponding value from the other table?
Example:
SELECT TEST_ID, DECODE(TEST_STATUS, 5111, 'Approved', 5112, 'In Progress') TEST_STATUS
FROM TEST_TABLE
The above works fine, but I manually look up the numbers and review them to build the statements. As I mentioned, some tables have 20-30 columns that would need this, and some need DECODE statements with 12-15 conditions.
Is there anything that would allow me to do something simpler like:
SELECT TEST_ID, DECODE(TEST_STATUS = *TableWithCodeValues*) TEST_STATUS
FROM TEST_TABLE
EDIT: Also, to be more clear, I know I can do a ton of INNER JOINS, but I wasn't sure if there was a more efficient way than that.
From a logical point of view, I would consider splitting the lookup table into several domain/dimension tables. Not sure if that is possible for you, so I'll leave that aside.
As mentioned in my comment, I would stay away from using DECODE as described in your post. I would start by doing it with the usual joins:
SELECT a.TEST_STATUS
, b.TEST_STATUS_DESCRIPTION
, a.ANOTHER_STATUS
, c.ANOTHER_STATUS_DESCRIPTION
, ...
FROM TEST_TABLE as a
JOIN TEST_STATUS_TABLE as b
ON a.TEST_STATUS = b.TEST_STATUS
JOIN ANOTHER_STATUS_TABLE as c
ON a.ANOTHER_STATUS = c.ANOTHER_STATUS
JOIN ...
If things are too slow there are a couple of things you can try:
Create a statistical view that can help determine cardinalities of the joins (this may help the optimizer create a better plan; see the sketch after these options):
https://www.ibm.com/support/knowledgecenter/sl/SSEPGG_9.7.0/com.ibm.db2.luw.admin.perf.doc/doc/c0021713.html
If your license allows it, you can experiment with Materialized Query Tables (MQTs). Note that there is a penalty for modifications of the base tables, so if you have more of an OLTP workload, this is probably not a good idea:
https://www.ibm.com/developerworks/data/library/techarticle/dm-0509melnyk/index.html
A third option, if your lookup table is fairly static, is to cache the lookup table in the application. Read TEST_TABLE from the database, and look up descriptions in the application. A further improvement may be to add triggers that invalidate the cache when the lookup table is modified.
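A minimal sketch of the statistical view idea from the first option, reusing the table and column names from the join example above (the schema name MYSCHEMA is hypothetical, and RUNSTATS is run from the DB2 command line):
CREATE VIEW SV_TEST AS
  SELECT a.TEST_STATUS, b.TEST_STATUS_DESCRIPTION
  FROM TEST_TABLE a JOIN TEST_STATUS_TABLE b
    ON a.TEST_STATUS = b.TEST_STATUS;
ALTER VIEW SV_TEST ENABLE QUERY OPTIMIZATION;
RUNSTATS ON TABLE MYSCHEMA.SV_TEST WITH DISTRIBUTION;
The view itself is never queried directly; after RUNSTATS, the optimizer can use its statistics to estimate the join cardinalities in the original query.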
If you don't want to do all these joins, you could create your own LOOKUP function.
create or replace function lookup(IN_ID INTEGER)
returns varchar(32)
deterministic reads sql data
begin atomic
declare OUT_TEXT varchar(32);--
set OUT_TEXT=(select text from test.lookup where id=IN_ID);--
return OUT_TEXT;--
end;
With a table TEST.LOOKUP like
create table test.lookup(id integer, text varchar(32))
containing some id/text pairs, this will return the text value corresponding to an id, or NULL if not found.
With your mentioned 10k id/text pairs and an index on the ID column, this shouldn't be a performance issue, as such a data volume should easily be cached in the corresponding bufferpool.
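With that function in place, the DECODE calls and joins collapse to something like the following (a sketch reusing the column names from the question; a real schema would likely need one function, or a domain column in TEST.LOOKUP, per code set):
SELECT TEST_ID,
       LOOKUP(TEST_STATUS) AS TEST_STATUS,
       LOOKUP(ANOTHER_STATUS) AS ANOTHER_STATUS
FROM TEST_TABLE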

huge sql dump even after deleting large number of rows of data

The size of the SQL dump is the same (30 GB) even after I delete a large number of rows from a MySQL (MyISAM) table.
Note: the variable innodb_file_per_table is ON.
mysql> delete from radacct where YEAR(acctstarttime)='2014';
Query OK, 1963534 rows affected (1 hour 30.58 sec)
What exactly is the question?
If you have trouble storing the backup in one piece,
maybe it would be easier for you to transport if it were smaller in size but split into more parts:
part 1
select * from radacct where YEAR(acctstarttime)='2014' and id<100000000 order by id asc;
part 2
select * from radacct where YEAR(acctstarttime)='2014' and id>=100000000 and id<200000000 order by id asc;
etc ...
And afterwards you could compress it.
PS: I can't add a reply to your comment, so I will add it here:
You can view this page, it has very useful info: MySQL InnoDB not releasing disk space after deleting data rows from table
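If the goal is instead to reclaim the disk space left behind by the DELETE, the usual step for a MyISAM table after removing many rows is to rebuild it (standard MySQL syntax, not from the original answer; it locks the table while it runs):
OPTIMIZE TABLE radacct;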

Oracle order by query very slow

I am facing a problem when doing ORDER BY on a table.
My select query is working fine, but when I do ORDER BY (even on the primary key) it just goes on and on with no results. Finally I need to kill the session. The table has 20K records.
Any suggestions for this?
Query is as:
SELECT * FROM Users ORDER BY ID;
I do not know about the query plan, as I am new to Oracle.
For the unordered query, is SQL Developer retrieving and displaying all 20K rows, or just the first 50? Your comparison might not be fair.
What is the size of those 20K rows: select bytes/1024/1024 MB from user_segments where segment_name = 'USERS'; I've seen many cases where a few megabytes of data use many gigabytes of storage. Maybe the data was very large before and somebody just deleted it (this doesn't remove the space). Or maybe somebody inserted those rows 1 at a time with an APPEND hint, and each row is taking an entire block.
Your query might be waiting for more temp tablespace for sorting; look at DBA_RESUMABLE.
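To see what the sort is actually doing, a quick check is to look at the execution plan (standard Oracle tooling, not specific to this answer):
EXPLAIN PLAN FOR SELECT * FROM Users ORDER BY ID;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
If the plan shows a SORT ORDER BY over a full table scan whose estimated size is far larger than 20K small rows, that points back at the segment-size issue above.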

DB2 error code -670 when adding a new column programmatically

I'm developing against a DB2 database, and at some point I get an error code "-670" when trying to add a new column.
The error code indicates that the row is too wide for the tablespace's page size. Anyway, I just went and ran a DESCRIBE command, and I estimate I don't have more than 17K for the table width (I just added up the numeric values in the "Length" column), but I'm not sure of that estimate since I have many BLOB columns. Is there a SQL command (or DB2 command line utility) I could use to retrieve the exact info regarding the table width?
The sum of the LENGTH values in the output of the DESCRIBE TABLE command is a fairly accurate gauge of row width if you don't count the BLOB, CLOB, or LONG VARCHAR columns, which are not stored inline with the rest of the columns. There is a small amount of overhead bytes that aren't shown in that report, but it's usually not a significant portion of the table. DB2 has historically stored large objects separately to improve manageability and performance of the rest of the data in the table. DB2 has recently supported storing large objects inline in order to make use of compression and buffering, but I haven't seen it used widely and I doubt it will become a popular approach.
It sounds like it's time for you to relocate your table to a tablespace with a larger page size. Unless you're maxed out at a 32K page already, you have the option of doubling your page size by migrating your table to a larger bufferpool and tablespace, which will give you more room for additional columns. If you need to keep the data from the old table, loading from a cursor is a quick way to copy a large amount of data from one table to another within the same database. Your other option is to export the table's contents to a flatfile so you can drop and recreate the table in the wider tablespace and load the data back in.
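A sketch of the "load from a cursor" approach mentioned above, run from the DB2 command line; MY_TABLE_NEW is a hypothetical name and is assumed to already exist in the tablespace with the larger page size:
DECLARE c1 CURSOR FOR SELECT * FROM MY_TABLE;
LOAD FROM c1 OF CURSOR INSERT INTO MY_TABLE_NEW;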
Answering my own question: this script can be very useful in giving you a good estimate of the used table width (hence, you can get an idea of the remaining free space):
select SUM(300) from sysibm.syscolumns where tbname = 'MY_TABLE' and (typename = 'BLOB' or typename = 'DBCLOB')
select 2 * SUM(length) from sysibm.syscolumns where tbname = 'MY_TABLE' and typename = 'VARGRAPHIC'
select SUM(length) from sysibm.syscolumns where tbname = 'MY_TABLE' and typename != 'BLOB' and typename != 'DBCLOB' and typename != 'VARGRAPHIC'
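For convenience, the three queries can be combined into a single statement (a sketch; the 300-byte figure for LOB descriptors and the doubling of VARGRAPHIC lengths are the same estimates used above, not exact catalog values):
select sum(case when typename in ('BLOB', 'DBCLOB') then 300
                when typename = 'VARGRAPHIC' then 2 * length
                else length end) as approx_row_width
from sysibm.syscolumns
where tbname = 'MY_TABLE'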