Why am I unable to view the values of an XMLTYPE column in the Oracle SQL Developer grid? - oracle-sqldeveloper

I am using Oracle DB version 11g Enterprise Edition Release 11.2.0.4.0 and SQL Developer version 18.1.0.095.
When I run the query below against the table
SELECT USER_DATA FROM table1;
all I get as output in the grid is (XMLTYPE). I am unable to view the value of the XMLTYPE column in the Oracle SQL Developer grid.
I have also changed the setting in SQL Developer by checking "Display XML value in Grid", but no luck so far. Can someone please suggest how to view it in the grid?
create table TEST_SOLACE_EXT_QTABLE (
  SENDER_PROTOCOL number,
  USER_DATA       SYS.XMLTYPE,
  USER_PROP       sys.anydata
)
pctfree 10 pctused 0 initrans 1 maxtrans 255
xmltype column USER_DATA store as basicfile binary xml (
  tablespace users
  enable storage in row
  chunk 8192
  retention
  nocache logging
  storage ( INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
            PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
            BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT )
)
ALLOW NONSCHEMA DISALLOW ANYSCHEMA;
Thanks & Regards
Gautam
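(For reference, a sketch of a common workaround rather than a fix for the grid preference itself: serialize the XMLTYPE to a CLOB in the query, so the grid shows plain text instead of (XMLTYPE).)
-- serialize the XML to a CLOB; assumes the column holds well-formed documents
SELECT XMLSERIALIZE(DOCUMENT t.USER_DATA AS CLOB) AS user_data_text
FROM table1 t;
-- older-style alternative using the XMLTYPE member function
SELECT t.USER_DATA.getClobVal() AS user_data_text
FROM table1 t;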

Related

PostgreSQL Database size is not equal to sum of size of all tables

I am using an AWS RDS PostgreSQL instance. I am using the query below to get the size of all databases.
SELECT datname, pg_size_pretty(pg_database_size(datname))
from pg_database
order by pg_database_size(datname) desc
One database's size is 23 GB, but when I ran the query below to get the sum of the sizes of all individual tables in this particular database, it came to around 8 GB.
select pg_size_pretty(sum(pg_total_relation_size(table_schema || '.' || table_name)))
from information_schema.tables
As it is an AWS RDS instance, I don't have rights on pg_toast schema.
How can I find out which database objects are consuming the space?
Thanks in advance.
The documentation says:
pg_total_relation_size ( regclass ) → bigint
Computes the total disk space used by the specified table, including all indexes and TOAST data. The result is equivalent to pg_table_size + pg_indexes_size.
So TOAST tables are covered, and so are indexes.
One simple explanation could be that you are connected to a different database than the one that is shown to be 23 GB in size.
Another likely explanation would be materialized views, which consume space, but do not show up in information_schema.tables.
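For example, a query against pg_catalog.pg_class (a sketch; the LIMIT is arbitrary) also picks up materialized views, which the information_schema query above misses:
SELECT relname,
       relkind,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind IN ('r', 'm')   -- ordinary tables and materialized views
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 20;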
Yet another explanation could be that there have been crashes that left some garbage files behind, for example after an out-of-space condition during the rewrite of a table or index.
This is of course harder to debug on a hosted platform, where you don't have shell access...

How to use VACUUM FULL in PostgreSQL

I'm trying to test vacuum in Postgres.
Created a table:
create table test_vc(col integer);
Inserted multiple records.
INSERT INTO test_vc SELECT * FROM generate_series(1, 1000000);
Checked the size of table:
SELECT pg_size_pretty(pg_relation_size('test_vc'));
o/p: 36 MB
Deleted multiple records:
DELETE FROM test_vc WHERE col > 50000;
Again checked the size of the table, which is still the same, i.e. 36 MB.
Then I performed VACUUM FULL in order to reclaim the free space.
VACUUM FULL test_vc;
But the size of the table still remains the same, i.e. 36 MB.
If I try using TRUNCATE on this table, the size drops back to 0 bytes.
Kindly let me know why VACUUM is not reclaiming the space.
Please find the vacuum settings:
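Separately, a minimal sketch of a sequence that should show the table shrinking, assuming the DELETE is committed before the VACUUM FULL (the post does not show whether it was):
-- delete in a committed transaction, rebuild the table, then re-check its size
BEGIN;
DELETE FROM test_vc WHERE col > 50000;
COMMIT;
VACUUM FULL test_vc;
SELECT pg_size_pretty(pg_relation_size('test_vc'));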

Oracle: Conversion from WE8ISO8859P1 to AL32UTF8

I am trying to consolidate some databases and I have some problems with character sets.
My database looks like this:
Source Database
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
NLS_CHARACTERSET WE8ISO8859P1
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_LENGTH_SEMANTICS BYTE
Destination Database
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
NLS_CHARACTERSET AL32UTF8
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_LENGTH_SEMANTICS BYTE
I made an export of a schema, but when I imported it into the new database I got a lot of errors like:
ORA-02374: conversion error loading table owner.table_name
ORA-12899: value too large for column col_name (actual: 403, maximum: 400)
I have run the csscan utility. Here is the result:
TABLE   Convertible   Truncation    Lossy
-----   -----------   ----------   ------
    1             0           18       24
    2         2,248          120   19,854
    3         2,155          120   19,551
    4         5,431          294   41,531
    5         5,925          114   18,352
    6           129            4    5,095
    7           109            4    5,017
    8         2,149          151    5,219
-----   -----------   ----------   ------
So, is there any way to find out the size the VARCHAR2 values will need in the destination before the import? I can alter the structure of the tables; in this example I could alter my table and increase the column size from 400 to 403.
If you need more information or anything else, please leave a comment.
Thanks in advance!
The csscan tool will tell you how big the truncated records will be after the conversion, and you can use that to change the size of your column. In the following example, you need to change the size of SAMPLE_COLUMN from VARCHAR2(40) to VARCHAR2(43) to cover everything that is being converted:
csscan .err sample output:
User : SAMPLE_USER
Table : SAMPLE_TABLE
Column: SAMPLE_COLUMN
Type : VARCHAR2(40)
Number of Exceptions : 5
Max Post Conversion Data Size: 43
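Based on that, an ALTER along these lines should make room for the converted data (the names are taken from the sample output above; switching the column to character semantics is another common option):
ALTER TABLE sample_user.sample_table MODIFY (sample_column VARCHAR2(43));
-- or size the column in characters instead of bytes
ALTER TABLE sample_user.sample_table MODIFY (sample_column VARCHAR2(40 CHAR));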
Hope this helps.

Redshift Copy and auto-increment does not work

I am using the Redshift COPY command to copy JSON data from S3.
The table definition is as follows:
CREATE TABLE my_raw
(
id BIGINT IDENTITY(1,1),
...
...
) diststyle even;
The COPY command I am using is as follows:
COPY my_raw FROM 's3://dev-usage/my/2015-01-22/my-usage-1421928858909-15499f6cc977435b96e610298919db26' credentials 'aws_access_key_id=XXXX;aws_secret_access_key=XXX' json 's3://bemole-usage/schemas/json_schema' ;
I am expecting that any newly inserted id will always be > select max(id) from my_raw. That is clearly not the case.
If I issue the above COPY command twice, the first time the ids run from 1 to N even though that file only creates 114 records (that's a known issue with Redshift when it has multiple shards). The second time the ids are also between 1 and N, but they take free numbers that were not used in the first copy.
See below for a demo:
usagedb=# COPY my_raw FROM 's3://bemole-usage/my/2015-01-22/my-usage-1421930213881-b8afbe07ab34401592841af5f7ddb31c' credentials 'aws_access_key_id=XXXX;aws_secret_access_key=XXXX' json 's3://bemole-usage/schemas/json_schema' COMPUPDATE OFF;
INFO: Load into table 'my_raw' completed, 114 record(s) loaded successfully.
COPY
usagedb=#
usagedb=# select max(id) from my_raw;
max
------
4556
(1 row)
usagedb=# COPY my_raw FROM 's3://bemole-usage/my/2015-01-22/my-usage-1421930213881-b8afbe07ab34401592841af5f7ddb31c' credentials 'aws_access_key_id=XXXX;aws_secret_access_key=XXXX' json 's3://bemole-usage/schemas/my_json_schema' COMPUPDATE OFF;
INFO: Load into table 'my_raw' completed, 114 record(s) loaded successfully.
COPY
usagedb=# select max(id) from my_raw;
max
------
4556
(1 row)
Thx in advance
The only solution I found to make sure the ids are sequential, based on insertion order, is to maintain a pair of tables. The first one is a stage table into which the items are inserted by the COPY command; the stage table does not have an id column.
Then I have another table that is an exact replica of the stage table, except that it has an additional column for the ids. A job then takes care of filling the master table from the stage table using the ROW_NUMBER() function.
In practice, this means executing the following statement after each Redshift COPY is performed:
insert into master
(id,result_code,ct_timestamp,...)
select
#{startIncrement}+row_number() over(order by ct_timestamp) as id,
result_code,...
from stage;
Then the ids are guaranteed to be sequential/consecutive in the master table.
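For illustration, a minimal sketch of the two tables described above (columns reduced to the ones shown in the INSERT; all names are hypothetical):
-- stage table: loaded directly by COPY, no id column
CREATE TABLE stage
(
  result_code  INT,
  ct_timestamp TIMESTAMP
) diststyle even;
-- master table: identical columns plus the id filled in by the INSERT ... ROW_NUMBER() job
CREATE TABLE master
(
  id           BIGINT,
  result_code  INT,
  ct_timestamp TIMESTAMP
) diststyle even;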
I can't reproduce your problem; however, it is interesting how to get identity columns set correctly in conjunction with COPY. Here is a small summary:
Be aware that you can specify the columns (and their order) for a COPY command.
COPY my_table (col1, col2, col3) FROM 's3://...'
So if:
the EXPLICIT_IDS flag is NOT set,
no columns are listed as shown above,
and your file does not contain data for the IDENTITY column,
then the identity values in the table will be set automatically and monotonically, as we all want.
doc:
If an IDENTITY column is included in the column list, then EXPLICIT_IDS must also be specified; if an IDENTITY column is omitted, then EXPLICIT_IDS cannot be specified. If no column list is specified, the command behaves as if a complete, in-order column list was specified, with IDENTITY columns omitted if EXPLICIT_IDS was also not specified.
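As a sketch of that rule (paths, credentials, and column names are placeholders): listing only the non-IDENTITY columns lets Redshift assign the ids itself, whereas including id in the list would require EXPLICIT_IDS:
-- id is omitted from the column list, so Redshift generates it automatically
COPY my_raw (result_code, ct_timestamp)
FROM 's3://...'
CREDENTIALS 'aws_access_key_id=XXXX;aws_secret_access_key=XXXX'
JSON 's3://.../json_schema';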

PostgreSQL select result size

I have a table in a PostgreSQL DB and run a SELECT against this table with some constraints, and then I want to know how much disk space the result of this SELECT takes. I know there is a Postgres function pg_total_relation_size that gives me the size of a table in the DB, but how can I find the size of this 'subtable'?
Any Ideas?
I use PostgreSQL 9.1
To get the data size, allowing for TOAST compression, etc:
regress=> SELECT sum(pg_column_size(devices)) FROM devices WHERE country = 'US';
sum
-----
105
(1 row)
To get the disk storage required, including block allocation overhead, headers, etc.:
regress=> CREATE TEMPORARY TABLE query_out AS SELECT * FROM devices WHERE country = 'US';
SELECT 3
regress=> SELECT pg_total_relation_size('query_out');
pg_total_relation_size
------------------------
16384
(1 row)
Why are the results so different? Because the latter query is reporting the size of the 8k block for the main table, and the 8k block for the TOAST table. It doesn't care that these blocks are mostly empty.