How to add a datafile to the SYSTEM tablespace in Oracle 10g - oracle10g

I am trying the following way to add a datafile to the SYSTEM tablespace, but it gives an error. Please suggest how to add a datafile to it.
SQL> ALTER TABLESPACE SYSTEM ADD DATAFILE '/u01/oracle/oradata
/orcl/system02.dbf' SIZE 10240M;
*
ERROR at line 1:
ORA-19502: write error on file "/u01/oracle/oradata/orcl/system02.dbf",
blockno
193536 (blocksize=8192)
ORA-27072: File I/O error
Linux-x86_64 Error: 2: No such file or directory
Additional information: 4
Additional information: 193536
Additional information: 610304

You might not have sufficient space on the drive. Instead of allocating a 10G chunk, allocate 100M and allow it to autoextend up to 10G:
ALTER TABLESPACE SYSTEM ADD DATAFILE '/u01/oracle/oradata/orcl/system02.dbf' SIZE 100m autoextend on maxsize 10g;
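If the ALTER still fails the same way, it is worth confirming that the filesystem itself has room and that the directory exists; a quick check from the OS (path taken from the question):
df -h /u01/oracle/oradata/orcl    # free space on the mount holding the datafiles
ls -ld /u01/oracle/oradata/orcl   # the directory must exist and be writable by the oracle user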

First check how much free space is available in the SYSTEM tablespace using the queries below:
select sum(bytes) from dba_free_space where tablespace_name='SYSTEM';
and
select tablespace_name, extent_management from dba_tablespaces where tablespace_name='SYSTEM';
then try this:
alter tablespace SYSTEM add datafile '/u01/oracle/oradata/orcl/system02.dbf' SIZE 10240M;
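You can also list the datafiles the SYSTEM tablespace already has, including whether they autoextend, with a standard dictionary view:
-- current SYSTEM datafiles, sizes in MB, and autoextend settings
select file_name,
       bytes/1024/1024    as size_mb,
       autoextensible,
       maxbytes/1024/1024 as max_mb
from   dba_data_files
where  tablespace_name = 'SYSTEM';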

Related

DB2: how to configure external tables using extbl_location, extbl_strict_io

How do I configure DB2 external tables using extbl_location and extbl_strict_io? Could you please give an example of how to set up these parameters? I need to create an external table and load data into it.
I need to know how to configure the parameters extbl_location and extbl_strict_io.
I created the table like this:
CREATE EXTERNAL TABLE textteacher(ID int, Name char(50), email varchar(255)) USING ( DATAOBJECT 'teacher.csv' FORMAT TEXT CCSID 1208 DELIMITER '|' REMOTESOURCE 'LOCAL' SOCKETBUFSIZE 30000 LOGDIR '/tmp/logs' );
and tried to load data into it:
insert into textteacher (ID,Name,email) select id,name,email from teacher;
and got this exception: [428IB][-20569] The external table operation failed due to a problem with the corresponding data file or diagnostic files. File name: "teacher.csv". Reason code: "1".. SQLCODE=-20569, SQLSTATE=428IB, DRIVER=4.26.14
If I understand the documentation correctly, the extbl_location parameter should point to the directory where the data will be saved. I suppose the full path would look like
$extbl_location+'/'+teacher.csv
I found some documentation about the error:
https://www.ibm.com/support/pages/how-resolve-sql20569n-error-external-table-operation
I tried to run this command in the Docker container's command line:
/opt/ibm/db2/V11.5/bin/db2 get db cfg | grep -i external
but it did not return any information about external tables.
CREATE EXTERNAL TABLE statement:
file-name
...
When both the REMOTESOURCE option is set to LOCAL (this is its default value) and the extbl_strict_io configuration parameter is set
to NO, the path to the external table file is an absolute path and
must be one of the paths specified by the extbl_location configuration
parameter. Otherwise, the path to the external table file is relative
to the path that is specified by the extbl_location configuration
parameter followed by the authorization ID of the table definer. For
example, if extbl_location is set to /home/xyz and the authorization
ID of the table definer is user1, the path to the external table file
is relative to /home/xyz/user1/.
So, if you use a relative path to a file such as teacher.csv, you must set extbl_strict_io to YES.
For an unload operation, the following conditions apply:
If the file exists, it is overwritten.
Required permissions:
If the external table is a named external table, the owner must have read and write permission for the directory of this file.
If the external table is transient, the authorization ID of the statement must have read and write permission for the directory of this file.
Moreover, inside the directory specified by extbl_location you must create a sub-directory named after the username (in lowercase) that owns the table, and ensure that this user (not the instance owner) has read/write permission on that sub-directory.
Update:
Setup, presuming that user1 runs this INSERT statement:
sudo mkdir -p /home/xyz/user1
# user1 must have an ability to cd to this directory
sudo chown user1:$(id -gn user1) /home/xyz/user1
db2 connect to mydb
db2 update db cfg using extbl_location /home/xyz extbl_strict_io YES
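After the update you can verify the values the same way as the earlier grep (database name mydb as in the connect above); if the new values do not show up yet, reconnect first:
db2 get db cfg for mydb | grep -i extbl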

Where is /pg_log/ in postgres 9.6

I was trying to trace slow queries. I'm new to Pg 9.6.
I could not find the /pg_log/ folder in the new version. It was available in /data/pg_log/ in older versions (I was using 9.2).
If this is a duplicate question, please tag it.
Connect to your Postgres instance and run:
t=# show log_directory;
log_directory
---------------
pg_log
(1 row)
t=# show logging_collector ;
logging_collector
-------------------
on
(1 row)
https://www.postgresql.org/docs/9.6/static/runtime-config-logging.html
log_directory (string)
When logging_collector is enabled, this parameter determines the
directory in which log files will be created. It can be specified as
an absolute path, or relative to the cluster data directory. This
parameter can only be set in the postgresql.conf file or on the server
command line. The default is pg_log.
You might also want to check all non-default log settings with:
select name,setting from pg_settings where source <>'default' and name like 'log%';
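Since the original goal was tracing slow queries, here is a minimal sketch that makes them show up in that log directory (assumes superuser access; the 500 ms threshold is only an example):
-- log every statement that runs longer than 500 ms, then reload the config without a restart
alter system set log_min_duration_statement = '500ms';
select pg_reload_conf();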

How to resolve pg_restore hangup

The application creates Postgres backup files using the command
pg_dump mydb -ib -Z3 -f mybackup.backup -Fc -h localhost -T firma*.attachme -T firma*.artpilt
on Postgres 9.3 on Windows 8 x64.
Creating an empty database and using pgAdmin to restore from this file on Postgres 9.3 on Windows 7 x64 runs forever.
CPU usage by pg_restore is 0%.
The Postgres log file does not contain any information at the normal log level.
The backup file was transferred over the web. Its header starts with PGDMP and there are a lot of CREATE commands at the start. Its size is 24 MB, so the restore should not take a long time.
The restore is done on the same computer where the server runs.
How can I restore from the backup? How can I check the .backup file's integrity?
I tried to use 7-Zip's test option to test it, but 7-Zip reported that it cannot open the file.
Update
select * from pg_stat_activity
shows a number of pg_restore processes (8 jobs were specified on restore since the CPU has 8 cores) starting at 10:51 when the restore of the backup starts. All of them have an idle status and their start time does not change.
Running this query multiple times does not change the result.
930409;"betoontest";8916;10;"postgres";"pg_restore";"::1";"";49755;"2014-11-18 10:51:39.262+02";"";"2014-11-18 10:51:42.064+02";"2014-11-18 10:51:42.094+02";f;"idle";"CREATE INDEX makse_dokumnr_idx ON makse USING btree (dokumnr);
"
930409;"betoontest";9640;10;"postgres";"pg_restore";"::1";"";49760;"2014-11-18 10:51:39.272+02";"";"2014-11-18 10:51:39.662+02";"2014-11-18 10:51:42.044+02";f;"idle in transaction (aborted)";"COPY rid (id, reanr, dokumnr, nimetus, hind, kogus, toode, partii, myygikood, hinnak, kaubasumma, yhik, kulukonto, kuluobjekt, rid2obj, reakuupaev, kogpak, kulum, baasostu, ostuale, rid3obj, rid4obj, rid5obj, rid6obj, rid7obj, rid8obj, rid9obj, kaskogus, a (...)"
930409;"betoontest";8176;10;"postgres";"pg_restore";"::1";"";49761;"2014-11-18 10:51:39.272+02";"";"2014-11-18 10:51:42.064+02";"2014-11-18 10:51:42.094+02";f;"idle";"CREATE INDEX attachme_idmailbox_idx ON attachme USING btree (idmailbox);
"
930409;"betoontest";8108;10;"postgres";"pg_restore";"::1";"";49765;"2014-11-18 10:51:39.272+02";"";"2014-11-18 10:51:42.064+02";"2014-11-18 10:51:42.094+02";f;"idle";"CREATE INDEX makse_kuupaev_kellaeg_idx ON makse USING btree (kuupaev, kellaaeg);
"
930409;"betoontest";8956;10;"postgres";"pg_restore";"::1";"";49764;"2014-11-18 10:51:39.282+02";"";"2014-11-18 10:51:42.074+02";"2014-11-18 10:51:42.094+02";f;"idle";"CREATE INDEX makse_varadokumn_idx ON makse USING btree (varadokumn);
"
930409;"betoontest";11780;10;"postgres";"pg_restore";"::1";"";49763;"2014-11-18 10:51:39.292+02";"";"2014-11-18 10:51:42.064+02";"2014-11-18 10:51:42.094+02";f;"idle";"ALTER TABLE ONLY mitteakt
ADD CONSTRAINT mitteakt_pkey PRIMARY KEY (klient, toode);
"
930409;"betoontest";4680;10;"postgres";"pg_restore";"::1";"";49762;"2014-11-18 10:51:39.292+02";"";"2014-11-18 10:51:42.064+02";"2014-11-18 10:51:42.094+02";f;"idle";"ALTER TABLE ONLY mailbox
ADD CONSTRAINT mailbox_pkey PRIMARY KEY (guid);
"
930409;"betoontest";5476;10;"postgres";"pg_restore";"::1";"";49766;"2014-11-18 10:51:39.332+02";"";"2014-11-18 10:51:42.064+02";"2014-11-18 10:51:42.094+02";f;"idle";"CREATE INDEX makse_kuupaev_idx ON makse USING btree (kuupaev);
The data is restored only partially. Maybe the file is truncated and Postgres or pg_restore waits for data forever. How can such hangups be prevented?
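One way to sanity-check the archive and watch the restore from the command line (a sketch only; the database name is taken from the pg_stat_activity output above):
pg_restore -l mybackup.backup                                      # list the archive's table of contents; a badly corrupted file usually errors out here
pg_restore -v -j 1 -d betoontest mybackup.backup 2> restore.log    # single job, verbose, so the log shows which item it stops on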

Trouble with PostgreSQL loading a large csv file into a table

On my setup, PostgreSQL 9.2.2 seems to error out when trying to load a large csv file into a table.
The size of the csv file is ~9GB
Here's the SQL statement I'm using to do the bulk load:
copy chunksBase (chunkId, Id, chunk, chunkType) from 'path-to-csv.csv' delimiters ',' csv
Here's the error I get after a few minutes:
pg.ProgrammingError: ERROR: out of memory
DETAIL: Cannot enlarge string buffer containing 1073723635 bytes by 65536 more bytes.
CONTEXT: COPY chunksbase, line 47680536
I think that the buffer can't allocate more than exactly 1GB, which makes me think that this could be a postgresql.conf issue.
Here are the uncommented lines in postgresql.conf:
bash-3.2# cat postgresql.conf | perl -pe 's/^[ \t]*//' | grep -v '^#' | sed '/^$/d'
log_timezone = 'US/Central'
datestyle = 'iso, mdy'
timezone = 'US/Central'
lc_messages = 'en_US.UTF-8' # locale for system error message
lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
lc_numeric = 'en_US.UTF-8' # locale for number formatting
lc_time = 'en_US.UTF-8' # locale for time formatting
default_text_search_config = 'pg_catalog.english'
default_statistics_target = 50 # pgtune wizard 2012-12-02
maintenance_work_mem = 768MB # pgtune wizard 2012-12-02
constraint_exclusion = on # pgtune wizard 2012-12-02
checkpoint_completion_target = 0.9 # pgtune wizard 2012-12-02
effective_cache_size = 9GB # pgtune wizard 2012-12-02
work_mem = 72MB # pgtune wizard 2012-12-02
wal_buffers = 8MB # pgtune wizard 2012-12-02
checkpoint_segments = 16 # pgtune wizard 2012-12-02
shared_buffers = 3GB # pgtune wizard 2012-12-02
max_connections = 80 # pgtune wizard 2012-12-02
bash-3.2#
Nothing that explicitly sets a buffer to 1GB.
What's going on here? Even if the solution is to increase a buffer in postgresql.conf, why does Postgres seem to try to bulk load the entire csv file into RAM on a single copy call? One would think that loading large csv files is a common task; I can't be the first person to come across this problem, so I would have figured that Postgres chunks the bulk load so that the buffer limit is never reached in the first place.
As a workaround, I'm splitting the csv into smaller files and then calling copy for each file. This seems to be working fine, but it's not a particularly satisfying solution, because now I have to maintain split versions of each large csv that I want to load into Postgres. There has to be a more proper way to bulk load a large csv file into Postgres.
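For reference, the split-and-load workaround can be scripted so the pieces do not have to be maintained by hand; a rough sketch (the database name mydb is a placeholder, and it assumes no quoted field contains a newline, since split works line by line):
# split into ~1M-line pieces and load each one through psql's client-side \copy
split -l 1000000 path-to-csv.csv chunk_
for f in chunk_*; do
  psql -d mydb -c "\copy chunksBase (chunkId, Id, chunk, chunkType) from '$f' delimiter ',' csv"
done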
EDIT1: I am in the process of making sure that the csv file is not malformed in any way. I'm doing this by trying to load all split csv files into postgres. If all can be loaded, then this indicates that the issue here is not likely due to the csv file being malformed. I've already found a few issues. Not sure yet if these issues are causing the string buffer error when trying to load the large csv.
It turned out to be a malformed csv file.
I split the large csv into smaller chunks (each with 1 million rows) and started loading each one into postgres.
I started getting more informative errors:
pg.ProgrammingError: ERROR: invalid byte sequence for encoding "UTF8": 0x00
CONTEXT: COPY chunksbase, line 15320779
pg.ProgrammingError: ERROR: invalid byte sequence for encoding "UTF8": 0xe9 0xae 0x22
CONTEXT: COPY chunksbase, line 369513
pg.ProgrammingError: ERROR: invalid byte sequence for encoding "UTF8": 0xed 0xaf 0x80
CONTEXT: COPY chunksbase, line 16602
There were a total of 5 rows with invalid utf8 byte sequences, out of a few hundred million. After removing those rows, the large 9GB csv loaded just fine.
It would have been nice to get the invalid byte sequence errors when loading the large file initially. But at least they appeared once I started isolating the problem.
Note that the line number mentioned in the error when loading the large file initially had no relation to the encoding errors found when loading the smaller csv subset files. The initial line number was simply the point in the file where exactly 1GB of data had been read, so it was tied to the 1GB buffer allocation error. But that error had nothing to do with the real problem...
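For anyone hitting the same thing, the bad rows can be located or stripped from the shell before loading; a rough sketch (assumes GNU/Unix tools and a UTF-8 locale; note that 0x00 is technically valid UTF-8 but is still rejected by Postgres, so it needs its own pass):
# print, with line numbers, any lines that are not valid UTF-8 in the current locale
grep -naxv '.*' path-to-csv.csv
# drop byte sequences that are not valid UTF-8
iconv -f UTF-8 -t UTF-8 -c path-to-csv.csv > cleaned.csv
# remove NUL (0x00) bytes separately
tr -d '\000' < cleaned.csv > cleaned.nonul.csv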

Create Database Oracle

I'm new to Oracle, and I'm kind of amused at how complicated it is to create a database in it. It seems that I have to follow all the steps below just to create a database. Is there an easier way (without an IDE)?
FROM (http://www.dba-oracle.com/oracle_create_database.htm):
EXTREMELY minimal manual Oracle database creation script
Set your ORACLE_SID
export ORACLE_SID=test
export ORACLE_HOME=/path/to/oracle/home
Create a minimal init.ora
$ORACLE_HOME/dbs/init.ora
control_files = (/path/to/control1.ctl,/path/to/control2.ctl,/path/to/control3.ctl)
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = test
db_block_size = 8192
sga_max_size = 1073741824 #one gig
sga_target = 1073741824 #one gig
Create a password file
$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd.ora password=oracle entries=5
Start the instance
sqlplus / as sysdba
startup nomount
Create the database
create database test
logfile group 1 ('/path/to/redo1.log') size 100M,
group 2 ('/path/to/redo2.log') size 100M,
group 3 ('/path/to/redo3.log') size 100M
character set WE8ISO8859P1
national character set utf8
datafile '/path/to/system.dbf' size 500M autoextend on next 10M maxsize unlimited extent management local
sysaux datafile '/path/to/sysaux.dbf' size 100M autoextend on next 10M maxsize unlimited
undo tablespace undotbs1 datafile '/path/to/undotbs1.dbf' size 100M
default temporary tablespace temp tempfile '/path/to/temp01.dbf' size 100M;
Note: there are some other things you can do here, like "ARCHIVELOG", "SET TIME_ZONE =", "USER SYS IDENTIFIED BY password" and "USER SYSTEM IDENTIFIED BY password"
Run catalog and catproc
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
Change passwords
alter user sys identified by whatever;
alter user system identified by whatever;
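As for an easier route without an IDE: on most installations the Database Configuration Assistant can do all of the above in one command in silent mode. A rough sketch only (the template name, SID and passwords are placeholders, and the exact flags differ between releases, so check dbca -help first):
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname test -sid test \
  -sysPassword oracle -systemPassword oracle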