I am using Db2 LUW on a Windows machine. I want to get logs for the DDL and DML queries used in the database.
The default transaction logs (for example, S000001.LOG) contain 'null' bytes and are not in a readable format, so I enabled auditing and extracted the archived audit logs into .del files.
But the audit log extraction creates .del files like this:
execute.del
"2019-09-05-01.19.44.443001","EXECUTE","STATEMENT",13,0,"TEST2","Administrator","ADMINISTRATOR","ADMINISTRATOR",,,"*LOCAL.DB2.190904193137","db2bp.exe",,,,,,,,"ADMINISTRATOR","SQLC2O29",203,," "," ",10,1,0,0,"WRITE_DML","auditlobs.0.42/","CS","auditlobs.42.808/",1,0,,,,,,"2019-09-05-01.19.44.178765",,"DB2","DESKTOP-R9O62O0"
The empty spaces appear as NUL NUL NUL when the file is opened in Notepad++.
auditlobs.file
insert into db2admin.testtable values(223)GEN_CMPL ( DD ( ¸ 0 ¸ 8 ¸ # ¸ H ¸ P ¸
X ¸
This file contains control characters like STX, NUL, ETX, US, etc.
In my case, I either need to get the logs in some readable format (like the db2diag.log file) or forward them to a syslog server in a standard format.
What is the best way to do it?
Is there any way to write the audit logs as Windows Application events, like MSSQL DDL/DML auditing, so that I could easily forward those logs?
Screenshots of auditlobs.file and execute.del: https://imgur.com/a/9LydhYK
Thanks in advance!
These CSV files can be imported or loaded into Db2 tables for further analysis and processing.
You may also use any other tool that can process CSV files and forward their contents to whatever system you need.
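For example, here is a minimal sketch of importing the extracted EXECUTE records into a table. It assumes the audit tables were created with the db2audit.ddl script shipped in sqllib/misc, that AUDIT is the schema you used for them, and that the extracted files (including auditlobs) sit in the current directory; the lobsinfile modifier is needed because the statement text lives in the auditlobs file rather than in execute.del itself:
db2 CONNECT TO TEST2
db2 "IMPORT FROM execute.del OF DEL LOBS FROM . MODIFIED BY LOBSINFILE INSERT INTO AUDIT.EXECUTE"
Once the rows are in a table, you can read the statement text with plain SQL or export it in whatever format your syslog forwarder expects.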
I've spent quite some time trying to google this issue. However, I keep finding articles and answers about the opposite of what I want.
System Specifications:
CentOS 6.7
JBoss 4.2.3
PostgreSQL v8.4
I'm using JBoss as a message server and storing messages in a PostgreSQL database.
Problem:
Prior to the upgrade, messages would be stored in the database with carriage returns formatted as ^M, and any database query would return the same format.
After the upgrade, the JBoss log still shows a ^M in the messages being sent through. However, the ^M is now showing up in the database as \r.
Reasons:
I have many scripts that rely on the ^M to parse out messages and lines.
I want the data in the database to be the same as the data that resides in my logs.
When using vim, I find that reading and locating ^M is easier than reading and locating \r.
Update
Encoding of the database is UTF-8.
The field I'm accessing is a text type.
Upon further testing, it seems that the \r is not actually being put into the database.
I have a Perl script that connects to the database and creates a file of records. All these records show the ^M which I desire.
However, using a command like the following in the CLI will output records with \r:
psql dbName -U dbUser -c "select record from table" > records.txt
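As a quick check (a sketch, reusing the table and column names from the example above), the following should print t if the stored value really contains a carriage-return byte, which would mean psql is merely displaying that byte as the two characters \r:
psql dbName -U dbUser -t -c "select position(E'\r' in record) > 0 from table limit 1"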
Workaround
I found a temporary workaround to change the \r back to a ^M.
psql dbName -U dbUser -t -c"select record from table" | sed 's!\\r!^M!g' > records.txt
In order to correctly use ^M in the sed command:
Hold down control (ctrl)
Press v then release v
Press m then release m
^M should now be displayed, and you can release the Control key
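If typing the literal control character is inconvenient (for example inside a script), a sketch of an alternative is to let the shell generate the carriage return with printf; this assumes a POSIX shell, and note the doubled escaping (the shell reduces \\\\ to \\, which sed then reads as a literal backslash followed by r):
psql dbName -U dbUser -t -c "select record from table" | sed "s!\\\\r!$(printf '\r')!g" > records.txt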
I am trying to back up a kdb+ database, including all scripts and resource files. I can copy tables with the command below, but this doesn't include scripts and dependency files. Is there any way to copy an entire kdb+ database, or is there any tool available for this?
Copy tables command:
h:hopen hsym `$"localhost:5050"                / open a handle to the remote process
{[x;y] @[`.;y;:;] x y}[h;] each h"tables[]"    / fetch each remote table and assign it into the local root context
You can save and load contexts (taken from http://code.kx.com/q4m3/12_Workspace_Organization/#126-saving-and-loading-contexts):
`:currentws set value `.
That will include the functions that are currently loaded. Presumably your scripts are already saved on disk.
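One way to restore it later (a sketch; `:currentws is the file written above) is to upsert the saved dictionary back into the root context:
`. upsert get `:currentws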
We can use
copy (select * from mytbl) to 'D:/products.csv' with csv header
to export the data in mytbl to local disk D:.
So is it possible to use the same method to upload the file directly to an FTP server?
I tried this:
copy (select * from mytbl) to 'ftp://usrname:mypasswrd#ftp.drivehq.com/masters/3/product/products.csv' with csv header
but got this error:
ERROR: relative path not allowed for COPY to file
SQL state: 42602
Using PostgreSQL 9.2.
PostgreSQL does not support any source/destination for COPY other than a file or stdin/stdout.
What you can do is COPY to stdout and pipe that to a program that writes the data to the ftp dir. psql's \copy is useful for this:
psql -c "\copy mytable to stdout with (format csv, header)" | ncftpput -c my.ftp.host /path/on/host
You can use any tool that accepts the input data on a pipe to write to the remote ftp file; ncftpput is just one option.
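For instance, here is a sketch of the same idea using curl instead (host, credentials, and path are placeholders; curl's -T - uploads whatever arrives on stdin):
psql -c "\copy mytable to stdout with (format csv, header)" | curl -T - ftp://user:password@my.ftp.host/path/on/host/products.csv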
PostgreSQL 9.3 added support for invoking COPY with a program, e.g. COPY ... TO PROGRAM 'command', but there are serious security concerns with running programs under the PostgreSQL server user, which is why it is restricted to superusers and of questionable safety even then. It's much safer to run the program client-side, and psql is ideal for that.
Using the COPY statement of PostgreSQL, we can load data from a text file into a database table, as below:
COPY CME_ERROR_CODES FROM E'C:\\Program Files\\ERROR_CODES\\errcodes.txt' DELIMITER AS '~'
The above statement is run from a machine that has the PostgreSQL client, whereas the server is on another Windows machine. Running the above statement fails with ERROR: could not open file "C:\Program Files\ERROR_CODES\errcodes.txt" for reading: No such file or directory.
After some research, I observed that the COPY statement looks for the loader file (errcodes.txt) on the PostgreSQL server's machine at the same path (C:\Program Files\ERROR_CODES). To test this, I created the same folder structure on the PostgreSQL server's machine and kept the errcodes.txt file there. Then the COPY statement worked well. This looks like a very tough constraint of the COPY statement to me.
Is there any setting to avoid this, or is it just the behavior of the COPY statement? I didn't find any information about it in the PostgreSQL documentation.
Here's the standard solution:
COPY foo (i, j, k) FROM stdin;
1<TAB>2<TAB>3
\.
The data must be properly escaped and tab-separated.
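For instance, in the text format a literal newline inside a value is written as \n, a tab as \t, a backslash as \\, and \N (by default) stands for NULL; a sketch reusing the hypothetical foo table from above:
COPY foo (i, j, k) FROM stdin;
2<TAB>first line\nsecond line<TAB>\N
\.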
Actually, it is in the docs; even the grammar definition includes STDIN. See: http://www.postgresql.org/docs/9.1/static/sql-copy.html
If you're using a programming language with COPY support, you will have pg_putcopy or a similar function, so you don't have to worry about escaping and concatenation.
Hints on how to do this manually in Python: Recreating Postgres COPY directly in Python?
The Perl way -> http://search.cpan.org/dist/DBD-Pg/Pg.pm#COPY_support
Hope this helps.
From the documentation:
COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible to the server and the name must be specified from the viewpoint of the server. When STDIN or STDOUT is specified, data is transmitted via the connection between the client and the server.
If you want to copy a file from the local machine to a table on the server, use psql's \copy command.
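For the example above, that would look something like this sketch (host, user, and database names are placeholders; \copy reads the file on the client and streams it over the connection):
psql -h server.host -U db_user -d your_db -c "\copy CME_ERROR_CODES FROM 'C:/Program Files/ERROR_CODES/errcodes.txt' WITH DELIMITER AS '~'"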
I am facing a peculiar problem where I need to update a particular value in the database to, say, 'Hellò'. When I run a normal update statement, the values are updated fine. But when I put it in a .sql script file and then run the update statement, the last character gets replaced by a junk value. Can someone enlighten me on this, and on how Oracle processes script files?
If the update statement works, then this isn't an issue with the character encoding in the database, so there are two main culprits to look at: your software and your file encoding.
Check that your editor is UTF8 compliant. Notepad, for example, is not, but WordPad is, and so are better editors like UltraEdit. You also need to check that the file is saved as UTF8, since this is not always the default, and if you edit and save a file with Notepad, for example, it won't be UTF8 any more.
Once you have a UTF8 file, you must load it into the database via software that supports UTF8, which excludes SQL*Plus in Windows 10g; SQL Developer is OK. If you want to use SQL*Plus, upgrade your client to 11g. You don't say how you are loading it, so check that whatever you are using supports UTF8.
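For example, on a Windows client you can declare that your environment is UTF8 by setting NLS_LANG before running the script; a sketch, assuming an 11g client and a script saved as UTF8 (the connect string and file name are placeholders):
rem declare the client character set as UTF8, then run the script
set NLS_LANG=AMERICAN_AMERICA.AL32UTF8
sqlplus user/password@mydb @update_script.sql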