I am in the middle of developing a C extension library for PostgreSQL. I am using a lot of ereport() calls to help with future debugging.
A typical example of usage in my code would be something like this:
ereport(NOTICE, (errmsg("[%s]: Returned nonzero result (%d).", (const char*)__FUNCTION__, ret)) );
However, when I look in /var/log/postgresql/postgresql-8.4-main.log my messages do not seem to be appearing there - only messages from what I assume is the db server daemon.
So, where are my log messages being stored?
BTW, I am running PG 8.4 on Ubuntu Linux (10.04)
By default, logging of non-critical messages is not enabled on fresh installs. You configure it with the log_destination and logging_collector settings.
PostgreSQL has several message severity levels, and by default the NOTICE level is not saved to log files (even when they are enabled). This is controlled by the log_min_messages setting. NOTICE is, however, emitted to the client by default; that is controlled by the client_min_messages setting.
So if you want these messages stored in log files, you will have to either change NOTICE to WARNING in your code, or set log_min_messages = notice. A sketch of the latter follows.
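For example, here is a minimal sketch against a stock Ubuntu 8.4 install, where the packaged startup script already sends stderr to /var/log/postgresql/postgresql-8.4-main.log (adjust the path and values to your setup):

# /etc/postgresql/8.4/main/postgresql.conf
log_destination = 'stderr'       # already the Ubuntu default
log_min_messages = notice        # write NOTICE and above to the server log
client_min_messages = notice     # what gets sent back to the client

Then reload the configuration (e.g. pg_ctlcluster 8.4 main reload) and your ereport(NOTICE, ...) messages should start appearing in that log file.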
See this http://www.postgresql.org/docs/8.4/static/runtime-config-logging.html
and maybe this http://www.depesz.com/index.php/2011/05/06/understanding-postgresql-conf-log/
I'm hoping for some insight into a problem I'm having with using pgAudit for a PostgreSQL 12 managed instance in GCP Cloud SQL.
Thus far, I've done the following to set this up:
Database flags:
cloudsql.enable_pgaudit=on
pgaudit.log=ddl
pgaudit.log_client=yes (turned this one on for debugging purposes)
pgaudit.log_relation=on
After enabling the cloudsql.enable_pgaudit flag and restarting the instance, I issued a CREATE EXTENSION pgaudit command and confirmed that it was successful. I've also enabled the data access logs as suggested in the Google documentation (they didn't specify which IAM permissions were needed, so I erred on the side of everything). I've also tried setting pgaudit.log=all to see if ANYTHING could be captured, with the same catch that nothing is being logged.
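For reference, a quick sanity check to run in the target database might look like this (a sketch; it assumes the flags above are active and your role can read pg_extension):

-- confirm the extension is installed and see its version
SELECT extname, extversion FROM pg_extension WHERE extname = 'pgaudit';
-- confirm the session actually sees the pgaudit settings
SHOW pgaudit.log;
SHOW pgaudit.log_client;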
With pgaudit.log_client=on, I would expect to see the audit log information returned when viewing the Server Output in DBeaver, but nothing appears there.
Anyone have any insight as to what I might be missing? My goal, ultimately, is to capture DDL operations with session logging. I've generally tested by creating and dropping a table in an effort to get the logs for those operations, i.e.
create table dstest_table (columnone varchar(150));
drop table dstest_table;
I've tried a few more things to get this to work, including additionally setting the flags at the database level. So far, nothing seems to be getting logged.
Update: I never did get pgAudit to work properly; however, I found that DDL operations can be logged outside of pgAudit via the log_statement=ddl flag on the server. I set this, and I'm now getting what I need.
(The original answer included screenshots of the database flags, the Cloud Logging API data access log configuration, and the Cloud SQL data access log configuration.)
Setting log_statement=ddl as a flag allows DDL statements to be logged without using pgAudit, so the majority of the setup above turned out to be unnecessary. Set this flag, and the operations I needed are now logged.
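If you manage the instance with the gcloud CLI, setting the flag might look like the sketch below (INSTANCE_NAME is a placeholder; note that --database-flags replaces the instance's whole flag list, so include any flags you want to keep):

gcloud sql instances patch INSTANCE_NAME --database-flags=log_statement=ddl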
I am using an Oracle database in a Windows environment and running a JSP/servlet web application in Tomcat. After I do some operations with the application, it gives me the following error:
ORA-12518, TNS: listener could not hand off client connection
Can anyone help me identify the reason for this problem and propose a solution?
The solution to this question is to increase the number of processes:
1. Open a command prompt
2. Log in as SYSDBA: sqlplus / as sysdba
3. startup force;
4. show parameter processes; -- shows the currently allocated count (a default such as 150)
5. alter system set processes=800 scope=spfile;
6. Restart the instance (shutdown immediate; then startup;) so that the spfile change takes effect
Tried and tested.
In my case I found that it was because I hadn't closed the database connections properly in my application. Too many connections were open, and Oracle could not make more; it is a resource limitation (see the query sketch below for one way to check). Later, when I checked the Oracle forum, I saw some reasons mentioned there for this problem. Some of them are:
In most cases this happens due to a network problem.
Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.
If it is the second one, verify that large_pool_size is big enough, or check whether the dispatchers were enough for all connections.
You can refer to the link below for further details.
https://community.oracle.com/message/1874842#1874842
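If you suspect leaked connections, as in my case, one way to see who is holding sessions is a query along these lines (a sketch; it assumes you have rights to read v$session):

-- group open sessions by the client machine and program that opened them
select machine, program, status, count(*) as sessions
from v$session
group by machine, program, status
order by sessions desc;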
I ran across the same problem. In my case it was a new install of the Oracle client on a new desktop that was giving the error; other clients were working, so I knew the fix wouldn't be in the database configuration. tnsping worked properly, but sqlplus failed with the ORA-12518 listener error.
I had the tnsnames.ora entry with a SID instead of a SERVICE_NAME. Once I fixed that, I still got the same error, and found I had the wrong service name as well. Once I fixed that too, the error went away.
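For illustration, a tnsnames.ora entry that uses SERVICE_NAME rather than SID might look like this (the alias, host, and service name are made up; substitute your own):

MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl.example.com)  # not (SID = orcl)
    )
  )

You can check which service names the listener actually knows about by running lsnrctl services on the database server.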
If the issue shows up from one day to the next for no apparent reason, add the following lines at the bottom of the listener.ora file. If your ORACLE_HOME environment variable is set like this:
(ORACLE_HOME = C:\oracle11\app\oracle\product\11.2.0\server)
The lines to add are:
ADR_BASE_LISTENER = C:\oracle11\app\oracle\
DIRECT_HANDOFF_TTC_LISTENER=OFF
I had the same problem when executing queries in my application. I'm using Oracle client with Ruby on Rails.
The problem started when I accidentally opened several connections to the DB and didn't close them.
When I fixed this, everything started to work fine again.
Hope this helps another one with the same problem.
I experienced the same error after upgrading to Windows 10. I solved it by starting the Oracle services that had been stopped.
Start all the Oracle services in the Windows Services panel. (The original answer included a screenshot of the services list.)
I had the same issue. After restarting all Oracle services it worked again.
I encountered the same problem.
The Oracle server's listener log gave more information (excerpt below): the SERVICE_NAME in the connection did not match the service name configured in tnsnames.ora. So I changed the application's data source configuration from the SID value to the SERVICE_NAME value, and that fixed it.
23-MAY-2019 02:44:21 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=XXXXXX$))(SERVICE_NAME=orclaic)) * (ADDRESS=(PROTOCOL=tcp)(HOST=::1)(PORT=50818)) * establish * orclaic * 12518
TNS-12518: TNS:listener could not hand off client connection
TNS-12560: TNS:protocol adapter error
TNS-00530: Protocol adapter error
64-bit Windows Error: 203: Unknown error
I had the same issue in a real-time application, and the issue went away by itself the next day. Upon checking, it was found that the server had run out of memory due to additional processes running.
So in my case, the reason was that the server had run out of memory.
First of all:
1. check the listener log;
2. compare show parameter processes with select count(*) from v$process; (see the query sketch after this list)
3. increase the processes parameter if needed; it may require an SGA increase as well.
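A single query for the comparison in step 2 could look like this (a sketch; it assumes access to the v$ views):

-- server processes in use vs. the configured ceiling
select (select count(*) from v$process) as used,
       (select value from v$parameter where name = 'processes') as max_allowed
from dual;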
I'm using PostgreSQL 10 server version locally for educational purposes, and one of my tasks is to force a PANIC Error on this version just to test how is reported in the log files.
I've edited the /etc/postgresql/10/main/postgresql.conf and modified the following lines:
log_min_messages = PANIC
log_min_error_statement = PANIC
PostgreSQL's latest documentation specifies that:
PANIC: Reports an error that caused all database sessions to abort.
I wanted to know if there was an easy way to trigger this kind of error and get it to be printed on the log files.
I searched a bit, but didn't find anything that could work easily.
Setting those two parameters to PANIC is not a good idea. Leave them at ERROR or WARNING; PANIC messages will be logged anyway.
There are many ways to provoke a panic. You could, for example, remove write permissions on the pg_wal directory or other important data structures, then create some data modification activity (or call pg_switch_wal() a couple of times).
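A concrete sketch of that suggestion (the paths assume a stock Ubuntu PostgreSQL 10 cluster; only try this on a throwaway instance, since the point is to crash the server):

# make the WAL directory unwritable for the postgres user
sudo chmod u-w /var/lib/postgresql/10/main/pg_wal
# generate some WAL and force segment switches until a write fails
sudo -u postgres psql -c "create table panic_test as select generate_series(1, 100000) g;"
sudo -u postgres psql -c "select pg_switch_wal();"
sudo -u postgres psql -c "select pg_switch_wal();"
# afterwards, restore the permissions and restart the cluster
sudo chmod u+w /var/lib/postgresql/10/main/pg_wal

The resulting PANIC line should then show up in /var/log/postgresql/postgresql-10-main.log.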
I had successfully configured Oracle WebCenter on one of my VMs.
To access it from my local machine, I made some changes to the firewall settings.
After that, the home page is not accessible and I get a 404 error.
i.e.,
http://:8080/cs/REST/ is not accessible, whereas some other REST URLs are accessible, such as:
http://:8080/cs/REST/types/
http://:8080/cs/REST/sites/
http://:8080/cs/REST/sites/FirstSiteII/
I think something is wrong with my asset type configuration. How do I resolve this?
Any idea would help.
You should be looking at log files which can be generated by the content server.
The “View Server Output” menu provides access to the most recent server output logs.
IIRC, you can set different levels of tracing, and you should select the option(s) relevant to your issue - otherwise the trace log file will generate a huge amount of text, much of it irrelevant to you and making it particularly hard to read.
The log files are timestamped, but you would be better served if you have a single user make a single attempt to land on your URL(s).
Server output also contains tracing output if enabled. Tracing is typically enabled while debugging errors. If server output is being captured in a file, the file could grow large if tracing options are enabled. Consider disabling all server tracing options (especially if "verbose" option is checked), to keep server output file size in check.
I don't believe that there's anything served at /cs/REST/ - what would you expect to see?
On Centos 6.2, trying to get the kernel log redirected to the serial console, I came across an issue where agetty seems to be respawning every few keypresses.
That is, I get a login prompt in the middle of typing (after logging in).
In order to investigate the issue further, I'm looking for the location of agetty logs, but to no avail. Where and how can I see log messages for respawned agetty process?
The "diagnostics" section of the "agetty" command manpage states:
Depending on how the program was configured, all diagnostics are written to the console device or reported via the syslog(3) facility. Error messages are produced if the port argument does not specify a terminal device; if there is no utmp entry for the current process (System V only); and so on.
The syslog facility by default writes to the /var/log/messages file, but it can be configured to write elsewhere by editing its configuration file - /etc/syslog.conf for classic syslog, or /etc/rsyslog.conf on CentOS 6, which ships rsyslog.
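For reference, the stock CentOS 6 rule that sends most messages there looks like this (taken from a default /etc/rsyslog.conf; yours may differ):

# log anything of level info or higher, except mail/authpriv/cron noise
*.info;mail.none;authpriv.none;cron.none                /var/log/messages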
Finally, if the error you get is "respawning too fast", you should check the corresponding entry in your /etc/inittab file.
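For illustration: on classic SysV init systems the serial getty is an /etc/inittab line like the one below; note, though, that CentOS 6 actually boots with Upstart, so the equivalent configuration usually lives under /etc/init/ (e.g. /etc/init/serial.conf) and /etc/inittab holds little more than the default runlevel. The baud rate, port, and terminal type below are assumptions; match them to your console= kernel arguments:

# classic SysV inittab entry: respawn agetty on the first serial port
S0:2345:respawn:/sbin/agetty -L 115200 ttyS0 vt102

Either way, respawn complaints from init are sent through syslog, so grep -i respawn /var/log/messages is a quick way to find them.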