How can I get log information from a PostgreSQL server?
I found the ability to watch it in pgAdmin under Tools -> Server Status. Is there a SQL way, an API, or a console way to show the content of the log file(s)?
Thanks.
I am a command-line addict, so I prefer the “console” way.
Where you find the logs and what is inside them depends on how you've configured your PostgreSQL cluster. This is the first thing I change after creating a new cluster, so that I get all the information I need in the expected location.
Please review your $PGDATA/postgresql.conf file for your current settings and adjust them as appropriate.
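As a minimal sketch of what that section can look like (the exact values here are illustrative assumptions, not recommendations):

logging_collector = on                        # capture server messages into log files
log_directory = 'log'                         # relative to $PGDATA
log_filename = 'postgresql-%Y-%m-%d.log'      # one file per day
log_min_messages = warning
log_line_prefix = '%m [%p] %u@%d '            # timestamp, pid, user@database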
The adminpack extension is the one providing the useful info behind pgAdmin3's server status functionality.
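For the SQL and console ways, here is a hedged sketch assuming PostgreSQL 10 or newer with logging_collector = on (older servers expose similar helpers through adminpack):

-- list the files in the log directory and read the one currently in use
SELECT * FROM pg_ls_logdir();
SELECT pg_read_file(pg_current_logfile());

# console way, assuming the default log_directory = 'log'
tail -f "$PGDATA"/log/postgresql-*.log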
I'm hoping for some insight into a problem I'm having with using pgAudit for a PostgreSQL 12 managed instance in GCP Cloud SQL.
Thus far, I've done the following to set this up:
Database flags:
cloudsql.enable_pgaudit=on
pgaudit.log=ddl
pgaudit.log_client=yes (turned this one on for debugging purposes)
pgaudit.log_relation=on
After enabling the cloudsql.enable_pgaudit flag and restarting the instance, I issued a CREATE EXTENSION pgaudit command and confirmed that it was successful. I've also enabled the data access logs as suggested in the Google documentation (they didn't specify which permissions were needed in IAM, so I erred on the side of everything). I've also tried setting pgaudit.log=all to see if ANYTHING could be captured, with the same result: nothing is being logged.
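For reference, the flags were applied roughly like this; my-instance is a placeholder name and the command is a sketch of what I ran rather than an exact transcript:

gcloud sql instances patch my-instance \
    --database-flags=cloudsql.enable_pgaudit=on,pgaudit.log=ddl,pgaudit.log_client=yes,pgaudit.log_relation=on

(Note that --database-flags replaces the instance's full set of flags, so any other flags have to be repeated.)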
With pgaudit.log_client=on, I would expect to see the audit log information returned when viewing the Server Output in DBeaver, but nothing appears there.
Anyone have any insight as to what I might be missing? My goal, ultimately, is to capture DDL operations with session logging. I've generally tested by creating and dropping a table in an effort to get the log entries for those operations, i.e.
create table dstest_table (columnone varchar(150));
drop table dstest_table;
I've tried a few more things to get this to work, including setting the flags additionally at the database level. So far, nothing seems to be getting logged.
Update: I never did get pgAudit to work properly; however, I found that DDL operations can be logged outside of pgAudit via the log_statement=ddl flag on the server. I set this, and I'm now getting what I need.
(Screenshots: Database Flags, Cloud Logging API Data Access Log, Cloud SQL Data Access Log.)
Setting log_statement=ddl as a flag allows DDL statements to be logged without using pgAudit, so the majority of the setup above was unnecessary. With this flag set, the operations I needed are now logged.
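A minimal sketch of the working setup, again with my-instance as a placeholder:

gcloud sql instances patch my-instance --database-flags=log_statement=ddl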
I am working with MongoDB version 4.2.
I searched the documentation to find out how to get my cluster's default read and write concerns, but couldn't.
Is there no shell command that shows this?
Thanks
The getDefaultRWConcern administrative command retrieves the global default read or write concern settings.
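A minimal shell sketch, assuming your deployment is recent enough to support the command (it is issued as an admin command):

db.adminCommand( { getDefaultRWConcern: 1 } )

The matching setDefaultRWConcern admin command changes those defaults.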
When I run a flow in Tableau Server, it fails with the following error message:
Unfortunately this error is not helpful in understanding the actual cause of the problem.
Is there a way to see the actual underlying error? Or how am I supposed to debug this?
The flow runs fine in my Tableau Prep.
(EDIT: I previously stated here that I used a different data source to test in Prep, but this is no longer true.)
Arguably, that error log does give you a hint as to what the issue is: the problem is with the Output step. This is most likely a permissions error when Tableau Server goes to publish the output, since you can do it locally in Tableau Prep.
Can the credentials for your flows be embedded on the server? This affects whether the output location will be accessible. Are all flows run using a service account? If so, make sure that service account has access to the output location.
If these troubleshooting steps don't work, check the server logs. For this you'll need to check the logs on Tableau Server using the command line to see if there is a more detailed response. If you have the access, run tsm maintenance ziplogs to zip the log files and investigate.
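If you have TSM access, a minimal sketch (the default archive name is assumed to be logs.zip):

tsm maintenance ziplogs
unzip logs.zip -d tableau-logs    # then search e.g. the backgrounder logs, which handle flow runs, for the detailed error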
I am currently trying to set up a redirect on write for an installation of OpenLDAP 2.2.
I have two instances running. One is configured to be read-only (only read access, database specified as read-only) and has redirect configured to point to the second instance. The second instance is configured to allow for the desired write permissions.
When I attempt a modify on the first instance it fails as expected but does not send back the referral. Am I missing a piece of the configuration? Am I even on the right path? Any guidance would be greatly appreciated. Thanks.
In the database section of your slapd.conf, did you add the redirection like this?
updateref "ldap://master-host:port/"
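For context, a minimal slapd.conf sketch of the read-only instance; the backend, suffix and host are placeholders:

# read-only database that refers writes to the writable instance
database        bdb
suffix          "dc=example,dc=com"
readonly        on
updateref       "ldap://master-host:389/"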
So, it turns out the best way to do this is to go ahead and set up replication using slurpd and point all requests at the slave instance. Unfortunately you can't set up the master and slave on the same host (for obvious reasons, but still), so I had to spin up a second VM to get this going.
Honestly, if I was not trying to replicate a redirect problem it wouldn't be worth it, but I have to duplicate a production issue.
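For anyone following along, the master side ends up with slurpd-style replication directives roughly like this (hosts, credentials and paths are placeholders; this is a sketch based on the 2.2-era documentation rather than my exact config):

# master slapd.conf: record changes for slurpd and describe the replica
replogfile      /var/lib/ldap/replog
replica         host=slave-host:389
                binddn="cn=replicator,dc=example,dc=com"
                bindmethod=simple
                credentials=secret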
For more information on slapd and specifically slurpd, the OpenLDAP documentation is actually crazy helpful: slurpd config for OpenLDAP 2.2
I’m working on an experiment for a course I’m taking about tuning DB2. I’m using Amazon EC2 (AWS) to conduct the experiment.
My problem, however, is that I have to test no compression against row compression in DB2, and to do that I’ve created a bsh file that runs those experiments. But when I reach the compression part I get the error “Transaction log is full”, and no matter how low I set the number of inserts it keeps complaining about my transaction log.
I’ve scoured Google for a day now trying to find some way to flush or clear the log, or just get rid of it; I don’t need it. I’ve tried to increase the size but nothing has helped.
Please, I hope someone has an answer to solve this frustrating problem
Thanks
- Mestika
There is no need to "clear the log" in DB2. When a transaction is rolled back, DB2 releases the log space used by the transaction.
If you've increased the log size and it has not helped, please post more information about what you're trying to do.
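As a starting point, these commands show the current log configuration and enlarge it; sample and the values are placeholders, while LOGFILSIZ, LOGPRIMARY and LOGSECOND are the standard parameter names:

db2 get db cfg for sample | grep -i log
db2 update db cfg for sample using LOGFILSIZ 10000 LOGPRIMARY 20 LOGSECOND 50

If the script does all of its inserts in a single transaction, committing in batches will also keep the active log from filling up.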
There is no need to restart. Just force the applications off with db2 force applications all.
Increase the active log file size (LOGFILSIZ), force off the application connections, and terminate them.
Then try to run the job again:
db2 force applications all
db2 update db cfg for sample using logfilsiz 5125
db2 force applications all
db2 terminate
db2 connect to sample
Run your job and monitor.
Just restart the instance; it will release the pending logs and you should be fine.
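If you do take that route, a restart of the instance is simply (force is only needed if connections are still active):

db2stop force
db2start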