Looking for PostgreSQL or MySQL LogAppender

Can anyone point me to a PostgreSQL log appender for Kaa? Thanks a lot!
James

We had a similar requirement and ended up writing a custom JDBC log appender tailored to our needs.

You can find a guide to creating a custom log appender in the documentation. There are also code examples for the log appenders that are available in Kaa by default.

I used Django and Postgres on AWS and created a REST API within the Django app; it was quite easy to set up. You can reference the procedure here. It all worked fine with the REST log appender already built into the Kaa Sandbox.

Related

activemq-artemis log monitoring and alerting

I have been trying to find out whether there is an alternative way to monitor the warnings/errors that appear in the logs. Currently I read the logs to learn the error codes, which keep changing with every update. Is there another way to get them?
Apache ActiveMQ Artemis uses the JBoss Logging framework for its logging, and it is configurable via logging.properties; see the documentation.
You could use a custom handler to intercept warnings/errors.
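As a rough sketch of that idea, a logging.properties fragment in the JBoss Log Manager format could attach an extra handler to the broker's logger. The handler class name below (com.example.AlertHandler) is hypothetical, standing in for your own subclass of java.util.logging.Handler, and the FILE handler is assumed to come from the stock broker configuration:

```properties
# logging.properties (JBoss Log Manager format) -- sketch, not a drop-in file
loggers=org.apache.activemq.artemis.core.server

# Keep the stock FILE handler and add a custom ALERT handler
logger.org.apache.activemq.artemis.core.server.level=INFO
logger.org.apache.activemq.artemis.core.server.handlers=FILE,ALERT

# Hypothetical custom handler that forwards WARN/ERROR records to an alerting system
handler.ALERT=com.example.AlertHandler
handler.ALERT.level=WARN
handler.ALERT.formatter=PATTERN

formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter
formatter.PATTERN.properties=pattern
formatter.PATTERN.pattern=%d %-5p [%c] %s%E%n
```

The custom handler's publish() method would then receive each matching log record as an object, so you can react to error codes directly instead of scraping the log files.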

running camunda with Spring boot & mongodb

Has anyone been able to get Camunda to run with Spring Boot and mongodb?
I tried several approaches and always hit a brick wall.
What I tried:
1. jpa / hibernate-ogm
I was able to initiate a connection to Mongo after creating my own CamundaDatasourceConfiguration and ProcessEngineConfigurationImpl.
It failed when Camunda tried to get table metadata, and I couldn't disable this behavior.
2. jdbc driver for mongo by progress
I set up the JDBC URL and the driver class from Progress.
Camunda then gets stuck during the startup process and does not get to the point where Jetty is fully started, i.e. the "Jetty started on port XYZ" message in the log.
3. camunda with postgres with mongo FDW
FDW (foreign data wrapper) is a mechanism for Postgres to interface with an external data source. This way an application can work with Postgres over JDBC while the FDW takes care of reading and writing the data to the external source, be it a file, MongoDB, etc.
After realizing 1 and 2 don't work, I started working on 3.
Has anyone succeeded in doing this and can share how?
I ran into the same problem and decided to share my findings with you.
Currently it is not possible to run the Camunda-Engine with a NoSQL Database.
In this Camunda forum post, one of the Camunda team members also says it is not possible to run the engine completely without a database.
And the official Camunda docs include a list of all supported environments; currently only SQL databases are listed:
https://docs.camunda.org/manual/7.10/introduction/supported-environments/
But in some earlier blog posts they mentioned that they want to build some proof-of-concept examples using NoSQL databases. So we can expect these databases to be supported in the future, but not at the moment.
(Note: the Flowable engine is doing the same proof of concept; they mentioned that they want to be able to use NoSQL databases by the end of next year.)

Connecting remotely to Titan Server? [from code]

I want to connect to TitanDB from my code (a Scala project). The code and Gremlin Server/TitanDB are on two different hosts.
In this example, the connection is made from the same host in which titandb has been installed.
What if I don't run the code in the same host?
I imagine there could be a configuration file in which I put the hostname and the port, but I can't find anything like that.
So the question is: is connecting remotely, from code, to Titan Server possible?
Thank you in advance
I think this might be helpful: you can connect your application to the local instance of TitanDB; you only have to properly configure the index and the storage backend of each Titan instance.
Hope this helps.
I am not exactly sure how it works with Scala, but with Java you can just pass a configuration file to the factory, based on what is outlined here, e.g.
graph = TitanFactory.open('path/to/configuration.properties')
In that configuration you can specify a remote host.
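As a minimal sketch of such a file, assuming a Cassandra storage backend and an Elasticsearch index (the hostnames, ports, and backend choices below are placeholders; substitute whatever your Titan deployment actually uses):

```properties
# configuration.properties -- point TitanFactory at a remote deployment (sketch)
storage.backend=cassandrathrift
storage.hostname=titan-host.example.com
storage.port=9160

# If an external index backend is used, point it at the remote host too
index.search.backend=elasticsearch
index.search.hostname=titan-host.example.com
```

Opening the graph with TitanFactory.open('path/to/configuration.properties') then talks to the remote storage and index backends rather than to a local instance.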

ZooKeeper interface for PHP

I already have a ZooKeeper cluster up and running, and I want to interface with it through PHP code. I've seen the ZooKeeper PHP extension on GitHub (https://github.com/andreiz/php-zookeeper), but I'm new to PHP/ZK and not sure how to get started with connecting to ZK from PHP.
I have used the same PHP client and did not have any problem. It is a pretty good API. I followed the steps from here to get started:
http://systemsarchitect.net/distributed-application-in-php-with-apache-zookeeper/
And then you can also view the example code on GitHub.
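To give a flavor of the API, here is a minimal, untested sketch using the php-zookeeper extension's Zookeeper class. It assumes the extension is installed and a ZooKeeper server is reachable at localhost:2181; the znode path and value are made up for illustration:

```php
<?php
// Connect to a ZooKeeper ensemble (host:port list); assumes localhost:2181.
$zk = new Zookeeper('localhost:2181');

$path = '/demo';
if (!$zk->exists($path)) {
    // World-readable/writable ACL, as in the extension's examples
    $acl = array(array(
        'perms'  => Zookeeper::PERM_ALL,
        'scheme' => 'world',
        'id'     => 'anyone',
    ));
    $zk->create($path, 'hello', $acl);
}

// Read the znode's data back
echo $zk->get($path), "\n";
```

The tutorial linked above walks through the same pattern (connect, create, get) in more depth, including watches for reacting to znode changes.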

How to get PostgreSQL logs

How can I get log information from a PostgreSQL server?
I found the ability to watch it in pgAdmin under Tools -> Server Status. Is there a SQL way, an API, or a console way to show the content of the log file(s)?
Thanks.
I am a command-line addict, so I prefer the "console" way.
Where you find the logs and what is inside them depends on how you've configured your PostgreSQL cluster. This is the first thing I change after creating a new cluster, so that I get all the information I need in the expected location.
Please review your $PGDATA/postgresql.conf file for your current settings and adjust them appropriately.
The adminpack extension is what provides the useful information behind pgAdmin3's Server Status functionality.
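For the console route, a typical set of logging settings in postgresql.conf might look like the following (the values are examples to adjust, not recommendations):

```properties
# postgresql.conf -- example logging settings (sketch)
logging_collector = on            # background process that captures server stderr
log_destination = 'stderr'        # alternatives include csvlog and syslog
log_directory = 'log'             # relative to $PGDATA
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_min_messages = warning        # server-side verbosity threshold
log_line_prefix = '%m [%p] %u@%d '  # timestamp, pid, user@database
```

After reloading the configuration (e.g. pg_ctl reload, or SELECT pg_reload_conf(); from psql), the log files appear under $PGDATA/log and can be followed from the console with tail -f.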