Store changelog in OpenLDAP database - CentOS

I want to store a changelog in my OpenLDAP database, containing details like this: (this is a sample entry from another system that uses OpenLDAP to store all the information about changes)
I configured the audit log, but it records less information, and it writes to an LDIF file rather than directly into the OpenLDAP database. I am new to OpenLDAP; please help, as I have no idea how to configure this.

You are looking for the accesslog overlay. https://openldap.org/software/man.cgi?query=slapo-accesslog&apropos=0&sektion=5&manpath=OpenLDAP+2.4-Release&arch=default&format=html
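For orientation, here is a rough cn=config sketch of an accesslog setup (the accesslog module must already be loaded, and the database numbering, suffix, rootdn, and directory below are placeholders to adapt to your server):

# Dedicated database that will hold the change log entries
dn: olcDatabase=mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
olcDbDirectory: /var/lib/ldap/accesslog
olcSuffix: cn=accesslog
olcRootDN: cn=admin,cn=accesslog

# Attach the overlay to your main database ({2} is a placeholder) and point it at the log database
dn: olcOverlay=accesslog,olcDatabase={2}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcAccessLogConfig
olcOverlay: accesslog
olcAccessLogDB: cn=accesslog
olcAccessLogOps: writes
olcAccessLogSuccess: TRUE
olcAccessLogPurge: 07+00:00 01+00:00

With something like this in place, change records appear under cn=accesslog as regular entries (auditAdd, auditModify, auditDelete, etc.) that you can search like any other part of the directory.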

Related

How to take a backup of the Tableau Server Repository (PostgreSQL)

We are using Tableau Server version 2018.3. Server stats such as user logins are logged into the PostgreSQL DB, and the same data is cleared regularly after one week.
Is there any API available in Tableau to connect to the DB and take a backup of the data somewhere like HDFS, or anywhere on a Linux server?
Kindly let me know if there is any way other than an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau:
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the (whitelisted) machines that need it, to create or use an existing read-only account to access the repository, and ideally to disable access when your admin programs are complete (i.e., enable access, do your query, disable access).
This way you can have any SQL client code you wish query the repository, create a mirror, create reports, run auditing procedures - whatever you like.
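For instance, a rough sketch of that enable/query/disable cycle with tsm (the password, hostname, and query client are placeholders; the repository database is called workgroup and listens on port 8060 by default):

# Enable external access for the built-in read-only repository user
tsm data-access repository-access enable --repository-username readonly --repository-password 'P@ssw0rd'

# From a whitelisted machine, query the repository with any PostgreSQL client
psql -h tableau-server.example.com -p 8060 -U readonly workgroup

# Disable access again once you are done
tsm data-access repository-access disable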
Personally, before writing significant custom code, I'd first see if the info you want is already available another way: in one of the built-in admin views, via the REST API, using the public-domain LogShark or TabMon tools, with the Server Management Add-on (for more recent versions of Tableau), or possibly the new Data Catalog.
I know at least one server admin who clones the whole Postgres repository database periodically so he can analyze stats offline; I'm not sure what approach he uses to clone it. So you have several options.

Clarification on Oracle DB Audit Configuration - Settings

I have read information regarding audit configuration in Oracle 12c; however, I am looking for some clarification, as some of the information I read led to some confusion.
The audit config I am reviewing has the following settings:
audit_sys_operations    TRUE
audit_file_dest         D:\ORACLE\ADMIN\HOSTNAME\ADUMP
audit_trail             DB
SQL> spool off;
My understanding is that the adump directory is the default location on the database server. Also, the AUDIT_TRAIL initialization parameter is set to DB, which I understand directs all audit records to the database audit trail. We have a syslog server configured that collects event logs from various servers, including this particular database server; however, I do not believe it is collecting the database audit trail. My concern here is that the logs are written to the DB and not to an external location. Wouldn't having AUDIT_TRAIL set to OS be more appropriate, security-wise? If the DB becomes inaccessible, won't the DB logs be inaccessible as well? I want to make sure my understanding is correct. I am not the DBA.
In your configuration the "adump" location will contain logs generated by "sysdba" activity, but not the general user audit trail. Setting audit_trail=os will send everything to the OS, but beginning with Oracle 12c and moving forward Oracle has implemented a "unified audit trail" in which everything is consolidated into a common database view and "OS" is no longer an option. Your configuration is the original "core" audit architecture, which is still supported (for now) for backwards compatibility. Ultimately you should move towards unified auditing and use some other tool to export your audit data to syslog or some other consolidation service. Check this link for more info: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/upgrd/recommended-and-best-practices-complete-upgrading-oracle-database.html#GUID-EB285325-CA65-41B4-BE58-D3F69CFED789
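If you want to verify where records are actually landing, a few illustrative queries (standard 12c views; run them as a suitably privileged user):

-- Is pure unified auditing enabled, or is the database still in mixed/traditional mode?
SELECT value FROM v$option WHERE parameter = 'Unified Auditing';

-- Recent records in the traditional audit trail (AUDIT_TRAIL=DB writes to SYS.AUD$)
SELECT username, action_name, timestamp FROM dba_audit_trail WHERE ROWNUM <= 10;

-- Recent records in the unified audit trail (12c and later)
SELECT dbusername, action_name, event_timestamp FROM unified_audit_trail WHERE ROWNUM <= 10;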

How to save application users created in JBoss in a database or serialize them

I want to migrate the management and application users created in JBoss using the add-user.bat utility while upgrading the JBoss version.
For that, I was wondering whether it is possible to store the users created in JBoss in a database, or perhaps serialize them as they are created and update the JAAS cache of the JBoss server at boot time.
Is there any way to export the user list from an existing JBoss installation?
Can anyone please help me with either of the above, or suggest which would be the best approach?
Unfortunately, there is no direct way of exporting application/management users from the JBoss server. However, all the application and management users created for JBoss are present in the application-users.properties and mgmt-users.properties files.
You can get the usernames with the hashed password in the following format:
username=HEX( MD5( username ':' realm ':' password))
e.g.
admin=2a0923285184943425d1f53ddd58ec7a
All the roles and groups for the users are present in the application-roles.properties and mgmt-groups.properties files.
Location of the above files:
standalone mode: JBOSS_HOME/standalone/configuration/
domain mode: JBOSS_HOME/domain/configuration/
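As an illustration of the hash format above, you can reproduce an entry from a shell (the usernames, realm names, and password here are made-up examples; management users normally use ManagementRealm and application users ApplicationRealm):

# HEX( MD5( username ':' realm ':' password ) )
printf '%s' 'admin:ManagementRealm:secret' | md5sum | cut -d' ' -f1
printf '%s' 'appuser:ApplicationRealm:secret' | md5sum | cut -d' ' -f1

Since these are plain properties files, copying them (together with the roles/groups files) into the new installation's configuration directory is usually the simplest migration path.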

Making PostgreSQL logs in JSON format

I am using PostgreSQL 9.5 on Ubuntu 16.04.
Is there any way in PostgreSQL to store its logs in JSON format?
I need to send them to Elasticsearch, which is why I need the PostgreSQL logs in JSON format.
I followed this tutorial, but did not quite understand what changes it was asking me to make in the conf file, or where.
PostgreSQL listens to its community, and your voice is heard!
The PostgreSQL 15 beta was released in May 2022. PostgreSQL 15 supports the jsonlog logging format, and the final release is expected in the third quarter of 2022.
You have to make the change below in the postgresql.conf file:
log_destination = 'jsonlog'
The log output will be written to a file, making it the third type of destination of this kind, after stderr and csvlog.
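A minimal postgresql.conf sketch for that setup (assuming PostgreSQL 15 or later; jsonlog, like csvlog, requires the logging collector):

logging_collector = on
log_destination = 'jsonlog'
log_directory = 'log'                               # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'     # the jsonlog file gets a .json suffix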
You can send these generated JSON logs to Elasticsearch or any other application for further log aggregation.
Check here for more info.
Update: PostgreSQL v15 is out now. You can explore it here.
PostgreSQL itself doesn't support any formats other than plain text and CSV. When you need other formats, you have to obtain (or write yourself) a special extension that hooks into the logging API, formats the PostgreSQL logs, and pushes them out. One such extension was developed by Michael and is described in the mentioned link. Here is a link to the source code: https://github.com/michaelpq/pg_plugins/tree/master/jsonlog . You have to compile this extension like any other PostgreSQL extension (the code is in C), and then you can use it.
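If you stay on 9.5, a sketch of the postgresql.conf changes after compiling and installing that module (based on the module's documentation; double-check against your installation):

shared_preload_libraries = 'jsonlog'    # load the jsonlog module at server start
logging_collector = on                  # write the JSON output to log files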
As I understand it, your problem statement is that you want to push PostgreSQL logs to Elasticsearch.
For this I would recommend using Filebeat, where you can simply enable the PostgreSQL module and set the log path. Filebeat will start reading the log files and push them to Elasticsearch.
You can visualize your data from Kibana with a ready-made dashboard. It is simple plug and play.
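A rough sequence with Filebeat (the log path and service commands below are typical Debian/Ubuntu assumptions, not exact values for your system):

# Enable the PostgreSQL module and point it at your log files
filebeat modules enable postgresql
# In modules.d/postgresql.yml, set something like:
#   - module: postgresql
#     log:
#       enabled: true
#       var.paths: ["/var/log/postgresql/*.log"]
filebeat setup                    # loads index templates and the bundled Kibana dashboards
sudo systemctl restart filebeat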

Use Cygnus to store historical data from Orion ContextBroker in a local Hadoop database

We are currently working on a project where we use Orion ContextBroker to store information from different sensors and Wirecloud to show it on a web page.
We want to store historical data from these sensors in order to show it in a graph. I have looked around the FIWARE documentation, and it recommends storing the data in a Cosmos instance of Fi-lab, through Cygnus.
The thing is that we would like to store that historical data in a local Hadoop-based server we have in our company, not in Cosmos, because we are running this project on a local network without internet access, and also to keep that information stored on our own server.
Is it possible to configure Cygnus to redirect the output data to my file system? If so, which files must be configured in order to achieve this?
Thank you
The answer is yes. Cygnus is meant to persist context data in any HDFS-based filesystem (such as the one used by Cosmos), so nothing special has to be done when configuring Cygnus.
If you download the latest version (0.7.0 at the moment of writing this), you will need to configure:
A cygnus_instance_default.conf file, created from cygnus_instance.conf.template. This is the instance configuration. From 0.7.1 it is possible to have multiple instance configurations that are run in parallel, and they all have to be called cygnus_instance_<whatever>.conf.
An agent.conf file, created from agent.conf.template. This is the Flume-specific configuration that you will find described in the README.md.
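As an illustration only, the HDFS-related part of agent.conf could look roughly like the sketch below; the sink class and property names changed between Cygnus versions, and the host, port, and credentials shown are placeholders, so take the exact keys from the agent.conf.template shipped with your release:

# Standard Flume component declarations
cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink
cygnusagent.channels = hdfs-channel

# HDFS sink pointing at the local Hadoop cluster instead of Cosmos
# (property names are approximate for the 0.7.x series; verify against the template)
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
cygnusagent.sinks.hdfs-sink.cosmos_host = hadoop.local.example
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
cygnusagent.sinks.hdfs-sink.cosmos_default_username = myuser
cygnusagent.sinks.hdfs-sink.cosmos_default_password = xxxxxxxx
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs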