Clarification on Oracle DB Audit Configuration Settings (Oracle 12c)

I have read information regarding audit configuration in Oracle 12c; however, I am looking for some clarification, as some of the information I read led to confusion.
The audit config I am reviewing has the following settings:
audit_sys_operations  = TRUE
audit_file_dest       = D:\ORACLE\ADMIN\HOSTNAME\ADUMP
audit_trail           = DB
SQL> spool off;
My understanding is that the adump directory is the default audit file location on the database server. Also, the AUDIT_TRAIL initialization parameter is set to DB, which I understand directs all audit records to the database audit trail. We have a syslog server configured that collects event logs from various servers, including this particular database server; however, I do not believe it is collecting the database audit trail. My concern here is that the audit logs are written to the DB and not to an external location. Wouldn't having AUDIT_TRAIL set to OS be more appropriate, security-wise? If the DB becomes inaccessible, won't the DB audit logs be inaccessible as well? I want to make sure my understanding is correct. I am not the DBA.

In your configuration the "adump" location will contain logs generated by "sysdba" activity, but not the general user audit trail. Setting audit_trail=os would send everything to the OS, but beginning with Oracle 12c Oracle has implemented a "unified audit trail" in which everything is consolidated into a common database view and "OS" is no longer an option. Your configuration is the original "core" audit architecture, which is still supported (for now) for backwards compatibility. Ultimately you should move towards unified auditing and use some other tool to export your audit data to syslog or another consolidation service. Check this link for more info: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/upgrd/recommended-and-best-practices-complete-upgrading-oracle-database.html#GUID-EB285325-CA65-41B4-BE58-D3F69CFED789
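If you or your DBA want to confirm which audit architecture is actually in effect, a minimal sketch (run as a suitably privileged user; the one-day filter on the last query is just an example) is to query the relevant views:
SELECT value FROM v$option WHERE parameter = 'Unified Auditing';
SELECT name, value FROM v$parameter WHERE name IN ('audit_trail', 'audit_sys_operations', 'audit_file_dest');
SELECT event_timestamp, dbusername, action_name FROM unified_audit_trail WHERE event_timestamp > SYSTIMESTAMP - INTERVAL '1' DAY;
The first query returns TRUE only when unified auditing is enabled, and the third is only meaningful in that case; with your current settings the traditional trail is what gets populated, visible through DBA_AUDIT_TRAIL for audit_trail=DB.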


pgAudit not logging anything in GCP Cloud SQL

I'm hoping for some insight into a problem I'm having with using pgAudit for a PostgreSQL 12 managed instance in GCP Cloud SQL.
Thus far, I've done the following to set this up:
Database flags:
cloudsql.enable_pgaudit=on
pgaudit.log=ddl
pgaudit.log_client=yes (turned this one on for debugging purposes)
pgaudit.log_relation=on
After enabling the cloudsql.enable_pgaudit flag and restarting the instance, I issued a CREATE EXTENSION pgaudit command and confirmed that it was successful. I've also enabled the data access logs as suggested in the Google documentation (they didn't specify which permissions were needed in IAM, so I erred on the side of enabling everything). I've also tried setting pgaudit.log=all to see if ANYTHING could be captured, with the same result: nothing is being logged.
With pgaudit.log_client=on, I would expect to see the audit log information returned when viewing the Server Output in DBeaver, but nothing appears there.
Does anyone have any insight into what I might be missing? My goal, ultimately, is to capture DDL operations with session logging. I've generally attempted testing by creating and dropping a table in an effort to get the log entries for those operations, i.e.
create table dstest_table (columnone varchar(150));
drop table dstest_table;
I've tried a few more things to get this to work, including setting the flags additionally at the database level. So far, nothing seems to be getting logged.
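For anyone debugging a similar setup, a quick sanity check (a sketch, assuming you can run queries against the instance with psql or DBeaver) is to confirm that the extension really is installed and that the pgaudit settings are visible to your session:
SELECT extname, extversion FROM pg_extension WHERE extname = 'pgaudit';
SHOW pgaudit.log;
SHOW pgaudit.log_client;
SHOW pgaudit.log_relation;
If the SELECT returns no rows, the CREATE EXTENSION did not take effect in the database you are testing against; if the SHOW commands report an unrecognized parameter, the pgaudit library is not being loaded by the instance.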
Update: I never did get pgAudit to work properly; however, I found that DDL operations can be logged outside of pgAudit via the log_statement=ddl flag on the server. I set this, and I'm now getting what I need.
(Screenshots referenced in the answer: Database Flags, Cloud Logging API Data Access Log, Cloud SQL Data Access Log)
Setting log_statement=ddl as a flag allows DDL statements to be logged without using pgAudit, so the majority of the pgAudit setup above was unnecessary. Set this flag and the operations I needed are now logged.
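If you go the log_statement route, a minimal verification sketch after the flag has been applied (the table is just a throwaway example) is:
SHOW log_statement;   -- should return 'ddl'
create table dstest_table (columnone varchar(150));
drop table dstest_table;
Both DDL statements should then show up in the instance's PostgreSQL log in Cloud Logging.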

How to take a backup of the Tableau Server Repository (PostgreSQL)

We are using version 2018.3 of Tableau Server. Server stats such as user logins and other statistics are logged into the PostgreSQL repository DB, and that data is cleared regularly after one week.
Is there any API available in Tableau to connect to the DB and back up that data somewhere like HDFS or any location on a Linux server?
Kindly let me know if there is any way other than an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the machines that need it (whitelisted), to create or use an existing read-only account to access the repository, and ideally to disable access when your admin programs are complete (i.e., enable access, run your query, disable access).
This way you can have any SQL client code you wish query the repository, create a mirror, create reports, run auditing procedures - whatever you like.
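As a rough illustration (a sketch only: the workgroup repository schema is not formally documented and table names can change between versions, so verify them against your own instance), once repository access is enabled with tsm per the link above, any PostgreSQL client pointed at the repository port (8060 by default) with the readonly account can run queries such as:
SELECT * FROM _users LIMIT 10;
SELECT * FROM http_requests ORDER BY created_at DESC LIMIT 50;
From there you could schedule a pg_dump over the same connection and push the output wherever you need it, HDFS included, though as noted below it is worth checking the built-in options first.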
Personally, before writing significant custom code, I'd first see if the info you want is already available another way: in one of the built-in admin views, via the REST API, using the public domain LogShark or TabMon tools, with the Server Management Add-on (for more recent versions of Tableau), or possibly the new Data Catalog.
I know at least one server admin who somehow clones the whole Postgres repository database periodically so he can analyze stats offline. Not sure what approach he uses to clone. So you have several options.

Enable local development access to PostgreSQL DB on Amazon RDS

I'm in the early stages of a web project which requires a database. Until now, I've managed to get away with using an SQLite database locally for development and a PostgreSQL database running on AWS RDS in "production" (mainly just for alpha testers). I haven't really had any state in the database that I couldn't just blow away and re-seed whenever necessary.
However, I'm now at the point in my project where I'm going to have state in the production database that I can't easily reproduce via seeding in my local SQLite database. So I've decided to create a separate development database via a script that takes the latest snapshot of my production database and creates a development database from it. I've managed to get this script running with some degree of success...
But I'm having difficulty connecting to this development database from my local development environment. Each time I try to connect, the connection times out. Most of the resources on Amazon seem to indicate that this is likely a security group issue. The security group corresponding to my database currently has the following inbound settings (the security group ID is erased in the screenshot, but it is the group listed as my RDS security group): [screenshot of inbound rules omitted]
Is there something obviously wrong here? How do I set up my security groups such that I can connect to this development database on my local machine?
The source shouldn't be set to the same security group, but rather whatever source you'll be connecting from. You can use 0.0.0.0/0 to enable traffic from any source.
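For reference, a working inbound rule for local development access typically looks something like this (the values are illustrative; replace the CIDR with your own public IP):
Type: PostgreSQL, Protocol: TCP, Port range: 5432, Source: 203.0.113.25/32
Restricting the source to a single /32 address is safer than 0.0.0.0/0, and the RDS instance also needs to be publicly accessible (or reachable via VPN or a bastion host) for a connection from outside the VPC to succeed.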

Sitecore MongoDB not creating all database/collections

We are working on Sitecore deployment in Azure.
Sitecore Experience Platform 8.0 rev. 160115
MongoDB - 3.0.4
We installed MongoDB, and we can connect to localhost using Robomongo. We can only see “Analytics” database/collections.
Our connection strings are set up as follows:
Connectionstring.config
But the other 3 databases and collections are not created.
Tracking.live
Tracking.history
Tracking.contact
In Sitecore.Analytics.config file – the setting “Analytics.Enabled” is set to true.
Sitecore.Analytics.config
In the logs we found some references to xDB Cloud initialization failure issues, so we disabled xDB Cloud.
Are we missing any configurations? Any help or suggestions are appreciated.
Thank you
Keep in mind that MongoDB is schemaless. Of course, in a production environment you would probably have to create these databases manually - to ensure that access rights are assigned correctly. But in a development environment, any database can be created on the fly.
The only reason the analytics database was created for you is because Sitecore creates indexes for the Interactions collection. Otherwise, you wouldn't see this database until xDB wrote some data into it. Same goes for any MongoDB collection - those won't appear until there's either data being written or an index created.
The other three databases will be created once the aggregation/processing logic is executed. I.e. when your instance starts to actually collect and process visit data.
In conclusion, don't worry about these missing databases for now. Just verify that xDB functionality is working properly.
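To see the on-the-fly behaviour for yourself, a small mongo shell experiment (the database and field names here are just examples, not your actual Sitecore databases) shows that a database is only listed once data is written or an index is created:
use tracking_live_example
show dbs
db.Interactions.insert({ test: 1 })
show dbs
The first show dbs will not list the new name; the second one will. The same thing will happen with your Tracking databases once xDB starts writing to them.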

Use Cygnus to store historical data from Orion ContextBroker in a local Hadoop database

We are currently working in a project where we use Orion ContextBroker to store information from different sensors and Wirecloud to show them in a web page.
We want to store historical data from these sensors in order to show it in a graph. I have looked around the Fiware documentation and it recommends storing the data in a Cosmos instance of Fi-lab, through Cygnus.
The thing is that we would like to store that historical data on a local Hadoop-based server we have in our company, not in Cosmos, because we are running this project on a local network without internet access, and also to keep that information stored on our own server.
Is it possible to configure Cygnus to redirect the output data to my file system? If so, which files must be configured in order to achieve this?
Thank you
The answer is yes. Cygnus is meant to persist context data in any HDFS-based filesystem (such as the one used by Cosmos), so nothing special has to be done when configuring Cygnus for your own cluster.
If you download the latest version (0.7.0 at the moment of writing this), you will need to configure:
A cygnus_instance_default.conf file, created from cygnus_instance.conf.template. This is the instance configuration. From 0.7.1 onwards it is possible to have multiple instance configurations that run in parallel, and they all have to be called cygnus_instance_<whatever>.conf.
An agent.conf file, created from agent.conf.template. This is the Flume-specific configuration described in the README.md; a sketch of the HDFS-related part is shown below.
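As a very rough sketch of the HDFS-related part of agent.conf (the property names below are illustrative and differ between Cygnus releases, so take the exact keys and the sink class name from the agent.conf.template shipped with your version; the host, port and credentials are placeholders for your local Hadoop cluster):
cygnusagent.sinks = hdfs-sink
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
cygnusagent.sinks.hdfs-sink.type = <OrionHDFSSink class name from the template>
cygnusagent.sinks.hdfs-sink.cosmos_host = namenode.mycompany.local
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
cygnusagent.sinks.hdfs-sink.cosmos_default_username = myhdfsuser
cygnusagent.sinks.hdfs-sink.cosmos_default_password = xxxxxxxx
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
The point is simply that the sink's host and port refer to whatever WebHDFS/HttpFS endpoint you give it, local or remote; nothing in the configuration assumes the Cosmos instance in Fi-lab specifically.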