Icinga database cleanup - PostgreSQL

I am working with Icinga for performance data collection, and I need to clear all plugin data older than 30 days. How can I do this? My Google searches so far have not helped.
Some references:
External Commands List
Database model
I am using:
RHEL
Icinga 2 built from source
PostgreSQL
NRPE for collecting remote server data
Is there any tool available for cleanup, or any queries to delete all database entries older than 30 days?

http://docs.icinga.org/latest/en/configido.html#configido-ido2db
From the manual, it looks like your ido2db.cfg needs to be configured with the proper retention values. The max_*_age options are in minutes, so 43200 corresponds to 30 days:
max_systemcommands_age=43200
max_servicechecks_age=43200
max_hostchecks_age=43200
max_eventhandlers_age=43200
max_externalcommands_age=43200
max_logentries_age=43200
max_acknowledgements_age=43200
max_notifications_age=43200
max_contactnotifications_age=43200
max_contactnotificationmethods_age=43200
Also, make sure that trim_db_interval is set to something sane; it controls how often (in seconds) the tables are trimmed, and the default of 3600 should be sufficient.
trim_db_interval=3600
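If you also need to purge the existing rows immediately instead of waiting for the next trim run, you can delete them by hand. Below is a minimal sketch in Python with psycopg2; the table and timestamp column names are assumptions based on the IDO schema, so verify them against your own database before running anything.

# One-off purge of IDO rows older than 30 days (ido2db does this automatically
# once the max_*_age options above are in place). Verify table/column names first.
import psycopg2

# Tables and the timestamp column assumed to define their age.
TABLES = {
    "icinga_servicechecks": "start_time",
    "icinga_hostchecks": "start_time",
    "icinga_systemcommands": "start_time",
    "icinga_notifications": "start_time",
    "icinga_logentries": "logentry_time",
}

conn = psycopg2.connect("dbname=icinga user=icinga host=localhost")  # adjust credentials
with conn, conn.cursor() as cur:
    for table, column in TABLES.items():
        cur.execute(f"DELETE FROM {table} WHERE {column} < NOW() - INTERVAL '30 days'")
        print(f"{table}: removed {cur.rowcount} rows")
conn.close()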

Related

Updating online Mongo Database from offline copy

I have a large Mongo database (5M documents). I edit the database from an offline application, so I store the database on my local computer. However, I want to be able to maintain an online copy of the database, so that my website can access it.
How can I update the online copy regularly, without having to upload multiple GBs of data every time?
Is there some way to "track changes" and upload only the diff, like in Git?
Following up on my comment:
Can't you store the commands you used on your offline DB, and then apply them to the online DB through a script running over SSH, for instance? Or, even better, upload a file with all the commands you ran on your offline database to your server and then execute them with a cron job or a bash script? (The only requirement would be for your databases to have the same starting point and the same state when you execute the script.)
I would recommend storing all the queries you execute on your offline database. To do this you have several options; the one I can think of is to set the profiling level to log all your queries.
(Here is a more detailed thread on the matter: MongoDB logging all queries)
Then you would have to extract them somehow (grep?), or store them in another file on the fly as they are executed.
As for uploading the script, it depends on what you would like to use, but I suppose you would want to do it during low-usage hours, and you could automate the task with a cron job and an SSH tunnel.
I guess it all depends on your constraints (security, downtime, etc.).
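To make the "record and replay" idea a bit more concrete, here is a rough sketch with PyMongo, under the assumption that profiling level 2 is acceptable on the offline copy; the database and file names are placeholders, and replaying the captured operations on the online server would still need its own script.

# Capture every operation executed against the offline database via the profiler.
# "mydb" and "ops_to_replay.json" are placeholder names.
import json
from pymongo import MongoClient

offline = MongoClient("mongodb://localhost:27017")["mydb"]

# 1) Turn on full profiling before starting the offline edits.
offline.command("profile", 2)

# ... perform your offline edits here ...

# 2) Dump the recorded write operations so they can be shipped to the server.
with open("ops_to_replay.json", "w") as f:
    for op in offline["system.profile"].find({"op": {"$in": ["insert", "update", "remove"]}}):
        f.write(json.dumps(op, default=str) + "\n")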

Cannot see reimported InfluxDB data by issuing SELECT query, where is it?

Recently I had to wipe and reinstall my InfluxDB database which manages about 100 smart home devices from plugs to switches and a lot of sensors (/var/lib/influxdb was about 2GB). Since the sensors were continuing to collect data, I took the corrupt database offline, set up a new Influx instance which then continued collecting data. My idea was to inspect the broken DB and copy the intact parts of the data over to the new one later.
I actually managed to export most of the data to a file using influx_inspect export -database foo .... I also managed to initiate a reimport using influx -import -path ..., which churned along happily for about two days until my data was copied.
But when I now query the new database, the imported data is not found anywhere. The same queries as before the crash only return data collected since the reinstall.
The filesystem size is similar so the data is in there somewhere:
pi@raspberrypi:~ $ sudo du -ks /var/lib/influxdb*
1910720 /var/lib/influxdb
1902236 /var/lib/influxdb2
1910284 /var/lib/influxdb-old
influxdb is the old DB, influxdb2 is the new current one, influxdb-old is a previous backup copy.
But a call like SELECT value FROM my_measurement, which would return hundreds of thousands of values from the old database, now just returns a few hundred (collected over the last two days). Also, all my frontend tools (like Grafana) which used to show two years' worth of data for visualization now just show the last two days.
So: where has the re-imported data gone?
I am using a Raspberry Pi 4B running Raspbian Linux, with ioBroker and InfluxDB 1.8.6.
Solved it.
My old database had a different retention policy configured as the default, and the new database did not use the same default retention policy. And since none of my queries specified an explicit retention policy, only the data in the default retention policy bucket was found.
Re-imported all data with the retention policy set to the previous value, and everything seems to be OK.
It is possible to move data between policies, like this:
SELECT * INTO "db"."newrp"."newmeasurement" FROM "db"."oldrp"."oldmeasurement" GROUP BY *
but I ended up re-importing which also worked fine.
See https://community.influxdata.com/t/applying-retention-policies-to-existing-measurments/802 for the above command.
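For anyone hitting the same symptom: a quick way to check which retention policy the data actually landed in is to list the policies and then query with an explicit one. A small sketch with the influxdb 1.x Python client follows; the database, policy and measurement names are examples.

# List retention policies and query under an explicit one instead of the default.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="foo")

for rp in client.get_list_retention_policies("foo"):
    print(rp["name"], "(default)" if rp["default"] else "")

# Fully qualify the retention policy in the query.
result = client.query('SELECT value FROM "oldrp"."my_measurement" LIMIT 5')
print(list(result.get_points()))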

How to take backup of Tableau Server Repository(PostgreSQL)

We are using Tableau Server version 2018.3. Server stats such as user logins and other metrics are logged into the PostgreSQL repository DB, and they are cleared regularly after one week.
Is there any API available in Tableau to connect to the DB and back up the data somewhere like HDFS, or any place on a Linux server?
Kindly let me know if there are other ways besides an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the (whitelisted) machines that need it, to create or use an existing read-only account to access the repository, and ideally to disable access when your admin programs are complete (i.e. enable access, do your query, disable access).
This way you can have any SQL client code you wish query the repository, create a mirror, create reports, run auditing procedures - whatever you like.
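As a minimal illustration (assuming psycopg2 and the default repository settings of port 8060, database "workgroup" and the "readonly" user, which you should confirm for your installation), a query-and-export could look roughly like this; the table name is just an example:

# Export one repository table to CSV once tsm data-access has been enabled.
import csv
import psycopg2

conn = psycopg2.connect(
    host="your-tableau-server", port=8060,
    dbname="workgroup", user="readonly", password="<readonly password>",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT * FROM historical_events LIMIT 1000")  # example table
    with open("historical_events.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col.name for col in cur.description])
        writer.writerows(cur.fetchall())
conn.close()

From there the CSV (or whatever format you prefer) can be pushed to HDFS or any other location on your Linux box on a schedule.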
Personally, before writing significant custom code, I'd first see if the info you want is already available another way: in one of the built-in admin views, via the REST API, using the freely available LogShark or TabMon projects, with the Server Management Add-on (for more recent versions of Tableau), or possibly the new Data Catalog.
I know at least one server admin who somehow clones the whole Postgres repository database periodically so he can analyze stats offline. Not sure what approach he uses to clone. So you have several options.

How to replicate a postgresql database from local to web server

I am new to the forum and also new to PostgreSQL.
Normally I use MySQL for my projects, but I've decided to start migrating towards PostgreSQL for some good reasons I found in this database.
Expanding on the problem:
I need to analyze data via some mathematical formulas, but in order to do this I need to get the data from the software via its API.
The software, the API, and PostgreSQL v11.4, which I installed on a desktop, are running on Windows. So far I've managed to fetch the data via the API and import it into PostgreSQL.
My problem is how to transfer this data from the local PostgreSQL (on the PC) to a web PostgreSQL (installed on a web server) running Linux.
For example, if I take the data from the software via the API every five minutes and put it in the local PostgreSQL DB, how can I transfer this data (automatically if possible) to the DB on the web server running Linux? I rejected a data dump because importing the whole DB every time is not viable.
What I would like is to transfer only the five-minute data which is gradually added to the previous data.
I also rejected the idea of a master-slave architecture because, not knowing the total amount of data, the web server has almost 2 TB of disk while the local PC has only one hard disk, which serves only to collect the data and then send it to the web server for analysis.
Could someone please help by giving some good advice regarding how to achieve this objective?
Thanks to all for any answers.
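Not an authoritative answer, but to make the "only transfer the last five minutes" idea concrete: one option is a small script, run from cron on the PC, that pushes only the rows the web server does not have yet. The sketch below uses psycopg2; the table and column names ("measurements", "recorded_at", "value") are made up for illustration, and it assumes every row carries a timestamp.

# Incremental push: copy only rows newer than the newest row already on the server.
import psycopg2

LOCAL_DSN = "dbname=mydb user=me host=localhost"
REMOTE_DSN = "dbname=mydb user=me host=my-web-server password=..."

local = psycopg2.connect(LOCAL_DSN)
remote = psycopg2.connect(REMOTE_DSN)

with local, remote, local.cursor() as lcur, remote.cursor() as rcur:
    # Newest row the web server already has (epoch if the table is still empty).
    rcur.execute("SELECT COALESCE(MAX(recorded_at), 'epoch') FROM measurements")
    last_synced = rcur.fetchone()[0]

    # Pull anything newer from the local database and push it to the server.
    lcur.execute("SELECT recorded_at, value FROM measurements WHERE recorded_at > %s",
                 (last_synced,))
    rcur.executemany("INSERT INTO measurements (recorded_at, value) VALUES (%s, %s)",
                     lcur.fetchall())

local.close()
remote.close()

Since PostgreSQL 10, logical replication (CREATE PUBLICATION / CREATE SUBSCRIPTION) can also replicate just the selected tables rather than the whole cluster, which may be worth a look if you prefer a built-in mechanism.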

Sitecore MongoDB not creating all database/collections

We are working on a Sitecore deployment in Azure:
Sitecore Experience Platform 8.0 rev. 160115
MongoDB - 3.0.4
We installed MongoDB, and we can connect to localhost using Robomongo. However, we can only see the “Analytics” database/collections.
Our connection strings are set up in:
Connectionstring.config
But the other three databases and their collections are not created:
Tracking.live
Tracking.history
Tracking.contact
In the Sitecore.Analytics.config file, the setting “Analytics.Enabled” is set to true.
Sitecore.Analytics.config
In the logs we found some references to xDB Cloud initialization failures, so we disabled it.
Are we missing any configurations? Any help or suggestions are appreciated.
Thank you
Keep in mind that MongoDB is schemaless. Of course, in a production environment you would probably have to create these databases manually - to ensure that access rights are assigned correctly. But in a development environment, any database can be created on the fly.
The only reason the analytics database was created for you is because Sitecore creates indexes for the Interactions collection. Otherwise, you wouldn't see this database until xDB wrote some data into it. Same goes for any MongoDB collection - those won't appear until there's either data being written or an index created.
The other three databases will be created once the aggregation/processing logic is executed. I.e. when your instance starts to actually collect and process visit data.
As a conclusion, don't worry about these databases missing (for now). Just verify that xDB functionality is working properly.
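To make that concrete, here is a tiny PyMongo sketch (the names are made up) showing that a database only appears in the server's database list after the first write or index creation:

# A MongoDB database is not listed until data is written or an index is created.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

db = client["tracking_live_demo"]               # nothing is created yet
print("before:", client.list_database_names())  # demo database is absent

db["Interactions"].create_index("timestamp")    # first index (or first insert)...
print("after:", client.list_database_names())   # ...and now the database exists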