How to monitor 500+ servers using Grafana with SQL Server as the data source

Currently we're monitoring our SQL Servers, which run on Windows, via SQL Server Reporting Services (SSRS) using shared data sources. To clarify: we don't store data on a centralized server to monitor the 500+ target servers. We keep the monitoring data on the local SQL database servers and use shared data sources in SSRS to create dashboards.
Now our firm is encouraging us to use Grafana for dashboards, since it has purchased Grafana server licensing. As I understand it, a Grafana instance can be provided to us to monitor SQL servers as described above.
My question is: how would Grafana dynamically connect to those 500-plus servers? I see that a data source is created once, but how will I create or change multiple data sources when I have around 1000 servers to monitor?
Any guidance would be appreciated.

You may have to write a bit of code and use data source provisioning and/or the Grafana data source HTTP API so that Grafana picks up each new data source.
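For the provisioning route, Grafana loads every YAML file in its provisioning/datasources folder at startup, so a script can regenerate one file from your server inventory. A minimal sketch with placeholder names, hosts and credentials (one entry per monitored server):

```yaml
# provisioning/datasources/mssql.yaml (path relative to the Grafana config dir)
apiVersion: 1
datasources:
  - name: mssql-sqlprod001            # one block per monitored server
    type: mssql
    access: proxy
    url: sqlprod001.corp.local:1433   # host:port of the SQL Server instance
    database: monitoring              # the local monitoring DB on that server
    user: grafana_reader              # read-only SQL login
    secureJsonData:
      password: s3cret
```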
If you can set up a system (user-data / init script / IaC) where this API is called every time a new server comes up, then you will be able to keep the data sources current without manual maintenance.
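A minimal sketch of the API route, assuming an admin-scoped API key; the endpoint and payload shape follow Grafana's documented POST /api/datasources call, while the hostnames, database name and credentials are placeholders:

```python
import requests  # pip install requests

GRAFANA_URL = "https://grafana.example.com"  # your Grafana instance
API_KEY = "eyJrIjoi..."                      # admin-scoped API key

def register_mssql_datasource(server: str) -> None:
    """Create one MSSQL data source pointing at the given server."""
    payload = {
        "name": f"mssql-{server}",
        "type": "mssql",
        "access": "proxy",
        "url": f"{server}:1433",
        "database": "monitoring",                 # local monitoring DB
        "user": "grafana_reader",                 # read-only SQL login
        "secureJsonData": {"password": "s3cret"},
    }
    resp = requests.post(
        f"{GRAFANA_URL}/api/datasources",
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()

# Called from the init script / user-data of each new server:
register_mssql_datasource("sqlprod042.corp.local")
```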

Related

Connection failure: PostgreSQL on an AWS RDS instance in a private network from Power BI Desktop & Service

I have an AWS RDS (PostgreSQL) instance that is inside a private network, accessible only via a VPN and a Bastion host.
I am able to establish a connection from PBI Desktop to the PostgreSQL RDS instance by creating an SSH tunnel from my laptop (localhost) to the Bastion host and using the ODBC driver. With this approach, all the data is imported into PBI Desktop (Import mode).
But our requirement is to connect via DirectQuery in order to refresh the data in real time and generate the reports dynamically, which I am not able to do.
When I enter the database credentials into Power BI Desktop, the connection fails with a timeout error.
I must use DirectQuery; I can't use Import.
Any help is appreciated.
The exact error that you are getting would help get to the root cause of the issue. However, a few basic troubleshooting steps that I'd suggest are:
Ensure that you have a compatible version of the Npgsql provider installed on your machine, such as Npgsql 4.0.9. At times the latest version causes issues.
Ensure that you remove the semicolon at the end of the query.
Once you are successfully able to query directly from the desktop version, note that after you publish to the web version the visuals will not be able to connect to the database unless an on-premises data gateway is set up. More details on setting up a data gateway to automatically refresh the dataset for the Power BI web version are here:
Refresh AWS RDS database from Power BI Web
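Independently of Power BI, it helps to confirm that the RDS instance is reachable through the tunnel at all; if the script below also times out, the problem is network or security-group configuration rather than Power BI. This is a minimal sketch with placeholder hostnames, paths and credentials.

```python
import psycopg2                            # pip install psycopg2-binary
from sshtunnel import SSHTunnelForwarder   # pip install sshtunnel

# Forward a local port through the bastion to the private RDS endpoint.
with SSHTunnelForwarder(
    ("bastion.example.com", 22),
    ssh_username="ec2-user",
    ssh_pkey="/home/me/.ssh/id_rsa",
    remote_bind_address=("mydb.xxxxxx.eu-west-1.rds.amazonaws.com", 5432),
    local_bind_address=("127.0.0.1", 5433),
):
    # Query RDS through the tunnel with a short timeout.
    conn = psycopg2.connect(
        host="127.0.0.1", port=5433,
        dbname="postgres", user="report_user", password="s3cret",
        connect_timeout=10,
    )
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])
    conn.close()
```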

Power BI connection problem with PostgreSQL

I'm using Power BI version 2.84 to connect to a PostgreSQL server. In PBI Desktop everything works fine: I can connect to the server, and import and refresh data smoothly.
However, when I publish to the PBI server, I can't refresh anymore due to an 'encrypted connection' error. I have checked all of my connection settings and made sure they are not encrypted at all, but the problem is still there.
Please let me know if you have any solution for this.
Cheers
I assume you are using DirectQuery?
If you want to use DirectQuery, you will need to set up an On-premises data gateway:
on-premises gateways
Then you should add a gateway cluster in the Power BI web version:
Data gateway
I think everything is quite straightforward here.
But do you need DirectQuery? If you are OK with refreshing your data a few times a day, you could set up an ODBC connection instead (when importing data, choose the ODBC option, not PostgreSQL).
You would need to set up an ODBC data source (Control Panel -> Administrative Tools -> Data Sources) and create a new one (download the PostgreSQL ODBC driver first if you don't have one).
Then you also need to create an On-premises data gateway and set up refresh intervals.
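Before wiring the DSN into Power BI, you can sanity-check it from the same machine with a few lines of Python. This is a sketch: "PostgresReports" stands for whatever DSN name you created in the Data Sources dialog, and the credentials are placeholders.

```python
import pyodbc  # pip install pyodbc

# Connect through the PostgreSQL ODBC driver via the DSN configured in
# Control Panel -> Administrative Tools -> Data Sources.
conn = pyodbc.connect("DSN=PostgresReports;UID=report_user;PWD=s3cret", timeout=10)
cursor = conn.cursor()
cursor.execute("SELECT count(*) FROM information_schema.tables;")
print(cursor.fetchone()[0])  # any result proves the DSN works end to end
conn.close()
```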

How to take a backup of the Tableau Server Repository (PostgreSQL)

We are using Tableau Server version 2018.3. Server stats such as user logins, along with other stats, are logged into the PostgreSQL repository DB, and that data is cleared regularly after one week.
Is there any API available in Tableau to connect to this DB and back up the data somewhere like HDFS or any other location on a Linux server?
Kindly let me know if there are any ways other than an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau:
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the machines (whitelisted) that need it, to create or use an existing read-only account for the repository, and ideally to disable access when your admin programs are complete (i.e., enable access, do your query, disable access).
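Per the linked documentation, that cycle looks roughly like the commands below. This is a sketch from the docs rather than a verified transcript, so double-check the flags against the 2018.3 reference; note that enabling repository access opens the repository port (8060 by default) and may restart the server.

```
tsm data-access repository-access enable --repository-username readonly --repository-password <PASSWORD>
# ... run your queries or backup job ...
tsm data-access repository-access disable
```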
This way you can have any SQL client code you wish query the repository, create a mirror, create reports, run auditing procedures - whatever you like.
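For example, here is a small Python sketch that pulls one of the hist_* audit tables into a CSV you could then push to HDFS (e.g. with hdfs dfs -put). Port 8060, the workgroup database and the readonly user are the documented repository defaults; the hostname, password and table choice are placeholders.

```python
import csv
import psycopg2  # pip install psycopg2-binary

# Connect to the Tableau repository (defaults: port 8060, db "workgroup").
conn = psycopg2.connect(
    host="tableau-server.example.com", port=8060,
    dbname="workgroup", user="readonly", password="<PASSWORD>",
)
with conn.cursor() as cur:
    cur.execute("SELECT * FROM hist_users;")  # one of the hist_* tables
    with open("hist_users.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col.name for col in cur.description])  # header row
        writer.writerows(cur.fetchall())
conn.close()
```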
Personally, before writing significant custom code, I'd first see if the info you want is already available another way: in one of the built-in admin views, via the REST API, using the public-domain LogShark or TabMon tools, with the Server Management Add-on (for more recent versions of Tableau), or possibly the new Data Catalog.
I know at least one server admin who clones the whole PostgreSQL repository database periodically so he can analyze stats offline; I'm not sure what approach he uses to clone it. So you have several options.

How to replicate a PostgreSQL database from a local machine to a web server

I am new to the forum and also new to PostgreSQL.
Normally I use MySQL for my projects, but I've decided to start migrating to PostgreSQL for some valid reasons I have found in this database.
Expanding on the problem:
I need to analyze data via some mathematical formulas, but in order to do this I need to get the data from the software via its API.
The software, the API and PostgreSQL v11.4, which I installed on a desktop, are all running on Windows. So far I've managed to take the data via the API and import it into PostgreSQL.
My problem is how to transfer this data from the local PostgreSQL (on the PC) to a web PostgreSQL (installed on a web server) running Linux.
For example, if I take the data every five minutes from the software via the API and put it into the local PostgreSQL DB, how can I transfer this data (automatically if possible) to the DB on the web server running Linux? I rejected a data dump because importing the whole DB every time is not viable.
What I would like is to transfer only the five-minute batch that gradually adds to the previous data.
I also rejected the idea of a master-slave architecture because I don't know the total amount of data: on the web server I have almost 2 TB of disk, while on the local PC I have only one hard disk that serves only to collect the data and then send it to the web server for analysis.
Could someone please give some good advice on how to achieve this objective?
Thanks to all for any answers.
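One common pattern for this kind of incremental transfer is a small script on a five-minute cron job that copies only the rows newer than the newest row already on the web server. The sketch below uses psycopg2; the sensor_data table, its id primary key and its created_at timestamp column are hypothetical stand-ins for your schema.

```python
import psycopg2                            # pip install psycopg2-binary
from psycopg2.extras import execute_values

LOCAL_DSN = "host=localhost dbname=sensors user=loader password=s3cret"
REMOTE_DSN = "host=web.example.com dbname=sensors user=loader password=s3cret"

with psycopg2.connect(LOCAL_DSN) as local, psycopg2.connect(REMOTE_DSN) as remote:
    with local.cursor() as lcur, remote.cursor() as rcur:
        # High-water mark: newest timestamp already on the web server.
        rcur.execute("SELECT COALESCE(MAX(created_at), 'epoch') FROM sensor_data;")
        last_ts = rcur.fetchone()[0]

        # Fetch only the rows added locally since the last sync.
        lcur.execute(
            "SELECT id, created_at, payload FROM sensor_data"
            " WHERE created_at > %s;",
            (last_ts,),
        )
        rows = lcur.fetchall()

        # Idempotent insert: re-running after a partial failure is safe
        # as long as id is the primary key.
        if rows:
            execute_values(
                rcur,
                "INSERT INTO sensor_data (id, created_at, payload) VALUES %s"
                " ON CONFLICT (id) DO NOTHING;",
                rows,
            )
        print(f"synced {len(rows)} new rows")
```

If both servers run PostgreSQL 10 or newer, built-in per-table logical replication (CREATE PUBLICATION on the PC, CREATE SUBSCRIPTION on the web server) achieves the same without custom code and without mirroring the whole cluster the way a full master-slave setup would.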

Use Cygnus to store historical data from Orion ContextBroker in a local Hadoop database

We are currently working on a project where we use Orion ContextBroker to store information from different sensors and Wirecloud to show it on a web page.
We want to store historical data from these sensors in order to show it in a graph. I have looked around the FIWARE documentation, and they recommend storing the data in a Cosmos instance of FI-LAB, through Cygnus.
The thing is that we would like to store that historical data on a local Hadoop-based server we have in our company, not in Cosmos, because we are running this project on a local network with no internet access, and also to keep that information stored on our own server.
Is it possible to configure Cygnus to redirect the output data to my file system? If so, which files must be configured to achieve this?
Thank you
The answer is yes: Cygnus is meant to persist context data in any HDFS-based filesystem (such as the one used by Cosmos), so nothing special has to be done when configuring Cygnus for a private Hadoop cluster.
If you download the latest version (0.7.0 at the moment of writing this), you will need to configure:
A cygnus_instance_default.conf file, created from cygnus_instance.conf.template. This is the instance configuration. From 0.7.1 it is possible to have multiple instance configurations that run in parallel, and they all have to be called cygnus_instance_<whatever>.conf.
An agent.conf file, created from agent.conf.template. This is the Flume-specific configuration that is explained in the README.md; the HDFS-related part is sketched below.
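For orientation, the HDFS-related part of agent.conf ends up looking something like the excerpt below, pointed at your own namenode instead of Cosmos. The property names vary between Cygnus versions, so treat these keys as illustrative and copy the real ones from the agent.conf.template shipped with your release.

```
# Flume plumbing: one HTTP source, one HDFS sink, one channel
cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink
cygnusagent.channels = hdfs-channel

# Point the sink at the local Hadoop cluster (illustrative key names;
# check agent.conf.template for the exact ones in your version)
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
cygnusagent.sinks.hdfs-sink.hdfs_host = namenode.mycompany.local
cygnusagent.sinks.hdfs-sink.hdfs_port = 14000   # HttpFS default; WebHDFS uses 50070
cygnusagent.sinks.hdfs-sink.hdfs_username = cygnus
```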