I haven't been able to find how to take a Postgres instance on Google Cloud SQL (on GCP) and hook it up to a Grafana dashboard to visualize the data that is in the DB. Is there an accepted, easy way to do this? I'm a complete newbie to Grafana and have limited experience with GCP (I've used the Cloud SQL proxy to connect to a Postgres instance).
Grafana displays the data. Google Cloud Monitoring stores the data to display. So, you have to make a link between the two.
And boom, magically, a plug-in exists!
Note: when you know what you're searching for, it's easier to find it. Understand your architecture to reach the next level!
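For reference, you can also register that data source programmatically instead of clicking through the UI. Here is a minimal sketch (Python, using requests) that creates Grafana's Google Cloud Monitoring data source (Grafana's type id for it is "stackdriver") via the HTTP API. The Grafana URL, project id, service-account email, and tokens are placeholders, and the jsonData field names are my assumption from the provisioning docs, so double-check them against your Grafana version:

```python
# Sketch only: create the Google Cloud Monitoring ("stackdriver") data source
# through Grafana's HTTP API. All ids, URLs, and credentials are placeholders.
import os
import requests

payload = {
    "name": "google-cloud-monitoring",
    "type": "stackdriver",  # Grafana's type id for Google Cloud Monitoring
    "access": "proxy",
    "jsonData": {
        "authenticationType": "jwt",
        "defaultProject": "my-gcp-project",  # placeholder project id
        "clientEmail": "grafana@my-gcp-project.iam.gserviceaccount.com",
        "tokenUri": "https://oauth2.googleapis.com/token",
    },
    # private key of the GCP service account, kept out of source control
    "secureJsonData": {"privateKey": os.environ["SA_PRIVATE_KEY"]},
}
r = requests.post(
    "http://localhost:3000/api/datasources",  # placeholder Grafana URL
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['GRAFANA_API_TOKEN']}"},
)
r.raise_for_status()
```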
I'm new to Confluence and would like to know whether there is any way I can connect part of an existing Confluence page with another PostgreSQL DB by making API calls instead of creating any sockets from the Confluence infrastructure. I'm open to any or all options that can help me achieve this.
Requirement:
Have a Confluence page updating the front end with data from the DB
No/minimal changes to the Confluence infra backend
When I click 'Get data' on the front end, it should fetch data from the DB and display it on the screen
I have tried googling all the similar solutions I can find, but none of them suits my specific requirement. I have looked at Atlassian's pages on connecting to a DB, and at other DB connection guides from the sources mentioned below.
Source 1 - Atlassian
Source 2 - Atlassian
These two sources show how to connect a DB to Confluence using a JDBC connection and how to troubleshoot any issues arising out of it, which I want to keep as a last resort.
Source 3 - Agix - uses JDBC
This article also shows a way to connect a Confluence server, hosted on a CentOS server, to a DB via JDBC.
Source 4
This shows a way to connect Jira to a DB, again utilising the Jira setup configuration.
Please note - I want to touch the existing Confluence infra as little as possible.
Update: I have used the data source for the space to get the DB connected. Now the challenge is to get data from the user and feed it into the DB. Any leads on how I can do that? I'm using the SQL macro to fetch data from the DB, but I'm not sure how to feed user input from a form into the DB.
If you mean to use PostgreSQL as the core data DB for Confluence, then you just need to follow the guides you linked to, as Confluence supports most SQL databases. But if you mean to get data from some other PostgreSQL DB that is just a container of data for another system, it seems the better option is to configure a separate DB for Confluence (as it is rather big) and use the Java API / REST API to integrate the two systems.
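If you go the REST API route, reading page content from outside Confluence is a plain HTTP call. A minimal sketch (Python with requests; the base URL, page id, and credentials are placeholders):

```python
# Sketch: read a page's storage-format body via the Confluence REST API.
# The base URL, page id 12345, and credentials are placeholders.
import requests

BASE_URL = "https://confluence.example.com"

resp = requests.get(
    f"{BASE_URL}/rest/api/content/12345?expand=body.storage",
    auth=("api_user", "api_password"),  # placeholder credentials
)
resp.raise_for_status()
print(resp.json()["body"]["storage"]["value"])  # page body as storage XHTML
```

A small middleware service built around calls like this (GET to read, PUT to update the page body) lets you sync DB data into the page without touching the Confluence infra itself.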
I would like to create an automated data pull from our PostgreSQL database into a Google Sheet. I've tried the JDBC service, but it doesn't work, maybe due to incorrect variables/config. Has anyone tried doing this already? I'd also like to schedule the extraction to run every hour.
According to the documentation, Apps Script's JDBC service supports only Google Cloud SQL for MySQL, MySQL, Microsoft SQL Server, and Oracle databases. You may have to either move to a supported database or develop your own API service to handle the connection.
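One common shape for such an API service is a tiny HTTP endpoint in front of PostgreSQL, which Apps Script can then call with UrlFetchApp and write into the sheet with setValues(). A rough sketch (Python with Flask and psycopg2; the connection details, table, and query are placeholders):

```python
# Sketch: a minimal HTTP API in front of PostgreSQL, since Apps Script's
# JDBC service can't reach Postgres directly. All names are placeholders.
from flask import Flask, jsonify
import psycopg2

app = Flask(__name__)

@app.route("/rows")
def rows():
    conn = psycopg2.connect(host="db.example.com", dbname="appdb",
                            user="reader", password="secret")  # placeholders
    with conn, conn.cursor() as cur:
        cur.execute("SELECT id, name FROM widgets ORDER BY id LIMIT 100")
        data = cur.fetchall()
    conn.close()
    return jsonify(data)  # rows come back as a JSON array of arrays

if __name__ == "__main__":
    app.run(port=8080)
```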
As for scheduling by the hour, you can use Apps Script's installable triggers.
Currently we monitor our SQL Servers running on the Windows platform via MS SQL Server Reporting Services using shared data sources. To clarify: we don't store data on a centralized server to monitor our 500+ target servers. We keep the monitoring data on the local SQL database servers and use shared data sources in SSRS to create dashboards.
Now our firm is encouraging us to use Grafana for dashboards, since they have purchased, or are already running, some Grafana server licensing. What I know of a Grafana instance is that it can be given to us to monitor SQL Servers as described above.
My question is: how would Grafana dynamically connect to those 500+ servers? I see that a data source is created once, but how will I change or create multiple data sources when I have around 1,000 servers to monitor?
Please suggest a guide.
You may have to code a bit and use data source provisioning and/or the Grafana data source API for Grafana to pick up the new data sources.
If you set up a system (user data / init script / IaC) where this API is called every time a new server comes up, then you will be able to keep the data sources current without manual maintenance.
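As a sketch of the API option (Python with requests; the Grafana URL, token, database, and account names are all placeholders), each new server would register its own data source like this:

```python
# Sketch: register one MSSQL data source per monitored server through
# Grafana's HTTP API. URL, token, and all names are placeholders.
import os
import requests

GRAFANA_URL = "http://grafana.example.com:3000"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAFANA_API_TOKEN']}"}

def add_data_source(host: str) -> None:
    payload = {
        "name": f"mssql-{host}",
        "type": "mssql",           # Grafana's built-in SQL Server data source
        "access": "proxy",
        "url": f"{host}:1433",
        "database": "monitoring",  # assumed local monitoring database
        "user": "grafana_reader",  # assumed read-only account
        "secureJsonData": {"password": os.environ["MSSQL_PASSWORD"]},
    }
    r = requests.post(f"{GRAFANA_URL}/api/datasources",
                      json=payload, headers=HEADERS)
    r.raise_for_status()

# e.g. called from user data / an init script when a server comes up:
add_data_source("sqlserver-0042.internal")
```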
I'm using Power BI version 2.84 to connect to a PostgreSQL server. In PBI Desktop everything works fine: I can connect to the server, and import and refresh data smoothly.
However, when I publish it to the PBI server, I can't refresh it anymore due to an 'encrypted connection' error. I have checked all of my connection settings and made sure they are not encrypted at all, but the problem is still there.
Please let me know if you have any solution for this.
Cheers
I assume you are using DirectQuery?
If you want to use DirectQuery, you will need to set up an on-premises data gateway.
on premise gateways
Then you should add a gateway cluster in the Power BI web version:
Data gateway
I think everything is quite straightforward here.
But do you need DirectQuery? If you are OK with refreshing your data a few times a day, you could set up an ODBC connection (when importing data, choose the ODBC option, not PostgreSQL).
You would need to set up the ODBC driver (Control Panel -> Administrative Tools -> Data Sources) and create a new data source there (download the PostgreSQL ODBC driver if you don't have one); a quick way to test the DSN is sketched below.
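To sanity-check the DSN outside Power BI first, a quick sketch (Python with pyodbc; the DSN name and credentials are placeholders):

```python
# Sketch: verify the PostgreSQL ODBC DSN works before pointing Power BI at it.
# "PostgreSQL35W" is a placeholder DSN name created in the ODBC admin tool.
import pyodbc

conn = pyodbc.connect("DSN=PostgreSQL35W;UID=reader;PWD=secret")
cur = conn.cursor()
cur.execute("SELECT version()")
print(cur.fetchone()[0])  # prints the PostgreSQL server version string
conn.close()
```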
Then you also need to create an on-premises data gateway and set up the refresh intervals.
As part of my thesis project, I have been given a MongoDB dump of size 240 GB, which is on my external hard drive. I'll have to use this data to run my Python scripts for a short duration. However, since my dataset is huge and I cannot mongoimport it into my local MongoDB server (I don't have enough internal storage), my professor gave me a $100 Google Cloud Platform coupon so I can use Google's cloud computing resources.
So far I have researched that I can do it this way:
Create a Compute Engine instance in GCP and install MongoDB on the remote machine. Transfer the MongoDB dump to the remote instance and run the scripts there to get the output.
This method works well, but I'm looking for a way to create a remote database server in GCP so that I can run my scripts locally, something like one of the following.
Creating a remote MongoDB server on GCP so that I can establish a remote Mongo connection and run my scripts locally.
Transferring the MongoDB dump to Google's Datastore so that I can use the Datastore API to connect remotely and run my scripts locally.
I have given some thought to using MongoDB Atlas, but because of the size of the data I would be billed heavily, and I cannot use my GCP coupon there.
Any help or suggestions on how either of the two methods can be implemented are appreciated.
There are two parts to your question.
First, you can create a Compute Engine VM with MongoDB installed and load your backup onto it. Then, open the right firewall rules to allow the connection from your local environment to the Google Compute Engine VM. The connection will be performed with a simple login/password.
You can use a static IP on your VM. That way, in case the VM reboots, you will keep the same IP (and it will be easier for your local connection).
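Once the firewall rule and static IP are in place, your local scripts connect to it like to any other MongoDB. A sketch with pymongo (the IP, credentials, and database/collection names are placeholders):

```python
# Sketch: connect from the local machine to MongoDB running on the GCE VM.
# The static IP, credentials, and db/collection names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://scriptUser:PASSWORD@STATIC_IP:27017/",
                     authSource="admin")
coll = client["thesis"]["samples"]
print(coll.count_documents({}))  # quick connectivity check
for doc in coll.find().limit(5):
    print(doc)
```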
Second, BE CAREFUL with Datastore. It's a good product, a serverless, document-oriented NoSQL database, but it's absolutely not a MongoDB equivalent. You can't perform aggregations, you are limited in search capabilities, and so on. It's designed for specific use cases (I don't know yours, but don't assume it is a MongoDB equivalent!).
Anyway, if you use Datastore, you will have to use a service account or install the Google Cloud SDK on your local environment to be authenticated and able to call the Datastore API. There is no login/password in this case.
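For completeness, a sketch of calling Datastore from a local script with the official client library (the project id and kind name are placeholders; it assumes GOOGLE_APPLICATION_CREDENTIALS points at a service-account key, or that gcloud application-default credentials are configured):

```python
# Sketch: query Datastore from a local script. The project id and the
# 'Record' kind are placeholders; auth comes from a service account or
# from the Google Cloud SDK's application-default credentials.
from google.cloud import datastore

client = datastore.Client(project="my-gcp-project")
query = client.query(kind="Record")
for entity in query.fetch(limit=10):
    print(dict(entity))
```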