Integrating Confluence and PostgreSQL via REST APIs

I'm new to Confluence and would like to know whether there is any way I can connect part of an existing Confluence page with another PostgreSQL DB by making API calls, instead of creating any sockets from the Confluence infrastructure. The image below might help explain what I want to achieve. I'm open to any or all options that can help me achieve this.
Requirement:
Have a Confluence page updating the frontend with data from the DB
No/minimal changes to the Confluence infra backend
When I click "Get data" on the front end, it should fetch data from the DB and populate it on the screen
I have tried googling all the similar solutions I can find, but I couldn't find any that suits my specific requirement. I tried looking at Atlassian's pages for connecting to a DB, and other DB connection guides from the sources mentioned below.
Source 1 - Atlassian
Source 2 - Atlassian
These two sources show how to connect a DB to Confluence using a JDBC connection and how to troubleshoot any issues arising from it, which I want to keep as a last resort.
Source 3 - Agix - uses JDBC
This article also shows a way to connect a Confluence server, hosted on a CentOS server, to a DB via JDBC.
Source 4
This shows a way to connect Jira to a DB, again using the Jira setup configuration.
Please note - I want to touch the existing Confluence infra as little as possible.
Update: I have used the data source for the space to get the DB connected. The challenge now is to get data from the user and feed it into the DB - any leads on how I can do that (see the sketch below for one idea)? I'm using the SQL macro to fetch data from the DB, but I'm not sure how to feed user input from a form back into the DB.
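One approach that leaves the Confluence infrastructure untouched is to put a small external service in front of the DB and have the page post to it, e.g. from a form in an HTML macro. Below is a minimal sketch of such a service (Node.js with Express and the pg driver); the /submit endpoint, the entries table, and its columns are all hypothetical placeholders for your real schema:

// Minimal external write service; Confluence itself is not modified -
// an HTML macro on the page submits a form to this endpoint.
const express = require('express');
const { Pool } = require('pg');

const app = express();
app.use(express.urlencoded({ extended: false })); // parse HTML form bodies

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Hypothetical endpoint the page's <form action="..." method="post"> points at.
app.post('/submit', async (req, res) => {
  // 'entries', 'name' and 'value' are placeholders for your real schema.
  await pool.query(
    'INSERT INTO entries (name, value) VALUES ($1, $2)',
    [req.body.name, req.body.value]
  );
  res.send('Saved - the SQL macro will show the new row on the next page load.');
});

app.listen(3000);

Since the SQL macro already reads from the same DB, the page picks up the inserted rows the next time it renders.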

If you mean to use PostgreSQL as the core data DB for Confluence, then you just need to follow the guides you linked to, as Confluence supports most SQL databases. But if you mean to get data from some other PostgreSQL DB that is just a container of some data or another system, it seems the better option is to configure a separate DB for Confluence (as it is rather big) and use the Java API / REST API to integrate the systems.
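As a rough sketch of the REST API route (base URL, credentials, page ID, and query below are all placeholders), an external script can read from the other PostgreSQL DB and push the result into a page body through Confluence's content API:

// Node.js sketch: pull rows from the external PostgreSQL DB and write
// them into a Confluence page via the REST API (requires Node 18+ for
// the global fetch). All IDs and credentials are placeholders.
const { Pool } = require('pg');

const pool = new Pool({ connectionString: process.env.EXTERNAL_PG_URL });
const base = 'https://confluence.example.com';                            // hypothetical base URL
const auth = 'Basic ' + Buffer.from('user:password').toString('base64');  // or a token
const pageId = '12345';                                                   // hypothetical page ID

async function syncPage() {
  const { rows } = await pool.query('SELECT name, value FROM report');    // placeholder query
  const table = '<table><tbody>' +
    rows.map(r => `<tr><td>${r.name}</td><td>${r.value}</td></tr>`).join('') +
    '</tbody></table>';

  // Confluence requires the current version number + 1 on update.
  const page = await (await fetch(`${base}/rest/api/content/${pageId}?expand=version`, {
    headers: { Authorization: auth },
  })).json();

  await fetch(`${base}/rest/api/content/${pageId}`, {
    method: 'PUT',
    headers: { Authorization: auth, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      id: pageId,
      type: 'page',
      title: page.title,
      version: { number: page.version.number + 1 },
      body: { storage: { value: table, representation: 'storage' } },
    }),
  });
}

syncPage().catch(console.error);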

Related

How can you integrate Grafana with Google Cloud SQL

I haven't been able to find how to take a Postgres instance on Google Cloud SQL (on GCP) and hook it up to a Grafana dashboard to visualize the data that is in the DB. Is there an accepted easy way to do this? I'm a complete newbie to Grafana and have limited experience with GCP (I've used the Cloud SQL proxy to connect to a Postgres instance).
Grafana displays the data; Google Cloud Monitoring stores the data to display. So, you have to make a link between the two.
And boom, magically, a plug-in exists!
Note: when you know what you're searching for, it's easier to find it. Understand your architecture to reach the next level!
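Since the asker already uses the Cloud SQL proxy, one way to make that link for data sitting in the DB itself is Grafana's built-in PostgreSQL data source pointed at the proxy. A hedged provisioning sketch, where every value is a placeholder for your own setup:

# e.g. /etc/grafana/provisioning/datasources/cloudsql.yaml
apiVersion: 1
datasources:
  - name: CloudSQL-Postgres
    type: postgres
    access: proxy
    url: 127.0.0.1:5432        # the locally running Cloud SQL proxy
    database: mydb             # placeholder database name
    user: grafana              # placeholder read-only user
    secureJsonData:
      password: ${GRAFANA_PG_PASSWORD}
    jsonData:
      sslmode: disable         # the proxy already encrypts the tunnel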

Automate data loading to Google Sheet from PostgreSQL database

I would like to create an automated data pull from our PostgreSQL database into a Google Sheet. I've tried the JDBC service, but it doesn't work, maybe due to incorrect variables/config. Has anyone already tried doing this? I'd also like to schedule the extraction to run every hour.
According to the documentation, only Google Cloud SQL MySQL, MySQL, Microsoft SQL Server, and Oracle databases are supported by Apps Script's JDBC. You may have to either move to a new database or develop your own API service to handle the connection.
As for scheduling by the hour, you can use Apps Script's installable triggers.
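A minimal Apps Script sketch of that pattern - the HTTPS endpoint stands in for a hypothetical API service of your own in front of Postgres, and the 'Data' sheet name is a placeholder:

// Pulls rows from your own API service (Apps Script's JDBC cannot
// reach PostgreSQL directly) and writes them into a sheet.
function pullFromPostgres() {
  var response = UrlFetchApp.fetch('https://example.com/report'); // hypothetical endpoint returning JSON rows
  var rows = JSON.parse(response.getContentText());               // e.g. [["col1", "col2"], ...]
  if (rows.length === 0) return;
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Data');
  sheet.clearContents();
  sheet.getRange(1, 1, rows.length, rows[0].length).setValues(rows);
}

// Installable trigger: run the pull every hour.
function createHourlyTrigger() {
  ScriptApp.newTrigger('pullFromPostgres')
      .timeBased()
      .everyHours(1)
      .create();
}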

How to take a backup of the Tableau Server Repository (PostgreSQL)

We are using version 2018.3 of Tableau Server. Server stats like user logins and other stats are getting logged into its PostgreSQL DB, and the same are being cleared regularly after one week.
Is there any API available in Tableau to connect to the DB and take a backup of the data somewhere like HDFS or any place on a Linux server?
Kindly let me know if there are other ways besides an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau:
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the machines (whitelisted) that need it, to create or use an existing read-only account to access the repository, and ideally to disable access when your admin programs are complete (i.e. enable access, do your query, disable access).
This way you can have any SQL client code you wish query the repository - create a mirror, create reports, run auditing procedures, whatever you like.
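As a hedged sketch of that, once repository access is enabled with tsm, any Postgres client can connect with the read-only account; the host and password below are placeholders (the repository is conventionally the workgroup database listening on port 8060):

// Node.js sketch using the 'pg' driver; substitute your Tableau Server
// host and the password you set when enabling repository access.
const { Client } = require('pg');

async function dumpStats() {
  const client = new Client({
    host: 'tableau.example.com',  // hypothetical server name
    port: 8060,                   // Tableau repository's Postgres port
    database: 'workgroup',        // the repository database
    user: 'readonly',             // built-in read-only repository account
    password: process.env.TABLEAU_READONLY_PW,
  });
  await client.connect();
  // '_background_tasks' is one of the repository's admin views; point
  // the query at whichever stats you need to archive before the purge.
  const res = await client.query('SELECT * FROM _background_tasks LIMIT 100');
  console.log(res.rows);  // from here, write to HDFS, a file, etc.
  await client.end();
}

dumpStats().catch(console.error);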
Personally, before writing significant custom code, I'd first see if the info you want is already available another way: in one of the built-in admin views, via the REST API, using the public-domain LogShark or TabMon systems, with the Server Management Add-on (for more recent versions of Tableau), or possibly the new Data Catalog.
I know at least one server admin who somehow clones the whole Postgres repository database periodically so he can analyze stats offline. I'm not sure what approach he uses to clone. So you have several options.

Running Camunda with Spring Boot & MongoDB

Has anyone been able to get Camunda to run with Spring Boot and MongoDB?
I tried several approaches and always ran into a brick wall.
What I tried:
1. JPA / Hibernate OGM
I was able to initiate a connection to Mongo after creating my own CamundaDatasourceConfiguration and ProcessEngineConfigurationImpl.
It failed when Camunda tried to get table metadata, and I couldn't disable this behavior.
2. JDBC driver for Mongo by Progress
I set up the JDBC URL and the Progress driver class.
Camunda then gets stuck during the startup process and does not get to the point where Jetty is fully started, i.e. the "Jetty started on port XYZ" message in the log.
3. Camunda with Postgres with a Mongo FDW
A foreign data wrapper (FDW) is a mechanism for Postgres to interface with an external data source. This way an application can work with Postgres over JDBC while the FDW takes care of reading and writing the data to the external source, be it a file, MongoDB, etc.
After realizing 1 and 2 don't work, I started working on 3.
Has anyone succeeded in doing this and can share how?
So I ran into the same problem and decided to share my answer with you.
Currently it is not possible to run the Camunda engine with a NoSQL database.
In this Camunda forum post, one of the people at Camunda also says it is not possible to run the engine completely without a database.
And the official Camunda docs also have a list of all supported environments; currently only SQL databases are listed:
https://docs.camunda.org/manual/7.10/introduction/supported-environments/
But in some earlier blog posts they mentioned that they want to make some proof-of-concept examples using NoSQL databases. So we can expect that these databases may be supported in the future, but not at the moment.
(Note: the Flowable engine is doing the same proofs of concept; they mentioned that they want to be able to use NoSQL databases by the end of next year.)

mongodb - user connection string, secure password

I've been following a tutorial with Express, Node and Mongo.
I have this in a config file on the server side:
production: {
  db: 'mongodb://MYUSERNAME:MYPASSWORD@ds033307.mongolab.com:33307/dbname',
  rootPath: rootPath,
  port: process.env.PORT || 80
}
So, I have my username and password in clear text in a server-side JavaScript file. Should I be worried about this? If yes, where else can I put it?
Thanks.
Edit: I went back and had a look at the MongoLab and Heroku (where my site is hosted) docs.
There I found: "The MongoLab add-on contributes one config variable to your Heroku environment: MONGOLAB_URI", and so I was able to put the MONGOLAB_URI env var into my config and move the password out of the source code (see the sketch below).
With regards to the same datacenter: am I right to assume Heroku would not be hosting my MongoLab database in their datacenter, but would instead be calling out to a cloud-service Mongo database? Not much I can do then, is there, if I want to stick with MongoLab and Heroku?
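For reference, the resulting config from that edit might look like this sketch (rootPath as in the original snippet):

production: {
  db: process.env.MONGOLAB_URI, // injected by the MongoLab add-on on Heroku
  rootPath: rootPath,
  port: process.env.PORT || 80
}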
I know this question is old, but according to Heroku's docs they currently use 2 datacenters (https://devcenter.heroku.com/articles/regions#data-center-locations).
Their US server is 'amazon-web-services::us-east-1' and their EU alternative is 'amazon-web-services::eu-west-1'.
Both of these data centers are available when launching Mongo instances on MongoLab, so you can choose for both your app and your DB to be on the same datacenter, giving much improved security.
I think you should always be concerned about storing passwords in source code files. Generally you would be much better off keeping them in a configuration file that is managed separately. This gives you the flexibility to use the same code with a different configuration file to point to development or QA databases.
Of bigger concern perhaps - are you hosting your application in the same datacenter that MongoLab is hosting your database? If not, that user name and password, along with your data, will traverse the internet in the clear.
MongoLab does not currently support SSL (other than for their REST API), so even they recommend being in the same data center:
Do you support SSL?
Not yet but it is on our roadmap to be available in Summer 2014. In the meantime, we highly recommend that you run your application and database in the same datacenter. If you have a Dedicated plan, we also highly recommend that you configure custom firewall rules for your database(s).
REST API:
Each MongoLab account comes with a REST API that can be used to access the databases, collections and documents belonging to that account. The API exposes most of the operations you would find in the MongoDB driver, but offers them as a RESTful interface over HTTPS.
I would definitely read MongoLab's security page fairly closely:
https://docs.mongodb.com/manual/security/