Enterprise Library Database Trace Listener?

I'm using EntLib v4 for Logging and currently I'm saving the events to the default text file listener.
I would like to use a MS SQL database as my event sink, and I saw that the database listener is already provided, but I don't know how to create the logging database and stored procedures.
After googling around I saw that in v3 the database creation scripts were shipped with EntLib, but I can't find them in v4.

I just checked, and it's included with the source installation. On my machine it's in C:\EntLib4Src\Blocks\Logging\Src\DatabaseTraceListener\Scripts.
You can use the createloggingdb.cmd file or parse loggingdatabase.sql yourself for the relevant commands.
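If it helps to know what to expect: in v3 the script created a Logging database with a Log table and a WriteLog stored procedure that the database trace listener calls, and assuming v4 keeps that shape, an abridged sketch looks like this (column list shortened and illustrative; the shipped loggingdatabase.sql is authoritative):

    -- Abridged sketch only; run the shipped script for the real schema.
    CREATE DATABASE Logging;
    GO
    USE Logging;
    GO
    CREATE TABLE [Log] (
        LogID       int IDENTITY(1,1) PRIMARY KEY,
        EventID     int,
        Priority    int NOT NULL,
        Severity    nvarchar(32) NOT NULL,
        Title       nvarchar(256) NOT NULL,
        [Timestamp] datetime NOT NULL,
        MachineName nvarchar(32) NOT NULL,
        Message     nvarchar(1500)
        -- ...further columns in the real script
    );
    GO
    -- The listener logs via a stored procedure (WriteLog in the shipped
    -- script) rather than inserting into the table directly.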

Related

Integrating Confluence and PostgreSQL via REST APIs

I'm new to Confluence and would like to know whether there is any way I can connect part of an existing Confluence page to another PostgreSQL DB by making API calls, instead of creating any sockets from the Confluence infrastructure. The image below might help to show what I want to achieve. I'm open to any and all options that can help me achieve this.
Requirements:
Have a Confluence page updating the front end with data from the DB
No/minimal changes to the Confluence infra backend
When I click "Get data" on the front end, it should fetch data from the DB and populate it on the screen
I have tried googling all the similar solutions I could find, but none suits my specific requirement. I looked at Atlassian's pages for connecting to a DB, and at other DB-connection guides from the sources below.
Source 1 - Atlassian
Source 2 - Atlassian
These two sources show how to connect the DB to Confluence using a JDBC connection and how to troubleshoot any issues arising from it, which I want to keep as a last resort.
Source 3 - Agix - uses JDBC
This article also shows a way to connect a Confluence server to a DB via JDBC, hosted on a CentOS server.
Source 4
This shows a way to connect Jira to a DB, again utilising the Jira setup configuration.
Please note - I want to touch the existing Confluence infra as little as possible.
Update: I have used a data source for the space to get the DB connected. Now the challenge is to get data from the user and feed it into the DB. Any leads on how I can do that? I'm using the SQL macro to fetch data from the DB, but I'm not sure how to feed user input from a form into the DB.
If you mean to use PostgreSQL as the core data DB for Confluence, then you just need to follow the guides you linked, as Confluence supports most SQL databases. But if you mean to pull data from some other PostgreSQL DB that merely acts as a container of data for another system, the better option seems to be to configure a separate DB for Confluence (as it is rather big) and use the Java API / REST API to integrate the two systems.

Implement Oracle external-table-like functionality in Azure managed PostgreSQL

Currently we use Oracle 19c's external table functionality on-prem, whereby CSV files are placed in a specific location on the DB server and are automatically loaded into an Oracle external table. The file location is specified as part of the table DDL.
We have a requirement to migrate to Azure managed PostgreSQL. Per the PostgreSQL documentation, functionality similar to Oracle external tables can be achieved in standalone PostgreSQL using foreign tables via the file_fdw extension. But in Azure managed PostgreSQL we cannot use this, since we do not have access to the DB file system.
One option I came across was Azure Data Factory, but that looks expensive. The expected volume is about 1 million record inserts per day.
Could anyone advise on possible alternatives? One option I was considering is a scheduled shell script running on an Azure VM that loads the files into PostgreSQL using psql commands like \copy. Would that be a good option for the volume to be supported?
Regards
Jacob
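For illustration, the core of that approach is a single psql meta-command; \copy reads the file on the client side and streams it over the connection, so it works against a managed server where plain COPY from a server path is not available. The table and path names here are hypothetical:

    -- Run from psql on the Azure VM; \copy needs no server file-system access.
    -- raw_events and the file path are illustrative names.
    \copy raw_events FROM '/data/incoming/events.csv' WITH (FORMAT csv, HEADER true)

At roughly 1 million rows per day this is well within what \copy can handle, provided files are loaded in reasonably large batches rather than row by row.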
One last option that could be simple to implement during the migration is EnterpriseDB (EDB), which avoids vendor lock-in and is also free of cost.
Check the video linked below for the migration procedure steps.
https://www.youtube.com/watch?v=V_AQs8Qelfc

Automate data loading to Google Sheet from PostgreSQL database

I would like to set up automated data pulls from our PostgreSQL database into a Google Sheet. I've tried the JDBC service, but it doesn't work, maybe due to incorrect variables/config. Has anyone tried doing this? I'd also like to schedule the extraction to run every hour.
According to the documentation, only Google Cloud SQL for MySQL, MySQL, Microsoft SQL Server, and Oracle databases are supported by Apps Script's JDBC service. You may have to either move to a supported database or develop your own API service to handle the connection.
As for scheduling by the hour, you can use Apps Script's installable triggers.

Use Cygnus to store historical data from Orion ContextBroker in a local Hadoop database

We are currently working on a project where we use Orion Context Broker to store information from different sensors and WireCloud to show it on a web page.
We want to store historical data from these sensors in order to show it in a graph. I have looked around the FIWARE documentation, and it recommends storing the data in a Cosmos instance on FI-LAB, through Cygnus.
The thing is that we would like to store that historical data on a local Hadoop-based server we have in our company, not in Cosmos, because we are running this project on a local network with no internet access, and also so that the information is stored on our own server.
Is it possible to configure Cygnus to redirect its output data to my file system? If so, which files must be configured to achieve this?
Thank you
The answer is yes. Cygnus is meant to persist context data in any HDFS-based file system (such as the one used by Cosmos), so nothing special has to be done when configuring Cygnus for a local Hadoop cluster.
If you download the latest version (0.7.0 at the time of writing), you will need to configure:
A cygnus_instance_default.conf file, created from cygnus_instance.conf.template. This is the instance configuration. From 0.7.1 it is possible to have multiple instance configurations that run in parallel; they all have to be named cygnus_instance_<whatever>.conf.
An agent.conf file, created from agent.conf.template. This is the Flume-specific configuration described in the README.md; a sketch is shown below.
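For orientation, agent.conf uses standard Flume properties syntax. A heavily abridged sketch of pointing the HDFS sink at a local cluster might look like the following; the source/channel/sink layout is plain Flume, but the sink property names here are illustrative, and the template plus the README.md give the real ones:

    # Standard Flume agent layout: sources, channels, sinks.
    cygnusagent.sources = http-source
    cygnusagent.channels = hdfs-channel
    cygnusagent.sinks = hdfs-sink

    # Point the HDFS sink at the local Hadoop cluster instead of Cosmos.
    # Property names are illustrative; see agent.conf.template / README.md.
    cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
    cygnusagent.sinks.hdfs-sink.hdfs_host = local-namenode.mycompany.local
    cygnusagent.sinks.hdfs-sink.hdfs_port = 14000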

SQL Service Broker creating objects in SQL Server Database Project in VS 2012

So I've started a SQL Server database project inside VS 2012. I have done this for other databases already, but not ones involving Service Broker.
For testing, I had already created the DB, queues, etc. through a T-SQL script, including message types whose names are in a URL/XML-style format, i.e.
[//blah.com/Items/RequestItem]
When I try to do something like this in the DB project, it doesn't let me, due to the special characters.
Has anyone done this? Gotten around it?
Is there a way to simply put my already-created T-SQL file into the database project and have it use it?
See my comment above. I was able to import the script by right-clicking on the database project.
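For reference, the special characters in a message type name are valid T-SQL as long as the identifier is bracket-quoted, which is what the imported script relies on. A minimal sketch (the contract and queue names are illustrative):

    -- Bracket quoting makes the URL-style name a legal identifier.
    CREATE MESSAGE TYPE [//blah.com/Items/RequestItem]
        VALIDATION = WELL_FORMED_XML;

    -- Illustrative follow-on objects referencing the message type.
    CREATE CONTRACT [//blah.com/Items/RequestContract]
        ([//blah.com/Items/RequestItem] SENT BY INITIATOR);

    CREATE QUEUE dbo.RequestQueue;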