I'm trying to copy data from a PostgreSQL DB on an Ubuntu box that requires IP whitelisting for access. With Azure Data Factory IPs changing all the time, and since I cannot install the Self-hosted Integration Runtime because it's a Linux server, what other options are available to copy data from this PostgreSQL DB into an Azure SQL DB without having to worry about the IP addresses? Any suggestions or known solutions for this, please?
According to the documentation, the ADF Self-hosted Integration Runtime can't be installed on a Linux server; it can only be used on Windows.
Note that this feature is also not going to be supported any time soon; please follow this feedback link.
The latest comment there reads: "Currently we don't have any plan on this yet. Could you share us your reasons why do you want Linux?"
As a workaround, I suggest you take a look at Azure DMS (Database Migration Service). Please see more detail about it in this link and this video.
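If DMS doesn't fit either, one way around the IP problem is to push the data out from the already-whitelisted Ubuntu box rather than pulling it from Azure: then only the Azure SQL firewall needs a rule for that box's stable IP. A minimal sketch, assuming psycopg2 and pyodbc are installed; all connection strings, table names, and columns are placeholders, not values from the question:

```python
# Minimal sketch: copy rows from the whitelisted Ubuntu box to Azure SQL.
# All connection strings, table names, and columns below are placeholders.
import psycopg2  # pip install psycopg2-binary
import pyodbc    # pip install pyodbc (plus the MS ODBC Driver for SQL Server)

# Running this ON the Ubuntu box means only outbound traffic to Azure SQL,
# so the Azure SQL firewall just needs one rule for this box's stable IP.
src = psycopg2.connect("host=localhost dbname=appdb user=copyuser password=...")
dst = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=targetdb;UID=loader;PWD=..."
)

read_cur = src.cursor(name="copy_cur")   # named = server-side cursor: streams rows
write_cur = dst.cursor()
write_cur.fast_executemany = True        # batches the INSERTs on the pyodbc side

read_cur.execute("SELECT id, payload FROM source_table")
while True:
    rows = read_cur.fetchmany(1000)      # keep memory bounded on big tables
    if not rows:
        break
    write_cur.executemany(
        "INSERT INTO dbo.target_table (id, payload) VALUES (?, ?)", rows
    )
    dst.commit()

read_cur.close()
src.close()
dst.close()
```

The named cursor on the psycopg2 side streeps the result set in batches instead of loading it all into memory, which matters for large tables.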
I'm currently working on a project with some colleagues, and a colleague of mine linked a database created on her PostgreSQL server to our Visual Studio project. We don't know how she can share the database with the rest of us, or how we can modify the database without having it locally.
We're using PostgreSQL 14.
One option is to host your database with a cloud provider such as AWS.
You can take a look at: https://aws.amazon.com/rds/postgresql/
This way all of you will be able to access the database.
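Once the database lives on a shared endpoint, everyone connects to the same host instead of one person's machine. A minimal sketch with psycopg2; the endpoint, database name, and credentials below are placeholders you'd take from the RDS console, and the same idea applies from whatever client your Visual Studio project uses:

```python
# Minimal sketch: every teammate connects to the same RDS endpoint instead of
# a database living on one person's machine. Host and credentials are
# placeholders, not real values.
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="myproject.abc123xyz.eu-west-1.rds.amazonaws.com",  # RDS endpoint
    port=5432,
    dbname="projectdb",
    user="team_user",
    password="...",        # better: read this from an environment variable
    sslmode="require",     # RDS supports TLS; keep traffic encrypted
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()
```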
In the Application Map feature of Azure Application Insights, it seems that the PostgreSQL DB dependency is not shown by default, whereas Azure Storage queues and blobs are shown, and so are other HTTP dependencies. This doc by Microsoft doesn't explain why either.
Does anyone know why, and when this feature will be available?
I believe it's because, for .NET, PostgreSQL is not an auto-collected dependency. You will have to wire it up manually, according to this article.
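The linked article covers the .NET wiring; as an illustration of the general idea (record each database call yourself as a dependency telemetry item so it appears on the map), here is a sketch using the older Python applicationinsights package. The instrumentation key, server name, and helper function are all placeholders, and the track_dependency call assumes that package's API rather than the .NET one:

```python
# Sketch of manually reporting a PostgreSQL call as dependency telemetry
# so that it shows up on the Application Map. The key is a placeholder.
import time
from applicationinsights import TelemetryClient  # pip install applicationinsights

tc = TelemetryClient("00000000-0000-0000-0000-000000000000")

def run_query(cursor, sql):
    """Hypothetical wrapper: execute a query and emit dependency telemetry."""
    start = time.time()
    success = True
    try:
        cursor.execute(sql)
        return cursor.fetchall()
    except Exception:
        success = False
        raise
    finally:
        tc.track_dependency(
            name="PostgreSQL",            # dependency name shown on the map
            data=sql,                     # the command that was run
            type="postgresql",            # dependency type label
            target="mydb.example.com",    # placeholder server name
            duration=int((time.time() - start) * 1000),  # milliseconds
            success=success,
        )
        tc.flush()
```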
I've started using Azure Pipelines, and my Django application runs tests which require a local PostgreSQL server.
I'm pretty stumped, as I cannot find any information in the MS documentation on how to change the agent's configuration to include a local PostgreSQL server.
It seems like a very simple thing to do, but I cannot find the relevant documentation. The agent pool information lists MySQL as being installed locally.
How can I include a local PostgreSQL server in the configuration of the agent that will build my application and run tests?
Some options:
Create a Docker container with your testing prerequisites in it, and then run your tests in the container (see the sketch after this list)
Create your own self-hosted agent and install whatever software you need on it, and then use that instead of the Microsoft-hosted agents.
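To make the first option concrete, here is a rough sketch of driving it from Python on a Microsoft-hosted Ubuntu agent (which ships with Docker). The container name, password, database name, and readiness loop are all arbitrary choices for illustration:

```python
# Sketch for option 1: start a throwaway Postgres container before the Django
# tests and tear it down afterwards. Names and credentials are placeholders.
import subprocess
import time

def start_postgres():
    subprocess.run(
        [
            "docker", "run", "-d", "--name", "ci-postgres",
            "-e", "POSTGRES_PASSWORD=ci_password",
            "-e", "POSTGRES_DB=test_db",
            "-p", "5432:5432",
            "postgres:14",
        ],
        check=True,
    )
    # Crude readiness wait; pg_isready exits 0 once connections are accepted.
    for _ in range(30):
        probe = subprocess.run(
            ["docker", "exec", "ci-postgres", "pg_isready", "-U", "postgres"],
            capture_output=True,
        )
        if probe.returncode == 0:
            return
        time.sleep(1)
    raise RuntimeError("PostgreSQL container never became ready")

def stop_postgres():
    subprocess.run(["docker", "rm", "-f", "ci-postgres"], check=True)

if __name__ == "__main__":
    start_postgres()
    try:
        subprocess.run(["python", "manage.py", "test"], check=True)
    finally:
        stop_postgres()
```

You could just as well run the equivalent docker commands in a pipeline script step; the wrapper above only keeps startup, tests, and teardown in one place.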
We are using the Maven plugin org.sonarsource.scanner.maven.sonar-maven-plugin to store our reports in our SonarQube instance. For company reasons our CI server is in a different network zone than this instance, and the Postgres default port is closed. I wonder if there is an option to store the reports some other way than having them written directly to the database via JDBC, as opening ports is a tedious task here ;)
Furthermore, we also have some older pieces of software that need to be analyzed with a local sonar runner instance, and the same question applies there (so: is there another way to store the reports?)
Since version 5.2 the scanners no longer connect to the database at all. So the easiest/safest course of action here is to upgrade to the latest LTS (5.6 at the time of writing).
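After the upgrade the scanner only talks to the SonarQube server over HTTP(S), and the server itself writes to Postgres, so the only thing to verify from the CI zone is that the web port is reachable. A quick check, assuming a placeholder host URL and the default port 9000:

```python
# Quick connectivity check: post-5.2, the scanner needs only HTTP(S) access
# to the SonarQube server; the postgres port can stay closed to the CI zone.
# The host URL below is a placeholder for your instance.
import requests  # pip install requests

resp = requests.get("http://sonar.example.com:9000/api/server/version", timeout=5)
resp.raise_for_status()
print("SonarQube reachable, server version:", resp.text)
```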
I have a huge wiki in Redmine and I want to move it to Confluence. I found this project, https://github.com/vile/redmine2confluence-wiki, which would do the job, since Atlassian doesn't support this functionality. My problem is that I don't know how to run the script on the Confluence server, since I am using their cloud offering. Can anyone guide me?
The majority of the process can be run on your Redmine environment, provided it has internet access and has Python installed.
The last step, however, cannot be performed on Confluence Cloud.
Here's the part (at the end) which confirms this:
"This will result in a script called fix-author.sql to be run on the Confluence server."
Being on Cloud, you don't have access to the underlying Confluence server, or to the database for that matter.