Azure Application Insights Application Map doesn't show PostgreSQL dependency by default

In the Application Map feature of Azure Application Insights, it seems that the PostgreSQL database dependency is not shown by default, whereas Azure Storage queues and blobs are shown, and so are other HTTP dependencies. This doc by Microsoft doesn't explain why either.
Does anyone know why and when this feature will be available?

I believe it's because, for .NET, PostgreSQL is not an auto-collected dependency. You will have to wire it up manually, according to this article.
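For example, here is a rough sketch of wiring it up by hand: wrap the Npgsql call and report it with TrackDependency so it appears as a PostgreSQL node on the Application Map. The connection string, host name, and query below are placeholder assumptions, not your actual code.

    // Rough sketch: manually report a PostgreSQL call as a dependency so it shows
    // up on the Application Map. Names, query, and host are placeholders.
    using System;
    using System.Diagnostics;
    using Microsoft.ApplicationInsights;
    using Npgsql;

    public class OrdersRepository
    {
        private readonly TelemetryClient _telemetry = new TelemetryClient();

        public long CountOrders(string connectionString)
        {
            var startTime = DateTimeOffset.UtcNow;
            var timer = Stopwatch.StartNew();
            var success = false;
            try
            {
                using (var conn = new NpgsqlConnection(connectionString))
                using (var cmd = new NpgsqlCommand("SELECT COUNT(*) FROM orders", conn))
                {
                    conn.Open();
                    var count = (long)cmd.ExecuteScalar();
                    success = true;
                    return count;
                }
            }
            finally
            {
                timer.Stop();
                // Arguments: type, target, name, data, start time, duration, result code, success.
                // The "PostgreSQL" type is what groups this node on the Application Map.
                _telemetry.TrackDependency(
                    "PostgreSQL",
                    "my-postgres-host",
                    "SELECT orders",
                    "SELECT COUNT(*) FROM orders",
                    startTime,
                    timer.Elapsed,
                    success ? "0" : "1",
                    success);
            }
        }
    }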

Related

Confluence migration from cloud to server

We have migrated a space from a cloud instance to a server instance. In the cloud instance we were using "PlantUML Diagrams for Confluence", but on the server we are using the "Confluence PlantUML Plugin", so the macro names are different: the cloud macro is "plantumlcloud" while the server macro is "plantuml". After the migration, the pages show that "plantumlcloud" is not a valid macro. Kindly help me resolve this.
In general, migrating Confluence spaces to another instance that is not running the same plugins will break any functionality provided by those plugins.
If you migrate hosting platforms and have the equivalent version of the plugin for your new platform, created by the same developer, in most cases you will retain functionality; however, there will often be differences between versions.
These differences show up especially when downgrading, and moving from cloud to server is a very definite example of a downgrade, as cloud will always run the latest version.
In general I would recommend against a migration from cloud to server, and when it must be done, time should be spent to ensure compatibility with all plugins, and migration guides and plans should be made and followed.
As commented by @tgdavies, there seems to be an equivalent version of the plugin you were using on cloud, so hopefully that can resolve your issue.
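If the migrated pages still reference the old macro name once the server plugin is installed, you can rewrite the macro name in the page storage format. Below is a rough sketch for a single page via the Confluence Server REST API; the base URL, page ID, and credentials are placeholders, and it assumes the server macro accepts the same body and parameters as the cloud one, which you should verify on a test page first.

    // Rough sketch: rename the "plantumlcloud" macro to "plantuml" in one page's
    // storage format via the Confluence Server REST API. Base URL, page ID, and
    // credentials are placeholders.
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;
    using Newtonsoft.Json.Linq;

    class MacroRename
    {
        static async Task Main()
        {
            var baseUrl = "https://confluence.example.com";
            var pageId = "12345";
            var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic", Convert.ToBase64String(Encoding.UTF8.GetBytes("admin:secret")));

            // Fetch the current storage-format body and version number.
            var page = JObject.Parse(await client.GetStringAsync(
                $"{baseUrl}/rest/api/content/{pageId}?expand=body.storage,version"));
            var storage = (string)page["body"]["storage"]["value"];
            var version = (int)page["version"]["number"];

            // Swap the macro name in the storage XML.
            var updated = storage.Replace("ac:name=\"plantumlcloud\"", "ac:name=\"plantuml\"");

            // Write the page back with an incremented version number.
            var payload = new JObject
            {
                ["id"] = pageId,
                ["type"] = "page",
                ["title"] = (string)page["title"],
                ["version"] = new JObject { ["number"] = version + 1 },
                ["body"] = new JObject
                {
                    ["storage"] = new JObject { ["value"] = updated, ["representation"] = "storage" }
                }
            };
            var response = await client.PutAsync(
                $"{baseUrl}/rest/api/content/{pageId}",
                new StringContent(payload.ToString(), Encoding.UTF8, "application/json"));
            Console.WriteLine(response.StatusCode);
        }
    }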

WSO2 Identity Manager 5.6 : backup and restore procedures

Good morning,
I looked in the forum here and could not find the answer. If I overlooked it, I apologize...
I just joined an existing project team using WSO2 Identity Manager 5.6 and API Gateway.
I understand that WSO2 Identity Manager is made up of several components, among which are OpenLDAP (which uses a Berkeley DB backend) and a PostgreSQL database.
The current backup / restore procedures simply 'tar' the whole directory which contains all files related to WSO2 (including directories which contain database files), without stopping WSO2.
I'm a bit doubtful about this type of process for backing up. Is that the right thing to do?
If not, what would the right procedure be?
If I understand correctly, PostgreSQL is only used for WSO2 'internal state data' storage, so backing it up may not be useful. So I'm thinking that maybe an export of OpenLDAP (slapcat command) would be enough.
Backing up OpenLDAP is probably not enough. Depending on how the WSO2 components (IS + APIM) are installed, you may also have H2 DBs for the local registry, Solr indexes for the UI, Velocity templates for API deployments, and/or Synapse XMLs for the APIs.
I recommend that you first compare your installation directories and files with the vanilla zips, so you know how it is configured before changing your backup process.

Azure Data Factory - Copy Data from PostgreSQL DB on Ubuntu

I'm trying to copy data from a PostgreSQL DB on an Ubuntu box that needs IPs whitelisted in order to access it. With Azure Data Factory IPs changing all the time, and since I cannot install the self-hosted integration runtime because it's a Linux server, what other options are available to copy data from this PostgreSQL DB into an Azure SQL DB without having to worry about the IP addresses? Any suggestions or known solutions for this, please?
Based on the documentation, the ADF self-hosted integration runtime can't be installed on a Linux server; it can only be used on a Windows server.
By the way, this feature will also not be supported any time soon; please follow this feedback link.
The latest comment: "Currently we don't have any plan on this yet. Could you share us your reasons why do you want Linux?"
As a workaround, I suggest you take a look at Azure DMS (Database Migration Service). Please see more detail about it in this link and this video.

What's the anatomy of a Bluemix/Cloud Foundry node red project?

There's lots of documentation and a kludgy console to set up continuous deployment in Cloud Foundry, but I haven't found any documentation on what the artifacts inside a repository need to be.
I don't want to cut-and-paste flows from the Node-RED editor. If that's the only way, then IBM is not ready for prime time. I am also aware that most everything about my flows lives in the Cloudant nodered db.
A Node-RED application is more than the flows, though. What about the _design docs for my dbs?
I need device info and other stuff from the Watson console, Cloudant info and my flows packaged up into something deployable.
Has anyone scripted this?
What I mean by this is that I can clone a Docker project, an npm project, and all sorts of projects that implement a build->test->push mechanism. They employ a configuration script of some sort (e.g. package.json) and contain a bunch of source files for the actual application, test scripts, db scripts, whatever is necessary to deploy the application and its environment onto a host. I see lots of documentation on the toolchain and its features, but I'm not clear on whether it's possible to make use of it for my hosted Node-RED application, or whether I have to write the scripting mechanisms to offload flow info from the nodered db and query all my other dbs for their respective _design docs and all the other configuration information required to set up an IoT Node-RED application.
I forgot to mention, the copy/paste method loses information; you get no tab-level metadata. The only way to get all the flow stuff is to pull it from the nodered flow record.
Node-RED will release a new version in a couple of days that will introduce projects, so you'll be able to use GitHub and all the usual tools to handle your app: https://twitter.com/NodeRED/status/956934949784956931 and https://nodered.org/docs/user-guide/projects/
While it doesn't address your short-term needs, I think it's the best long-term solution. Hopefully that helps.
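In the meantime, if you need to script the export yourself, here is a rough sketch of pulling the flow document straight from the Cloudant database over its CouchDB-style HTTP API. The account, credentials, database name, and document ID are placeholders; check the Cloudant dashboard to see what your Node-RED app actually uses.

    // Rough sketch: export a Node-RED flow document from the Cloudant database
    // backing a Bluemix Node-RED app. Account, credentials, database, and document
    // ID are placeholders; look up the real ones in the Cloudant dashboard.
    using System;
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class FlowExport
    {
        static async Task Main()
        {
            var account = "my-cloudant-account";    // placeholder
            var database = "mynoderedapp";          // placeholder
            var documentId = "mynoderedapp%2Fflow"; // placeholder, URL-encoded document ID
            var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic", Convert.ToBase64String(Encoding.UTF8.GetBytes("user:password")));

            // Cloudant speaks the CouchDB HTTP API: GET /{db}/{docid} returns the raw JSON document.
            var json = await client.GetStringAsync(
                $"https://{account}.cloudant.com/{database}/{documentId}");

            // Keep the exported flow under source control next to package.json, _design docs, etc.
            File.WriteAllText("flow.json", json);
            Console.WriteLine($"Exported {json.Length} characters to flow.json");
        }
    }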

Azure deployment versions

I will try to keep it simple. I am using Windows Azure cloud to host our web services and databases, and these web services are accessible via the URL "https://server.mydomain.com".
Now we have made a few major changes to our model, and hence to the web services as a whole. This breaks the API interface for older users. We want to deploy the latest version at the URL "https://server.mydomain.com/v2" so that old users can still access the older version.
I searched around SO and other resources, but I couldn't find a definite answer on how to deploy the new version without messing up the old version.
Anything pointing in the right direction would be helpful.
In one of the projects I was working on, we built in a versioning scheme on top of our Web API. We used this tutorial to get started. I would recommend starting there.
Sorry for the generic answer; if you post some more specifics I will make some updates.
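For instance, here is a minimal sketch of path-based versioning with ASP.NET Web API 2 attribute routing. The controllers and routes below are hypothetical, not taken from the tutorial, and they assume attribute routing is enabled via config.MapHttpAttributeRoutes().

    // Minimal sketch: serve the old contract on the unversioned routes and the new,
    // breaking contract under a "/v2" prefix in the same service. Controller names
    // and payloads are hypothetical.
    using System.Web.Http;

    // Existing clients keep calling https://server.mydomain.com/orders unchanged.
    [RoutePrefix("orders")]
    public class OrdersV1Controller : ApiController
    {
        [HttpGet, Route("")]
        public IHttpActionResult GetAll()
        {
            return Ok(new[] { "order-1", "order-2" });
        }
    }

    // New clients opt in to the breaking changes at https://server.mydomain.com/v2/orders.
    [RoutePrefix("v2/orders")]
    public class OrdersV2Controller : ApiController
    {
        [HttpGet, Route("")]
        public IHttpActionResult GetAll()
        {
            return Ok(new[] { new { Id = "order-1", Status = "open" } });
        }
    }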
I'd suggest deploying a separate cloud service and using "v2.server.mydomain.com".