MongoDB Trigger for Azure Functions

Azure Functions don't have a trigger for MongoDB right out of the box. Is there a custom MongoDB trigger out there that would let me take advantage of Change Streams in MongoDB? Ideally, I would like to find a MongoDB trigger equivalent to the "Cosmos DB" trigger in Azure Functions, which takes advantage of the Change Feed. If that doesn't exist, is there some other way I can take advantage of MongoDB change streams with Azure Functions? We use Azure Functions extensively and need a way to incorporate them with MongoDB; specifically, we need a trigger for database changes in MongoDB. I've seen examples of using Azure Functions with MongoDB via an HTTP trigger, but we need a trigger that makes use of the MongoDB change stream.
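
One workaround, in the absence of a native trigger, is to run a small watcher process that opens a change stream and forwards each event to an HTTP-triggered Function. Below is a minimal sketch, assuming pymongo is available; FUNCTION_URL and MONGODB_URI are placeholder environment variables for the Function endpoint and the MongoDB connection string.

    # Minimal sketch: forward MongoDB change-stream events to an HTTP-triggered
    # Azure Function. FUNCTION_URL and MONGODB_URI are placeholders.
    import os
    import requests
    from bson.json_util import dumps   # serializes ObjectId, timestamps, etc.
    from pymongo import MongoClient

    FUNCTION_URL = os.environ["FUNCTION_URL"]        # hypothetical HTTP-triggered Function endpoint
    client = MongoClient(os.environ["MONGODB_URI"])  # MongoDB connection string
    collection = client["mydb"]["mycollection"]      # placeholder database/collection

    # Change streams require a replica set or a sharded cluster.
    with collection.watch(full_document="updateLookup") as stream:
        for change in stream:
            # Forward the raw change event; the Function decides what to do with it.
            requests.post(
                FUNCTION_URL,
                data=dumps(change),
                headers={"Content-Type": "application/json"},
            )

In practice you would also persist the stream's resume token so the watcher can pick up where it left off after a restart.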

Related

How to execute a Cosmos DB stored procedure with parameters via PowerShell

Looking for a PowerShell script/REST API to execute a Cosmos DB stored procedure with a partition key value.
You can use the REST API to execute Stored Procedures.
https://{databaseaccount}.documents.azure.com/dbs/{db-id}/colls/{coll-id}/sprocs/{sproc-name}
There is no native means of interacting with Cosmos DB's data plane via PowerShell. There are three options you can explore. One of them is calling REST directly from PowerShell, as indicated in the previous answer. Your other options:
You can use this PowerShell REST API sample from the .NET SDK GitHub repo. However, this requires authenticating via the REST API mentioned in the previous answer, which can be a bit cumbersome.
You can create your own custom PowerShell cmdlet in C#/.NET and then call that from your PowerShell script. This may take longer than the example above but is easier to write and maintain. It also gives you the ability to do whatever you were looking to do in a stored procedure and simply implement it in C# using the .NET SDK, which can also yield benefits in maintainability.
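
For the REST-from-script route, the call needs a master-key authorization header built from an HMAC-SHA256 signature over the verb, resource type, resource link, and date. Here is a rough sketch of the request shape in Python; the account, database, collection, and sproc names are placeholders, and the signature details should be verified against the Cosmos DB REST authentication docs before relying on this.

    # Rough sketch: execute a Cosmos DB stored procedure over REST.
    # ACCOUNT, DB_ID, COLL_ID, SPROC_NAME, MASTER_KEY are placeholders.
    import base64, hashlib, hmac, json, time, urllib.parse
    import requests

    ACCOUNT, DB_ID, COLL_ID, SPROC_NAME = "myaccount", "mydb", "mycoll", "mysproc"
    MASTER_KEY = "<primary key>"

    resource_link = f"dbs/{DB_ID}/colls/{COLL_ID}/sprocs/{SPROC_NAME}"
    date = time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime())

    # String to sign: verb, resource type, resource link, date (lowercased), blank line.
    text = f"post\nsprocs\n{resource_link}\n{date.lower()}\n\n"
    sig = base64.b64encode(
        hmac.new(base64.b64decode(MASTER_KEY), text.encode(), hashlib.sha256).digest()
    ).decode()
    auth = urllib.parse.quote(f"type=master&ver=1.0&sig={sig}", safe="")

    response = requests.post(
        f"https://{ACCOUNT}.documents.azure.com/{resource_link}",
        headers={
            "authorization": auth,
            "x-ms-date": date,
            "x-ms-version": "2018-12-31",
            # Partition key for the sproc's execution scope, as a JSON array.
            "x-ms-documentdb-partitionkey": json.dumps(["myPartitionKeyValue"]),
        },
        data=json.dumps(["param1", "param2"]),  # sproc parameters as a JSON array
    )
    print(response.status_code, response.text)

The response body contains whatever the stored procedure returns.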

Tool for Azure Cognitive Search similar to Logstash?

My company has lots of data (database: PostgreSQL), and now the requirement is to add a search feature on top of it; we have been asked to use Azure Cognitive Search.
I want to know how we can transform the data and send it to the Azure search engine.
There are a few cases we have to handle:
1. How will we transfer and upload the existing data into the search engine's index?
2. What is the easiest way to keep the search engine updated with new records in our production database? (For now we are using Java back-end code to transform the data and update the index, but it is very time-consuming.)
3. What is the best way to handle an update to the existing database structure? How will we update the indexer without doing lots of work by recreating the indexers every time?
4. Is there any way we can automatically update the index whenever there is a change in the database records?
You can either write code to push data from your PostgreSQL database into the Azure Search index via the /docs/index API, or you can configure an Azure Search indexer to do the data ingestion. The upside of configuring an indexer is that you can also have it monitor the data source on a schedule for updates and have those updates reflected in the search index automatically, for example via the SQL Integrated Change Tracking Policy.
PostgreSQL is a supported data source for Azure Search indexers, although that data source is in preview (not yet generally available).
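
For the push approach, the /docs/index call is wrapped by the official SDKs. A minimal sketch using the azure-search-documents Python package, with placeholder endpoint, key, index, table, and field names:

    # Minimal sketch: push rows read from PostgreSQL into an Azure Cognitive
    # Search index. Endpoint, key, index, table, and field names are placeholders.
    import psycopg2
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    search = SearchClient(
        endpoint="https://<service>.search.windows.net",
        index_name="products",                       # placeholder index
        credential=AzureKeyCredential("<admin key>"),
    )

    conn = psycopg2.connect("dbname=mydb user=me")   # placeholder connection string
    cur = conn.cursor()
    cur.execute("SELECT id, name, description FROM products")

    docs = [
        {"id": str(row[0]), "name": row[1], "description": row[2]}
        for row in cur.fetchall()
    ]

    # merge_or_upload inserts new documents and updates existing ones by key.
    result = search.merge_or_upload_documents(documents=docs)
    print(sum(1 for r in result if r.succeeded), "documents indexed")

Azure Cognitive Search requires a key field in the index; here id is assumed to be that key.
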
Besides the answer above, which involves coding on your end, there is a solution you can implement using the Azure Data Factory PostgreSQL connector with a custom query that tracks recent records, plus a pipeline activity that sinks to an Azure Blob Storage account.
Then, within Data Factory, you can link to a pipeline activity that copies to an Azure Cognitive Search index and add a trigger to the pipeline to run at specified times.
Once the staged data is in the storage account in delimitedText format, you can also use the built-in Azure Blob indexer with change tracking enabled.
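
As a rough illustration of that last step, the blob data source and indexer can also be created programmatically. The sketch below uses the azure-search-documents Python package with placeholder names and leaves out the delimitedText parsing parameters, which would be set on the indexer as well:

    # Sketch: point a blob data source at the container Data Factory writes to
    # and create an indexer that pulls it into an existing search index.
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents.indexes import SearchIndexerClient
    from azure.search.documents.indexes.models import (
        SearchIndexer,
        SearchIndexerDataContainer,
        SearchIndexerDataSourceConnection,
    )

    client = SearchIndexerClient(
        endpoint="https://<service>.search.windows.net",
        credential=AzureKeyCredential("<admin key>"),
    )

    # Data source for the staging container (placeholder names).
    data_source = SearchIndexerDataSourceConnection(
        name="staged-postgres-data",
        type="azureblob",
        connection_string="<storage connection string>",
        container=SearchIndexerDataContainer(name="staging"),
    )
    client.create_or_update_data_source_connection(data_source)

    # Indexer that feeds the staged blobs into the target index.
    indexer = SearchIndexer(
        name="staged-postgres-indexer",
        data_source_name="staged-postgres-data",
        target_index_name="products",
    )
    client.create_or_update_indexer(indexer)
    client.run_indexer("staged-postgres-indexer")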

Cannot create a trigger using console that depends on a crawler in AWS Glue

I am trying to create a trigger in Glue that would watch a set of my crawlers and then trigger an ETL job. Based on the documentation, this should be fairly straightforward.
However, when I log into the console and try to create a conditional trigger, the only option I get is to watch other jobs, not crawlers. This got me confused. Is this deprecated functionality?
For the time being I will use CloudFormation instead, but I am now curious whether that is also going to be unsupported in the future.
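
For reference, the Glue API (and CloudFormation) does accept crawler-based conditions even where the console does not expose them; a sketch with boto3, using placeholder crawler and job names:

    # Sketch: conditional Glue trigger that starts an ETL job after a crawler
    # succeeds. Crawler and job names are placeholders.
    import boto3

    glue = boto3.client("glue")

    glue.create_trigger(
        Name="start-etl-after-crawl",
        Type="CONDITIONAL",
        StartOnCreation=True,
        Predicate={
            "Logical": "AND",
            "Conditions": [
                {
                    "LogicalOperator": "EQUALS",
                    "CrawlerName": "my-crawler",   # placeholder crawler
                    "CrawlState": "SUCCEEDED",
                }
            ],
        },
        Actions=[{"JobName": "my-etl-job"}],       # placeholder ETL job
    )

The AWS::Glue::Trigger resource in CloudFormation takes the same predicate shape, so the CloudFormation route uses the same CrawlerName/CrawlState fields.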

How to import or sync data to Neo4j?

I have a REST API around a PostgreSQL database, the API was built using the Django REST Framework (python). I have access to the PostgreSQL database and the API, but I'm not allowed to modify the django/python code.
My first approach is to make, kind of, an HTTP POST request via a trigger every time a new record is created in PostgreSQL. I found this, but it seems like it's not the best way to do what I need.
On the Neo4j side, I was thinking of making a periodic HTTP GET request to the API from within a Cypher function, but no such thing exists.
You should use the APOC procedures for integrating with other DBs via JDBC. PostgreSQL is supported.
You can also use APOC procedures like apoc.periodic.schedule or apoc.periodic.countdown to periodically execute a Cypher query in the background.
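
Putting those two together, the periodic sync can be registered once from any client. A rough sketch using the Python neo4j driver, with placeholder connection details, JDBC URL, query, and labels (check the exact APOC signatures against your installed APOC version):

    # Sketch: register a background APOC job that pulls rows from PostgreSQL over
    # JDBC every 60 seconds and merges them into the graph. All names are placeholders.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    SYNC_STATEMENT = """
    CALL apoc.load.jdbc(
      'jdbc:postgresql://dbhost/mydb?user=me&password=secret',
      'SELECT id, name FROM person'
    ) YIELD row
    MERGE (p:Person {id: row.id})
    SET p.name = row.name
    """

    with driver.session() as session:
        # apoc.periodic.repeat(name, statement, rate-in-seconds) keeps re-running
        # the statement in the background on the Neo4j server.
        session.run(
            "CALL apoc.periodic.repeat($name, $statement, $rate)",
            name="sync-postgres-people",
            statement=SYNC_STATEMENT,
            rate=60,
        )

    driver.close()

Note that the PostgreSQL JDBC driver jar needs to be available to the Neo4j server (in its plugins directory) for apoc.load.jdbc to work.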

How do we restrict MongoDB's automated caching to a specific collection?

We've just started using MongoDB to replace many of the core SQL tables in our app. We'd like to use the same MongoDB instance to store data on performance and usage as a replacement for Google Analytics. However, we don't want this collection automatically using up system memory that could have gone to the primary collection.
Is there a way to control MongoDB's automated caching functionality?