DocumentDB: why can't triggers be fired from the Azure portal?

I have a pre-trigger on the replace action. I realized that, unlike SQL Server triggers, DocumentDB triggers won't fire when you update documents in the Azure portal. Am I missing a setting in the portal, or is this just how DocumentDB triggers work, i.e. they can only be fired from application code?
Thanks!

Your understanding is correct. There is no technical blocker to offering this feature from the portal; it is simply missing at the moment.
It's certainly a valid request to file at https://feedback.azure.com/forums/263030-documentdb

DocumentDB database triggers are not raised automatically by DML operations such as create and delete, as is common in other databases.
Instead, the trigger must be specified explicitly on each database operation you make in application code. The trigger type must also match the operation: a create operation can only take a trigger of type create, not replace.
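For example, with the Node.js 'documentdb' SDK the trigger has to be named in the request options of each individual call. A minimal sketch (the trigger name 'validateDocument' and the helper functions are illustrative, not from the question):

```javascript
'use strict';

// Build the self link the SDK expects for a collection: dbs/<db>/colls/<coll>
function collectionLink(dbName, collName) {
  return 'dbs/' + dbName + '/colls/' + collName;
}

// The pre-trigger only fires because it is named in the request options of
// this particular call. It must also be registered as a trigger of type
// 'replace' to be usable with replaceDocument.
// 'validateDocument' is a hypothetical trigger name.
function replaceWithTrigger(client, docLink, updatedDoc, callback) {
  client.replaceDocument(
    docLink,
    updatedDoc,
    { preTriggerInclude: ['validateDocument'] },
    callback);
}

module.exports = { collectionLink, replaceWithTrigger };
```

Any operation issued without `preTriggerInclude` (for example an edit made in the portal) simply bypasses the trigger, which is the behavior described above.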
In my case I use the Azure Functions DocumentDB output binding, so I don't perform the DML operations myself and could not attach the trigger. After wasting a lot of time debugging, I moved on to creating a stored procedure under the database collection and calling it from the Azure Function code, like below.
This works perfectly:
// Call a stored procedure from an Azure Function (Node.js, 'documentdb' SDK)
'use strict';
var DocumentClient = require('documentdb').DocumentClient;
var client = new DocumentClient(process.env.DB_HOST, { masterKey: process.env.DB_M_KEY });

// Self link of the sproc: dbs/<db>/colls/<collection>/sprocs/<sproc>
var sprocLink =
    'dbs/' + sprocDbName1 +
    '/colls/' + sprocCollName1 +
    '/sprocs/' + sprocName1;

// Parameters handed to the sproc
var sprocParams = {
    key1: 'val1',
    key2: 'val2'
};

client.executeStoredProcedure(
    sprocLink,
    sprocParams,
    function (err, results) {
        if (err) {
            context.log.error('err');
            context.log.error(err);
            return;
        }
        context.log.verbose('results');
        context.log.verbose(results);
    });
Note: supply values for DB_HOST (the URL ending with :443/), DB_M_KEY, sprocDbName1 (your database name), sprocCollName1 (your collection name) and sprocName1 (your stored procedure name).
Before doing the above, the stored procedure (sproc) must already have been created inside the DocumentDB collection.
Hope that helps.
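The registration itself can be done with the same SDK. A minimal sketch, assuming a placeholder sproc body and using 'mySproc' as a stand-in for sprocName1:

```javascript
'use strict';

// A minimal sproc definition: its id must match the name used in sprocLink,
// and serverScript runs inside DocumentDB, here simply echoing its input back.
var sprocDef = {
  id: 'mySproc',
  serverScript: function (params) {
    var response = getContext().getResponse();
    response.setBody(params);
  }
};

// Register the sproc under the collection (client is a DocumentClient).
function registerSproc(client, dbName, collName, callback) {
  var collLink = 'dbs/' + dbName + '/colls/' + collName;
  client.createStoredProcedure(collLink, sprocDef, callback);
}

module.exports = { sprocDef, registerSproc };
```

You can also paste the sproc body directly into the portal's Script Explorer instead of registering it from code.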

Related

Prisma: very slow nested writes when connecting to a remote PostgreSQL DB on AWS

I am writing a very basic query with Prisma:
async createContext(contextData: CreateContextDto): Promise<ContextRO> {
  const statements = contextData.body
    .split('\n')
    .filter((statement) => statement !== '')
    .map((statement) => ({ content: statement }));
  const context = await this.prisma.context.create({
    data: {
      contextName: contextData.name,
      userId: contextData.user,
      statements: {
        create: statements,
      },
    },
    include: {
      statements: true,
    },
  });
  return { context };
}
With local PostgreSQL the same query takes around 4 seconds. When connecting to PostgreSQL on AWS it goes up to 90 seconds.
Any ideas why it is taking so long?
Please find an example repo reproducing this issue, and the CLI output when running Prisma with 'DEBUG=*'.
P.S. If I run the same query with TypeORM against PostgreSQL on AWS, it takes 1-2 seconds, so it is not a problem with the deployment (check the "typeorm" branch to see the comparison).
You should use createMany instead of create. create uses a separate insert under the hood for every single nested write. If there are a lot of statements connected to one context record, you're making a lot of separate queries to the remote database, which is quite slow.
What you can do is:
Use create to create one context record, without the nested statement records.
Use a separate createMany for the statement records, manually specifying the contextId using the id you got from step 1.
You could also wrap queries 1 and 2 in a transaction, if you think that's appropriate.
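A sketch of those two steps in plain Node.js (the prisma client is passed in as a parameter so the sketch is self-contained; the Context/Statement model names follow the question, and the contextId foreign key field is an assumption):

```javascript
'use strict';

// Same statement-splitting as the question, kept as a pure helper.
function toStatements(body) {
  return body
    .split('\n')
    .filter(function (s) { return s !== ''; })
    .map(function (s) { return { content: s }; });
}

// Steps 1 and 2 wrapped in a transaction: one insert for the context record,
// then a single bulk insert for all statements via createMany.
async function createContext(prisma, contextData) {
  return prisma.$transaction(async function (tx) {
    var context = await tx.context.create({
      data: { contextName: contextData.name, userId: contextData.user }
    });
    await tx.statement.createMany({
      data: toStatements(contextData.body).map(function (s) {
        return { content: s.content, contextId: context.id };
      })
    });
    return { context: context };
  });
}

module.exports = { toStatements, createContext };
```

This replaces N separate INSERTs (one per nested statement) with two round trips, which is what removes the latency penalty against a remote database.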

Calling Snowflake Stored Procedure from Tableau

I have a Snowflake stored procedure which exports data to S3 based on dynamic input parameters. I am trying to set this up via Tableau, so that I can use Tableau parameters and call the Snowflake stored procedure from Tableau. Is this possible in any way?
While there's no straightforward solution, you could accomplish this task with a series of Snowflake facilities:
Create a task that monitors information_schema.query_history() every X minutes.
Have this task check for queries executed under a Tableau session.
If any of these queries have a parameter set by your Tableau dashboard that indicates the user wants to export these results, then do so.
You can check that a session was initiated by Tableau by searching the query history for ALTER SESSION SET QUERY_TAG = { "tableau-query-origins": { "query-category": "Data" } }.

Creating a Stored Procedure in DocumentDB via powershell

We can create an SP by getting the collection's self link, as in the statement below:
$procedure = $client.CreateStoredProcedureAsync($coll_list.SelfLink,$proc)
Can we create a stored procedure using a Uri instead, as in the statement below?
$procedure = $docClient.CreateStoredProcedureAsync($dbUri,$proc).Result
Also, how do we take care of the default indexing policy that gets created via PowerShell? Right now it creates a custom indexing policy; I want to create the default indexing policy.

JPA: How to call a stored procedure

I have a stored procedure in my project under sql/my_prod.sql; there I have my function delete_entity.
In my entity
@NamedNativeQuery(name = "delete_entity_prod",
    query = "{call /sql/delete_entity(:lineId)}")
and I call it:
Query query = entityManager.createNamedQuery("delete_entity_prod")
    .setParameter("lineId", lineId);
I followed this example: http://objectopia.com/2009/06/26/calling-stored-procedures-in-jpa/
but it does not execute the delete, and it does not report any error.
I haven't found clear information about this; am I missing something? Maybe I need to load my_prod.sql first? But how?
JPA 2.1 standardized stored procedure support, if you are able to use it, with examples here: http://en.wikibooks.org/wiki/Java_Persistence/Advanced_Topics#Stored_Procedures
This is actually the way you create a query:
Query query = entityManager.createNamedQuery("delete_entity_prod")
    .setParameter("lineId", lineId);
To call it you must execute:
query.executeUpdate();
Of course, the DB must already contain the procedure. So if you have it defined in your SQL file, have a look at Executing SQL Statements from a Text File (this is for MySQL, but other database systems use a similar approach to execute scripts).
There is no error shown because the query is not executed at any point; only an instance of Query is created. The query can be executed by calling executeUpdate:
query.executeUpdate();
Then the next problem will arise: writing stored procedures to a file is not enough, because procedures live in the database, not in files. So the next thing to do is to check that you have the correct script to create the stored procedure (maybe that is currently the content of sql/my_prod.sql) and then use it to create the procedure via a database client.
Not all JPA implementations support calling stored procedures, but I assume Hibernate is used under the hood, because that is also what the linked tutorial uses.
It can be the case that the current
{call /sql/delete_entity(:lineId)}
is the right syntax for calling a stored procedure in your database, but it looks rather suspicious because of the /sql/ part. If it turns out that this is incorrect syntax, then:
Consult the manual for the correct syntax
Test it via a client
Use that as the value of the query attribute in the NamedNativeQuery annotation.
All of that, for the MySQL + Hibernate combination, is explained for example here.

Row-Level Update Lock using System.Transactions

I have a MSSQL procedure with the following code in it:
SELECT Id, Role, JurisdictionType, JurisdictionKey
FROM dbo.SecurityAssignment WITH (UPDLOCK, ROWLOCK)
WHERE Id = @UserIdentity
I'm trying to move that same behavior into a component that uses OleDb connections, commands, and transactions to achieve the same result. (It's a security component that uses the SecurityAssignment table shown above. I want it to work whether that table is in MSSQL, Oracle, or Db2)
Given the above SQL, if I run a test using the following code
Thread backgroundThread = new Thread(
    delegate()
    {
        using (var transactionScope = new TransactionScope())
        {
            Subject.GetAssignmentsHavingUser(userIdentity);
            Thread.Sleep(5000);
            backgroundWork();
            transactionScope.Complete();
        }
    });
backgroundThread.Start();
Thread.Sleep(3000);
var foregroundResults = Subject.GetAssignmentsHavingUser(userIdentity);
Where
Subject.GetAssignmentsHavingUser
runs the SQL above and returns a collection of results, and backgroundWork is an Action that updates rows in the table, like this:
delegate
{
Subject.UpdateAssignment(newAssignment(user1, role1));
}
Then the foregroundResults returned by the test should reflect the changes made in the backgroundWork action.
That is, I retrieve a list of SecurityAssignment rows that have UPDLOCK, ROWLOCK applied by the SQL, and subsequent queries against those rows don't return until that update lock is released; thus the foregroundResults in the test include the updates made in the backgroundThread.
This all works fine.
Now, I want to do the same with database-agnostic SQL, using OleDb transactions and isolation levels to achieve the same result. And I can't, for the life of me, figure out how to do it. Is it even possible, or does this row-level locking only apply at the DB level?