Firestore trigger temporal information - google-cloud-firestore

Hi, so I understand Firestore write triggers run out of order with respect to time. Is it possible to get timestamp information on when a write occurred within the trigger function's execution context?

If you're using the Firebase CLI to deploy, every background function is delivered an EventContext object as its second parameter. You can use its timestamp property. Or, you can have the client write a timestamp into the document.
I assume something similar is available for the context object provided to code deployed by gcloud.
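For code deployed with gcloud, that assumption holds. In the Java Functions Framework, for instance, a background function receives a Context whose timestamp() returns when the event was produced, independent of when the function actually runs. A minimal sketch (the class name is arbitrary):

    import com.google.cloud.functions.Context;
    import com.google.cloud.functions.RawBackgroundFunction;
    import java.util.logging.Logger;

    public class FirestoreWriteLogger implements RawBackgroundFunction {
      private static final Logger logger =
          Logger.getLogger(FirestoreWriteLogger.class.getName());

      @Override
      public void accept(String json, Context context) {
        // timestamp() is the time the event was created (an RFC 3339 string),
        // not the time this function instance happens to execute.
        logger.info("Write occurred at: " + context.timestamp());
        logger.info("Event ID: " + context.eventId());
      }
    }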

Related

What if many Kafka streams update a domain model (a.k.a. materialized view)?

I have a materialized view that is updated from many streams. Each one enriches it partially. Order doesn't matter. Updates come in at unspecified times. Is the following algorithm a good approach:
An update comes in and I check what is stored in the materialized view via get(); this is the initial one, so I enrich and save.
A second one comes in and get() shows that a partial update exists, so I add the next piece of information
... and I continue in the same style
If there is a query/join, the stored object has a method isValid() that indicates whether the update is complete, which could be used in KafkaStreams#filter().
Could you please share whether this is a good plan? Is there any pattern in the Kafka Streams world that handles this case?
Please advise.
Your plan looks good; you have the general idea, but you'll have to use the lower-level Kafka Streams API: the Processor API.
There is a .transform operator that allows you to access a KeyValueStore; inside this operation's implementation you are free to decide whether your current aggregated value is valid or not,
and accordingly send it downstream or return null to wait for more information.
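A minimal sketch of that shape (the Update and View types, the enrich method, and viewSerde are placeholders, not from the question; default serdes are assumed configured):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Transformer;
    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.Stores;

    StreamsBuilder builder = new StreamsBuilder();

    // Local state store holding the partially-enriched view per key.
    builder.addStateStore(Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("view-store"),
        Serdes.String(), viewSerde));

    builder.<String, Update>stream("updates")
        .transform(() -> new Transformer<String, Update, KeyValue<String, View>>() {
            private KeyValueStore<String, View> store;

            @Override
            @SuppressWarnings("unchecked")
            public void init(ProcessorContext context) {
                store = (KeyValueStore<String, View>) context.getStateStore("view-store");
            }

            @Override
            public KeyValue<String, View> transform(String key, Update update) {
                View view = store.get(key);   // null on the first update
                if (view == null) view = new View();
                view.enrich(update);          // merge this partial update
                store.put(key, view);
                // Emit only once the aggregate is complete; returning null
                // forwards nothing downstream and waits for more updates.
                return view.isValid() ? KeyValue.pair(key, view) : null;
            }

            @Override
            public void close() {}
        }, "view-store")
        .to("materialized-view");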

CQRS - How to handle if a command requires data from db (query)

I am trying to wrap my head around the best way to approach this problem.
I am importing a file that contains a bunch of users, so I created a handler called
ImportUsersCommandHandler and my command is ImportUsersCommand, which has List<User> as one of its parameters.
In the handler, for each user that I need to import I have to make sure that the UserType is valid, and this is where the confusion comes in. I need to do a query against the database to get a list of all possible user types, and then for each user I am importing, I want to verify that the user type id in the import matches one that is in the db.
I have 3 options.
Create a query GetUserTypesQuery, get the result of it, and then pass it on to the ImportUsersCommand as a list and verify inside the command handler
Call the GetUserTypesQuery from the command itself and not pass it (command calling another query)
Do not create a GetUserTypesQuery at all and just run the query within the command handler (still a query, but no query/handler involved)
I feel like all these are dirty solutions and not the correct way to apply CQRS.
I agree option 1 sounds the best, but I would suggest adding a pre-handler to validate your input.
So ImportUsersCommandHandler deals with importing your data (and only that), and you add a handler that runs before it that validates (in your example, checks the user types and maybe other things) and bails out if validation does not pass. It queries the db, checks the user types, and does whatever it needs to if that fails; otherwise it just passes the command down to your business handler (ImportUsersCommandHandler).
I am used to using MediatR in .NET Core and this pattern works well (this is what we do), so sorry if it does not fit your environment/setup!
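Since MediatR is .NET-specific, here is the same pre-handler idea sketched in plain Java; every type below is hypothetical and only illustrates the wiring:

    import java.util.List;
    import java.util.Set;

    interface CommandHandler<C> {
        void handle(C command);
    }

    interface UserTypeRepository {
        Set<String> findAllTypeIds();   // plays the role of GetUserTypesQuery
    }

    class User {
        private final String userTypeId;
        User(String userTypeId) { this.userTypeId = userTypeId; }
        String getUserTypeId() { return userTypeId; }
    }

    class ImportUsersCommand {
        final List<User> users;
        ImportUsersCommand(List<User> users) { this.users = users; }
    }

    // Validation handler that runs before the business handler and
    // bails out if any imported user has an unknown user type.
    class UserTypeValidationHandler implements CommandHandler<ImportUsersCommand> {
        private final UserTypeRepository userTypes;
        private final CommandHandler<ImportUsersCommand> next; // ImportUsersCommandHandler

        UserTypeValidationHandler(UserTypeRepository userTypes,
                                  CommandHandler<ImportUsersCommand> next) {
            this.userTypes = userTypes;
            this.next = next;
        }

        @Override
        public void handle(ImportUsersCommand command) {
            Set<String> validTypeIds = userTypes.findAllTypeIds();
            for (User user : command.users) {
                if (!validTypeIds.contains(user.getUserTypeId())) {
                    throw new IllegalArgumentException(
                        "Unknown user type: " + user.getUserTypeId());
                }
            }
            next.handle(command); // validation passed; run the actual import
        }
    }

This keeps the query out of the business handler entirely: the validation handler owns the read, and ImportUsersCommandHandler only ever sees commands that have already passed.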

Triggering Kusto commands using 'ADX Command' activity in ADFv2 vs calling WebAPI on it

In ADFv2 (Azure Data Factory V2), if we need to trigger a command on an ADX (Azure Data Explorer) cluster, we have two choices:
Use the 'Azure Data Explorer Command' activity
Use the POST method provided in the 'Web Activity' activity
Having figured out that both methods work, I would say that from a development/maintenance point of view the first method seems more slick and systematic, especially because it is an out-of-the-box feature to support Kusto in ADFv2. Is there any scenario where the Web Activity method would be preferable or more performant? I am trying to figure out whether it's alright to simply use the ADX Command activity all the time to run any Kusto command from ADFv2, instead of ever using the Web Activity.
It is indeed recommended to use the "Azure Data Explorer Command" activity:
That activity is more convenient, as you don't have to construct the HTTP request yourself.
It takes care of a few things for you, such as:
In case you are running an async command, it will poll the Operations table until your async command is completed.
Logging.
Error handling.
In addition, you should take into consideration that the result format will be different between the two, and that each activity has its own limits in terms of response size and timeout.
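To make the comparison concrete, this is roughly what the Web Activity route makes you assemble yourself: a hand-built POST against the cluster's management endpoint (sketched here in Java; the cluster URL, database, command, and token acquisition are all placeholders):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class KustoCommandViaRest {
        public static void main(String[] args) throws Exception {
            String cluster = "https://mycluster.westeurope.kusto.windows.net";
            String token = System.getenv("AAD_TOKEN"); // AAD bearer token, acquired elsewhere
            String body = "{\"db\": \"MyDatabase\", "
                        + "\"csl\": \".set-or-append MyTable <| SourceTable\"}";

            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(cluster + "/v1/rest/mgmt"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            // For an async command you'd get back an operation id and would
            // have to poll `.show operations <id>` yourself; the ADX Command
            // activity does that polling for you.
            System.out.println(response.body());
        }
    }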

How to create a Logic App Custom Connector polling trigger?

I've been able to create a Logic App Custom Connector with a webhook trigger by following the docs, however I can't find any documentation on creating a polling trigger. I was only able to find Jeff Hollan's trigger examples, but the polling trigger doesn't seem compatible with the custom connector.
I tried setting up a polling trigger by performing the following steps:
Create an Azure Function with a GET operation expecting a date-time query parameter (a sketch of such a function follows this list)
Have the function return a set of entities that have changed since the last poll
Configure the custom connector to call the Azure Function with the date time query parameter
Configure the response body of the custom connector
Try different things in the 'Trigger configuration' section, which is the part that confuses me most.
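For reference, steps 1 and 2 looked roughly like this, sketched as a Java Azure Function; the function name, the parameter name since, and the lookup are all illustrative:

    import com.microsoft.azure.functions.ExecutionContext;
    import com.microsoft.azure.functions.HttpMethod;
    import com.microsoft.azure.functions.HttpRequestMessage;
    import com.microsoft.azure.functions.HttpResponseMessage;
    import com.microsoft.azure.functions.HttpStatus;
    import com.microsoft.azure.functions.annotation.AuthorizationLevel;
    import com.microsoft.azure.functions.annotation.FunctionName;
    import com.microsoft.azure.functions.annotation.HttpTrigger;
    import java.util.Optional;

    public class CompletedTasksFunction {
        @FunctionName("completedTasks")
        public HttpResponseMessage run(
                @HttpTrigger(name = "req", methods = {HttpMethod.GET},
                             authLevel = AuthorizationLevel.FUNCTION)
                HttpRequestMessage<Optional<String>> request,
                final ExecutionContext context) {
            // The connector passes this date-time parameter on each poll so the
            // function can return only entities changed since the last poll.
            String since = request.getQueryParameters()
                .getOrDefault("since", "1970-01-01T00:00:00Z");
            String tasksJson = findTasksCompletedAfter(since); // placeholder lookup
            return request.createResponseBuilder(HttpStatus.OK)
                .header("Content-Type", "application/json")
                .body(tasksJson)
                .build();
        }

        private String findTasksCompletedAfter(String since) {
            // Placeholder: query your store for tasks completed after `since`.
            return "[]";
        }
    }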
Whatever I tried, the trigger always fails with a 404 in the trigger outputs, similar to what I initially had with the webhook trigger type.
There are a few things that confuse me:
1. Path of trigger query seems screwed up
It looks like the custom connector UI screws up the path to the trigger. I noticed this when I downloaded the OpenAPI file. The path to my trigger API should be /api/trigger/tasks/completed, but in the OpenAPI file it read /trigger/api/trigger/tasks/completed. It appears the custom connector adds /trigger in front of the path. I sometimes noticed it doing this multiple times, giving me something similar to /trigger/trigger/trigger/api/trigger/tasks/completed. I fixed this in the OpenAPI file and re-imported it into the custom connector.
2. Trigger Configuration section
I don't understand what to do in the Trigger Configuration section of a polling trigger.
I assume the query parameter to monitor state change is some parameter I define myself, e.g. a timestamp, to determine what entities to return.
As the 'select value to pass to selected query param' I would expect I could pick a timestamp from the trigger response. It looks like I can only pick values from a collection, not scalar values from the response as I would expect. How does that work?
Is 'trigger hint' just some information or does it actually control something?

Tarantool shiny dashboard

I want to use Tarantool database for logging user activity.
Are there any out of the box solutions to create web dashboard with nice charts based on the collected data?
A long time ago, using a very old version of Tarantool, I created a draft of tarbon, a time-series database with an interface identical to carbon-cache.
Since that time the protocol has changed, but the general idea is still the same: use spaces to store the data, a compact data organization with the right indexes to access spaces as time-series rows, and Lua for preparing the resulting JSON.
That solution performed very well (on both reads and writes), but that old version lacked disk storage, and without disk I was quite limited in metrics capacity.
Tarantool has an embedded Lua language, so you could generate JSON from your data and use any charting library. For example, D3.js has a method to load JSON directly from a URL:
d3.json(url[, callback])
Creates a request for the JSON file at the specified url with the mime type "application/json". If a callback is specified, the request is immediately issued with the GET method, and the callback will be invoked asynchronously when the file is loaded or the request fails; the callback is invoked with two arguments: the error, if any, and the parsed JSON. The parsed JSON is undefined if an error occurs. If no callback is specified, the returned request can be issued using xhr.get or similar, and handled using xhr.on.
You could also look at c3.js, a simple facade for D3.