How can I emit custom metrics from a Cadence workflow or activity?

Cadence emits a bunch of metrics using tally. Is it possible to emit my own metrics using the Cadence SDK?
The Go SDK has cadence.GetMetricsScope().Counter(counterName).Inc(), but it doesn't seem to work when I just call it. Am I missing some required configuration?

You can add your own metrics using both the Go and Java SDKs. The following examples show how to do it with the Go SDK; the Java SDK isn't much different.
In workflow code:
cadence.GetMetricsScope().Counter(counterName).Inc(1)
In activity code:
cadence.GetActivityMetricsScope().Counter(counterName).Inc(1)
However, that alone is not sufficient to actually emit the metrics. The reason is that the Cadence client uses tally.NoopScope by default, which does nothing, as the name implies. Therefore, you also need to set up the MetricsScope as part of the WorkerOptions, as in the following example:
workerOptions := cadence.WorkerOptions{
	MetricsScope: myTallyScope,
}
worker := cadence.NewWorker(service, domain, taskList, workerOptions)
You can create myTallyScope by following the examples in the tally documentation for Go and Java.
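For illustration, here is a minimal sketch of building myTallyScope with tally's Prometheus reporter; the prefix, reporter choice, and flush interval here are assumptions for the example, and any tally reporter would work the same way:

import (
	"io"
	"time"

	"github.com/uber-go/tally"
	promreporter "github.com/uber-go/tally/prometheus"
)

// newMetricsScope builds a tally root scope that reports to Prometheus.
// Pass the returned scope as MetricsScope in the WorkerOptions above and
// close the closer on shutdown to flush pending metrics.
func newMetricsScope() (tally.Scope, io.Closer) {
	reporter := promreporter.NewReporter(promreporter.Options{})
	scope, closer := tally.NewRootScope(tally.ScopeOptions{
		Prefix:         "my_worker", // illustrative prefix
		CachedReporter: reporter,    // the Prometheus reporter implements tally's cached interface
	}, time.Second) // flush interval
	return scope, closer
}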


Transfer client configuration between environments

For securing a frontend application, I created a new Keycloak client with a custom configuration:
mapper which includes "client roles"
scope configuration
client-specific roles (composite and non-composite roles)
This setup works fine in the local development setup. Now we need to transfer this configuration to the other environments like develop/preproduction/production stage.
As far as I understand, Keycloak offers the following exports:
Complete realm
Specific client
It looks as if both approaches have some major drawbacks: either I would need to overwrite the complete realm (which I definitely don't want to do in production), or I can import only the basic client configuration, which is missing all the roles.
And as soon as we, for example, add more roles later on, we would need to re-configure all stages manually.
Is there some good practice for dealing with this? Does Keycloak offer some kind of sync between stages?
This is a hard question to answer; it comes down to comparing API calls against UI configuration.
Disadvantages of API calls: I prefer API calls, but it takes time to figure out the API functions, the call order matters, and some properties missing on a parent have to be set in detail on a child. The API URL structure is complicated (e.g. id/property/id/property) and requires deeper knowledge of Keycloak.
Advantages of API calls: finer tuning, fast, easy to organize top to bottom (e.g. configure the client, then auth resources, auth scopes, policies and permissions in the other environment), and you can transfer 100% of the configuration.
Disadvantages of UI configuration: not flexible; mismatched ids cause errors; you can't update or add partial data (e.g. a client resource exported without its scopes has to be fixed with a separate API call); you can't move 100% of the configuration from source to target; and it invites human error.
Advantages of UI configuration: easy and quick, even though manual.
My preference is the API approach: Postman (single API calls, or running a collection for a sequence of calls) at the local and develop stages, where you can easily unit test and check HTTP statuses, and curl calls from a Bash shell for the higher stages. If you check the state of the target first, you can handle scenario-based transfers (e.g. skip configuration that is already set). A rough sketch of this approach follows below.
One more tip: if you open the debug console (F12) in Chrome or Firefox, you can see the API calls in the Network tab. That saves time figuring out the API methods and the payload/response JSON.
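To make the API approach concrete, here is a rough Go sketch of copying a client between environments via the standard Keycloak Admin REST endpoints. The hosts, realm, client id and credentials are placeholders (older Keycloak versions also need an /auth prefix on the base URL), and error handling is elided for brevity:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
)

// token fetches an admin access token via the admin-cli client.
func token(base, user, pass string) (string, error) {
	form := url.Values{
		"grant_type": {"password"},
		"client_id":  {"admin-cli"},
		"username":   {user},
		"password":   {pass},
	}
	resp, err := http.Post(base+"/realms/master/protocol/openid-connect/token",
		"application/x-www-form-urlencoded", strings.NewReader(form.Encode()))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		AccessToken string `json:"access_token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.AccessToken, nil
}

func main() {
	src := "https://keycloak-dev.example.com"  // source environment (placeholder)
	dst := "https://keycloak-prod.example.com" // target environment (placeholder)
	srcTok, _ := token(src, "admin", "...")    // error handling elided for brevity
	dstTok, _ := token(dst, "admin", "...")

	// Read the client representation from the source realm.
	req, _ := http.NewRequest("GET", src+"/admin/realms/myrealm/clients?clientId=my-frontend", nil)
	req.Header.Set("Authorization", "Bearer "+srcTok)
	resp, _ := http.DefaultClient.Do(req)
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	// The query returns a JSON array of client representations.
	var clients []map[string]interface{}
	json.Unmarshal(body, &clients)
	delete(clients[0], "id") // the target generates its own id; stale ids cause import errors
	payload, _ := json.Marshal(clients[0])

	// Create the client in the target realm. Roles, scopes and mappers
	// still need their own follow-up calls, as described above.
	req, _ = http.NewRequest("POST", dst+"/admin/realms/myrealm/clients", bytes.NewReader(payload))
	req.Header.Set("Authorization", "Bearer "+dstTok)
	req.Header.Set("Content-Type", "application/json")
	resp, _ = http.DefaultClient.Do(req)
	fmt.Println(resp.Status)
}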

How to send custom dimensions, medium, source or referer with an event via Measurement Protocol V2?

With v1 of the Measurement Protocol, you could use these parameters to add custom dimensions or change the medium, source or referrer for a page view:
https://ssl.google-analytics.com/collect?v=1&tid=UA-xxxxxxxx&cid=[custom-id]&t=pageview&dp=[Url of pageview]&dh=[hostname of pageview]&cm=[new-medium]&cs=[new-source]&dr=[new-referer]&cd1=[custom-dimension-1]&cd2=[custom-dimension-2]
How is it done in measurement protocol v2?
I couldn't find any documentation about the page-view event in v2 (for example, it's not mentioned here:
https://developers.google.com/analytics/devguides/collection/protocol/ga4/reference/events), and even the event builder (https://ga-dev-tools.web.app/ga4/event-builder/) doesn't support a simple page view.
So, all I got so far is this:
$data = '
{
    "client_id": "'.[custom-id].'",
    "events": [
        {
            "name": "page_view",
            "params": {
                "page_location": "'.[Url of pageview].'"
            }
        }
    ]
}
';
So, what are the possible parameters for a page-view event?
Ok, a few things here right away that you should know if you're playing with MP:
Measurement Protocol is a poor name. It implies there's more than one protocol for data gathering. There's none; there is only one protocol for tracking.
MP2 is still largely MP1. Google tries to position GA4 as a new product, but it's just our good old GA UA with a simplified backend and an overengineered front-end that tries to deliver the level of quality Site Catalyst/Omniture/Adobe Analytics have been delivering for a decade. MP is largely the same: dr, cm, cs and a lot of other fields are still there. cds aren't there anymore because they're replaced with eps and ups, but more about that a bit later.
GA4 uses this big marketing claim that the new analytics is so wonderfully event-based, unlike the old one. When I dug into why they keep claiming it everywhere, I realized that the only difference is that pageviews are now events. Not much difference really. But yes, a pageview is just an event named page_view. We'll talk about it a bit more later.
Custom dimensions are no more. Now they're called event properties and user properties. The same thing really, Google just tries to make it less obvious that there are no more session level custom dimensions. Or product-level CDs. Though the product level is seemingly on their roadmap.
Make sure you're using the correct measurement id. They made it a lot harder to find it in GA4. It's no longer just the property id visible in the property list, unfortunately.
GA's real-time reports don't include all dimensions, especially if those dimensions are involved in advanced metrics/dimensions calculations. Do not use real time reports for inspecting the content of your events. It's not meant for debugging. It's a vanity report. Still helpful to check the volume of events when you're sending a bunch and expect to see them in GA. Google even has a warning here:
Like the DebugView report, the Realtime report performs limited attribution analysis to ensure responsive reporting. We recommend that you refer to the Acquisition reports for the most accurate attribution information.
Finally, what I often do instead of reading the still-unfinished-and-not-really-helpful documentation on MP2 is either use a library like this.
Or, since point 1 above holds, I would just implement stand-in tracking in my test GTM container, watch in the network debugger what it sends where and how, and simply reimplement it on my side exactly the way GTM does it. No magic involved. Here is what my GTM tag would look like:
With a trigger on any click or any page load. After all is done, I publish it. Then I inject this GTM snippet into a local site, or into my test site, or wherever else you want to test it, and trigger the tag you need to mimic with MP.
I use this wonderful extension to show all events that fire and their details right in my console.
Now this is how the above tag shows up on my test site through the extension; it's pretty useful.
How do I know that page_referrer is used as dr instead of ep in GTM? Here is the list of the fields that will never be seen as ep. But Google doesn't care enough to map them properly to what these fields are called in MP, so you either have to test, or know, or google it elsewhere.
Finally, here is what the network request looks like:
I published the tag to prod (I keep a test site in prod), so you can go and look at it. Or just find a site that uses GA4 and inspect its network requests. How does Google know that this is a pageview? By the event name: en=page_view.
Of course, you do the same with medium and source. Judging from the documentation I've linked to above, the medium and source look like campaign_source and campaign_medium in GTM. GTM maps them accordingly to cs and cm fields. And that's how you know these are the correct mp fields. Give GA time to process these and check on them in a few days.
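To tie this back to the original question, here is a sketch in Go of the mp/collect payload for a page view. page_location and page_referrer are standard GA4 parameter names; sending campaign_source and campaign_medium as event params is only inferred from the GTM mapping above, so treat those two names as unverified assumptions:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type event struct {
	Name   string                 `json:"name"`
	Params map[string]interface{} `json:"params"`
}

type payload struct {
	ClientID string  `json:"client_id"`
	Events   []event `json:"events"`
}

func main() {
	// measurement_id and api_secret come from the GA4 data stream settings.
	endpoint := "https://www.google-analytics.com/mp/collect" +
		"?measurement_id=G-XXXXXXXX&api_secret=YOUR_SECRET"

	p := payload{
		ClientID: "custom-id",
		Events: []event{{
			Name: "page_view",
			Params: map[string]interface{}{
				"page_location": "https://example.com/some-page",
				"page_referrer": "https://referrer.example.com/", // the dr equivalent
				// Inferred from the GTM cs/cm mapping above; unverified:
				"campaign_source": "new-source",
				"campaign_medium": "new-medium",
				"my_dimension":    "some value", // a custom event property (ep)
			},
		}},
	}

	body, _ := json.Marshal(p) // error handling elided for brevity
	resp, _ := http.Post(endpoint, "application/json", bytes.NewReader(body))
	// mp/collect returns 2xx even for malformed payloads; POST the same body
	// to /debug/mp/collect to get validation messages instead.
	fmt.Println(resp.Status)
}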
This is applicable to the enhanced ecommerce hits too; they just typically have more variables and data structures in them.
Finally, if you want to simulate batch events, you can just make a few tags fire in rapid succession and GTM will neatly pack them into one network request if they fit. You can then work out how the packing is done using the same methods as above and simulate it.
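A batched request is just more entries in the events array of the same payload (GA4's MP caps it at 25 events per request); reusing the types from the sketch above:

// Two events in one mp/collect request.
p := payload{
	ClientID: "custom-id",
	Events: []event{
		{Name: "page_view", Params: map[string]interface{}{
			"page_location": "https://example.com/a",
		}},
		{Name: "select_content", Params: map[string]interface{}{
			"content_type": "product", // a recommended GA4 event, for illustration
		}},
	},
}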

Hitting the Trello API for Cucumber testing experience, and stuck trying to reset the state of the test Trello board

I have written the following Cucumber feature/scenario targeting the Trello REST API:
Feature: Change existing board details
  In order to keep my boards up to date
  As a Trello member
  I want to be able to edit board details

  Scenario: Update the board name
    Given I have an existing board with id 59f8c6debdf037ee708c302f
    When I request to update the name
    Then the name of the board should change
However, after I run my tests once, any subsequent run fails because the board name is never reset to its initial state, and I assert here that it matches the initial name (after a test run it will be the updated name instead):
[Given("I have an existing board with id (.*)")]
public void GivenIHaveAnExistingBoard(string id)
{
request = restHelper.GetBoard(id);
if (request.StatusCode == HttpStatusCode.OK)
{
board = JsonConvert.DeserializeObject<Board>(request.Content);
testBoardId = board.Id;
Assert.That(board.Name, Is.EqualTo(initialBoardName));
}
else
{
Console.Write(String.Format("Request unsuccessful with status code {0}", request.StatusCode));
}
}
I started writing another Given step that sets the board name to the initial name before running the test, but I realized I am then using the REST query I'm testing (a PUT request to the endpoint to rename a board) in my setup. This seems... wrong. Is there a better way to ensure my test Trello board returns to a default state before running tests?
Cucumber is a tool for driving the development of software, not a tool for testing existing software, even less a tool for testing someone else's software, and least of all a tool for testing an existing API.
You can use Cucumber in a couple of ways to test your own APIs. You can take an abstract approach and talk in general business language about the services available, or you can write scenarios that talk about request codes and return values (I greatly prefer the first approach; if you are doing the second, you might as well write unit tests).
A general approach when interacting with APIs that are not under your control is to record responses, so you avoid the problem of your actions changing the data and the resulting responses. A tool like VCR (https://github.com/vcr/vcr) is worth reading about.
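VCR itself is a Ruby gem, but the record-once, replay-later idea is portable. Here is a minimal sketch of that shape in Go using only the standard library (the recording path is hypothetical):

import (
	"io"
	"net/http"
	"net/http/httptest"
	"os"
)

// replayServer serves a previously recorded Trello response from disk,
// so tests never hit (or mutate) the real board.
func replayServer(recording string) *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			f, err := os.Open(recording)
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			defer f.Close()
			w.Header().Set("Content-Type", "application/json")
			io.Copy(w, f) // replay the canned board JSON
		}))
}

// In a test, point the REST helper's base URL at srv.URL instead of
// https://api.trello.com:
//   srv := replayServer("testdata/board.json") // hypothetical recording path
//   defer srv.Close()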
Another approach is to not do any write testing at all and make all your tests read-only.
Any automated test of an external API that actually calls the API runs the risk of being very flaky because:
the API may not be available
the API may get really annoyed with you repeatedly calling it
any write to the API may have cascading effects on subsequent reads
a write may not be undoable
So generally with Cukes you would be working with your own API, or writing features for your own software that uses an external API.

Loopback.io and CouchDB connector

I am trying to explore the opportunity to build a connector for CouchDB for Loopback.io.
I know CouchDB has a REST interface, but for some reason, when I put the base URL of my local Couch server into a REST connector in LoopBack, I get an error back about some headers missing in the request from Couch.
Since some useful functions could be added to exploit views and so on, I am exploring creating a loopback-connector-couchdb.
So the easy question is: which methods does a connector need to implement to map exactly to the standard API endpoints that Loopback.io offers for a model?
Basic example:
POST /models (with payload body) --> all good on the "create" function of the connector
DELETE /models/{id} --> I get an error saying that the destroyAll function is NOT implemented (correct), but the destroy function IS implemented instead...
What is the difference between HEAD /models/{id} and GET /models/{id}/exists in terms of the functions called?
I try to verify the existence of the model created (successfully) in CouchDB via its ID, and when I use GET /models/{id}/exists, instead of the connector's "exists" function being called, a function called "Count" is called.
It is as if some but not all functions are mapped to the connector (note: I am not using the DataAccessObject property of the connector, as that seems to be meant for additional methods, so to speak... and one of the methods does work!)
...I am confused!
Thanks for any guidance. I am trying to follow the documentation below, but I can't easily map the standard API endpoints to the minimum set of connector functions (see point 2 above, for instance):
Building a connector - Loopback.io documentation
I would suggest playing with the API explorer to figure out your endpoints.
Create a sample LoopBack project via slc loopback
Create some models via slc loopback:model
Start the app via slc run
Browse to localhost:3000/explorer
In there you can see all the endpoints that are automatically generated by LoopBack. For example, if you click the GET endpoint for a model, it will show the request as GET /api/<modelname>.

How to get PostSharp to log aspect caught exceptions to a rolling text file vs. the event log

I have googled much and found nothing, so please bear with a possibly silly question. I have my own logging of events and stats, which goes to the Event Log. I would like to log long and verbose error information to a 30-day rolling text file. How do I do this?
To log with PostSharp you can either use the included Diagnostics Pattern Library or create your own custom aspect.
The diagnostics library can log the names of methods being invoked together with parameter/return values. The actual logging messages are sent to one of the supported logging back-ends (Console, System.Diagnostics.Trace, Log4Net, NLog, EnterpriseLibrary).
You can follow the PostSharp docs to add logging with the chosen back-end first, and then you would need to set up that back-end to write messages to a rolling text file. The configuration depends on the specific back-end, there are examples for log4net, NLog, etc.
If you want to write more custom information to the log, it is better to create your own logging aspect; you can start with the example in the PostSharp docs. In the aspect, prepare your message and then just pass it on to the logging library, which will handle writing to the rolling text file. That way you get the powerful configuration options provided by the library and don't need to re-implement the low-level details.