How to initiate a workflow in Activiti using the REST API

I have created an Activiti process using service tasks etc. with Eclipse and deployed the .bar to Activiti running on Tomcat. It was deployed successfully, and I can start my process from Activiti Explorer without any issue. The deployed process name is "My process" and it is listed under Processes -> Deployed Process Definitions in Activiti Explorer as well. In the diagram it has the name "myProcess:1:1473".
But I have two questions.
I need to start my process using a REST call (i.e. without using Activiti Explorer). What is the URL for that? I tried several variations of http://localhost:8080/activiti-rest/service/runtime/process-instances, but none of them worked.
When I restart Tomcat, my process is no longer shown in Activiti Explorer. Each time I restart I have to redeploy the process .bar file. Is that the normal behavior of the engine?

For your first question, check this guide for further details:
POST runtime/process-instances should be your endpoint (be sure to make a POST request with application/json as your content type). A complete example call is shown after the templates below.
The payload, on the other hand, should follow one of these three templates:
Request body (start by process definition id):
{
  "processDefinitionId": "oneTaskProcess:1:158",
  "businessKey": "myBusinessKey",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
Request body (start by process definition key):
{
  "processDefinitionKey": "oneTaskProcess",
  "businessKey": "myBusinessKey",
  "tenantId": "tenant1",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
Request body (start by message):
{
  "message": "newOrderMessage",
  "businessKey": "myBusinessKey",
  "tenantId": "tenant1",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
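Putting it together, a minimal example call could look like the sketch below. The host/port, the "myProcess" key from your question, and the default kermit/kermit demo credentials of activiti-rest are assumptions; adjust them to your installation. It runs in a browser console or Node 18+:
// start a new instance of the process with key "myProcess" via the Activiti REST API
const payload = {
    processDefinitionKey: "myProcess",   // the key from the BPMN file, not the display name "My process"
    businessKey: "myBusinessKey"
};

fetch("http://localhost:8080/activiti-rest/service/runtime/process-instances", {
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        // basic auth for the demo user kermit/kermit (an assumption; use a user that exists in your setup)
        "Authorization": "Basic " + btoa("kermit:kermit")
    },
    body: JSON.stringify(payload)
})
    .then(res => res.json())
    .then(data => console.log("Started process instance", data.id))
    .catch(err => console.error("Failed to start process", err));
A 201 response with the new process instance in the body means the call worked; a 401 points at wrong or missing credentials.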
As for your second issue, be aware that the out-of-the-box (OOTB) configuration may wipe and recreate the database on every restart; you need to locate that configuration and override it with values of your choice. Check this section for further info; the databaseSchemaUpdate parameter might be exactly what you are looking for.

Related

Deployed Keycloak Script Mapper does not show up in the GUI

I'm using the docker image of Keycloak 10.0.2. I want Keycloak to supply access_tokens that can be used by Hasura. Hasura requires custom claims like this:
{
  "sub": "1234567890",
  "name": "John Doe",
  "admin": true,
  "iat": 1516239022,
  "https://hasura.io/jwt/claims": {
    "x-hasura-allowed-roles": ["editor", "user", "mod"],
    "x-hasura-default-role": "user",
    "x-hasura-user-id": "1234567890",
    "x-hasura-org-id": "123",
    "x-hasura-custom": "custom-value"
  }
}
Following the documentation, and using a script I found online (see this gist), I created a Script Mapper jar containing the following script (copied verbatim from the gist) in hasura-mapper.js:
var roles = [];
for each (var role in user.getRoleMappings()) roles.push(role.getName());
token.setOtherClaims("https://hasura.io/jwt/claims", {
    "x-hasura-user-id": user.getId(),
    "x-hasura-allowed-roles": Java.to(roles, "java.lang.String[]"),
    "x-hasura-default-role": "user",
});
and the following keycloak-scripts.json in META-INF/:
{
  "mappers": [
    {
      "name": "Hasura",
      "fileName": "hasura-mapper.js",
      "description": "Create Hasura Namespaces and roles"
    }
  ]
}
The Keycloak debug log indicates that it found the jar and successfully deployed it.
But what's the next step? I can't find the deployed mapper anywhere in the GUI, so how do I activate it? I tried creating a Protocol Mapper, but the option 'Script Mapper' is not available. And Scopes -> Evaluate generates a standard access token.
How do I activate my deployed protocol mapper?
Of course, after you put up a question on SO you keep searching, and I finally found the answer in this JIRA issue. The scripts feature has been a preview feature since (I think) version 8.
So when starting Keycloak you need to provide:
-Dkeycloak.profile.feature.scripts=enabled
and after that your Script Mapper will show up in the Mapper Type dropdown on the Create Mapper screen, and everything works.
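Since the question uses the Keycloak Docker image: arguments placed after the image name are, as far as I know, forwarded to standalone.sh by the jboss/keycloak entrypoint, so something along these lines should enable the feature (verify against the docs of your image version):
docker run -p 8080:8080 jboss/keycloak -Dkeycloak.profile.feature.scripts=enabled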

How to implement a service worker in SFCC (Demandware)

I was wondering if anyone here has experience with implementing a service worker in SFCC/Demandware.
I generate a service worker with Webpack using sw-precache-webpack-plugin.
The problem is that a service worker should be served from the root of the domain, i.e. site.com/sw.js.
JS files normally end up in the static/ folder.
Does anyone have an idea how to serve this JS file from the root of the site in Demandware/SFCC?
Unfortunately, registering a service worker with a scope higher up the path hierarchy than the service worker file itself does not work (as stated on MDN):
The service worker will only catch requests from clients under the service worker's scope.
The max scope for a service worker is the location of the worker.
(Source: https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
Solution
Here is a suggestion for a working approach to serving "/sw.js" in Demandware (Salesforce):
Create a new controller (or pipeline), e.g. "ServiceWorker-GetFile"; the response should be the file content, which can be read from whatever source you wish (a sketch of such a controller follows the alias example below):
Content asset (dw.content.ContentMgr.getContent());
Library file (dw.content.ContentMgr.getContent() or directly reading a file with dw.io.File / dw.io.FileReader);
or even a Site Preference (although I wouldn't recommend it);
Create an entry in Business Manager / Merchant Tools / SEO / Aliases to route "/sw.js" to "ServiceWorker-GetFile", i.e. use something along the lines of:
{
  ...
  "your-host": [
    ...,
    {
      "if-site-path": "/sw.js",
      "pipeline": "ServiceWorker-GetFile"
    }
  ]
}
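Here is a hedged sketch of what such a controller could look like, using the classic SiteGenesis-style controller syntax. The guard module path, the controller file name ServiceWorker.js, and the file location inside the content library are assumptions; adapt them to your cartridge, or translate the idea into an SFRA route:
'use strict';

// ServiceWorker.js - streams a generated sw.js from the content library as the response body

var guard = require('~/cartridge/scripts/guard'); // assumed SiteGenesis-style guard helper
var File = require('dw/io/File');
var FileReader = require('dw/io/FileReader');

function getFile() {
    // assumed location of the generated worker file inside the content library
    var file = new File(File.LIBRARIES + '/MyLibrary/default/ServiceWorker/sw.js');

    if (!file.exists()) {
        response.setStatus(404);
        return;
    }

    response.setContentType('application/javascript');

    var reader = new FileReader(file, 'UTF-8');
    var writer = response.getWriter();
    var line;
    while ((line = reader.readLine()) !== null) {
        writer.println(line);
    }
    reader.close();
}

// exposed as ServiceWorker-GetFile, matching the alias entry above
exports.GetFile = guard.ensure(['get'], getFile);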
This may seem like unnecessary overhead, but it was the only way I could find for serving files with a root path in the URI.
Serving other root files as well
By expanding the controller (renaming it to, say, "Content-GetFile" and adding GET/POST parameters like "name" and/or "source"), this can conveniently be used for other files as well ("/manifest.json", "/.well-known/assetlinks.json", etc.). In the next example of Business Manager / ... / Aliases, Content-GetFile accepts two parameters: "name" (a file name in the content library or a content asset ID) and "source" ("file" or "asset"); a sketch of the parameter handling follows below:
...
{
  "if-site-path": "/sw.js",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "/ServiceWorker/sw.js",
    "source": "file"
  }
},
{
  "if-site-path": "/manifest.json",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "MANIFEST_JSON",
    "source": "asset"
  }
}
Note that your code should handle the base paths of the resources appropriately (e.g. "/ServiceWorker/sw.js" from the example above is not self-explanatory; you should know whether it is a path in a content library or a path relative to "cartridges//static/default/js/").
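For completeness, a sketch of how Content-GetFile might read those two parameters. dw.web.HttpParameterMap and the "body" attribute of a content asset are standard API, but the overall structure and the fallback behaviour are assumptions:
function getFile() {
    var params = request.httpParameterMap;
    var name = params.name.stringValue;     // e.g. "/ServiceWorker/sw.js" or "MANIFEST_JSON"
    var source = params.source.stringValue; // "file" or "asset"

    if (source === 'asset') {
        var ContentMgr = require('dw/content/ContentMgr');
        var content = ContentMgr.getContent(name);
        if (content && content.custom && content.custom.body) {
            // content type chosen for the manifest example; pick it per resource
            response.setContentType('application/json');
            response.getWriter().print(content.custom.body.markup);
            return;
        }
    } else if (source === 'file') {
        // read and stream the library file as in the previous sketch
    }

    response.setStatus(404);
}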
Dynamic content
Since the suggested approach uses a controller, you can dynamically process the content before serving it to the user (e.g. if you need to add/remove the "/v12435145145/" part from DMW links). The sky is the limit. :)
I'm currently messing around with service workers on DW as well.
In my case I have added the script directly inside a footer.isml file like this:
<script>
    // init service worker
    if ('serviceWorker' in navigator) {
        window.addEventListener('load', () => {
            navigator.serviceWorker
                .register("${URLUtils.staticURL('/lib/sw/sw.js')}")
                .then(registration => {
                    console.log(
                        `Service Worker registered! Scope: ${registration.scope}`
                    );
                })
                .catch(err => {
                    console.log(`Service Worker registration failed: ${err}`);
                });
        });
    }
</script>
This works for me; at least I can see the "Service Worker registered!" message.
I also had some issues due to the SSL certificate: my development environment doesn't have a proper SSL certificate but uses HTTPS routes, so Chrome was complaining about it. I needed to run Chrome from the terminal using this command:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/foo --ignore-certificate-errors --unsafely-treat-insecure-origin-as-secure=[YOUR DOMAIN]
Unfortunately, I'm not able to get any line of code inside that service worker file to work. I even tried Safari, since it has a Service Workers option in the Develop menu, but it's not showing any service worker running.
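For a quick sanity check it can help to start from a minimal sw.js that only logs its lifecycle events (plain Service Worker API, nothing SFCC-specific). If even this never shows up under Application > Service Workers in Chrome DevTools, the problem is with the scope or with how the file is served rather than with the worker code itself:
// sw.js - minimal worker used only to verify registration and scope
self.addEventListener('install', function (event) {
    console.log('[sw] install');
    self.skipWaiting();
});

self.addEventListener('activate', function (event) {
    console.log('[sw] activate');
});

self.addEventListener('fetch', function (event) {
    // pass-through; just proves that requests inside the scope hit the worker
    console.log('[sw] fetch', event.request.url);
});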
I hope this helps.

Write logs to Application Insights from local Service Fabric

I am trying to integrate the Azure Application Insights service into a Service Fabric app for logging and instrumentation. I am running the fabric code on my local VM. I followed the document here exactly [scenario 2]. Other resources on learn.microsoft.com also seem to describe the same steps [e.g. https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostics-event-aggregation-eventflow].
For some reason, I don't see any event entries in Application Insights. There are no errors in the code when I do this:
ServiceEventSource.Current.ProcessedCountMetric("synced",sw.ElapsedMilliseconds, crc.DataTable.Rows.Count);
eventflowconfig.json contents
{
  "inputs": [
    {
      "type": "EventSource",
      "sources": [
        { "providerName": "Microsoft-ServiceFabric-Services" },
        { "providerName": "Microsoft-ServiceFabric-Actors" },
        { "providerName": "mystatefulservice" }
      ]
    }
  ],
  "filters": [
    {
      "type": "drop",
      "include": "Level == Verbose"
    }
  ],
  "outputs": [
    {
      "type": "ApplicationInsights",
      // (replace the following value with your AI resource's instrumentation key)
      "instrumentationKey": "XXXXXXXXXXXXXXXXXXXXXX",
      "filters": [
        {
          "type": "metadata",
          "metadata": "metric",
          "include": "ProviderName == mystatefulservice && EventName == ProcessedCountMetric",
          "operationProperty": "operation",
          "elapsedMilliSecondsProperty": "elapsedMilliSeconds",
          "recordCountProperty": "recordCount"
        }
      ]
    }
  ],
  "schemaVersion": "2016-08-11"
}
In ServiceEventSource.cs
[Event(ProcessedCountMetricEventId, Level = EventLevel.Informational)]
public void ProcessedCountMetric(string operation, long elapsedMilliSeconds, int recordCount)
{
    if (IsEnabled())
        WriteEvent(ProcessedCountMetricEventId, operation, elapsedMilliSeconds, recordCount);
}
EDIT
Adding the diagnostics pipeline code from Program.cs in the fabric stateful service:
using (var diagnosticsPipeline =
    ServiceFabricDiagnosticPipelineFactory.CreatePipeline(
        $"{ServiceFabricGlobalConstants.AppName}-mystatefulservice-DiagnosticsPipeline"))
{
    ServiceRuntime.RegisterServiceAsync("mystatefulserviceType",
        context => new mystatefulservice(context)).GetAwaiter().GetResult();

    ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id,
        typeof(mystatefulservice).Name);

    // Prevents this host process from terminating so services keep running.
    Thread.Sleep(Timeout.Infinite);
}
EventSource is a tricky technology; I have been working with it for a while and always have problems. The configuration looks good, but it is very hard to investigate without access to the environment, so I will make some suggestions.
There are a few catches you must be aware of:
If you are listening to ETW events from a different process, your process must be running under a user that is a member of the 'Performance Log Users' group. Check which identity your service is running as and whether it is part of Performance Log Users, which has permission to create the event sessions needed to listen for these events.
Ensure the events are being emitted correctly and that you can see them in the Diagnostic Events window; if they are not showing up there, the problem is in the provider.
For testing purposes, comment out the if (IsEnabled()) check. It is an internal check that determines whether your events should be emitted. I have had situations where it was always false and events were never emitted; it probably caches the result for a while, and the docs are not clear on how it should work.
Whenever possible, use the EventSource from the NuGet package instead of the framework one; the framework version is full of bugs and lacks fixes that are available in the NuGet version.
Application Insights is not real-time; sometimes it can take a few minutes to process your events. I would recommend outputting the events to a console or file first and checking that the pipeline is listening correctly, and only afterwards enabling the Application Insights output (see the config sketch below).
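For that console check, one option (an assumption on my side: it relies on the standard EventFlow StdOutput output, which requires the Microsoft.Diagnostics.EventFlow.Outputs.StdOutput package) is to temporarily add a StdOutput entry next to the ApplicationInsights output in eventflowconfig.json, so every captured event is also written to the service console:
"outputs": [
    {
        "type": "StdOutput"
    },
    {
        "type": "ApplicationInsights",
        "instrumentationKey": "XXXXXXXXXXXXXXXXXXXXXX"
    }
]
If events show up on stdout but not in Application Insights, the problem is on the output side; if nothing shows up at all, the EventSource input and filters are the place to look.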
The link you provided is quite outdated, and there's actually a much better way to log application error and exception info to Application Insights. For example, the above won't help with tracking the call hierarchy of an incoming request across multiple services.
Have a look at the Microsoft Application Insights Service Fabric NuGet packages. They work great for:
Sending error and exception info
Populating the application map with all your services and their dependencies (including database)
Reporting on app performance metrics
Tracing service call dependencies end-to-end
Integrating with native as well as non-native SF applications

Extended application descriptor file and invalid datasource

I have two applications:
hrportalcore: The core application with BaseController, ...
hrportalrequestleave: A sample application extended from the hrportalcore application
The hrportalcore app has the namespace de.example.core, and the dataSources are also maintained there (sap.app.dataSources in manifest.json). The data source is:
[...]
"HRPOJavaLeave": {
"uri": "<path>",
"type": "OData",
"settings": {
"annotations": [],
"odataVersion": "2.0",
"localUri": ""
}
}
[...]
The data sources can be used without any problems in the extended application, but the console shows the following errors:
It says the data source contains errors, but it can still be used (strange?).
Another thing is that the Component-preload.js file is sometimes loaded from the wrong place. The application works without problems, but it is, as said, loaded once from a wrong location.
The manifest.json of hrportalrequestleave looks like this in the extension part (sap.ui5.extends):
[...]
"extends": {
"component": "de.example.core",
"extensions": {}
},
[...]
The parent is correctly defined in the neo-app.json as /parent, pointing to hrportalcore.
jQuery.sap.declare("de.example.request.leave.Component");

// use the load function for getting the optimized preload file if present
if (!jQuery.sap.isDeclared("de.example.core.Component")) {
    sap.ui.component.load({
        name: "de.example.core",
        // Use the below URL to run the extended application when the SAP-delivered application is deployed on cloud
        url: jQuery.sap.getModulePath("de.example.request.leave") + "/parent"
        // we use a URL relative to our own component
        // extension application is deployed with customer namespace
    });
}

this.de.example.core.Component.extend("de.example.request.leave.Component", {
    metadata: {
        manifest: "json"
    }
});
This all happens in the Fiori Launchpad on HANA Cloud Platform.
Solution
manifest.json of hrportalcore: Always use the up-to-date version which you have deployed on your HCP in the applicationVersion property:
{
  "_version": "1.2.0",
  "sap.app": {
    "_version": "1.2.0",
    "applicationVersion": {
      "version": "1.6.2"
    },
    ...
manifest.json of hrportalrequestleave (Extension project): As above, always use the up-to-date version which you have deployed on your HCP in the applicationVersion property.
DataSource not found?!
If you have an extension project (like hrportalrequestleave < hrportalcore), the manifest.json files of both applications are merged, similar to jQuery.extend(...). All properties are merged except the sap.app tree, because it really describes the application itself and cannot be copied from the parent.
Now, when you use a dataSource from the parent, it will not be found. That means you must define the sap.app.dataSources in the extension project's manifest.json as well.
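For example, a copy of the definition from the core application, placed in the extension project's manifest.json (the <path> placeholder is kept from the question), could look like this:
"sap.app": {
    ...
    "dataSources": {
        "HRPOJavaLeave": {
            "uri": "<path>",
            "type": "OData",
            "settings": {
                "annotations": [],
                "odataVersion": "2.0",
                "localUri": ""
            }
        }
    }
}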
The error in the log
"Error in application dependency de.example.core.Component: No descriptor was found"
suggests that the manifest.json contains a dependency to "de.example.core.Component" instead of "de.example.core". According to your code snippets, the "extends" dependency is correct. Do you have other dependencies?
The AppIndex in the backend calculates the transitive closure of dependencies, and if it can't find an installation with that ID, the above error is created and logged on the client side.
If your manifest.json looks okay but might have contained a wrong dependency in the past, then it might be necessary to re-run the AppIndex (or schedule it for a regular run).
The fact that the app works despite the config error is caused by the code you've shown above. It explicitly loads the de.example.core component from an explicitly calculated URL. But before that step, the framework already tries to load it based on the information in the manifest.json, where the information about the explicit URL is missing.
BTW: the code that calculates the URL suggests that even after fixing the manifest.json, the AppIndex might not find the component, as it seems to be stored in a sub-package of the de.example.request.leave app. I'm not sure if the AppIndex can handle this (it can handle nested components if they are listed as embedded components in the top-level manifest.json, but I'm not sure if it recognizes such embedded components in the dependencies section). As a result, it might try to load the embedded component even though it has already been loaded together with the enclosing component.

BAM 2.5.0 - Monitoring realtime traffic - Error when creating a new execution plan "Imported streams cannot be empty"

As a new user of BAM with CEP integration, I'm currently following the "Monitoring Realtime Traffic" sample from the WSO2 documentation and am stuck at the step of creating the execution plan. Link to doc
The doc says:
4. Under Import Stream select org.wso2.sample.rt.traffic for Import Stream, and enter traffic for As.
Unfortunately, when I click "Import" nothing happens (the doc shows we should get //imported from org.wso2.sample.rt.traffic:1.0.0).
When I try to add the execution plan I get the error "Imported streams cannot be empty".
Am I making a mistake?
Regards
Vpl
I was able to work around this UI problem by creating the event stream definition directly in the registry. For that, I created the following resource:
/_system/governance/StreamDefinitions/org.wso2.sample.rt.traffic/1.0.0
containing
{
  "streamId": "org.wso2.sample.rt.traffic:1.0.0",
  "name": "org.wso2.sample.rt.traffic",
  "version": "1.0.0",
  "payloadData": [
    {
      "name": "entry",
      "type": "STRING"
    }
  ]
}
with the media type application/json.
Then, when creating the execution plan, I could import the event stream and continue the use case / tutorial.
Regards