How to implement a service worker in SFCC (Demandware)

I was wondering if anyone here has experience with implementing a service worker in SFCC/Demandware.
I generate a service worker with Webpack using the sw-precache-webpack-plugin.
The problem is that a service worker must be served from the root of the domain, i.e. site.com/sw.js.
JS files normally end up in the static/ folder.
Does anyone have an idea how to serve this JS file from the root of the site in Demandware/SFCC?

Unfortunately, registering a service worker under a scope that is a higher path than the service worker file itself does not work (as stated on MDN):
The service worker will only catch requests from clients under the service worker's scope.
The max scope for a service worker is the location of the worker.
(Source: https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
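To illustrate the constraint, this registration only succeeds if the worker script is actually served from the root (standard Service Worker API; the static path in the comment is just an example of a non-root location):
// A worker served from e.g. /on/demandware.static/.../js/sw.js can never control '/'.
// A root-served worker can claim the whole origin:
navigator.serviceWorker.register('/sw.js', { scope: '/' });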
Solution
Here is a suggestion for a working approach for serving "/sw.js" in Demandware (Salesforce):
Create a new controller (or pipeline), e.g. "ServiceWorker-GetFile"; the response should be the file content, which can be read from whatever source you wish (see the sketch after this list):
Content asset (dw.content.ContentMgr.getContent());
Library file (dw.content.ContentMgr.getContent() or directly reading a file with dw.io.File / dw.io.FileReader);
even a Site Preference (although I wouldn't recommend it);
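A minimal sketch of such a controller in SFRA style, assuming the generated sw.js was uploaded as a library file; the controller name matches the alias example below, while the library path is illustrative and must be adjusted to your library ID:
'use strict';

// cartridge/controllers/ServiceWorker.js (hypothetical)
var server = require('server');

server.get('GetFile', function (req, res, next) {
    var File = require('dw/io/File');
    var FileReader = require('dw/io/FileReader');

    // Assumed location of the generated worker inside the content library.
    var file = new File(File.LIBRARIES + '/ServiceWorker/sw.js');
    var lines = [];

    if (file.exists()) {
        var reader = new FileReader(file, 'UTF-8');
        var line;
        while ((line = reader.readLine()) !== null) {
            lines.push(line);
        }
        reader.close();
    }

    res.setContentType('application/javascript');
    res.print(lines.join('\n'));
    next();
});

module.exports = server.exports();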
Create an entry in Business Manager / Merchant Tools / SEO / Aliases to route "/sw.js" to "ServiceWorker-GetFile", i.e. use something along the lines of:
{
  ...
  "your-host": [
    ...,
    {
      "if-site-path": "/sw.js",
      "pipeline": "ServiceWorker-GetFile"
    }
  ]
}
This may seem like unnecessary overhead, but it was the only way I could find for serving files with a root path in the URI.
Serving other root files as well
By expanding the controller (renaming it to, say, "Content-GetFile" and adding GET/POST parameters like "name" and/or "source"), this could conveniently be used for other files as well ("/manifest.json", "/.well-known/assetlinks.json", etc.). In the next example of Business Manager / ... / Aliases, let Content-GetFile accept two parameters: "name" (a file name in the content library or a content asset ID) and "source" ("file" or "asset"):
...
{
  "if-site-path": "/sw.js",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "/ServiceWorker/sw.js",
    "source": "file"
  }
},
{
  "if-site-path": "/manifest.json",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "MANIFEST_JSON",
    "source": "asset"
  }
}
Note that your code should handle the base paths of the resources appropriately (e.g. "/ServiceWorker/sw.js" from the above example does not say much by itself; you should know whether this is a path in a content library or a path relative to "cartridges/<cartridge>/static/default/js/").
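A hedged sketch of how the "source" branching could look inside such a Content-GetFile controller; ContentMgr.getContent() and the custom.body markup access follow common content-asset usage, and readLibraryFile is a hypothetical helper that applies the base-path handling described above:
// Inside the Content-GetFile route handler (illustrative)
function readBody(name, source) {
    if (source === 'asset') {
        var ContentMgr = require('dw/content/ContentMgr');
        var content = ContentMgr.getContent(name); // e.g. "MANIFEST_JSON"
        // Assumes the asset stores its payload in the standard custom.body attribute.
        return (content && content.custom && content.custom.body)
            ? content.custom.body.markup
            : '';
    }
    // source === 'file': resolve "name" against the base path you settled on
    // (content library vs. cartridge static folder) and read it as sketched earlier.
    return readLibraryFile(name); // hypothetical helper
}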
Dynamic content
Since the suggested approach uses a controller, you can dynamically process the content before serving it to the user (e.g. if you need to add/remove the "/v12435145145/" part from Demandware links). The sky is the limit. :)

I'm currently messing around with the service workers on DW as well.
In my case I have added the script directly inside a footer.isml file like this:
<script>
  // init service worker
  if ('serviceWorker' in navigator) {
    window.addEventListener('load', () => {
      navigator.serviceWorker
        .register("${URLUtils.staticURL('/lib/sw/sw.js')}")
        .then(registration => {
          console.log(`Service Worker registered! Scope: ${registration.scope}`);
        })
        .catch(err => {
          console.log(`Service Worker registration failed: ${err}`);
        });
    });
  }
</script>
This works for me, well at least I can see the Service Worker registered message.
I also had some issues with the SSL certificate: my development environment doesn't have a proper SSL certificate but uses HTTPS routes, so Chrome was complaining about it. I needed to run Chrome from the terminal using this command:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/foo --ignore-certificate-errors --unsafely-treat-insecure-origin-as-secure=[YOUR DOMAIN]
Unfortunately I'm not able to get any line of code inside that service worker file to work. I even tried Safari, since it has a Service Workers option in the Develop menu, but it's not showing any service worker running.
I hope this helps you.

Related

Deploying custom Keycloak theme with Operator (v15.1.1 & v16.0.0)

I have a theme with a size >1 MB (which precludes the ConfigMap solution provided as an answer to this question).
This theme has been packaged according to the Server Development Guide; its folder structure is
META-INF/keycloak-themes.json
themes/[themeName]/login/login.ftl
themes/[themeName]/login/login-reset-password.ftl
themes/[themeName]/login/template.ftl
themes/[themeName]/login/template.html
themes/[themeName]/login/theme.properties
themes/[themeName]/login/messages/messages_de.properties
themes/[themeName]/login/messages/messages_en.properties
themes/[themeName]/login/resources/[...]
The contents of keycloak-themes.json are
{
  "themes": [{
    "name": "[themeName]",
    "types": [ "login" ]
  }]
}
where [themeName] is my theme name.
Keycloak is running with 3 instances; its resource spec includes:
extensions:
- [URL-to-jar]
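For context, a minimal sketch of the corresponding Keycloak CR; the apiVersion and field names follow the legacy v1alpha1 operator CRD, and the metadata name is illustrative:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: example-keycloak
spec:
  instances: 3
  extensions:
    - [URL-to-jar]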
Deployment was successful according to the logs of each pod - each log contains a message containing
Deployed "[jar-name].jar" (runtime-name : "[jar-name].jar")
However, in the admin console, I cannot select the theme from the extension as the login theme. Creating a new realm via CRD with a preconfigured login theme via the spec entry
loginTheme: [themeName]
also does not work: in the admin console, the selected entry for the login theme is empty.
I may be missing something basic, but it seems like this ought to work according to this answer if I am not mistaken.
As is so often the case, an uncaught typo was the source of the error.
The directory structure must not be
META-INF/keycloak-themes.json
themes/[theme-name]/[theme-role]/theme.properties
[...]
but instead
META-INF/keycloak-themes.json
theme/[theme-name]/[theme-role]/theme.properties
[...]
Given a correct structure, keycloak-operator can successfully deploy and load custom-themes as jar-extensions.
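For reference, packaging the corrected structure with the JDK's jar tool (the jar name is illustrative) looks like this, run from the directory containing META-INF and theme:
jar cf my-theme.jar META-INF theme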

Caching external downloads with Workbox

I'm working on a GatsbyJS site using gatsby-plugin-offline, which is available at example.com, and would like to make the PDF files that I link to on example.com but that live at download.example.com/example.pdf available offline. Is that possible?
Yes, it's possible. I'm not 100% familiar with gatsby-plugin-offline's configuration, but it looks like https://www.gatsbyjs.org/packages/gatsby-plugin-offline/#available-options describes a process for appending additional service worker logic to the end of its default configuration:
plugins: [{
  resolve: `gatsby-plugin-offline`,
  options: {
    appendScript: require.resolve(`src/custom-sw-code.js`),
  },
}]
Then in src/custom-sw-code.js:
workbox.routing.registerRoute(
  ({url}) => url.pathname.endsWith('.pdf'),
  // Use StaleWhileRevalidate, CacheFirst, etc. as desired.
  new workbox.strategies.StaleWhileRevalidate({cacheName: 'pdfs'})
);
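Since the PDFs live on a separate subdomain, you may also want to pin the route to that origin and allow opaque (no-CORS) responses to be cached; a variant of the route above, with the subdomain and the Workbox v4-style plugin name as assumptions:
workbox.routing.registerRoute(
  ({url}) => url.origin === 'https://download.example.com'
    && url.pathname.endsWith('.pdf'),
  new workbox.strategies.StaleWhileRevalidate({
    cacheName: 'pdfs',
    plugins: [
      // Opaque cross-origin responses report status 0; allow caching them too.
      new workbox.cacheableResponse.Plugin({statuses: [0, 200]}),
    ],
  })
);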

write log to Application insights from local service fabric

I am trying to integrate the Azure App Insights service into a Service Fabric app for logging and instrumentation. I am running the fabric code on my local VM. I exactly followed the document here [scenario 2]. Other resources on learn.microsoft.com also seem to indicate the same steps (e.g. https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostics-event-aggregation-eventflow).
For some reason, I don’t see any event entries in App insights. No errors in code when I do this:
ServiceEventSource.Current.ProcessedCountMetric("synced",sw.ElapsedMilliseconds, crc.DataTable.Rows.Count);
eventflowconfig.json contents
{
  "inputs": [
    {
      "type": "EventSource",
      "sources": [
        { "providerName": "Microsoft-ServiceFabric-Services" },
        { "providerName": "Microsoft-ServiceFabric-Actors" },
        { "providerName": "mystatefulservice" }
      ]
    }
  ],
  "filters": [
    {
      "type": "drop",
      "include": "Level == Verbose"
    }
  ],
  "outputs": [
    {
      "type": "ApplicationInsights",
      // (replace the following value with your AI resource's instrumentation key)
      "instrumentationKey": "XXXXXXXXXXXXXXXXXXXXXX",
      "filters": [
        {
          "type": "metadata",
          "metadata": "metric",
          "include": "ProviderName == mystatefulservice && EventName == ProcessedCountMetric",
          "operationProperty": "operation",
          "elapsedMilliSecondsProperty": "elapsedMilliSeconds",
          "recordCountProperty": "recordCount"
        }
      ]
    }
  ],
  "schemaVersion": "2016-08-11"
}
In ServiceEventSource.cs
[Event(ProcessedCountMetricEventId, Level = EventLevel.Informational)]
public void ProcessedCountMetric(string operation, long elapsedMilliSeconds, int recordCount)
{
    if (IsEnabled())
        WriteEvent(ProcessedCountMetricEventId, operation, elapsedMilliSeconds, recordCount);
}
EDIT
Adding diagnostics pipeline code from Program.cs in the Service Fabric stateful service:
using (var diagnosticsPipeline =
    ServiceFabricDiagnosticPipelineFactory.CreatePipeline(
        $"{ServiceFabricGlobalConstants.AppName}-mystatefulservice-DiagnosticsPipeline"))
{
    ServiceRuntime.RegisterServiceAsync("mystatefulserviceType",
        context => new mystatefulservice(context)).GetAwaiter().GetResult();

    ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id,
        typeof(mystatefulservice).Name);

    // Prevents this host process from terminating so services keep running.
    Thread.Sleep(Timeout.Infinite);
}
EventSource is a tricky technology; I have been working with it for a while and still run into problems. The configuration looks good, and it is very hard to investigate without access to the environments, so I will make my suggestions.
There are a few catches you must be aware of:
If you are listening to ETW events from a different process, your process must be running as a user that is a member of the 'Performance Log Users' group, which has permission to create event sessions to listen for these events. Check which identity your service is running under and whether it is part of that group.
Ensure the events are being emitted correctly and that you can see them in the Diagnostic Events window; if they are not showing there, there is a problem in the provider.
For testing purposes, comment out the line if (IsEnabled()). It is an internal check to validate whether your events should be emitted. I have had situations where it was always false and skipped emitting the events; it probably caches the result for a while, and the docs are not clear on how it should work.
Whenever possible, use the EventSource from the NuGet package (Microsoft.Diagnostics.Tracing.EventSource) instead of the framework one; the framework version is full of bugs and lacks fixes found in the NuGet version.
Application Insights is not real-time; sometimes it takes a few minutes to process your events. I would recommend outputting the events to a console or file and checking that the pipeline is listening correctly; afterwards, enable the AppInsights output.
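For that last check, a minimal sketch: temporarily replace the outputs section of eventflowconfig.json with EventFlow's console output (this assumes the Microsoft.Diagnostics.EventFlow.Outputs.StdOutput package is referenced):
"outputs": [
  { "type": "StdOutput" }
]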
The link you provided is quite outdated and there's actually a much better way to log application error and exception info to Application Insights. For example, the above won't help with tracking the call hierarchy of an incoming request between multiple services.
Have a look at the Microsoft App Insights Service Fabric NuGet packages. They work great for:
Sending error and exception info
Populating the application map with all your services and their dependencies (including the database)
Reporting on app performance metrics
Tracing service call dependencies end-to-end
Integrating with native as well as non-native SF applications

Extended application descriptor file and invalid datasource

I have two applications:
hrportalcore: The core application with BaseController, ...
hrportalrequestleave: A sample application extended from the hrportalcore application
hrportalcore has the namespace de.example.core, and the dataSources are also maintained there (sap.app.dataSources in manifest.json). The dataSource is:
[...]
"HRPOJavaLeave": {
  "uri": "<path>",
  "type": "OData",
  "settings": {
    "annotations": [],
    "odataVersion": "2.0",
    "localUri": ""
  }
}
[...]
The dataSources can be used without any problems in the extended application, but the console shows the following errors:
It says the dataSource contains errors, yet it can be used (strange?).
Another thing is that the Component-preload.js file is sometimes loaded from the wrong place. The application works without problems, but it is, as said, occasionally loaded from a wrong location.
My manifest.json of hrportalrequestleave looks like this in the extension part (sap.ui5.extends):
[...]
"extends": {
  "component": "de.example.core",
  "extensions": {}
},
[...]
The parent is correctly defined in neo-app.json as /parent, pointing to hrportalcore.
jQuery.sap.declare("de.example.request.leave.Component");

// use the load function for getting the optimized preload file if present
if (!jQuery.sap.isDeclared("de.example.core.Component")) {
    sap.ui.component.load({
        name: "de.example.core",
        // Use the below URL to run the extended application when the
        // SAP-delivered application is deployed on cloud.
        url: jQuery.sap.getModulePath("de.example.request.leave") + "/parent"
        // we use a URL relative to our own component;
        // the extension application is deployed with the customer namespace
    });
}

this.de.example.core.Component.extend("de.example.request.leave.Component", {
    metadata: {
        manifest: "json"
    }
});
This all happens in the Fiori Launchpad of the HANA Cloud Platform.
Solution
manifest.json of hrportalcore: Always use the up-to-date version which you have deployed on your HCP in the applicationVersion property:
{
  "_version": "1.2.0",
  "sap.app": {
    "_version": "1.2.0",
    "applicationVersion": {
      "version": "1.6.2"
    },
    ...
manifest.json of hrportalrequestleave (Extension project): As above, always use the up-to-date version which you have deployed on your HCP in the applicationVersion property.
DataSource not found?!
If you have an extension project (like hrportalrequestleave < hrportalcore), the manifest.json files of both applications are merged like jQuery.extend(...). All properties except the sap.app tree, because that tree really describes the application and cannot be copied from the parent.
Now, when you use a dataSource from the parent, it will not be found. That means you must also define the sap.app.dataSources in the extension project's manifest.json.
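For illustration, a minimal sketch of the extension project's manifest.json redeclaring the parent's dataSource; the uri placeholder mirrors the snippet above:
{
  "sap.app": {
    "dataSources": {
      "HRPOJavaLeave": {
        "uri": "<path>",
        "type": "OData",
        "settings": {
          "odataVersion": "2.0"
        }
      }
    }
  }
}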
The error in the log
"Error in application dependency de.example.core.Component: No descriptor was found"
suggests that the manifest.json contains a dependency to "de.example.core.Component" instead of "de.example.core". According to your code snippets, the "extends" dependency is correct. Do you have other dependencies?
The AppIndex in the backend calculates the transitive closure of dependencies and if it can't find an installation with that ID, the above error is created and logged on the client side.
If your manifest.json looks okay but might have contained a wrong dependency in the past, then it might be necessary to re-run the AppIndex (or schedule it for a regular run).
The fact that the app works despite the config error is caused by the code that you've shown above: it explicitly loads the de.example.core component from an explicitly calculated URL. But before that step, the framework already tries to load it based on the information in the manifest.json, and there the information about the explicit URL is missing.
BTW: the code that calculates the URL suggests that even after fixing the manifest.json, the AppIndex might not find the component, as it seems to be stored in a sub-package of the de.example.request.leave app. I'm not sure whether the AppIndex can handle this (it can handle nested components if they are listed as embedded components in the top-level manifest.json, but I'm not sure it recognizes such embedded components in the dependencies section). As a result, it might try to load the embedded component even though it has already been loaded together with the enclosing component.

How to initiate a workflow in Activiti using REST API

I have created an Activiti process using Service Tasks etc. with Eclipse and deployed the .bar to Activiti, which is running on Tomcat. It was deployed successfully and I can start my process using Activiti Explorer without any issue. The deployed process name is "My process" and it is listed under Processes -> Deployed Process Definitions in the Activiti Explorer as well. In the diagram it has the name "myProcess:1:1473".
But I have two questions.
I need to start my process using a REST call (i.e. without using Activiti Explorer). What is the URL for that? I tried several variations of http://localhost:8080/activiti-rest/service/runtime/process-instances but none of them worked.
When I restart Tomcat, my process is not shown in the Activiti Explorer anymore. Each time I restart, I need to redeploy the process .bar file. Is that the natural behavior of the engine?
For your first question, check this guide for further details:
POST runtime/process-instances should be your endpoint (be sure to make a POST request, with application/json as your content type).
The payload, on the other hand, should be formatted following one of three templates:
Request body (start by process definition id):
{
  "processDefinitionId": "oneTaskProcess:1:158",
  "businessKey": "myBusinessKey",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
Request body (start by process definition key):
{
  "processDefinitionKey": "oneTaskProcess",
  "businessKey": "myBusinessKey",
  "tenantId": "tenant1",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
Request body (start by message):
{
  "message": "newOrderMessage",
  "businessKey": "myBusinessKey",
  "tenantId": "tenant1",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
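For example, a start-by-key call with curl could look like this; kermit/kermit are the demo defaults of the Activiti REST app, so substitute your own REST user:
curl -X POST \
  -u kermit:kermit \
  -H "Content-Type: application/json" \
  -d '{"processDefinitionKey": "myProcess", "businessKey": "myBusinessKey"}' \
  http://localhost:8080/activiti-rest/service/runtime/process-instances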
As for your second issue, you should be aware that the OOTB (out-of-the-box) config may involve automatic DB cleaning upon each and every restart; you need to locate that config and override it with values of your choice! Check this section for further info; the databaseSchemaUpdate param might be exactly what you are looking for!
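As a sketch of what that override can look like: the stock Activiti 5 Explorer/REST WARs read their datasource from WEB-INF/classes/db.properties (property names per the demo setup; verify for your version). The default in-memory H2 URL loses all deployments on restart, while a file-based URL persists them across Tomcat restarts:
# WEB-INF/classes/db.properties (illustrative values)
jdbc.driver=org.h2.Driver
# file-based H2 database instead of the default in-memory one
jdbc.url=jdbc:h2:~/activiti-db;AUTO_SERVER=TRUE
jdbc.username=sa
jdbc.password=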