Service Fabric and Application Insights Performance Counters

Where can I find a list of performance counter names that can be used with a Service Fabric cluster? There is a list published here, but I need the exact names to use in the cluster's ARM template. Currently I have the following configuration in the template:
"WadCfg": {
"DiagnosticMonitorConfiguration": {
"overallQuotaInMB": "1000",
"sinks": "applicationInsights",
"DiagnosticInfrastructureLogs": {},
"PerformanceCounters": {
"PerformanceCounterConfiguration": [
{
"counterSpecifier": "\\Processor(_Total)\\% Processor Time",
"sampleRate": "PT3M"
},
{
"counterSpecifier": "\\Memory\\Available MBytes",
"sampleRate": "PT3M"
}
]
}
But only the "Memory\Available MBytes" actually shows up in Application Insights.

Those counters are the standard Windows performance counters, so you just need to look up their exact names. Some examples:
http://techgenix.com/Key-Performance-Monitor-Counters/
http://www.appadmintools.com/documents/windows-performance-counters-explained/
Judging by all this information, the counter specifiers all follow the same pattern:
\Category(Instance)\Counter name
\\Processor(_Total)\\% Processor Time
\\Memory\\Available MBytes
\\Network Interface(*)\\Bytes Received/sec
...
You might be able to find more counters by running typeperf directly on a Service Fabric VM and capturing the output. You can also run it locally to get an idea of what is available.
http://defaultreasoning.com/2009/06/25/list-all-performance-counters-on-a-windows-computer-and-export-it-to-a-file/
C:\>TypePerf.exe -q > counters.txt
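Each name that typeperf prints can be turned into a counterSpecifier by doubling the backslashes for JSON. As a hypothetical example, an extra entry for the network counter above (the PT1M sample rate is an arbitrary choice, and it is worth verifying that WAD accepts the wildcard instance):
{
  "counterSpecifier": "\\Network Interface(*)\\Bytes Received/sec",
  "sampleRate": "PT1M"
}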

Related

How to implement a serviceworker in SFCC (Demandware)

I was wondering if anyone here has experience with implementing a service worker in SFCC/Demandware.
I generate a service worker with Webpack, using sw-precache-webpack-plugin.
The problem is that a service worker should be served from the root of the domain, e.g. site.com/sw.js.
JS files normally end up in the static/ folder.
Does anyone have an idea how to serve this JS file from the root of the project in Demandware/SFCC?
Unfortunately, registering a service worker with a scope higher up the path than the service worker file itself does not work (as stated on MDN):
The service worker will only catch requests from clients under the service worker's scope.
The max scope for a service worker is the location of the worker.
(Source: https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
Solution
Here is a suggestion for a working approach for serving "/sw.js" in Demandware (Salesforce):
Create a new controller (or pipeline), e.g. "ServiceWorker-GetFile"; the response should be the file's content, which can be read from whatever source you wish (see the sketch after the alias example below):
a content asset (dw.content.ContentMgr.getContent());
a library file (dw.content.ContentMgr.getContent(), or directly reading a file with dw.io.File / dw.io.FileReader);
even a site preference (although I wouldn't recommend it).
Create an entry in Business Manager / Merchant Tools / SEO / Aliases to route "/sw.js" to "ServiceWorker-GetFile", i.e. use something along the lines of:
{
  ...
  "your-host" : [
    ...,
    {
      "if-site-path": "/sw.js",
      "pipeline": "ServiceWorker-GetFile"
    }
  ]
}
This may seem like unnecessary overhead, but it was the only way I could find for serving files with a root path in the URI.
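For illustration, here is a minimal controller sketch in the legacy SiteGenesis controller style (a sketch only: the asset ID "SERVICE_WORKER_JS" is hypothetical, and it assumes the script is stored in the content asset's standard body attribute):
'use strict';

var ContentMgr = require('dw/content/ContentMgr');

// ServiceWorker-GetFile : reads a content asset and serves it as JavaScript.
function getFile() {
    var content = ContentMgr.getContent('SERVICE_WORKER_JS'); // hypothetical asset ID
    if (content && content.custom && content.custom.body) {
        response.setContentType('application/javascript');
        response.getWriter().print(content.custom.body.markup);
    } else {
        response.setStatus(404);
    }
}

getFile.public = true;
exports.GetFile = getFile;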
Serving other root files as well
By expanding the controller (renaming it to, say, "Content-GetFile" and adding GET/POST parameters like "name" and/or "source"), this could conveniently serve other files as well ("/manifest.json", "/.well-known/assetlinks.json", etc.). In the next example of Business Manager / ... / Aliases, let Content-GetFile accept two parameters: "name" (a file name in the content library, or a content asset ID) and "source" ("file" or "asset"):
...
{
  "if-site-path": "/sw.js",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "/ServiceWorker/sw.js",
    "source": "file"
  }
},
{
  "if-site-path": "/manifest.json",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "MANIFEST_JSON",
    "source": "asset"
  }
}
Note that your code should handle the base paths of the resources appropriately (e.g. "/ServiceWorker/sw.js" from the above example is not self-explanatory; you should know whether it is a path in a content library or a path relative to "cartridges//static/default/js/"). A minimal sketch of that dispatch follows.
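(Helper functions here are hypothetical and would wrap the asset/file reading shown earlier:)
// Content-GetFile : picks the resource by the "source" and "name" parameters.
function getFile() {
    var params = request.httpParameterMap;
    var name = params.name.stringValue;
    var source = params.source.stringValue;

    if (source === 'asset') {
        serveContentAsset(name); // hypothetical helper built on dw.content.ContentMgr
    } else if (source === 'file') {
        serveLibraryFile(name);  // hypothetical helper built on dw.io.File / dw.io.FileReader
    } else {
        response.setStatus(400);
    }
}

getFile.public = true;
exports.GetFile = getFile;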
Dynamic content
Since the suggested approach uses a controller, you can dynamically process the content before serving it to the user (e.g. if you need to add/remove the "/v12435145145/" part from DMW links). The sky is the limit. :)
I'm currently messing around with the service workers on DW as well.
In my case I added the registration script directly inside a footer.isml file, like this:
<script>
    // init service worker
    if ('serviceWorker' in navigator) {
        window.addEventListener('load', () => {
            navigator.serviceWorker
                .register("${URLUtils.staticURL('/lib/sw/sw.js')}")
                .then(registration => {
                    console.log(
                        `Service Worker registered! Scope: ${registration.scope}`
                    );
                })
                .catch(err => {
                    console.log(`Service Worker registration failed: ${err}`);
                });
        });
    }
</script>
This works for me; at least I can see the "Service Worker registered!" message.
I also had some issues with the SSL certificate: my development environment doesn't have a proper SSL certificate but still uses HTTPS routes, so Chrome was complaining about it. I needed to launch Chrome from the terminal with this command:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/foo --ignore-certificate-errors --unsafely-treat-insecure-origin-as-secure=[YOUR DOMAIN]
Unfortunately I haven't been able to get any code inside that service worker file to run. I even tried Safari, since it has a Service Workers option in the Develop menu, but it doesn't show any service worker running.
I hope this helps you.

write log to Application insights from local service fabric

I am trying to integrate Azure Application Insights into a Service Fabric app for logging and instrumentation. I am running the fabric code on my local VM. I followed the document here exactly [scenario 2]. Other resources on learn.microsoft.com also seem to describe the same steps (e.g. https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostics-event-aggregation-eventflow).
For some reason, I don't see any event entries in Application Insights. There are no errors in the code when I do this:
ServiceEventSource.Current.ProcessedCountMetric("synced",sw.ElapsedMilliseconds, crc.DataTable.Rows.Count);
eventflowconfig.json contents
{
  "inputs": [
    {
      "type": "EventSource",
      "sources": [
        { "providerName": "Microsoft-ServiceFabric-Services" },
        { "providerName": "Microsoft-ServiceFabric-Actors" },
        { "providerName": "mystatefulservice" }
      ]
    }
  ],
  "filters": [
    {
      "type": "drop",
      "include": "Level == Verbose"
    }
  ],
  "outputs": [
    {
      "type": "ApplicationInsights",
      // (replace the following value with your AI resource's instrumentation key)
      "instrumentationKey": "XXXXXXXXXXXXXXXXXXXXXX",
      "filters": [
        {
          "type": "metadata",
          "metadata": "metric",
          "include": "ProviderName == mystatefulservice && EventName == ProcessedCountMetric",
          "operationProperty": "operation",
          "elapsedMilliSecondsProperty": "elapsedMilliSeconds",
          "recordCountProperty": "recordCount"
        }
      ]
    }
  ],
  "schemaVersion": "2016-08-11"
}
In ServiceEventSource.cs
[Event(ProcessedCountMetricEventId, Level = EventLevel.Informational)]
public void ProcessedCountMetric(string operation, long elapsedMilliSeconds, int recordCount)
{
    if (IsEnabled())
        WriteEvent(ProcessedCountMetricEventId, operation, elapsedMilliSeconds, recordCount);
}
EDIT
Adding the diagnostics pipeline code from Program.cs in the stateful service:
using (var diagnosticsPipeline =
    ServiceFabricDiagnosticPipelineFactory.CreatePipeline(
        $"{ServiceFabricGlobalConstants.AppName}-mystatefulservice-DiagnosticsPipeline"))
{
    ServiceRuntime.RegisterServiceAsync("mystatefulserviceType",
        context => new mystatefulservice(context)).GetAwaiter().GetResult();

    ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id,
        typeof(mystatefulservice).Name);

    // Prevents this host process from terminating so services keep running.
    Thread.Sleep(Timeout.Infinite);
}
EventSource is a tricky technology; I have been working with it for a while and still run into problems. The configuration looks good, but it is very hard to investigate without access to the environment, so I will offer some suggestions.
There are a few catches you must be aware of:
If you are listening to ETW events from a different process, your process must run as a user that is a member of the Performance Log Users group, which has permission to create event sessions to listen for these events. Check which identity your service runs under and whether it belongs to that group.
Ensure the events are being emitted correctly and that you can see them in the Diagnostic Events window; if they don't show up there, the problem is in the provider.
For testing purposes, comment out the if (IsEnabled()) check. It is an internal check to validate whether your events should be emitted. I have seen situations where it is always false and the emit is skipped; it probably caches the result for a while, and the docs are not clear on how it should behave.
Whenever possible, use the EventSource from the NuGet package instead of the framework one; the framework version is full of bugs and lacks fixes found in the NuGet version.
Application Insights is not real-time; sometimes it takes a few minutes to process your events. I would recommend outputting the events to a console or file first to verify the pipeline is listening correctly, and only afterwards enabling the AppInsights output, as sketched below.
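For example, a temporary debugging variant of the outputs section that writes events to the console instead (this assumes the Microsoft.Diagnostics.EventFlow.Outputs.StdOutput package is referenced):
"outputs": [
  {
    "type": "StdOutput"
  }
]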
The link you provided is quite outdated, and there's actually a much better way to log application error and exception info to Application Insights. For example, the approach above won't help with tracking the call hierarchy of an incoming request across multiple services.
Have a look at the Microsoft Application Insights Service Fabric NuGet packages. They work great for:
Sending error and exception info
Populating the application map with all your services and their dependencies (including databases)
Reporting on app performance metrics
Tracing service call dependencies end-to-end
Integrating with native as well as non-native SF applications
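As a rough sketch of the wiring (assuming the Microsoft.ApplicationInsights.ServiceFabric package; consult that project's README for the definitive setup):
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.ServiceFabric;

// A sketch, not the definitive setup: enrich telemetry with Service Fabric context
// so the application map can correlate calls across services.
// "context" is the ServiceContext passed to your service.
var configuration = TelemetryConfiguration.CreateDefault();
configuration.InstrumentationKey = "XXXXXXXXXXXXXXXXXXXXXX"; // your AI resource's key
configuration.TelemetryInitializers.Add(
    FabricTelemetryInitializerExtension.CreateFabricTelemetryInitializer(context));

var client = new TelemetryClient(configuration);
client.TrackTrace("Service started");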

Cloud foundry on Google Compute engine can't create container

I am very new to Cloud Foundry. I set up Cloud Foundry on the Google Compute Engine platform by following these guides: source1 and source2.
Terraform was used to create the needed infrastructure. Everything seemed fine: I didn't get any errors while deploying Cloud Foundry itself, and the bosh cck command reports no problems. But when I tried to deploy my hello world app, I got the following error message in the terminal after cf push:
Creating container
Failed to create container
FAILED
Error restarting application: StagingError.
After checking the log files I found the following messages:
{
"timestamp":"1474637304.026303530",
"source":"garden-linux",
"message":"garden-linux.loop-mounter.mount-file.mounting",
"log_level":2,
"data":{
"destPath":"/var/vcap/data/garden/aufs_graph/aufs/diff/08829a3252c1d60729e3b5482b0fb109652c9ab5beff9724e4e4ae756a0bc3ce",
"error":"exit status 32",
"filePath":"/var/vcap/data/garden/aufs_graph/backing_stores/08829a3252c1d60729e3b5482b0fb109652c9ab5beff9724e4e4ae756a0bc3ce",
"output":"mount: wrong fs type, bad option, bad superblock on /dev/loop0,\n missing codepage or helper program, or other error\n In some cases useful info is found in syslog - try\n dmesg | tail or so\n\n",
"session":"2.276"
}
}{
"timestamp":"1474637304.026949406",
"source":"garden-linux",
"message":"garden-linux.pool.acquire.provide-rootfs-failed",
"log_level":2,
"data":{
"error":"mounting file: mounting file: exit status 32",
"handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"session":"9.545"
}
}
{
"timestamp":"1474637304.027062416",
"source":"garden-linux",
"message":"garden-linux.garden-server.create.failed",
"log_level":2,
"data":{
"error":"mounting file: mounting file: exit status 32",
"request":{
"Handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"GraceTime":0,
"RootFSPath":"/var/vcap/packages/rootfs_cflinuxfs2/rootfs",
"BindMounts":[
{
"src_path":"/var/vcap/data/executor_cache/6942123d3462ad9d21a45729c3cae183-1474475979582384649-1.d",
"dst_path":"/tmp/lifecycle"
}
],
"Network":"",
"Privileged":true,
"Limits":{
"bandwidth_limits":{
},
"cpu_limits":{
"limit_in_shares":512
},
"disk_limits":{
"inode_hard":200000,
"byte_hard":6442450944,
"scope":1
},
"memory_limits":{
"limit_in_bytes":1073741824
}
}
},
"session":"11.44187"
}
}{
"timestamp":"1474637304.034646988",
"source":"garden-linux",
"message":"garden-linux.garden-server.destroy.failed",
"log_level":2,
"data":{
"error":"unknown handle: ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"session":"11.44188"
}
}
Meanwhile, dmesg | tail showed the following:
[161023.238082] aufs test_add:283:garden-linux[7681]: uid/gid/perm /var/vcap/data/garden/aufs_graph/aufs/diff/d350dcd30f6d6f8b37eabe06a3b73bcea0a87f9aff4edf15f12792269fc9f97c 4294967294/4294967294/0755, 0/0/0755
[161023.238109] aufs au_opts_verify:1597:garden-linux[7681]: dirperm1 breaks the protection by the permission bits on the lower branch
[161023.413392] device wtj3qdqhig0t-0 entered promiscuous mode
I'm not sure whether these issues are connected, or whether they are issues at all, but I'm posting them here to be sure I didn't miss anything.
I don't know how to fix this problem, or where to look for a solution: in the Terraform scripts or in the BOSH manifest files? We have a microservice architecture with three Node.js services and one Ruby service, so deployment is a very important question for us.
Here is my application's manifest.yml file:
---
applications:
- name: hello_cloud
  memory: 128M
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
  instances: 1
  random-route: true
  command: "node server.js"
My goal is to be able to deploy applications using Cloud Foundry. If you have any additional questions, or if I wrote something unclear, feel free to write me.
This issue is related to a conflict between Garden and the 4.4 Linux kernel. To use the example Cloud Foundry manifest, use the following stemcell:
bosh upload stemcell https://bosh.io/d/stemcells/bosh-google-kvm-ubuntu-trusty-go_agent?v=3262.19
bosh deploy
You may need to delete your cf deployment before re-deploying due to quota issues; a possible sequence is sketched below.
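For instance, a hypothetical sequence using the BOSH v1 CLI from above (the deployment name cf and the manifest path are assumptions):
bosh delete deployment cf
bosh upload stemcell https://bosh.io/d/stemcells/bosh-google-kvm-ubuntu-trusty-go_agent?v=3262.19
bosh deployment cf-manifest.yml
bosh deploy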

How to initiate a workflow in Activiti using REST API

I have created an Activiti process using service tasks etc. with Eclipse and deployed the .bar to Activiti, which is running on Tomcat. It was deployed successfully and I can start my process using Activiti Explorer without any issue. The deployed process name is "My process" and it is listed under Processes -> Deployed Process Definitions in Activiti Explorer as well. In the diagram it has the name "myProcess:1:1473".
But I have two questions.
I need to start my process using a REST call (i.e. without using Activiti Explorer). What is the URL for that? I tried several variations of http://localhost:8080/activiti-rest/service/runtime/process-instances but none of them worked.
When I restart Tomcat, my process is no longer shown in Activiti Explorer. Each time I restart I need to redeploy the process .bar file. Is that the default behavior of the engine?
For your first question, check this guide for further details:
POST runtime/process-instances should be your endpoint (be sure to make a POST request with application/json as your content type).
The payload, on the other hand, should follow one of three templates:
Request body (start by process definition id):
{
  "processDefinitionId": "oneTaskProcess:1:158",
  "businessKey": "myBusinessKey",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
Request body (start by process definition key):
{
  "processDefinitionKey": "oneTaskProcess",
  "businessKey": "myBusinessKey",
  "tenantId": "tenant1",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
Request body (start by message):
{
  "message": "newOrderMessage",
  "businessKey": "myBusinessKey",
  "tenantId": "tenant1",
  "variables": [
    {
      "name": "myVar",
      "value": "This is a variable"
    }
  ]
}
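For a quick test from the command line, a hypothetical curl call (assuming the default demo credentials kermit/kermit and the process definition key myProcess from your diagram):
curl -X POST -u kermit:kermit \
  -H "Content-Type: application/json" \
  -d '{"processDefinitionKey":"myProcess","businessKey":"myBusinessKey"}' \
  http://localhost:8080/activiti-rest/service/runtime/process-instances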
As for your second issue, be aware that the out-of-the-box (OOTB) config may involve automatic DB cleaning upon each and every restart; you need to locate that config and override it with values of your choice! Check this section for further info; the databaseSchemaUpdate param might be exactly what you are looking for. A hypothetical override is sketched below.
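(A sketch only: the bean definition lives in the Spring context used by activiti-rest, and the file-backed H2 URL here is just an example of a database that survives restarts:)
<bean id="processEngineConfiguration"
      class="org.activiti.engine.impl.cfg.StandaloneProcessEngineConfiguration">
  <!-- Example: a file-backed H2 database instead of an in-memory one -->
  <property name="jdbcUrl" value="jdbc:h2:~/activiti;AUTO_SERVER=TRUE" />
  <property name="jdbcDriver" value="org.h2.Driver" />
  <property name="jdbcUsername" value="sa" />
  <property name="jdbcPassword" value="" />
  <!-- Create or update tables on startup instead of dropping them -->
  <property name="databaseSchemaUpdate" value="true" />
</bean>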

Logstash-Forwader 3.1 state file .logstash-forwarder not updating

I am having an issue with Logstash-Forwarder 3.1.1 on CentOS 6.5 where the state file /.logstash-forwarder is not updating as information is sent to Logstash.
I have found that as activity is logged by logstash-forwarder, the corresponding offset is not recorded in the /.logstash-forwarder state file. The /.logstash-forwarder file is recreated each time 100 events are recorded, but it is not updated with data. I know the file has been recreated because I changed its permissions to test, and the permissions are reset each time.
Below are my configurations (with some actual data scrubbed):
Logstash-forwarder 3.1.1
Centos 6.5
/etc/logstash-forwarder
Note that the "paths" key does contain wildcards
{
  "network": {
    "servers": [ "*server*:*port*" ],
    "timeout": 15,
    "ssl ca": "/*path*/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/a/b/tomcat-*-*/logs/catalina.out"
      ],
      "fields": { "type": "apache", "time_zone": "EST" }
    }
  ]
}
Per the Logstash instructions for CentOS 6.5, I have configured the LOGSTASH_FORWARDER_OPTIONS value so it looks like the following:
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
Below is the resting state of the /.logstash-forwarder state file:
{"/a/b/tomcat-set-1/logs/catalina.out":{"source":"/a/b/tomcat-set-1/logs/catalina.out","offset":433564,"inode":*number1*,"device":*number2*},"/a/b/tomcat-set-2/logs/catalina.out":{"source":"/a/b/tomcat-set-2/logs/catalina.out","offset":18782151,"inode":*number3*,"device":*number4*}}
There are two sets of logs being captured. The offset has stayed the same for 20 minutes while events were occurring and being sent over to Logstash.
Can anyone give me advice on how to fix this problem, whether it is a configuration setting I missed or a bug?
Thank you!
After more research I found that Filebeat has been announced as the preferred forwarder going forward. I even found a post by the maintainer of Logstash-Forwarder saying that the program is full of bugs and is no longer fully supported.
I have instead moved to CentOS 7 with the latest version of the ELK stack, using Filebeat as the forwarder. Things are going much more smoothly now!
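For reference, a hypothetical filebeat.yml roughly equivalent to the logstash-forwarder config above (Filebeat 1.x syntax; the server, port and certificate path are the scrubbed placeholders from the question):
filebeat:
  prospectors:
    -
      paths:
        - /a/b/tomcat-*-*/logs/catalina.out
      fields:
        type: apache
        time_zone: EST
output:
  logstash:
    hosts: ["*server*:*port*"]
    tls:
      certificate_authorities: ["/*path*/logstash-forwarder.crt"]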