Tool to explore and query ETW logs from Service Fabric on premises - azure-service-fabric

I'm looking for a tool that will allow me to explore and query/search Service Fabric logs written in ETL format. I tried Message Analyzer, but it took a long time to load and then hung. The second tool I tried was Windows Event Viewer, but after conversion to EVTX the logs look like the example below and are useless to me:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-ServiceFabric" Guid="{cbd93bc2-71e5-4566-b3a7-595d8eeca6e8}" />
<EventID>65534</EventID>
<Version>1</Version>
<Level>0</Level>
<Task>65534</Task>
<Opcode>254</Opcode>
<Keywords>0xffffffffffffff</Keywords>
<TimeCreated SystemTime="2018-08-17T14:11:30.484723000Z" />
<EventRecordID>11534</EventRecordID>
<Correlation />
<Execution ProcessID="14332" ThreadID="5124" ProcessorID="3" KernelTime="9" UserTime="63" />
<Channel />
<Computer>Machine Name</Computer>
<Security />
</System>
<ProcessingErrorData>
<ErrorCode>15003</ErrorCode>
<DataItemName />
<EventPayload>0101005B69002C006F6E3D223022206C657..74656D706C6174653D22537461727441735072696D61727941726773222F3E0D0A20203C6576656E742076</EventPayload>
</ProcessingErrorData>
</Event>
I saw that on Azure (https://channel9.msdn.com/Events/dotnetConf/2018/S208, around the 35-minute mark) there is an option to use Application Insights to query the results. Is there any tool that allows me to do this locally?

I generally use PerfView for most ETW log analysis. It provides very good filtering capabilities on the raw files without needing to convert the logs to any other format, and it is lightweight enough to process huge log files.
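For example, here is a minimal sketch of opening an existing trace from the command line (the path and file name are hypothetical; point it at wherever your cluster writes its .etl files). Once it is loaded, the Events view lets you filter and search the raw ETW events from the Microsoft-ServiceFabric provider:
PerfView.exe D:\SvcFab\Log\Traces\fabric_traces.etl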
The upside of tools like Log Analytics on OMS or Application Insights is that they provide advanced features like alerting, aggregation, and SQL-like queries over these same events. Also, after setup, you don't have to handle large log files (generally hosted on Blob storage) to find your application's logs.
For development, PerfView does the job; for production analysis I would recommend OMS or Application Insights.
The only downside of Log Analytics is that events are not shown in real time; it takes a few minutes before they appear in the portal, but that is still faster than finding and copying the files for analysis in PerfView or other tools.

Related

Set service Startup type in WiX installer

I am trying to set a pre-installed service's startup type to Automatic, using WiX. Another task was to start the service on install, which I achieved with:
<ServiceControl
Id="ServiceRunningState"
Name="[Service Name]"
Start="install"
Stop="install"
Wait="yes" />
Now I would also like to set the startup type. I have tried the following (see answer):
<ServiceConfig
Id="ServiceStartup"
ServiceName="[Service Name]"
DelayedAutoStart="yes"
OnInstall="yes"
OnReinstall="yes" />
But this didn't change the startup type of the service (tested with the service initially set to Manual). And besides, I want the startup type to be Automatic, not Automatic (Delayed Start).
Please note that I am trying to modify an existing service, so there is no ServiceInstall element.
The two elements (ServiceControl and ServiceConfig) are children within a Component parent element.
Any help is appreciated :)
MSI doesn't support changing the startup type of a service that the package doesn't install. ServiceConfig doesn't let you get around that:
Applies only to installed auto-start services or services installed by this package with SERVICE_AUTO_START in the StartType field of the ServiceInstall table.
Solved by editing the registry via RegistryKey; see the example:
<RegistryKey Root="HKLM"
             Key="SYSTEM\CurrentControlSet\Services\[Service Name]"
             Action="create">
  <!-- Start = 2 is SERVICE_AUTO_START (Automatic); DelayedAutostart = 0 turns off Delayed Start -->
  <RegistryValue Type="integer" Name="Start" Value="2" />
  <RegistryValue Type="integer" Name="DelayedAutostart" Value="0" />
</RegistryKey>
Note that the service may still appear as Automatic (Delayed Start) in the Services GUI at first. However, after restarting, the Services GUI displayed the startup type as Automatic.
Set the "DelayedAutoStart" parameter to "no", rather than "yes".

Peach 3 Dumb Fuzz Tutorial - Unable to locate WinDbg

I am attempting a quick tutorial on fuzz testing, using Peach Fuzzer to do so. After running the fuzzer, I receive the error:
Could not start monitor "WindowsDebugger". Error, unable to locate WinDbg please specify using "WinDbgPath" parameter.
I'm really unsure how to begin fixing this problem. Any help would be appreciated.
Where is your WindowsDebugger? It would have to reside on the machine that is running the program that is being fuzzed.
Also, what does your Peach pit look like for the Agent entity? Does it look something like this?
<Agent name="RemoteAgent" location="tcp://127.0.0.1:9001">
<!-- Run and attach windbg to a vulnerable server. -->
<Monitor class="WindowsDebugger">
<Param name="CommandLine" value="C:\Documents and Settings\Administrator\Desktop\vulnserver\vulnserver.exe"/>
<Param name="WinDbgPath" value="C:\Program Files\Debugging Tools for Windows (x86)" />
</Monitor>
</Agent>
You can also follow along with this blog post I wrote about using Peach 3 to fuzz a sample network server called VulnServer: http://rockfishsec.blogspot.com/2014/01/fuzzing-vulnserver-with-peach-3.html

Are multiple Task tags available in <Startup>, or do I have to merge these cmd files into one?

I am new to Azure development and writing PowerShell scripts.
I want to run two cmd files as Azure startup tasks. I added these files to the solution and set their properties to "Copy Always". Then I added a new node to ServiceDefinition.csdef. Here it is:
<Startup>
<Task commandLine="Startup\startupcmd.cmd > c:\logs\startuptasks.log" executionContext="elevated" taskType="background">
<Environment>
<Variable name="EMULATED">
<RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
</Variable>
</Environment>
</Task>
<Task commandLine="Startup\disableTimeout.cmd" executionContext="elevated" />
</Startup>
It's not deploying, and I'm getting this error: Instance 0 of role Web is busy
Now to my question: are multiple Task tags available in <Startup>, or do I have to merge these cmd files into one?
As per definition:
The Startup element describes a collection of tasks that run when the
role is started.
So the answer to your concrete question is: yes, you can define multiple startup tasks.
The Busy state is almost fine, in the sense that it is a bit better than cycling! What I would suggest is to enable Remote Desktop and connect to see what is going on with the startup tasks. Busy is set until all simple tasks have completed and returned a 0 exit code. Your task may be failing or hanging for a while, and that's why you see Busy.
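For example, here is a minimal sketch of what each startup .cmd could look like (the log path matches the one in your csdef and is assumed to exist; the important part is that a simple task finishes and returns exit code 0 so the role can leave the Busy state):
ECHO Startup task starting >> c:\logs\startuptasks.log
REM ... actual setup work goes here ...
EXIT /B 0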

NServiceBus pipeline with Distributors

I'm building a processing pipeline with NServiceBus but I'm having trouble with the configuration of the distributors in order to make each step in the process scalable. Here's some info:
The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart.
Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step. This tells me that each step needs a Distributor.
I want to be able to hook additional activities onto events later. This tells me I need to Publish() messages when it is done, not Send() them.
A process may need to branch based on a condition. This tells me that a process must be able to publish more than one type of message.
A process may need to join forks. I imagine I should use Sagas for this.
Hopefully these assumptions are good otherwise I'm in more trouble than I thought.
For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages.
NodeA workers contain an IHandleMessages handler, and publish EventA
NodeB workers contain an IHandleMessages handler, and publish EventB
NodeC workers contain an IHandleMessages handler, and then the pipeline is complete.
Here are the relevant parts of the config files, where # denotes the number of the worker, (i.e. there are input queues NodeA.1 and NodeA.2):
NodeA:
<MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
<UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control" DistributorDataAddress="NodeA.Distrib.Data" >
<MessageEndpointMappings>
</MessageEndpointMappings>
</UnicastBusConfig>
NodeB:
<MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
<UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control" DistributorDataAddress="NodeB.Distrib.Data" >
<MessageEndpointMappings>
<add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" />
</MessageEndpointMappings>
</UnicastBusConfig>
NodeC:
<MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
<UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control" DistributorDataAddress="NodeC.Distrib.Data" >
<MessageEndpointMappings>
<add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" />
</MessageEndpointMappings>
</UnicastBusConfig>
And here are the relevant parts of the distributor configs:
Distributor A:
<add key="DataInputQueue" value="NodeA.Distrib.Data"/>
<add key="ControlInputQueue" value="NodeA.Distrib.Control"/>
<add key="StorageQueue" value="NodeA.Distrib.Storage"/>
Distributor B:
<add key="DataInputQueue" value="NodeB.Distrib.Data"/>
<add key="ControlInputQueue" value="NodeB.Distrib.Control"/>
<add key="StorageQueue" value="NodeB.Distrib.Storage"/>
Distributor C:
<add key="DataInputQueue" value="NodeC.Distrib.Data"/>
<add key="ControlInputQueue" value="NodeC.Distrib.Control"/>
<add key="StorageQueue" value="NodeC.Distrib.Storage"/>
I'm testing using 2 instances of each node, and the problem seems to come up in the middle at Node B. There are basically 2 things that might happen:
Both instances of Node B report that they are subscribing to EventA, and also that NodeC.Distrib.Data#MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great.
Both instances of Node B report that they are subscribing to EventA; however, one worker says NodeC.Distrib.Data#MYCOMPUTER is subscribing TWICE, while the other worker does not mention it at all.
In the second case, which seems to be determined only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes EventA, then the publish of EventB has no subscribers and the workflow dies.
So, my questions:
Is this kind of setup possible?
Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup.
Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced?
Then the load-balanced nodes could simply reply back to the central broker, which seems easier.
On the other hand, that seems at odds with the decentralization that is NServiceBus's strength.
And if this is the answer, and the long running process's done event is a reply, how do you keep the Publish that enables later extensibility on published events?
The problem you have is that your nodes don't see each other's list of subscribers. The reason you're having that problem is that you're trying out a production scenario (scale-out) under the default NServiceBus profile (Lite), which doesn't support scale-out but makes single-machine development very productive.
To solve the problem, run the NServiceBus host using the production profile as described on this page:
http://docs.particular.net/nservicebus/hosting/nservicebus-host/profiles
That will let different nodes share the same list of subscribers.
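For example, with the generic host the profile is passed as a command-line argument (a sketch, assuming you are running the default NServiceBus.Host.exe hosting):
NServiceBus.Host.exe NServiceBus.Production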
Other than that, your configuration is right on.

Why does Log4Net run so slow in my Windows Service?

I have a Windows service that uses log4net. We noticed that the service in question was running painfully slowly, so we attached a debugger to it and stepped through. It appears that each time it tries to write an entry to the log via log4net, it takes anywhere from 10 to 30 seconds before the next line of code can execute. Obviously this adds up...
The service targets .NET 2.0.
We're using log4net 1.2.0.30714.
We've tested this on a machine running Vista and a machine running Windows Server 2003 and have seen the same or similar results.
Jeff mentioned a performance problem with Log4Net in Podcast 20. It's possible that you are seeing a similar issue.
It turned out that someone had added an SmtpAppender in a config file which was overriding the one in our app, and the errant SMTP server address was unreachable. log4net was trying to deliver the error for about a minute per request before giving up and going on to the next line of code. Correcting the SMTP address fixed the problem.
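For reference, here is a hypothetical sketch of the kind of stray SmtpAppender configuration that can cause this; the server name and addresses are made up, and the point is that log4net blocks while it tries to reach the unreachable smtpHost:
<appender name="SmtpAppender" type="log4net.Appender.SmtpAppender">
<to value="ops@example.com" />
<from value="service@example.com" />
<subject value="Service error" />
<smtpHost value="unreachable-smtp.example.com" />
<bufferSize value="512" />
<lossy value="false" />
<evaluator type="log4net.Core.LevelEvaluator">
<threshold value="ERROR" />
</evaluator>
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date %-5level %logger - %message%newline" />
</layout>
</appender>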
I use log4net with the AdoNetAppender and have not seen any performance degradation in my Windows service. What appender are you using?
Check your config file for log4net settings. log4net can be configured to log to a remote machine, and if that connection is slow, so is your logging.
Well, I'm not remoting... this is writing to the log file on the machine it's running on. Here are my appender settings:
<appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender,log4net">
<file value="D:\\ROPLogFiles\\FileProcessor.txt" />
<appendToFile value="true" />
<datePattern value="yyyyMMdd" />
<rollingStyle value="Date" />
<layout type="log4net.Layout.PatternLayout,log4net">
<param name="ConversionPattern" value="%d [%t] %-5p %c [%x] - %m%n" />
</layout>
<threshold value="INFO" />
</appender>
The default maximum file size is 10 MB. If your files are around this size and your file system is quite full and heavily fragmented, the problem may lie there. How big are your log files? I encountered similar problems with log files of gigabyte size.
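If size turns out to be the issue, here is a sketch of how the appender shown above could be changed to roll on size as well as date (the values are only illustrative):
<rollingStyle value="Composite" />
<maximumFileSize value="10MB" />
<maxSizeRollBackups value="10" />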