PerfView is not stopping

I'm running this PerfView command:
PerfView.exe /Merge:true /zip:true /NoNGenRundown /NoClrRundown /KeepAllEvents /ThreadTime /DumpHeap /NoView /NoGui /MaxCollectSec:30 collect
but it seems that even though I set /MaxCollectSec:30 to 30 seconds, the data collection process does not stop and keeps adding data to the PerfViewData.etl file.
This is the output from the console window that PerfView opens when running the command:
VERBOSE LOG IN: PerfViewData.log.txt
EXECUTING: PerfView /Merge:true /zip:true /NoNGenRundown /NoClrRundown /KeepAllEvents /ThreadTime /DumpHeap /NoView /NoGui /MaxCollectSec:30 collect
Pre V4.0 .NET Rundown disabled, Type 'E' to enable symbols for V3.5 processes.
Do NOT close this console window. It will leave collection on!
Type S to stop collection, 'A' will abort.
Kernel Log: C:\PerfView\PerfViewData.kernel.etl
User mode Log: C:\PerfView\PerfViewData.etl
Starting collection at 12/07/2017 14:26:32
Collecting 10 sec: Size= 10.5 MB.
Collecting 20 sec: Size= 16.4 MB.
Exceeded MaxCollectSec 30
So there it is: Exceeded MaxCollectSec 30, yet it keeps writing to the .etl files.
I want to send a client a PerfView command that collects system-wide data, so they can send me back a zip file with all the ETL files from PerfView. Currently the command does not stop. Does anybody know why? What should I add to or remove from the command so it stops automatically after 30 seconds?

I know it's been a while, but it looks like the /DumpHeap switch is the problem here - if you remove it, the trace will finish on time. I checked the PerfView source code and when DumpHeap is selected there is some interaction with the GUI window:
if (parsedArgs.DumpHeap)
{
    // Take a heap snapshot.
    GuiHeapSnapshot(parsedArgs, true);
    // Ensure that we clean up the heap snapshot state.
    parsedArgs.DumpHeap = false;
}
You may want to create an issue in the PerfView repository describing your problem.
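For reference, a minimal sketch of the same command with /DumpHeap removed, which should then honor /MaxCollectSec and stop on its own after 30 seconds:

PerfView.exe /Merge:true /zip:true /NoNGenRundown /NoClrRundown /KeepAllEvents /ThreadTime /NoView /NoGui /MaxCollectSec:30 collect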

Program and Run PIC18 with pickit4 on linux

I am on Ubuntu Linux and the target is a PIC18F47J53.
I basically want to program the chip and then let it run, using the command line and a PICkit 4.
Using ipecmd (from MPLAB X IDE v5.45), this is my command:
/opt/microchip/mplabx/v5.45/sys/java/zulu8.40.0.25-ca-fx-jre8.0.222-linux_x64/bin/java -jar /opt/microchip/mplabx/v5.45/mplab_platform/mplab_ipe/ipecmd.jar -TPPK4 /P18F47J53 -M -F"/path_to_myfile.hex" -W
This is my output:
DFP Version Used : PIC18F-J_DFP,1.4.41,Microchip
*****************************************************
Connecting to MPLAB PICkit 4...
Currently loaded versions:
Application version............00.06.66
Boot version...................01.00.00
Script version.................00.04.17
Script build number............db473af2f4
Tool pack version .............1.6.961
PICkit 4 is supplying power to the target (3.25 volts).
Target device PIC18F47J53 found.
Device Revision Id = 0x1
*****************************************************
Calculating memory ranges for operation...
Erasing...
The following memory area(s) will be programmed:
program memory: start address = 0x0, end address = 0x3ff
program memory: start address = 0x1fc00, end address = 0x1fff7
configuration memory
Programming/Verify complete
Program Report
30-Jan-2021, 12:54:41
Device Type:PIC18F47J53
Program Succeeded.
Operation Succeeded
All good, and it takes about 12 seconds. However, after that the PICkit 4 turns off the target power, and the PICkit LED is BLUE (I guess the "ready" state).
The main question is: how can I keep the PICkit 4 powering the board? Is there a specific parameter? (I cannot find one in the readme.html.)
If I use the MPLAB X IPE GUI to program, the programming is much quicker (3 or 4 seconds), the PICkit LED is YELLOW, and the target is left powered on. (I selected "release from reset".)
I have tried to produce a log with as many details as possible, but I cannot see the commands sent to the PICkit 4.
Any idea? Thanks.
I realize that it's been a while since you asked, but I'll put the answer here for anyone who needs it: add -OL to your command line options.
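For example, the command from the question with -OL appended would become (everything else unchanged):

/opt/microchip/mplabx/v5.45/sys/java/zulu8.40.0.25-ca-fx-jre8.0.222-linux_x64/bin/java -jar /opt/microchip/mplabx/v5.45/mplab_platform/mplab_ipe/ipecmd.jar -TPPK4 /P18F47J53 -M -F"/path_to_myfile.hex" -W -OL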

For some reason, a warning is issued when calling the procedure SYSPROC.ADMIN_CMD ('EXPORT to ...')

I have the following problem:
I am using the following command:
EXPORT TO "D:\ExportFiles\ACTIVATE_DICT.csv" OF DEL MODIFIED BY TIMESTAMPFORMAT="YYYY/MM/DD HH:MM:SS" STRIPLZEROS MESSAGES "D:\ExportFiles\FMessage.txt" SELECT * FROM DB2INST4.ACTIVATE_DICT;
In the Command Editor of the Control Center, this command successfully exported data from the ACTIVATE_DICT table to the CSV file ACTIVATE_DICT.csv.
But for a number of reasons, I need to execute this command in IBM Data Studio or DataGrip, and there it cannot be executed in this form.
Therefore, I read the relevant manual page
and based on it wrote the following command:
CALL SYSPROC.ADMIN_CMD('EXPORT to /lotus/ExportFiles/ACTIVATE_DICT.csv OF DEL MODIFIED BY TIMESTAMPFORMAT="YYYY/MM/DD HH:MM:SS" STRIPLZEROS MESSAGES /lotus/ExportFiles/FMessage.txt SELECT * FROM DB2INST4.ACTIVATE_DICT');
Here is the message on the result of the command:
[2018-10-11 15:15:23] [ ][3107] There is at least one warning
message in the message file.. SQLCODE=3107, SQLSTATE= ,
DRIVER=4.23.42 [2018-10-11 15:15:23] 1 row retrieved starting from 1
in 75 ms (execution: 29 ms, fetching: 46 ms)
And there is no ACTIVATE_DICT.csv file and no FMessage.txt file in the /lotus/ExportFiles/ directory.
Question: how do I execute this command correctly? Maybe I'm doing something wrong?
sqlcode 3107 is a warning message:
SQL3107W At least one warning message was encountered during LOAD processing.
Explanation
You can load data into a database from a file, tape, or named pipe using the LOAD command. You can specify that any warnings or errors from the LOAD processing be printed to a message file. If no message file is specified, the warnings or errors are printed to standard out (unless the database manager instance is configured as a partitioned-database environment.)
It is telling you to read the message log in the message file you specified, in your case /lotus/ExportFiles/FMessage.txt.
Please look into that file to see what error is logged, and if you need help understanding what is logged, please post the content of the file.
This message is returned when at least one warning was received during processing. If a message file is being used, the warnings and errors will be printed there.
This warning does not affect processing.
User response
Review the message file warning.
EXPORT command using the ADMIN_CMD procedure
See the use of the 'MESSAGES ON SERVER' clause, and how to retrieve these messages using the result set returned by this routine in that case.
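A minimal sketch of what that could look like, reusing the statement from the question (MESSAGES ON SERVER replaces the MESSAGES /lotus/ExportFiles/FMessage.txt clause; check the ADMIN_CMD documentation for the exact columns of the returned result set):

CALL SYSPROC.ADMIN_CMD('EXPORT TO /lotus/ExportFiles/ACTIVATE_DICT.csv OF DEL
    MODIFIED BY TIMESTAMPFORMAT="YYYY/MM/DD HH:MM:SS" STRIPLZEROS
    MESSAGES ON SERVER
    SELECT * FROM DB2INST4.ACTIVATE_DICT');

-- The result set returned by ADMIN_CMD includes a MSG_RETRIEVAL column containing a
-- query (based on SYSPROC.ADMIN_GET_MSGS) that you can run to read the warnings
-- that were raised on the server.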

sqlcmd not showing RESTORE database stats

The following command in a cmd window
sqlcmd -S. -Usa -Ppass -dmaster -Q "RESTORE DATABASE [MYDATABASE] FROM DISK = 'D:\SQL Server\MYDATABASE.BAK' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10"
displays the following progress output:
10 percent processed.
20 percent processed.
30 percent processed.
40 percent processed.
50 percent processed.
60 percent processed.
70 percent processed.
80 percent processed.
90 percent processed.
100 percent processed.
Processed 32320 pages for database 'MYDATABASE', file 'MYDATABASE' on file 1.
Processed 7 pages for database 'MYDATABASE', file 'MYDATABASE_log' on file 1.
But it turns out that the progress is shown only after the entire restore has completed, making the stats useless during the process.
Any advice?
Here is the version of sqlcmd tool:
Microsoft (R) SQL Server Command Line Tool
Version 12.0.2000.8 NT
Copyright (c) 2014 Microsoft. All rights reserved.
Update Dec-2016:
Just including the comment from the Microsoft Connect link shared in the comments:
SQLCMD was rewritten in SQL 2012 to use ODBC. Here is a small
regression error that appears to have sneaked in.
It's the same effect reported when calling RAISERROR('Hello', 0, 1) WITH NOWAIT in a script.
I believe you can look in the SQL Server logs to see the progress as it happens.
You can query percent_complete in sys.dm_exec_requests.
Use start to open a separate window and issue SELECT percent_complete FROM sys.dm_exec_requests WHERE percent_complete > 0.
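A minimal sketch of that second window, reusing the connection parameters from the question (the extra session_id and command columns are just for readability; percent_complete keeps updating while the RESTORE runs):

start cmd /k sqlcmd -S. -Usa -Ppass -dmaster -Q "SELECT session_id, command, percent_complete FROM sys.dm_exec_requests WHERE percent_complete > 0"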

Dump file + PDB files - is it possible to create dump on one PC and then investigate it on another?

A certain problem is reproducible only on the customer side.
We cannot reproduce it locally despite all our attempts.
But I know that Task Manager in Windows 2008 R2 can create a dump file for a process. So my question is: is it possible to create a dump on the customer site for a certain process of our software and then investigate that dump file locally?
We have already made a new build of our software (we saved the build sandbox and the *.PDB files for all binaries). We installed it on site and are now waiting for the customer to report that the problem has happened again, so we can create a dump file of the hanging process and then try to investigate it.
My question has 2 parts:
Would such a method work at all?
If yes, how exactly do I do that?
At the moment I have some doubt whether that would work, because I have tried to create a proof test on my local Win 2008 R2 VM. I built everything with .PDB files, then ran our software in a mode where it makes a long pause in the middle (a simple call to Sleep(30000)) and clicked "Create Dump File" in Task Manager exactly during that pause. Then I tried to load that dump file in WinDbg and check what I could find there. The first thing that makes me pessimistic about this approach is a wrong stack trace. In particular, I cannot see a full stack trace in WinDbg. It shows me only the stack trace for the wow64.dll and ntdll.dll modules; I cannot see the stack trace for our code.
In particular I see only this:
wow64cpu!TurboDispatchJumpAddressEnd+0x6c0
wow64cpu!TurboDispatchJumpAddressEnd+0x56b
wow64!Wow64SystemServiceEx+0x1ce
wow64!Wow64LdrpInitialize+0x42a
ntdll!RtlUniform+0x6e6
ntdll!RtlCreateTagHeap+0xa7
ntdll!LdrInitializeThunk+0xe
But when I attach to the process with the debugger I see a full call stack, like this:
ntdll.dll! 7754fd910
[Frames below may be incorrect and/or missing, no symbols loaded for ntdll.dll]
ntdll.dll!7754fd9l0
KernelBase.dll! 76ae3bd50
KernelBase.dll! 76ae44a 5Q
ScrVm.DLL!Profiler::DoSleep(intmilliseconds=30000) Line 205
ScrVm.DLL!Script::VmToolKit::iMethod_Sleep(unsigned char & han
ScrVm.DLL!CComponent::Invoke(const _SU::basic_string<char,std
ScrVm.DLL!Script::VirtualMachine::do_Invoke(Script::VmCommand
ScrVm.DLL!Script::VirtualMachine::InnerLoop( Line 4471
ScrVm.DLL!Script::VirtualMachine::Execute(unsigned long hFunc=
ScrVm.DLL!ScriptProcessor::Run(const _SU::basic_string<char,st
ScrVm.DLL!ScriptProcessor::ProcessDocumentO Line 285 + 0x40 by
ScrVm.DLL!DocumentProcessor::Process(BinaryDOM::Document * pDo
ScrVm.DLL!CFuncExecScript::ExecuteO Line 219
ScrVm.DLL!SrvManager::ExecuteO Line 586 +0xldbytes
ScrVm.DLL!SrvManager::Run(tag_TReqHdr "pRequestBuf=0x00187
ScrVm.DLL!SrvManager::HandleRequest(tag_TReqHdr " pRequest
ScrVm.DLL!SrvProcessRequest(tag_TReqHdr * pRequesffiuf=0x0
ScrVm.DLL!ProcessRequest(char "pRequesffiuf=0x001873b6, char "
ScrVm.DLL!ProcessRequest_DLL(char " achMsg=0x001873b6, char "a
siteExec212.exe!00409b2d0
siteExec212.exe!0040a4cfO
As you can see, WinDbg seems to show only the last 7 frames of the stack, which are useless for me. Question: is it possible to recover the full stack trace from a dump file created by Task Manager on Windows 7/2008? Or at least get more frames in the stack trace, so I can see from what place in our code this call was made.
Note: the compiler is MS Visual Studio 2008, WinDbg 6.12 x64.
Since your process is 32-bit, you must use the 32-bit version of Task Manager to create the dump. Default installs have it at C:\Windows\SysWow64\taskmgr.exe.
Also, make sure to use the 32-bit version of WinDbg.
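Once you have the 32-bit dump open in the 32-bit WinDbg, a minimal sketch of getting the full stacks could look like this (the symbol paths are assumptions; point them at the Microsoft symbol server and at the folder holding your saved *.PDB files):

.sympath srv*C:\Symbols*https://msdl.microsoft.com/download/symbols;C:\BuildSandbox\PDB
.reload /f    $$ reload symbols using the new symbol path
~* k          $$ dump the call stack of every thread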

Return code of scheduled task prefixed with 0x8007000 in list view, registered as 0 in the event log

I am currently trying to setup monitoring of windows scheduled tasks in Zabbix. It seemed easy enough to just monitor the Microsoft-Windows-TaskScheduler/Operational event log filtered by 201 events and regexing on the return code, but when I started simulating errors to test the monitoring, nothing happened.
It turns out that all our windows 2012 servers always log "return code 0" in the event log, even though it actually, sort of, displays it correctly in the Task Scheduler list view. When I say "sort of", it's because the "Last Run Result" actually displays 0x80070001 if the exit code of the program run by the scheduled task is 1.
I have spent a lot of time tweaking the settings: the user account, "Run only when user is logged on" vs. "Run whether user is logged on or not", setting the path on the action, "Run with highest privileges", "Configure for" Vista/7/2012, etc. Nothing helped.
Finally I did some testing on my local machine, Windows 7, and a 2008R2 server, both of which just worked as expected.
The specific task I was testing ran a PowerShell script, using -Command so that it properly propagates the exit code, but to rule out any PS issues I also tested with a batch file containing "exit 1" and finally with a small C# console program that just returns whatever you supply on the command line.
PS, batch and console program all work fine on 7 and 2008, but they all fail in the same manner on 2012.
I've googled this to death, but keep coming up short. Apparently 0x80070005 and other similar error codes have some meaning, but that's not what happens in my case. In my case it seems that my exit code is bitwise OR'ed with 0x80070000.
I should note that in all the cases, even on 2012, the program started by the task actually executes and runs to the end; it's just the exit code that is handled weirdly.
Following is the output from the test runs:
From Powershell (my shell writes :( if $LASTEXITCODE > 0 ):
54 :( .\ExitCodeTest.exe 1
55 :( $LASTEXITCODE
1
56 :) .\ExitCodeTest.exe 10
57 :( $LASTEXITCODE
10
Windows Server 2008 R2 Standard:
Last Run Result (from list view): 0xA
Event 201 from event log Microsoft-Windows-TaskScheduler/Operational:
Task Scheduler successfully completed task "\ErrorTest" ,
instance "{b67a26cf-7fd8-461a-93d9-a5e48e72e558}" ,
action "D:\Tasks\ExitCodeTest.exe" with return code 10.
Windows Server 2012 Datacenter (notice that the return code in the event log is 0):
Last Run Result (from list view): 0x8007000A
Event 201 from event log Microsoft-Windows-TaskScheduler/Operational:
Task Scheduler successfully completed task "\error test" ,
instance "{2bde46b8-2858-4772-a7ec-d66b29d893a6}" ,
action "D:\Tasks\ExitCodeTest.exe" with return code 0.
Source for ExitCodeTest.exe:
static void Main( string[] args )
{
    int exitCode = 0;
    if ( args.Length > 0 )
    {
        exitCode = Convert.ToInt32( args[0] );
    }
    Environment.Exit( exitCode );
}
Please help, I am at my wits end.
Thanks,
John
(this is NOT an answer, but StackOverflow is refusing to let me add comments - when I click 'add comment', browser scrolls to top of page :-/)
You may be misinterpreting the Last Run Result column. According to Wikipedia (http://en.wikipedia.org/wiki/Windows_Task_Scheduler), LRR values of 0, 1 and 10 are common. Ignore the 0x8007 prefix - this just indicates a WIN32 error code transformed into an HRESULT (http://msdn.microsoft.com/en-us/library/gg567305.aspx).
Try running the test and forcing an exit code of something other than 1 or 10 to see if this influences LRR.
This does not explain, of course, why the action return code is 0 in 2012. Error code 10 is defined as 'the environment is incorrect'. Could it be that the 2012 server does not want to run a 32-bit executable?
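To make the transform mentioned above concrete, here is a small C# sketch of the standard HRESULT_FROM_WIN32 mapping (the helper name is hypothetical, just for illustration): a non-zero Win32 code n turns into 0x80070000 | n, so an exit code of 10 (0xA) shows up as 0x8007000A in the Last Run Result column.

// Hypothetical helper mirroring HRESULT_FROM_WIN32: severity bit + FACILITY_WIN32 (7) + code.
static uint Win32ToHResult(uint win32Code) =>
    win32Code == 0 ? 0u : 0x80070000u | (win32Code & 0xFFFFu);

// Win32ToHResult(1)  == 0x80070001
// Win32ToHResult(10) == 0x8007000A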
One other suggestion (and I'm a little out of my depth): according to http://msdn.microsoft.com/en-us/library/system.environment.exit(v=vs.110).aspx, "Exit requires the caller to have permission to call unmanaged code. The return statement does not." It might be worth re-compiling ExitCodeTest as follows:
static int Main(string[] args)
{
    int exitCode = 0;
    if ( args.Length > 0 )
    {
        exitCode = Convert.ToInt32( args[0] );
    }
    return exitCode;
}
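For reference, you could verify the recompiled version's exit code outside the Task Scheduler first, e.g. from a cmd window (assuming csc is on the PATH; %ERRORLEVEL% should echo back the value you pass):

csc ExitCodeTest.cs
ExitCodeTest.exe 10
echo %ERRORLEVEL%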
I'm seeing a similar issue on Server 2012 with a batch file that looks like it succeeds, shows a return value of 0 in event log, but a Last Run Result of 0x80070001.
I see MSFT has a hotfix available for Server 2012 which might address this issue:
http://support.microsoft.com/kb/3003689
I had this problem and fixed it this way.
Instead of calling a batch file, move the commands into the Actions section of the scheduled task.
I realize this may not work for you as some batch files are long.
I suspect it has to do with circumventing security on a scheduled task: if you can change the batch file, then you could get a scheduled task to run as its identity without Windows being the wiser.