Environment: Visual Studio 2017, Windows 10 ver. 1709. Compile mode: Release.
When I call:
accelerator_view acc_view = accelerator().default_view;
an exception is raised (see the figure below), but the code runs fine afterwards.
But when the executable process exits and I call:
::GetExitCodeProcess(hChildProcess, &retVal);
from a caller process, instead of returning 0, it returns a garbage value in retVal.
Digging into the source code, the problem seems to be in the snippet below (SchedulerBase.cpp, line 149):
// Auto-reset event that is not signalled initially
m_hThrottlingEvent = platform::__CreateAutoResetEvent();
// Use a trampoline for UMS
if (!RegisterWaitForSingleObject(&m_hThrottlingWait, m_hThrottlingEvent, SchedulerBase::ThrottlerTrampoline, this, INFINITE, WT_EXECUTEDEFAULT))
{
    throw scheduler_resource_allocation_error(HRESULT_FROM_WIN32(GetLastError()));
}
I think fixing it is beyond me, because the code above lives inside MFC. The same code works fine when compiled with Visual Studio 2013. Refer to the attached figure of the call stack, showing the exception being raised (and caught internally) when I call:
accelerator_view acc_view = accelerator().default_view;
The question: how do I clean up C++ AMP before exiting, so that GetExitCodeProcess() returns the correct result?
[Figure: debugger screenshot of the call stack at the point the exception is raised in SchedulerBase.cpp]
Solved! If you add
concurrency::amp_uninitialize();
after you are done using the AMP framework, then when the caller process calls
::GetExitCodeProcess(hChildProcess, &retVal);
the retVal parameter is filled in correctly.
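For illustration, a minimal sketch of the pattern (assuming all AMP objects are destroyed before the uninitialize call; the names here are illustrative):

#include <amp.h>

int main()
{
    {
        // Scope block ensures the accelerator_view (and any other AMP
        // objects) are destroyed before the runtime is torn down.
        concurrency::accelerator_view acc_view =
            concurrency::accelerator().default_view;
        // ... run AMP kernels here ...
    }
    // Tear down the C++ AMP runtime so the process exits cleanly and the
    // parent's GetExitCodeProcess() call sees the real exit code.
    concurrency::amp_uninitialize();
    return 0;
}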
I am trying to use the implementation of std::iostream provided by Boost.Asio on top of boost::asio::ip::tcp::socket. My code replicates, almost line for line, the example published in Boost.Asio's documentation:
#include <iostream>
#include <stdexcept>
#include <boost/asio.hpp>

int main()
{
    using boost::asio::ip::tcp;
    try
    {
        boost::asio::io_service io_service;
        tcp::endpoint endpoint(tcp::v4(), 8000);
        tcp::acceptor acceptor(io_service, endpoint);
        for (;;)
        {
            tcp::iostream stream; // <-- The exception is triggered on this line, on the second loop iteration.
            boost::system::error_code error_code;
            acceptor.accept(*stream.rdbuf(), error_code);
            std::cout << stream.rdbuf() << std::flush;
        }
    }
    catch (std::exception& exception)
    {
        std::cerr << exception.what() << std::endl;
    }
    return 0;
}
The only difference is the use I make of the resulting tcp::iostream: I forward everything I receive to the standard output.
When I compile this code with Visual Studio 2019 / toolset v142 and Boost from the NuGet package boost-vc142, I get an access violation only on the second iteration of the for loop, in the function
template <typename Service>
Service& service_registry::use_service(io_context& owner)
{
    execution_context::service::key key;
    init_key<Service>(key, 0);
    factory_type factory = &service_registry::create<Service, io_context>;
    return *static_cast<Service*>(do_use_service(key, factory, &owner));
} // <-- The debugger shows the exception was raised on this line
in asio/detail/impl/service_registry.hpp. So on the first iteration everything goes as planned: the connection is accepted, the data shows up on the standard output, and as soon as the stream is instantiated on the stack for the second time, the exception pops.
I don't have high confidence in the accuracy of this exception location as reported by the debugger. For some reason, the stack seems to be messed up and shows only one frame.
If the declaration of stream is moved out of the loop, no exception is raised anymore, but then I need to call stream.close() at the end of the loop, or nothing shows up on the standard output except the data from the first client's connection.
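For reference, the moved-out-of-the-loop workaround looks roughly like this (a sketch of the idea, not a fix for the underlying issue):

tcp::iostream stream; // reused across iterations
for (;;)
{
    boost::system::error_code error_code;
    acceptor.accept(*stream.rdbuf(), error_code);
    std::cout << stream.rdbuf() << std::flush;
    stream.close(); // without this, only the first connection's data shows up
}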
Basically, as soon as I try to instantiate more than one boost::asio::ip::tcp::iostream (not necessarily at the same time), the exception is raised.
I tried the exact same code under Linux (Arch Linux, latest version of g++, same version of Boost) and everything works perfectly.
I could work around this issue by not using iostreams, but my idea is to feed the data received on the TCP socket to a parser which only accepts implementations of std::iostream, so I would still need to wrap Asio's TCP socket in a homebrewed (and mediocre) implementation of std::iostream.
Does anybody have an idea on what's wrong with this setup, if I missed a crucial #define somewhere or anything?
Update:
Subsequent investigation shows that the access violation happens only when the executable is run from within Visual Studio (typically via the menu Debug -> Start Debugging).
The build method seems to have no effect (calling cl.exe directly, using MSBuild, using devenv.exe).
Moreover, if the executable is run from a command prompt and the debugger is attached only afterwards, no access violation happens.
At this point, the issue is most likely not linked to the code itself.
Okay, it was exceedingly painful to test this on Windows.
Of course I first tried on Linux (clang/gcc) and MinGW 8.1 on Windows.
Then I bit the bullet and jumped through the hoops to get MSVC on the command line with Boost packages¹.
I cheated by manually copying the .lib/.dll files for boost_{system,date_time,regex} into the working directory so the command line stayed "wieldy":
C:\work>C:\Users\sghee\Downloads\nuget.exe install boost_system-vc142
C:\work>C:\Users\sghee\Downloads\nuget.exe install boost_date_time-vc142
C:\work>C:\Users\sghee\Downloads\nuget.exe install boost_regex-vc142
(Be sure to get some coffee during those)
C:\work> cl /EHsc test.cpp /I .\boost.1.72.0.0\lib\native\include /link
Now I can run test.exe:
C:\work> test.exe
And it listens fine, accepts connections (sequentially, not simultaneously). If you connect a second client while the first is still connected, it will be queued and be accepted only after the first disconnects. That's fine, because it's what you expect with the synchronous accept and loop.
I used Ncat.exe (from Nmap) to connect:
C:\Program Files (x86)\Nmap>.\ncat.exe localhost 8000
Quirk: the buffering was fine with the MSVC cl.exe build (line-wise), as opposed to the MinGW behaviour, even though MinGW also uses ws2_32.dll. #trivia
I know this doesn't "help", but maybe you can compare notes and see what is different with your system.
Video Of Test
¹ (That's a tough job without VS, and also I, obviously, ran out of disk space, because 50 GiB for a VM can't be enough, right?)
I am using the vscode-mock-debug repository as the basis for my work.
The activation event is onDebug, although other values give the same result.
I implement provideDebugConfigurations in my DebugConfigurationProvider, and it's not getting called:
provideDebugConfigurations(folder: WorkspaceFolder | undefined, token?: CancellationToken): DebugConfiguration[] {
    return [...my data in here];
}
The resolveDebugConfiguration (the original from mock-debug) is called, and I can set a breakpoint in it. However, provideDebugConfigurations is never reached. This is with build 1.36 of vsce. Am I missing something obvious?
This is the answer from the VS Code team: https://github.com/microsoft/vscode/issues/78362
I have investigated this and it is expected behavior.
Namely, provideDebugConfigurations is only called when debug configurations are needed to generate a launch.json file. If you click on the configure command, provideDebugConfigurations will get nicely called.
However, if you do not have a launch.json and you simply press Start Debugging, VS Code will try to start debugging without using debug configurations, instead using one provided on the fly by the resolveDebugConfiguration call.
More about this can be found in our docs https://code.visualstudio.com/api/extension-guides/debugger-extension
Thus closing this as designed.
I'm using Debugger.Break() to track down bugs in my code. I want to insert a "halt" in the Visual Studio IDE so that I can inspect values.
Now I'm in a situation where the IDE just goes over Debug.Break() without stopping:
_lrNavMeshPath = nParentForLineRenderer.gameObject.AddComponent<LineRenderer>();
if (_lrNavMeshPath == null)
{
    System.Diagnostics.Debugger.Break();
}
What might be the reason why the IDE just steps over this line without stopping?
I have set a breakpoint on this line, so I can see that the line is reached.
Thank you.
I am playing with the matrix multiplication sample project, downloadable from the bottom of this page:
http://blogs.msdn.com/b/nativeconcurrency/archive/2011/11/02/matrix-multiplication-sample.aspx
When I change the values of M, N, W from 256 to 4096, an unhandled exception is thrown:
Unhandled exception at 0x7630C42D in MatrixMultiplication.exe: Microsoft C++ exception: Concurrency::accelerator_view_removed at memory location 0x001CE2F0.
The console output is:
Using device: NVIDIA GeForce GT 640M
MatrixDiemnsion C(4096x4096) = A(4096x4096) * B(4096x4096)
CPU(single core) exec completed.
AMP Simple
The next statement to be executed is leaving the function mxm_amp_simple.
I am using VS2013 Ultimate on Windows 7 Professional N.
Why does this occur, and how can I prevent it from happening?
EDIT: I have found that the greatest value of M, N, W for which AMP Simple does not lead to a breakpoint being hit is 2800 (M=2800, N=2800, W=2800).
AMP Tiled, on the other hand, sometimes hits a breakpoint and in other cases executes correctly for M, N, W equal to 4096.
The exception is accompanied by a system error message:
"Display driver stopped responding and has recovered. Display driver NVIDIA Windows Kernel Mode Driver, Version 331.65 stopped responding and has successfully recovered."
In case someone else needs this: this issue is most likely caused by Timeout Detection and Recovery (TDR). If a kernel runs for more than two seconds, Windows kills it and throws a Concurrency::accelerator_view_removed exception. The easiest way to check this is to wrap the code in a try/catch block, e.g.:
try {
    av_c.synchronize();
} catch (const Concurrency::accelerator_view_removed& e) {
    printf("%s\n", e.what());
}
Microsoft has a blog post with more information, including pointers to instructions on how to disable it.
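If e.what() alone is not enough, the exception also carries the underlying removal reason; a small sketch (same av_c as above, and assuming the TDR case maps to DXGI_ERROR_DEVICE_HUNG):

try {
    av_c.synchronize();
} catch (const Concurrency::accelerator_view_removed& e) {
    // get_view_removed_reason() returns the underlying HRESULT,
    // typically DXGI_ERROR_DEVICE_HUNG when TDR has reset the GPU.
    printf("%s (reason: 0x%08X)\n", e.what(),
           static_cast<unsigned>(e.get_view_removed_reason()));
}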
I'm writing an application that can be started either as a standard WinForms app or in unattended mode from the command line. The application was built using the VS 2005 standard WinForms template.
When the application is executed from the command line, I want it to output information that can be captured by the script that executes it. When I do this directly with Console.WriteLine(), the output does not appear, although it can be captured by piping to a file.
On the other hand, I can force the application to pop up a second console by doing a P/Invoke on AllocConsole() from kernel32. This is not what I want, though. I want the output to appear in the same window the application was called from.
This is the salient code that allows me to pop up a console from the command line:
<STAThread()> Public Shared Sub Main()
    If My.Application.CommandLineArgs.Count = 0 Then
        Dim frm As New ISECMMParamUtilForm()
        frm.ShowDialog()
    Else
        Try
            ConsoleControl.AllocConsole()
            Dim exMan As New UnattendedExecutionManager(ConvertArgs())
            IsInConsoleMode = True
            OutputMessage("Application started.")
            If Not exMan.SetSettings() Then
                OutputMessage("Execution failed.")
            End If
        Catch ex As Exception
            Console.WriteLine(ex.ToString())
        Finally
            ConsoleControl.FreeConsole()
        End Try
    End If
End Sub

Public Shared Sub OutputMessage(ByVal msg As String, Optional ByVal isError As Boolean = False)
    Trace.WriteLine(msg)
    If IsInConsoleMode Then
        Console.WriteLine(msg)
    End If
    If isError Then
        EventLog.WriteEntry("ISE CMM Param Util", msg, EventLogEntryType.Error)
    Else
        EventLog.WriteEntry("ISE CMM Param Util", msg, EventLogEntryType.Information)
    End If
End Sub
Raymond Chen recently posted (a month after the question was posted here on SO) a short article about this:
How do I write a program that can be run either as a console or a GUI application?
You can't, but you can try to fake it.
Each PE application contains a field in its header that specifies which subsystem it was designed to run under. You can say IMAGE_SUBSYSTEM_WINDOWS_GUI to mark yourself as a Windows GUI application, or you can say IMAGE_SUBSYSTEM_WINDOWS_CUI to say that you are a console application. If you are a GUI application, then the program will run without a console.
The subsystem determines how the kernel prepares the execution environment for the program. If the program is marked as running in the console subsystem, then the kernel will connect the program's console to the console of its parent, creating a new console if the parent doesn't have a console. (This is an incomplete description, but the details aren't relevant to the discussion.) On the other hand, if the program is marked as running as a GUI application, then the kernel will run the program without any console at all.
In that article he points to another by Junfeng Zhang that discusses how a couple of programs (Visual Studio and ildasm) implement this behavior:
How to make an application as both GUI and Console application?
In the Visual Studio case, there are actually two binaries: devenv.com and devenv.exe. devenv.com is a console app; devenv.exe is a GUI app. When you type devenv, because of the Win32 probing rule, devenv.com is executed. If there is no input, devenv.com launches devenv.exe and exits itself. If there are inputs, devenv.com handles them as a normal console app.
In the ildasm case, there is only one binary: ildasm.exe. It is first compiled as a GUI application. Later, editbin.exe is used to mark it as console subsystem. In its main method it determines whether it needs to run in console mode or GUI mode. If it needs to run in GUI mode, it relaunches itself as a GUI app.
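For reference, the editbin step just flips the subsystem field in the PE header, along the lines of:
editbin /SUBSYSTEM:CONSOLE ildasm.exe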
In the comments to Raymond Chen's article, laonianren has this to add to Junfeng Zhang's brief description of how Visual Studio works:
devenv.com is a general purpose console-mode stub application. When it runs it creates three pipes to redirect the console's stdin, stdout and stderr. It then finds its own name (usually devenv.com), replaces the ".com" with ".exe" and launches the new app (i.e. devenv.exe) using the read end of the stdin pipe and the write ends of the stdout and stderr pipes as the standard handles. Then it just sits and waits for devenv.exe to exit and copies data between the console and the pipes.
Thus even though devenv.exe is a gui app it can read and write the "parent" console using its standard handles.
And you could use devenv.com yourself for myapp.exe by renaming it to myapp.com. But you can't in practice, because it belongs to MS.
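For illustration, a minimal sketch of such a stub in C++, under stated assumptions: the GUI binary sits next to the stub with an .exe extension, command-line arguments are not forwarded, and, unlike the pipe-copying loop laonianren describes, the child simply inherits the console's standard handles:

// Build as a console-subsystem app and ship it as myapp.com next to myapp.exe.
#include <windows.h>
#include <string>

int wmain()
{
    // Locate ourselves and swap the ".com" suffix for ".exe".
    wchar_t self[MAX_PATH];
    GetModuleFileNameW(nullptr, self, MAX_PATH);
    std::wstring cmd(self);
    cmd.replace(cmd.size() - 4, 4, L".exe");
    cmd = L"\"" + cmd + L"\"";

    // Hand the child our console's standard handles.
    STARTUPINFOW si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdInput = GetStdHandle(STD_INPUT_HANDLE);
    si.hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
    si.hStdError = GetStdHandle(STD_ERROR_HANDLE);

    PROCESS_INFORMATION pi = {};
    if (!CreateProcessW(nullptr, &cmd[0], nullptr, nullptr,
                        TRUE /* inherit handles */, 0, nullptr, nullptr, &si, &pi))
        return 1;

    // Wait for the GUI process and forward its exit code to our caller.
    WaitForSingleObject(pi.hProcess, INFINITE);
    DWORD code = 0;
    GetExitCodeProcess(pi.hProcess, &code);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return static_cast<int>(code);
}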
Update 1:
As said in Michael Burr's answer, Raymond Chen recently posted a short article about this. I am happy to see that my guess was not totally wrong.
Update 0:
Disclaimer: This "answer" is mostly speculation. I post it only because enough time has passed to establish that not many people have the answer to what looks like a fundamental question.
I think that the "decision" about whether the application is GUI or console is made at compile time, not at runtime. So if you compile your application as a GUI application, even if you don't display the GUI, it's still a GUI application and doesn't have a console. If you choose to compile it as a console application, then at minimum you will have a console window flashing before the app moves to GUI "mode". And I don't know if that is possible in managed code.
The problem is fundamental, I think, because a console application has to take "control" of the calling console, and it has to do so before the code of the child application runs.
If you want to check whether your app was started from the command line in .NET, you can use Console.GetCursorPosition().
The reason this works is that when you start the app from the command line, the cursor has moved away from the initial position ((0, 0)) because you typed something in the terminal (the name of the app).
You can do this with an equality check (code in C#):
class Program
{
    public static void Main()
    {
        if (Console.GetCursorPosition() == (0, 0))
        {
            // something GUI
        }
        else
        {
            // something not GUI
        }
    }
}
Note: You must set the output type to Console Application, as other output types will make Console.GetCursorPosition() throw an exception.