I have tested two cases:
I use STEPCAFControl_Reader and then STEPControl_Reader to read my STEP file, but both approaches crash when I call STEPCAFControl_Reader::Transfer and STEPControl_Reader::TransferRoots, respectively.
When using STEPControl_Reader, I printed a log to my console, and it contains a message like this:
1 F:(BOUNDED_SURFACE,B_SPLINE_SURFACE,B_SPLINE_SURFACE_WITH_KNOTS,GEOMETRIC_REPRESENTATION_ITEM,RATIONAL_B_SPLINE_SURFACE,REPRESENTATION_ITEM,SURFACE): Count of Parameters is not 1 for representation_item
EDIT:
There is a null reference inside the TransferRoots() method.
const Handle(Transfer_TransientProcess) &proc = thesession->TransferReader()->TransientProcess();
if (proc->GetProgress().IsNull())
{
  // This check does not exist in the original source code; it shows that the progress handle is null
  std::cout << "GetProgress is null" << std::endl;
  return 0;
}
Message_ProgressSentry PS ( proc->GetProgress(), "Root", 0, nb, 1 );
My app and FreeCAD crash, but if I use CAD Assistant, which is the official OCC viewer, the file loads.
It looks like the comments already provide an answer to the question - or, more precisely, several answers:
STEPCAFControl_Reader::ReadFile() returns reading status, which should be checked before calling STEPCAFControl_Reader::Transfer().
Normally, it is good practice to put an OCCT algorithm into a try/catch block and check for OCCT exceptions (Standard_Failure).
Add OCC_CATCH_SIGNALS at the beginning of try blocks (required only on Linux) and call OSD::SetSignal(false) on working-thread creation to redirect abnormal cases (access violations, NULL dereferences and others) into C++ exceptions (OSD_Signal, which is a subclass of Standard_Failure). This may conflict with other signal handlers in a mixed environment, so also check the documentation of other frameworks used by the application.
If you catch failures like a NULL dereference while calling an OCCT algorithm with valid arguments, this is a bug in OCCT which is desirable to fix one way or another, even if the input STEP file contains syntax/logical errors triggering such issues. Report the issue on the OCCT Bugtracker with enough information to reproduce the bug, including sample files - it is not helpful to developers to just say that OCCT crashes somewhere. Consider also contributing to this open-source project by debugging the OCCT code and suggesting patches.
Check the STEP file reading log for possible errors in the file itself. Consider reporting an issue to the system producing the broken file, even if the main file content can be loaded by STEP readers.
It is common practice to use OSD::SetSignal() within OCCT-based applications (like CAD Assistant) to improve their robustness against non-fatal errors in application/OCCT code. Reporting an internal error message is more user-friendly than silently crashing.
It should be noted, however, that OSD::SetSignal() guarantees neither that the application will not crash nor that it can keep working properly after catching such a failure: due to the asynchronous nature of some signals, memory may already be corrupted by the time the C++ exception is raised, leading to all kinds of undesired behavior. For that reason, it is better not to ignore such exceptions, even if the application appears to work fine with them.
OSD::SetSignal(false); // should be called once at application startup

STEPCAFControl_Reader aReader;
try
{
  OCC_CATCH_SIGNALS // necessary for redirecting signals on Linux
  if (aReader.ReadFile (theFilePath) != IFSelect_RetDone) { return false; }
  if (!aReader.Transfer (myXdeDoc)) { return false; }
}
catch (Standard_Failure const& theFailure)
{
  std::cerr << "STEP import failed: " << theFailure.GetMessageString() << "\n";
  return false;
}
return true;
I am trying to use the implementation of std::iostream provided by boost::asio on top of boost::asio::ip::tcp::socket. My code replicates almost line by line the example published in Boost.Asio's documentation:
#include <iostream>
#include <stdexcept>
#include <boost/asio.hpp>

int main()
{
    using boost::asio::ip::tcp;
    try
    {
        boost::asio::io_service io_service;
        tcp::endpoint endpoint(tcp::v4(), 8000);
        tcp::acceptor acceptor(io_service, endpoint);
        for (;;)
        {
            tcp::iostream stream; // <-- The exception is triggered on this line, on the second loop iteration.
            boost::system::error_code error_code;
            acceptor.accept(*stream.rdbuf(), error_code);
            std::cout << stream.rdbuf() << std::flush;
        }
    }
    catch (std::exception& exception)
    {
        std::cerr << exception.what() << std::endl;
    }
    return 0;
}
The only difference is the use I make of the resulting tcp::iostream: I forward everything I receive to the standard output.
When I compile this code with Visual Studio 2019 / toolset v142 and Boost from the NuGet package boost-vc142, I get an access violation exception, but only in the second iteration of the for loop, in the function
template <typename Service>
Service& service_registry::use_service(io_context& owner)
{
    execution_context::service::key key;
    init_key<Service>(key, 0);
    factory_type factory = &service_registry::create<Service, io_context>;
    return *static_cast<Service*>(do_use_service(key, factory, &owner));
} // <-- The debugger shows the exception was raised on this line
in asio/detail/impl/service_registry.hpp. So on the first iteration everything goes as planned: the connection is accepted and the data shows up on the standard output; then, as soon as the stream is instantiated on the stack for the second time, the exception pops up.
I don't have high confidence in the accuracy of this exception location as reported by the debugger. For some reason, the stack seems to be messed up and shows only one frame.
If the declaration of stream is moved out of the loop, no exception is raised anymore, but then I need to call stream.close() at the end of the loop, or nothing shows up on the standard output except the data from the first client's connection.
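For reference, a minimal sketch of that workaround (same setup as the full example above; it only changes where stream lives and does not address the underlying crash):

tcp::iostream stream; // declared once, outside the loop
for (;;)
{
    boost::system::error_code error_code;
    acceptor.accept(*stream.rdbuf(), error_code);
    std::cout << stream.rdbuf() << std::flush;
    stream.close(); // without this, only the first connection's data is printed
}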
Basically, as soon as I try to instantiate more than one boost::asio::tcp::iostream (not necessarily at the same time), the exception is raised.
I tried the exact same code under Linux (Arch Linux, latest version of g++, same version of Boost) and everything works perfectly.
I could work around this issue by not using iostreams, but my idea is to feed the data received on the TCP socket to a parser which only accepts implementations of std::iostream, so I would still need to wrap asio's TCP socket in a home-brewed (and mediocre) implementation of std::iostream.
Does anybody have an idea of what's wrong with this setup, whether I missed a crucial #define somewhere, or anything else?
Update:
Subsequent investigation shows that the only situation in which the access violation happens is when the executable is run from within Visual Studio (typically from the menu Debug -> Start Debugging).
The build process seems to have no effect (calling cl.exe directly, using MSBuild, using devenv.exe).
Moreover, if the executable is run from a command prompt and the debugger is only attached afterwards, no access violation happens.
At this point, the issue is most likely not linked to the code itself.
Okay, it was exceedingly painful to test this on Windows.
Of course I first tried on Linux (clang/gcc) and with MinGW 8.1 on Windows.
Then I bit the bullet and jumped through the hoops to get MSVC on the command line with Boost packages¹.
I cheated by manually copying the .lib/.dll for boost_{system,date_time,regex} into the working directory so the command line stayed "wieldy":
C:\work>C:\Users\sghee\Downloads\nuget.exe install boost_system-vc142
C:\work>C:\Users\sghee\Downloads\nuget.exe install boost_date_time-vc142
C:\work>C:\Users\sghee\Downloads\nuget.exe install boost_regex-vc142
(Be sure to get some coffee during those)
C:\work\> cl /EHsc test.cpp /I .\boost.1.72.0.0\lib\native\include /link
Now I can run test.exe
C:\work\> test.exe
And it listens fine and accepts connections (sequentially, not simultaneously). If you connect a second client while the first is still connected, it will be queued and accepted only after the first disconnects. That's fine, because it's what you expect with the synchronous accept and loop.
I used Ncat.exe (from Nmap) to connect:
C:\Program Files (x86)\Nmap>.\ncat.exe localhost 8000
Quirk: the buffering was fine with the MSVC cl.exe build (line-wise), as opposed to the MinGW behaviour, even though MinGW also uses ws2_32.dll. #trivia
I know this doesn't "help", but maybe you can compare notes and see what is different with your system.
Video Of Test
¹ (that's a tough job without VS, and also I - obviously - ran out of space, because 50 GiB for a VM can't be enough, right?)
I'm trying to use Casablanca to consume a REST API.
I've been following the Microsoft tutorial, however I'm getting a crash and I cannot figure it out.
I'm using Visual Studio 2017 with C++11.
I've coded a function GetRequest() that does work when used in a new, empty project, but it crashes when I try to use it in my project (a very big project with millions of lines of code).
It crashes in the constructor of http_client, in the file xmemory0 at line 118.
const uintptr_t _Ptr_container = _Ptr_user[-1];
This is a link to the call stack: https://i.imgur.com/lBm0Hv7.png
void RestManager::GetRequest()
{
    auto fileStream = std::make_shared<ostream>();

    // Open stream to output file.
    pplx::task<void> requestTask = fstream::open_ostream(U("results.html")).then([=](ostream outFile)
    {
        *fileStream = outFile;

        // Create http_client to send the request.
        http_client client(U("XXX/XXX.svc/"));

        // Build request URI and start the request.
        uri_builder builder(U("/IsLive"));
        builder.append_query(U("q"), U("cpprestsdk github"));
        return client.request(methods::GET, builder.to_string());
    })

    // Handle response headers arriving.
    .then([=](http_response response)
    {
        printf("Received response status code:%u\n", response.status_code());

        // Write response body into the file.
        return response.body().read_to_end(fileStream->streambuf());
    })

    // Close the file stream.
    .then([=](size_t)
    {
        return fileStream->close();
    });

    // Wait for all the outstanding I/O to complete and handle any exceptions
    try
    {
        requestTask.wait();
    }
    catch (const std::exception &e)
    {
        printf("Error exception:%s\n", e.what());
    }
}
EDIT: I just want to add that the http_client constructor is the issue. It always crashes inside it no matter what I pass as a parameter.
The weird thing is that it doesn't crash when I just make a main() that calls this function.
I guess it must be due to some memory issue, however I have no idea how I could debug that.
Would anyone have an idea about it?
Thanks and have a great day!
I've experienced a similar issue on Ubuntu. It works in an empty project, but crashes randomly when put into an existing large project, complaining about memory corruption.
It turns out that the existing project loaded a proprietary library, which uses cpprestsdk (Casablanca) internally. Even though cpprestsdk is statically linked, its symbols are still exported as weak symbols. So either my code crashes, or the proprietary library crashes.
Ideally, my project could be divided into several libraries loaded with RTLD_LOCAL to avoid symbol clashes. But the proprietary library in my project only accepts RTLD_GLOBAL, otherwise it crashes... So the import order and flags become important:
dlopen("my-lib-uses-cpprest", RTLD_LOCAL); //To avoid polluting the global
dlopen("proprietary-lib-with-built-in-cpprest", RTLD_GLOBAL); //In my case, this lib must be global
dlopen("another-lib-uses-cpprest", RTLD_DEEPBIND); //To avoid being affected by global
"it will probably never concern anyone."
I agree with that.
I guess this issue was very specific, and it will probably never concern anyone, but I'm still going to post an update on everything I found out about it.
On this project we are using a custom allocator, and if I'm not wrong, it's not possible to pass our custom allocator to this lib, which results in many random crashes.
A good option to fix it would be to use the static version of this lib; however, since we are using a lot of dynamic libs, this option wasn't possible for us.
If you are in my situation, I would advise using libcurl and RapidJSON; they are a bit harder to use, but you can achieve the same goal.
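For illustration, here is a minimal sketch of a plain GET request with libcurl; the URL is a placeholder and the response body ends up in a std::string, which could then be handed to RapidJSON if the service returns JSON:

#include <curl/curl.h>
#include <iostream>
#include <string>

// Append the bytes libcurl hands us to the std::string passed via userp.
static size_t writeToString(char* data, size_t size, size_t nmemb, void* userp)
{
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    std::string body;
    if (curl)
    {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/IsLive"); // placeholder URL
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeToString);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            std::cerr << "curl error: " << curl_easy_strerror(res) << "\n";
        else
            std::cout << body << "\n"; // pass this string to RapidJSON for parsing
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}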
I have a pyclips / clips program for which I wrote some unit tests using pytest.
Each test case involves an initial clips.Clear() followed by the execution of real CLIPS COOL code via clips.Load(rule_file.clp). Running each test individually works fine.
Yet, when telling pytest to run all tests, some fail with ClipsError: S03: environment could not be cleared. In fact, it depends on the order of the tests in the .py file. There seem to be test cases that cause the subsequent test case to throw the exception.
Maybe some CLIPS code is still "in use", so that the clearing fails?
I read here that (clear)
Clears CLIPS. Removes all constructs and all associated data structures (such as facts and instances) from the CLIPS environment. A clear may be performed safely at any time, however, certain constructs will not allow themselves to be deleted while they are in use.
Could this be the case here? What is causing the (clear) command to fail?
EDIT:
I was able to narrow down the problem. It occurs under the following circumstances:
test_case_A comes right before test_case_B.
In test_case_A there is a test such as
(test (eq (type ?f_bio_puts) clips_FUNCTION))
but f_bio_puts has been set to
(slot f_bio_puts (default [nil]))
So testing the type of a slot variable, which has been set to [nil] initially, seems to cause the (clear) command to fail. Any ideas?
EDIT 2
I think I know what is causing the problem. It is the test line. I adapted my code to make it run in the CLIPS Dialog Window, and I got this error when loading via (batch ...):
[INSFUN2] No such instance nil in function type.
[DRIVE1] This error occurred in the join network
Problem resided in associated join
Of pattern #1 in rule part_1
I guess it is a bug in pyclips that this error is masked.
Change the EnvClear function in the CLIPS source file construct.c, adding the following lines of code to reset the error flags:
globle void EnvClear(
  void *theEnv)
{
   struct callFunctionItem *theFunction;

   /*==============================*/
   /* Clear error flags if issued  */
   /* from an embedded controller. */
   /*==============================*/

   if ((EvaluationData(theEnv)->CurrentEvaluationDepth == 0) &&
       (! CommandLineData(theEnv)->EvaluatingTopLevelCommand) &&
       (EvaluationData(theEnv)->CurrentExpression == NULL))
     {
      SetEvaluationError(theEnv,FALSE);
      SetHaltExecution(theEnv,FALSE);
     }

   /* ... the rest of the original EnvClear body remains unchanged ... */
In my C++ QuickFIX application, I am recording all the MarketDataIncrementalRefresh messages I receive into a file. This is done using the following code:
void Application::onMessage(const FIX44::MarketDataIncrementalRefresh& message, const FIX::SessionID&)
{
    ofstream myfile("tapedol.txt", std::ios::app);
    myfile << message << endl << endl;
}
This part's working just fine. The problem occurs when I try to load the message later on.
FIX::Message msg;
std::string aux;
ifstream myfile("tapedol.txt");
getline(myfile, aux);
msg = aux;
msg.getField(55);
The program crashes every time it executes the last line. I suspect the problem is the assignment to msg, but I'm not sure. If it is, what is the correct way to do such an assignment? If not, how can I process the data within tapedol.txt so that a MarketDataIncrementalRefresh message is generated for each string in the file?
Your question is not complete enough to provide a full answer, but I do see one red flag:
msg.getField(55);
The Symbol field is not a top-level field of MarketDataIncrementalRefresh (it's inside the NoMDEntries repeating group), so this line will fail. I think it would raise a FieldNotFound exception.
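As a rough sketch of how the file could be read back and the Symbol pulled out of the repeating group - assuming QuickFIX's generated FIX44 classes, a FIX44.xml data dictionary on disk (needed so the group gets parsed), and one complete raw message per non-empty line of tapedol.txt:

#include <quickfix/Message.h>
#include <quickfix/DataDictionary.h>
#include <quickfix/fix44/MarketDataIncrementalRefresh.h>
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    FIX::DataDictionary dictionary("FIX44.xml"); // without a dictionary, repeating groups are not parsed
    std::ifstream myfile("tapedol.txt");
    std::string aux;
    while (std::getline(myfile, aux))
    {
        if (aux.empty()) continue; // the writer put blank lines between messages

        FIX44::MarketDataIncrementalRefresh md;
        md.setString(aux, false, &dictionary); // parse the raw string into the typed message

        FIX44::MarketDataIncrementalRefresh::NoMDEntries group;
        md.getGroup(1, group);   // first entry of the repeating group
        FIX::Symbol symbol;
        group.get(symbol);       // Symbol (55) lives inside the group, not at message level
        std::cout << symbol.getValue() << std::endl;
    }
    return 0;
}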
My C++ is rusty, but you should be able to catch an exception or something that should tell you exactly what line is erroring out. Barring that, you need to open up a debugger. Just saying "it crashed" means you quit looking too soon.
That sinking feeling when you realize you have no idea what's going on...
I've been using this code in my network code for almost two years without problems.
if (!CFReadStreamOpen(myReadStream)) {
    CFStreamError myErr = CFReadStreamGetError(myReadStream);
    if (myErr.error != 0) {
        // An error has occurred.
        if (myErr.domain == kCFStreamErrorDomainPOSIX) {
            // Interpret myErr.error as a UNIX errno.
            strerror(myErr.error);
        } else if (myErr.domain == kCFStreamErrorDomainMacOSStatus) {
            OSStatus macError = (OSStatus)myErr.error;
        }
        // Check other domains.
    }
}
I believe it was originally based on the code samples given here:
http://developer.apple.com/library/mac/#documentation/Networking/Conceptual/CFNetwork/CFStreamTasks/CFStreamTasks.html
I recently noticed, however, that some connections are failing because CFReadStreamOpen returns false but the error code is 0. After staring at the above link some more, I noticed the CFRunLoopRun() statement and added it:
if (!CFReadStreamOpen(myReadStream)) {
    CFStreamError myErr = CFReadStreamGetError(myReadStream);
    if (myErr.error != 0) {
        // An error has occurred.
        if (myErr.domain == kCFStreamErrorDomainPOSIX) {
            // Interpret myErr.error as a UNIX errno.
            strerror(myErr.error);
        } else if (myErr.domain == kCFStreamErrorDomainMacOSStatus) {
            OSStatus macError = (OSStatus)myErr.error;
        }
        // Check other domains.
    } else {
        // start the run loop
        CFRunLoopRun();
    }
}
This fixed the connection problem. However, my app started showing random problems - interface sometimes not responsive, or not drawing, text fields not editable, that kind of stuff.
I've read up on CFReadStreamOpen and on run loops (specifically, that the main run loop runs by itself and I shouldn't run a run loop unless I'm setting it up myself in a secondary thread - which I'm not, as far as I know). But I'm still confused about what's actually happening above. Specifically:
1) Why does CFReadStreamOpen sometimes return FALSE and error code 0? What does that actually mean?
2) What does the CFRunLoopRun call actually do in the above code? Why does the sample code make that call - if this code is running in the main thread I shouldn't have to run the run loop?
I guess I'll answer my own question, as much as I can.
1) In my code, at least, CFReadStreamOpen always seems to return false. The documentation is a bit confusing, but I read it to mean that the stream isn't open yet, but will be opened later by the run loop.
2) Most of the calls I was making were happening in the main thread, where the run loop was already running, so calling CFRunLoopRun was unnecessary. The call that was giving me problems was happening inside a block, which apparently spawned a new thread. This new thread didn't start a new run loop - so the stream would never open unless I explicitly ran the new thread's run loop.
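For reference, a rough sketch of the asynchronous pattern the CFStream documentation describes: schedule the stream on the current run loop with a client callback instead of blocking in CFRunLoopRun. The callback name and its body are illustrative only.

// Illustrative callback: invoked on the scheduled run loop when stream events fire.
static void myStreamCallback(CFReadStreamRef stream, CFStreamEventType eventType, void *info) {
    if (eventType == kCFStreamEventOpenCompleted) {
        // The open that CFReadStreamOpen only started has now completed.
    } else if (eventType == kCFStreamEventErrorOccurred) {
        CFStreamError myErr = CFReadStreamGetError(stream);
        // Handle the error as in the code above.
    }
}

// Register the callback and schedule the stream before opening it.
CFStreamClientContext context = {0, NULL, NULL, NULL, NULL};
CFReadStreamSetClient(myReadStream,
                      kCFStreamEventOpenCompleted | kCFStreamEventHasBytesAvailable |
                      kCFStreamEventErrorOccurred | kCFStreamEventEndEncountered,
                      myStreamCallback, &context);
CFReadStreamScheduleWithRunLoop(myReadStream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
CFReadStreamOpen(myReadStream); // starts the open; completion and errors arrive via the callback

This way the run loop of whatever thread the stream is scheduled on drives the open, without calling CFRunLoopRun yourself on the main thread.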
I'm still not 100% clear on what happens if I call CFRunLoopRun() on a thread with an already running run loop, but it's obviously not good.
I ended up ditching my home-brewed networking code and switching to ASIHTTPRequest, which I was considering doing anyway.