I got the following error message while running a simulation in OMNeT++:
Implicit chunk serialization is disabled to prevent unpredictable performance degradation (you may consider changing the Chunk::enableImplicitChunkSerialization flag or passing the PF_ALLOW_SERIALIZATION flag to peek) -- in module (inet::visualizer::PhysicalLinkCanvasVisualizer) WirelessB.visualizer.physicalLinkVisualizer (id=23), at t=0.022885250545s, event #11
How should I handle it?
There are several types of error that T-SQL's TRY/CATCH won't catch at all (see below). Some of these are understandable (though I'd prefer for low-severity errors to also be caught, as they would be in application code), and some are unavoidable because errors that terminate the session don't even allow the CATCH to execute.
But why the limitation on compilation errors in the same scope? Is it impossible to handle or was it a deliberate decision? If the latter, what was the reasoning for it? Why can we catch compilation errors once they reach the parent scope, but not in the current scope?
Also, I wasn't around for the "good old days", but apparently it's possible to catch these using the old methods of error handling, so why not at least just handle it the same way under the hood?
The documented conditions that TRY...CATCH does not trap at all:
Warnings or informational messages that have a severity of 10 or lower.
Errors that have a severity of 20 or higher that stop the SQL Server Database Engine task processing for the session. If an error occurs that has severity of 20 or higher and the database connection is not disrupted, TRY...CATCH will handle the error.
Attentions, such as client-interrupt requests or broken client connections.
When the session is ended by a system administrator by using the KILL statement.
The following types of errors are not handled by a CATCH block when they occur at the same level of execution as the TRY...CATCH construct:
Compile errors, such as syntax errors, that prevent a batch from running.
Errors that occur during statement-level recompilation, such as object name resolution errors that occur after compilation because of deferred name resolution.
Object name resolution errors
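For illustration, here is a minimal sketch of the behaviour in question (the statements are made up): a syntax error in the same scope kills compilation of the whole batch, so the CATCH never runs, while the same error inside a child scope (dynamic SQL) surfaces to the parent scope and is caught.

-- Same scope: the batch fails to compile, so neither TRY nor CATCH executes.
BEGIN TRY
    SELECTT 1;   -- misspelled keyword: compile error in this scope
END TRY
BEGIN CATCH
    SELECT ERROR_MESSAGE();   -- never reached
END CATCH;

-- Child scope: dynamic SQL compiles in its own scope, so the error
-- reaches the parent scope and the CATCH block handles it.
BEGIN TRY
    EXEC ('SELECTT 1');
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;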
AFAIK, when errors are reported from within one annotation processor, the others will not be run.
Is there any way to force the processors to run independently?
According to the documentation, only uncaught errors or exceptions stop further processing:
If a processor throws an uncaught exception, the tool may cease other active annotation processors. [...] Since annotation processors are run in a cooperative environment, a processor should throw an uncaught exception only in situations where no error recovery or reporting is feasible.
If a processor reports an error, the errorRaised() method should return true in the next round but not stop processing:
Printing a message with an error kind will raise an error.
If a processor raises an error, the current round will run to completion and the subsequent round will indicate an error was raised.
It says "the tool may cease other active annotation processors" (emphasis added). So if you need to keep going on even in case of uncaught exceptions, it should be possible with the right processing tool. I don't know if there are any tools that support this though. At least the processing tool should be the direction to look for if you need this feature.
For example, if I implement some simple object caching, which method is faster?
1. return isset($cache[$cls]) ? $cache[$cls] : $cache[$cls] = new $cls;
2. return @$cache[$cls] ?: $cache[$cls] = new $cls;
I read somewhere that @ takes significant time to execute (and I wonder why), especially when warnings/notices are actually being issued and suppressed. isset() on the other hand means an extra hash lookup. So which is better, and why?
I do want to keep E_NOTICE on globally, both on dev and production servers.
I wouldn't worry about which method is FASTER. That is a micro-optimization. I would worry more about which is more readable code and better coding practice.
I would certainly prefer your first option over the second, as your intent is much clearer. Also, it's best to avoid edge-case problems by always explicitly testing variables to make sure you are getting what you expect to get. For example, what if the class stored in $cache[$cls] is not of type $cls?
Personally, if I typically would not expect the index on $cache to be unset, then I would also put error handling in there rather than using ternary operations. If I could reasonably expect that that index would be unset on a regular basis, then I would make class $cls behave as a singleton and have your code be something like
return $cls::get_instance();
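A rough sketch of what such a get_instance() might look like (the class name is just an example):

class Widget
{
    private static $instance;

    public static function get_instance()
    {
        // lazily create the single shared instance on first use
        if (self::$instance === null) {
            self::$instance = new self();
        }
        return self::$instance;
    }
}

$widget = Widget::get_instance();   // or, with the class name in a variable: $cls::get_instance()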
The isset() approach is better. It is code that explicitly states the index may be undefined. Suppressing the error is sloppy coding.
According to the article 10 Performance Tips to Speed Up PHP, warnings take additional execution time, and the article also claims the @ operator is "expensive."
Cleaning up warnings and errors beforehand can also keep you from using @ error suppression, which is expensive.
Additionally, @ does not suppress errors with respect to custom error handlers:
http://www.php.net/manual/en/language.operators.errorcontrol.php
If you have set a custom error handler function with set_error_handler() then it will still get called, but this custom error handler can (and should) call error_reporting() which will return 0 when the call that triggered the error was preceded by an @.
If the track_errors feature is enabled, any error message generated by the expression will be saved in the variable $php_errormsg. This variable will be overwritten on each error, so check early if you want to use it.
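For illustration, a handler along these lines (the names are made up) follows the manual's advice: check the current error_reporting() level before doing anything, so that calls prefixed with @ stay quiet.

// Custom error handler that honours the @ operator. The bitmask check works
// whether @ drops error_reporting() to 0 (older PHP, as in the quote above)
// or to a reduced level (PHP 8 and later).
set_error_handler(function ($errno, $errstr, $errfile, $errline) {
    if (!(error_reporting() & $errno)) {
        return true;   // the failing statement was suppressed with @: stay silent
    }
    error_log("[$errno] $errstr in $errfile on line $errline");
    return true;       // true = don't also run PHP's internal handler
});

$value = @$undefinedArray['missing'];  // the handler still runs, but returns early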
@ temporarily changes the error_reporting level, which is why it is said to take time.
If you expect a certain value, the first thing to do to validate it is to check that it is defined. If you have notices, it's probably because you're missing something. Using isset() is, in my opinion, a good practice.
I ran timing tests for both cases, using hash keys of various lengths, also using various hit/miss ratios for the hash table, plus with and without E_NOTICE.
The results were: with error_reporting(E_ALL) the isset() variant was faster than the @ variant by some 20-30%. Platform used: command-line PHP 5.4.7 on OS X 10.8.
However, with error_reporting(E_ALL & ~E_NOTICE) the difference was within 1-2% for short hash keys, and up to 10% for longer ones (16 chars).
Note that the first variant performs two hash table lookups, whereas the variant with @ does only one.
Thus, @ is inferior in all scenarios, and I wonder if there are any plans to optimize it.
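For reference, a benchmark along these lines (not the original test script; the class list and iteration count are arbitrary) is easy to reproduce; clearing or pre-filling $cache changes the hit/miss ratio:

error_reporting(E_ALL);

$classes = ['ArrayObject', 'stdClass', 'SplStack', 'SplQueue'];
$iterations = 1000000;

// variant 1: isset() with an explicit ternary
$cache = [];
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $cls = $classes[$i % 4];
    $obj = isset($cache[$cls]) ? $cache[$cls] : ($cache[$cls] = new $cls);
}
$issetTime = microtime(true) - $start;

// variant 2: @ suppression with the short ternary
$cache = [];
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $cls = $classes[$i % 4];
    $obj = @$cache[$cls] ?: ($cache[$cls] = new $cls);
}
$suppressTime = microtime(true) - $start;

printf("isset(): %.3fs  @: %.3fs\n", $issetTime, $suppressTime);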
I think you have your priorities a little mixed up here.
First of all, if you want a real-world test of which is faster, load test them. As stated, though, suppressing will probably be slower.
The problem here is that if you have performance issues with regular code, you should upgrade your hardware or optimize the overall logic of your code, rather than prevent proper execution and error checking.
Suppressing errors to steal the tiniest fraction of a speed gain won't do you any favours in the long run, especially if the error keeps happening time and time again and causes your app to run more slowly than it would if the error were caught and fixed.
Do the risks of bypassing Perl's safe signals, for example as shown in the second timeout example in the DBI documentation, concern only the code that does the bypassing?
The code in that example works hard to localize the change to just that section of code, or any code called from it.
There is no 100% guarantee that no code outside the bypassing code will be affected, because signals are no longer safe. In the example, the call being timed out is a DBI->connect. For most DBDs this will be implemented mostly in C; unless the C code can handle being aborted and tried again, you might find that some data structures internal to the DBD, or the libraries it uses, are left in an inconsistent state.
The chances of the example code going wrong are probably incredibly small. My personal anecdote on the issue is that I had used traditional Perl signal handling for years before safe signals were introduced, and for a long time I never had a problem. I hadn't even been very cautious about what I did in my signal handlers. Then we managed to hit a data set that actually did trigger memory corruption in about 1 out of every 100 runs. Just modifying the signal handlers to use better practices, similar to those in the example, eliminated our issues.
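For context, the DBI documentation's approach looks roughly like the sketch below (the DSN, credentials and timeout are placeholders): the unsafe handler is installed with POSIX sigaction only around the call being timed out, and the previous disposition is restored immediately afterwards.

use DBI;
use POSIX qw(:signal_h);

# install an "unsafe" (immediate) handler for SIGALRM, bypassing safe signals
my $mask      = POSIX::SigSet->new(SIGALRM);
my $action    = POSIX::SigAction->new(sub { die "connect timeout\n" }, $mask);
my $oldaction = POSIX::SigAction->new();
sigaction(SIGALRM, $action, $oldaction);

my $dbh;
eval {
    alarm(10);                                        # start the timeout
    $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'password');
    alarm(0);                                         # cancel it if connect returned in time
};
alarm(0);                                             # also cancel if the eval died
sigaction(SIGALRM, $oldaction);                       # restore the old handler right away

die $@ if $@ && $@ ne "connect timeout\n";            # re-throw anything unexpected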
What does that even mean? By using unsafe signals, you can corrupt Perl's internals and Perl variables. It can also cause problems if a non-reentrant C library call is interrupted.
This can lead to SEGFAULTs and other problems, and those may only manifest themselves outside the block where the timeout is in effect.
I have a MATLAB script that every now and then produces the message:
Caught std::exception Exception message is:
bad allocation
Unexpected error status flag encountered. Resetting to proper state.
What could be causing this?
This is a bug in MATLAB, in which some part of MATLAB is not handling a std::bad_alloc exception correctly (std::bad_alloc is an out-of-memory exception thrown from the C++ runtime library).
The "Unexpected error status flag encountered. Resetting to proper state." is an internal diagnostic - you shouldn't see it unless MATLAB has gotten into a bad state, which in this case is happening because it's encountering the bad_alloc someplace where it was not expected. Recent versions of MATLAB have fixed most of these issues, except in extremely low-memory situations (like, there's less than 1 kilobyte of free memory left). What version are you using?
My best guess is that your script is trying to allocate some memory and failing. The occurrence of such an error will depend on the availability of memory on your computer at the time allocation is attempted. The available memory will vary according to what is going on at the time in other programs, the operating system, even the state of your MATLAB session.
For a more accurate diagnosis, you'll have to tell us more, maybe even post your script.
It was happening to me, and it turned out I had too many files open. fclose('all') set everything back to normal, and I made sure that all my fopen calls were followed by a matching fclose.
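If it helps, a pattern like this (the file name is just an example) keeps handles from leaking even when the script errors out partway through:

% one-off recovery if handles have already leaked:
fclose('all');

% going forward, pair every fopen with a close that runs even on error
fid = fopen('results.txt', 'r');
if fid == -1
    error('Could not open file');
end
closer = onCleanup(@() fclose(fid));   % closes the file when closer is cleared or goes out of scope
data = fread(fid);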