PJSUA: verbosity of debug information on the application console - sip

If I create an application using PJSUA, then after calling pjsua_create() a huge amount of debugging information is dumped onto my application's console.
This is convenient during development, but afterwards it interferes with using the program.
How can I set the verbosity level of this debug output in advance, so that it is already in effect when pjsua_create() is called?
Thank you in advance for any informative answers.

You can do this by calling pj_log_set_level(int level) (link), for example before pj_init(). You can also define the PJ_LOG_MAX_LEVEL constant in config_site.h at compile time, with a value suited to your needs.
This also may be interesting for you (link).
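For illustration, a minimal sketch of the runtime approach (assuming the standard pjsua-lib header; error handling omitted):

#include <pjsua-lib/pjsua.h>

int main(void)
{
    /* Lower the verbosity before any PJSUA logging can happen:
       0 silences everything, higher values show progressively more. */
    pj_log_set_level(1);

    pjsua_create();   /* now starts up quietly */

    /* ... configure, run and shut down the application ... */
    pjsua_destroy();
    return 0;
}

The compile-time variant is a one-liner in config_site.h, e.g. #define PJ_LOG_MAX_LEVEL 1, which strips the more verbose log statements out of the build entirely.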

Related

Can I use search or regex to catch debugger output with a breakpoint in Xcode?

I'm getting this super obnoxious output printed out thousands of times, slowing down my program:
2021-11-08 12:37:57.183588-0800 (myScheme)[27459:701276] [boringssl] boringssl_metrics_log_metric_block_invoke(144) Failed to log metrics
On Stack Overflow, I'm only finding that it is related to an Xcode bug and that there isn't much that can be done about it. However, I'd like to experiment with alternative pieces of code that might perform whatever task is being run without triggering this stupid issue.
Is there a way I can set a breakpoint for this so I can study the stack trace which leads to it?
Thanks
A few observations:
This output is a simple logging statement (such as generated by OSLog or Logger) in a framework. You generally cannot control what logging these frameworks do, so don’t worry about it.
In terms of slowing down your app, the logging system is exceptionally efficient, depending upon the level of the individual logging message, so I wager that debugging messages are not slowing down your app (observably). There are different logging levels, i.e., “debug”, “info”, “notice”, “error”, and “fault”, each with its own performance characteristics. See WWDC 2020 Exploring logging in Swift. In particular, “debug” logging messages are highly performant.
The real issue is how to filter your debugger console so your salient logging statements don’t get lost in all the cruft from logging statements from all the frameworks you might be using (such as “boringssl”).
I find that filtering the console is extremely useful. Unfortunately, the Xcode console filter does not offer regex or “negative” filters (i.e., you cannot say “show me everything except boringssl messages”). But you can tell it to show only your particular logging statements.
In particular, you might consider using Logger (or OSLog for older targets), instead of print, for your logging statements:
import os
private let logger = Logger(subsystem: Bundle.main.bundleIdentifier!, category: "AppDelegate") // use whatever category you want; I personally use a separate logger for each compilation unit and make the “category” the name of that unit
And later, in lieu of print, use Logger:
logger.debug(...)
(In older targets, you can use OSLog; the syntax is slightly different, but the idea is the same.)
Anyway, when I look at my unfiltered log, I see all the cruft, and it is hard to spot the messages I care about. But I can focus on my salient events, in this case by filtering on “[AppDele” to show only log messages from my AppDelegate.
This logging pattern also allows you to watch iOS logging messages emitted by a device on one’s macOS Console, which is critical when diagnosing issues that only manifest themselves when not attached to a debugger. This is illustrated in that WWDC video.
In short, do not worry about framework logging messages, but just have a workflow that lets you easily focus on the console messages that matter.

Wait for eglSwapBuffers posting to complete

I need to know when posting completes after eglSwapBuffers. I was thinking eglWaitNative might halt execution until posting is complete, but I find the spec unclear on this point, chapter 3.8:
https://www.khronos.org/registry/egl/specs/eglspec.1.5.pdf
It would appear eglWaitNative is used to synchronize with "native" rendering APIs such as Xlib and GDI. However, as far as I know, eglSwapBuffers might be running on top of Wayland, which can't render shit. Still, it would seem reasonable to believe the EGL_CORE_NATIVE_ENGINE engine always points to the "marking engine" doing the buffer swaps...
From 3.10.3 I read:
Subsequent client API commands can be issued immediately, but will not
be executed until posting is completed.
I suppose I could do something like this, but I'd rather use "pure" EGL if possible:
eglSwapBuffers(...);
glClear(...); // "Dummy" command.
My project is using OpenGL Safety Critical profile 1.0.1, EGL 1.3 and some vendor specific extensions. Sync objects are not available.
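For reference, a fleshed-out version of the idea above might look like this (a sketch only: display and surface are assumed to be the current EGL display and window surface, and glFinish() is what actually makes the CPU wait, since merely issuing the dummy command does not block):

eglSwapBuffers(display, surface);
glClear(GL_COLOR_BUFFER_BIT); /* trivial "dummy" command issued after the swap */
glFinish(); /* per 3.10.3 the command cannot execute until posting completes,
               so when glFinish returns, the post must be done */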

What does sys_vm86old syscall do?

My question is quite simple.
I encountered this sys_vm86old syscall (when reverse engineering) and I am trying to understand what it does.
I found two sources that could give me something, but I'm still not sure I fully understand. These sources are
The Source Code and this page, which gives me this paragraph (it's more readable directly at the link):
config GRKERNSEC_VM86
bool "Restrict VM86 mode"
depends on X86_32
help
If you say Y here, only processes with CAP_SYS_RAWIO will be able to
make use of a special execution mode on 32bit x86 processors called
Virtual 8086 (VM86) mode. XFree86 may need vm86 mode for certain
video cards and will still work with this option enabled. The purpose
of the option is to prevent exploitation of emulation errors in
virtualization of vm86 mode like the one discovered in VMWare in 2009.
Nearly all users should be able to enable this option.
From what I understood, it ensures that the calling process has CAP_SYS_RAWIO enabled. But this doesn't help me a lot...
Can anybody tell me?
Thank you
The syscall is used to execute code in VM86 mode. This mode allows you to run old "real mode" 16-bit code (such as is present in some BIOSes) inside a protected-mode OS.
See for example the Wikipedia article on it: https://en.wikipedia.org/wiki/Virtual_8086_mode
The setting you found means you need CAP_SYS_RAWIO to invoke the syscall.
I think X11 in particular uses it to call BIOS routines for switching the video mode. There are two syscalls; the one with the old suffix offers fewer operations but is retained for binary (ABI) compatibility.
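For illustration, a minimal sketch of invoking the legacy entry point directly (32-bit x86 Linux only; glibc provides no wrapper for the old variant, so syscall(2) is used, and actually running real-mode code would additionally require mapping suitable 16-bit code below 1 MiB):

#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <asm/vm86.h>   /* struct vm86_struct */

int main(void)
{
    struct vm86_struct vm;
    memset(&vm, 0, sizeof vm);
    /* vm.regs would be set up here to point at real-mode code. */

    long ret = syscall(SYS_vm86old, &vm);  /* __NR_vm86old, i386 syscall 113 */
    if (ret == -1)
        perror("vm86old");   /* EPERM here if CAP_SYS_RAWIO is being enforced */
    return 0;
}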

Catalyst Development Server - not showing routes and errors

To set the scene: I'm an experienced developer and have coded in many languages over the years, including a good bit of Perl back in the late 90s and early 00s. Since then I haven't touched Perl, but now have a client who wants some changes made to an existing open source project built using Perl 5 and Catalyst. I've quickly worked through the Catalyst tutorials, read a few books online, and am now starting to feel my way.
I have the existing project up and running on a clean Debian Wheezy VM and am testing the code and my changes using the Catalyst development server.
While working through the tutorials and writing a few test apps, the development server would always output a lot of useful information when run, such as the configured routes. But under this project, when I run the server, I don't get much output. I don't even get messages sent to $c->log->debug().
I run the server with the following command:
perl ./script/asnn_panel_server.pl -d -r
Which outputs the following:
HTTP::Server::PSGI: Accepting connections at http://0:3000/
I can access the server and the application is running fine.
In a test controller action I can try the following lines:
$c->log->debug("A test debug message");
print "A test print message\n";
The debug log message does not appear in my development server output, but the print line does. So I know the call to $c->log->debug() is not blowing up, because the next line is executing, but where is it going?
So essentially I feel I 'could' get more useful output from the Catalyst Development server, but am not.
I have googled but can't find anything of relevance. Sorry if I'm going in the wrong direction here; I do know what I'm doing in general, but have a lot to pick up in a short amount of time!
I suspect my issues might be specific to the open source project I'm working on, but there's not a lot of help to be had from that direction. Could anyone give me any pointers as to what to investigate?
UPDATE: I now realise that the application is using log4perl, which is configured to send $c->log->debug() to syslog. I still don't know why the Catalyst development server isn't providing much output.
For anyone coming upon this later: if you want to see the developer debug stream (stuff about the routes, classes, and models your application is using, etc.), you need to be in debug mode, which you can enable easily by setting CATALYST_DEBUG=1 in your environment (I often start my app like "CATALYST_DEBUG=1 perl -Ilib script/myapp_server.pl").
There is sadly a difference between debug as a log level and debugging mode. The way Catalyst works is that if you are in debugging mode (via CATALYST_DEBUG=1, or any of the other documented ways it gets turned on), all of this debugging stream gets sent to the log, most of it logged at the debug level (again, debug as a log level is distinct from developer debugging mode). So you need both debugging mode enabled and a logger that listens at the debug level.
If you use the default Catalyst log, it is at debug level by default, so setting CATALYST_DEBUG=1 is all you need. If you use a different logger, be sure to enable the debug log level in your development setup if you wish to see those developer stream logs.
Messages sent to $c->log->debug() are generally disabled in production environments. If it doesn't seem to matter whether you start your scripts with or without the -d switch, then I'd suggest something downstream in the startup sequence is setting the environment variable CATALYST_DEBUG to 0 or undef unilaterally.
That said, you should still be able to see the output of $c->log->info() or $c->log->warn() calls. Whether you do should help you determine if the problem is log4perl-related or Catalyst-related.
Hopefully that will get you on your way.

Tips for finding things in your program that are broken that you don't know about?

I was working on something for a client today when I found a way to break some functionality in our program.
(The code is really legacy code, it's been in development for about 10 years and I've only been working here for about a year.)
It didn't cause an error or make the program crash, but if a user duplicated the behavior, I'm pretty sure they'd be holding up their "WTF?" flag.
In our program we have named fields (textboxes) and static text (labels) that can be linked with the textboxes. When the textbox is not filled in the label(s) that were linked to them disappear.
The functionality I broke was this: if you change the name of a textbox that already has one or more labels linked to it, and save the file without re-associating those labels, the formerly-associated labels appear even when the textbox is blank.
Now my thinking on the matter is that a simple observer pattern could have solved this problem in the first place, but then I didn't write the code.
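For illustration, here's a bare-bones observer sketch in C (names are hypothetical; the point is that renaming the textbox notifies every linked label, so nothing silently goes stale):

#include <string.h>

#define MAX_OBSERVERS 8

typedef struct Label {
    char linked_name[64];   /* name of the textbox this label follows */
} Label;

static void label_on_rename(Label *l, const char *new_name)
{
    /* Re-associate automatically instead of keeping a stale link. */
    strncpy(l->linked_name, new_name, sizeof l->linked_name - 1);
    l->linked_name[sizeof l->linked_name - 1] = '\0';
}

typedef struct TextBox {
    char name[64];
    Label *observers[MAX_OBSERVERS];
    int n_observers;
} TextBox;

static void textbox_rename(TextBox *t, const char *new_name)
{
    strncpy(t->name, new_name, sizeof t->name - 1);
    t->name[sizeof t->name - 1] = '\0';
    for (int i = 0; i < t->n_observers; i++)
        label_on_rename(t->observers[i], new_name);  /* notify every subscriber */
}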
I was thinking that if I could dig up more situations like this with the guys in my shop, that maybe I could talk them into considering unit testing, decoupling, applying patterns where they are called for and the like.
So for this reason I was wondering if anyone had any tips for finding broken (but not error-causing) functionality in any sort of app (web-based, desktop, etc.).
For an app to fail usability, it has to have a defined set of expected behaviors.
"Is this textbox SUPPOSED to do nothing when the enter key is pressed?" Maybe it is, maybe it isn't. I've seen apps where a tester/reviewer reports something that they ASSUME should work another way, when in actuality the client specifically asked that they DON'T want the form submitted on a return key press, but only a submit button click.
So basically you have to define correct behavior before you can determine incorrect behavior.
Hire some testers.
If it has an interface, then one of my favorite unconventional tests is putting 5-10 year old children in front of it. You'd be surprised what they can come up with (especially the younger ones). While this may sound like a joke, it isn't; it really works, because children don't have the mindset of only going down the expected paths.
And yeah, children are the experts in "breaking things" xP.
Code inspections, i.e. reading the source code: if you had taken the time to read and inspect the source, looking for "smells" or even just for code whose behavior you don't immediately understand and agree with, you might have been holding up your "WTF?" flag too.
Test, test, test.
Do unexpected things. Start doing one task and switch to another to see if anything goes haywire. Use the back button when you're not supposed to. Open it in two windows. Let it time out.
Test in all browsers, especially IE.
You can find database connections/sessions that aren't released by:
working out the minimum number of connections you need to do something
setting resource limits to that minimum number
ensuring one "run" of the scenario uses exactly that number (and releases it afterwards)
then run it again a few times... do you run out of connections?
I used to work in a company where programmers regularly used to forget to de-allocate db connections. The standard answer was to reduce the resource to a minimum to see if there's a leak - and to try to work out where it is by restarting the system and running different scenarios repeatedly.
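As a toy illustration of that technique in C (the pool and its cap here are hypothetical; a real version would wrap your actual connection API):

#include <stdio.h>
#include <stdlib.h>

#define MAX_CONNECTIONS 2   /* the known minimum the scenario needs */

static int in_use = 0;

static int conn_open(void)
{
    if (in_use >= MAX_CONNECTIONS) {
        fprintf(stderr, "pool exhausted: probable connection leak\n");
        exit(EXIT_FAILURE);   /* fail loudly instead of limping along */
    }
    return ++in_use;
}

static void conn_close(void)
{
    --in_use;
}

/* Run the scenario repeatedly; a leak shows up as exhaustion. */
int main(void)
{
    for (int run = 0; run < 100; run++) {
        int c = conn_open();
        /* ... do the work that needs the connection ... */
        conn_close();   /* remove this line and the third run aborts */
        (void)c;
    }
    puts("no leak detected");
    return 0;
}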
The first hour of code review, with the first reviewer, will do the most to find quality problems. But here's the thing: You don't need to convince people of quality problems. You need to convince them of the value of fixing bugs, and of rewriting only when the present quality absolutely justifies it.
I've dealt with some seriously bad code in my time. But you can't just rewrite. You need a spec before you can even tell if the rewrite is an improvement.
Sometimes, you have to infer the spec from the code and then check it against some human somewhere. But by the time you've done that, you understand the code as written and are now better prepared to repair than to rewrite -- most of the time.
Repair proceeds by a process of small behavior-preserving modifications that render the spec more clear in the code. Then, when you find something that looks wrong, you don't just change it. You ask around until you find the person responsible for that decision, and you get them to show you where in the spec it says that behavior X is correct. (This conversation can take many forms.) If you're lucky, they'll tell you that behavior X is in fact incorrect, and then you've earned your pay.
assert()
Also unit testing with coverage analysis.
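For example, in C (the invariant here is made up, but the pattern is the point):

#include <assert.h>
#include <stddef.h>

/* Turn silent breakage into a loud failure in debug builds. */
static int average(const int *values, size_t count)
{
    assert(values != NULL);   /* fires if a caller misuses the function */
    assert(count > 0);        /* a division by zero would otherwise hide here */
    int sum = 0;
    for (size_t i = 0; i < count; i++)
        sum += values[i];
    return sum / (int)count;
}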
This is particular to the Visual Studio IDE, although it probably also applies to others:
During testing, always at some point run in the debugger with "Break when an exception is thrown" turned on.
This can often help expose exceptions which are incorrectly being silently caught and which represent bugs, but otherwise may not be evident.
Code reviews should always also include reviews of the unit test code.
The problem is that with ad-hoc testing it's impossible to know how much, or how well, a developer has tested their code. So you're at the mercy of each developer's definition of the word "done".
If you review the unit test code at the same time as the production code, you should have a good idea of whether the code is really complete, where "complete" includes "tested". Not just "Hey, I'll throw it over the wall to the testers!"