Using Eclipse to study a large code base and trace method calls

I am starting to work with a very large code base (a large webapp) and want to be able to see the method calls in order to understand how requests are served. So, I want to use Eclipse to trace the method calls for any request that comes in. I'm not sure, but I think the best way to do this is through a remote debugger, so I have already created a remote debug configuration. Now, my questions are the following:
How can I configure the debugger so that, as soon as a request comes in, it pauses and lets me step through the code?
Is there a better way of tracing method calls (for the purpose of studying the code), or is using a debugger really the best method?
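For reference, this is roughly how the remote side of such a setup looks, a minimal sketch assuming a Tomcat-style JAVA_OPTS variable and port 5005 (adjust for your container):
# start the webapp's JVM with the JDWP agent listening on port 5005 (assumed),
# without suspending startup
$ export JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
Eclipse then attaches via Run > Debug Configurations > Remote Java Application, and a breakpoint on a likely entry point (for example the servlet's doGet/doPost) will suspend the thread on the next request.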

Use Python's trace module, and exclude framework or other uninteresting directories. Since -t writes the trace to stdout, redirect it to a file (your_script.py stands in for your real entry point):
$ python -m trace -t --ignore-dir=/home/unifield-server your_script.py > /tmp/trace.log
You can then tail the trace file as it runs.
$ tail -f trace.log
You can also periodically clear it, or append a marker for later analysis:
$ echo > trace.log
$ echo 'about to press save button' >> trace.log

Customise debugger stop/restart button behavior

I'm using VS Code to write and debug a C++ program binding to libfuse.
Unfortunately, if you kill a libfuse process with SIGKILL, you then have to use sudo umount -f <mountpoint> before you can start the program again. It's a minor nuisance if I have to do this every time I want to stop or restart debugging, especially as I shouldn't have to authenticate to do such a regular task (the sudo is somehow necessary despite the mount occurring as my user).
While I think this is mainly FUSE's fault (it should gracefully recover from a process being ungracefully killed and unmount automatically instead of leaving the directory saying Transport endpoint is not connected), I also think there should be a way to customise VS Code (or any IDE) to run some clean-up when you want to stop debugging.
I've found that entering -exec signal SIGTERM in the Debug Console will gracefully unmount the directory, stop the process, and tell VS Code it's no longer debugging (the status bar changes back from orange to blue). But I can't seem to find a way to automate this. I've tried using a .gdbinit file, with some inspiration from this question:
handle SIGTERM nostop
# This doesn't work as hook-quit isn't run when quitting via MI mode, which VS Code uses...
define hook-quit
signal SIGTERM
end
But as noted in the linked question, GDB ignores quit hooks when in MI mode, and VS Code uses MI mode.
The ideal solution for me would be if I could put something in a .vscode configuration file telling it to send -exec signal SIGTERM when I click the stop or restart buttons (and then wait for whatever notification it's getting that debugging has stopped, before restarting if applicable) but I imagine there probably isn't an option for that.
Even if the buttons can't be customised, I'd be happy with the ability to have a keybinding that would just send -exec signal SIGTERM to the Debug Console without me having to open said console and enter the command, though the Command Palette doesn't show anything useful here (nothing that looks like it will send a specified Debug Console command), so I don't expect there's a bindable command for that either.
Does anyone have any suggestions? Or would these belong as feature requests over on the VS Code GitHub? Any way to get GDB to respect its quit hook in MI mode, or to get FUSE to gracefully handle its process being killed, would be appreciated too.
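One avenue that might help, assuming a reasonably recent VS Code: launch.json supports a postDebugTask key, which runs a task from tasks.json after a debug session ends. Pointing that at a cleanup task which lazily unmounts the FUSE directory would at least avoid the sudo umount -f step (the mount point path here is an assumption):
# cleanup task body: unmount the FUSE filesystem without sudo;
# -u unmounts, -z detaches lazily even if the endpoint is stale
$ fusermount -uz ./mnt
Note that this runs after the process is already gone rather than sending SIGTERM first, so it works around the stale mount rather than the graceful-shutdown problem itself.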

Devel::Cover not collecting any data after startup with mod_perl2

I want to check Selenium's coverage of my web app, which runs on mod_perl2 on CentOS 6.5.
So I installed Devel::Cover, put use Devel::Cover; in my httpd.conf's <Perl> section, and restarted Apache. It immediately writes some coverage data from my custom ErrorLogging.pm module, but then if I hit any of the app's pages via a browser, nothing further happens.
I also tried changing this in httpd.conf:
StartServers 1
MinSpareServers 1
MaxSpareServers 1
...just to make sure it'd be collecting all data from the same process. However, after restarting Apache and trying again, the result was the same.
UPDATE: I also tried launching httpd with -D ONE_PROCESS as mentioned in this thread, but the result was more or less the same, except that I had to Ctrl+C the service when done testing, because it takes over the terminal, and at that point it segfaulted. But the coverage database in the end was virtually identical.
The docs don't mention anything different that I can see. How can I get Devel::Cover to record coverage data for code execution that happens in response to actual browser requests via mod_perl2?
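For reference, my understanding is that Devel::Cover normally writes its database from an END block at process exit, so the check I'm using is to stop Apache gracefully and only then report on the database (cover_db is the default location; adjust if you passed -db to Devel::Cover):
# stop apache so each child exits cleanly and, in theory, flushes its coverage
$ apachectl -k graceful-stop
# build an HTML report from the collected database with Devel::Cover's cover tool
$ cover -report html cover_db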

Waiting for mongodb to prealloc

Does MongoDB create a file I can poll for in order to determine when prealloc is done? Right now I have a script to run rs.initiate(..config..), but I need to wait to trigger it until mongod is up and running.
Since tailing the log file with tail -f | grep .. | xargs .. is a bit of a flaky hack, I wondered whether there is any other way to determine that mongod is done with prealloc?
We have the same problem for testing replica sets with the PHP driver. Here we use the mongo shell's ReplSetTest() functionality to get around this. You can see here how that works:
https://github.com/mongodb/mongo-php-driver/blob/master/tests/utils/myconfig.js#L9
However, I am not sure how well this works for non-test environments, as the options you can give are rather limited (for example, you can't set a data dir properly, as things are hardcoded). All the code for this is in JavaScript at https://github.com/mongodb/mongo/blob/master/src/mongo/shell/replsettest.js; this should give you an overview of how it works and allow you to rewrite it in your preferred language.
Try using inotify (I am not sure exactly); for example, if you need to determine that the file is closed after writing:
[maverick#mutabor ~]$ pyinotify -e IN_CLOSE_WRITE /tmp/testfile
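Alternatively, rather than watching files, you can poll the server itself until it answers commands; a rough sketch, assuming the default port and a hypothetical rs-init.js holding your rs.initiate() call:
# keep pinging mongod until it responds, then run the replica set init script
$ until mongo --quiet --eval 'db.adminCommand("ping").ok' 2>/dev/null | grep -q 1; do sleep 1; done
$ mongo rs-init.js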

How to improve workflow for creating a Lua-based Wireshark dissector

I've finally created a dissector for my UDP protocol in Lua for Wireshark, but the workflow is just horrendous. It consists of editing my custom Lua file in my editor, then double-clicking my example capture file to launch Wireshark to see the changes. If there is an error, Wireshark informs me via dialogs or a red line in the Tree analysis sub-pane. I then re-edit my custom Lua file, close that Wireshark instance, and double-click the example capture file again. It's like compiling a C file and only seeing one compiler error at a time.
Is there a better (faster) way of looking at my changes, without having to restart Wireshark all the time?
At the time, I was using Wireshark 1.2.9 for Windows with Lua enabled.
The best way to automate this is from the command line. Yep, use tshark instead of loading the GUI.
If your Lua script is called "proto.lua" and it defines a protocol called "MyProto" that uses port 8888, you can test your dissector using:
tshark -X lua_script:proto.lua -O MyProto -V -f "port 8888"
The -V option makes tshark print all the info of all protocols.
The -O option restricts the -V output, showing full details only for the listed (comma-separated) protocols.
The -f option applies a capture filter that drops packets which don't match the rule; in this case, any packet that is not from the right port.
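If you would rather test against your saved example capture than sniff live traffic, tshark can read the file back with -r (the capture file name is an assumption):
$ tshark -r example.pcap -X lua_script:proto.lua -O MyProto -V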
The latest Wireshark release comes with a primitive console for running Lua scripts. It can be found under Tools -> Lua -> Evaluate. From there, you should be able to reload your dissector by running dofile(). You'll also have to remove the previous version of your dissector first.
Here's an example for a TCP-based dissector.
-- look up the dissector table for TCP ports
local tcp_dissector_table = DissectorTable.get("tcp.port")
-- 'pattern' is the port (or range) your dissector was registered on
tcp_dissector_table:remove(pattern, yourdissector)
-- drop the old object so the reloaded file can replace it
yourdissector = nil
-- re-run the edited script, which re-creates and re-registers the dissector
dofile("c:/path/to/dissector.lua")
I recommend placing this code in a function inside your file.
Now there's a problem with this answer: If your script created a Proto object, it seems that you can't create it again with the same id. The constructor for the Proto class calls the C function proto_register_protocol() (see epan/wslua/wslua_proto.c). I can't find any lua function that will unregister the protocol. In fact, I can't even find a C function to unregister it.
You might be able to write a trivial wrapper function that Wireshark loads, and have it just load the real file from disk (e.g. via dofile()). This could probably "trick" Wireshark into always reloading your Lua code until you're more comfortable with it and can remove this hack.
I've been facing the same problem for quite a while, so I have decided to create a tool that would help me streamline that "horrendous workflow". The tool in question is Wirebait. It is designed to let you run your Lua dissectors as you write them without Wireshark.
It is very quick and easy to install and use. All you have to do is load the Wirebait module and add a five-line snippet at the top of your dissector script. Then, if you use an IDE such as ZeroBrane Studio, Wirebait lets you literally write and debug your code on the fly, with no need for Wireshark. If you don't even have a pcap file, you can use a hexadecimal string representing the data you want to dissect.

Where can I find application runtime errors using Nginx, Starman, Plack and Catalyst?

I have managed to successfully serve my Catalyst app on my development machine using Plack + Starman, using a daemon script I based on one I found in Dave Rolsky's Silki distribution.
I then set up nginx to reverse proxy to my Starman server, and aliased the static directory for nginx to serve. So far, so good. However, I am at a loss as to where my application STDERR is supposed to be logging to. It isn't reaching nginx (I suppose that makes sense) but I can't find much documentation as to where Starman may be logging it - if anywhere. I did have a look at Plack's Middleware modules but only saw options for access logs.
Can someone help me?
It's going nowhere. Catalyst::Log is sending data to STDERR, and the init script is sending STDERR to /dev/null.
You have a few basic choices:
Replace Catalyst::Log with something like Catalyst::Log::Log4perl or simply a subclass of Catalyst::Log with overridden _send_to_log -- either one will allow you to send the logging output somewhere other than STDERR.
Write some code that runs at the PSGI level to manage a logfile and reopen STDERR to it. I tried this; it wasn't very pleasant. Logfiles are harder than they look.
Use FastCGI instead, and you'll have an error stream that sends the log output back to the webserver. You can still use Plack via Plack::Handler::FCGI / Plack::Handler::FCGI::Engine (I'd recommend the latter, because the FCGI::Engine code is much newer and nicer than FCGI.pm).
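For the FastCGI option, a minimal sketch of the Plack side (the socket path and app name are assumptions; nginx would point fastcgi_pass at the same socket):
# serve the PSGI app over FastCGI on a unix socket
$ plackup -s FCGI --listen /tmp/myapp.sock myapp.psgi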
I realise it is a long time since the question was asked, but I've just hit the same problem...
You actually have one more option than Hobbs mentioned.
It isn't quite the "init script" that is sending STDERR to /dev/null, it is Starman.
If you look at the source code for Starman, you would discover that, if you give it the --background flag, it uses MooseX::Daemonize::Core.
And once you know that, its documentation will tell you that it deliberately closes STDERR, STDOUT and STDIN and redirects them to /dev/null, and that it takes the environment variables MX_DAEMON_STDERR and MX_DAEMON_STDOUT as names of files to use instead.
So if you start your catalyst server with MX_DAEMON_STDERR set to a file name, STDERR will go to that file.
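For example (the log path and app name are assumptions):
# MooseX::Daemonize::Core will reopen STDERR on this file instead of /dev/null
$ MX_DAEMON_STDERR=/var/log/myapp.stderr.log starman --daemonize myapp.psgi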
Starman now has an --error-log command-line option which allows you to redirect error messages to a file.
From starman's documentation:
--error-log
Specify the pathname of a file where the error log should be written. This enables you to still have access to the errors when using --daemonize.
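For example (the paths are assumptions):
$ starman --daemonize --error-log /var/log/myapp.error.log myapp.psgi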