What is the purpose of having two run configurations such as 'Debug' and 'Release'? - eclipse

I have just started using Eclipse CDT and would like to know why there are two run configurations such as Debug and Release.
Could I use this to improve my workflow in any way? The manuals for CDT just mention that there are two default configurations, but never say why.
Thank you for your answers.

This is not specific to Eclipse; you'll encounter these two configurations in virtually all software and web development.
You use Debug to test your application. It will typically generate debug symbols so you can step through your code, and it will avoid most optimizations. The purpose is to make diagnosing issues easier.
The Release configuration is the one you use to publish or deploy your application. It can apply optimizations.
The two configurations are also useful if you want to connect to different servers, name files differently, or even execute different code paths depending on whether you are testing locally or the end user is running what you built.
Another example is logging and tracing: in Debug mode you may want to print to the console or write to a file/log, but in Release you'll want to avoid that if it reduces performance.


How to disable generating nunit-agent log file when running tests with nunit3-console

I have a question regarding nunit3-console. When running tests through it, I see log files such as internal-trace and nunit-agent text files being generated.
I was able to disable the generation of the internal-trace file with the --trace=off option, but for each run that specifies a test .dll I still see a nunit-agentNumber.txt file generated.
My question is: is this a problem, specifically for CI/CD? Is there an option to disable it, or at least clean it up after each run?
Version 3.15 of the engine introduced a new internal feature that allows code to change the level of debugging dynamically. (It is not yet exposed to users, but is intended to be eventually.)
As a side effect, it looks as if empty log files are being created. For the moment, the only way to avoid this is to go back to the previous release.
A fix was created in the development code for version 4.0, but has not been ported back to the version 3 code. A bug report might help with that. :-)

How to (automatedly) test different ways to close an application with SWTBot (with Tycho)

Probably there is a simple answer to this, but I'm finding it hard to figure it out myself: How can I test different ways to exit an application with SWTBot?
In my application based on the Eclipse RCP 3.x, you can close the application in three different ways:
By clicking menu items with the mouse (File > Exit)
By keyboard navigation of the menu (Alt+F, X)
By keyboard shortcut (Ctrl+Q)
I'm currently writing unit tests for this behaviour with the help of SWTBot. Running them, I have a simple and very real problem: once one way of closing the application has been tested, the application is closed, and hence all the other tests fail.
All tests are currently residing in one test class.
My question therefore is: how can I run all tests successfully, from Eclipse for starters? And also: how can I have them run by Tycho during the build, so that subsequent tests won't automatically fail because the application is no longer open?
In short, you cannot test closing an application with SWTBot.
As you already found out, closing the application will terminate the VM as well. And since your tests run in the same VM as the application under test, the tests will be terminated as well.
Aside from these implications, you shouldn't test closing an application. The three ways to close an application that you mention are all provided by the platform and hence the platform should have tests for that functionality, not your application.

How to use Eclipse to debug the Hadoop WordCount example?

I want to use Eclipse to debug the WordCount example, because I want to see how the job runs in the JobTracker. But Hadoop uses a proxy, so I can't see the concrete process by which the job runs in the JobTracker. How should I debug it?
You are better off debugging "locally" against a single-node cluster (e.g. one of the sandboxes supplied by Cloudera or Hortonworks): that way you can truly step through the code as there is only one mapper/reducer in play. That's been my approach at least: usually the problems I had to debug were to do with the contents of specific files; I just copied over the relevant file to my test system and debugged there.
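For classic MRv1 setups (an assumption here; the property names differ under YARN), one way to get that single-JVM behavior is Hadoop's local mode, which replaces the JobTracker with an in-process runner so Eclipse breakpoints in your mapper and reducer are hit directly:

```xml
<!-- mapred-site.xml (MRv1): run jobs in a single local JVM instead of
     submitting them to a JobTracker, so a debugger can step through
     the mapper and reducer code -->
<property>
  <name>mapred.job.tracker</name>
  <value>local</value>
</property>
```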

Debugging an embedded system with Eclipse - how to print to a log file?

I'm currently working on a project on an STM32F4 and I'm using Eclipse. I've got some problems with the program - it seems to behave randomly: sometimes it works fine, other times it has errors. Sometimes when I try to debug with breakpoints I get the beautiful HardFault handler, and it really messes with my brain.
Sorry for the little off-topic paragraph; I just wanted to let you know why I decided to print to a log file at some key moments in the program, so I can see in which states and in which functions the problem occurs. I'm debugging through a JTAG interface with Eclipse (gdb) and I need to know if there is an easy method integrated into Eclipse that would let me use fprintf-like functions inside my program to write to a file on disk.
If not, are there any other solutions?
Thanks
I prefer not to route the debug log through the JTAG communication port, because then the log will not be available after development.
I usually build a SystemLog library that can send the log messages through any medium that is available (UART, USB, Ethernet or SD card). That's what I'd recommend you do. It will help you through development, and it will help the support team in the event of a failure in the field.
If the standard library is available in your project, you should use the snprintf family of functions to build your SystemLog.
You can also integrate the log output into the Eclipse console by launching a serial console communicator (if you use UART) from your makefile; in this case, your makefile will have to flash the target as well.

Automated deployment of Check Script for Nagios

We currently use Ant to automate our deployment process. One of the tasks that requires carrying out when setting up a new Service is to implement monitoring for it.
This involves adding the service in one of the hosts in the Nagios configuration directory.
Has anyone attempted to automate such a thing? It seems that the Nagios configuration is laid out so that the files are split up host-based, as opposed to application-based.
For example:
localhost.cfg
This may cause an issue for an automated solution, since I'm setting up the monitoring while deploying the application to the environment (i.e. the host). It's like a jigsaw puzzle where two pieces don't quite fit together. Any suggestions?
Granted, you could say the monitoring really only needs to be set up once, but I want the developers to have the power to update the check script when the testing criteria change, without too much involvement from Operations.
Does anyone have any comments on this?
Kind Regards,
Steve
The splitting of Nagios configuration files is optional: you can have it all in one file, or split it into several files as you see fit. The cfg_dir configuration statement can be used to have Nagios pick up any .cfg files found in a directory.
When configuration files have changed, you'll have to reload the configuration in Nagios. This can be done via the external commands pipe.
Nagios provides a configuration validation tool, so that you can verify that your new configuration is ok before loading it into the live environment.
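Putting those pieces together, a deployment script could follow this pattern (the paths below are illustrative, not your actual layout):

```
# nagios.cfg - pick up every .cfg file found in a directory, so a
# deployment can simply drop in a new service definition:
cfg_dir=/usr/local/nagios/etc/conf.d

# Validate the new configuration before going live:
#   nagios -v /usr/local/nagios/etc/nagios.cfg

# Reload via the external command pipe (RESTART_PROGRAM is a standard
# Nagios external command; the pipe path depends on your install):
#   printf "[%lu] RESTART_PROGRAM\n" "$(date +%s)" > /usr/local/nagios/var/rw/nagios.cmd
```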