I'm trying to run an algorithm that requires a few hundred megabytes of memory within Eclipse, and I've specified the VM argument -Xmx512m, but I can't get past some arbitrary memory limit, which seems to decrease as I continually try to run my programs. Physical memory is fine... what could be the issue?
Eclipse takes VM arguments, but those are for the Eclipse platform itself, not for your program. You need to modify the application's launch configuration to ensure it has enough memory.
When you run a program within Eclipse, it creates a new launch configuration with default parameters. You can modify those parameters by selecting Run -> Run Configurations... (or Debug -> Debug Configurations...), then selecting the relevant configuration. If it is a Java main application, the configuration will be under Java Application, probably with the name of the class or project you selected when you first ran it.
Select the launch configuration, then the Arguments tab, and enter the relevant JVM arguments (e.g. -Xmx512m) in the VM arguments pane. You can also enter program arguments to pass to the main method if you wish.
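Once the launch configuration has the VM argument, you can confirm it actually took effect from inside the program itself; a minimal sketch (the class name HeapCheck is mine):

```java
// Prints the maximum heap the JVM will use; when run with -Xmx512m
// in the launch configuration, this should report roughly 512 MB.
public class HeapCheck {
    static long maxHeapMegabytes() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Max heap: " + maxHeapMegabytes() + " MB");
    }
}
```

If this still reports a small value, the argument ended up in the wrong configuration.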
Update: another parameter to try is -XX:MaxPermSize=128m, if your algorithm is creating lots of method and/or class objects (which it sounds like it is).
Look at the -Xms argument too. Be sure the argument is specified for the mode you are using (Run vs. Debug), since each has its own configuration.
If you still have problems, you can use a profiler to see what is happening in memory, or simply logging Runtime.getRuntime().freeMemory() can help.
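A minimal sketch of the logging approach (the class and method names are mine):

```java
// Logs current heap usage; call this at interesting points in the
// algorithm to watch how memory is being consumed over time.
public class MemoryLogger {
    static long usedMegabytes() {
        Runtime rt = Runtime.getRuntime();
        // used = total heap currently allocated minus the free part of it
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("Used: " + usedMegabytes() + " MB"
                + ", free: " + rt.freeMemory() / (1024 * 1024) + " MB"
                + ", max: " + rt.maxMemory() / (1024 * 1024) + " MB");
    }
}
```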
You should ensure that you are specifying the arguments for your test application in the Run dialog, not in the arguments to Eclipse itself.
I have a question related to the V4L-DVB drivers. Following the Building/Compiling the Latest V4L-DVB Source Code link, there are three ways to compile. I am curious about the last approach (the More "Manually Intensive" Approach). It allows me to choose the components that I wish to build and install using "make menuconfig". Some of these components (e.g. CONFIG_MEDIA_ATTACH) are used in preprocessor directives that define a function in one shape if the option is defined and in another shape if it is not (e.g. dvb_attach, dvb_detach) in the resulting modules (e.g. dvb_core.ko) that are loaded by most of the DVB drivers. What happens if there are two drivers (*.ko modules) on the same host machine, one that needs dvb_core.ko with CONFIG_MEDIA_ATTACH defined and another that needs dvb_core.ko with CONFIG_MEDIA_ATTACH undefined? Is there a clean way to handle this?
What is also not clear to me: since the V4L compilation environment seems very customizable (via the .config file), if I develop a driver using V4L-DVB structures, there is a big chance that it will conflict with other drivers, since each driver has its own custom settings. Is my understanding correct?
Thanks!
Dave
We would like to use the appassembler-maven-plugin to generate daemon scripts for our apps. We want to avoid having multiple configurations and generated scripts for the different environments (e.g. test, prod) and would like to be able to set a JVM system property or add an extra command-line argument when starting. I have been looking into this for a while now and can't seem to find a solution.
If anybody has any ideas or suggestions they would be greatly appreciated,
thanks
You can use extraJvmArguments to pass things such as a system property. See the examples on the plugin's documentation page.
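A sketch of what this might look like in the pom.xml (the main class, program name, and the property itself are placeholders, and you should check the parameter names against the plugin documentation for your plugin version):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>appassembler-maven-plugin</artifactId>
  <configuration>
    <!-- extra arguments baked into the generated start script -->
    <extraJvmArguments>-Denv.name=test -Xmx256m</extraJvmArguments>
    <programs>
      <program>
        <mainClass>com.example.Main</mainClass>
        <name>app</name>
      </program>
    </programs>
  </configuration>
</plugin>
```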
I'm experiencing some rather annoying problems with Scala. I can compile small Scala projects perfectly well, but with bigger projects the compiler crashes with a StackOverflowError.
Clearly, I have to increase the stack size for the compiler; however, and that's probably my main problem here, I don't know how.
I'm starting netbeans with these parameters:
netbeans_default_options="-J-client -J-Xmx512m -J-Xss8m -J-Xms512m -J-XX:PermSize=128m -J-XX:MaxPermSize=512m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true"
So, as far as I'm aware, -J-Xss8m should increase the thread stack size to 8 MB. However, that doesn't seem to affect the compiler, so I tried passing the same parameter to the compiler directly, using the compiler flags that I can set in NetBeans, resulting in this:
-deprecation -J-Xss8m
But again, that doesn't help; I'm still getting the exception. I searched through the NetBeans documentation, but all I found were the startup parameters, which I had already set. I hope somebody here can give me further information on how to handle this problem.
Further information:
So, after a day I finally had the chance to try everything on a colleague's machine. I used the same settings and the same compiler, but to my surprise I didn't get the same result: on his machine the compiler compiles the whole code without any exception.
The only difference between my computer and his is that his has more RAM and CPU power, but that shouldn't matter, since we both use NetBeans with the same startup options.
By now, I have even tried the RC of the 2.9 Scala compiler; it didn't help much. I also checked that I have the correct Scala plugin installed, since there can be problems when using the 2.8 plugin with the 2.9 compiler and vice versa. However, I'm using the 2.9 plugin with the 2.9 compiler, so that's fine.
Giving the Scala compiler more stack space is similar to giving it more heap space: both must be specified as custom JVM arguments when the Scala compiler is run. However, NetBeans lacks any sort of documentation on how to do this, so here it is.
The way to specify custom JVM arguments for the Scala compiler in NetBeans is by customizing build.xml for each project:
1. Open nbproject/build-impl.xml in the project's folder.
2. Search for "scalac"; you will find the target -init-macrodef-scalac.
3. Copy the whole target definition, paste it into your build.xml, and save it.
4. Close nbproject/build-impl.xml; from now on you will work with build.xml.
5. In the target you just copied, locate the <scalac> tag; the nesting is target > macrodef > sequential > scalac.
6. Add a custom jvmargs attribute to the scalac tag, so it looks like: <scalac jvmargs="-Xss2048k -Xmx2000m" ... >
7. Save build.xml. From now on, whenever you compile the project with NetBeans, the compiler runs with the custom JVM arguments.
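A heavily simplified sketch of what the copied target ends up looking like (the attribute names other than jvmargs are illustrative; the real -init-macrodef-scalac in your build-impl.xml contains more attributes and nested elements, which you should keep as copied):

```xml
<target name="-init-macrodef-scalac">
  <macrodef name="scalac" uri="http://www.netbeans.org/ns/scala-project/1">
    <attribute default="${src.dir}" name="srcdir"/>
    <attribute default="${build.classes.dir}" name="destdir"/>
    <sequential>
      <!-- jvmargs is the custom attribute; everything else is copied -->
      <scalac jvmargs="-Xss2048k -Xmx2000m"
              srcdir="@{srcdir}" destdir="@{destdir}"/>
    </sequential>
  </macrodef>
</target>
```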
This seems like a simple thing, but I can't find an answer in the existing questions:
How do you add a global argument to all your present and future run or debug configurations? In my case I need a VM argument, but I can see this being useful for command-line arguments as well.
Basically, every time I create a unit test I need to create a configuration (or run it, which creates one), and then manually edit each one to add the same VM argument. This seems silly for such a good tool.
This is not true. You can add the VM arguments to the JRE definition; this is exactly what it is for. I use it myself so that assertions are enabled and the heap is 1024 MB on every run, even future ones.
Ouch: a seven-year-old bug asking for a run configuration template, precisely for this kind of reason.
This thread proposes an interesting workaround, based on duplicating a fake configuration and using string substitution:
You can define variables in Window -> Preferences -> Run/Debug -> String Substitution. For example, you can define a projectName_log4j variable with the correct -Dlog4j.configuration=... value.
In a run configuration you can then use ${projectName_log4j}, and you don't have to remember the real value.
You can define a project-specific "empty" run configuration.
Set the project and the arguments fields in this configuration, but not the main class. When you have to create a new run configuration for this project, select this one and use 'Duplicate' from its popup menu to copy it.
You then only have to set the main class and the program arguments.
You can also combine both solutions: use a variable and define an "empty" run configuration which uses this variable. The great advantage in this case is that when you start using a different log4j config file, you only have to change the variable declaration.
Not ideal, but it may ease your process.
So I figured out how to use the -mthumb and -mno-thumb compiler flags and more or less understand what they do.
But what does the -mthumb-interlinking flag do? When is it needed, and is it set for the whole project if I set 'compile for thumb' in my project settings?
Thanks for the info!
Open a terminal and type man gcc
Do you mean -mthumb-interwork ?
-mthumb-interwork
Generate code which supports calling between the ARM and Thumb
instruction sets. Without this option the two instruction sets
cannot be reliably used inside one program. The default is
-mno-thumb-interwork, since slightly larger code is generated when
-mthumb-interwork is specified.
If this is related to a build configuration, you should be able to set it separately for each configuration (such as Release or Debug).
Why do you want to change these settings? I know using thumb instructions save some memory but will it save enough to matter in this case?
My application uses both Thumb and VFP code, but I never specifically set the -mthumb-interwork flag. How is that possible?
According to the man page, without that flag the two instruction sets cannot be reliably used inside one program.
It says "reliably"; so without that option, the two instruction sets can still be mixed within a single program, just "unreliably". I think mixing both instruction sets normally works: the compiler is smart enough to figure out when it has to switch from one set to the other. However, there might be corner cases the compiler doesn't handle correctly, where it fails to see that it should switch instruction sets, causing the application to fail (most likely it will crash). This option generates special code so that, no matter what your code does, the switching always happens correctly and reliably; the downside is that this extra code is needed for every globally visible function and thus increases the binary size (I have no idea whether it also slows down function calls a little; I would personally expect so).
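As a sketch of how such a mixed build might be set up (the cross-compiler prefix arm-none-eabi- and the file names are placeholders for whatever your toolchain and project actually use):

```
# compile one file as ARM and one as Thumb, both interworking-aware
arm-none-eabi-gcc -c -marm   -mthumb-interwork arm_part.c   -o arm_part.o
arm-none-eabi-gcc -c -mthumb -mthumb-interwork thumb_part.c -o thumb_part.o
# link; calls may now cross between the two instruction sets reliably
arm-none-eabi-gcc -mthumb-interwork arm_part.o thumb_part.o -o app
```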
Please also note the following two settings:
-mcallee-super-interworking
    Gives all externally visible functions in the file being compiled an ARM instruction set header which switches to Thumb mode before executing the rest of the function. This allows these functions to be called from non-interworking code.
-mcaller-super-interworking
    Allows calls via function pointers (including virtual functions) to execute correctly regardless of whether the target code has been compiled for interworking or not. There is a small overhead in the cost of executing a function pointer if this option is enabled.
Though I think you only need those when building libraries to be used with other projects; I don't know for sure. The GCC Thumb handling is definitely under-documented.