Project Deployment Error - GC Overhead Limit Exceeded - eclipse

I am using Tomcat as the server, Eclipse as my IDE, and Maven as the build tool.
I am getting "GC Overhead Limit Exceeded" when I do a project clean for my Spring project.

The reason for the above error is low memory allocation for the VM.
The solution for this is:
1. Go to the bin folder of Tomcat.
2. Increase the heap and PermGen sizes in the catalina.sh file, e.g.:
Eg: CATALINA_OPTS="$CATALINA_OPTS -Xms1024m -Xmx1024m -XX:NewSize=256m
-XX:MaxNewSize=356m -XX:PermSize=256m -XX:MaxPermSize=356m"
Add the above line at the top of the catalina.sh file and restart Tomcat (if that doesn't work, restart Eclipse as well). It worked for me.
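On most Tomcat installs the cleaner place for these flags is a bin/setenv.sh file rather than editing catalina.sh directly, since catalina.sh sources setenv.sh automatically and the file survives upgrades. A minimal sketch (the heap sizes are example values, not recommendations; also note that the PermGen flags only apply to Java 7 and earlier, as Java 8 removed the permanent generation):

```shell
#!/bin/sh
# Hypothetical bin/setenv.sh for Tomcat; catalina.sh sources this file if it
# exists, so JVM flags set here survive Tomcat upgrades.
# The sizes below are example values, tune them to your machine.
CATALINA_OPTS="$CATALINA_OPTS -Xms1024m -Xmx1024m"
# -XX:PermSize / -XX:MaxPermSize only apply on Java 7 and earlier;
# Java 8+ removed PermGen, so omit them there.
export CATALINA_OPTS
echo "$CATALINA_OPTS"
```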

Related

Configuring Tomcat 8.5 in Eclipse for running a REST API

I am getting an error while starting Apache Tomcat 8.5.8 as a server in Eclipse. It says that the installed Tomcat version is 8.5.8 but version 8.0 is expected; this is the error:
error while configuring Apache Tomcat
After looking into other threads I learned that the solution is to change the server file at catalina.jar\org\apache\catalina\util\ServerInfo.properties
I made that change, but when saving and leaving the archive it shows the error "Error while saving the changes in ServerInfo.properties file".
I have tried after stopping Tomcat, but it still does not allow me to save the changes to the server file. Please advise.
The error you're getting saving the file is because Tomcat is still running, and it has the jar file open. Windows will not allow you to save changes to a file while some other process has the file open.
First, stop Tomcat, then make your changes to the file and save the jar, and then restart Tomcat.
If it matters, I ran into the same problem with Eclipse incorrectly determining the version of Tomcat. The solution was to change that "server.info" property in the given file. For instance, if the Tomcat version you have is 8.5.8, you have to change the version to 8.0.5.8.
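For reference, the property edit itself can be scripted. This is only a sketch: it writes a stand-in copy of ServerInfo.properties (normally you would extract the real one from catalina.jar with an archive tool, and repack it afterwards with Tomcat stopped) and rewrites the server.info line to the 8.0.5.8 value described above:

```shell
# Stand-in copy of the file normally extracted from catalina.jar.
mkdir -p org/apache/catalina/util
printf 'server.info=Apache Tomcat/8.5.8\nserver.number=8.5.8.0\n' \
    > org/apache/catalina/util/ServerInfo.properties
# Rewrite server.info so Eclipse's WTP version check sees an 8.0.x server.
# (Plain sed without -i for portability; stop Tomcat before repacking the jar.)
sed 's|^server.info=.*|server.info=Apache Tomcat/8.0.5.8|' \
    org/apache/catalina/util/ServerInfo.properties > ServerInfo.patched
mv ServerInfo.patched org/apache/catalina/util/ServerInfo.properties
cat org/apache/catalina/util/ServerInfo.properties
```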

Start a Hadoop Map Reduce Job on a remote cluster in Eclipse with the run dialog (F11)

Is it possible to start a Map Reduce job on a remote cluster with the Eclipse Run Dialog (F11)?
Currently I have to run it with the External Tool Chain Dialog and Maven.
Note: executing it on a local cluster is no big deal with the Run Dialog, but for a remote connection a compiled JAR is mandatory; otherwise you get a ClassNotFoundException (even if Jar-By-Class is set).
Our current setup is:
Spring-Data-Hadoop 1.0.0
STS - Springsource Toolsuite
Maven
CDH4
We set this in our applicationContext.xml (it is the equivalent of what you would specify in the *-site.xml files on vanilla Hadoop):
<hdp:configuration id="hadoopConfiguration">
fs.defaultFS=hdfs://carolin.ixcloud.net:8020
mapred.job.tracker=michaela.ixcloud.net:8021
</hdp:configuration>
Is there a way to tell Eclipse it should build a JAR when the Run Dialog is executed?
I do not know whether it builds a new jar (maybe you must extract a jar to a folder), but adding your jar under "Run Configurations -> Classpath" clears the ClassNotFoundException problem.
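One way to keep that classpath entry valid is to have Maven build the jar before each run and submit that artifact. This is only a sketch of the idea; the artifact name, main class, and HDFS paths are placeholders, not taken from the setup above:

```shell
# Build the job jar first (tests skipped for speed), then submit it to the
# cluster. target/myjob-1.0.jar and com.example.MyJob are placeholder names.
mvn -q package -DskipTests
hadoop jar target/myjob-1.0.jar com.example.MyJob /input /output
```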

IntelliJ increase Scalatest Heap space

I'm using IntelliJ to run ScalaTest tests. The problem I'm having is that the tests run out of heap space (likely because my tests use Selenium and start up Jetty instances to hit my API).
I know how to increase the heap space for IntelliJ itself, but after increasing it the tests still run out of heap.
Is there a different place to increase the heap space for tests rather than the usual IntelliJ Info.plist? (By the way, I'm on a Mac.)
Go to Run > Edit Configurations, choose the test on the left, and tweak its VM options.
In case you are using a ScalaTest ant task, there is a jvmarg that you can set:
<jvmarg value="-Xmx2048m -XX:MaxPermSize=128m"/>
As you rightly observed, it is not IDEA's heap size that needs to be increased but rather that of the ScalaTest run configuration accessible from the Run > Edit Configurations... dialog. There, you should be able to set the VM Options for the tests you are trying to run.

Specify memory for ant maven deploy task

I am using the ant maven deploy task to upload the zip file created by the ant script to our repository, but the file is too big and the upload fails with
java.lang.OutOfMemoryError: Java heap space. The task is the following:
<deploy uniqueversion="false">
    <remoterepository url="${repository}" id="${repositoryId}"/>
    <remotesnapshotrepository url="${snapshotRepository}" id="${snapshotRepositoryId}"/>
    <attach file="target/${qname}-dist.zip" type="zip"/>
    <pom file="pom.xml" groupid="com.my.company" artifactid="test" packaging="zip" version="${version}"/>
</deploy>
How do I specify the heap size here? I can't find anything in the deploy task or its child tasks.
Maven doesn't fork for the deploy task, so to increase the memory you have to increase the heap size of the Maven executable itself. You can set your MAVEN_OPTS environment variable to include the -Xmx setting: MAVEN_OPTS="-Xmx512m"
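Both settings can be exported from the shell before launching the build. A minimal sketch (the 512m value is the one from the answer above; ANT_OPTS is included as an assumption worth checking, since a deploy task invoked from an ant script runs inside ant's JVM, which MAVEN_OPTS does not affect):

```shell
# MAVEN_OPTS sizes the JVM started by the mvn executable; ANT_OPTS does the
# same for ant, which matters when the deploy task runs inside an ant build.
export MAVEN_OPTS="-Xmx512m"
export ANT_OPTS="-Xmx512m"
echo "MAVEN_OPTS=$MAVEN_OPTS ANT_OPTS=$ANT_OPTS"
```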

How is the Tomcat temp directory location defined?

I am running Tomcat bundled with Liferay 5.2.3 and use Eclipse 3.5 (Galileo) as my IDE. I set up my Tomcat server in Eclipse as per this blog entry: http://www.jroller.com/holy/entry/developing_portlets_for_liferay_in. If I start Tomcat via the Eclipse server config, Liferay/Tomcat uses my C:\Documents and Settings\user\Local Settings\Temp\ directory. However, if I start Tomcat directly using the startup.bat script, Liferay/Tomcat uses the Tomcat temp directory. I can't figure out whether Eclipse, Liferay, or Tomcat decides which temp directory to use, or how to change it. I would prefer to use the Tomcat temp directory.
I have this issue with both the Liferay/Tomcat bundles 5.5 and 6.0 (liferay-portal-tomcat-6.0-5.2.3.zip and liferay-portal-tomcat-5.5-5.2.3.zip).
Anybody have any clues?
When you start Tomcat with catalina.sh (or catalina.bat), the temp directory is set with the CATALINA_TMPDIR variable:
if [ -z "$CATALINA_TMPDIR" ] ; then
# Define the java.io.tmpdir to use for Catalina
CATALINA_TMPDIR="$CATALINA_BASE"/temp
fi
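Because of that fallback, exporting CATALINA_TMPDIR before running startup.sh (which delegates to catalina.sh) is enough to pick a different location; the path below is just a placeholder:

```shell
# Override Tomcat's temp dir (what catalina.sh passes as java.io.tmpdir)
# before startup.sh runs; /opt/tomcat/temp is a placeholder path.
CATALINA_TMPDIR=/opt/tomcat/temp
export CATALINA_TMPDIR
echo "$CATALINA_TMPDIR"
```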
You can also pass the following VM argument when starting Tomcat in Eclipse to make it use that location as the temp directory.
-Djava.io.tmpdir="C:\Program Files\liferay-portal-5.2.3-tomcat-6.0\tomcat-6.0.18\temp"
Although I still don't know where/how Tomcat determines where the default temp directory should be, nor do I know why Eclipse sets it to something different, I have found out that you can set the temp directory via a VM argument when starting Tomcat in Eclipse:
-Djava.io.tmpdir="C:\Program Files\liferay-portal-5.2.3-tomcat-6.0\tomcat-6.0.18\temp"
You can find the following folder structure in your workspace:
.metadata\.plugins\org.eclipse.wst.server.core\tmp0\
This is the folder that Eclipse attaches to Tomcat.