Huge number of JAR files in jboss/server/web/tmp/vfs-nested.tmp directory - jboss

Sometimes we end up with a huge number of JAR files in the jboss/server/web/tmp/vfs-nested.tmp directory.
For example, today this directory contained over 350k JAR files.
But on other hosts there are only 2 JAR files in this directory.
What could be the root cause of this problem?
We use JBoss 5.1.
UPDATE:
I found the following information in the release notes for JBoss 5.1.0.GA:
JBoss VFS provides a set of different switches to control its internal behavior. JBoss AS sets jboss.vfs.forceCopy=true by default. To see all the provided VFS flags, check out the code of the VFSUtils.java class.
So I do not understand: what should I set?
Should I set -Djboss.vfs.forceNoCopy=true or -Djboss.vfs.forceCopy=false?
Or should I set both of them?
UPDATE 1:
I have read the entire thread http://community.jboss.org/thread/2148?start=0&tstart=0
and now I am not sure whether I should change either jboss.vfs.forceCopy or jboss.vfs.forceNoCopy.
According to that thread, I will get an OutOfMemoryError instead of a huge number of files in the tmp dir.

From here: http://sourceforge.net/project/shownotes.php?release_id=575410
"Excessive nestedjarNNN.tmp files in the tmp directory. The VFS unwraps nested jars by extracting the nested jar into a tmp file in the java tmp directory. This can result in a large number of files that fill up the tmp directory. You can disable this behavior by setting -Djboss.vfs.forceNoCopy=true on command line used to start jboss. This will be enabled by default in a future release, JBAS-4389."

jskaggz has a good answer. In addition, I have this at the beginning of my run.bat file:
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\tmp
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\work
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\log
mkdir c:\apps\jboss-5.1.0.ga\server\default\tmp
mkdir c:\apps\jboss-5.1.0.ga\server\default\work
mkdir c:\apps\jboss-5.1.0.ga\server\default\log
echo --- Cleared temp folders ---
I've had problems with old copies of classes hanging around, so this seems to help.

We solved this problem with exploded deployment (works for WAR and EAR), as described in the JBoss documentation: http://docs.jboss.org/jbossas/docs/Administration_And_Configuration_Guide/5/html/ch03s01.html
That way, VFS is not used.

I had the same issue described above in production and resolved it with the following solution.
I added these Java options:
-Djboss.vfs.cache=org.jboss.virtual.plugins.cache.IterableTimedVFSCache
-Djboss.vfs.cache.TimedPolicyCaching.lifetime=1440
My setup also defines additional deployment directories, so I needed to add those directories to the vfs.xml file located in $JBOSS_SERVER_HOME/conf/bootstrap/ in order to see the benefit.
The lifetime setting is, I think, in minutes, so I set it to a day, as I have a scheduled restart of the server overnight.
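For reference, one way to pass these options is via JAVA_OPTS in bin/run.conf; a sketch assuming the stock JBoss 5.1 layout (adjust paths to your install):
# $JBOSS_HOME/bin/run.conf -- append to the existing JAVA_OPTS
JAVA_OPTS="$JAVA_OPTS -Djboss.vfs.cache=org.jboss.virtual.plugins.cache.IterableTimedVFSCache"
JAVA_OPTS="$JAVA_OPTS -Djboss.vfs.cache.TimedPolicyCaching.lifetime=1440"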
Prior to finding this solution, I had also tried using -Djboss.vfs.forceNoCopy=true and -Djboss.vfs.forceCopy=false.
That appeared to work, but I noticed the application ran a lot slower - presumably because these settings turn VFS caching off.
My JBoss version is jboss-5.1.0.GA,
and my application runs in a cluster in production.

I found a lot of others having the same problem when running in cluster (or farm) environments.
https://issues.jboss.org/browse/JBAS-7126 describes how to solve the problem when a farm directory is used as a deployment directory.
I had the same problem using a second deploy directory.
The JAR files of my applications coming from this second deploy directory got copied until the disk was full.
I tried adding the second deploy directory the same way as described at https://issues.jboss.org/browse/JBAS-7126 for the farm directory.
It works well!

We were facing the same issue and were able to circumvent it by using a farm directory as the deployment directory.
After putting that process in place, we faced one more issue due to the nature of our DEV environment (we have a clustered environment, and many developers deploy to the shared DEV environment): we were not getting consistent results when deploying EARs and WARs that way. We circumvented that by making sure the EARs and JARs being deployed are touched (http://en.wikipedia.org/wiki/Touch_(Unix)) on the servers, so that inconsistencies are avoided.
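As a sketch, that touch step can be scripted; the farm path below is an assumption based on a default clustered ('all') configuration:
# Refresh timestamps on every deployed archive so all nodes see consistent files
find $JBOSS_HOME/server/all/farm \( -name '*.ear' -o -name '*.jar' \) -exec touch {} +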

Related

IDX directory for splunk

Where can I find the IDX directory for an app during deployment in Splunk? I tried to find it in opt/splunk/etc/shcluster/apps.
Your team is not being very helpful. :-) "IDX" is a common abbreviation for "index" or "indexer", but there is never a directory by that name.
Often, an app that is already installed in a staging environment just gets copied from there straight into Production. That can change depending on the architecture of the two environments, however.
The usual method for installing an app is to explode the tarball into the $SPLUNK_HOME/etc/apps directory and restart Splunk. Clusters have a different procedure, so let us know if you have one. See the docs at https://docs.splunk.com/Documentation/AddOns/released/Overview/Distributedinstall
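On a standalone (non-clustered) instance, that usually boils down to something like the following sketch (my-app.tgz is a placeholder name):
# Unpack the app into the apps directory, then restart Splunk
tar xzf my-app.tgz -C $SPLUNK_HOME/etc/apps
$SPLUNK_HOME/bin/splunk restart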

drag and drop ear file on wildfly to deploy project

I'm trying to deploy my project on WildFly using the drag-and-drop way.
In fact, I drag and drop the EAR project onto the WildFly server; as a result, I get myProject-ear.ear.dodeploy in wildfly-10.0.0.Final\standalone\deployments.
I want to have myProject-ear.ear.deployed instead of myProject-ear.ear.dodeploy after dragging and dropping the EAR project onto the server.
Do you have any idea how to solve this issue? Thanks a lot.
Whether drag & drop (or actually creating the war/jar/ear/... file in the deployments directory) is sufficient can be configured in the WildFly configuration file (standalone.xml in your case). But the fact that you create that file and see a ...dodeploy file popping up tells you the deployment scanner has found your file and is acting.
Once the deployment has finished, you should instead see a file named .deployed or .failed. In case of failure, a log snippet inside the file may hint at the reason.
But be aware of something: a drag & drop usually triggers a copy operation. Depending on the size of your file, that copy may take some time. WildFly's deployment scanner checks the directory every XXX seconds (configurable). So if your copy process has started but the deployment scanner picks the file up before the copy is complete, WildFly tries to deploy an incomplete archive. This should result in an error message but may cause what you are experiencing.
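For reference, the scan interval and auto-deploy behavior are set in the deployment-scanner subsystem of standalone.xml; a rough sketch for WildFly 10 (the attribute values here are illustrative, not your actual settings):
<subsystem xmlns="urn:jboss:domain:deployment-scanner:2.0">
    <!-- scan-interval is in milliseconds; auto-deploy-zipped controls whether
         archives are deployed without an explicit .dodeploy marker file -->
    <deployment-scanner path="deployments" relative-to="jboss.server.base.dir"
                        scan-interval="5000" auto-deploy-zipped="true"/>
</subsystem>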
So it may be better to first copy the file to another directory (on the same disk), then just move/rename the file into the deployments folder - this operation is atomic, and the deployment scanner will immediately see the complete file.
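A minimal sketch of that copy-then-move approach (all paths are placeholders):
# Copy to a staging directory on the same filesystem, then move atomically
cp myProject-ear.ear /opt/wildfly/standalone/staging/
mv /opt/wildfly/standalone/staging/myProject-ear.ear /opt/wildfly/standalone/deployments/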
Another way would be to stop the deployment scanner completely and stop/start JBoss after every change to the deployments directory. This is advisable anyway if you run short on PermGen space.

How to configure Capistrano to deploy to same directory?

I understand that Capistrano (v2.15.5) deploys to a different directory and symlinks it in deploy:create_symlink; however, we have a proprietary module on our web server that breaks on every deploy because it is licensed to a specific directory. I understand the advantages of the symlink, being able to roll back, etc., but we need to deploy to the same directory. I can't find any documentation that supports this; is it possible without editing the source?
Provided I understood you correctly, this should help:
set :deploy_to, "<proprietary path>"
This will put the releases dir and the current symlink into <proprietary path>.
For more control over all the relevant directories, have a look at deploy.rb from the 2.x branch here:
https://github.com/capistrano/capistrano/blob/legacy-v2/lib/capistrano/recipes/deploy.rb
In particular lines 50-66. You can override any of the _cset statements with set, just like in the example above.
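For instance, a deploy.rb along these lines pins everything under the licensed path; a sketch (/opt/licensed/app is a placeholder, and the variable names are the _cset defaults from that recipe, if I read it right):
# config/deploy.rb
set :deploy_to,   "/opt/licensed/app"   # root that will hold releases/ and current
set :shared_dir,  "shared"              # the default, shown here for clarity
set :current_dir, "current"             # name of the live symlink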

Keep files when deploying .war in Glassfish 3.12

I've got a bit of a problem with deployments on my project, and after hours of searching the web I can't find an answer to it.
Situation:
I am working on a web application that lives off uploads and other files that get generated during use.
To keep things simple, I store these in: .../mywebapp/web/some subfolders/*
So far, so good.
My Problem:
Every time I redeploy my project on the actual server (after updating classes/JSPs),
GlassFish deletes the entire content of .../mywebapp/ during redeployment.
My Procedure so far:
Export the latest version of my webapp as a .war.
Add the changed files into the .war file on the server (rename it to .zip, then back to .war).
Redeploy the .war on my server using the admin console (localhost:4848).
My question:
This current procedure is very prone to data loss (I could lose the files!).
Is there a straightforward way to upload changes to my server without the risk of losing all the files that have been added during runtime?
I see two choices:
move the data 'out of harm's way' (find some place for it that isn't in the deployment directory, like a database), or
switch to directory deployment instead of archive deployment.
The better of these two choices is the first one. It is more portable than the other; every server out there supports deploying archives. A lot of servers support directory-based deployment, but they all do it a bit differently, so a directory structure that deploys on A may not deploy on B.
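A minimal sketch of that first option: resolve the upload directory from configuration instead of from the webapp path (the myapp.data.dir property name and the default path are hypothetical):
// Store uploads outside the deployment directory so redeploys cannot delete them
import java.io.File;

public class UploadStorage {
    // Hypothetical system property; the default path is just an example
    private static final File BASE =
            new File(System.getProperty("myapp.data.dir", "/var/lib/mywebapp/uploads"));

    public static File fileFor(String name) {
        if (!BASE.exists() && !BASE.mkdirs()) {
            throw new IllegalStateException("Cannot create " + BASE);
        }
        return new File(BASE, name);
    }
}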
I had this same issue and solved it using XCOPY and a scheduled task.
Effectively, you continuously sync two folders.
Run a scheduled task for the following batch file every X minutes.
sync.bat:
xcopy "domain1\applications\%YOUR_APP_NAME%l\path\to\folder" "D:\folder\to\sync" /D /I /Y
xcopy "D:\folder\to\sync" "domain1\applications\%YOUR_APP_NAME%l\path\to\folder" /D /I /Y
Switches:
/D - Only copy newer files if the destination file exists
/I - If the destination does not exist, and you are copying more than one file, this switch assumes that the destination is a folder.
/Y - Overwrite without prompting

Is there a way to get the absolute path of the context root in tomcat?

I have a problem that, after a lot of reading and research, seems like Tomcat is running another instance of itself and thus serving an old version of my updated app (or it has somehow cached an older version of my webapp somewhere and only serves that).
I work on the app in Eclipse on a Windows machine and deploy it on a Linux server as the ROOT app (renaming the WAR file to ROOT.war).
What I'd like to know is whether there's a way to locate the older version that Tomcat is serving, by getting Tomcat to log the context root of the servlet that's serving the older version of the app.
As it stands at the moment, any files created by the updated app get created in the right directory, but because the app instances are different, the app can't access the files shortly after they're created.
Any help/hints would be welcomed
To answer the question in the title, let your code basically do the following:
System.out.println(getServletContext().getRealPath("/"));
To solve the problem described in the question, shut down Tomcat, delete everything in its /work directory, delete the expanded WAR in /webapps, and remove the /Catalina subdirectory of the /conf directory (if any), then restart.
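A sketch of that cleanup for a ROOT deployment, assuming a standard $CATALINA_HOME layout:
# Stop Tomcat, clear cached/expanded copies, then start it again
$CATALINA_HOME/bin/shutdown.sh
rm -rf $CATALINA_HOME/work/*
rm -rf $CATALINA_HOME/webapps/ROOT      # the expanded WAR; keep ROOT.war itself
rm -rf $CATALINA_HOME/conf/Catalina     # per-host context cache, if present
$CATALINA_HOME/bin/startup.sh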