When deploying my project to SpringSource dm Server, every once in a while a JAR fails to deploy with the following message:
/mnt/myproject/springsource/work/com.springsource.server.deployer/packed/my.project.0.1.10.M.jar' cannot be unpacked.
java.util.zip.ZipException: error in opening zip file
There are 5 .war files in the project. If one of them fails, it's always the same one (which is also the last one to be copied into the pickup directory). However, usually all 5 will deploy without issues. It is the exact same set of files in all instances, taken from a maven repository, just deployed to new server instances.
The file that fails can be opened just fine by 7-Zip. If I stop Spring, clear the pickup directory, start Spring and copy the .war files to pickup again, it will usually work.
The usual deployment process is:
Start Spring
Wait until it reports Open for business with profile 'web'
Copy all 5 projects with a 2 second delay between each copy (scripted).
Similar questions (java-util-zip-zipexception-error-in-opening-zip-file and jboss5-cannot-deploy-due-to-java-util-zip-zipexception-error-in-opening-zip-fil) do not seem to apply.
You don't say which version of dm Server you are running, so I would recommend upgrading to 2.0.x to pick up fixes if you haven't already. You may also like to upgrade to Eclipse Virgo which is the continuation of the dm Server project.
My guess is that the heuristic in dm Server for determining when a file copy into pickup has completed is playing up, possibly due to a slow or erratic copy operation. Is there anything unusual about your disk, such as encryption or a remote mount, which may interfere with the copy operation?
One way to rule out the heuristic would be to place the files in the pickup directory when dm Server is not running and then start dm Server when the copy operation has definitely completed. If the problem reproduces, then there may be a problem in the JRE you are using.
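Not part of the answer above, but if the copy-completion heuristic does turn out to be the culprit, one general way to sidestep it is to write each WAR somewhere else first and then move it into pickup in a single atomic step, so the deployer never observes a half-written file. A minimal Java sketch, with placeholder paths (an atomic move requires the temporary location and pickup to be on the same filesystem):

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CopyToPickup {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder paths: the built WARs and the dm Server pickup directory
        Path source = Paths.get("/mnt/myproject/build");
        Path pickup = Paths.get("/mnt/myproject/springsource/pickup");

        try (DirectoryStream<Path> wars = Files.newDirectoryStream(source, "*.war")) {
            for (Path war : wars) {
                // Copy to a temporary name next to pickup, then move atomically,
                // so the deployer never sees a partially written file.
                Path tmp = pickup.resolveSibling(war.getFileName() + ".part");
                Files.copy(war, tmp, StandardCopyOption.REPLACE_EXISTING);
                Files.move(tmp, pickup.resolve(war.getFileName().toString()),
                           StandardCopyOption.ATOMIC_MOVE);
                Thread.sleep(2000); // keep the original two-second spacing
            }
        }
    }
}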
We are deploying 3 webapps on a Liberty server (16.0.0.4) and frequently get zip errors and messages like the one below (this is fairly easy to reproduce):
[ERROR ] CWWKZ0002E: An exception occurred while starting the application XYZ. The exception message was: com.ibm.ws.container.service.state.StateChangeException: java.util.zip.ZipError: jzentry == 0,
jzfile = 693877616,
total = 1148,
name = C:\opt\IBM\wlp\usr\servers\XYZ\workarea\org.eclipse.osgi\220\data\cache\com.ibm.ws.classloading.sharedlibrary_84.cache\lib\db2jcc.jar,
i = 329,
message = null
It seems to be erratic but is easier to reproduce on slower machines, so I suspect a race condition, though it may be as simple as the cache clearing during a server clean being non-blocking.
Deploying the webapps as war files, rather than linking them back to projects via xml files, does not exhibit this problem.
I've tried beans.xml with bean-discovery-mode="all" with no effect. We are using injection of different classes in two of the three web applications.
Note the directory number in the path to the cache differs from run to run.
This has been present since at least version 16.0.0.2 of Liberty. Is there a workaround for this problem, or does anyone know if it will be fixed in the December release?
The problem went away after migrating the projects to newer releases of Eclipse, so I am closing this.
I'm trying to restart a WebLogic server from the command line, but it keeps picking up an EAR file I previously deployed from Eclipse. I thought it was some kind of caching issue, so I closed and reopened Eclipse and the command prompt, but that didn't help. It still picks up this EAR even when I delete it manually from the WL_User temp folder. I also can't start WebLogic from Eclipse, because WebLogic shuts down suddenly due to a VM shutdown request and Eclipse hangs in the publishing state. I'm not sure why that happens either; there are no error messages except BEA: VM requested Shutdown.
It's very confusing how it keeps picking the EAR up, and I really want to understand why. Thanks for any help in advance.
Your WebLogic domain is in a bad state. Normally I would suggest removing a deployment by opening the WebLogic admin console, navigating to Deployments, and deleting the problematic deployment. If you can't do that, try the following:
Navigate to <domain folder>/servers/<server causing problems>
Delete the tmp, data, cache, and logs folders
Restart your server.
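If you prefer to script that cleanup, here is a minimal Java sketch. The domain and server paths are placeholders for your environment, and it should only be run while the server is stopped:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class CleanWebLogicServerState {
    public static void main(String[] args) throws IOException {
        // Placeholder domain and server names; adjust for your environment.
        Path serverDir = Paths.get("C:/Oracle/Middleware/user_projects/domains/base_domain",
                                   "servers", "AdminServer");
        for (String name : new String[] {"tmp", "data", "cache", "logs"}) {
            Path dir = serverDir.resolve(name);
            if (!Files.exists(dir)) {
                continue;
            }
            try (Stream<Path> walk = Files.walk(dir)) {
                // Delete children before their parent directories.
                walk.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
            }
            System.out.println("Removed " + dir);
        }
    }
}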
Another option (use it only if you are really stuck) is to edit the following file:
<domain folder>/config/config.xml
search for and remove your <app-deployment>
If it still doesn't work, you have other problems with your VM. Edit the question and add more info as necessary.
Problem: A 10 to 15 minute delay in WebSphere application deployments.
Environment/Situation: WebSphere 6.1.0.23, 90MB ear files containing about 19,000 files (ear file contains jar libraries). The ear file, WebSphere, and the automation driving the deployment are all on the same box. No EJBs. There are about 20 deployed applications like this on this box with 10 of them usually running.
Details: The deployment is automated, and the message 'ADMA5013I: Application ... installed successfully' is received. A few moments later, the directory is created (blah.ear/blah.war), but it remains empty for 10 to 15 minutes. Apart from this specific delay, performance on the box is fine and CPU utilization is very low. Once the files start getting created, they all show up in under a minute. Steps before and after this one run at an acceptable speed. It's just this one step, waiting for the files to show up, that's the problem.
Additional details (prompted by the comments below): This is WebSphere ND, as evidenced by "Deployment Manager" and "Node Agent" in the logs. The ear contains one war file, one application. By using a shared library definition, the size of the ear was reduced to 60MB. WebSphere itself is started with the JVM option -XX:MaxPermSize=256M. The deployments are done using the tools in the com.ibm.websphere.management.* packages (jar file supplied by IBM); the primary class is "AdminClient", and the code is similar to what is in this IBM documentation. The WS UI entry [System Administration > Console Preferences > "Synchronize changes with nodes"] was checked, but it still sits for 15 minutes 'without doing anything'.
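For reference, a minimal sketch of forcing node synchronization explicitly through AdminClient rather than waiting for the automatic sync. The SOAP host, port, and node name are placeholders rather than the poster's values, and the WebSphere thin-client JARs must be on the classpath:

import java.util.Properties;
import javax.management.ObjectName;

import com.ibm.websphere.management.AdminClient;
import com.ibm.websphere.management.AdminClientFactory;

public class ForceNodeSync {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty(AdminClient.CONNECTOR_TYPE, AdminClient.CONNECTOR_TYPE_SOAP);
        props.setProperty(AdminClient.CONNECTOR_HOST, "localhost"); // placeholder dmgr host
        props.setProperty(AdminClient.CONNECTOR_PORT, "8879");      // placeholder SOAP port

        AdminClient client = AdminClientFactory.createAdminClient(props);

        // Find the NodeSync MBean for the target node and invoke its sync operation.
        ObjectName pattern = new ObjectName("WebSphere:type=NodeSync,node=appNode01,*");
        for (Object found : client.queryNames(pattern, null)) {
            ObjectName nodeSync = (ObjectName) found;
            Object result = client.invoke(nodeSync, "sync", null, null);
            System.out.println("sync on " + nodeSync + " returned " + result);
        }
    }
}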
Just guesses:
The ear file's timestamp is in the past. This might be caused by JVM time-zone problems, e.g. different time zones when building and deploying (especially if different machines are used).
The JDK could be checking whether there is enough disk space.
WSAD or some other tool is not closing a file descriptor. Try deploying with anti-virus paused.
Could you please describe the situation in more detail:
Do the build and the deploy happen on the same machine?
Does it behave the same under all circumstances?
I'm running a Railo project on my local Ubuntu box with Eclipse Indigo, Tomcat 7, Fusebox 4, and the AWS Toolkit for Eclipse.
I have my project running smoothly on my local dev box. Trying to deploy the project sometimes takes under 10 minutes (very rare); other times it never completes, just showing the loading bar, and/or it eventually fails.
I've tried publishing a new project, which works at times, but incremental deployment almost never succeeds (it worked once). In fact, since the one time I did deploy the project, I've not been able to do it again.
Unable to upload application to Amazon S3: Unable to calculate MD5 hash: /home/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp1/aws-eclipse-1365821331354619151.war (No such file or directory)
Unable to calculate MD5 hash: /home/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp1/aws-eclipse-1365821331354619151.war (No such file or directory)
Rightly so, there is no such file in that location. But why? Is it a permissions issue? I gave myself root rights for the GUI file browser (gksu nautilus), but still no joy.
I'm new to AWS and the Ubuntu environment and am not sure what I should do in order to deploy.
So one of your questions seems to be about problems uploading files to S3 via the AWS Java SDK, right? See line 1011: https://github.com/amazonwebservices/aws-sdk-for-java/blob/master/src/main/java/com/amazonaws/services/s3/AmazonS3Client.java#L1011
What I think you're doing here is executing an S3 putObject request whose Content-MD5 hash caused the request to fail authentication. When I had this issue, I found out that the MD5 hash needs to be Base64 encoded; Amazon requires it when uploading files.
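As an illustration only (this is not the toolkit's code), a minimal sketch of an upload that supplies a Base64-encoded Content-MD5, using the AWS SDK for Java v1 and Java 8. The bucket, key, and file path are placeholders:

import java.io.File;
import java.io.FileInputStream;
import java.nio.file.Files;
import java.security.MessageDigest;
import java.util.Base64;

import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class UploadWithMd5 {
    public static void main(String[] args) throws Exception {
        File war = new File("/tmp/aws-eclipse-example.war"); // placeholder path

        // Content-MD5 must be the Base64 encoding of the raw MD5 digest.
        byte[] bytes = Files.readAllBytes(war.toPath());
        String md5Base64 = Base64.getEncoder()
                .encodeToString(MessageDigest.getInstance("MD5").digest(bytes));

        ObjectMetadata meta = new ObjectMetadata();
        meta.setContentLength(war.length());
        meta.setContentMD5(md5Base64);

        AmazonS3Client s3 = new AmazonS3Client(); // uses the default credential chain
        try (FileInputStream in = new FileInputStream(war)) {
            s3.putObject("my-deploy-bucket", "aws-eclipse-example.war", in, meta);
        }
    }
}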
However, looking at your error "No such file or directory", this could be a different issue. A quick Google search found a post that could be of interest:
https://forums.aws.amazon.com/message.jspa?messageID=143497
Hope some of this helps.
Documentation says if you have a context file here:
$CATALINA_HOME/conf/Catalina/localhost/myapp.xml
it will NOT be replaced by a context file here:
mywebapp.war/META-INF/context.xml
It is written here: http://tomcat.apache.org/tomcat-6.0-doc/config/context.html
The individual file at /META-INF/context.xml inside the application files is used only if a context file does not exist for the application in $CATALINA_BASE/conf/[enginename]/[hostname]/.
But every time I re-deploy the war, it replaces this myapp.xml with the /META-INF/context.xml!
Why does it do it and how can I avoid it?
Thanx
The undeploy part of a redeploy deletes the app and the associated context.xml.
If you use maven tomcat plugin you can avoid deleting context.xml if you deploy your app with command like this:
mvn tomcat:deploy-only -Dmaven.tomcat.update=true
More info here: https://tomcat.apache.org/maven-plugin-2.0-beta-1/tomcat7-maven-plugin/deploy-only-mojo.html
You can use deploy-only with the mode parameter to deploy the context.xml too.
The short answer:
Just make the TOMCATHOME/conf/Catalina/localhost dir read-only, and keep reading for more details:
For quick deployment mode (Eclipse dynamic web project, direct Tomcat
connection, etc.) on a local/non-shared Tomcat server you can just define your JDBC datasource (or any
other 'web resource') using the META-INF/context.xml file inside the
WAR file. Easy and fast in your local environment, but not suitable for staging, QA, or
production.
For build deployment mode (usually for staging, QA, or prod), JDBC
datasources and other 'web resources' details are defined by the
QA/production team, not the development team anymore. Therefore, they
must be specified in the Tomcat server, not inside the WAR file
anymore. In this case, specify them in the file
TOMCATHOME/conf/Catalina/localhost/CONTEXT.xml (replace Catalina
with your engine, localhost with your host, and CONTEXT with your context name). However,
Tomcat will delete this file on each deployment. To prevent this
deletion, just make this dir read-only; in Linux you can type:
chmod a-w TOMCATHOME/conf/Catalina/localhost
Voila! You're welcome.
The long answer
For historical reasons, Tomcat allows you to define web resources (JDBC datasources and others) in four
different places (that is, four different files), with a very specific order of precedence if you happen to define the same resource multiple times. The ones named in the
short answer above are the most suitable nowadays for each purpose, though you could still
use the others (nah... you probably don't want to). I'm not going to
discuss the other ones here unless someone asks for it.
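Whichever of those files ends up holding the declaration, the webapp consumes the resource the same way, through JNDI. A minimal sketch, where the resource name jdbc/MyDS is a placeholder:

import java.sql.Connection;
import java.sql.SQLException;

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceLookup {
    // "jdbc/MyDS" is a placeholder; it must match the resource name declared
    // in whichever context file ends up being used.
    public static Connection open() throws NamingException, SQLException {
        InitialContext ic = new InitialContext();
        DataSource ds = (DataSource) ic.lookup("java:comp/env/jdbc/MyDS");
        return ds.getConnection();
    }
}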
On Tomcat 7, even with autoDeploy=false the file will be deleted on undeploy. This is documented and not a bug (although it gets in the way of clean automated deployments with server-side fixed configuration).
I found a workaround which solved the problem for me:
create a META-INF/context.xml file in your webapp that contains
on the Server create a second context "/config-context" in server.xml and put all your server-side configuration parameters there
on the application use context.getContext("/config-context").getInitParameter(...) to access the configuration there (see the sketch after this list).
This allows a per-host configuration that is independent of the deployed war.
It should also be possible to add per-context configurations by adding contexts like "/config-context-MYPATH". In your app you can use the context path of the app to calculate the context path of the config app.
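A minimal sketch of the lookup step, assuming the calling webapp's Context has crossContext="true" (otherwise getContext() returns null). The servlet class and the jdbc.url parameter name are made up for illustration:

import java.io.IOException;

import javax.servlet.ServletContext;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ConfigLookupServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Returns null unless this webapp's Context has crossContext="true".
        ServletContext configCtx = getServletContext().getContext("/config-context");
        if (configCtx == null) {
            resp.sendError(500, "config-context not reachable (is crossContext enabled?)");
            return;
        }
        // "jdbc.url" is a made-up parameter name defined in the config app's context.xml.
        String jdbcUrl = configCtx.getInitParameter("jdbc.url");
        resp.getWriter().println("jdbc.url = " + jdbcUrl);
    }
}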
According to the documentation (http://tomcat.apache.org/tomcat-8.0-doc/config/automatic-deployment.html#Deleted_files), upon redeploy Tomcat detects the deletion (undeploy) of your application, so it starts a cleanup process that deletes the directory and the xml as well. This is independent of auto deployment, so it will also happen upon redeployment through the manager and upon modification of the war. There are 3 exceptions:
global resources are never deleted
external resources are never deleted
if the WAR or DIR has been modified, then the XML file is only deleted if copyXML is true and deployXML is true
I don't know why, but copyXML="false" deployXML="false" won't help.
Secondly: making the directory read-only just makes Tomcat throw an exception, and it won't start.
You can try merging your $CATALINA_BASE/conf/Catalina/localhost/myapp-1.xml, $CATALINA_BASE/conf/Catalina/localhost/myapp-2.xml, etc files into $CATALINA_BASE/conf/context.xml (that works only if you make sure your application won't deploy its own context configuration, like myapp-1.xml)
If someone could tell me what those "external resources" are, that would generally solve the problem.
The general issue as described by the title is covered by Re-deploy from war without deleting context which is still an open issue at this time.
There is an acknowledged distinction between re-deploy which does not delete the context, and deploy after un-deploy where the un-deploy deletes the context. The documentation was out of date, and the manager GUI still does not support re-deploy.
Redeployment consists of two parts: undeployment and deployment.
Undeployment removes the conf/Catalina/yourhost/yourapp.xml because of this Host setting:
<Host name="localhost" appBase="webapps" unpackWARs="true"
autoDeploy="true"> <!-- means autoUndeploy too!!! -->
</Host>
Set autoDeploy="false" and Tomcat no longer has any instruction to remove the conf/Catalina/yourhost/yourapp.xml.
There is a feature that allows us to perform those two steps (undeploy/deploy) as one single step (redeploy) that does not remove the context.xml. This feature is available via the manager text interface, but the option is not available in the manager HTML interface. You might have to wait until the bug in Tomcat is fixed, or you can use the method described in this answer as a workaround.
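For illustration, a minimal sketch (not from the answer above) of driving that text-interface redeploy from Java: an HTTP PUT of the WAR to /manager/text/deploy with update=true performs the single-step redeploy. The host, credentials, and paths are placeholders, and the user needs the manager-script role:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class TextInterfaceRedeploy {
    public static void main(String[] args) throws Exception {
        // Placeholder war path, host, and credentials.
        byte[] war = Files.readAllBytes(Paths.get("target/myapp.war"));
        URL url = new URL("http://localhost:8080/manager/text/deploy?path=/myapp&update=true");

        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("PUT");
        con.setDoOutput(true);
        String auth = Base64.getEncoder().encodeToString("scriptuser:secret".getBytes("UTF-8"));
        con.setRequestProperty("Authorization", "Basic " + auth);

        try (OutputStream out = con.getOutputStream()) {
            out.write(war); // upload the WAR as the request body
        }
        System.out.println("Manager responded: " + con.getResponseCode());
    }
}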