Liberty CWWKZ0002E and ZipError when deploying webapps from Eclipse

We are deploying 3 webapps to a Liberty server (16.0.0.4) and frequently get ZipErrors and messages like the one below (this is fairly easy to reproduce):
[ERROR ] CWWKZ0002E: An exception occurred while starting the application XYZ. The exception message was: com.ibm.ws.container.service.state.StateChangeException: java.util.zip.ZipError: jzentry == 0,
jzfile = 693877616,
total = 1148,
name = C:\opt\IBM\wlp\usr\servers\XYZ\workarea\org.eclipse.osgi\220\data\cache\com.ibm.ws.classloading.sharedlibrary_84.cache\lib\db2jcc.jar,
i = 329,
message = null
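For context, the db2jcc.jar in that path comes from a shared library; in server.xml such a library is wired up roughly like the following sketch (ids, paths and application names here are placeholders, not our exact configuration):

<!-- server.xml sketch: shared library containing the DB2 JDBC driver -->
<library id="DB2JccLib">
    <fileset dir="C:/opt/IBM/drivers" includes="db2jcc.jar"/>
</library>

<!-- each webapp references the library via its classloader; this is presumably
     what populates com.ibm.ws.classloading.sharedlibrary_*.cache in the workarea -->
<webApplication id="XYZ" location="XYZ.war" name="XYZ">
    <classloader commonLibraryRef="DB2JccLib"/>
</webApplication>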
It seems to be erratic, but it is easier to reproduce on slower machines, so I suspect a race condition; or it may be as simple as the cache clearing during a server Clean being non-blocking?
Deploying the webapps as war files, rather than linking them back to the projects via XML files, does not exhibit this problem.
I've tried a beans.xml with bean-discovery-mode="all", with no effect. We use injection of different classes in two of the three web applications.
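For reference, a minimal beans.xml with that setting is just the standard CDI 1.1 descriptor with nothing application-specific in it; a sketch:

<?xml version="1.0" encoding="UTF-8"?>
<!-- WEB-INF/beans.xml: enable discovery of all classes as CDI beans -->
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       version="1.1"
       bean-discovery-mode="all">
</beans>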
Note the directory number in the path to the cache differs from run to run.
This has been present since at least version 16.0.0.2 of Liberty. Is there a workaround for this problem, or does anyone know if it will be fixed in the December release?

The problem went away after migrating the projects to newer releases of Eclipse, so I'm closing this.

Related

stale deployment in WAS

I'm learning to use IBM WebSphere v8.5.5 for Java EE development, with MyEclipse as the development environment.
Sometimes when I debug my project from inside MyEclipse, I see errors like this:
00000091 ComponentMeta E WSVR0603E: ComponentMetaDataAccessor beginContext method received a NULL ComponentMetaData.
00000091 AlarmListener E SCHD0063E: A task with ID 953 (mi-online-app-1.0.0-SNAPSHOT#my-online-services-1.0.0-SNAPSHOT.jar#PaymentResultCheckServiceBean) failed to run on Scheduler WebSphere_EJB_Timer_Service (WebSphere_EJB_Timer_Service) because of an exception: java.lang.NullPointerException.
But actually, there is no PaymentResultCheckServiceBean in my project; it used to be there, but I renamed it.
When I develop in MyEclipse, I add a WAS server. Sometimes, if anything goes wrong, I simply delete the whole workspace from disk and create a new one. So I'm wondering whether this leaves stale data in the WAS folders, so that when WAS starts, the old deployment still runs. But I've checked C:\IBM\WebSphere85\AppServer\profiles\AppSrv01\installedApps\myserver and there's nothing there.

Cannot start WebLogic from NetBeans after Windows 7 BSOD

After a BSOD in Windows 7, something got corrupted and NetBeans can no longer start a local WebLogic 10 server that it previously had no problems with.
There is also a peculiar message appearing in the NetBeans notifications, with a message and stack trace almost identical to those reported in a filed NetBeans bug report:
java.lang.IllegalArgumentException: hostname can't be null
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:139)
at org.netbeans.modules.weblogic.common.api.WebLogicRuntime.ping(WebLogicRuntime.java:623)
at org.netbeans.modules.weblogic.common.api.WebLogicRuntime.ping(WebLogicRuntime.java:612)
at org.netbeans.modules.weblogic.common.api.WebLogicRuntime.isRunning(WebLogicRuntime.java:500)
at org.netbeans.modules.j2ee.weblogic9.optional.WLStartServer.isRunning(WLStartServer.java:124)
at org.netbeans.modules.j2ee.deployment.impl.ServerInstance$3.run(ServerInstance.java:902)
at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:1443)
at org.netbeans.modules.openide.util.GlobalLookup.execute(GlobalLookup.java:68)
at org.openide.util.lookup.Lookups.executeWith(Lookups.java:303)
[catch] at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:2058)
I have tried multiple restarts since then, including multiple NetBeans restarts, and even both NetBeans 8.1 and 9: when starting WebLogic, NetBeans just gets stuck on the "Starting ..." message and never finishes. You actually have to forcefully close NetBeans to stop it.
At some point I tried to start WebLogic outside NetBeans and saw that it was failing to start, with a message approximately like this one from this Oracle forum thread:
<Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason: [Management:141268]Parsing Failure in config.xml on line 1, column 1: Content is not allowed in prolog.>
Well, indeed there is a directory called "config" inside the WebLogic domain I was trying to start, with a config.xml file in it (and a config.loc; deleting that made no difference). I looked into config.xml and, wow, it was totally corrupted, so that was the reason nothing started. I tried simply deleting config.xml (in the Oracle forum thread above they also suggest deleting the domain, but that was not an option for me as it wasn't an integrated server). After trying to start the server manually again, the WebLogic start script politely asked whether I wanted it to create a new default config.xml, since the old one was not found. Answering yes worked and I am back in business (I hope :P).
Of course, I lost some (EDIT: not all) of the custom settings I had made in the WebLogic configuration since installing it: I lost the custom data sources, but users and user groups were retained. If only I had backed up this config.xml :( Anyway.
(EDIT: backing up config.xml probably wouldn't have made sense anyway; I think it keeps the reference to the data source for all the deployed applications, so you don't have to recreate a data source, you just have to redeploy your application(s).)
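For illustration, a data source typically appears in config.xml only as a reference to a separate JDBC descriptor; the names below are made up:

<!-- fragment of DOMAIN_HOME/config/config.xml, illustrative names only -->
<jdbc-system-resource>
  <name>MyDataSource</name>
  <target>AdminServer</target>
  <descriptor-file-name>jdbc/MyDataSource-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>

The actual connection settings live in the referenced descriptor file under config/jdbc, which is separate from config.xml.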

Pages taking too long to load after Maven build

I am using the following command to deploy code to my AEM instance: mvn clean install -Daem.host=localhost -Daem.port=1202 -Dmaven.test.skip=true
After deployment, pages take too long to load, at least 7 minutes.
I found no errors/exceptions in the error log.
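The -Daem.host and -Daem.port properties above are presumably consumed by a content-package deployment plugin in the pom; a hypothetical wiring (plugin choice and parameter names are assumptions, not taken from this project) would look something like:

<!-- hypothetical pom.xml fragment: -Daem.host / -Daem.port feed the deploy URL -->
<plugin>
  <groupId>com.day.jcr.vault</groupId>
  <artifactId>content-package-maven-plugin</artifactId>
  <configuration>
    <targetURL>http://${aem.host}:${aem.port}/crx/packmgr/service.jsp</targetURL>
    <userId>admin</userId>
    <password>admin</password>
  </configuration>
</plugin>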
There could be a couple of factors causing this slowness:
The amount of memory allocated to the AEM instance. The default setting is CQ_JVM_OPTS='-server -Xmx1024m -XX:MaxPermSize=256M -Djava.awt.headless=true', which is not sufficient for optimal performance; I have been using double this configuration and sometimes even more.
When you deploy your package with code, the bundles are processed and services are registered. Depending on the number of services/components being registered, the time can go up. Sometimes there are hooks within the code that cause a few system-level bundles to cycle as well; if that happens, all the other bundles that depend on the system bundle cycle too and register their services again.
Your code deployment could be triggering some workflow that either consumes a lot of resources or is causing delayed activation of your bundle. The first scenario could happen if your deployment contains something like images, which when deployed trigger the OOTB image workflow (there could be others, depending on your code). The second scenario could be that you have a bundle activator either waiting for another bundle that gets deployed later (and/or stays installed but not active), or you are building some sort of cache that waits for pages to be deployed and processed. There are countless such scenarios that can cause this issue.
What you could do is check the status of the bundles in /system/console/bundles before and after deployment; you can identify bundle-related issues there. Another thing you could try is selective deployment of the code to figure out which module is causing the issue, and then dive deeper into that module.
Also look at the recent request logs to trace the flow of a page load and see whether there are services, filters, etc. involved that are causing delays.
Let me know if any of these approaches helps you identify the root cause; if you need further help, I'll be here to assist.

Apache Felix not binding my configuration correctly - wrong inputstream version

I had a bundle deployed in an Apache Felix (Sling, in fact) host. The bundle contained some configurable elements, and its version was 2.0.
I have updated the bundle to v2.0.1 for some small code changes, and now the bundle will not pick up its configuration correctly - it remains at the defaults set in code rather than picking up the values configured in the Felix Web Console.
There is an error message in the log: "[Configuration Updater] org.apache.felix.scr Cannot use configuration pid=com.mypackage.MyClass for bundle inputstream:my-bundle-2.0.1.jar because it belongs to bundle inputstream:my-bundle-1.0.jar" which sounds like the cause of the issue.
However:
1. I can't edit the inputstream value through the web interface, only by stopping the server, editing the config file manually, and restarting. Surely when I update the bundle, the config should be updated too?
2. Although the inputstream specifies v1.0, the bundle did not have a problem when it was upgraded to v2.0. What has made the difference here?
3. I have done the same thing (though perhaps not exactly!) on two servers, and one server seems to have the config specify inputstream=v2.0 (with the bundle at v2.0.1) and it works fine. What caused the inputstream version to update on this server? (Presumably the same as the answer to 2; I imagine it depends on exactly which steps in the process were executed and in what order.)
Any advice gratefully received - I haven't been able to find any documentation that gives instructions or troubleshooting suggestions for administering bundles through the Felix Web Console.
If at all possible, I would simply stop and remove the bundle altogether and install it using Sling, e.g. with the maven-sling-plugin or by dropping it into the /apps/myapp/install folder using WebDAV.
I find it easiest to be consistent this way; the installation is nicely automated and it handles bundle upgrades properly.
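A minimal sketch of the maven-sling-plugin wiring (the URL and credentials shown are the Sling defaults; adjust for your instance):

<!-- pom.xml fragment (sketch): install the bundle into a running Sling/Felix instance -->
<plugin>
  <groupId>org.apache.sling</groupId>
  <artifactId>maven-sling-plugin</artifactId>
  <executions>
    <execution>
      <id>install-bundle</id>
      <phase>install</phase>
      <goals>
        <goal>install</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <slingUrl>http://localhost:8080/system/console</slingUrl>
    <user>admin</user>
    <password>admin</password>
  </configuration>
</plugin>

With the goal bound to the install phase as above, a regular mvn clean install pushes the updated bundle to the running instance.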

SpringSource dm Server occasionally fails to unpack valid ZIP file

When deploying my project to SpringSource dm Server, every once in a while a JAR fails to deploy with the following message:
/mnt/myproject/springsource/work/com.springsource.server.deployer/packed/my.project.0.1.10.M.jar' cannot be unpacked.
java.util.zip.ZipException: error in opening zip file
There are 5 .war files in the project. If one of them fails, it's always the same one (which is also the last one to be copied into the pickup directory). However, usually all 5 will deploy without issues. It is the exact same set of files in all instances, taken from a Maven repository, just deployed to new server instances.
The file that fails can be opened just fine by 7-Zip. If I stop Spring, clear the pickup directory, start Spring and copy the .war files to pickup again, it will usually work.
The usual deployment process is:
Start Spring
Wait until it reports Open for business with profile 'web'
Copy all 5 projects with a 2 second delay between each copy (scripted).
Similar issues java-util-zip-zipexception-error-in-opening-zip-file and jboss5-cannot-deploy-due-to-java-util-zip-zipexception-error-in-opening-zip-fil do not seem to apply.
You don't say which version of dm Server you are running, so I would recommend upgrading to 2.0.x to pick up fixes, if you haven't already. You may also like to upgrade to Eclipse Virgo, which is the continuation of the dm Server project.
My guess is that the heuristic in dm Server for determining when a file copy into pickup has completed is playing up, possibly due to a slow or erratic copy operation. Is there anything unusual about your disk, such as encryption or a remote mount, which may interfere with the copy operation?
One way to rule out the heuristic would be to place the files in the pickup directory when dm Server is not running and then start dm Server when the copy operation has definitely completed. If the problem reproduces, then there may be a problem in the JRE you are using.