Pages taking too long to load after Maven build - AEM

I am using the following command to deploy code to my AEM instance: mvn clean install -Daem.host=localhost -Daem.port=1202 -Dmaven.test.skip=true
After deployment, pages take far too long to load, at least 7 minutes.
I found no errors or exceptions in the error log.

There could be a couple of factors causing this slowness:
The amount of memory allocated to the AEM instance: the default setting is CQ_JVM_OPTS='-server -Xmx1024m -XX:MaxPermSize=256M -Djava.awt.headless=true', which is actually not sufficient for optimal performance. I have been using double these values, and sometimes even more.
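For example, a minimal sketch of a doubled-up setting in the start script (often crx-quickstart/bin/start; the exact location and the right numbers depend on your AEM version and workload, so treat these values as illustrative rather than a recommendation):

CQ_JVM_OPTS='-server -Xmx2048m -XX:MaxPermSize=512M -Djava.awt.headless=true'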
When you deploy your package with code, the bundles are processed and services are registered. Depending on the number of services/components being registered, the time can go up. Sometimes there are hooks within the code that cause a few system-level bundles to cycle as well; if that happens, all the other bundles that depend on the system bundle will cycle too and re-register their services.
Your code deployment could be triggering some workflow that either consumes a lot of resources or causes delayed activation of your bundle. The first scenario could happen if your deployment contains something like images, which when deployed cause the OOTB image workflow to trigger (there could be others, depending on your code). The second scenario could be that you have a bundle activator either waiting for another bundle that gets deployed later (and/or stays installed and not active), or you are building some sort of cache that waits for pages to be deployed and processed. There are countless such scenarios that can cause this issue.
What you could do is check the status of the bundles in /system/console/bundles pre- and post-deployment; you can identify bundle-related issues there. Another thing you could try is a selective deployment of the code to figure out which module is causing the issue, and then dive deeper into that module.
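If you prefer the command line, the Felix web console also exposes the bundle list as JSON, so you can capture it before and after the deployment and diff the two; a sketch, assuming default admin credentials and the port from the Maven command above:

curl -u admin:admin http://localhost:1202/system/console/bundles.json > bundles-before.json
# ... deploy ...
curl -u admin:admin http://localhost:1202/system/console/bundles.json > bundles-after.json
diff bundles-before.json bundles-after.json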
Also look at recent request logs to trace the flow of a page load and see whether there are services, filters, etc. in the picture that are causing delays.
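In a default setup the request log lives at crx-quickstart/logs/request.log, and each response line ends with the time taken, so a slow render stands out immediately (illustrative entries, not real output):

09/Jun/2021:12:00:00 +0000 [1234] -> GET /content/mysite/en.html HTTP/1.1
09/Jun/2021:12:07:00 +0000 [1234] <- 200 text/html 420017ms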
Let me know if any of these approaches helps you identify the root cause, and in case you need further help, I will be here to assist.

Related

Drools Version 7.52 not using GDST Hit Policy

My Drools project has a number of GDSTs (Guided Decision Tables); those tables were created in the JBPM Workbench with hit policies of either 'First Hit' or 'Rule Order'. After upgrading from version 7.39 to 7.52, those policies are no longer being applied, and this is causing a lot of infinite loops.
Is there any way to debug why this might be happening?
I will continue recompiling my projects at different version levels (7.39, 7.40, etc.) to try to determine where the support for Hit Policies stopped working, but for now I was wondering if anyone else has experienced this issue and how they resolved it.
I am working on creating some JBPM objects that I can post; my actual project has data that I cannot share. The additional details should be ready in a few hours. Sorry, I jumped the gun when posting the question; I know it needs more details, pom files at the very least.
Update Jun 8, 2021
First a quick background:
I am using the JBPM-Server version 7.52 that you can download from the Drools website. Simply run standalone.sh and you will have access to the Business Central Workbench.
Once in the Workbench I created some data objects and a few GDSTs. The GDSTs are created with Hit Policies of either 'First Hit' or 'Rule Order'. Also, when updating a field on the data object, I am setting the 'Update Engine' option.
After the Workbench project has been created, I can download the KJAR artifact and use it as a dependency in a Java API that pulls in data, loads a KieSession with the rules from the artifact plus the data, executes the rules, and processes the responses.
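For context, the consuming side of such an API looks roughly like this (a sketch only: the GAV coordinates and the inserted fact are made up, and it assumes kie-ci is on the classpath so the KJAR can be resolved from the local Maven repository):

import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class RuleRunner {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        // Hypothetical coordinates of the KJAR built in the Workbench.
        ReleaseId releaseId = ks.newReleaseId("com.example", "my-kjar", "1.0.0");
        KieContainer container = ks.newKieContainer(releaseId);
        KieSession session = container.newKieSession();
        try {
            session.insert(new Object()); // the real API inserts its loaded data objects
            session.fireAllRules();       // with the 7.52 regression, this is where the loop occurs
        } finally {
            session.dispose();
        }
    }
}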
I know that the 'Update Engine' option, in combination with the loss of support for the Hit Policies, is causing the infinite loops.
I have also determined that this stopped working at version 7.52, everything works perfectly from versions 7.39 all the way through 7.51.
The only difference I have found between the two versions is that at 7.52 the 'source' for the GDST now adds an activation-group attribute related to the hit policy.
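For illustration, the generated DRL for one row of a 'First Hit' table might carry the attribute along these lines (the table, rule, and fact names here are invented; check your own generated source for the exact form):

rule "Row 1 pricing-table"
    activation-group "gdst-pricing-table"
    when
        $o : Order( total > 100 )
    then
        modify( $o ) { setDiscount( 10 ) }
end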

Deploying new PHP code when running Opcache

We are attempting to deploy new PHP code via Capistrano while running Opcache.
Capistrano creates a new deploy directory each time you deploy, then adjusts a symlink so that the webserver points to the new directory. Because Opcache caches by the real path of the file, the newly deployed version of the site is cached completely separately from the old one.
The problem we are running into is that Opcache runs out of memory, because each new deploy causes the full code base to be cached again and the old code is never evicted. We could call opcache_reset(), but when the cache is reset we briefly get 500 errors while the caches stampede. (We would also have the same errors if we tried to launch a new deploy without warming up the cache.)
Is there a better way to handle this? Some way to launch the new code without filling up opcache until it runs out of memory (or empties itself because it has too many files), so that we can avoid calling opcache_reset() on the live site? We are using (or trying to transition to, anyway) Nginx as our web server, with PHP-FPM handling the PHP requests.
An option would be to call opcache_invalidate() for each of the files in the old version of the site at the end of the deployment. You could prevent a cache stampede by including each file right after invalidating it.
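A minimal sketch of that idea, assuming a Capistrano-style layout (the release paths are hypothetical, and the warming step via opcache_compile_file() is an addition to the suggestion above):

<?php
// Hypothetical Capistrano release directories.
$oldRelease = '/var/www/app/releases/20210501';
$newRelease = '/var/www/app/current';

// Drop every cached script of the previous release to free opcache memory.
foreach (phpFilesUnder($oldRelease) as $path) {
    opcache_invalidate($path, true); // force, even if timestamps match
}

// Warm the new release so live traffic never hits a cold cache.
foreach (phpFilesUnder($newRelease) as $path) {
    opcache_compile_file($path);
}

function phpFilesUnder(string $dir): iterable
{
    $it = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($it as $file) {
        if ($file->getExtension() === 'php') {
            yield $file->getRealPath();
        }
    }
}

Note that this has to run through PHP-FPM itself (for example via an internal deploy endpoint), because the CLI keeps its own separate opcache.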
A second option would be to set up FPM with multiple pools and restart them one by one (they'll start with a clean opcache). You'll somewhat prevent the cache stampede, because only one pool will have a clean cache at any given time, and the application will stay up because nginx will be able to balance the load across the various pools.
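On the nginx side that could look roughly like this (socket paths and pool names are invented for the example):

upstream php_backend {
    # Two PHP-FPM pools; restart them one at a time during a deploy.
    server unix:/run/php-fpm/pool-a.sock;
    server unix:/run/php-fpm/pool-b.sock;
}

server {
    location ~ \.php$ {
        fastcgi_pass php_backend;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

During a deploy you would restart pool-a, let it warm up under live traffic, then restart pool-b.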
Another option is to delete the old versions of the scripts, so that opcache clears them from the cache once the revalidate_freq interval has passed, forcing it to load the new files from the filesystem.
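That relies on timestamp validation being enabled; the relevant php.ini directives look like this (values are illustrative):

opcache.validate_timestamps = 1
opcache.revalidate_freq = 60    ; recheck file timestamps at most every 60 seconds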

Eclipse Kepler and JBoss WildFly hot deployment

I am trying to use Eclipse Kepler for Java EE 7. I already installed JBoss Tools and added JBoss WildFly successfully as a server. However, my changes are not automatically deployed. Is there any way the app can be deployed automatically, just as when using GlassFish?
Using Eclipse, double-click your WildFly server to edit the following properties:
Publishing: choose "Automatically publish after a build event". I like to change the publishing interval to 1 second too.
Application Reload Behavior: check the "Customize application reload ..." checkbox and edit the regex pattern to \.jar$|\.class$
That's it. Good luck!
Both @varantes and @Sean are essentially correct, but those answers are not complete.
Unfortunately, the only way to get full, zero-downtime hot deployment in a Java server environment is to use the paid JRebel or the free spring-loaded tool.
But for small projects there are some ways to speed up work through partial hot deployment. Essentially:
With the option "Automatically publish when resources change" enabled, changes inside *.html and *.xhtml files are reflected immediately, as soon as you refresh the browser.
To make hot deployment work for *.jsp files too, you should make the following change inside ${wildfly-home}/standalone/configuration/standalone.xml: replace
<jsp-config/>
with
<jsp-config development="true"/>
then restart the server and enjoy hot deployment of web files.
But when modifying *.java source files, only partial hot deployment is possible. As @varantes stated in his answer, enabling Application Reload Behavior with the regex pattern set to \.jar$|\.class$ is an option, but it has a serious downside: the whole module is restarted, thus:
It takes some time (depending on how big the module is).
The whole application state is lost.
So personally, I discourage this solution. The JVM supports (in debug mode) code-swapping of method bodies. So as long as you are modifying only the bodies of existing methods, you are at home (zero downtime, changes reflected immediately). But you have to disable automatic publishing in the server settings, otherwise the application's state will still be destroyed by that republish.
But if you are heavily reshaping the Java code (adding classes, annotations, constructors), then unfortunately I can only recommend setting publishing to "Never publish automatically" (or shutting the server down), and when you finish your work on the Java files, restarting your module by hand (or turning the server back on). Up to you.
This works for small Java projects, but for bigger ones JRebel is invaluable (or just spring-loaded), because all the approaches described above are not sufficient. It is also because of such problems that solutions like Rails/Django/Play! Framework gained such huge popularity.
I am assuming you are using the latest version of WildFly (8.0 Beta 1 as of writing).
In the standalone.xml config file, look for <jsp-config/>. Add the attribute development="true" and it should hot-deploy. The resulting config will look like this:
<jsp-config development="true"/>
Add the attributes (development, check-interval, modification-test-interval, recompile-on-fail) in the configuration file at XPath //servlet-container/jsp-config:
<servlet-container name="default" default-buffer-cache="default" stack-trace-on-error="local-only">
<jsp-config development="true" check-interval="1" modification-test-interval="1" recompile-on-fail="true"/>
</servlet-container>
(It works in WildFly-8.0.0.Final)
Start the server in debug mode and it will track changes inside methods. For other changes it will ask you to restart the server.

Optimize workflow for Front End development on Java Resin Project

I started a new job a couple of months ago. I work as a front-end developer in a company where, up until now, everyone was using classic development patterns, but the goal is to move to a new ajax/REST-services approach, and that's what I do.
In our local development environment our apps run on Resin, which runs inside Eclipse, and they get deployed as WAR files to C:\Resin\resin-pro-4.0.27\webapps.
My problem is that I work mostly on CSS, HTML, and JS files, static resources, so I shouldn't need to restart Resin and wait 15 seconds (when it doesn't crash) to see the effect of every little piece of code I change.
The other problem is that I need to edit some files in external editors (Sublime Text for JS, Crunch for LESS); I managed to make Eclipse open the external editor, but even with the "Refresh using native hooks or polling" build option it takes a while for Eclipse to realize the files have changed and restart Resin.
I also tried just working on the unpacked WAR in C:\Resin\resin-pro-4.0.27\webapps\appname, but even there it takes about a minute before you can see the changes in the browser (is there some caching going on in the server? Can I disable it?).
I welcome any suggestion, as all of this is really hurting my productivity.
Inside resin.xml, under <host><web-app>, add:
<cache-mapping url-pattern="*.js" expires="0s"/>
<cache-mapping url-pattern="*.css" expires="0s"/>
<cache-mapping url-pattern="*.htm" expires="0s"/>
<cache-mapping url-pattern="*.html" expires="0s"/>
This used to work for me (in resin.xml)
<!--
- For production sites, change dependency-check-interval to something
- like 600s, so it only checks for updates every 10 minutes.
-->
<dependency-check-interval>2s</dependency-check-interval>
Also check resin.properties for a variable definition in newer versions.
However I'm currently having problems picking up changes without a full redeploy.

Automated deployment of Check Script for Nagios

We currently use Ant to automate our deployment process. One of the tasks that needs to be carried out when setting up a new service is to implement monitoring for it.
This involves adding the service under one of the hosts in the Nagios configuration directory.
Has anyone attempted to implement such a thing in a fully automated way? It seems that the Nagios configuration is laid out so that the files are split up per host, as opposed to per application.
For example:
localhost.cfg
This may cause an issue for an automated solution, since I'm setting up the monitoring at the same time as I'm deploying the application to the environment (i.e. the host). It's like a jigsaw puzzle where two pieces don't quite fit together. Any suggestions?
OK, you could say that you really only need to carry out the setup of the monitor once, but I want the developers to have the power to update the check script when the testing criteria change, without too much involvement from Operations.
Anyone have any comments on this?
Kind Regards,
Steve
The splitting of Nagios configuration files is optional; you can have it all in one file if you want, or split it up into several files as you see fit. The cfg_dir configuration statement can be used to have Nagios pick up any .cfg files found in a directory.
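As a sketch, nagios.cfg can point at a directory that your Ant deploy drops one file per application into (the paths, host, and service definitions here are illustrative, and check_http must already be defined as a command that accepts arguments):

# in nagios.cfg
cfg_dir=/usr/local/nagios/etc/services.d

# dropped in by the deploy as /usr/local/nagios/etc/services.d/myapp.cfg
define service {
    use                 generic-service
    host_name           apphost01
    service_description MyApp HTTP check
    check_command       check_http!-p 8080
}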
When the configuration files have changed, you'll have to reload the configuration in Nagios. This can be done via the external command pipe.
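For example, along these lines (the path to the command file varies by installation; /usr/local/nagios/var/rw/nagios.cmd is a common default):

printf "[%s] RESTART_PROGRAM\n" "$(date +%s)" > /usr/local/nagios/var/rw/nagios.cmd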
Nagios also provides a configuration validation mode, so you can verify that your new configuration is OK before loading it into the live environment.
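Typically that is the -v flag against the main configuration file (adjust the paths to your install):

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg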