AEM author instance is not responding

My author instance's CRXDE is taking too long to open index.jsp. I was going through a few solutions and someone mentioned going to CRXDE.ini and allocating more memory, but I couldn't find the file. Can anyone suggest another way to improve CRX performance?

You can delete or archive the older log files, and archive, remove, or compress the old tar files. You can also run tar compaction at regular intervals to prevent tar files from building up. Here is a maintenance link for the same.
Hope this helps.
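If your instance runs on AEM 6.x with TarMK, that tar compaction is normally done offline with the oak-run tool. A rough sketch, assuming the default segmentstore location and an oak-run version that matches your AEM's Oak version (stop AEM before running it):
java -Xmx4g -jar oak-run-<version>.jar compact crx-quickstart/repository/segmentstore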

The easiest way to increase memory is to go into the /crx-quickstart/bin directory and edit the start.bat file.
There you should find the line
if not defined CQ_JVM_OPTS set CQ_JVM_OPTS=-Xmx1024m -XX:MaxPermSize=256M -Djava.awt.headless=true
Simply edit it by increasing the -Xmx and -XX:MaxPermSize values.
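For example, to allow the author instance a 2 GB heap, the edited line could look like this (the numbers are only illustrative; size them to your machine's RAM):
if not defined CQ_JVM_OPTS set CQ_JVM_OPTS=-Xmx2048m -XX:MaxPermSize=512M -Djava.awt.headless=true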
However, it would be better to run your AEM instance from a bat file you create yourself. In such a file you can declare the -Xms and -Xmx Java options, which control how much memory is allocated. Look at the example below:
%JAVA_HOME%\bin\java -Xms512M -Xmx2048M -jar cq-quickstart-6.5.0.jar -verbose -r author,devlocal

Related

How to delete the deploy/images/beaglebone dir in Yocto

In my Yocto build the deploy/images/beaglebone directory is nearly 100 GB, so I want to free that space.
How can I delete that deploy directory, either manually or from the command line?
I want to clean all images (*.tar.gz, *.sdcard, *.ubifs) from previous builds in deploy/images/beaglebone/.
If you already have 100 GB in the deploy directory, things have probably gone too far.
Check your IMAGE_FSTYPES variable. In my experience it is safe to delete any image files that are not symlinks or symlink targets. Keep the most recently generated ones, so you don't break the links to the last build, and keep anything related to bootloaders and configuration files, as they may not be regenerated easily.
If you are keeping more than one build with the same set of layers, then you can use a common download folder for builds.
DL_DIR ?= "common_dir_across_all_builds/downloads/"
Afterwards, to keep your deploy directory clean:
RM_OLD_IMAGE: Reclaims disk space by removing previously built versions of the same image from the images directory pointed to by the DEPLOY_DIR variable. Set this variable to "1" in your local.conf file to remove these images:
RM_OLD_IMAGE = "1"
IMAGE_FSTYPES: Remove the image types that you do not plan to use; you can always enable a particular one when you need it:
IMAGE_FSTYPES_remove = "tar.bz2"
IMAGE_FSTYPES_remove = "rpi-sdimg"
IMAGE_FSTYPES_remove = "ext3"
For tmp/work, you do not need the work files of all recipes. With rm_work you can specify which ones you want to keep around during development:
RM_WORK_EXCLUDE:
With rm_work enabled, this variable specifies a list of recipes whose work directories should not be removed. See the "rm_work.bbclass" section for more details.
INHERIT += "rm_work"
RM_WORK_EXCLUDE += "home-assistant widde"
Try this from your build root: rm -fr deploy/images. Here is a good discussion on the topic.
I just removed the files manually, like below:
1. Go to build/deploy/images/beaglebone.
2. Run ll: you will find the symlinks to the timestamped rootfs files, like
...-20170811091521.rootfs.tar.gz
...-20170811091521.rootfs.sdcard etc.
3. Don't delete the most recently compiled files. Apart from those, you can remove all the *.tar.gz, *.sdcard and *.ext4 files manually, like below:
4. rm beaglebone-20170811091521.rootfs.tar.gz
rm beaglebone-20170811091521.rootfs.sdcard
rm beaglebone-20170811091521.rootfs.ext4 etc.
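If you'd rather script the cleanup, here is a rough sketch that removes timestamped image artifacts while keeping whatever the current symlinks still point to (the path and patterns are illustrative; review the rm -v output carefully):
cd build/deploy/images/beaglebone
for f in *.rootfs.tar.gz *.rootfs.sdcard *.rootfs.ext4; do
    [ -e "$f" ] || continue     # skip patterns that matched nothing
    [ -L "$f" ] && continue     # keep the symlinks themselves
    # keep files that are still the target of a symlink (i.e. the latest build)
    if ! find . -maxdepth 1 -type l -lname "$f" | grep -q .; then
        rm -v -- "$f"
    fi
done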

Where is the Trash directory?

The trash spec tells me that the Trash directory is here: $XDG_DATA_HOME/Trash
Looking at my environment variables on my Linux Mint system, I find a bunch of XDG stuff, but no XDG_DATA_HOME
I've done some looking, but so far I have not been able to locate the Trash directory. Where is it?
Your home trash directory MUST be available and defined.
By default it's usually under ~/.Trash or ~/.local/share/Trash.
You can echo $XDG_DATA_HOME to display it; if you get nothing, you can set it yourself:
XDG_DATA_HOME=/usr/local/share/
export XDG_DATA_HOME
and
XDG_DATA_DIRS=/usr/local/share/
export XDG_DATA_DIRS
For details see setting XDG_DATA_DIRS and XDG_DATA_HOME.
I also recommend trash-cli as an alternative to rm; it's the command-line interface to the FreeDesktop.org Trash. See:
see https://pypi.python.org/pypi/trash-cli/0.12.9.14
https://github.com/andreafrancia/trash-cli
Your home trash directory MUST be available and defined.
By default it's usually under ~/.Trash or ~/.local/share/Trash.
You can echo $XDG_DATA_HOME to display it; if you get nothing, you can set it yourself.
First, it is impossible to set something you cannot find in the first place.
Secondly, env | grep XDG does not return any XDG_DATA_HOME variable, so that provided no help whatsoever. Thirdly, a Google search on e.g. "where is linux Trash folder stored" does indeed turn up results, namely this page and others like it. Search engines are source referrers, not source providers: if someone hasn't already posted it somewhere, it won't show up in Google or anywhere else. Suggesting a Google search as an answer is not helpful.
So indeed, find / -iname trash will find it (I recommend adding 2>/dev/null to suppress the errors for inaccessible files), but novices have a lot of trouble with find's syntax.
So yes, it is usually ~/.Trash or ~/.local/share/Trash.
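In practice the spec falls back to ~/.local/share when XDG_DATA_HOME is unset, so a quick way to check where your trash actually lives is:
echo "${XDG_DATA_HOME:-$HOME/.local/share}/Trash"
ls "${XDG_DATA_HOME:-$HOME/.local/share}/Trash"    # normally contains files/ and info/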
As for trash-cli, yes very helpful, but the correct instructions for it are:
sudo apt install trash-cli -y
alias rm=trash-put
alias rm >> ~/.bashrc   (this appends the printed definition, alias rm='trash-put', to your ~/.bashrc; you can also use >> ~/.bash_aliases)
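Once installed, the other commands that ship with trash-cli are handy too (trash-restore may be called restore-trash in older releases):
trash-put somefile.txt     # move a file to the trash instead of deleting it
trash-list                 # show what is currently in the trash
trash-restore              # interactively restore a trashed file
trash-empty 30             # purge items that have been in the trash more than 30 days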
Now, I would like to know: if I set XDG_DATA_HOME to /tmp, will trashing a file move it to /tmp instead? The concept of a Trash folder is great, but I'd like a little more sophistication, like an Archive folder where I could archive-put little-used files that I still want to keep but out of my main folders to eliminate clutter. I'm no Linux novice, but I do have limited time, so that is why we collaborate: I save you time, you save me time. I hope. Less is more, more or less.

Is there a way to run storage statistics on a Yocto-produced filesystem?

I used Yocto to build a filesystem, using a .bbappend of core-image-minimal. Two questions:
How can I figure out which packages are taking up the most storage space on the rootfs?
I can't think of a way other than to look into the ${D} of every package and see how big its components are. There has got to be a more systematic and intelligent way to do that.
From what I can decipher from the manifest, there is nothing related to the size of the packages being included.
Also, removing some of the packages I added via IMAGE_INSTALL seems to remove the package, but the size of the built image doesn't change!
I compared the size of a particular .so on the build machine and on the installation device (a VM) and found that the size on the device was 20-30% of the original size seen on the build machine. Any explanation?
Thanks!
1) One way is to enable buildhistory, by adding the following to local.conf:
INHERIT += "buildhistory"
BUILDHISTORY_COMMIT = "1"
This will create a directory (git repo) buildhistory in your $BUILDDIR. There you'll be able to find e.g.
images/$MACHINE/eglibc/$IMAGE/installed-package-sizes.txt
That file will give you the sizes of all installed packages.
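A few lines from that file look roughly like this (the names and numbers below are made-up examples; the real file lists every installed package with its installed size):
6096 KiB eglibc
1236 KiB busybox
412 KiB libstdc++6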
There are a lot more things you can learn from buildhistory; see the buildhistory introduction.
2) Where did you compare the particular .so file? If it was the one from the package's ${B} (i.e. where the library is built), that's not surprising, as the installed .so file will be stripped. The debug information is split out into the -dbg package (the debug info is usually useless on the target, and the smaller size is of much higher importance).
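You can see this for yourself with the file utility; a quick sketch (the library name and paths are hypothetical):
file tmp/work/<arch>/libexample/1.0-r0/build/libexample.so.1    # copy in the build tree: output ends with "not stripped"
file /usr/lib/libexample.so.1                                   # copy on the target: output ends with "stripped"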
After poking around the scripts/ subdirectory and some googling about the existing scripts, it turns out that the good people of Yocto ship scripts for exactly this out of the box:
scripts/tiny/dirsize.py and ksize.py.
dirsize.py will give you a size breakdown of your rootfs, while ksize.py will give you the equivalent info for the kernel.
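A rough sketch of how one might run it against an image's extracted rootfs (I haven't verified the exact arguments, so check the script's header comments for usage; the paths depend on your machine and image):
cd tmp/work/<machine>/<image>/1.0-r0/rootfs
python3 <path-to-poky>/scripts/tiny/dirsize.py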

How to reduce GWT war file size

I am using ExtGWT. My application has 5 modules. In the war folder all five modules are compiled and placed, but the resources folder is common to every module. My intention is to keep the resources folder shared so that the generated war size can be decreased. Please suggest how.
Thanks,
David
Perhaps not exactly what you are asking for, but I guess you don't want to upload everything every time, since the amount of data is quite large.
I do it this way:
- DON'T create a war file.
- Simply use rsync to incrementally deploy the contents of the war directory of your GWT project, like this:
rsync -avc --compress --progress --delete --rsh='ssh' --cvs-exclude ./war root@serverip:/usr/share/tomcat7/webapps/ROOT/
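If the --delete flag makes you nervous, you can preview the transfer first by adding rsync's dry-run flag and leaving everything else the same:
rsync -avcn --compress --delete --rsh='ssh' --cvs-exclude ./war root@serverip:/usr/share/tomcat7/webapps/ROOT/
The -n (--dry-run) option lists what would be transferred or deleted without changing anything on the server.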
This way only newer files get uploaded to the server, and old files that are no longer needed get deleted from the server.
Hope this helped you.

Huge number of JAR files in jboss/server/web/tmp/vfs-nested.tmp directory

Sometimes we have a huge number of JAR files in the jboss/server/web/tmp/vfs-nested.tmp directory.
For example, today this directory contained over 350k jar files.
But on other hosts there are only 2 jar files in this directory.
What can be the root cause of this problem?
We use JBoss 5.1
UPDATE:
I found the following information in the release notes for JBoss 5.1.0.GA:
JBoss VFS provides a set of different switches to control its internal behavior. JBoss AS sets jboss.vfs.forceCopy=true by default. To see all the provided VFS flags check out the code of the VFSUtils.java class.
So I do not understand what I should set.
Should I set -Djboss.vfs.forceNoCopy=true or -Djboss.vfs.forceCopy=false?
Or should I set both of them?
UPDATE 1:
I have read the entire thread http://community.jboss.org/thread/2148?start=0&tstart=0
and now I am not sure that I should change either jboss.vfs.forceCopy or jboss.vfs.forceNoCopy.
According to this thread I would get an OutOfMemory error instead of a huge number of files in the tmp dir.
From here: http://sourceforge.net/project/shownotes.php?release_id=575410
"Excessive nestedjarNNN.tmp files in the tmp directory. The VFS unwraps nested jars by extracting the nested jar into a tmp file in the java tmp directory. This can result in a large number of files that fill up the tmp directory. You can disable this behavior by setting -Djboss.vfs.forceNoCopy=true on command line used to start jboss. This will be enabled by default in a future release, JBAS-4389."
jskaggz has a good answer. In addition, I have this at the beginning of my run.bat file:
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\tmp
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\work
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\log
mkdir c:\apps\jboss-5.1.0.ga\server\default\tmp
mkdir c:\apps\jboss-5.1.0.ga\server\default\work
mkdir c:\apps\jboss-5.1.0.ga\server\default\log
echo --- Cleared temp folders ---
I've had problems with old copies of classes hanging around, so this seems to help.
We have solved this problem by using exploded deployment (works for war and ear files), as described in the JBoss documentation: http://docs.jboss.org/jbossas/docs/Administration_And_Configuration_Guide/5/html/ch03s01.html
That way VFS is not used.
I had the same issue described above in production and resolved it with the following solution.
I added the following Java options:
-Djboss.vfs.cache=org.jboss.virtual.plugins.cache.IterableTimedVFSCache
-Djboss.vfs.cache.TimedPolicyCaching.lifetime=1440
My setup also defines additional deployment directories, so I needed to add these additional directories to the vfs.xml file located in $JBOSS_SERVER_HOME/conf/bootstrap/ in order to see the benefit.
The lifetime setting is, I think, in minutes, so I set it to a day, as I have a scheduled restart of the server overnight.
Prior to finding this solution I had also tried using -Djboss.vfs.forceNoCopy=true and -Djboss.vfs.forceCopy=false.
This appeared to work, but I noticed the application ran a lot slower, presumably because these settings turn VFS caching off.
My JBoss version is jboss-5.1.0.GA and my application runs in a cluster in production.
I found a lot of others having the same problem in cluster (or farm) environments.
https://issues.jboss.org/browse/JBAS-7126 describes solving the problem by using a farm directory as the deployment directory.
I had the same problem when using a 2nd deploy directory.
The jar files from my applications in this 2nd deploy directory kept getting copied until the disk was full.
I tried adding the 2nd deploy directory the same way as described at https://issues.jboss.org/browse/JBAS-7126 for the farm directory.
It works well!
We were facing the same issue and were able to circumvent it by using a farm directory as the deployment directory.
After putting that process in place we faced one more issue, due to the nature of our DEV environment (a clustered environment with many developers deploying to the shared DEV instance): we were not getting consistent results when deploying EARs and WARs that way. We circumvented that by making sure the EARs and JARs being deployed are touched (http://en.wikipedia.org/wiki/Touch_(Unix)) on the servers, so that inconsistencies are avoided.
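A rough illustration of that touch step, assuming a typical deploy directory layout (adjust the path and patterns to your servers):
# refresh timestamps on every deployed archive so all nodes treat them as current
find /path/to/jboss/server/<config>/deploy \( -name '*.ear' -o -name '*.war' -o -name '*.jar' \) -exec touch {} +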