Change the package mode of a JCR package (AEM)

Is there a way to change the packaging mode of an existing JCR package from replace to update? As far as I know the packaging mode cannot be set in the AEM Package Manager dialogs.
What exactly would I have to do? Just change the filter.xml and repackage? Somehow, this didn't work for me. Am I missing something?

You'd have to change the filter.xml as well as the .content.xml in the definition subfolder.
Here is a small shell script that unpacks a package, changes the mode and repacks it.
If you save it as modPkg, you can call it with two parameters:
modPkg FILENAME FILTERMODE
where FILENAME is the filename of the package and FILTERMODE should be merge, update or replace.
#!/bin/bash
filename=${1}
filterMode=${2}
echo "Extracting package."
jar xf "${filename}"
echo "Modifying filter.xml."
perl -pe 's|(root="[^\"]+")(( )*mode="[^\"]+"( )*)?(( )*(/)?>)|\1 mode="'"${filterMode}"'"\5|g' META-INF/vault/filter.xml > META-INF/vault/filter.xml.backup
mv META-INF/vault/filter.xml.backup META-INF/vault/filter.xml
echo "Modifying .content.xml in definition folder."
perl -pe 's|mode="[^\"]+"|mode="'"${filterMode}"'"|g' META-INF/vault/definition/.content.xml > META-INF/vault/definition/.content.xml.backup
mv META-INF/vault/definition/.content.xml.backup META-INF/vault/definition/.content.xml
echo "Repackaging."
jar -cfM "${filterMode}-${filename}" META-INF jcr_root
echo "Deleting temp files."
rm -rf META-INF
rm -rf jcr_root
echo "Finished."
There might be more elegant ways to do the job, but it's easy enough.
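To sanity-check the result without installing the package, you can peek into the repacked zip. A minimal sketch, where my-package.zip is a placeholder for the ${filterMode}-${filename} file the script produced:
# print the mode attributes the repacked zip now carries
unzip -p my-package.zip META-INF/vault/filter.xml | grep -o 'mode="[^"]*"'
unzip -p my-package.zip META-INF/vault/definition/.content.xml | grep -o 'mode="[^"]*"'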

Related

How do I pull compile_commands.json from a subdirectory?

I've got a C++ CMake project. I've created a subdirectory work in the main directory that I use for compilation. When I compile the project I cd into work and do cmake .. && make. This way, compiler output doesn't pollute the main directory. work is also the directory where compile_commands.json is generated, and I use that for coc syntax highlighting.
As long as it's in the subdirectory, coc can't find it, so I created a symlink in the main directory that points to the file. This works, and another solution would be adding a command to CMakeLists.txt that copies the file to the main directory, but I keep wondering if there is a better way to do it. By a better way, I mean something like creating a .vimrc in the main directory and putting commands in it that coc could use to find the file.
So far, I have found a Vim option that lets me move in that direction: set exrc will load local .vimrc files.
One thing I've tried is putting
" setting with vim-lsp
if executable('ccls')
au User lsp_setup call lsp#register_server({
\ 'name': 'ccls',
\ 'cmd': {server_info->['ccls']},
\ 'root_uri': {server_info->lsp#utils#path_to_uri(
\ lsp#utils#find_nearest_parent_file_directory(
\ lsp#utils#get_buffer_path(), ['.ccls', 'compile_commands.json', '.git/', 'work/compile_commands.json' ]))},
\ 'initialization_options': {
\ 'highlight': { 'lsRanges' : v:true },
\ 'cache': {'directory': stdpath('cache') . '/ccls' },
\ },
\ 'whitelist': ['c', 'cpp', 'objc', 'objcpp', 'cc'],
\ })
endif
into the local .vimrc file, but that didn't work.
What command should I put into the .vimrc located in the main directory to tell the coc plugin to look for ./work/compile_commands.json?
Build your project with clang++.
Add -MJ to the compile command:
clang++ -MJ a.o.json -Wall -std=c++11 -o a.o -c a.cpp
And combine the *.o.json files into a compile_commands.json file using:
find ./work -name '*.o.json' -print0 | xargs -0 sed -e '1s/^/[\n/' -e '$s/,$/\n]/' > ./work/compile_commands.json
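If you want the whole flow in one place, here is a minimal sketch of the same idea (the file names are placeholders and clang++ is assumed to be the compiler; this script is not part of the original answer):
#!/bin/bash
# compile every .cpp with clang++ -MJ, then merge the fragments into compile_commands.json
for src in *.cpp; do
    clang++ -MJ "${src%.cpp}.o.json" -Wall -std=c++11 -c "$src" -o "${src%.cpp}.o"
done
find . -name '*.o.json' -print0 | xargs -0 sed -e '1s/^/[\n/' -e '$s/,$/\n]/' > compile_commands.json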

Perl using the -i option on a vboxsf share: Can't remove input_file Text file busy, skipping file

System: Arch Linux in VirtualBox 5.1.26 on Windows 10 Host
I'm trying to use perl like sed in the terminal for in-place substitution of the input file:
perl -i -p -e 's/orig/replace/g' input_file
But I always get:
Can't remove input_file Text file busy, skipping file
This happens only if the file is inside a VirtualBox vboxsf share. With all other tools (sed, mv, vim or whatever) it is no problem to change the file.
This problem seems to be related to:
https://www.virtualbox.org/ticket/2553
https://forums.virtualbox.org/viewtopic.php?t=4437
I can't find any solution googling around :(
Update:
Using perl -i.bak -p -e 's/orig/replace/g' input_file I get a similar message:
Can't rename input_file to input_file.bak: Text file busy, skipping file.
This is exactly the same message that gedit shows, so it is the same behavior. But googling around I can only find the gedit topic; it seems no one has noticed this with perl -i.
While you are running a Unix OS, you are still using a Windows file system. NTFS doesn't support anonymous files the way Unix file systems do, and perl -i requires support for anonymous files.
The workaround is to use a temporary file by specifying -i<ext> (e.g. -i~) instead of plain -i.
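For the example from the question, the workaround described above would look like this:
# keeps the original as input_file~ instead of relying on an anonymous file
perl -i~ -p -e 's/orig/replace/g' input_file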
I had the same problem. My solution is a bash script: copy the files to tmp, search and replace there, overwrite the original files with the tmp files, then delete the tmp dir. If you need to, you can add parameters to the script for dynamic search & replace, and create an alias to call the script directly from anywhere.
#!/bin/bash
echo "Removing text from .log files..."
echo "Creating tmp-dir..."
mkdir /tmp/myTmpFiles/
echo "Copy .log files to tmp..."
cp -v /home/user/sharedfolder/*.log /tmp/myTmpFiles/
echo "Search and Replace in tmp-files..."
perl -i -p0e 's/orig/replace/g' /tmp/myTmpFiles/*.log
echo "Copy .log to sharedfolder"
cp -v /tmp/myTmpFiles/*.log /home/user/sharedfolder/
echo "Remove tmp-dir..."
rm -vr /tmp/myTmpFiles/
echo "Done..."

mv: "Directory not Empty" - how do you merge directories with `mv`?

I tried to deploy my personal blog website to my remote server recently. When I tried to move a few files and directories to another place with mv, some unexpected errors happened: the command line echoed "Directory not empty". After some googling, I tried again with the '-f' or '-v' switch, with the same result.
I was logged in as root, and here is the session:
root@danielpan:~# shopt -s dotglob
root@danielpan:~# mv /var/www/html/wordpress/* /var/www/html
mv: cannot move `/var/www/html/wordpress/wp-content` to `/var/www/html/wp-content`:
Directory not empty
root@danielpan:~# mv -f /var/www/html/wordpress/* /var/www/html
mv: cannot move `/var/www/html/wordpress/wp-content` to `/var/www/html/wp-content`:
Directory not empty
Anybody know why?
(I'm running Ubuntu 14.04)
If you have sub-directories and mv is not working:
cp -R source/* destination/
rm -R source/
I finally found the solution. Because /var/www/html/wp-content already exists, moving /var/www/html/wordpress/wp-content there fails with Directory not empty. So you need to move the contents of /var/www/html/wordpress/wp-content/* into /var/www/html/wp-content instead.
Just execute this:
mv /var/www/html/wordpress/wp-content/* /var/www/html/wp-content
rmdir /var/www/html/wordpress/wp-content
rmdir /var/www/html/wordpress
Instead of copying directories with cp or rsync, I prefer:
cd "${source_path}"
find . -type d -exec mkdir -p "${destination_path}"/{} \;
find . -type f -exec mv {} "${destination_path}"/{} \;
cd "$OLDPWD"
This moves the files (actually renames them) and overwrites existing ones, so it's fast enough.
Afterwards ${source_path} contains only empty subfolders, which you can clean up with rm -rf ${source_path}.
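Wrapped into a small function, the same approach could look like this (merge_dirs is a hypothetical helper, not part of the original answer):
#!/bin/bash
# merge the contents of $1 into $2 with mv, following the find-based approach above
# ($2 should be an absolute path, since we cd into $1 first)
merge_dirs() {
    local src=$1 dest=$2
    ( cd "$src" || exit 1
      find . -type d -exec mkdir -p "$dest"/{} \;
      find . -type f -exec mv {} "$dest"/{} \; )
    # only empty directories remain under $src afterwards
    rm -rf "$src"
}
merge_dirs /var/www/html/wordpress /var/www/html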

how to print the progress of the files being copied in bash [duplicate]

I suppose I could compare the number of files in the source directory to the number of files in the target directory as cp progresses, or perhaps do it with folder size instead? I tried to find examples, but all bash progress bars seem to be written for copying single files. I want to copy a bunch of files (or a directory, if the former is not possible).
You can also use rsync instead of cp like this:
rsync -Pa source destination
This will give you a progress bar and estimated time of completion. Very handy.
To show a progress bar while doing a recursive copy of files & folders & subfolders (including links and file attributes), you can use gcp (easily installed in Ubuntu and Debian by running "sudo apt-get install gcp"):
gcp -rf SRC DEST
Here is the typical output while copying a large folder of files:
Copying 1.33 GiB 73% |##################### | 230.19 M/s ETA: 00:00:07
Notice that it shows just one progress bar for the whole operation, whereas if you want a single progress bar per file, you can use rsync:
rsync -ah --progress SRC DEST
You may have a look at the tool vcp. It's a simple copy tool with two progress bars: one for the current file and one for the overall progress.
EDIT
Here is the link to the sources: http://members.iinet.net.au/~lynx/vcp/
Manpage can be found here: http://linux.die.net/man/1/vcp
Most distributions have a package for it.
Here is another solution: use the tool bar.
You could invoke it like this:
#!/bin/bash
filesize=$(du -sb "${1}" | awk '{ print $1 }')
tar -cf - -C "${1}" ./ | bar --size "${filesize}" | tar -xf - -C "${2}"
You have to go through tar, and it will be inaccurate for small files. Also, you must make sure the target directory exists. But it is a way.
My preferred option is Advanced Copy, as it is based on the original cp source.
$ wget http://ftp.gnu.org/gnu/coreutils/coreutils-8.32.tar.xz
$ tar xvJf coreutils-8.32.tar.xz
$ cd coreutils-8.32/
$ wget --no-check-certificate https://raw.githubusercontent.com/jarun/advcpmv/master/advcpmv-0.8-8.32.patch
$ patch -p1 -i advcpmv-0.8-8.32.patch
$ ./configure
$ make
The new programs are now located in src/cp and src/mv. You may choose to replace your existing commands:
$ sudo cp src/cp /usr/local/bin/cp
$ sudo cp src/mv /usr/local/bin/mv
Then you can use cp as usual, or specify -g to show the progress bar:
$ cp -g src dest
A simple Unix way is to go to the destination directory and do watch -n 5 du -s . Perhaps make it prettier by showing it as a bar. This can help in environments where you have just the standard Unix utilities and no way to install additional tools. du -s (or du -sh for human-readable output) is the key; watch just re-runs it every 5 seconds.
Pros: works on any Unix system. Cons: no progress bar.
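For example, from the destination directory (assuming GNU watch and du are available):
# re-runs du every 5 seconds; -h gives human-readable sizes
watch -n 5 du -sh .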
To add another option, you can use cpv. It uses pv to imitate the usage of cp.
It works like pv but you can use it to recursively copy directories
You can get it here
There's a tool, pv, that does this exact thing: http://www.ivarch.com/programs/pv.shtml
There's an Ubuntu package for it in apt.
How about something like
find . -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /DEST/$(dirname {})
It finds all the files in the current directory, pipes the list through pv (giving pv an estimated size so the progress meter works), and then pipes that to a cp command with the --parents flag so the DEST path matches the SRC path.
One problem I have yet to overcome is that if you issue this command
find /home/user/test -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /www/test/$(dirname {})
the destination path becomes /www/test/home/user/test/....FILES... and I am unsure how to tell the command to get rid of the '/home/user/test' part. That's why I have to run it from inside the SRC directory.
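One way to keep it as a single command without changing your current directory is a subshell around the pipeline; this is just a sketch of the workaround the answer already uses, with the example paths from above:
# run the pipeline from inside SRC so the relative paths land directly under /www/test
( cd /home/user/test && find . -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /www/test/$(dirname {}) )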
Check the source code for progress_bar in my git repository below:
https://github.com/Kiran-Bose/supreme
Also try the custom bash script package supreme to see how the progress bar works with the cp and mv commands.
Functionality overview
(1)Open Apps
----Firefox
----Calculator
----Settings
(2)Manage Files
----Search
----Navigate
----Quick access
|----Select File(s)
|----Inverse Selection
|----Make directory
|----Make file
|----Open
|----Copy
|----Move
|----Delete
|----Rename
|----Send to Device
|----Properties
(3)Manage Phone
----Move/Copy from phone
----Move/Copy to phone
----Sync folders
(4)Manage USB
----Move/Copy from USB
----Move/Copy to USB
There is the command progress, https://github.com/Xfennec/progress, a coreutils progress viewer.
Just run progress in another terminal to see the copy/move progress. For continuous monitoring, use the -M flag.
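For example, while a cp or mv is running in another terminal (assuming the progress package is installed):
# show the progress of running coreutils commands and keep monitoring them
progress -M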

How to download all files from a specific Sourceforge project?

After spending about an hour downloading almost every MSYS package from SourceForge, I'm wondering whether there is a more clever way to do this. Is it possible to use wget for this purpose?
I've used this script successfully:
https://github.com/SpiritQuaddicted/sourceforge-file-download
For your use case, run:
sourceforge-file-download.sh msys
It should download all the pages first, then find the actual download links in the pages, and download the final files.
From the project description:
Allows you to download all of a sourceforge project's files. Downloads to the current directory into a directory named like the project. Pass the project's name as first argument, eg ./sourceforge-file-download.sh inkscape to download all of http://sourceforge.net/projects/inkscape/files/
Just in case the repo ever gets removed I'll post it here since it's short enough:
#!/bin/sh
project=$1
echo "Downloading $project's files"
# download all the pages on which direct download links are
# be nice, sleep a second
wget -w 1 -np -m -A download http://sourceforge.net/projects/$project/files/
# extract those links
grep -Rh direct-download sourceforge.net/ | grep -Eo '".*" ' | sed 's/"//g' > urllist
# remove temporary files, unless you want to keep them for some reason
rm -r sourceforge.net/
# download each of the extracted URLs, put into $projectname/
while read url; do wget --content-disposition -x -nH --cut-dirs=1 "${url}"; done < urllist
rm urllist
In case you have no wget or shell installed, do it with FileZilla: open an SFTP connection to sftp://yourname@web.sourceforge.net with your password, then browse to /home/frs/.
After that path you fill in the folder you want to download from the remote site, in my case:
/home/frs/project/maxbox/Examples/
This is the access path of the FRS (File Release System): /home/frs/project/PROJECTNAME/