I'm working with the "vlt" command
(FileVault - Jackrabbit FileVault comes prepackaged with AEM as the VLT tool).
vlt --credentials admin:admin export -v http://localhost:4502/crx /content/dam/geometrixx-outdoors/activities /Users/temp/data
The above command works fine and exports the assets under "geometrixx-outdoors/activities" into jcr_root and META-INF folders; if these folders are missing, the command even creates them.
But if I run the same command with a different source (geometrixx/portraits) and the same destination (/Users/temp/data), like:
vlt --credentials admin:admin export -v http://localhost:4502/crx /content/dam/geometrixx/portraits /Users/temp/data
The command fails with the message [ERROR] Not such remote file: /content/dam/geometrixx/portraits
But if I use a different destination folder (/Users/temp/data/new), the same command works:
vlt --credentials admin:admin export -v http://localhost:4502/crx /content/dam/geometrixx/portraits /Users/temp/data/new
What am I missing? Or is this how the vlt export command works?
Your first vlt export generates a META-INF/vault/filter.xml that restricts the content root, so only the pre-set path can be exported.
If you want to export content to the same folder, you will have to update filter.xml to allow all the paths that you want to export.
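For example, a filter.xml that allows both of the paths above might look like this (a sketch using the two roots from your commands; adjust to your own content):
<?xml version="1.0" encoding="UTF-8"?>
<workspaceFilter version="1.0">
    <!-- one filter entry per content tree you want to export -->
    <filter root="/content/dam/geometrixx-outdoors/activities"/>
    <filter root="/content/dam/geometrixx/portraits"/>
</workspaceFilter>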
With Ameesh Trikha's suggestion, I developed a shell script that uses the VLT tool's export command with the same destination folder for every export.
Every time the vlt export command executes, the jcr_root and META-INF folders are created. META-INF holds the config files, and jcr_root maps to the JCR source path; the exported files reside there.
If we use the same destination folder for all exports, consecutive exports won't work because of the path written in filter.xml. To overcome this, I do the following:
Make a copy of filter.xml (I call it the master filter) with /content as the path. Since all content in the JCR generally lives under /content, an export from any source path will work.
Since we plan to use the same destination folder for all our exports, delete the existing filter.xml and replace it with this master filter.xml.
Copy of the shell script:
#!/bin/bash
# This script exports content from the AEM CRX repository to an external folder.
# $1 - Source path the files are copied from (e.g. /content/dam/geometrixx/portraits)
# $2 - Destination path the files are copied to
#
# Step 1 - Make sure the vault-cli binaries are on the PATH
export PATH=$PATH:/Users/Software/vault-cli-3.1.16/bin
#
# Step 2 - vlt overwrites/creates filter.xml with the source path, which breaks
# consecutive exports to the same destination. To avoid this, remove the generated
# filter.xml and copy in a master filter that has /content as the root, so an
# export from any source path will work.
echo '***Resetting filter.xml for the export...'
mkdir -p "$2/META-INF/vault"   # make sure the target folder exists on the first run
rm -f "$2/META-INF/vault/filter.xml"
cp -f /Users/temp/data/filter.xml "$2/META-INF/vault"
echo '***Completed resetting filter.xml'
#
# Step 3 - Run the actual export
echo '***Starting to execute vlt command...'
vlt --credentials admin:admin export -v http://localhost:4502/crx "$1" "$2"
echo '***Completed vlt command successfully'
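For example, assuming the script is saved as vlt-export.sh (a name I'm making up here) and made executable:
chmod +x vlt-export.sh
./vlt-export.sh /content/dam/geometrixx/portraits /Users/temp/data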
Copy of the master filter.xml:
<?xml version="1.0" encoding="UTF-8"?>
<workspaceFilter version="1.0">
    <filter root="/content">
        <exclude pattern="/content/jcr:system"/>
        <exclude pattern="/content/var/classes"/>
        <exclude pattern="^.*/rep:accessControl"/>
    </filter>
</workspaceFilter>
Related
I want to use watchman to rebuild my directory whenever a change happens.
The build command I want watchman-make to trigger is:
$ babel ./src/ -d ./lib/
Currently I am using:
$ watchman-make -p "./src/**" -r 'babel ./src/ -d ./lib/'
to watch any file change inside src and run the build command.
Watchman is outputting :
# Relative to /home/marc/workspace/abnb
# Changes to files matching ./src/** will execute `babel ./src/ -d ./lib/`
# waiting for changes
But nothing seems to happen when I change files within my directory src/.
You probably want to rewrite your pattern as src/**/*
Watchman name resolution doesn't (and won't) know how to resolve . or .. in path names. The ** operator matches any directory depth, so you also need to append * to match any file in any directory under the src dir.
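With that change, and keeping the build command from the question, the invocation would look like:
watchman-make -p 'src/**/*' -r 'babel ./src/ -d ./lib/'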
My Matlab source code (.m files) is stored in a hierarchical structure with a set of subfolders. I tried to compile it with the following command on a CentOS server:
mcc -v -m a.m a_call_b.m -I ./Funs/*.m
But I have so many subfolders that I cannot list all of them on the command line. Is there another way to compile source files in a hierarchical structure?
I noticed this might work as well:
mcc -v -m a.m a_call_b.m -I ./Funs/*.m -a ./Funs
From the docs on the -a option:
If only a folder name is included with the -a option, the entire contents of that folder are added recursively to the CTF archive
So you'd only have to specify top-level directories of your application, which presumably are not too many to list.
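If that reading is right, the command from the question could probably be simplified to something like this (untested sketch with the same file names; -I ./Funs keeps that folder on mcc's search path for dependency analysis, while -a ./Funs packages its contents recursively):
mcc -v -m a.m a_call_b.m -I ./Funs -a ./Funs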
See:
http://www.mathworks.de/de/help/compiler/mcc.html
wget -r -np www.a.com/b/c/d
The above will create a directory called 'www.a.com' in the current working directory on my local computer, containing all subdirectories on the path to 'd'.
I only want directory 'd' (and its contents) created in my cwd.
How can I achieve this?
You can specify that directory name explicitly and avoid the creation of sub-directories with the following line.
wget -nd -P /home/d www.a.com/b/c/d
-nd avoids the creation of sub-directories, and -P sets the directory prefix to /home/d, so all your files will be downloaded into the "/home/d" folder only.
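If you instead need directory d mirrored recursively into your cwd with its internal structure intact, wget's -nH (skip the hostname directory) and --cut-dirs (drop leading path components) options can be combined; a sketch, with --cut-dirs=2 removing the b and c components:
wget -r -np -nH --cut-dirs=2 www.a.com/b/c/d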
I am trying to run some PHP pages on the command line but am running into a few problems with mysql and other PHP extensions.
Running php --ini produces the following output:
Configuration File (php.ini) Path: /usr/local/lib
Loaded Configuration File: (none)
Scan for additional .ini files in: (none)
Additional .ini files parsed: (none)
When I run php on the command line it cannot find my php.ini file because the path is incorrect: my php.ini is actually located at /etc/php.ini, as reported by phpinfo(). PHP runs fine in the browser.
How do I change the path of my php.ini file for command line PHP?
I am running Apache2 (CentOS-5.5) and PHP 5.2.6.
Using the -c option, you can specify which php.ini file should be used:
php -c /etc/php.ini your-php-script.php
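To double-check which file actually gets picked up, you can combine -c with the --ini flag from your question; the "Loaded Configuration File" line should then point at /etc/php.ini:
php -c /etc/php.ini --ini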
As a reference, see the output of php --help :
$ php --help
Usage: php [options] [-f] <file> [--] [args...]
php [options] -r <code> [--] [args...]
php [options] [-B <begin_code>] -R <code> [-E <end_code>] [--] [args...]
php [options] [-B <begin_code>] -F <file> [-E <end_code>] [--] [args...]
php [options] -- [args...]
php [options] -a
-a Run as interactive shell
-c <path>|<file> Look for php.ini file in this directory
-n No php.ini file will be used
-d foo[=bar] Define INI entry foo with value 'bar'
...
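If you want the CLI to always use /etc/php.ini without typing -c each time, one shell-level workaround (just a convenience on your machine, not a PHP feature) is an alias in ~/.bashrc:
# always point command-line PHP at the system php.ini
alias php='php -c /etc/php.ini'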
First update your slocate database using the command updatedb, so that locate php.ini can find stray ini files.
Then run php --ini; it will display the paths of the ini files that get loaded:
[root@tamilan src]# php --ini
Configuration File (php.ini) Path: /etc
Loaded Configuration File: /etc/php.ini
Scan for additional .ini files in: /etc/php.d
Additional .ini files parsed: /etc/php.d/apc.ini,
To run php with a customized or alternative php.ini file, use:
php -c /etc/php.ini your-php-script-file.php
I need files to be downloaded to /tmp/cron_test/. My wget command is:
wget --random-wait -r -p -nd -e robots=off -A".pdf" -U mozilla http://math.stanford.edu/undergrad/
So is there some parameter to specify the directory?
From the manual page:
-P prefix
--directory-prefix=prefix
Set directory prefix to prefix. The directory prefix is the
directory where all other files and sub-directories will be
saved to, i.e. the top of the retrieval tree. The default
is . (the current directory).
So you need to add -P /tmp/cron_test/ (short form) or --directory-prefix=/tmp/cron_test/ (long form) to your command. Also note that if the directory does not exist it will get created.
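Applied to the command from your question, that gives:
wget --random-wait -r -p -nd -e robots=off -A".pdf" -U mozilla -P /tmp/cron_test/ http://math.stanford.edu/undergrad/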
-O is the option to specify the path of the file you want to download to:
wget <uri> -O /path/to/file.ext
-P sets the prefix: the directory into which wget will download the file:
wget <uri> -P /path/to/folder
Make sure you have the correct URL for whatever you are downloading. First of all, URLs containing characters like ? are not resolved into a clean file name: whatever follows the last / in the URL, query string included, becomes the name of the file you are downloading into.
For example:
wget "sourceforge.net/projects/ebosse/files/latest/download?source=typ_redirect"
will download into a file named, ?source=typ_redirect.
As you can see, knowing a thing or two about URLs helps to understand wget.
I was booting from a Hiren's disk and only had Linux 2.6.1 as a resource (import os was unavailable). The correct syntax that solved my problem of downloading an ISO onto the physical hard drive was:
wget "(source url)" -O "(directory where HD was mounted)/isofile.iso"
One can figure out the correct URL by finding the point at which wget downloads into a file named index.html (the default file name) that has the correct size and other attributes of the file you need, as shown by the following command:
wget "(source url)"
Once that URL and source file are correct and the download is going into index.html, you can stop the download (Ctrl + Z) and change the output file by using:
-O "<specified download directory>/filename.extension"
after the source url.
In my case this results in downloading an ISO and storing it as a binary file under isofile.iso, which hopefully mounts.
"-P" is the right option, please read on for more related information:
wget -nd -np -P /dest/dir --recursive http://url/dir1/dir2
Relevant snippets from man pages for convenience:
-P prefix
--directory-prefix=prefix
Set directory prefix to prefix. The directory prefix is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is . (the current directory).
-nd
--no-directories
Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the filenames will get extensions .n).
-np
--no-parent
Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded.
man wget:
-O file
--output-document=file
    The documents will not be written to the appropriate files, but all will be concatenated together and written to file.
wget "url" -O /tmp/cron_test/<file>