I have checked out vlt repo using:
vlt co http://localhost:4502/crx/-/jcr:root path/to/repo --force
But now my CQ instance has changed location (port). Is there a way to point vlt at the new URL (port) without checking out again?
I have tried unzipping path/to/repo/.vlt and changing the repository.url file. Sometimes it works, but in most cases it breaks the local repo, or I'm unable to unzip it.
I understand you're looking for something like the "svn relocate" command. This is not possible with the VLT tool directly.
Options (any one of these should do it):
I recommend checking out a new copy of the repository and reapplying the changes that "vlt status" reports in your old working copy.
Set up a new CQ server on the old port, then use "vlt rcp" (see the sketch after this list). The process would probably be: copy the whole repository from the old server to the new one, push your local changes to the new server, then copy the relevant part of the tree from the new server back to the old one.
The repository.url setting is nested in .vlt files under all subdirectories of the working copy. You could try a global/recursive search & replace for all of these. I've never tried this, though. For example, something like this (I get "permission denied" running it, so it needs more work):
find . -name .vlt -type f -print0 | xargs -0 sed -i 's/localhost:4502/localhost:4503/g'
Remove all the .vlt files and use the vlt import/export commands to load. See the "Using import/export instead of .vlt control" section of this document: http://wem.help.adobe.com/enterprise/en_US/10-0/core/how_to/how_to_use_the_vlttool.html
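A minimal, untested sketch of the "vlt rcp" route from option 2, assuming admin:admin credentials and a /content subtree (ports and paths are assumptions; adjust to your setup):
# recursively copy the content tree from the old-port instance to the new one, in batches of 100 nodes
vlt rcp -r -b 100 http://admin:admin@localhost:4502/crx/-/jcr:root/content http://admin:admin@localhost:4503/crx/-/jcr:root/content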
I'm using lftp to deploy a website via Travis CI. There is a build process before the deployment; for that reason, a build directory is present and pushed to the root of the FTP server.
lftp $FTP_URL -e "glob -d mirror build . --reverse --delete-first --parallel=10 && exit"
It works quite well, but I dislike having downtime / temporary PHP parse errors caused by missing files on my website. What is the best way to work around that issue?
My first approach was an option to set a temporary directory, but the lftp man page says there are only options for temporary files. I tried that option anyway, but it didn't help.
My second approach was to use "mirror build temp" to upload to a temporary folder and then replace the root with it. The problem here is that I cannot exclude the temp folder while deleting the old files and folders with something like rm -rf *.
For small changes not involving adding/removing PHP files, "set xfer:use-temp-file" should be sufficient. Also, don't use --delete-first, as it causes lftp to delete obsolete files before uploading.
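As a minimal sketch, reusing the $FTP_URL and build directory from the question (untested here): xfer:use-temp-file makes lftp upload each file under a temporary name and rename it into place once the transfer completes, so the live site never sees half-written files.
lftp $FTP_URL -e "set xfer:use-temp-file on; mirror build . --reverse --parallel=10; exit"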
For larger changes I'd create a separate directory for each version of the site and redirect the web server to the directory using .htaccess mod_rewrite or some other configuration file. This technique will allow atomic switch to the new version (and back if needed). Besides, you will be able to do final pre-production testing of the new version if you redirect to the new version conditionally based on your IP address or using some other rule.
If you don't want to re-upload the whole site for each new version and the FTP server supports FXP with itself, then you can copy the old version to a new directory using mirror old_directory ftp://user@example.com/new_directory, then update the new directory using mirror -eR local_dir new_directory.
This is a zero-downtime pattern - each placeholder should be replaced:
lftp $FTP_URL -e "mirror {SOURCE} {TARGET}-new-{TIMESTAMP} --reverse --delete-first;
mv {TARGET} {TARGET}-old-{TIMESTAMP};
mv {TARGET}-new-{TIMESTAMP} {TARGET};
rm -rf {TARGET}-old-{TIMESTAMP};
exit"
I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check if there is actually a newer file to download.
-r will turn the recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=x will ignore the given number of remote directory components, so the host's subdirectories are not recreated locally.
--timeout=xxx will, well, timeout :)
--directory-prefix will store files in the desired directory.
This works nicely, no problem.
Now, to the issue:
Let's say my files-to-download.txt has these kind of files:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget will see we already have a picture-same-name.jpg, so it won't download the second or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files to be in the same directory. I can't use the -O option because it clashes with -N, and I need that. I've tried to use -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a.- wget from a list of URLs the way I do now, keeping my parameters.
b.- get all the files in the same directory and be able to rename each file.
Does anybody have any solution to this?
Thanks in advance.
I would suggest two approaches.
Use the "-nc" or "--no-clobber" option. From the man page:
-nc
--no-clobber
    If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
    When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
    When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
    When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
    A combination with -O/--output-document is only accepted if the given output file does not exist.
    Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
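For instance, a sketch of the original command adapted to this approach (note that -N has to be dropped, since the man page above says -nc may not be specified at the same time as -N):
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -nc -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt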
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However, the logic is sound, and with a bit of modification you can get it to work on other platforms/scripting languages as well.
Sample pseudocode bash script -
for i in $(cat list-of-files-to-download.txt); do
    wget <all your flags except the -i flag> "$i" -O /path/to/custom/directory/filename
done
You can modify the script to download each file to a temporary file, parse $i to get the filename from the URL, check if the file exists on disk, and then decide whether to rename the temp file to the name that you want.
This offers much more control over your downloads.
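As a hedged sketch of that idea (the naming scheme and the skip-if-exists check are assumptions, not part of the answer above), deriving a unique local name from each URL's parent directory:
while read -r url; do
    dir=$(basename "$(dirname "$url")")   # e.g. "directory1"
    file=$(basename "$url")               # e.g. "picture-same-name.jpg"
    dest="/directory/for/downloaded/files/${dir}-${file}"
    # -N clashes with -O, so crudely emulate "only fetch if newer" by skipping files we already have
    [ -f "$dest" ] || wget --user-agent='some-agent' --timeout=30 -O "$dest" "$url"
done < list-of-files-to-download.txt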
I am using Pentaho CE 5 on Windows. I would like to use CTools, but I can't make them show up in the File -> New menu.
Being behind a proxy, I cannot use the Marketplace plugin, so I have tried a manual installation.
First, I tried to use ctools-installer.sh. I ran the following command line in Cygwin (wget and unzip are installed):
./ctools-installer.sh -s /cygdrive/d/Users/[user]/Mes\ Programmes/pentaho/biserver-ce/pentaho-solutions/ -w /cygdrive/d/Users/[user]/Mes\ programmes/pentaho/biserver-ce/tomcat/webapps/pentaho/
The script starts, asks me what module I want to install, and begins the downloads.
For each module, I get output like this (set -x added to the script):
+ echo -n 'Downloading CDF...'
Downloading CDF...
+ wget -q --no-check-certificate 'http://ci.analytical-labs.com/job/Webdetails-CDF-5-Release/lastSuccessfulBuild/artifact/bi-platform-v2-plugin/dist/zip/dist.zip' -O .tmp/cdf/dist.zip
SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
+ '[' '!' -z '' ']'
+ rm -f .tmp/dist/marketplace.xml
+ unzip -o .tmp/cdf/dist.zip -d .tmp
End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
unzip: cannot find zipfile directory in .tmp/cdf/dist.zip, and cannot find .tmp/cdf/dist.zip.zip, period.
+ chmod -R u+rwx .tmp
+ echo Done
Done
Then the script ends. I have seen on this page (pentaho-bi-suite) that this is the normal output. Nevertheless, it seems a bit strange to me, and when I start my Pentaho server (login: admin/password), I cannot see any new tools in the menus.
After looking at a few other tutorials and at the script itself, I downloaded the .zip snapshots for every tool and unzipped them into the system directory of my Pentaho server. Same result.
I would like to make the .sh script work; what can I try or adjust?
Thanks
EDIT 05/06/2014
I checked the dist.zip files downloaded by the script and they are all empty. It seems that wget cannot fetch the zip files, and therefore the installation fails.
When I try to get any webpage through wget, it fails. I think it is because of the proxy.
Here is my .wgetrc file, located in my user's cygwin home folder:
use_proxy=on
http_proxy=http://[url]:[port]
https_proxy=http://[url]:[port]
proxy_user=[user]
proxy_password=[password]
How could I make this work?
EDIT 10/06/2014
In the end, I changed my network connection settings to bypass the proxy. It seems that there is an offline mode for the installer, so one can download all the needed files in a proxy-free environment and then run the script offline.
I guess this is related to the -r option.
I consider this post solved, since it is not a CTools issue anymore.
It's difficult to identify the issue in the above procedure, but you can refer to this blog; he is a key member of Pentaho itself.
You can manually install the components from http://www.webdetails.pt/ctools/ or, if you have Pentaho 5.1 or above, you can add the following parameters to the CATALINA_OPTS option (in start-pentaho.bat or start-pentaho.sh):
-Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttp.nonProxyHosts="localhost|127.0.0.1|10.*.*.*"
http://docs.treasuredata.com/articles/pentaho-dataintegration#tips-how-can-i-use-pentaho-through-a-proxy
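For example, a hypothetical excerpt from start-pentaho.sh with the proxy values filled in (proxy.example.com and port 8080 are assumptions):
# route Pentaho's outgoing HTTP through the proxy, except for local addresses
CATALINA_OPTS="$CATALINA_OPTS -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=\"localhost|127.0.0.1|10.*.*.*\""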
I have a situation where I'd like to diff two branches in Perforce. Normally I'd use diff2 to do a server-side diff but in this case the files on the branches are so large that the diff2 call ends up filling up /tmp on my server trying to diff them and the diff fails.
I can't bring down my server to rectify this, so I'm looking at checking out the content to disk and using diff on the command line to inspect and compare it.
The trouble is: most of the files have RCS keywords in them that are being expanded.
I know I can remove keyword expansion from a file by opening the files for edit and removing the -k attribute in the process, but that seems a bit brute-force. I was hoping I could just tell the p4 sync command not to expand the keywords on checkout, but I can't seem to find a way to do this. Is it possible?
As a possible alternative solution, does anyone know if you can tell p4 diff2 which directory to use for temporary space when you call it? If I could tell it to use abundant NAS space instead of /tmp on the Perforce server I might be able to make it work.
I'm using 2010.x version of Perforce if that changes the answer in any way.
There's no way I know of to disable keyword expansion on sync. Here's what I would try:
1) Create a branch spec between the two sets of files
2) Run "p4 files //path/to/files/... | cut -d '#' -f 1 > tmp"
The path to the files above should be the right-hand side of the branch spec you created
3) p4 -x tmp diff2 -b <branchname>
This tells p4 to iterate over the lines of text in 'tmp' and treat them as arguments to the command. I think /tmp on your server will get cleared in-between each file this way, preventing it from filling up.
I unfortunately don't have files large enough to test that it works, so this is entirely theoretical.
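Putting the steps together as an untested sketch (the depot path and the branch spec name "mybranch" are hypothetical):
# right-hand side of the branch spec, one depot path per line, revision specifiers stripped
p4 files //depot/project/v2/... | cut -d '#' -f 1 > tmp
# run diff2 once per file through the branch mapping
p4 -x tmp diff2 -b mybranch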
To change the temp directory that p4d uses, just set TEMP or TMP to a different path and restart p4d. If you're on Windows, make sure to call 'p4 set -S perforce TMP=' to set the variable for the Perforce service; without the '-S perforce' you'll just set it for the current user.
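For instance (the NAS path is an assumption):
# on a Unix server: point p4d's temp space at the NAS, then restart p4d
export TMP=/mnt/nas/p4tmp
# on Windows, for a Perforce service named "perforce":
p4 set -S perforce TMP=D:\p4tmp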
I am having a problem with version control in Subversion. I checked out a working copy from the repository and got locks on all of its files. Then, without releasing the locks, I deleted the folder from disk.
I can't delete the folder from the repository, since it's got a lock.
If I try to release the locks recursively, it says there are no locks to be released.
In the Browse Repository view, I can only break the locks on particular files, not on folders recursively.
How can I break the locks residing in repository? I am using TortoiseSVN on Windows.
Is there a command to break locks recursively for a folder?
Ok, I got it. Here's what worked for me.
Check out a working copy.
Then, in the Windows Explorer menu, go to TortoiseSVN -> Check for modifications...
Click on the Check repository button.
Select all the files, right-click, and select the break lock option.
Delete the working copy and the one in the repository. Voila! :)
Doing an SVN cleanup will release the lock as well:
$ svn cleanup
From the advanced locking section:
$ svn status -u
M              23   bar.c
M    O         32   raisin.jpg
       *       72   foo.h
Status against revision:    105
$ svn unlock raisin.jpg
svn: 'raisin.jpg' is not locked in this working copy
That simply means the file is not locked in your current working copy. But if it is still locked at the repository level, you can force the unlock ("breaking the lock"):
$ svn unlock http://svn.example.com/repos/project/raisin.jpg
svn: Unlock request failed: 403 Forbidden (http://svn.example.com)
$ svn unlock --force http://svn.example.com/repos/project/raisin.jpg
'raisin.jpg' unlocked.
(which is what you did through the TortoiseSVN GUI)
If somebody else has locked the files remotely, I found that using TortoiseSVN 1.7.11 to do the following successfully unlocked them in my working copy. (similar to vikkun's answer)
Right click working copy > Check for modifications
Click Check repository button
Select files you wish to unlock
Right click > Get lock
Check "Steal the lock" checkbox
After lock is stolen, select again
Right click > Release lock
Files in working copy should now be unlocked.
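If you prefer the command line, the equivalent steal-then-release should look like this (per the SVN book, --force on lock steals an existing lock; the file name is a placeholder):
svn lock --force path/to/file
svn unlock path/to/file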
Unless you have admin access to the svn machine and can use the 'svnadmin' tool, your best option seems to be this:
Check out the problematic directory using svn checkout --ignore-externals *your_repo*
Use svn status --show-updates on the checked-out copy to find out which files are potentially locked (if someone finds the documentation on the meaning of the status codes, please comment).
Use svn unlock --force *some_file* on the files found at 2.
I've used the following one-liner to automate 2. and 3.:
svn status -u | head -n -1 | awk '{ print $3 }' | xargs svn unlock --force
If you have access to the svnadmin tool in the repo server, you can use this alternative to remove all locks (based on the script posted by VonC)
svnadmin lslocks <path_to_repo> | grep -B2 Owner | grep Path | sed "s/Path: \///" | xargs svnadmin rmlocks <path_to_repo>
The repository administrator can remove the locks (recursively), operating on hundreds of files inside a problematic directory -- but only by scripting, since there is no --recursive option to svnadmin rmlocks.
$repopath=/var/svn/repos/myproject/;
$problemdirectory=trunk/bikeshed/
IFS=$'\n'; for f in $(sudo svnadmin lslocks $repopath $problemdirectory \
| grep 'Path: ' \
| sed "s/Path: \///") ; \
do sudo svnadmin rmlocks $repopath "$f" ; done
This solution works with filenames that have spaces in them.
For me, deleting the lock file inside .svn did not work, since I got a bad checksum message after trying to update the file.
I got the following message after executing svn cleanup inside the directory:
svn: In directory ''
svn: Can't copy '.svn/tmp/text-base/file_name.svn-base' to 'filename.3.tmp': No such file or directory
So I copied my file to .svn/tmp/text-base and changed the name to file_name.svn-base. Then cleanup and update worked fine.
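In shell terms, the fix described above is roughly (file names taken from the error message):
# re-seed the missing text-base copy so cleanup can finish
cp file_name .svn/tmp/text-base/file_name.svn-base
svn cleanup
svn update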
When I tried to run the script from above as originally provided, I was getting an error when it tried to set the variables:
./scriptname: line1: =/svn/repo/path/: No such file or directory
./scriptname: line2: =directory/: No such file or directory
I removed the '$' from the first two lines and this worked perfectly after that.
repopath=/var/svn/repos/myproject/;
problemdirectory=trunk/bikeshed/
IFS=$'\n'; for f in $(sudo svnadmin lslocks $repopath $problemdirectory \
| grep 'Path: ' \
| sed "s/Path: \///") ; \
do sudo svnadmin rmlocks $repopath "$f" ; done