zsh: compinit and autocomplete function redefinition

I'm trying to learn how autocompletion works in zsh. I've got a simple script file (example.zsh) and I'm trying to create a simple autocomplete function that describes each of its parameters. In order to do that, I've started by creating a simple _example file which looks like this:
#compdef create_ca
_arguments \
"--caKey[name of the file that will hold the keys used for generating the certificate (default: ca.key)]" \
"--caCrt[name of the file that will hold the certificate with the public key (default: ca.crt)]" \
"--cn[common name for the root certificate (default: root.GRM)]" \
"--days[number of days that certificate is valid for (default: 10500)]" \
"--size[key size (default: 4096)]" \
"--help[show this help screen]"
The file is in the same folder as the script, and I've updated my .zshrc file so that it adds that folder to $fpath:
fpath=(~/code/linux_certificates $fpath)
autoload -Uz compinit
compinit -D
I'm using the -D option so that the .zcompdump file isn't generated. At first sight, everything worked out, but when I tried to update the helper autocomplete function, I was unable to see those changes (e.g. changing a description). I've tried re-running the compinit command and, when using the cached .zcompdump, deleting that file. However, it simply didn't work. The only way I've managed to get it working was by deleting the autocomplete helper function with:
unfunction _create_ca
Is this the expected behavior? I mean, shouldn't running compinit -D be enough to reload my helper autocomplete function?
btw, any good tutorials on how to create autocomplete functions (besides the official docs)?
thanks.

Once a function has been loaded, it will not be loaded again. That’s why you first have to unfunction your function, causing Zsh to unload it, so it can be loaded again.
Alternatively, you can just use exec zsh to restart your shell.
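A minimal sketch of that edit-and-reload cycle, using the function name from the question (adjust it to whatever your completion file actually defines):
# after editing the completion file in ~/code/linux_certificates:
unfunction _create_ca       # forget the already-loaded definition
autoload -Uz _create_ca     # mark it for lazy (re)loading from $fpath
# ...or simply restart the shell so compinit picks everything up again:
exec zsh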

Related

List directories of a different location in fish tab-completion

I would like to create a fish function that acts as a shortcut into a directory.
The function looks like this:
function mydir -d "Navigate to mydir and its subdirectories from anywhere"
    cd $HOME/some/path/mydir/$argv
end
The function works well, but I would like to add tab-completion. I currently have this for tab-completion:
complete --no-files --exclusive --command mydir --arguments "(__fish_complete_directories)"
The problem is that this will tab-complete to the directories in the current working directory. I would instead like it to tab-complete to the directories in $HOME/some/path/mydir/. I cannot find an example or flag in the documentation that would help me do this.
I do not know fish very well, but I thought one way to do this would be to manually loop over those directories and add them as options, but it doesn't feel right and I am struggling with that implementation anyway.
Honestly, your "looping over those directories" seems like the most "fish-like" way to me, but given that someone has already gone through the trouble of creating and debugging the existing __fish_complete_directories (Github link), I think I'd just use it as a starting point for your own completion function.
It seems to me that you could just add a pushd $HOME/some/path/mydir before the "non-existent command hack" and then popd at the end.
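A rough sketch of that idea without copying the whole function: wrap the call in a helper that changes into the base directory first, then restores it (the helper name __mydir_complete is an illustration, not an existing fish function):
function __mydir_complete
    pushd $HOME/some/path/mydir
    __fish_complete_directories (commandline -ct)
    popd
end
complete --no-files --exclusive --command mydir --arguments "(__mydir_complete)"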

Can we wget with file list and renaming destination files?

I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check if there is actually a newer file to download.
-r will turn the recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=X will avoid the generation of the host's subdirectories.
--timeout=xxx will, well, timeout :)
--directory-prefix will store files in the desired directory.
This works nice, no problem.
Now, to the issue:
Let's say my files-to-download.txt contains these kinds of files:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget will see that we already have a picture-same-name.jpg, so it won't download the second one or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files to be in the same directory. I can't use the -O option because it clashes with -N, and I need that. I've tried to use -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a.- wget from a list of url's the way I do now, keeping my parameters.
b.- get all files in the same directory and be able to rename each file.
Does anybody have any solution to this?
Thanks in advance.
I would suggest 2 approaches -
Use the "-nc" or the "--no-clobber" option. From the man page -
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
A combination with -O/--output-document is only accepted if the given output file does not exist.
Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
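Concretely, since -nc may not be specified at the same time as -N, trying this first approach would mean swapping -N for -nc in the command from the question (all other flags and placeholders unchanged):
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -nc -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt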
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However, the logic is sound, and with a bit of modification you can get it to work on other platforms/scripts as well.
Sample pseudocode bash script -
for i in $(cat list-of-files-to-download.txt); do
    wget <all your flags except the -i flag> "$i" -O /path/to/custom/directory/filename
done
You can modify the script to download each file to a temporary file, parse $i to get the filename from the URL, check if the file exists on disk, and then decide whether to rename the temp file to the name that you want.
This offers much more control over your downloads.
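For instance, here is a hedged sketch of that rename logic, assuming the URL layout from the question (http://website/directoryN/picture-same-name.jpg): it derives a unique local name from the parent directory so identically named files no longer collide, and skips URLs whose target already exists.
while IFS= read -r url; do
    dir=$(basename "$(dirname "$url")")    # e.g. directory1
    name=$(basename "$url")                # e.g. picture-same-name.jpg
    target="/directory/for/downloaded/files/${dir}_${name}"
    # crude stand-in for -N: only fetch if we don't already have the file
    [ -e "$target" ] || wget -O "$target" "$url"   # add your other flags here
done < list-of-files-to-download.txt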

Run supervisord with custom configuration file from startup

I'm using this article as a source to get me halfway there, but I cannot figure out how to run supervisord with a custom config file path.
When I want to run supervisor manually, I just do:
supervisord -c /home/test/_app/supervisord.conf
When I implemented the auto-startup script, it runs the default supervisord config file, which is located in the /etc/ directory. I don't want to use that one because it is separated from the core project folder and makes it hard to maintain and keep track of.
Try this:
In /etc/rc.d/init.d/supervisord, add a prog_opts variable like this:
prog_opts=" -c /home/test/_app/supervisord.conf"
prog_bin="${exec_prefix}/bin/supervisord"
Then, in the start() function, change the call to:
daemon $prog_bin --pidfile $PIDFILE -- $prog_opts
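As a hedged sketch, the start() function might end up looking roughly like this; the surrounding lines follow a typical Red Hat-style init script and only the daemon call itself comes from the answer above:
start() {
    echo -n "Starting supervisord: "
    daemon $prog_bin --pidfile $PIDFILE -- $prog_opts
    RETVAL=$?
    echo
    return $RETVAL
}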
I was able to fix this issue by simply deleting the default supervisord.conf file and then making a symlink from that default location to my custom conf file path.
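A minimal sketch of that symlink approach, assuming the default config lives at /etc/supervisord.conf:
sudo rm /etc/supervisord.conf
sudo ln -s /home/test/_app/supervisord.conf /etc/supervisord.conf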

Saving project properties in SoapUI's Groovy

Here is my problem. I'm running TestRunner from the command line in order not to launch the SoapUI client (anyway, the same problem occurs when running TestRunner straight from the client, so not sure if it's worth mentioning, but anyway...). I do it this way:
testrunner <path_to_project> -r -a -f <path_to_reports> & pause
In one of my test cases I retrieve data from the DB, then save it to the project properties this way:
testRunner.testCase.testSuite.project.setPropertyValue("key", value);
Then I use it in the next steps, which works fine. The problem appears in another test case where, first, I get the filename from my project properties, this way:
def oldFilename = testRunner.testCase.testSuite.project.getPropertyValue("FILE_NAME");
Then I want to use it, rename it, and save it to the project properties again, so that it is ready for the next launch. I do it the same way:
testRunner.testCase.testSuite.project.setPropertyValue("FILE_NAME", newFilename);
It does not seem to save/store this value. Is there any way to fix this?
If you modify anything in your project, and you want to preserve that from one run to the next, use the -S (uppercase) switch.
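For example, the command line from the question would then look like this (only the -S flag is added; the placeholders are unchanged):
testrunner <path_to_project> -r -a -S -f <path_to_reports> & pause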
Documentation is your friend. :)

CTools do not show up in the Pentaho UI

I am using Pentaho CE 5 on Windows. I would like to use CTools, but I can't make them show up in the File -> New menu.
Being behind a proxy, I cannot use the Marketplace plugin, so I have tried a manual installation.
First, I tried to use the ctools-installer.sh script. I ran the following command line in Cygwin (wget and unzip are installed):
./ctools-installer.sh -s /cygdrive/d/Users/[user]/Mes\ Programmes/pentaho/biserver-ce/pentaho-solutions/ -w /cygdrive/d/Users/[user]/Mes\ programmes/pentaho/biserver-ce/tomcat/webapps/pentaho/
The script starts, asks me what module I want to install, and begins the downloads.
For each module, I get output like this (set -x added to the script):
echo -n 'Downloading CDF...' Downloading CDF...+ wget -q --no-check-certificate 'http://ci.analytical-labs.com/job/Webdetails-CDF-5-Release/lastSuccessfulBuild/artifact/bi-platform-v2-plugin/dist/zip/dist.zip'
-O .tmp/cdf/dist.zip SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
'[' '!' -z '' ']'
rm -f .tmp/dist/marketplace.xml
unzip -o .tmp/cdf/dist.zip -d .tmp End-of-central-directory signature not found. Either this file is not a zipfile, or it
constitutes one disk of a multi-part archive. In the latter case
the central directory and zipfile comment will be found on the last
disk(s) of this archive. unzip: cannot find zipfile directory in
.tmp/cdf/dist.zip,
and cannot find .tmp/cdf/dist.zip.zip, period.
chmod -R u+rwx .tmp
echo Done Done
Then the script ends. I have seen on this page (pentaho-bi-suite) that this is the normal output. Nevertheless, it seems a bit strange to me, and when I start my Pentaho server (login: admin/password), I cannot see any new tools in the menus.
After looking at a few other tutorials and at the script itself, I downloaded the .zip snapshots for every tool and unzipped them in the system directory of my Pentaho server. Same result.
I would like to make the .sh work; what can I try or adjust?
Thanks
EDIT 05/06/2014
I checked the dist.zip files downloaded by the script and they are all empty. It seems that wget cannot fetch the zip files, and therefore the installation fails.
When I try to get any webpage through wget, it fails. I think it is because of the proxy.
Here is my .wgetrc file, located in my user's cygwin home folder:
use_proxy=on
http_proxy=http://[url]:[port]
https_proxy=http://[url]:[port]
proxy_user=[user]
proxy_password=[password]
How could I make this work?
EDIT 10/06/2014
In the end, I changed my network connection settings to bypass the proxy. It seems that there is an offline mode for the installer, so one can download all the needed files in a proxy-free environment and then run the script offline.
I guess this is related to the -r option.
I consider this post solved, since it is not a CTools issue anymore.
It is difficult to identify the issue in the above procedure, but you can refer to this blog; its author is a key member of Pentaho itself.
You can manually install the components from http://www.webdetails.pt/ctools/ or, if you have Pentaho 5.1 or above, you can add the following parameters to the CATALINA_OPTS option (in start-pentaho.bat or start-pentaho.sh):
-Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttp.nonProxyHosts="localhost|127.0.0.1|10.*.*.*"
http://docs.treasuredata.com/articles/pentaho-dataintegration#tips-how-can-i-use-pentaho-through-a-proxy
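As a hedged sketch, in start-pentaho.sh that could be a single line like the following; the proxy host and port shown are placeholders you have to fill in, and only the -D option names come from the answer above:
export CATALINA_OPTS="$CATALINA_OPTS -Dhttp.proxyHost=my.proxy.host -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=localhost|127.0.0.1|10.*.*.*"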