I am trying to install OpenVPN on my CentOS 6 box, using the EPEL repository to install the VPN.
The installation itself went fine, but somehow, when I came to the certificate-generation part, lots of "command not found" errors were raised when I typed the "source ./vars" command.
Here is what the terminal returned:
[root@... easy-rsa]# source ./vars
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
NOTE: If you run ./clean-all, I will be doing a rm -rf on /etc/openvpn/easy-rsa/keys
: command not found
: command not found
: command not found
: command not found
: command not found
Here are my vars file settings:
# easy-rsa parameter settings
# NOTE: If you installed from an RPM,
# don't edit this file in place in
# /usr/share/openvpn/easy-rsa --
# instead, you should copy the whole
# easy-rsa directory to another location
# (such as /etc/openvpn) so that your
# edits will not be wiped out by a future
# OpenVPN package upgrade.
# This variable should point to
# the top level of the easy-rsa
# tree.
export EASY_RSA="`pwd`"
#
# This variable should point to
# the requested executables
#
export OPENSSL="openssl"
export PKCS11TOOL="pkcs11-tool"
export GREP="grep"
# This variable should point to
# the openssl.cnf file included
# with easy-rsa.
export KEY_CONFIG=/etc/openvpn/easy-rsa/openssl.cnf
# Edit this variable to point to
# your soon-to-be-created key
# directory.
#
# WARNING: clean-all will do
# a rm -rf on this directory
# so make sure you define
# it correctly!
export KEY_DIR="$EASY_RSA/keys"
# Issue rm -rf warning
echo NOTE: If you run ./clean-all, I will be doing a rm -rf on $KEY_DIR
# PKCS11 fixes
export PKCS11_MODULE_PATH="dummy"
export PKCS11_PIN="dummy"
# Increase this to 2048 if you
# are paranoid. This will slow
# down TLS negotiation performance
# as well as the one-time DH parms
# generation process.
export KEY_SIZE=1024
# In how many days should the root CA key expire?
export CA_EXPIRE=3650
# In how many days should certificates expire?
export KEY_EXPIRE=3650
# These are the default values for fields
# which will be placed in the certificate.
# Don't leave any of these fields blank.
export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="SanFrancisco"
export KEY_ORG="Fort-Funston"
export KEY_EMAIL="me@myhost.mydomain"
export KEY_EMAIL=mail@host.domain
export KEY_CN=changeme
export KEY_NAME=changeme
export KEY_OU=changeme
export PKCS11_MODULE_PATH=changeme
export PKCS11_PIN=1234
Any help will be appreciated. Thanks.
Notice that there are 7 "command not found" errors before the echo, and also 7 "empty" lines before the echo in your vars file; after the echo there are 5 "empty" lines and 5 more errors. The KEY_DIR variable expands properly in the echo statement, so the "command not found" messages appear to come from the "empty" lines.
An empty line obviously should not cause that sort of error, which suggests the lines are not really empty. How did the vars file get there? Did you copy/paste it, picking up invisible characters in the process? Or was it edited on a system that uses a different kind of line ending?
Open it in an editor such as vim, which can show you normally hidden characters, or run it through a converter like tofrodos. Sourcing a file executes it as a script in your current shell (so its exported variables become part of that shell), and scripts are expected to have Unix line endings.
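A quick way to confirm and fix this, assuming the culprit is DOS (CRLF) line endings (a minimal sketch; run it in the easy-rsa directory):
cat -A ./vars | head      # CRLF endings show up as ^M$ at the end of each line
file ./vars               # prints "... with CRLF line terminators" if affected
sed -i 's/\r$//' ./vars   # strip the carriage returns in place
source ./vars             # should now run without errors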
The vim manual page contains two similar -r commands. I'll give more background below; this question is really about how to invoke the first type of -r, which lists the swap files, while avoiding the second type, which invokes recovery:
-r          List swap files, with information about using them for recovery.
-r {file}   Recovery mode. The swap file is used to recover a crashed
            editing session. The swap file is a file with the same
            filename as the text file with ".swp" appended. See
            ":help recovery".
The -r without a filename (the first -r above) reports on the swap files of other files too, including ones in other directories.
Background:
I'm trying to have vim report the swap files of a specific file (mostly to determine whether vim is still editing the file). If the file is being edited (in another window, on either Linux or Cygwin), I can 'raise' that window to the top with "\e[2t\e[1t", as I have successfully been able to do thanks to Bring Window to Front.
Vim has multiple swap file names and multiple directories where it could put one, so I want to ask vim: please tell me the names of the swap files currently in use for a given file, and whether there is a current vim process on the file. Unfortunately, sometimes vim will open the file in recovery mode in unexpected ways.
I'm invoking vim like this: vim -r -c :q file. Well, actually I'm invoking it from script, since I want vim to see something more like a terminal, and then I look at the output file; so it's more like script -q -c "vim -r -c :q foo" fooscript, after which I look in the fooscript file for messages like /Note: process STILL RUNNING: (\d+)/.
It is beginning to look like I need to use vim -r without a file name and parse the output of the -r report, and that there isn't a way to get the report pre-filtered to the single file in question.
After switching my focus to just vim -r, and
- knowing that vim will try to put the swap file into the same directory as the file it's editing (thanks to @romainl for the pointer to :help swap-file),
- observing that vim -r reports on the files in the current directory first,
- observing that the file name associated with the swap file is reported before the process ID of the vim process, and
- observing that vim appends (STILL RUNNING) when it finds the active process,
I changed the current directory appropriately and ran this code, after plugging in the name of the file-to-search-for:
perl -lne '
  # stop once the report moves on to other directories
  last if /^\s+In directory/;
  # a numbered entry starts a new swap-file record
  undef $f if /^\d+/;
  $f = $1 if /^\s+file name:\s+(.*?)\s*$/;
  if ( defined $f && $f =~ m#/file-to-search-for# && /^\s+process ID:\s+(\d+).*?STILL RUNNING/ ) {
    print $1;      # the pid of the vim still editing the file
    $pid //= $1;
  }
  END { exit !$pid; }'
The pid of the running vim process is printed, and the exit status is zero when the appropriate swap file is found and non-zero if the file was not being edited.
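To make the plumbing explicit, here is a hedged sketch of the whole invocation (it assumes, as is typical, that vim emits the -r report on stderr; file-to-search-for is the same placeholder as above):
cd /dir/containing/the/file   # vim -r reports on this directory's swap files first
if vim -r 2>&1 | perl -lne '
  last if /^\s+In directory/;
  undef $f if /^\d+/;
  $f = $1 if /^\s+file name:\s+(.*?)\s*$/;
  if ( defined $f && $f =~ m#/file-to-search-for# && /^\s+process ID:\s+(\d+).*?STILL RUNNING/ ) {
    print $1;
    $pid //= $1;
  }
  END { exit !$pid; }' >/dev/null; then
  echo "file is still open in a running vim"
fi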
I am using Pentaho CE 5 on Windows. I would like to use CTools, but I can't make them show up in the File -> New menu.
Being behind a proxy, I cannot use the Marketplace plugin, so I have tried a manual installation.
First, I tried to use ctools-installer.sh. I ran the following command line in Cygwin (wget and unzip are installed):
./ctools-installer.sh -s /cygdrive/d/Users/[user]/Mes\ Programmes/pentaho/biserver-ce/pentaho-solutions/ -w /cygdrive/d/Users/[user]/Mes\ programmes/pentaho/biserver-ce/tomcat/webapps/pentaho/
The script starts, asks me what module I want to install, and begins the downloads.
For each module, I get output like this (set -x added to the script):
+ echo -n 'Downloading CDF...'
Downloading CDF...
+ wget -q --no-check-certificate 'http://ci.analytical-labs.com/job/Webdetails-CDF-5-Release/lastSuccessfulBuild/artifact/bi-platform-v2-plugin/dist/zip/dist.zip' -O .tmp/cdf/dist.zip
SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
+ '[' '!' -z '' ']'
+ rm -f .tmp/dist/marketplace.xml
+ unzip -o .tmp/cdf/dist.zip -d .tmp
End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
unzip: cannot find zipfile directory in .tmp/cdf/dist.zip, and cannot find .tmp/cdf/dist.zip.zip, period.
+ chmod -R u+rwx .tmp
+ echo Done
Done
Then the script ends. I have seen on this page (pentaho-bi-suite) that this is the normal output. Nevertheless, it seems a bit strange to me, and when I start my Pentaho server (login: admin/password), I cannot see any new tools in the menus.
After a look at a few other tutorials and at the script itself, I downloaded the .zip snapshots for every tool and unzipped them into the system directory of my Pentaho server. Same result.
I would like to make the .sh script work; what can I try or adjust?
Thanks
EDIT 05/06/2014
I checked the dist.zip files downloaded by the script and they are all empty. It seems that wget cannot fetch the zip files, and therefore the installation fails.
When I try to get any webpage through wget, it fails. I think it is because of the proxy.
Here is my .wgetrc file, located in my user's Cygwin home folder:
use_proxy=on
http_proxy=http://[url]:[port]
https_proxy=http://[url]:[port]
proxy_user=[user]
proxy_password=[password]
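One thing worth checking (a hedged sketch with placeholder host, port, and credentials): the trace above prints SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc, which suggests the script runs the native GnuWin32 wget rather than Cygwin's, so this .wgetrc in the Cygwin home directory may never be read. Both builds honour the proxy environment variables:
export http_proxy=http://proxyuser:proxypass@proxy.example.com:8080/
export https_proxy=$http_proxy
wget -O /dev/null http://example.com && echo "proxy reachable"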
How could I make this work?
EDIT 10/06/2014
In the end, I changed my network connection settings to bypass the proxy. It seems that there is an offline mode for the installer, so one can download all the needed files in a proxy-free environment and then run the script offline. I guess this is related to the -r option.
I consider this post solved, since it is not a CTools issue anymore.
It is difficult to identify the issue in the above procedure, but you can refer to this blog; its author is a key member of Pentaho itself.
You can manually install the components from http://www.webdetails.pt/ctools/, or, if you have Pentaho 5.1 or above, you can add the following parameters to the CATALINA_OPTS variable (in start-pentaho.bat or start-pentaho.sh):
-Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttp.nonProxyHosts="localhost|127.0.0.1|10.*.*.*"
http://docs.treasuredata.com/articles/pentaho-dataintegration#tips-how-can-i-use-pentaho-through-a-proxy
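For illustration, a hedged sketch of that line in start-pentaho.sh, with proxy.example.com and 8080 as placeholder values (the .bat variant would use set instead of export):
# pass the JVM proxy settings to Tomcat
export CATALINA_OPTS="$CATALINA_OPTS -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=localhost|127.0.0.1"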
Possible Duplicate:
Renaming lots of files in Linux according to a pattern
I have multiple files in this format:
file_1.pdf
file_2.pdf
...
file_100.pdf
My question is: how can I rename all of the files so that they look like this:
file_001.pdf
file_002.pdf
...
file_100.pdf
I know you can rename multiple files with 'rename', but I don't know how to do it in this case.
You can do this using the Perl tool rename from the shell prompt. (There are other tools with the same name which may or may not be able to do this, so be careful.)
rename 's/(\d+)/sprintf("%03d", $1)/e' *.pdf
If you want to do a dry run to make sure you don't clobber any files, add the -n switch to the command.
Note
If you run the following command (Linux):
$ file $(readlink -f $(type -p rename))
and you have a result like
.../rename: Perl script, ASCII text executable
then this seems to be the right tool =)
This seems to be the default rename command on Ubuntu.
To make it the default on Debian and derivatives like Ubuntu:
sudo update-alternatives --set rename /path/to/rename
Explanations
s/// is the basic substitution expression: s/to_replace/replaced/; check perldoc perlre.
(\d+) captures, with the parentheses, one digit (\d) or more (+) into $1.
sprintf("%03d", $1): sprintf is like printf, but instead of printing it formats a string, with the same syntax; %03d zero-pads the number to three digits, and $1 is the captured string. Check perldoc -f sprintf.
Calling a Perl function in the replacement part is permitted because of the e modifier at the end of the expression.
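To see the substitution on a single name, here is a quick standalone illustration (not part of the rename invocation itself):
perl -e '$_ = "file_7.pdf"; s/(\d+)/sprintf("%03d", $1)/e; print "$_\n"'
# prints: file_007.pdf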
If you want to do it with pure bash:
for f in file_*.pdf; do x="${f##*_}"; echo mv "$f" "${f%_*}$(printf '_%03d.pdf' "${x%.pdf}")"; done
(note the debugging echo)
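For clarity: ${f##*_} strips everything up to the last underscore (leaving, e.g., 1.pdf), ${x%.pdf} drops the extension, ${f%_*} keeps everything before the last underscore (file), and printf '_%03d.pdf' zero-pads the number. Once the printed mv commands look right, remove the echo to actually rename.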
I have been on this issue for a couple of days now. I use zsh and need to add a directory to my path so that the command I use will be recognized. I have followed these steps so far:
cd ~
ls -al
ls -al shows me these files:
.oh-my-zsh
.profile
.putty
.rediscli_history
.ringo-history
.ssh
.subversion
.viminfo
.zcompdump
.zsh-update
.zsh_history
.zshrc
I assume I need to add the path to .zshrc, so:
open -e .zshrc
At the end of the file, I added the path of the command I will be using to set up my project (with RingoJS):
export PATH=Users/repos/ringojs/bin/:$PATH
I closed the file, restarted the terminal application, even restarted the computer; however, the command (ringo-admin), located under the path above (/Users/repos/ringojs/bin), is not found:
ringo-admin create --google-appengine MyAppName
zsh: command not found: ringo-admin
Please help me with this. In case it helps, here is my .zshrc file content:
# Path to your oh-my-zsh configuration.
ZSH=$HOME/.oh-my-zsh
# Set name of the theme to load.
# Look in ~/.oh-my-zsh/themes/
# Optionally, if you set this to "random", it'll load a random theme each
# time that oh-my-zsh is loaded.
ZSH_THEME="robbyrussell"
# Example aliases
# alias zshconfig="mate ~/.zshrc"
# alias ohmyzsh="mate ~/.oh-my-zsh"
# Set to this to use case-sensitive completion
# CASE_SENSITIVE="true"
# Comment this out to disable bi-weekly auto-update checks
# DISABLE_AUTO_UPDATE="true"
# Uncomment to change how many often would you like to wait before auto-updates occur? (in days)
# export UPDATE_ZSH_DAYS=13
# Uncomment following line if you want to disable colors in ls
# DISABLE_LS_COLORS="true"
# Uncomment following line if you want to disable autosetting terminal title.
# DISABLE_AUTO_TITLE="true"
# Uncomment following line if you want red dots to be displayed while waiting for completion
# COMPLETION_WAITING_DOTS="true"
# Which plugins would you like to load? (plugins can be found in ~/.oh-my-zsh/plugins/*)
# Custom plugins may be added to ~/.oh-my-zsh/custom/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
plugins=(git)
source $ZSH/oh-my-zsh.sh
# Customize to your needs...
export PATH=Users/repos/ringojs/bin/ringo-admin:$PATH
Please guide me step by step, as I am new to zsh. Thanks.
UPDATE:
echo $PATH shows me the recently added directory:
/Users/repos/ringojs/bin/ringo-admin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/local/git/bin
I don't understand why the command is still not found.
You are missing a leading slash, and note that a PATH entry must be the directory containing the executable, not the path of the executable itself (your .zshrc appends .../bin/ringo-admin, which is the binary). Try:
export PATH=/Users/repos/ringojs/bin/:$PATH
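To verify after editing ~/.zshrc (a quick sketch):
source ~/.zshrc       # reload the configuration in the current shell
which ringo-admin     # should now print /Users/repos/ringojs/bin/ringo-admin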
I want to download a big file from a normal HTTP link to an FTP server (under Ubuntu) without storing the file locally (as my local storage is too small).
Do you have any ideas how to do this with wget or a small Perl script? (I don't have sudo rights on the local machine.)
Here's my take, combining wget and Net::FTP on the command line:
wget -O - http://website.com/hugefile.zip | perl -MNet::FTP -e 'my $ftp = Net::FTP->new("ftp.example.com"); $ftp->login("user", "pass"); $ftp->binary; $ftp->put(\*STDIN, "hugefile.zip");'
Of course, you can put it in a file (ftpupload.pl) as well and run it.
#!/usr/bin/perl
use strict;
use warnings;
use Net::FTP;

# connect to the FTP server and log in with your credentials
my $ftp = Net::FTP->new("ftp.example.com") or die "Cannot connect: $@";
$ftp->login("user", "pass") or die "Login failed: ", $ftp->message;
$ftp->binary;

# Because of the pipe we get the file content on STDIN.
# Net::FTP's put can handle a pipe as well as a regular filehandle.
$ftp->put(\*STDIN, "hugefile.zip") or die "Upload failed: ", $ftp->message;
Run it like this:
wget -O - http://website.com/hugefile.zip | perl ftpupload.pl
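Note the binary call in both snippets: Net::FTP starts in ASCII transfer mode by default, which would corrupt a binary file like a zip archive.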
There's - of course - a CPAN module which makes life easy for FTP:
http://search.cpan.org/search?mode=module&query=Net%3A%3AFTP
And WWW::Mechanize looks up files, follows links, etc.
With these modules I think you can solve your problem.
You can try wput. It is not a very well-known tool, but I think you can use it.
Use wget's output-document option
wget -O /dev/null http://foo.com/file.uuu
From wget's manual page:
"-O file
--output-document=file
The documents will not be written to the appropriate files, but all will be
concatenated together and written to file. If - is used as file, documents will be
printed to standard output, disabling link conversion. (Use ./- to print to a file
literally named -.)
Use of -O is not intended to mean simply "use the name file instead of the
one in the URL;" rather, it is analogous to shell redirection: wget -O file http://foo is
intended to work like wget -O - http://foo > file; file will be truncated immediately,
and all downloaded content will be written there."
However, I can't see what the purpose of that would be.