Relocated path in a postinstall script - CentOS

I'm working on an RPM package that deploys files to /opt and /etc.
In most cases it works perfectly, except that in one particular environment writing to /etc is not allowed.
So I used relocations in order to deploy the /etc files to some other location:
Relocations: /opt /etc
By specifying the --relocate option I can deploy the /etc files into another location:
rpm -ivh --relocate /etc=/my/path/to/etc mypackage.rpm
Now the issue is that the postinstall script contains some hard-coded references to /etc that don't get replaced when the package is deployed:
echo `hostname --fqdn` > /etc/myapp/host.conf
I hope there is a way (a macro, a keyword, ...) to use instead of hard-coded paths so that the substitution is performed during rpm execution.
If you have any information on this I'd really appreciate some help.
Thanks in advance.
PS: Please note that this is NOT a duplicate of the previously asked (and answered) questions about root path relocation, as we're dealing with several relocation paths and need to handle each of them separately in the rpm scriptlets.

Many thanks to Panu Matilainen from the RPM mailing list, who answered the question. I'll reproduce his mail verbatim in order to share the knowledge:
I assume you mean (the above is how rpm -qi shows it though):
Prefixes: /opt /etc
The prefixes are passed to scriptlets via $RPM_INSTALL_PREFIX<n>
environment variables, where <n> is the index of supported prefixes starting
from zero. So in the above,
/opt is $RPM_INSTALL_PREFIX0
/etc is $RPM_INSTALL_PREFIX1
So the scriptlet example becomes:
echo `hostname --fqdn` > $RPM_INSTALL_PREFIX1/myapp/host.conf
Works like a charm, thank you very much Panu!
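For completeness, a fuller %post scriptlet along these lines might look like the sketch below (this is not from Panu's mail; the myapp path comes from the question, and the fallback defaults are an assumption for installs done without --relocate):

%post
# With "Prefixes: /opt /etc" in the spec, rpm exports:
#   $RPM_INSTALL_PREFIX0 -> the actual location of /opt
#   $RPM_INSTALL_PREFIX1 -> the actual location of /etc
# Fall back to the default paths in case a variable is unset (assumption).
ETC_PREFIX="${RPM_INSTALL_PREFIX1:-/etc}"
mkdir -p "$ETC_PREFIX/myapp"
echo "$(hostname --fqdn)" > "$ETC_PREFIX/myapp/host.conf"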

Why does Yocto use absolute paths in TMPDIR?

Changing the path of a Yocto environment is not a good idea, as I found out. This also explains why e.g. bitbake can be run regardless of the current working directory: absolute paths are stored in many places during the build process, and even subdirectory structures are created inside the tmp directory tree. I ended up rebuilding from scratch, which takes a long time.
Here is how I tried to modify all the paths:
find . -name '*.conf' -exec sed -i 's/media\/rob\/3210bcd4-49ef-473e-97a6-e4b7a2c1973e/home/g' {} +
This step replaces absolute paths within many dynamically generated conf files (from xx/xx/linux to /home/linux; linux was chosen for historical reasons, and I could also have mounted the partition as /home/yocto or any other name).
Next I deleted the subdirectory structures containing the old path, in the hope that the build process would recognize these deletions and still rebuild quickly:
find . -name '*3210bcd4-49ef-473e-97a6-e4b7a2c1973e*' -exec fakeroot rm -r {} +
It was not recognized. Then I gave up.
This comes from a user new to Yocto who is familiar with former/classic cross-build environments based on make menuconfig etc.
My question is:
Why are absolute paths generated & used throughout tmp instead of treating everything as relative?
Or, asked differently:
Why not use something like ${TOPDIR}/tmp throughout the build configuration, instead of hardcoding the absolute path to tmp?
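For reference, Poky's stock configuration does express it relatively: meta/conf/bitbake.conf contains something like the line below, and the absolute paths appear because the variable is expanded before being written into generated scripts and caches (my understanding; worth verifying against your release):

TMPDIR ?= "${TOPDIR}/tmp"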

Need help writing a basic command line

I'm using Windows 10, if it matters, and I'm trying to feed a file to the "oeminst" app, which should convert the file from .EDR to .CCSS. According to the app's website, its usage summary is this:
oeminst [-options] [inputfiles]
-v Verbose
-n Don't install, show where files would be installed
-c Don't install, save files to current directory
-S d Specify the install scope: u = user (def.), l = local system
infile Manufacturers setup.exe install file(s) or .dll(s) containing install files
infile.[edr|ccss|ccmx] EDR file(s) to translate and install or CCSS or CCMX files to install
If no file is provided, oeminst will look for the install CD.
more info can be found here https://www.argyllcms.com/doc/oeminst.html
So far I tried this code:
C:\Users\PC>oeminst infile. [C:\Users\PC\testfile.edr]
oeminst: Error - Unable to load file 'infile [C:\Users\PC\testfile]'
I'd appreciate it if someone could at least tell me whether I'm doing it right or not.
P.S. Sorry for the messed-up text; I'm not sure how to fix it. It looks fine in editing mode.
Try this: oeminst infile.edr C:\Users\PC\testfile.edr
Never mind, I got it:
C:\Users\PC>oeminst C:\Users\PC\testfile.edr
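If you want the converted .ccss saved to the current directory instead of installed (per the -c option in the usage summary above), this variant should work:

C:\Users\PC>oeminst -c C:\Users\PC\testfile.edr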

Can we wget with file list and renaming destination files?

I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check if there is actually a newer file to download.
-r will turn the recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=X will avoid the generation of the host's subdirectories.
--timeout=xxx will, well, timeout :)
--directory-prefix will store files in the desired directory.
This works nicely, no problem.
Now, to the issue:
Let's say my files-to-download.txt has these kind of files:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget will see we already have a picture-same-name.jpg, so it won't download the second or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files to be in the same directory. I can't use the -O option because it clashes with -N, and I need that. I've tried to use -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a.- wget from a list of URLs the way I do now, keeping my parameters.
b.- get all the files in the same directory while being able to rename each file.
Does anybody have any solution to this?
Thanks in advance.
I would suggest two approaches.
Use the "-nc" or "--no-clobber" option. From the man page:
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
A combination with -O/--output-document is only accepted if the given output file does not exist.
Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However the logic is sound, and with a bit of modification you can get it to work on other platforms/scripts as well.
Sample pseudocode bash script:
while read -r i; do
    wget <all your flags except the -i flag> "$i" -O /path/to/custom/directory/filename
done < list-of-files-to-download.txt
You can modify the script to download each file to a temporary file, parse $i to get the filename from the URL, check whether the file already exists on disk, and then decide whether to rename the temp file to the name you want, as sketched below.
This offers much more control over your downloads.
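A more concrete sketch of that idea (illustrative only: the user-agent, referer, and target directory mirror the question, the timeout value is arbitrary, and the name-mangling scheme is an assumption about what the renaming should look like):

#!/bin/bash
# Give each download a unique name derived from its URL path, so files with
# the same basename in different directories can coexist in one directory.
dest=/directory/for/downloaded/files
while read -r url; do
    # http://website/directory1/picture-same-name.jpg -> directory1_picture-same-name.jpg
    path=${url#*://*/}     # strip the scheme and host
    name=${path//\//_}     # replace the remaining slashes with underscores
    # -O cannot be combined with -N, so emulate "fetch only if new" by
    # skipping files we already have (a simplification of -N's timestamp check)
    [ -e "$dest/$name" ] || wget --user-agent='some-agent' \
        --referer=http://some-referrer.html --timeout=30 \
        -O "$dest/$name" "$url"
done < list-of-files-to-download.txt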

Ctools do not show up in pentaho UI

I am using Pentaho CE 5 on Windows. I would like to use CTools, but I can't make them show up in the File -> New menu.
Being behind a proxy, I cannot use the Marketplace plugin, so I have tried a manual installation.
First, I tried to use ctools-installer.sh. I ran the following command line in Cygwin (wget and unzip are installed):
./ctools-installer.sh -s /cygdrive/d/Users/[user]/Mes\ Programmes/pentaho/biserver-ce/pentaho-solutions/ -w /cygdrive/d/Users/[user]/Mes\ programmes/pentaho/biserver-ce/tomcat/webapps/pentaho/
The script starts, asks me what module I want to install, and begins the downloads.
For each module, I get output like this (set -x added to the script):
+ echo -n 'Downloading CDF...'
Downloading CDF...
+ wget -q --no-check-certificate 'http://ci.analytical-labs.com/job/Webdetails-CDF-5-Release/lastSuccessfulBuild/artifact/bi-platform-v2-plugin/dist/zip/dist.zip' -O .tmp/cdf/dist.zip
SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
+ '[' '!' -z '' ']'
+ rm -f .tmp/dist/marketplace.xml
+ unzip -o .tmp/cdf/dist.zip -d .tmp
End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
unzip: cannot find zipfile directory in .tmp/cdf/dist.zip, and cannot find .tmp/cdf/dist.zip.zip, period.
+ chmod -R u+rwx .tmp
+ echo Done
Done
Then the script ends. I have seen on this page (pentaho-bi-suite) that this is the normal output. Nevertheless, it seems a bit strange to me, and when I start my Pentaho server (login: admin/password), I cannot see any new tools in the menus.
After looking at a few other tutorials and at the script itself, I downloaded the .zip snapshots for every tool and unzipped them in the system directory of my Pentaho server. Same result.
I would like to make the .sh work; what can I try or adjust?
Thanks
EDIT 05/06/2014
I checked the dist.zip files downloaded by the script and they are all empty. It seems that wget cannot fetch the zip files, and therefore the installation fails.
When I try to fetch any webpage through wget, it fails. I think it is because of the proxy.
Here is my .wgetrc file, located in my user's cygwin home folder:
use_proxy=on
http_proxy=http://[url]:[port]
https_proxy=http://[url]:[port]
proxy_user=[user]
proxy_password=[password]
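To rule the .wgetrc out, the same settings can also be passed directly on the command line (flags per the wget manual; the placeholders are the same as above):

wget -e use_proxy=on -e http_proxy=http://[url]:[port] --proxy-user=[user] --proxy-password=[password] http://example.com/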
How could I make this work?
EDIT 10/06/2014
In the end, I changed my network connection settings to bypass the proxy. It seems that there is an offline mode for the installer, so one can download all the needed files in a proxy-free environment and then run the script offline.
I guess this is related to the -r option.
I consider this post solved, since it is not a CTools issue anymore.
It is difficult to identify the issue in the above procedure, but you can refer to this blog; he is a key member of Pentaho itself.
You can manually install the components from http://www.webdetails.pt/ctools/ or, if you have Pentaho 5.1 or above, you can add the following parameters to the CATALINA_OPTS option (in start-pentaho.bat or start-pentaho.sh):
-Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttp.nonProxyHosts="localhost|127.0.0.1|10...*"
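For instance, in start-pentaho.sh the line might look like this (the proxy host and port are illustrative placeholders):

CATALINA_OPTS="$CATALINA_OPTS -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=localhost|127.0.0.1"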
http://docs.treasuredata.com/articles/pentaho-dataintegration#tips-how-can-i-use-pentaho-through-a-proxy

Is there documentation about "Config_heavy.pl" keys?

After installing perl you can find a Config_heavy.pl file, e.g. in /usr/lib/perl5/5.18/mach/Config_heavy.pl, and I wonder if there is documentation of all the key/value pairs one can find in it. Most of them are clear, but sometimes I'm not sure.
Calling perl -V shows all these values.
I guess what I really want to know is which values are really 'hard' (recorded in more places than just this file, so that a change here would have no effect) and which take effect after a change. E.g. which of them can I change to affect CPAN, like adding '-I.' to the ccflags so CPAN searches for local headers included with <> instead of "" (you can find this in Authen::PAM ;) ).
So if there is more information to be found about the keys in this file, I would be happy to learn about it.
From the command line,
perldoc Config
You shouldn't change that file.
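To look up individual keys, both of these standard forms work from the command line:

perl -V:ccflags
perl -MConfig -e 'print $Config{ccflags}, "\n"'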
I think the following will do the trick to install Authen::PAM:
wget http://search.cpan.org/CPAN/authors/id/N/NI/NIKIP/Authen-PAM-0.16.tar.gz
tar xvzf Authen-PAM-0.16.tar.gz
cd Authen-PAM-0.16
perl Makefile.PL CCFLAGS='-I.'
make test
make install