/bin/sh RHEL6 source not working - sh

We have a build environment which sources a script for a few environment settings, as follows:
if test -f build_env
then
# Source the config file
. build_env
but somehow this seems to be failing on RHEL6:
sh-4.1$ . build_env
sh: .: build_env: file not found
while on RHEL4 it works:
sh-3.00$ . buildenv
sh-3.00$
What could be the issue?

Try . ./build_env (notice './').
Also, your if syntax is doubtful, though I can't be sure until you fix formatting in the question. I would write it like this:
if test -f build_env; then . ./build_env; fi
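One plausible explanation, worth verifying on your systems: when bash runs as sh (POSIX mode), the . builtin looks its argument up via PATH rather than in the current directory, so . build_env only succeeds if '.' happens to be on PATH. A minimal sketch of the guarded source with an explicit path, reusing the file name from the question:
# Minimal sketch: the ./ prefix makes the lookup independent of PATH.
if test -f ./build_env
then
    # Source the config file
    . ./build_env
fi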


Setting up AEM6.3 as a service Linux Redhat version 7.3

I am trying to set up an AEM 6.3 environment as a service by following the steps below, but I am having some issues.
I have a RedHat 7.3 Linux server.
I am taking reference from here
aem file (/usr/bin/aem):
#!/bin/bash
#
# /etc/rc.d/init.d/aem6
#
#
# <tags -- see below for tag definitions. *Every line* from the top
# of the file to the end of the tags section must begin with a #
# character. After the tags section, there should be a blank line.
# This keeps normal comments in the rest of the file from being
# mistaken for tags, should they happen to fit the pattern.>
#
# chkconfig: 35 85 15
# description: This service manages the Adobe Experience Manager java process.
# processname: aem6
# pidfile: /crx-quickstart/conf/cq.pid
# Source function library.
. /etc/rc.d/init.d/functions
SCRIPT_NAME=`basename $0`
AEM_ROOT=/mnt/crx/author
AEM_USER=root
########
BIN=${AEM_ROOT}/crx-quickstart/bin
START=${BIN}/start
STOP=${BIN}/stop
STATUS="${BIN}/status"
case "$1" in
start)
echo -n "Starting AEM services: "
su - ${AEM_USER} ${START}
touch /var/lock/subsys/$SCRIPT_NAME
;;
stop)
echo -n "Shutting down AEM services: "
su - ${AEM_USER} ${STOP}
rm -f /var/lock/subsys/$SCRIPT_NAME
;;
status)
su - ${AEM_USER} ${STATUS}
;;
restart)
su - ${AEM_USER} ${STOP}
su - ${AEM_USER} ${START}
;;
reload)
;;
*)
echo "Usage: $SCRIPT_NAME {start|stop|status|reload}"
exit 1
;;
esac
aem.system file (/etc/systemd/system) (couldn't find system.d, so I placed this file under systemd):
[Unit]
Description=Adobe Experience Manager
[Service]
Type=simple
ExecStart=/usr/bin/aem start
ExecStop=/usr/bin/aem stop
ExecReload=/usr/bin/aem restart
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
I have set permissions on both of these files as follows:
#chmod u+rwx /usr/bin/aem
#chmod u+rwx /etc/systemd/system/aem.system
When I run these commands:
#cd /etc/systemd/system
#systemctl enable aem.system
it gives me the error below:
#systemctl enable aem.system
Failed to execute operation: No such file or directory
Am I missing any step here?
Thanks!
You are correct in placing the custom unit file in /etc/systemd/system, as that is the place for all unpackaged unit files. However, your file should really be called aem.service; to the best of my knowledge, systemd does not pick up files ending in .system. On a side note: those overly liberal filesystem permissions really are unnecessary, 755 should be more than sufficient.
Also: If there really is a /etc/init.d/aem6 file as the linked guide suggests, systemd's SysV-compatibility layer should be able to read that one in and systemctl enable --now aem6 is everything you need to do.
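A hedged sketch of the rename-and-enable steps for the unit-file approach (the executable bit only belongs on /usr/bin/aem; the unit file just needs to be readable):
mv /etc/systemd/system/aem.system /etc/systemd/system/aem.service
chmod 644 /etc/systemd/system/aem.service
systemctl daemon-reload                     # make systemd re-read unit files
systemctl enable --now aem.service
systemctl status aem.service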

Buildroot Config Option for applying custom patch

I am new to Buildroot and am working to build Linaro with Buildroot. I have multiple kernel config fragment files and have specified them in the Buildroot defconfig.
I have specified a custom kernel patches directory with BR2_LINUX_PATCH_DIR.
Some of the config flags that are supposed to be set in the resulting .config files are not set, so I suspect that the patches are not being applied successfully. I also tried giving a non-existent location as the Linux patch dir and it does not give any error.
Is there anything to do other than giving a value to BR2_LINUX_PATCH_DIR, and what should the format of the directory structure be? The Buildroot manual says it should be package_name/patch_name. For Linux, what should the package name be? Should it be the same as the name of the directory created for Linux, which for me is linux-custom?
Please suggest and guide me on this.
Thanks in advance.
The option is named BR2_LINUX_KERNEL_PATCH; there is nothing named BR2_LINUX_PATCH_DIR. It applies all patches listed in this option (if those are files), or all files named *.patch if what is given in this option is a directory. See the code in linux/linux.mk:
define LINUX_APPLY_LOCAL_PATCHES
	for p in $(filter-out ftp://% http://% https://%,$(LINUX_PATCHES)) ; do \
		if test -d $$p ; then \
			$(APPLY_PATCHES) $(@D) $$p \*.patch || exit 1 ; \
		else \
			$(APPLY_PATCHES) $(@D) `dirname $$p` `basename $$p` || exit 1; \
		fi \
	done
endef
Also, I would recommend that you watch the output of Buildroot: it shows everything it is doing, and in particular it lists the patches it applies. Look for the line >>> linux ... Patching, which marks the beginning of the patching step of the linux package.
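As an illustration (the directory and patch names here are made up, not mandated by Buildroot): drop plain *.patch files into a directory, point the option at it in your defconfig, and then watch the patching step of the linux package.
mkdir -p board/myboard/patches/linux             # illustrative location
cp 0001-my-kernel-fix.patch board/myboard/patches/linux/
# In the Buildroot configuration (menuconfig: Kernel -> Custom kernel patches):
#   BR2_LINUX_KERNEL_PATCH="board/myboard/patches/linux"
make linux-dirclean                              # force the kernel to be re-extracted and re-patched
make linux-patch 2>&1 | grep -B1 -A5 'Patching'  # the applied patches are listed here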

rake executes file task when file already exists

I have a Rakefile that executes some (but not all) of its file tasks even if the files of interest have already been built. The frustrating thing is that paring down my Rakefile to an MWE resolves the problem, even though I haven't altered anything with respect to the file task definition, how the files are being selected, the dependencies, or anything else. It seems that simply removing other (file) tasks from the Rakefile remedies the problem.
I realize this is a really awful question, but does anyone have ideas about what might be going on here? I'd post sample code, but my MWE works as expected and I don't have any sense of what is causing the problem in the full Rakefile. All I can think to do is demonstrate that my MWE is literally an excerpt from the full Rakefile, unaltered...
➜ solutionmaps cat mwe/Rakefile|sed '/^$/d'|tee a
require 'rake'
require 'rake/clean'
require 'pathname'
HOME = ENV['HOME']
SHARED_ATLAS = "#{HOME}/MRI/Manchester/data/CommonBrains/MNI_EPI_funcRes.nii"
TXT = Rake::FileList["txt/nodestrength/??.mni"]
AFNI_RAW = TXT.pathmap("afni/nodestrength/%n_raw+tlrc.HEAD")
AFNI_RAW.zip(TXT).each do |target,source|
file target => [source] do
sh("3dUndump -master #{SHARED_ATLAS} -xyz -datum float -prefix #{target.sub("+tlrc.HEAD","")} #{source}")
end
CLOBBER.push(target)
CLOBBER.push(target.sub(".HEAD",".BRIK"))
CLOBBER.push(target.sub(".HEAD",".BRIK.gz"))
end
➜ solutionmaps perl -ne 'print if ($seen{$_} .= @ARGV) =~ /10$/' Rakefile mwe/Rakefile|sed '/^$/d'|tee b
require 'rake'
require 'rake/clean'
require 'pathname'
HOME = ENV['HOME']
SHARED_ATLAS = "#{HOME}/MRI/Manchester/data/CommonBrains/MNI_EPI_funcRes.nii"
TXT = Rake::FileList["txt/nodestrength/??.mni"]
AFNI_RAW = TXT.pathmap("afni/nodestrength/%n_raw+tlrc.HEAD")
AFNI_RAW.zip(TXT).each do |target,source|
file target => [source] do
sh("3dUndump -master #{SHARED_ATLAS} -xyz -datum float -prefix #{target.sub("+tlrc.HEAD","")} #{source}")
end
CLOBBER.push(target)
CLOBBER.push(target.sub(".HEAD",".BRIK"))
CLOBBER.push(target.sub(".HEAD",".BRIK.gz"))
end
➜ solutionmaps diff a b
➜ solutionmaps
And that my MWE works as expected (that is, it does not execute the file task):
➜ mwe rake --trace --dry-run afni/nodestrength/02_raw+tlrc.HEAD
** Invoke afni/nodestrength/02_raw+tlrc.HEAD (first_time, not_needed)
** Invoke txt/nodestrength/02.mni (first_time, not_needed)
But the full Rakefile does not; it executes the file task:
rake --trace --dry-run afni/nodestrength/02_raw+tlrc.HEAD
** Invoke afni/nodestrength/02_raw+tlrc.HEAD (first_time)
** Invoke txt/nodestrength/02.mni (first_time, not_needed)
** Execute (dry run) afni/nodestrength/02_raw+tlrc.HEAD
➜ solutionmaps ls afni/nodestrength/02_raw+tlrc.HEAD
afni/nodestrength/02_raw+tlrc.HEAD
I finally happened across a possible answer:
Rake determines that a file task needs to be run if the file doesn’t exist or if any of the prerequisite file tasks are newer.
Quoted from: http://madewithenvy.com/ecosystem/articles/2013/rake-file-tasks/
Since my Rakefiles are under heavy development, and my file tasks are all pretty interrelated, this is probably why my Rakefile always wanted to rebuild everything.
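To illustrate the quoted rule with the files from the trace above, one quick check from the shell is to compare the modification times of the target and its prerequisite; if the .mni file is newer than the .HEAD file, rake will (correctly) re-run the task:
ls -l --time-style=full-iso txt/nodestrength/02.mni afni/nodestrength/02_raw+tlrc.HEAD
# test's -nt operator answers the same question directly:
[ txt/nodestrength/02.mni -nt afni/nodestrength/02_raw+tlrc.HEAD ] && echo "prerequisite is newer, so the file task will run"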

Building Apache Spark using SBT: Invalid or corrupt jarfile

I'm trying to install Spark on my local machine. I have been following this guide. I have installed JDK-7 (I also have JDK-8) and Scala 2.11.7. A problem occurs when I try to use sbt to build Spark 1.4.1. I get the following exception.
NOTE: The sbt/sbt script has been relocated to build/sbt.
Please update references to point to the new location.
Invoking 'build/sbt assembly' now ...
Attempting to fetch sbt
Launching sbt from build/sbt-launch-0.13.7.jar
Error: Invalid or corrupt jarfile build/sbt-launch-0.13.7.jar
I have searched for a solution to this problem. I found a nice guide https://stackoverflow.com/a/31597283/2771315 which uses a pre-built version. Other than using the pre-built version, is there a way to install Spark using sbt? Further, is there a reason why the Invalid or corrupt jarfile error occurs?
I met the same problem and have fixed it now.
This is probably because sbt-launch-0.13.7.jar was not downloaded successfully: the file exists, but it is not the correct file. It should be about 1.2 MB in size. If it is smaller than that, go into build/ and open sbt-launch-0.13.7.jar with vim or another tool.
If the file has content like this:
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
that implies that sbt-launch-0.13.7.jar was not downloaded.
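A quick way to check this from the shell, without opening the file in an editor (the path is the one from the error message):
file build/sbt-launch-0.13.7.jar       # a good launcher reports Zip/Java archive data, not HTML document or ASCII text
unzip -t build/sbt-launch-0.13.7.jar   # a valid jar lists its entries without errors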
Then open sbt-launch-lib.bash in the same directory and check lines 41 and 42; they give two URLs. Check whether they work.
If URL1 doesn't work, download sbt-launch.jar manually (you can use URL2, which may work, or download it from the sbt official website), put it in the same directory, rename it to sbt-launch-0.13.7.jar, and then comment out the lines related to downloading (roughly lines 47 to 68) so the script does not download it again. Like this:
acquire_sbt_jar () {
  SBT_VERSION=`awk -F "=" '/sbt\.version/ {print $2}' ./project/build.properties`
  URL1=http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/${SBT_VERSION}/sbt-launch.jar
  URL2=http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/${SBT_VERSION}/sbt-launch.jar
  JAR=build/sbt-launch-${SBT_VERSION}.jar
  sbt_jar=$JAR
  # if [[ ! -f "$sbt_jar" ]]; then
  #   # Download sbt launch jar if it hasn't been downloaded yet
  #   if [ ! -f "${JAR}" ]; then
  #     # Download
  #     printf "Attempting to fetch sbt\n"
  #     JAR_DL="${JAR}.part"
  #     if [ $(command -v curl) ]; then
  #       (curl --silent ${URL1} > "${JAR_DL}" || curl --silent ${URL2} > "${JAR_DL}") && mv "${JAR_DL}" "${JAR}"
  #     elif [ $(command -v wget) ]; then
  #       (wget --quiet ${URL1} -O "${JAR_DL}" || wget --quiet ${URL2} -O "${JAR_DL}") && mv "${JAR_DL}" "${JAR}"
  #     else
  #       printf "You do not have curl or wget installed, please install sbt manually from http://www.scala-sbt.org/\n"
  #       exit -1
  #     fi
  #   fi
  #   if [ ! -f "${JAR}" ]; then
  #     # We failed to download
  #     printf "Our attempt to download sbt locally to ${JAR} failed. Please install sbt manually from http://www.scala-sbt.org/\n"
  #     exit -1
  #   fi
  #   printf "Launching sbt from ${JAR}\n"
  # fi
}
Then use "build/sbt assembly" to build the spark again.
Hope you will succeed.
If I didn't express clearly, the following links may be helpful.
https://www.mail-archive.com/user@spark.apache.org/msg34358.html
Error: Invalid or corrupt jarfile sbt/sbt-launch-0.13.5.jar (see the answer by prabeesh)
https://groups.google.com/forum/#!topic/predictionio-user/fllCh8n-0d4
Download the sbt-launch.jar file manually (you can use URL2, which may work, or download it from the sbt official website), put it in the same directory, rename it to sbt-launch-0.13.7.jar, then run the sbt/sbt assembly command.
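As a sketch of the manual download described above (the URL is the one already used in sbt-launch-lib.bash; the Typesafe repository may have moved since, so treat it as illustrative):
SBT_VERSION=0.13.7
curl -L -o build/sbt-launch-${SBT_VERSION}.jar \
  http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/${SBT_VERSION}/sbt-launch.jar
file build/sbt-launch-${SBT_VERSION}.jar   # verify it is a zip/jar, not an HTML error page
build/sbt assembly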

Openldap : overlay accesslog not found

I am trying to configure accesslog. I have changed the slapd.conf file and am trying to test with slaptest, but I am getting an error while executing slaptest -f /etc/openldap/slapd.conf.
slapd.conf configuration:
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema
.
.
modulepath /usr/lib/openldap/
moduleload accesslog.la
overlay accesslog
logdb "cn=accesslog"
logops writes
logsuccess TRUE
I am getting an error at overlay accesslog:
overlay "accesslog" not found
slaptest: bad configuration file!
Am I missing something?
I found the issue on my own: I had not compiled OpenLDAP with --enable-overlays.
To solve this issue:
I downloaded the OpenLDAP source.
Ran ./configure --enable-overlays (./configure [options] [variable=value ...]).
Then modified slapd.conf to load accesslog.la and executed slaptest -f /etc/openldap/slapd.conf; it no longer reports any error.
Restart slapd.
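For reference, a hedged sketch of those rebuild steps as shell commands (the source directory name is illustrative; note that loading accesslog.la as a module, as this slapd.conf does, needs the overlay built with the =mod form rather than statically):
cd openldap-2.4.x                      # illustrative source directory name
./configure --enable-overlays=mod      # =mod builds overlays as loadable modules (accesslog.la); plain --enable-overlays compiles them in statically
make depend && make && make install
slaptest -f /etc/openldap/slapd.conf   # should now accept the accesslog overlay
service slapd restart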