Promtail Error on RHEL 6 [caller=main.go:115 msg="error creating promtail" error="at least one client config should be provided"]

I have been trying to install Promtail on a RHEL 6 server and keep getting this error:
Error: caller=main.go:115 msg="error creating promtail" error="at least one client config should be provided"
It seems to be asking for the config file, which has been set via CONFIG_FILE=/usr/local/bin/promtailconf.yml.
I have done this deployment a number of times on RHEL 8 without a hitch; I'm not sure what I am getting wrong.
#cat /etc/init.d/promtail
#! /bin/bash
# chkconfig: 345 20 80
# description: my service
# 345 - 3,4,5 runlevels
# 20 - start priority
# 80 - stop priority
RETVAL=0
PROG="promtail"
EXEC=/usr/local/bin/promtail
LOCKFILE="/var/lock/subsys/$PROG"
LOGFILE=/var/log/promtail.log
DATADIR=/usr/local/bin/promtail/data
CONFIG_FILE=/usr/local/bin/promtailconf.yml
ErrLOGFILE=/var/log/promtail_error.log
# Source function library.
if [ -f /etc/rc.d/init.d/functions ]; then
. /etc/rc.d/init.d/functions
else
echo "/etc/rc.d/init.d/functions is not exists"
exit 0
fi
start() {
if [ -f $LOCKFILE ]
then
echo "$PROG is already running!"
else
echo -n "Starting $PROG: "
nohup $EXEC $OPTIONS > $LOGFILE 2> $ErrLOGFILE &
RETVAL=$?
[ $RETVAL -eq 0 ] && touch $LOCKFILE && success || failure
echo
return $RETVAL
fi
}
stop() {
echo -n "Stopping $PROG: "
killproc $EXEC
RETVAL=$?
[ $RETVAL -eq 0 ] && rm -r $LOCKFILE && success || failure
config file
cat /usr/local/bin/promtailconf.yml
server:
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://xxx.xxx.xxx.xxx:3100/loki/api/v1/push
scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*log
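One thing that stands out, assuming the excerpt above is the complete init script: CONFIG_FILE is defined but never used, and $OPTIONS in start() is never set, so promtail is launched with no configuration and therefore no clients section (hence "at least one client config should be provided"). A minimal sketch of a start line that actually passes the file:
nohup $EXEC -config.file=$CONFIG_FILE > $LOGFILE 2> $ErrLOGFILE &
Equivalently, setting OPTIONS="-config.file=$CONFIG_FILE" near the top of the script would let the existing start() line pick it up.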

Related

Starting zero alpha and ratel in a single command e.g. in MacOSX and other environments

https://dgraph.io/tour/intro/2/
asks for three docker commands to be run in different terminals:
zero
alpha
ratel
I'd rather start things from a single script dgraph within Mac OSX 10.3.6 and other environments, and use e.g. the Terminal app to call the different scripts, along the lines of Running a command in a new Mac OS X Terminal window.
How can the script below be adapted for other environments like Unix and Windows?
dgraph
#!/bin/bash
# WF 2020-08-05
# see https://dgraph.io/tour/intro/2/
version=v20.03.0
#ansi colors
#http://www.csc.uvic.ca/~sae/seng265/fall04/tips/s265s047-tips/bash-using-colors.html
blue='\033[0;34m'
red='\033[0;31m'
green='\033[0;32m' # '\e[1;32m' is too bright for white bg.
endColor='\033[0m'
#
# a colored message
# params:
# 1: l_color - the color of the message
# 2: l_msg - the message to display
#
color_msg() {
local l_color="$1"
local l_msg="$2"
echo -e "${l_color}$l_msg${endColor}"
}
#
# error
#
# show the given error message on stderr and exit
#
# params:
# 1: l_msg - the error message to display
#
error() {
local l_msg="$1"
# use ansi red for error
color_msg $red "Error:" 1>&2
color_msg $red "\t$l_msg" 1>&2
exit 1
}
# show usage
#
usage() {
echo "$0 [-h|--help|-k|--kill"
echo ""
echo "-b | --bash: start a bash terminal shell within the currently running container"
echo "-h | --help: show this usage"
echo "-k | --kill: stop the docker image"
exit 1
}
#
# stop the docker image
#
stopImage() {
color_msg $blue "stopping and removing dgraph image ..."
docker stop dgraph
docker rm dgraph
color_msg $green "...done"
}
#
# start a bash shell within the currently running container
#
bashInto() {
sudo docker exec -it dgraph bash
}
#
# dgraph zero
#
zero() {
docker run -it -p 5080:5080 -p 6080:6080 -p 8080:8080 \
-p 9080:9080 -p 8000:8000 -v ~/dgraph:/dgraph --name dgraph \
dgraph/dgraph:$version dgraph zero
}
#
# dgraph alpha
#
alpha() {
docker exec -it dgraph dgraph alpha --lru_mb 2048 --zero localhost:5080 --whitelist 0.0.0.0/0
}
#
# dgraph ratel
#
ratel() {
docker exec -it dgraph dgraph-ratel
}
me=$0
dir=$(dirname $0)
base=$(basename $0)
if [ $# -lt 1 ]
then
case $base in
dgraph)
# Run Dgraph zero
# And in another, run ratel (Dgraph UI)
# In another terminal, now run Dgraph alpha
for option in zero ratel alpha
do
#echo $dir $option
open -a terminal.app $dir/$option
# wait a bit
sleep 2
done
;;
alpha) alpha;;
ratel) ratel;;
zero) zero;;
esac
fi
# commandline option
while [ "$1" != "" ]
do
option="$1"
case $option in
alpha) alpha;;
ratel) ratel;;
zero) zero;;
-b|--bash) bashInto;;
-k|--kill) stopImage;;
-h|--help) usage;;
esac
shift
done
https://github.com/WolfgangFahl/DgraphAndWeaviateTest/blob/master/scripts/dgraph now has a script working on Linux (tested with travis/Ubuntu 18.04 LTS - bionic).
See also https://travis-ci.org/github/WolfgangFahl/DgraphAndWeaviateTest/jobs/715131236
usage
scripts/dgraph -h
scripts/dgraph [-b|--bash|-h|--help|-k|--kill|-p|--pull]
-b | --bash: start a bash terminal shell within the currently running container
-h | --help: show this usage
-k | --kill: stop the docker image
-p | --pull: pull the docker image
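A typical invocation sequence, assuming the script lives under scripts/ as in the repository above:
scripts/dgraph --pull    # pull the dgraph/dgraph:v20.03.0 image once
scripts/dgraph           # no arguments: starts zero, ratel and alpha in sequence
scripts/dgraph --kill    # stop and remove the dgraph container when finished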
dgraph
#!/bin/bash
# WF 2020-08-05
#
# Starts zero alpha and ratel docker based with one script
#
# see https://dgraph.io/tour/intro/2/
# see https://stackoverflow.com/questions/63260073/starting-zero-alpha-and-ratel-in-a-single-command-e-g-in-macosx-and-other-envir
# see https://discuss.dgraph.io/t/dgraph-start-script/9231
version=v20.03.0
# interactive tty option for docker
it="-it"
os=$(uname)
case $os in
Linux)
docker="docker"
it="";;
#docker="sudo docker";;
Darwin)
docker="docker";;
*)
docker="docker";;
esac
#ansi colors
#http://www.csc.uvic.ca/~sae/seng265/fall04/tips/s265s047-tips/bash-using-colors.html
blue='\033[0;34m'
red='\033[0;31m'
green='\033[0;32m' # '\e[1;32m' is too bright for white bg.
endColor='\033[0m'
#
# a colored message
# params:
# 1: l_color - the color of the message
# 2: l_msg - the message to display
#
color_msg() {
local l_color="$1"
local l_msg="$2"
echo -e "${l_color}$l_msg${endColor}"
}
#
# error
#
# show the given error message on stderr and exit
#
# params:
# 1: l_msg - the error message to display
#
error() {
local l_msg="$1"
# use ansi red for error
color_msg $red "Error:" 1>&2
color_msg $red "\t$l_msg" 1>&2
exit 1
}
# show usage
#
usage() {
echo "$0 [-h|--help|-k|--kill"
echo ""
echo "-b | --bash: start a bash terminal shell within the currently running container"
echo "-h | --help: show this usage"
echo "-k | --kill: stop the docker image"
echo "-p | --pull: pull the docker image"
exit 1
}
#
# stop the docker image
#
stopImage() {
color_msg $blue "stopping and removing dgraph image ..."
$docker stop dgraph
$docker rm dgraph
color_msg $green "...done"
}
#
# pull the docker image
#
pullImage() {
color_msg $blue "pulling dgraph image $version ..."
$docker pull dgraph/dgraph:$version
color_msg $green "...done"
}
#
# start a bash shell within the currently running container
#
bashInto() {
$docker exec -it dgraph bash
}
#
# dgraph zero
#
zero() {
# Let’s create a folder for storing Dgraph data outside of the container:
mkdir -p ~/dgraph
$docker run $it -p 5080:5080 -p 6080:6080 -p 8080:8080 \
-p 9080:9080 -p 8000:8000 -v ~/dgraph:/dgraph --name dgraph \
dgraph/dgraph:$version dgraph zero
}
#
# dgraph alpha
#
alpha() {
$docker exec $it dgraph dgraph alpha --lru_mb 2048 --zero localhost:5080 --whitelist 0.0.0.0/0
}
#
# dgraph ratel
#
ratel() {
$docker exec $it dgraph dgraph-ratel
}
me=$0
dir=$(dirname $0)
base=$(basename $0)
if [ $# -lt 1 ]
then
case $base in
dgraph)
# Run Dgraph zero
# And in another, run ratel (Dgraph UI)
# In another terminal, now run Dgraph alpha
for option in zero ratel alpha
do
# make sure linked versions of command are available
if [ ! -f $dir/$option ]
then
color_msg $blue "creating link $dir/$option"
ln $dir/dgraph $dir/$option
fi
#echo $dir $option
color_msg $blue "starting dgraph $option ..."
case $os in
Darwin)
open -a terminal.app $dir/$option
;;
# https://askubuntu.com/questions/46627/how-can-i-make-a-script-that-opens-terminal-windows-and-executes-commands-in-the
Linux)
#for terminal in gnome-terminal xterm konsole
#do
# which $terminal > /dev/null
# if [ $? -eq 0 ]
# then
# $terminal -e $dir/$option
# break
# fi
#done
nohup $dir/$option > /tmp/$option.log 2>&1 &
sleep 2
tail /tmp/$option.log
;;
*)
error "unsupported operating system $os"
esac
# wait a bit
sleep 2
done
;;
alpha) alpha;;
ratel) ratel;;
zero) zero;;
esac
fi
# commandline option
while [ "$1" != "" ]
do
option="$1"
case $option in
alpha) alpha;;
ratel) ratel;;
zero) zero;;
-p|--pull) pullImage;;
-b|--bash) bashInto;;
-k|--kill) stopImage;;
-h|--help) usage;;
esac
shift
done

AWS WordPress with EFS

I'm building an Auto Scaling WordPress environment but have a question about EFS while setting up a CloudFormation template. Am I supposed to mount the EFS on top of the existing WP directory, i.e. /var/www/html, or copy the WordPress files to the EFS and then mount it to /var/www/html?
So: WordPress backed by EFS.
First you create the EFS and mount it. After that, a cron job copies all the WordPress files from /var/www/html to the mounted directory.
creating EFS
option_settings:
  aws:elasticbeanstalk:customoption:
    EFSVolumeName: "EFS_Wordpress"
    VPCId: "vpc-xxxx"
    ## Subnet Options
    SubnetA: "subnet-xxxx"
    SubnetB: "subnet-xxxx"
    SubnetC: "subnet-xxxx"
  aws:elasticbeanstalk:application:environment:
    FILE_SYSTEM_ID: '`{"Ref" : "FileSystem"}`'
    MOUNT_DIRECTORY: '/wpfiles'
    REGION: '`{"Ref": "AWS::Region"}`'
Resources:
  ## Mount Target Resources
  MountTargetA:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: {Ref: FileSystem}
      SecurityGroups:
        - {Ref: MountTargetSecurityGroup}
      SubnetId:
        Fn::GetOptionSetting: {OptionName: SubnetA}
  MountTargetB:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: {Ref: FileSystem}
      SecurityGroups:
        - {Ref: MountTargetSecurityGroup}
      SubnetId:
        Fn::GetOptionSetting: {OptionName: SubnetB}
  MountTargetC:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: {Ref: FileSystem}
      SecurityGroups:
        - {Ref: MountTargetSecurityGroup}
      SubnetId:
        Fn::GetOptionSetting: {OptionName: SubnetC}
  ##############################################
  #### Do not modify values below this line ####
  ##############################################
  FileSystem:
    Type: AWS::EFS::FileSystem
    Properties:
      FileSystemTags:
        - Key: Name
          Value:
            Fn::GetOptionSetting: {OptionName: EFSVolumeName, DefaultValue: "EFS_Wordpress"}
  MountTargetSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for mount target
      SecurityGroupIngress:
        - FromPort: '2049'
          IpProtocol: tcp
          SourceSecurityGroupId:
            Fn::GetAtt: [AWSEBSecurityGroup, GroupId]
          ToPort: '2049'
      VpcId:
        Fn::GetOptionSetting: {OptionName: VPCId}
mounting EFS
container_commands:
  1chown:
    command: "chown webapp:webapp /wpfiles"
  2create:
    command: "sudo -u webapp mkdir -p wp-content/uploads"
  3link:
    command: "sudo -u webapp ln -s /wpfiles wp-content/uploads"
option_settings:
  aws:elasticbeanstalk:application:environment:
    FILE_SYSTEM_ID: '`{"Ref" : "FileSystem"}`'
    MOUNT_DIRECTORY: '/wpfiles'
    REGION: '`{"Ref": "AWS::Region"}`'
packages:
  yum:
    nfs-utils: []
    jq: []
commands:
  01_mount:
    command: "/tmp/mount-efs.sh"
files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/bin/bash
      EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.REGION')
      EFS_MOUNT_DIR=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.MOUNT_DIRECTORY')
      EFS_FILE_SYSTEM_ID=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.FILE_SYSTEM_ID')
      echo "Mounting EFS filesystem ${EFS_FILE_SYSTEM_ID} to directory ${EFS_MOUNT_DIR} ..."
      echo 'Stopping NFS ID Mapper...'
      service rpcidmapd status &> /dev/null
      if [ $? -ne 0 ] ; then
          echo 'rpc.idmapd is already stopped!'
      else
          service rpcidmapd stop
          if [ $? -ne 0 ] ; then
              echo 'ERROR: Failed to stop NFS ID Mapper!'
              exit 1
          fi
      fi
      echo 'Checking if EFS mount directory exists...'
      if [ ! -d ${EFS_MOUNT_DIR} ]; then
          echo "Creating directory ${EFS_MOUNT_DIR} ..."
          mkdir -p ${EFS_MOUNT_DIR}
          if [ $? -ne 0 ]; then
              echo 'ERROR: Directory creation failed!'
              exit 1
          fi
      else
          echo "Directory ${EFS_MOUNT_DIR} already exists!"
      fi
      mountpoint -q ${EFS_MOUNT_DIR}
      if [ $? -ne 0 ]; then
          echo "mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 ${EFS_FILE_SYSTEM_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}"
          mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 ${EFS_FILE_SYSTEM_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}
          if [ $? -ne 0 ] ; then
              echo 'ERROR: Mount command failed!'
              exit 1
          fi
          chmod 777 ${EFS_MOUNT_DIR}
          runuser -l ec2-user -c "touch ${EFS_MOUNT_DIR}/it_works"
          if [[ $? -ne 0 ]]; then
              echo 'ERROR: Permission Error!'
              exit 1
          else
              runuser -l ec2-user -c "rm -f ${EFS_MOUNT_DIR}/it_works"
          fi
      else
          echo "Directory ${EFS_MOUNT_DIR} is already a valid mountpoint!"
      fi
      echo 'EFS mount complete.'
Copying files to Mount Directory
files:
  "/tmp/wpcopysymlink.sh":
    mode: "000755"
    content: |
      #!/bin/bash
      ## ebextensions: check whether the symlink exists and WordPress is already installed; if not, copy it to EFS
      echo "Time: $(date). Checking whether WordPress is already in EFS..."
      if [ ! -d /wpfiles/wp-admin ]; then
          echo "WordPress isn't installed; copying the base install to the EFS shared directory /wpfiles ..."
          cp -r /var/app/current/* /wpfiles
          if [ $? -ne 0 ]; then
              echo 'ERROR: Directory copy failed!'
              exit 1
          fi
      else
          echo "WordPress is already there: /wpfiles/wp-admin already exists!"
      fi
      echo 'Checking whether the symlink from the app dir to EFS exists...'
      if [ -L /var/app/current ] ; then
          echo "Good link, so you're good to go"
      else
          echo "No link, so removing the directory and creating the symlink to EFS in its place"
          rm -rf /var/app/current
          ln -s /wpfiles /var/app/current
      fi
      echo "Time: $(date). All done for EFS"
Now you can add a cron job that runs every 5 minutes and copies the files to the mount directory; a sketch is shown below.
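A minimal .ebextensions sketch of such a job, assuming the copy script above is installed as /tmp/wpcopysymlink.sh (the /etc/cron.d file name and log path are placeholders):
files:
  "/etc/cron.d/wp_efs_copy":
    mode: "000644"
    content: |
      */5 * * * * root /tmp/wpcopysymlink.sh >> /var/log/wp_efs_copy.log 2>&1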
For reference: https://github.com/karan6190/WordpressAutoScalable-EFS

PID increments automatically

I'm writing an init script for a microservice and have the problem that the PID the program prints out (via echo) is not the PID the process actually has. The code:
#!/bin/bash
### BEGIN INIT INFO
# Provides: temp
# Description: temp
# required-start: $local_fs $remote_fs $network $syslog
# required-stop: $local_fs $remote_fs $network $syslog
# default-start: 3 5
# default-stop: 0 1 2 6
# chkconfig: 35 99 1
# description: Microservice init-script
### END INIT INFO
START_SCRIPT=${applicationDirectory}/script/start.sh
STOP_SCRIPT=${applicationDirectory}/script/stop.sh
PID_FILE=${runDirectory}/${microserviceName}_${environment}_${servicePort}
# ***********************************************
# ***********************************************
DAEMON=$START_SCRIPT
# colors
red='\e[0;31m'
green='\e[0;32m'
yellow='\e[0;33m'
reset='\e[0m'
echoRed() { echo -e "${red}$1${reset}"; }
echoGreen() { echo -e "${green}$1${reset}"; }
echoYellow() { echo -e "${yellow}$1${reset}"; }
start() {
#PID=`bash ${START_SCRIPT} > /dev/null 2>&1 & echo $!`
PID=`$DAEMON $ARGS > /dev/null 2>&1 & echo $!`
}
stop() {
STOP_SCRIPT $1
}
case "$1" in
start)
if [ -f $PID_FILE ]; then
PID=`cat $PID_FILE`
if [ -z "`echo kill -0 ${PID}`" ]; then
echoYellow "Microservice is already running [$PID]."
exit 1
else
rm -f $PID_FILE
fi
fi
start
if [ -z $PID ]; then
echoRed "Failed starting microservice."
exit 3
else
echo $PID > $PID_FILE
echoGreen "Microservice successfully started [$PID]."
exit 0
fi
;;
status)
if [ -f $PID_FILE ]; then
PID=`cat $PID_FILE`
if [ ! -z "`echo kill -0 ${PID}`" ]; then
echoRed "Microservice is not running (process dead but pidfile exists)."
exit 1
else
echoGreen "Microservice is running [$PID]."
exit 0
fi
else
echoRed "Microservice is not running."
exit 3
fi
;;
stop)
if [ -f $PID_FILE ]; then
PID=`cat $PID_FILE`
if [ ! -z "`echo kill -0 ${PID}`" ]; then
echoRed "Microservice is not running (process dead but pidfile exists)."
exit 1
else
PID=`cat $PID_FILE`
stop $PID
echoGreen "Microservice successfully stopped [$PID]."
rm -f $PID_FILE
exit 0
fi
else
echoRed "Microservice is not running (pid not found)."
exit 3
fi
;;
*)
echo "Usage: $0 {status|start|stop}"
exit 1
esac
Now, the program gives for example 2505 as the PID. But when I use
ps aux | grep trans | grep -v grep
it outputs a number that is the previously printed number + 1.
Can anyone take a guess? Any help is appreciated!
Your PID variable gets the PID of the shell that executes start.sh. The actual program executed by the script gets a different PID.
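A common way to avoid that off-by-one is to make start.sh replace itself with the program via exec, so the PID captured with $! in the init script is the program's own PID. A minimal sketch, with a placeholder binary path:
#!/bin/bash
# start.sh: exec replaces this wrapper shell with the service process,
# so no extra child PID is created
exec /opt/microservice/bin/microservice "$@"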

Starting multiple tomcat instances in one server with init.d script

I'm trying to configure a Tomcat init.d start script to work with multiple instances (at this time, 2 instances).
I'm following the sample script below to create the init.d script:
#!/bin/bash
#
# tomcat This shell script takes care of starting and stopping Tomcat
#
# chkconfig: - 80 20
#
### BEGIN INIT INFO
# Provides: tomcat
# Required-Start: $network $syslog
# Required-Stop: $network $syslog
# Default-Start:
# Default-Stop:
# Short-Description: start and stop tomcat
### END INIT INFO
TOMCAT_USER=root
TOMCAT_HOME="/opt/tomcat7/node1"
SHUTDOWN_WAIT=45
tomcat_pid() {
echo `ps aux | grep org.apache.catalina.startup.Bootstrap | grep -v grep | awk '{ print $2 }'`
}
start() {
pid=$(tomcat_pid)
if [ -n "$pid" ]
then
echo "Tomcat is already running (pid: $pid)"
else
# Start tomcat
echo "Starting tomcat service"
/bin/su - -c "cd $TOMCAT_HOME/bin && $TOMCAT_HOME/bin/startup.sh" $TOMCAT_USER
fi
return 0
}
stop() {
pid=$(tomcat_pid)
if [ -n "$pid" ]
then
echo "Stoping Tomcat"
/bin/su - -c "cd $TOMCAT_HOME/bin && $TOMCAT_HOME/bin/shutdown.sh" $TOMCAT_USER
let kwait=$SHUTDOWN_WAIT
count=0
count_by=5
until [ `ps -p $pid | grep -c $pid` = '0' ] || [ $count -gt $kwait ]
do
echo "Waiting for processes to exit. Timeout before we kill the pid: ${count}/${kwait}"
sleep $count_by
let count=$count+$count_by;
done
if [ $count -gt $kwait ]; then
echo "Killing processes which didn't stop after $SHUTDOWN_WAIT seconds"
kill -9 $pid
fi
else
echo "Tomcat is not running"
fi
return 0
}
case $1 in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
pid=$(tomcat_pid)
if [ -n "$pid" ]
then
echo "Tomcat is running with pid: $pid"
else
echo "Tomcat is not running"
fi
;;
esac
exit 0
The problem is that the tomcat_pid() function returns the process IDs of all Tomcat instances, and because of that the second instance cannot be started. Is there a better way to handle this?
I found a workaround, but am hoping for a better solution.
Using netstat we can find the process ID via the port it is listening on:
echo `netstat -tlnp | awk '/:80 */ {split($NF,a,"/"); print a[1]}'`
So I modified the tomcat_pid() function as below:
tomcat_pid() {
echo `netstat -tlnp | awk '/:<port> */ {split($NF,a,"/"); print a[1]}'`
}
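Another option, assuming each instance is started from its own TOMCAT_HOME (so that path appears in the java command line via -Dcatalina.home/-Dcatalina.base), is to filter the ps output by that path instead of by port:
tomcat_pid() {
    # only match the Bootstrap process whose command line contains this instance's home
    echo `ps aux | grep org.apache.catalina.startup.Bootstrap | grep "$TOMCAT_HOME" | grep -v grep | awk '{ print $2 }'`
}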

Unable to pass variable from shell script to perl script

I have a shell script which passes some variables to a Perl script named Deploy.pl, but the Perl script does not seem to pick one of them up. I have been trying to find the cause but am unable to resolve it. The variables are passed to the Perl script properly except for the $entname variable. I use that same variable in my copy statement, but as it is not picked up by the Perl script I get a "cannot find path" error. Please have a look at both the shell script and the Perl script. I know it's insane to post such a long script, but I want to give a clear idea of what is happening.
Shell Script:
#!/bin/bash
sleep 1
echo "##### Please Read the following information carefully ####"
sleep 2
echo "Please read this preamble in order to run the script. It states the adamant requirement that user needs to have while executing the script."
echo " "
echo "1. The script requires the user name(i.e. TIBID ) which should have SVN access in order to execute it successfully. Example: ./deploy.sh tib7826"
echo " "
echo "where tib7826 is the user who has SVN access. Make sure the tibid you are using should have full SVN access."
echo " "
echo "2. The script further requires the Internal name as input. MDM creates a directory in the MQ_COMMON_DIR with internal name.It is"
echo " "
echo "case-sensitive and should be exact as what it is there in MQ_COMMON_DIR."
echo " "
echo "3. Further it asks for envoirnment name. The Environment name should be like DEV1,DEV2,DEV3,TEST1,TEST2 etc.Make sure they too are case specific."
echo "Otherwise it will fail to execute further steps."
echo " "
echo " 4. The script requires CATALOG ID's as a input for the below 4 repositories"
echo " "
echo " a.ITEM_MASTER"
echo " "
echo " b.DISTRIBUTION_FACILITY_LV"
echo " "
echo " c.MANAGEMENT_DIVISION_SOI"
echo " "
echo " d.ALTERNATE IDENTIFICATIONS"
echo " "
echo "You will get those ID's from MDM Web UI. Login to the MDM UI. Go to Item Data==>Repositories==>ID Column."
echo " "
echo "5. For more detail read the Readme.txt in order to execute this script.Take it from the location"
echo " "
echo "/tibco/data/GRISSOM2/build_deploy_scripts_kroger/document"
echo " "
echo " Or else take the ReadMe form SVN."
echo " "
echo "If you agree to execute the script press Y else N."
read uinput
if [ $uinput == 'Y' ]; then
echo "Script will execute!!"
sleep 3
else
echo "You have Cancel the Agreement"
exit
fi
# sample command to execute the deploy script ./deploy.sh tib7777
# export the build root
export BUILD_ROOT=/tibco/data/GRISSOM2
# CUSTOM env variables are temporary and set directly in the script to execute custom tasks
export CUSTOM1=/tibco/data/GRISSOM2/DEPLOYMENT_ARTIFACTS/common/MDR_ITEM_E1/rulebase
cd $BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common
echo "--- - - - - - - - - - - - - - - "
echo "Enter your Enterprise INTERNAL NAME:"
sleep 1
read internal_name
sleep 2
echo "Enter Enterprise Name"
read entname
#code to check if the Enterprise with the correct INTERNAL name exists
if [ -d "$MQ_COMMON_DIR/$internal_name" ]; then
echo "Artifacts for the $internal_name will be deployed"
else
echo "THE ENTERPRISE with the $internal_name doesn't seems to be correct INTERNAL NAME. Execute the script again with the correct INTERNAL NAME"
exit
fi
#This snippet will cleanup the existing MDR_ITME_E1 before we get the latest code for MDR_ITME_E1 enterprise from SVN
cd $BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common
if [ -d "$entname" ]; then
rm -rf $BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common/$entname
echo "Removing existing $entname from SVN source directory.."
echo "..."
sleep 1
echo "...."
sleep 1
echo "....."
sleep 1
else
echo "$entname Doesn't Exist for the first time"
fi
echo "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%"
echo "Retrieving Latest Code from SVN..."
sleep 2
echo "Please wait......"
echo "Connecting to SVN."
sleep 1
echo "Connecting to SVN.."
sleep 1
echo "Connecting to SVN..."
sleep 1
echo "Connecting to SVN...."
sleep 1
echo "Connecting to SVN....."
sleep 1
echo "Connecting to SVN......"
sleep 1
echo "Connecting to SVN......."
sleep 1
echo "Connecting to SVN........"
echo "Do you want to checkout the latest version of the COMMON DIR code [Y] or [N]"
read svninput
if [ $svninput == 'Y' ]; then
echo "Downloading SVN Code"
if [ $entname == 'MDR_ITEM_E1' ]; then
echo svn co --username $1 http://svn.kroger.com/repos/mercury/tibcomdm/cim_paris/branches/Grissom2_Development/common/MDR_ITEM_E1
svn co --username $1 http://svn.kroger.com/repos/mercury/tibcomdm/cim_paris/branches/Grissom2_Development/common/MDR_ITEM_E1
sleep 3
else
echo "Copying E2 code"
echo svn co --username $1 http://svn.kroger.com/repos/mercury/tibcomdm/cim_paris/branches/Grissom2_breakfix/common/MDR_ITEM_E2
svn co --username $1 http://svn.kroger.com/repos/mercury/tibcomdm/cim_paris/branches/Grissom2_breakfix/common/MDR_ITEM_E2
fi
else
echo "Enter the revision number of the common directory"
read revision
if [ $entname == 'MDR_ITEM_E1' ]; then
svn co --username $1 -r $revision http://svn.kroger.com/repos/mercury/tibcomdm/cim_paris/branches/Grissom2_Development/common/MDR_ITEM_E1
else
echo "E2 Code"
svn co --username $1 -r $revision http://svn.kroger.com/repos/mercury/tibcomdm/cim_paris/branches/Grissom2_breakfix/common/MDR_ITEM_E2
fi
echo "Loaded code for Enterprise"
fi
if [ -d $BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common/$entname ] ; then
echo "Downloaded latest code from SVN!!"
else
echo "CODE has not been downloaded. Please check your credentials."
exit
fi
#echo svn co --username $1 http://svn.kroger.com/repos/mercury/tibcomdm/cim_paris/branches/Grissom2_Development/common/MDR_ITEM_E1
#svn co --username $1 http://svn.kroger.com/repos/mercury/tibcomdm/cim_paris/branches/Grissom2_Development/common/MDR_ITEM_E1
echo "========================================"
echo "Taking destination organization backup...wait"
cd $MQ_COMMON_DIR/$internal_name
chmod -Rf 777 $MQ_COMMON_DIR/$internal_name
sleep 2
cp -rf /$MQ_COMMON_DIR/$internal_name /$MQ_COMMON_DIR/$internal_name$( date +%d%m%Y%H%M )
echo "backup done!!"
chmod -Rf 775 $MQ_COMMON_DIR/$internal_name *
echo "========================================"
#Removing contents inside the workflow,forms,maps,rules,rulebase,schema,templates,inputmap
sleep 2
echo "Removing contents inside workflow,inputmap,rules......"
echo "."
sleep 1
echo ".."
sleep 1
echo "..."
sleep 1
if [ -d "$MQ_COMMON_DIR/$internal_name" ]; then
# cleanup the enterprise internal directories
rm -rf $MQ_COMMON_DIR/$internal_name/workflow/*
rm -rf $MQ_COMMON_DIR/$internal_name/forms/*
rm -rf $MQ_COMMON_DIR/$internal_name/rulebase/*
rm -rf $MQ_COMMON_DIR/$internal_name/maps/*
rm -rf $MQ_COMMON_DIR/$internal_name/templates/*
else
echo "THE ENTERPRISE with the $internal_name internal name does not exist. Execute the script with the correct INTERNAL NAME"
exit
fi
sleep 2
echo "The following folders[workflow, forms, rulebase, maps, templates] have been cleaned up in the Enterprise common directory"
cd $MQ_COMMON_DIR/$internal_name
echo "Inside internal enterprise"
echo "-------------"
echo "InputMap,Schema,Scheduler,Distributedlock...Creating!!"
sleep 2
if [ -d "inputmap" ] && [ -d "schema" ] && [ -d "scheduler" ] && [ -d "distributedlock" ]; then
echo "Copying the directory structure"
echo "inputmap, schema,scheduler,distributedlock exists!!"
else
mkdir inputmap
mkdir schema
mkdir scheduler
mkdir distributedlock
chmod 775 *.*
sleep 1
echo "Required additional directories have been created!"
fi
#Enter Environment Name:
echo "Options: Which Enviroment you want to Deploy 1.DEV1 2.DEV2 3.DEV3 4.TEST1 5.TEST2 6.E2E 7.STAGE 8.PRODUCTION"
echo "Enter Environment Name:"
read env_name
if [ $env_name == DEV1 -o $env_name == DEV2 -o $env_name == DEV3 -o $env_name == TEST1 -o $env_name == TEST2 -o $env_name == E2E -o $env_name == STAGE -o $env_name == PRODUCTION ] ; then
echo "Running"
else
echo "You Entered wrong Environment Name!! Enter correct environment name and run the script again."
exit
fi
#Input catalog ID's==specific to E1 enterprise code only
if [ $entname == 'MDR_ITEM_E1' ]; then
echo "Enter Catalog ID's to copy the CV and Catalogvalidation files"
echo "Enter Catalog ID For ITEM_MASTER"
read item
if [ -d "$MQ_COMMON_DIR/$internal_name/catalog/master/$item" ]; then
echo "renaming existing catalogvalidation.xml as a backup copy"
cd $MQ_COMMON_DIR/$internal_name/catalog/master/$item
mv catalogvalidation.xml catalogvalidation.xml$( date +%d%m%Y%H%M )
else
echo "Either directory or file does not exist"
fi
echo "Enter Catalog ID For DISTRIBUTION_FACILITY_LV"
read dflv
if [ -d "$MQ_COMMON_DIR/$internal_name/catalog/master/$dflv" ]; then
echo "renaming existing catalogvalidation.xml as a backup copy"
cd $MQ_COMMON_DIR/$internal_name/catalog/master/$dflv
mv catalogvalidation.xml catalogvalidation.xml$( date +%d%m%Y%H%M )
else
echo "Either directory or file does not exist"
fi
echo "Enter Catalog ID For MANAGEMENT_DIVISION_SOI"
read mds
if [ -d "$MQ_COMMON_DIR/$internal_name/catalog/master/$mds" ]; then
echo "renaming existing catalogvalidation.xml as a backup copy"
cd $MQ_COMMON_DIR/$internal_name/catalog/master/$mds
mv catalogvalidation.xml catalogvalidation.xml$( date +%d%m%Y%H%M )
else
echo "Either directory or file does not exist"
fi
echo "Enter Catalog ID For ALTERNATE_IDENTIFICATION_MVL"
read aim
if [ -d "$MQ_COMMON_DIR/$internal_name/catalog/master/$aim" ]; then
echo "renaming existing catalogvalidation.xml as a backup copy"
cd $MQ_COMMON_DIR/$internal_name/catalog/master/$aim
mv catalogvalidation.xml catalogvalidation.xml$( date +%d%m%Y%H%M )
else
echo "Either directory or file does not exist"
fi
echo "Copying CV Files"
cp -rf $BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common/MDR_ITEM_E1/catalog/master/34731_ITEM_MASTER/cv_* $MQ_COMMON_DIR/$internal_name/rulebase/
cp -rf $BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common/MDR_ITEM_E1/catalog/master/34800_DISTRIBUTION_FACILITY_LV/cv_* $MQ_COMMON_DIR/$internal_name/rulebase/
cp -rf $BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common/MDR_ITEM_E1/catalog/master/34800_DISTRIBUTION_FACILITY_LV/DISTRIBUTION_FACILITY_LV.xml $MQ_COMMON_DIR/$internal_name/rulebase/
cp -rf $BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common/MDR_ITEM_E1/catalog/master/34801_MANAGEMENT_DIVISION_SOI/cv_* $MQ_COMMON_DIR/$internal_name/rulebase/
#cp -rf $BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common/MDR_ITEM_E1/catalog/master/ALTERNATE_IDENTIFICATION_MVL/cv_* $MQ_COMMON_DIR/$internal_name/rulebase/
sleep 3
echo "....."
sleep 1
echo "......."
sleep 1
echo "........."
echo "Copied CV files"
#Copying E2 Specific Code--Customized Files
else
#Copy E2 files
sleep 3
echo "Running the Deploy.pl..."
sleep 4
fi
# run the deploy script
cd $BUILD_ROOT/build_deploy_scripts_kroger/deploy_script
./Deploy.pl $env_name $item $dflv $mds $aim MDR_ITEM_E2 $internal_name
# custom code changes
# custom1 to change the rulebase URLs
echo " "
echo " "
echo "********************"
echo "========================================"
if [ $entname == 'MDR_ITEM_E1' ]; then
echo "Copying schema for $internal_name internal enterprise name"
if [ $env_name == DEV3 -o $env_name == TEST2 -o $env_name == DEV2 ]; then
echo "Copying $env_name schema"
cd /tibco/mdm/8.3/common/$internal_name/schema/
rm *.*
cp /tibco/data/GRISSOM2/DEPLOYMENT_ARTIFACTS/common/MDR_ITEM_E1/schema/TEST2/* /tibco/mdm/8.3/common/$internal_name/schema/
else
if [ $env_name == E2E ]; then
echo "Copying schema for $env_name environment!!"
cd /tibco/mdm/8.3/common/MDRITME1/schema/
rm *.*
cp /tibco/data/GRISSOM2/DEPLOYMENT_ARTIFACTS/common/MDR_ITEM_E1/schema/E2E/* /tibco/mdm/8.3/common/$internal_name/schema/
else
echo "Incorrect environment name"
exit
fi
fi
echo "========================================="
else
echo "E2 code is deploying..."
fi
if [ $entname == 'MDR_ITEM_E1' ]; then
echo "Do you want to copy DropZone press Y to continue else N"
read dr
if [ "$dr" == 'Y' ]; then
echo "Copying DropZone"
cp -rf /tibco/data/GRISSOM2/DEPLOYMENT_ARTIFACTS/common/MDR_ITEM_E1/DropZone/* $MQ_COMMON_DIR/$internal_name/DropZone/
echo "Copied DropZone"
sleep 1
else
echo "either folder doesn't exist in $internal_name or You have cancelled the copy opeeation for DropZone"
fi
echo "Do you want to copy EAI press Y to continue else N"
read eai
if [ "$eai" == 'Y' ]; then
echo "Copying EAI"
cp -rf /tibco/data/GRISSOM2/DEPLOYMENT_ARTIFACTS/common/MDR_ITEM_E1/EAI/* $MQ_COMMON_DIR/$internal_name/EAI/
echo "Copyied EAI"
sleep 1
else
echo "either folder doesn't exist in $internal_name or You have cancelled the copy opeeation for DropZone"
fi
else
echo "#$%#$%#$%#$%#$%&*^&*##$"
fi
cd $MQ_COMMON_DIR/$internal_name
echo "****** The following directories have been deployed to your Enterprise Internal directory ********"
echo " "
echo "======================================"
ls | tee -a
echo "======================================="
#change the permissions back to standard on the internal directory
#chmod -Rf 775 $MQ_COMMON_DIR/$internal_name
cd $MQ_COMMON_DIR/$internal_name
chmod -Rf 644 *
cd $MQ_COMMON_DIR/$internal_name
find . -type d -exec chmod 0755 {} \;
cd $MQ_COMMON_DIR/$internal_name
chmod -Rf 777 EAI
chmod -Rf 777 DropZone
chmod -Rf 777 distributedlock
echo "Permissions changed!"
echo "========================================"
echo " "
echo " "
echo " "
echo "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%"
echo "Deployed resources successfully"
echo "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%"
echo " "
echo " "
exit
Perl script:
#!/usr/bin/perl
use CIMDeploymentVaribles;
&main;
sub main()
{
print "\n Deployment Script \n\n\n";
#print "Options: Which Enviroment you want to Deploy\n 1.DEV1\n 2.DEV2\n 3.TEST1\n 4.TEST2\n 5.STAGE\n 6.PRODUCTION \n\n";
$dep_env = $ARGV[0];
$item_master = $ARGV[1];
$distribution_facility_lv = $ARGV[2];
$management_division_soi = $ARGV[3];
$altername_identification_mvl = $ARGV[4];
$entname = $ARGV[5];
$internal_name = $ARGV[6];
$SVN_COMMON_DIR_LOCATION = "/tibco/data/GRISSOM2/DEPLOYMENT_ARTIFACTS";
$SCRIPT_LOCATION = "$SVN_COMMON_DIR_LOCATION/build_deploy_scripts_kroger/deploy_script";
if ($dep_env eq 'DEV1' || $dep_env eq 'DEV2'|| $dep_env eq 'TEST1' || $dep_env eq 'TEST2' || $dep_env eq 'STAGE'|| $dep_env eq 'PRODUCTION' || $dep_env eq 'LOCALDEV1' || $dep_env eq 'MAHANADI'|| $dep_env eq 'DEV3' || $dep_env eq 'E2E')
{
print "\n Deployment on Environment:: $dep_env\n";
&Set_Environment_Variables($dep_env);
&display_variables($dep_env);
&Deploy_Common_dir_artifacts($dep_env);
}
else
{
print "\nWrong argument is passed\n";
print " \n Provide valid argument: DEV1 or DEV2 or TEST1 or TEST2 or STAGE or PRODUCTION\n\n";
exit;
}
}
sub display_variables()
{
print "\n Displaying Variables Start\n";
print "\n MQ_HOME on $_[0] :: $ENV_Variables{\"$_[0]\"}{'MQ_HOME'}\n";
print "\n MQ_COMMON_DIR on $_[0] :: $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}\n";
print "\n ENTERPRISE_INTERNAL_NAME in $_[0] :: $ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}\n";
print "\n SCRIPT_LOCATION in $SCRIPT_LOCATION\n";
print "\n SVN_COMMON_DIR_LOCATION in $SVN_COMMON_DIR_LOCATION\n";
print "\n Displaying Variables Ended \n";
}
sub Set_Environment_Variables()
{
print "\n Setting up environment Variables\n";
$ENV{"MQ_HOME"}="$ENV_Variables{\"$_[0]\"}{'MQ_HOME'}";
$ENV{"MQ_COMMON_DIR"}="$ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}";
print "\nEnvironment Variables are set\n";
}
sub Deploy_Common_dir_artifacts()
{
print "\n Deploying common dir aftifacts for $_[0] environment \n";
print "\nDeploying common dir artifacts to specific enterprise ( $ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'} ) \n";
print "Command: rm -rf `find $SVN_COMMON_DIR_LOCATION -type d -name .svn`";
system("rm -rf `find $SVN_COMMON_DIR_LOCATION -type d -name .svn`");
# Deploying forms
print "\n 1. Deploying forms \n";
print "\nExecuting::: cp $SVN_COMMON_DIR_LOCATION/common/$entname/forms/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/forms/\n";
system("cp $SVN_COMMON_DIR_LOCATION/common/$entname/forms/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/forms/");
# Deploying maps
print "\n 2. Deploying maps \n";
print "\nExecuting::: cp $SVN_COMMON_DIR_LOCATION/common/$entname/maps/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/maps/ \n";
system("cp $SVN_COMMON_DIR_LOCATION/common/$entname/maps/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/maps/");
# Deploying rulebase
print "\n 3. Deploying rulebase \n";
print "\nExecuting::: cp -r $SVN_COMMON_DIR_LOCATION/common/$entname/rulebase/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/rulebase/\n";
system("cp -r $SVN_COMMON_DIR_LOCATION/common/$entname/rulebase/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/rulebase/");
print "\n\n Executing ::: find $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/rulebase/* -type f -exec sed -i 's/$internal_name/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/' {} \\;";
system("find $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/rulebase/* -type f -exec sed -i 's/$internal_name/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/' {} \\;");
# Deploying workflow
print "\n 4. Deploying workflow \n";
print "\nExecuting::: cp -r $SVN_COMMON_DIR_LOCATION/common/$entname/workflow/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/workflow/\n";
system("cp -r $SVN_COMMON_DIR_LOCATION/common/$entname/workflow/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/workflow/");
print "\n\n Executing ::: find $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/workflow/* -type f -exec sed -i 's/MDR_NEW/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/' {} \\;";
system("find $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/workflow/* -type f -exec sed -i 's/$internal_name/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/' {} \\;");
# Deploying htmlprops
print "\n 6. Deploying htmlprops \n";
# print "\nExecuting::: cp $SVN_COMMON_DIR_LOCATION/common/$entname/htmlprops/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/htmlprops/\n";
#system("cp -rf $SVN_COMMON_DIR_LOCATION/common/$entname/htmlprops/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/htmlprops/");
# Deploying templates
print "\n 7. Deploying Templates \n";
print "\nExecuting::: cp $SVN_COMMON_DIR_LOCATION/common/$entname/templates/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/templates/\n";
system("cp $SVN_COMMON_DIR_LOCATION/common/$entname/templates/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/templates/");
# Deploying Schedular
print "\n 8. Deploying scheduler \n";
print "\nExecuting::: cp -r $SVN_COMMON_DIR_LOCATION/common/$entname/scheduler $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/\n";
#system("cp -r $SVN_COMMON_DIR_LOCATION/common/$entname/scheduler/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/scheduler/");
# Deploying Filewatcher
print "\n 9. Deploying Filewatcher \n";
print "\nExecuting::: cp $SVN_COMMON_DIR_LOCATION/config/FileWatcher.xml $ENV_Variables{\"$_[0]\"}{'MQ_HOME'}/config/\n";
system("cp $SVN_COMMON_DIR_LOCATION/config/FileWatcher.xml $ENV_Variables{\"$_[0]\"}{'MQ_HOME'}/config/");
# Deploying Custom Properties
# print "\n 5. Deploying Custom Properties \n";
# print "\nExecuting::: cp -r $SVN_COMMON_DIR_LOCATION/custom/* $ENV_Variables{\"$_[0]\"}{'MQ_HOME'}/custom/\n";
# system("cp -r $SVN_COMMON_DIR_LOCATION/custom/* $ENV_Variables{\"$_[0]\"}{'MQ_HOME'}/custom/");
# Deploying dynservices
print "\n 10. Deploying dynservices \n";
print "\nExecuting::: cp -r $SVN_COMMON_DIR_LOCATION/dynservices $ENV_Variables{\"$_[0]\"}{'MQ_HOME'}/\n";
#system("cp -r $SVN_COMMON_DIR_LOCATION/dynservices $ENV_Variables{\"$_[0]\"}{'MQ_HOME'}/");
# Deploying InputMap
print "\n 12. Deploying InputMap \n";
print "\nExecuting::: cp -r $SVN_COMMON_DIR_LOCATION/inputmap/* $ENV_Variables{\"$_[0]\"}{'MQ_HOME'}/inputmap/\n";
#system("cp -r $SVN_COMMON_DIR_LOCATION/schema/* $ENV_Variables{\"$_[0]\"}{'MQ_HOME'}/schema/");
system("cp -r $SVN_COMMON_DIR_LOCATION/common/$entname/inputmap/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/inputmap/");
# Deploying DistributeLock
print "\n 13. Deploying DistributedLock \n";
print "\nExecuting::: cp -r $SVN_COMMON_DIR_LOCATION/distributedlock/* $ENV_Variables{\"$_[0]\"}{'MQ_HOME'}/distributedlock/\n";
#system("cp -r $SVN_COMMON_DIR_LOCATION/schema/* $ENV_Variables{\"$_[0]\"}{'MQ_HOME'}/schema/");
system("cp -r $SVN_COMMON_DIR_LOCATION/common/$entname/distributedlock/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/distributedlock/");
}
The failure is here: print "\nExecuting::: cp $SVN_COMMON_DIR_LOCATION/common/$entname/forms/* $ENV_Variables{\"$_[0]\"}{'MQ_COMMON_DIR'}/$ENV_Variables{\"$_[0]\"}{'ENTERPRISE_INTERNAL_NAME'}/forms/\n";
As $entname is not picked up, it does not copy the expected files.
You really need to start quoting your scripts properly:
rm -rf $BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common/$entname
Imagine I enter MDR_ITEM_E1 / as the name. The command would then delete all the files on your disk.
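Quoting would also explain the missing $entname, assuming the failing run was for the E2 enterprise: in that branch $item, $dflv, $mds and $aim are never read, so in the unquoted call ./Deploy.pl $env_name $item $dflv $mds $aim MDR_ITEM_E2 $internal_name the empty variables vanish and every later argument shifts left, leaving $ARGV[5] in the Perl script empty. A sketch of the quoted versions:
# empty variables stay in place as empty arguments instead of disappearing
./Deploy.pl "$env_name" "$item" "$dflv" "$mds" "$aim" MDR_ITEM_E2 "$internal_name"
# and the cleanup only ever touches the intended directory
rm -rf "$BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common/$entname"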