Can I split a large HAProxy config file into multiple smaller files? - haproxy

I'm building an haproxy config file that has multiple frontends and backends. It's going to be several hundred lines long, and I'd rather split it up into separate files for each of the different websites that I want to load-balance.
Does HAProxy offer the ability to link to partial config files from the main haproxy.cfg file?

Configuration files can't be linked together from a configuration directive.
However HAProxy can load multiple configuration files from its command line, using the -f switch multiple times:
haproxy -f conf/http-defaults -f conf/http-listeners -f conf/tcp-defaults -f conf/tcp-listeners
If you want to be flexible about the number of config files, you can even specify a directory like this: -f /etc/haproxy. The files will then be loaded in lexical order, with later files overriding earlier ones.
See the mailing list for an example; it provides links to the documentation. This information can be found in the management guide, not the regular docs.
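A quick illustration of the lexical ordering (the numeric prefixes are just a naming convention, not anything haproxy requires):

```shell
# Numbered prefixes make the load order explicit when -f points at a directory;
# settings in later files override those in earlier ones.
order=$(printf '%s\n' 20-backends.cfg 10-defaults.cfg 30-listeners.cfg | sort)
echo "$order"
```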

Stumbled on this answer, where the author created scripts to imitate nginx's enable/disable-sites functionality. In the haproxy init.d startup script he uses a loop to build the concatenated haproxy -f arguments.
/etc/init.d/haproxy:
EXTRAOPTS=`for FILE in \`find /etc/haproxy/sites-enabled -type l | sort
-n\`; do CONFIGS="$CONFIGS -f $FILE"; done; echo $CONFIGS`
haensite script:
#!/bin/bash
if [[ $EUID -ne 0 ]]; then
echo "You must be a root user" >&2
exit 1
fi
if [ $# -lt 1 ]; then
echo "Invalid number of arguments"
exit 1
fi
echo "Enabling $1..."
cd /etc/haproxy/sites-enabled
ln -s ../sites-available/$1 ./
echo "To activate the new configuration, you need to run:"
echo " /etc/init.d/haproxy restart"
hadissite script:
#!/bin/bash
if [[ $EUID -ne 0 ]]; then
echo "You must be a root user" >&2
exit 1
fi
if [ $# -lt 1 ]; then
echo "Invalid number of arguments"
exit 1
fi
echo "Disabling $1..."
rm -f /etc/haproxy/sites-enabled/$1
echo "To activate the new configuration, you need to run:"
echo " /etc/init.d/haproxy restart"

This solution builds on @stephenmurdoch's answer, which involved passing multiple -f <conf file> arguments to the haproxy executable.
Using the stock CentOS 6.x RPM's included /etc/init.d/haproxy script you can amend it like so:
start() {
$exec -c -q -f $cfgfile $OPTIONS
if [ $? -ne 0 ]; then
echo "Errors in configuration file, check with $prog check."
return 1
fi
echo -n $"Starting $prog: "
# start it up here, usually something like "daemon $exec"
#daemon $exec -D -f $cfgfile -f /etc/haproxy/haproxy_ds.cfg -f /etc/haproxy/haproxy_es.cfg -f /etc/haproxy/haproxy_stats.cfg -p $pidfile $OPTIONS
daemon $exec -D -f $cfgfile $(for i in /etc/haproxy/haproxy_*.cfg;do echo -n "-f $i ";done) -p $pidfile $OPTIONS
retval=$?
echo
[ $retval -eq 0 ] && touch $lockfile
return $retval
}
With the above in place you can then create files such as haproxy_<X>.cfg and haproxy_<Y>.cfg, using whatever names you want. The for loop will include these files in an augmented daemon haproxy ... line if they are present; otherwise only the stock haproxy.cfg file is used.
Make sure your global and defaults sections are defined in the "toplevel" haproxy.cfg file; the haproxy_<...>.cfg files simply need to contain frontends/backends and nothing more.
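A minimal sketch of what one of these split files might contain (all names here are hypothetical):

```
# haproxy_web.cfg -- frontend/backend definitions only;
# global and defaults stay in the toplevel haproxy.cfg
frontend web_in
    bind *:80
    default_backend web_servers

backend web_servers
    server web1 192.168.1.10:8080 check
```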

You can follow these simple steps.
Insert a one-line script (cat /etc/$BASENAME/conf.d/*.cfg > $CFG) in /etc/init.d/haproxy.
Here is the position where you must insert the line:
CFG=/etc/$BASENAME/$BASENAME.cfg
cat /etc/$BASENAME/conf.d/*.cfg > $CFG
[ -f $CFG ] || exit 1
Reload daemon config with systemctl daemon-reload
Make directory mkdir /etc/haproxy/conf.d
Move default haproxy.cfg to conf.d as global.cfg mv /etc/haproxy/haproxy.cfg /etc/haproxy/conf.d/global.cfg
Create your other .cfg file in conf.d directory
Just restart your haproxy service systemctl restart haproxy
NOTE: /etc/haproxy/haproxy.cfg will be automatically created from all files in conf.d/
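A sketch of what the inserted cat line does, exercised in a scratch directory (paths here are throwaway):

```shell
# conf.d fragments are concatenated in lexical order into one haproxy.cfg;
# name the fragment holding global/defaults so it sorts first
# (here "global.cfg" < "site1.cfg")
dir=$(mktemp -d)
mkdir "$dir/conf.d"
printf 'global\n    daemon\n' > "$dir/conf.d/global.cfg"
printf 'frontend f1\n    bind *:80\n' > "$dir/conf.d/site1.cfg"
cat "$dir"/conf.d/*.cfg > "$dir/haproxy.cfg"
head -n 1 "$dir/haproxy.cfg"
```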

@Bapstie's answer mentioned that a directory can be passed to haproxy as the config file, and the files inside will be loaded in alphabetical order. That's correct.
The problem is that the haproxy package in the CentOS 'base/7/x86_64' repository is so old that it does not support that.
So you either need to write a wrapper that appends -f <individual config file> to the command, or you need to install a recent version of haproxy:
for package in centos-release-scl-rh rh-haproxy18-haproxy; do
yum install -y $package
done
and create a drop-in config for haproxy service:
[Service]
ExecStart=
ExecStart=/opt/rh/rh-haproxy18/root/sbin/haproxy -f /etc/haproxy-nutstore/ -p /run/haproxy.pid $OPTIONS

If you use Ansible you can do a trick like this:
- name: haproxy configuration
  copy:
    content: >
      {{ lookup('template', haproxy_cfg_src_top) +
         lookup('template', haproxy_cfg_src_edge) +
         lookup('template', haproxy_cfg_src_bottom) }}
    dest: "{{ haproxy_cfg }}"
    owner: "{{ docker_user }}"
    group: "docker"
    mode: 0664
  register: haproxy_cfg_change


Use a Chef recipe to modify a single line in a config file

I'm trying to automate disabling the Transparent Huge Pages (THP) Settings for MongoDB using a Chef Recipe.
The THP setting is explained here: MongoDocs THP Settings
I'm trying to follow the first option "In Boot-Time Configuration (Preferred)" by editing the grub configuration file at "/etc/grub.conf"
All I need to do is append "transparent_hugepage=never" to the end of the existing line that starts with "kernel "
I know I can replace a line with Chef::Util::FileEdit, using something like this:
ruby_block "replace_line" do
block do
file = Chef::Util::FileEdit.new("/etc/grub.conf")
file.search_file_replace_line("/kernel/", "kernel <kernel path> <kernel options> transparent_hugepage=never")
file.write_file
end
end
but I need to keep the existing kernel path and kernel options.
I've tried playing around with Chef::Util::Editor, but haven't been successful initializing the constructor. Chef::Util::FileEdit is initialized with a file path (per above), but the ruby docs say that Chef::Util::Editor is initialized with "lines". I've tried
lines = Chef::Util::Editor.new(<lines>)
where <lines> was, in turn, a file path, a Chef::Util::FileEdit.new() object, and the literal 'test string', but nothing seems to work.
Does anyone have any experience with the Chef::Util::Editor? Or a better solution?
Thanks
I never figured out how to modify a single line in a config file using Chef, but here's the recipe I ended up using to disable THP settings for MongoDB.
Recipe: Install MongoDB
# Install MongoDB on Amazon Linux
# http://docs.mongodb.org/manual/tutorial/install-mongodb-on-amazon/
# 1: configure the package management system (yum)
# 2: install mongodb
# 3: configure mongodb settings
# 3.A: give mongod permission to files
# data & log directories (everything in /srv/mongodb)
# http://stackoverflow.com/questions/7948789/mongodb-mongod-complains-that-there-is-no-data-db-folder
execute "mongod_permission" do
command "sudo chown -R mongod:mongod /srv/mongodb"
#command "sudo chown mongod:mongod /var/run/mongodb/mongod.pid"
#command "sudo chown -R $USER /srv/mongodb"
end
# 3.B: edit Transparent Huge Pages (THP) Settings
# get rid of mongod startup warning
# http://docs.mongodb.org/manual/reference/transparent-huge-pages/#transparent-huge-pages-thp-settings
# 3.B.1: disable
execute "disable_thp_khugepaged_defrag" do
command "echo 0 | sudo tee /sys/kernel/mm/transparent_hugepage/khugepaged/defrag" # different b/c file doesn't have options list
end
execute "disable_thp_hugepage_defrag" do
command "echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag" # overwrite, not append
end
execute "disable_thp_hugepage_enables" do
command "echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled" # overwrite, not append
end
# 3.B.2: verify disabled on reboot
template "/etc/rc.local" do
source "init-rc.local.erb"
owner 'root'
group 'root'
mode '0775'
end
# 4: use upstart & monit to keep mongod alive
Template: init-rc.local.erb
touch /var/lock/subsys/local
if test -f /sys/kernel/mm/transparent_hugepage/khugepaged/defrag; then
echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
The problem with your own solution is that the template can be overwritten by another recipe with its own rc.local template.
To avoid that, I add the lines to the existing rc.local:
execute "disable_thp_hugepage_defrag" do
command "sudo sed -i -e '$i \\echo never > /sys/kernel/mm/transparent_hugepage/defrag\\n' /etc/rc.local"
not_if 'grep -c "transparent_hugepage/defrag" /etc/rc.local'
end
execute "disable_thp_hugepage_enables" do
command "sudo sed -i -e '$i \\echo never > /sys/kernel/mm/transparent_hugepage/enabled\\n' /etc/rc.local"
not_if 'grep -c "transparent_hugepage/enabled" /etc/rc.local'
end
The grep makes sure that the line is not already in it.
Maybe chef has something better to manage that?
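The guard-then-insert pattern above can be tried safely on a scratch copy of rc.local (GNU sed; `$i` inserts before the final line, which keeps a trailing `exit 0` last):

```shell
# exercise the grep guard + sed insert on a throwaway file
rc=$(mktemp)
printf 'touch /var/lock/subsys/local\nexit 0\n' > "$rc"
if ! grep -q "transparent_hugepage/defrag" "$rc"; then
    # insert the echo line before the last line of the file
    sed -i -e '$i echo never > /sys/kernel/mm/transparent_hugepage/defrag' "$rc"
fi
```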
We can efficiently replace the contents of a file by grouping the matched elements,
e.g. appending "transparent_hugepage=never" to the end of the existing line that starts with "kernel ":
ruby_block "replace_line" do
block do
file = Chef::Util::FileEdit.new("/etc/grub.conf")
file.search_file_replace_line(/kernel.*/, '\0 transparent_hugepage=never')
file.write_file
end
end
\0 inserts the whole matched string.
Note the single quotes: in double quotes, Ruby would interpret \0 as a NUL character before gsub ever saw it.
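For comparison, sed's replacement syntax uses `&` for the same purpose as Ruby's \0, which is handy if you want to test the substitution from a shell first:

```shell
# & in sed stands for the whole match, like \0 in Ruby's gsub
line="kernel /vmlinuz ro quiet"
patched=$(printf '%s\n' "$line" | sed 's/kernel.*/& transparent_hugepage=never/')
echo "$patched"
```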
I disabled hugepages by replicating the following in chef (looks the same as above but with the addition of a not_if statement):
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
E.g.:
execute "disable_hugepage_defrag" do
not_if "grep -F '[never]' /sys/kernel/mm/transparent_hugepage/defrag"
command "echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"
end
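The reason grep -F '[never]' works as a guard: the sysfs file lists every option and brackets the active one. A sketch of the format (the exact option list can vary by kernel):

```shell
# what /sys/kernel/mm/transparent_hugepage/defrag typically reads as
state="always madvise [never]"
echo "$state" | grep -F '[never]'   # matches only when never is active
```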
I have also had success inserting lines with file.insert_line_if_no_match; the Ruby line-replace feature will probably work for you:
search_file_replace_line(regex, newline) ⇒ Object
ruby_block 'replace_line' do
block do
file = Chef::Util::FileEdit.new('/path/to/file')
file.search_file_replace_line(/Line to find/, 'Line to replace with')
file.write_file
end
end

Why this Debian-Linux service won't autostart? Starting the service manually after startup works

I'm on a headless RaspberryPi (Raspbian) and I would like this service to autostart when system starts up:
#!/bin/bash
### BEGIN INIT INFO
# Provides: mnt
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: mount/unmount volumes from /etc/fstab
### END INIT INFO
#VARIABLES for the truecrypt volume
PROTECT_HIDDEN=no
KEYFILES=""
PASSWORD_FILE=/etc/truecrypt
mount_all(){
slot=0
while read line;
do
read -a fields <<< $line
VOLUME_PATH=${fields[0]}
MOUNT_DIRECTORY=${fields[1]}
FILESYSTEM=${fields[2]}
OPTIONS=${fields[3]}
slot=$((slot+1))
truecrypt \
--text \
--verbose \
--keyfiles=$KEYFILES \
--protect-hidden=$PROTECT_HIDDEN \
--slot=${slot} \
--fs-options=$OPTIONS \
--filesystem=$FILESYSTEM $VOLUME_PATH $MOUNT_DIRECTORY \
< <(grep $VOLUME_PATH $PASSWORD_FILE | sed "s,^${VOLUME_PATH}:,,") \
| grep -v "Enter password for"
done < <(grep '^##truecrypt' /etc/fstab | sed 's/##truecrypt://g')
}
# Function to redirect the output to syslog
log_to_syslog(){
# Temporal file for a named pipe
script_name=$(basename "$0")
named_pipe=$(mktemp -u --suffix=${script_name}.$$)
# On exit clean up
trap "rm -f ${named_pipe}" EXIT
# create the named pipe
mknod ${named_pipe} p
# start syslog and redirect the named pipe
# append the script name before the messages
logger <${named_pipe} -t $0 &
# Redirect stout and stderr to the named pipe
exec 1>${named_pipe} 2>&1
}
# If the script does not run on a terminal then use syslog
set_log_output(){
if [ ! -t 1 ]; then
log_to_syslog
fi
}
case "$1" in
''|start)
EXITSTATUS=0
set_log_output
mount_all || EXITSTATUS=1
exit $EXITSTATUS
;;
stop)
EXITSTATUS=0
set_log_output
truecrypt --verbose --force --dismount || EXITSTATUS=1
exit $EXITSTATUS
;;
restart|force-reload)
EXITSTATUS=0
$0 stop || EXITSTATUS=1
$0 start || EXITSTATUS=1
exit $EXITSTATUS
;;
status)
EXITSTATUS=0
truecrypt --list 2>/dev/null || echo "No truecrypt volumes mounted"
exit $EXITSTATUS
;;
*)
echo "Usage: $0 [start|stop|restart]"
exit 3
;;
esac
The service has 755 permissions and is owned by root. After setting the permissions I did (with no errors):
update-rc.d mnt defaults
When I start the service manually immediately after startup it works well.
Where may be the problem? It would be also great to use this service as a required prerequisite for autostarting Samba - is it possible?
The solution was pretty simple. I had installed truecrypt only as a binary, and the PATH entry for truecrypt was set only for my user, not for root or whichever system user runs the autostart.
The solution was to change the truecrypt command to /path_to_truecrypt/truecrypt.
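A minimal sketch of guarding against this class of problem: init scripts run with a minimal PATH, so verify and call binaries by absolute path (the truecrypt location in the comment is just an assumption):

```shell
# fail early if a required binary is missing from the expected absolute path
require_bin() {
    [ -x "$1" ] || { echo "$1: not found or not executable" >&2; return 1; }
}
# e.g. TRUECRYPT=/usr/local/bin/truecrypt; require_bin "$TRUECRYPT" || exit 1
require_bin /bin/sh && found=yes
```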

How to port `ranger-cd` function to fish shell

I have been trying to port the ranger-cd function for the ranger file manager to the fish shell. As of 2013, ranger’s ranger-cd function looks like this:
function ranger-cd {
tempfile='/tmp/chosendir'
/usr/bin/ranger --choosedir="$tempfile" "${@:-$(pwd)}"
test -f "$tempfile" &&
if [ "$(cat -- "$tempfile")" != "$(echo -n `pwd`)" ]; then
cd -- "$(cat "$tempfile")"
fi
rm -f -- "$tempfile"
}
# This binds Ctrl-O to ranger-cd:
bind '"\C-o":"ranger-cd\C-m"'
(This function gives a temporary file to ranger file manager to store the last accessed directory so that we can change to that directory after ranger quits.)
Here’s what I have done so far to port the function to fish:
function ranger-cd
set tempfile '/tmp/chosendir'
/usr/bin/ranger --choosedir=$tempfile (pwd)
test -f $tempfile and
if cat $tempfile != echo -n (pwd)
cd (cat $tempfile)
end
rm -f $tempfile
end
function fish_user_key_bindings
bind \co ranger-cd
end
When I use this function I get:
test: unexpected argument at index 2: 'and'
1 /home/gokcecat: !=: No such file or directory
cat: echo: No such file or directory
cat: /home/gokce: Is a directory
I’m guessing there are still multiple errors in the above code. Does anyone have a working solution for this?
My answer is based on gzfrancisco's. However, I fix the "'-a' at index 2" issue, and I also ensure that a new prompt is printed after exiting ranger.
I put the following in ~/.config/fish/config.fish:
function ranger-cd
set tempfile '/tmp/chosendir'
ranger --choosedir=$tempfile (pwd)
if test -f $tempfile
if [ (cat $tempfile) != (pwd) ]
cd (cat $tempfile)
end
end
rm -f $tempfile
end
function fish_user_key_bindings
bind \co 'ranger-cd ; commandline -f repaint'
end
A simpler solution, but it does the same. Type in the shell:
alias ranger 'ranger --choosedir=$HOME/.rangerdir; set RANGERDIR (cat $HOME/.rangerdir); cd $RANGERDIR'
funcsave ranger
alias is a shortcut to define a function. funcsave saves it into ~/.config/fish/functions/ranger.fish.
The problem is this line:
test -f $tempfile and
You should remove the and, because and is fish's conditional-execution command:
http://ridiculousfish.com/shell/user_doc/html/commands.html#and
function ranger-cd
set tempfile '/tmp/chosendir'
/usr/bin/ranger --choosedir=$tempfile (pwd)
if test -f $tempfile; and test (cat $tempfile) != (pwd)
cd (cat $tempfile)
end
rm -f $tempfile
end
EDIT: This does not work for fish unfortunately, but only for bash-compatible shells.
I recently found out that there is another way to accomplish the same end goal, by sourcing the ranger script instead:
source ranger
When ranger is then exited, it will have changed the working directory into the last visited one.

How to compare the content of a tarball with a folder

How can I compare a tar file (already compressed) of the original folder with the original folder?
First I created archive file using
tar -kzcvf directory_name.zip directory_name
Then I tried to compare using
tar -diff -vf directory_name.zip directory_name
But it didn't work.
--compare (-d) is more handy for that.
tar --compare --file=archive-file.tar
works if archive-file.tar is in the directory it was created. To compare archive-file.tar against a remote target (eg if you have moved archive-file.tar to /some/where/) use the -C parameter:
tar --compare --file=archive-file.tar -C /some/where/
If you want to see tar working, use -v; without -v, only errors (missing files/folders) are reported.
Tip: this works with compressed tar.bz2/tar.gz archives, too.
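A throwaway demonstration of --compare, assuming GNU tar:

```shell
# archive a tree, change a file, then let tar report what drifted
cd "$(mktemp -d)"
mkdir demo && echo one > demo/a.txt
tar -cf demo.tar demo
sleep 1                 # make sure the mtime actually changes
echo two > demo/a.txt
# --compare exits non-zero and names the drifted file
tar --compare --file=demo.tar > report.txt 2>&1 || true
cat report.txt
```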
It should be --diff
Try this (without the last directory_name):
tar --diff -vf directory_name.zip
The problem is that the --diff command only looks for differences on the existing files among the tar file and the folder. So, if a new file is added to the folder, the diff command does not report this.
The method of pix is very slow for large compressed tar files, because it extracts each file individually. I use the tar --diff method, looking for files with a different modification time, and extract and diff only those. The files are extracted into a folder base.orig, where base is either the top-level folder of the tar file or the given comparison folder. This results in diffs that include the date of the original file.
Here is the script:
#!/bin/bash
set -o nounset
# Print usage
if [ "$#" -lt 1 ] ; then
echo 'Diff a tar (or compressed tar) file with a folder'
echo 'difftar-folder.sh <tarfile> [<folder>] [strip]'
echo default for folder is .
echo default for strip is 0.
echo 'strip must be 0 or 1.'
exit 1
fi
# Parse parameters
tarfile=$1
if [ "$#" -ge 2 ] ; then
folder=$2
else
folder=.
fi
if [ "$#" -ge 3 ] ; then
strip=$3
else
strip=0
fi
# Get path prefix if --strip is used
if [ "$strip" -gt 0 ] ; then
prefix=`tar -t -f $tarfile | head -1`
else
prefix=
fi
# Original folder
if [ "$strip" -gt 0 ] ; then
orig=${prefix%/}.orig
elif [ "$folder" = "." ] ; then
orig=${tarfile##*/}
orig=./${orig%%.tar*}.orig
elif [ "$folder" = "" ] ; then
orig=${tarfile##*/}
orig=${orig%%.tar*}.orig
else
orig=$folder.orig
fi
echo $orig
mkdir -p "$orig"
# Make sure tar uses english output (for Mod time differs)
export LC_ALL=C
# Search all files with a deviating modification time using tar --diff
tar --diff -a -f "$tarfile" --strip $strip --directory "$folder" | grep "Mod time differs" | while read -r file ; do
# Substitute ': Mod time differs' with nothing
file=${file/: Mod time differs/}
# Check if file exists
if [ -f "$folder/$file" ] ; then
# Extract original file
tar -x -a -f "$tarfile" --strip $strip --directory "$orig" "$prefix$file"
# Compute diff
diff -u "$orig/$file" "$folder/$file"
fi
done
To ignore differences in some or all of the metadata (user, time, permissions), you can pipe the result to awk:
tar --compare --file=archive-file.tar -C /some/where/ | awk '!/Mode/ && !/Uid/ && !/Gid/ && !/time/'
That should output only the true differences between the tar and the directory /some/where/
I recently needed a better compare than what "tar --diff" produced so I made this short script:
#!/bin/bash
tar tf "$1" | while IFS= read -r ; do
if [ "${REPLY%/}" = "$REPLY" ] ; then
tar xOf "$1" "$REPLY" | diff -u - "$REPLY"
fi
done
The easy way is to write:
tar df file.tar
This compares the archive with the current working directory and tells us whether any of the files have changed or been removed.
tar df file.tar -C path/folder
This compares the archive with the given folder.

Converting relative path into absolute path?

I'm not sure if these paths are duplicates. Given the relative path, how do I determine absolute path using a shell script?
Example:
relative path: /x/y/../../a/b/z/../c/d
absolute path: /a/b/c/d
The most reliable method I've come across in unix is readlink -f:
$ readlink -f /x/y/../../a/b/z/../c/d
/a/b/c/d
A couple caveats:
This also has the side-effect of resolving all symlinks. This may or may not be desirable, but usually is.
readlink will give a blank result if you reference a non-existent directory. If you want to support non-existent paths, use readlink -m instead. Unfortunately this option doesn't exist on versions of readlink released before ~2005.
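The difference between the two flags in practice (GNU coreutils readlink):

```shell
# -f requires the path to exist; -m normalizes purely textually
d=$(mktemp -d)
mkdir -p "$d/a/b"
resolved=$(readlink -f "$d/a/b/../b")    # existing path: resolved
missing=$(readlink -m "$d/no/such/dir")  # missing path: still normalized
```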
From this source comes:
#!/bin/bash
# Assume parameter passed in is a relative path to a directory.
# For brevity, we won't do argument type or length checking.
ABS_PATH=`cd "$1"; pwd` # double quotes for paths that contain spaces etc...
echo "Absolute path: $ABS_PATH"
You can also do a Perl one-liner, e.g. using Cwd::abs_path
Using bash
# Directory
relative_dir="folder/subfolder/"
absolute_dir="$( cd "$relative_dir" && pwd )"
# File
relative_file="folder/subfolder/file"
absolute_file="$( cd "${relative_file%/*}" && pwd )"/"${relative_file##*/}"
${relative_file%/*} is same result as dirname "$relative_file"
${relative_file##*/} is same result as basename "$relative_file"
Caveats: Does not resolve symbolic links (i.e. does not canonicalize path ) => May not differentiate all duplicates if you use symbolic links.
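The two expansions next to their command equivalents:

```shell
relative_file="folder/subfolder/file"
dir_part="${relative_file%/*}"     # shortest-suffix strip: folder/subfolder
file_part="${relative_file##*/}"   # longest-prefix strip: file
```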
Using realpath
The realpath command does the job. An alternative is to use readlink -e (or readlink -f). However, realpath is often not installed by default. If you cannot be sure realpath or readlink is present, you can substitute it using perl (see below).
Using perl
Steven Kramer proposes a shell alias if realpath is not available in your system:
$ alias realpath="perl -MCwd -e 'print Cwd::realpath(\$ARGV[0]),qq<\n>'"
$ realpath path/folder/file
/home/user/absolute/path/folder/file
or if you prefer using directly perl:
$ perl -MCwd -e 'print Cwd::realpath($ARGV[0]),qq<\n>' path/folder/file
/home/user/absolute/path/folder/file
This one-line perl command uses Cwd::realpath. There are in fact three perl functions; each takes a single argument and returns the absolute pathname. The details below are from the documentation: Perl5 > Core modules > Cwd.
abs_path() uses the same algorithm as getcwd(). Symbolic links and relative-path components (. and ..) are resolved to return the canonical pathname, just like realpath.
use Cwd 'abs_path';
my $abs_path = abs_path($file);
realpath() is a synonym for abs_path()
use Cwd 'realpath';
my $abs_path = realpath($file);
fast_abs_path() is a more dangerous, but potentially faster version of abs_path()
use Cwd 'fast_abs_path';
my $abs_path = fast_abs_path($file);
These functions are exported only on request => therefore use Cwd to avoid the "Undefined subroutine" error as pointed out by arielf. If you want to import all these three functions, you can use a single use Cwd line:
use Cwd qw(abs_path realpath fast_abs_path);
Take a look at 'realpath'.
$ realpath
usage: realpath [-q] path [...]
$ realpath ../../../../../
/data/home
Since I've run into this many times over the years, and this time around I needed a pure bash portable version that I could use on OSX and linux, I went ahead and wrote one:
The living version lives here:
https://github.com/keen99/shell-functions/tree/master/resolve_path
but for the sake of SO, here's the current version (I feel it's well tested..but I'm open to feedback!)
Might not be difficult to make it work for plain bourne shell (sh), but I didn't try...I like $FUNCNAME too much. :)
#!/bin/bash
resolve_path() {
#I'm bash only, please!
# usage: resolve_path <a file or directory>
# follows symlinks and relative paths, returns a full real path
#
local owd="$PWD"
#echo "$FUNCNAME for $1" >&2
local opath="$1"
local npath=""
local obase=$(basename "$opath")
local odir=$(dirname "$opath")
if [[ -L "$opath" ]]
then
#it's a link.
#file or directory, we want to cd into it's dir
cd $odir
#then extract where the link points.
npath=$(readlink "$obase")
#have to -L BEFORE we -f, because -f includes -L :(
if [[ -L $npath ]]
then
#the link points to another symlink, so go follow that.
resolve_path "$npath"
#and finish out early, we're done.
return $?
#done
elif [[ -f $npath ]]
#the link points to a file.
then
#get the dir for the new file
nbase=$(basename $npath)
npath=$(dirname $npath)
cd "$npath"
ndir=$(pwd -P)
retval=0
#done
elif [[ -d $npath ]]
then
#the link points to a directory.
cd "$npath"
ndir=$(pwd -P)
retval=0
#done
else
echo "$FUNCNAME: ERROR: unknown condition inside link!!" >&2
echo "opath [[ $opath ]]" >&2
echo "npath [[ $npath ]]" >&2
return 1
fi
else
if ! [[ -e "$opath" ]]
then
echo "$FUNCNAME: $opath: No such file or directory" >&2
return 1
#and break early
elif [[ -d "$opath" ]]
then
cd "$opath"
ndir=$(pwd -P)
retval=0
#done
elif [[ -f "$opath" ]]
then
cd $odir
ndir=$(pwd -P)
nbase=$(basename "$opath")
retval=0
#done
else
echo "$FUNCNAME: ERROR: unknown condition outside link!!" >&2
echo "opath [[ $opath ]]" >&2
return 1
fi
fi
#now assemble our output
echo -n "$ndir"
if [[ "x${nbase:=}" != "x" ]]
then
echo "/$nbase"
else
echo
fi
#now return to where we were
cd "$owd"
return $retval
}
here's a classic example, thanks to brew:
%% ls -l `which mvn`
lrwxr-xr-x 1 draistrick 502 29 Dec 17 10:50 /usr/local/bin/mvn -> ../Cellar/maven/3.2.3/bin/mvn
use this function and it will return the -real- path:
%% cat test.sh
#!/bin/bash
. resolve_path.inc
echo
echo "relative symlinked path:"
which mvn
echo
echo "and the real path:"
resolve_path `which mvn`
%% test.sh
relative symlinked path:
/usr/local/bin/mvn
and the real path:
/usr/local/Cellar/maven/3.2.3/libexec/bin/mvn
I wanted to use realpath but it is not available on my system (macOS), so I came up with this script:
#!/bin/sh
# NAME
# absolute_path.sh -- convert relative path into absolute path
#
# SYNOPSYS
# absolute_path.sh ../relative/path/to/file
echo "$(cd "$(dirname "$1")"; pwd)/$(basename "$1")"
Example:
./absolute_path.sh ../styles/academy-of-management-review.csl
/Users/doej/GitHub/styles/academy-of-management-review.csl
Maybe this helps (Perl; glob is only needed to expand the '~'):
use Cwd 'abs_path';
my $path = "~user/dir/../file";
my ($resolvedPath) = glob($path);   # to resolve paths with '~'
# Since glob does not resolve relative path components, we use abs_path
my $absPath = abs_path($resolvedPath);