ksh epoch to datetime format - perl

I have a script to find the expiry date of any user's password. The script can find the expiry date in seconds (epoch) but cannot convert it to datetime format.
#!/usr/bin/ksh
if (( ${#} < 1 )) ; then
print "No Arguments"
exit
fi
lastupdate=`pwdadm -q $1|awk '{print $3;}'`
((lastupdate=$lastupdate+0))
maxagestr=`lsuser -a maxage $1`
maxage=${maxagestr#*=}
let maxageseconds=$maxage*604800
expdateseconds=$(expr "$maxageseconds" + "$lastupdate")
((expdateseconds=$expdateseconds+0))
expdate=`perl -le 'print scalar(localtime($expdateseconds))'`
echo $expdateseconds
echo $expdate
In this script, the value of expdateseconds is correct. If I type the value of expdateseconds literally as the parameter of the localtime() function, the function shows the date in datetime format.
But if I pass the $expdateseconds variable, the function does not work correctly and always returns 01.01.1970.
How can I pass a variable as a parameter to the localtime() function?

Shell variables are not expanded within single quotes. So in your code, perl is not "seeing" the shell variable; it is instead seeing an uninitialized Perl variable whose value defaults to zero, which is why localtime always gives you the epoch, 01.01.1970. Shell variables are expanded within double quotes, so in this case that's all you need to do:
expdate=`perl -le "print scalar(localtime($expdateseconds))"`
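A related variant (not from the answer above, just a sketch): keep the single quotes and pass the value to perl as a command-line argument, so no shell interpolation is needed inside the perl code at all:
# epoch value arrives via @ARGV instead of shell interpolation
expdate=$(perl -le 'print scalar localtime($ARGV[0])' "$expdateseconds")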

As @JeffY said, your problem is the quotes. You can also do it without perl (assuming your date command is the GNU version):
expdate=`date -d @$expdateseconds`
Although, since you're using ksh - and really, any modern POSIX shell - I recommend that you avoid `...`, which can cause confusing behavior with quoting, and use $(...) instead.
expdate=$(date -d @$expdateseconds)
This isn't codereview, but I have a few other tips regarding your script. The usual rule is to send error messages (like "No arguments") to standard error instead of standard out (with print -u2) and exit with a nonzero value (typically 1) when there's a usage error.
Whenever passing a parameter to a command, like pwdadm -q $1, you run the risk of funny characters messing things up unless you double-quote the parameter: pwdadm -q "$1".
You have an odd mixture of let, ((, and expr in your arithmetic. I would suggest that you declare all your numeric variables with typeset -i varname and just use ((...)) for all arithmetic. Inside ((...)), you don't have to worry about globbing messing things up (let a=b*c will expand into a syntax error if you have a file in the current directory named a=b.c, for instance; (( a=b*c )) won't). You also don't need to put dollar signs on the variables (which just makes the shell convert them to a string and then parse their numeric value out again), or add 0 to them just to make sure they're numbers.
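Putting those tips together, a rough sketch of the whole script might look like this (untested; the /lastupdate/ filter in awk is an assumption about pwdadm -q's output format, and all names are kept from your original):
#!/usr/bin/ksh
if (( $# < 1 )); then
    print -u2 "No Arguments"        # usage errors go to standard error
    exit 1                          # and exit nonzero
fi
typeset -i lastupdate maxage expdateseconds
lastupdate=$(pwdadm -q "$1" | awk '/lastupdate/ {print $3}')   # "$1" quoted
maxagestr=$(lsuser -a maxage "$1")
maxage=${maxagestr#*=}
(( expdateseconds = maxage * 604800 + lastupdate ))            # all arithmetic in one ((...))
expdate=$(perl -le "print scalar(localtime($expdateseconds))")
print "$expdateseconds"
print "$expdate"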

There is no need to use either awk or perl. E.g.:
#!/usr/bin/ksh93
[[ -z $1 ]] && print -u2 "No Arguments" && exit 1
typeset -i LASTUPDATE S
X=( ${ pwdadm -q "$1" ; } )
LASTUPDATE=${X[3]}
X=${ lsuser -a maxage "$1" ; }
S=${X#*=}
(( S *= 604800 ))
(( S += LASTUPDATE ))
printf "$S\n%T\n" "#$S"

Related

How to store the value of telnet->cmd in an attribute in perl script

I have been running
grep -n \"fixed-address $IP_Address\" /etc/dhcp3/dhcpd.conf | cut -d \":\" -f2"
inside telnet->cmd and I want to store the output in a variable. I got an output value of 1 when I tried, but the output value should be 916. Here is a part of my Perl script
my $dhcp_value = $telnet->cmd(
string => "grep -n \"fixed-address $IP_Address\" /etc/dhcp3/dhcpd.conf | cut -d \":\" -f2"
);
print "$dhcp_value\n";
Please let me know how to run the grep -n command via $telnet->cmd.
If you're using the Net::Telnet module, then the documentation for the cmd method says this
In a scalar context, the characters read from the remote side are discarded and 1 is returned on success
In a list context, just the output generated by the command is returned, one line per element.
So it looks like you need to apply list context to the cmd method call. You can either use an array, like the example in the documentation, or you can just put your scalar variable in parentheses.
What I think you need is this.
Note that I have built the command string separately for clarity, and used alternative delimiters with qq{...} so as to avoid having to escape embedded double quotes.
Note also that the return value from the cmd call will probably have newline characters at the end; chomp will remove these for you.
my $cmd = qq{grep -n "fixed-address $IP_Address" /etc/dhcp3/dhcpd.conf | cut -d ":" -f2};
my ($dhcp_value) = $telnet->cmd(string => $cmd );
chomp $dhcp_value;
print "$dhcp_value\n";

How do I replace a substring by the output of a shell command with sed, awk or such?

I'd like to use sed or any command line tool to replace parts of lines by the output of shell commands. For example:
Replace Linux epochs with human-readable timestamps, by calling date
Replace hex dumps of specific protocol packets with their decoded counterparts, by calling an in-house decoder
sed seems best suited because it allows matching patterns and reformatting other things too, like moving bits of matches around, but it is not mandatory.
Here is a simplified example:
echo "timestamp = 1234567890" | sed "s/timestamp = \(.*\)/timestamp = $(date -u --d #\1 "+%Y-%m-%d %T")/g"
Of course, the $(...) thing does not work. As far as I understand, that's for environment variables.
So what would the proper syntax be? Is sed recommended in this case? I've spent several hours searching... Is sed even capable of this? Are there other tools better suited?
Edit
I need...
Pattern matching. The log is full of other things, so I need to be able to pinpoint the strings I want to replace based on context (text before and after, on the same line). This excludes column-position-based matching like awk '{$3...
In-place replacement, so that the rest of the line, "Timestamp = " or whatever, remains unchanged. This excludes sed's 'e' command.
To run an external command in sed you need to use e. See an example:
$ echo "timestamp = 1234567890" | sed "s#timestamp = \(.*\)#date -u --d #\1 "\+\%Y"#e"
2009
With the full format:
$ sed "s#timestamp = \(.*\)#echo timestamp; date -u --d #\1 '\+\%Y-\%m-\%d \%T'#e" <<< "timestamp = 1234567890"
timestamp
2009-02-13 23:31:30
This catches the timestamp and converts it into +%Y format.
From man sed:
e
This command allows one to pipe input from a shell command into
pattern space. If a substitution was made, the command that is found
in pattern space is executed and pattern space is replaced with its
output. A trailing newline is suppressed; results are undefined if the
command to be executed contains a nul character. This is a GNU sed
extension.
However, you see it is a bit "ugly". Depending on what you want to do, you'd better use a regular while loop to fetch the values and then use date normally. For example, if the file is like:
timestamp = 1234567890
Then you can say:
while IFS="=" read -r a b
do
echo "$b"
done < file
This gives you the timestamp in $b, and you can then run date on it normally, as sketched below.
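A minimal completion of that loop (a sketch assuming GNU date and the same "timestamp = 1234567890" input format):
while IFS="=" read -r a b
do
    b=${b# }     # drop the space that follows the '='
    echo "timestamp = $(date -u -d "@$b" '+%Y-%m-%d %T')"
done < file
# prints: timestamp = 2009-02-13 23:31:30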
As commented, use a language with built-in time functions. For example:
$ echo "timestamp = 1234567890" | gawk '{$3 = strftime("%F %T", $3)} 1'
timestamp = 2009-02-13 18:31:30
$ echo "timestamp = 1234567890" | perl -MTime::Piece -pe 's/(\d+)/ localtime($1)->strftime("%F %T") /e'
timestamp = 2009-02-13 18:31:30

How to expand variable literally when calling perl from csh script?

Below is a csh script.
#! /bin/csh
set alpha=10\20\30;
set beta = $alpha.alpha;
perl -p -i.bak -e 's/gamma/'$beta'/' tmp;
The tmp file contains just the word gamma. After running tmp.csh, I expect 10\20\30.alpha in tmp, but it's now 102030.alpha.
How can I preserve the backslashes in this situation?
Note: I would prefer not to change the definition of the alpha variable, as it is used elsewhere in the script where it needs to be in exactly this format (10\20\30).
Thanks.
In csh, for your alpha assignment, the backslash is being taken to mean 'a literal 2 or 3'. In order to keep csh from doing this, the assignment needs to be enclosed in quotes.
#! /bin/csh
set alpha="10\20\30";
set beta = $alpha.alpha;
perl -p -i.bak -e 's/gamma/'$beta'/' tmp;
If in doubt, it's often helpful to 'echo' your variables out to see exactly what they contain. I don't understand your final note, as the 'alpha' variable is not equal to 10\20\30 the way you have it originally assigned.
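For instance, a quick echo check along those lines (just a sketch, assuming the same csh as above):
#! /bin/csh
set alpha=10\20\30
echo "$alpha"        # prints 102030 - csh swallowed the backslashes
set alpha="10\20\30"
echo "$alpha"        # prints 10\20\30 - backslashes preserved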

Perl String Interpolation in Bash Command

I'm trying to use GNU Date to get the seconds between two dates. The reason I'm using GNU Date is performance: in testing it was 10x faster than Perl for this purpose. However, one of my arguments is a Perl variable, like this:
my $b_row="2012-01-05 20:20:22";
my $exec =qx'CUR_DATE=`echo $(date +"%F %T")` ; echo $(($(date -d "$CUR_DATE" +%s)-$(date -d "$b_row" +%s)))';
The problem is that b_row is not being expanded. I've tried a couple of different solutions (IPC::System::Simple being one) and tried adjusting the backticks, etc. No success; any ideas how to do this appropriately? The main thing is that I need to capture the output from the bash command.
Make it easier on yourself and do the minimum amount of work in the shell. This works for me:
my $b_row = '2012-01-05 20:20:22';
my $diff = qx(date -d "\$(date +'%F %T')" +%s) -
qx(date -d "$b_row" +%s);
Just be absolutely sure $b_row doesn't have any shell metacharacters in it.
That's because you use ' as the delimiter:
Using single-quote as a delimiter protects the command from
Perl's double-quote interpolation, passing it on to the shell
instead:
$perl_info = qx(ps $$); # that's Perl's $$
$shell_info = qx'ps $$'; # that's the new shell's $$
qx has the feature of letting you choose a convenient delimiter, including the option of whether to interpolate the string or not (by choosing ' as the delimiter). For this use case, sometimes you want interpolation and sometimes you don't, so qx (and backticks) may not be the right tool for the job.
readpipe is probably a better tool. Like the system EXPR command, it takes an arbitrary scalar as input, and you have all of Perl's tools at your disposal to construct that scalar. One way to do it is:
my $exec = readpipe
'CUR_DATE=`echo $(date +"%F %T")` ;' # interp not desired
. ' echo $(($(date -d "$CUR_DATE" +%s)-$(date -d "'
. qq/"$b_row"/ # now interp is desired
. ' +%s)))'; # interp not desired again

How can I have a newline in a string in sh?

This
STR="Hello\nWorld"
echo $STR
produces as output
Hello\nWorld
instead of
Hello
World
What should I do to have a newline in a string?
Note: This question is not about echo.
I'm aware of echo -e, but I'm looking for a solution that allows passing a string (which includes a newline) as an argument to other commands that do not have a similar option to interpret \n's as newlines.
If you're using Bash, you can use backslash-escapes inside of a specially-quoted $'string'. For example, adding \n:
STR=$'Hello\nWorld'
echo "$STR" # quotes are required here!
Prints:
Hello
World
If you're using pretty much any other shell, just insert the newline as-is in the string:
STR='Hello
World'
Bash recognizes a number of other backslash escape sequences in the $'' string. Here is an excerpt from the Bash manual page:
Words of the form $'string' are treated specially. The word expands to
string, with backslash-escaped characters replaced as specified by the
ANSI C standard. Backslash escape sequences, if present, are decoded
as follows:
\a alert (bell)
\b backspace
\e
\E an escape character
\f form feed
\n new line
\r carriage return
\t horizontal tab
\v vertical tab
\\ backslash
\' single quote
\" double quote
\nnn the eight-bit character whose value is the octal value
nnn (one to three digits)
\xHH the eight-bit character whose value is the hexadecimal
value HH (one or two hex digits)
\cx a control-x character
The expanded result is single-quoted, as if the dollar sign had not
been present.
A double-quoted string preceded by a dollar sign ($"string") will cause
the string to be translated according to the current locale. If the
current locale is C or POSIX, the dollar sign is ignored. If the
string is translated and replaced, the replacement is double-quoted.
Echo is so nineties and so fraught with perils that its use should result in core dumps no less than 4GB. Seriously, echo's problems were the reason why the Unix Standardization process finally invented the printf utility, doing away with all the problems.
So to get a newline in a string, there are two ways:
# 1) Literal newline in an assignment.
FOO="hello
world"
# 2) Command substitution.
BAR=$(printf "hello\nworld\n") # Alternative; note: final newline is deleted
printf '<%s>\n' "$FOO"
printf '<%s>\n' "$BAR"
There! No SYSV vs BSD echo madness, everything gets neatly printed and fully portable support for C escape sequences. Everybody please use printf now for all your output needs and never look back.
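To make those perils concrete, here is one classic echo trap (a small illustration, not part of the original answer): arguments that happen to look like echo options get eaten, while printf passes them through untouched.
var='-n'
echo "$var"           # many shells print nothing here: -n is taken as an option
printf '%s\n' "$var"  # reliably prints -n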
What I did based on the other answers was
NEWLINE=$'\n'
my_var="__between eggs and bacon__"
echo "spam${NEWLINE}eggs${my_var}bacon${NEWLINE}knight"
# which outputs:
spam
eggs__between eggs and bacon__bacon
knight
I find the -e flag elegant and straightforward
bash$ STR="Hello\nWorld"
bash$ echo -e $STR
Hello
World
If the string is the output of another command, I just use quotes
indexes_diff=$(git diff index.yaml)
echo "$indexes_diff"
The problem isn't with the shell. The problem is actually with the echo command itself, and the lack of double quotes around the variable interpolation. You can try using echo -e, but that isn't supported on all platforms, which is one of the reasons printf is now recommended for portability.
You can also try and insert the newline directly into your shell script (if a script is what you're writing) so it looks like...
#!/bin/sh
echo "Hello
World"
#EOF
or equivalently
#!/bin/sh
string="Hello
World"
echo "$string" # note double quotes!
The only simple alternative is to actually type a new line in the variable:
$ STR='new
line'
$ printf '%s' "$STR"
new
line
Yes, that means pressing Enter where needed in the code.
There are several equivalents to a new line character.
\n ### A common way to represent a new line character.
\012 ### Octal value of a new line character.
\x0A ### Hexadecimal value of a new line character.
But all those require "an interpretation" by some tool (POSIX printf):
echo -e "new\nline" ### on POSIX echo, `-e` is not required.
printf 'new\nline' ### Understood by POSIX printf.
printf 'new\012line' ### Valid in POSIX printf.
printf 'new\x0Aline'
printf '%b' 'new\0012line' ### Valid in POSIX printf.
And therefore, the tool is required to build a string with a new-line:
$ STR="$(printf 'new\nline')"
$ printf '%s' "$STR"
new
line
In some shells, the sequence $' is a special shell expansion.
Known to work in ksh93, bash and zsh:
$ STR=$'new\nline'
Of course, more complex solutions are also possible:
$ echo '6e65770a6c696e650a' | xxd -p -r
new
line
Or
$ echo "new line" | sed 's/ \+/\n/g'
new
line
A $ right before the single quotation marks in '...\n...' works, as shown below; with double quotation marks, however, it doesn't.
$ echo $'Hello\nWorld'
Hello
World
$ echo $"Hello\nWorld"
Hello\nWorld
Disclaimer: I first wrote this and then stumbled upon this question. I thought this solution wasn't yet posted, and saw that tlwhitec did post a similar answer. Still I'm posting this because I hope it's a useful and thorough explanation.
Short answer:
This seems quite a portable solution, as it works in quite a few shells.
This way you can get a real newline into a variable.
The benefit of this solution is that you don't have to use newlines in your source code, so you can indent
your code any way you want, and the solution still works. This makes it robust. It's also portable.
# Robust way to put a real newline in a variable (bash, dash, ksh, zsh; indentation-resistant).
nl="$(printf '\nq')"
nl=${nl%q}
Longer answer:
Explanation of the above solution:
The newline would normally be lost due to command substitution, but to prevent that, we add a 'q' and remove it afterwards. (The reason for the double quotes is explained further below.)
We can prove that the variable contains an actual newline character (0x0A):
printf '%s' "$nl" | hexdump -C
00000000 0a |.|
00000001
(Note that the '%s' is needed; otherwise printf would translate a literal '\n' string into an actual 0x0A character, meaning we would prove nothing.)
Of course, instead of the solution proposed in this answer, one could use this as well (but...):
nl='
'
... but that's less robust and can be easily damaged by accidentally indenting the code, or by forgetting to outdent it afterwards, which makes it inconvenient to use in (indented) functions, whereas the earlier solution is robust.
Now, as for the double quotes:
The reason for the double quotes " surrounding the command substitution as in nl="$(printf '\nq')" is that you can then even prefix the variable assignment with the local keyword or builtin (such as in functions), and it will still work on all shells, whereas otherwise the dash shell would have trouble, in the sense that dash would otherwise lose the 'q' and you'd end up with an empty 'nl' variable (again, due to command substitution).
That issue is better illustrated with another example:
dash_trouble_example() {
e=$(echo hello world) # Not using 'local'.
echo "$e" # Fine. Outputs 'hello world' in all shells.
local e=$(echo hello world) # But now, when using 'local' without double quotes ...:
echo "$e" # ... oops, outputs just 'hello' in dash,
# ... but 'hello world' in bash and zsh.
local f="$(echo hello world)" # Finally, using 'local' and surrounding with double quotes.
echo "$f" # Solved. Outputs 'hello world' in dash, zsh, and bash.
# So back to our newline example, if we want to use 'local', we need
# double quotes to surround the command substitution:
# (If we didn't use double quotes here, then in dash the 'nl' variable
# would be empty.)
local nl="$(printf '\nq')"
nl=${nl%q}
}
Practical example of the above solution:
# Parsing lines in a for loop by setting IFS to a real newline character:
nl="$(printf '\nq')"
nl=${nl%q}
IFS=$nl
for i in $(printf '%b' 'this is line 1\nthis is line 2'); do
echo "i=$i"
done
# Desired output:
# i=this is line 1
# i=this is line 2
# Exercise:
# Try running this example without the IFS=$nl assignment, and predict the outcome.
I'm no bash expert, but this one worked for me:
STR1="Hello"
STR2="World"
NEWSTR=$(cat << EOF
$STR1
$STR2
EOF
)
echo "$NEWSTR"
I found this easier for formatting the text.
Those picky ones who need just the newline and despise multiline code that breaks indentation could do:
IFS="$(printf '\nx')"
IFS="${IFS%x}"
Bash (and likely other shells) gobble all the trailing newlines after command substitution, so you need to end the printf string with a non-newline character and delete it afterwards. This can also easily become a oneliner.
IFS="$(printf '\nx')" IFS="${IFS%x}"
I know this is two actions instead of one, but my indentation and portability OCD is at peace now :) I originally developed this to be able to split newline-only separated output, and I ended up using a modification that uses \r as the terminating character. That makes the newline splitting work even for DOS output ending with \r\n.
IFS="$(printf '\n\r')"
On my system (Ubuntu 17.10) your example just works as desired, both when typed from the command line (into sh) and when executed as a sh script:
[bash]§ sh
$ STR="Hello\nWorld"
$ echo $STR
Hello
World
$ exit
[bash]§ echo "STR=\"Hello\nWorld\"
> echo \$STR" > test-str.sh
[bash]§ cat test-str.sh
STR="Hello\nWorld"
echo $STR
[bash]§ sh test-str.sh
Hello
World
I guess this answers your question: it just works. (I have not tried to figure out details such as at what moment exactly the substitution of the newline character for \n happens in sh).
However, I noticed that this same script would behave differently when executed with bash and would print out Hello\nWorld instead:
[bash]§ bash test-str.sh
Hello\nWorld
I've managed to get the desired output with bash as follows:
[bash]§ STR="Hello
> World"
[bash]§ echo "$STR"
Note the double quotes around $STR. This behaves identically if saved and run as a bash script.
The following also gives the desired output:
[bash]§ echo "Hello
> World"
I wasn't really happy with any of the options here. This is what worked for me.
str=$(printf "%s" "first line")
str=$(printf "$str\n%s" "another line")
str=$(printf "$str\n%s" "and another line")
This isn't ideal, but I had written a lot of code and defined strings in a way similar to the method used in the question. The accepted solution required me to refactor a lot of the code, so instead I replaced every \n with "$'\n'", and this worked for me, as sketched below.
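For example, the replacement looks like this (a small sketch of that workaround, not code from the question):
# before: STR="Hello\nWorld"
STR="Hello"$'\n'"World"     # after: every literal \n swapped for "$'\n'"
echo "$STR"                 # prints Hello and World on separate lines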