Yocto U-Boot CONFIG_USE_DEFAULT_ENV_FILE: "xxd: not found"

I'm trying to compile U-Boot 2020.07 with the option CONFIG_USE_DEFAULT_ENV_FILE=y and the path to a file that contains the new U-Boot environment records.
The build fails with:
u-boot-suniv-spiflash/1_v2020.07-r0/git/scripts/Makefile.build obj=scripts/basic
/bin/sh: 1: xxd: not found
When I compile the same U-Boot manually with the same Yocto toolchain, the build succeeds and U-Boot works with the replaced environment records.
The problem is related to the Makefile. I found a workaround somewhere that lets the build succeed, but the environment records are empty in U-Boot after boot-up.
The problematic syntax is:
define filechk_defaultenv.h
(grep -v '^#' | \
grep -v '^$$' | \
tr '\n' '\0' | \
sed -e 's/\\\x0\s*//g' | \
xxd -i ; echo ", 0x00" ; )
endef
The workaround from the internet is:
define filechk_defaultenv.h
(grep -v '^#' | \
grep -v '^$$' | \
tr '\n' '\0' | \
sed -e 's/\\\x0\s*//g' | \
xxd -i | \
sed -r 's/([0-9a-f])$$/\1, /'; \
echo "0x00" ; )
endef
Unfortunately, with this workaround the environment records are empty after U-Boot starts up.
Do you know what I am doing wrong?
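For reference, here is a minimal sketch (not from the thread) of what the non-xxd stages of that filechk pipeline do to an environment file. The file contents are made up, and GNU tr/sed are assumed:

```shell
# Hypothetical default-env file: a comment, a blank line, and a backslash continuation.
printf 'bootdelay=2\n# a comment\n\nbootcmd=run loadk; \\\nbootm\n' > /tmp/defenv

# Same stages the Makefile runs before xxd -i; NULs shown as '|' for readability.
grep -v '^#' /tmp/defenv \
  | grep -v '^$' \
  | tr '\n' '\0' \
  | sed -e 's/\\\x0\s*//g' \
  | tr '\0' '|'
# prints: bootdelay=2|bootcmd=run loadk; bootm|
```

The result is a NUL-separated record list, which `xxd -i` then turns into comma-separated C byte values; the extra `sed`/`echo` in the workaround only append the final `0x00` terminator.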

Related

Why do some commands give different output when run from Perl than in a Linux terminal?

When I run this command:
bjobs -r -P xenon -W | awk '{print $7}' | grep -v JOB_NAME |
cut -f 1 -d ' ' | xargs
in a terminal, all running JOB_NAMEs are printed, but when I do the same in a Perl script, only JOB_IDs come out.
Perl script code is below:
@dummy_jobs = qx/bjobs -r -P xenon -W | awk '{print $7}' | grep -v JOB_NAME | cut -f 1 -d ' ' | xargs/;
What needs to be changed in Perl?
qx/.../ literals are very much like double-quoted strings. Specifically, $7 is interpolated, so you end up passing ... | awk '{print }' | ... to the shell.
Replace
qx/...$7.../
with
qx/...\$7.../
Or if you prefer, you can use
my $shell_cmd = <<'EOS'; # These single-quotes mean you get exactly what follows.
bjobs -r -P xenon -W | awk '{print $7}' | grep -v JOB_NAME | cut -f 1 -d ' ' | xargs
EOS
my @dummy_jobs = qx/$shell_cmd/;
Another difference is that qx uses /bin/sh instead of whatever shell you were using, but that shouldn't be relevant here.

xargs lines containing -e and -n processed differently

When running the following command with xargs (GNU findutils) 4.7.0
xargs -n1 <<<"-d -e -n -o"
I get this output
-d

-o
Why are -e and -n not present in the output?
From man xargs:
[...] and executes the command (default is /bin/echo) [...]
So it runs:
echo -d
echo -e
echo -n
echo -o
But from man echo:
-n do not output the trailing newline
-e enable interpretation of backslash escapes
And echo -n outputs nothing, while echo -e outputs only the empty line that you see in the output.
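One way around this (a sketch, not from the original answer) is to avoid echo's option parsing entirely by telling xargs to run printf instead; printf takes its format string first, so `-e` and `-n` are treated as data:

```shell
# Each argument is printed on its own line, including -e and -n.
echo "-d -e -n -o" | xargs -n1 printf '%s\n'
# prints: -d, -e, -n, -o (one per line)
```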

Why can't I filter tail's output multiple times through pipes?

Unexpectedly, this fails (no output; tried in sh, zsh, bash):
echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | grep played | sed 's#pl#st#g'
Note that using grep twice also fails, which shows that it hardly matters which commands are used:
# echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | grep played | grep played
grep alone works:
# echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | grep played
played
sed alone works:
# echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | sed 's#pl#st#g'
foo
stayed
bar
With cat instead of tail, it works:
# echo -e "foo\nplayed\nbar" > /tmp/t && cat /tmp/t | grep played | sed 's#pl#st#g'
stayed
With journalctl --follow, it fails just like with tail.
What's the reason for being unable to pipe twice?
It's a buffering issue: grep buffers its output when it is writing to a pipe, but not when it is writing to a terminal. See http://mywiki.wooledge.org/BashFAQ/009 for additional info.
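A common fix (a sketch, not part of the original answer) is to force line buffering on the command in the middle of the pipeline, e.g. with GNU grep's `--line-buffered`; `stdbuf -oL` can do the same for commands that lack such a flag:

```shell
printf 'foo\nplayed\nbar\n' > /tmp/t
# --line-buffered makes grep flush each matching line immediately,
# so sed sees 'played' while tail -f is still running.
# timeout is only here so the demo terminates on its own.
timeout 2 tail -f /tmp/t | grep --line-buffered played | sed 's#pl#st#g'
# prints: stayed
```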

Why can't Postfix write to maillog after sed deletes some lines? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I want a cron job that runs once every three days to clean and sort the maillog.
My job looks like:
/bin/sed -i /status=/!d /var/log/maillog |
(/bin/grep "status=bounced" /var/log/maillog | /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" | /bin/sort -u >> /root/unsent.log) |
(/bin/grep "status=deferred" /var/log/maillog | /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" | /bin/sort -u >> /root/deferred.log) |
(/bin/grep "status=sent" /var/log/maillog | /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" | /bin/sort -u >> /root/sent.log) |
/bin/sed -i "/status=/d" /var/log/maillog
The job works fine and does 3 steps:
Delete from maillog all lines that don't contain "status="
Sort sent, bounced, deferred records into different logs.
Delete from maillog all lines that contain "status="
After this job my maillog is fully cleaned and sorted into 3 logs.
But Postfix won't write any further records to maillog.
If I remove the sed command, Postfix writes new records fine.
Why does the sed command block maillog after the cron job runs?
sed -i will unlink the file it modifies, so syslog/Postfix will continue writing to the old, now-deleted file.
From http://en.wikipedia.org/wiki/Sed:
Note: "sed -i" overwrites the original file with a new one, breaking any links the original may have had
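You can see this unlink-and-replace behaviour directly (a demonstration sketch on a throwaway file; GNU coreutils `stat` assumed):

```shell
echo hello > /tmp/demo.txt
before=$(stat -c %i /tmp/demo.txt)       # inode number before the edit
sed -i 's/hello/world/' /tmp/demo.txt    # GNU sed writes a temp file and renames it over the original
after=$(stat -c %i /tmp/demo.txt)        # different inode: the file was replaced, not edited in place
[ "$before" != "$after" ] && echo "file was replaced"
```

Any process still holding the old file open, like syslog, keeps writing to the deleted inode.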
It is more common to process log files after rotating them out of place with a tool like logrotate or savelog, so that syslog can continue writing uninterrupted.
If you must edit /var/log/maillog in place, you can add a line to the end of your cron job to reload syslog when you are done. Note that if you do this, you can lose log lines written to the file while your script is running. The exact command depends on your distribution / operating system; on Ubuntu, which uses rsyslog, it would be reload rsyslog >/dev/null 2>&1.
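For the rotation route, a logrotate stanza along these lines is typical (a sketch; the path, schedule, retention, and reload command are assumptions that depend on your distribution):

```
/var/log/maillog {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    postrotate
        /usr/sbin/service rsyslog reload >/dev/null 2>&1 || true
    endscript
}
```

logrotate renames the file and then signals syslog to reopen it, so writes are not lost the way they can be with in-place editing.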
I've reformatted your original code to highlight the pipelines you added:
/bin/sed -i /status=/!d /var/log/maillog \
| (/bin/grep "status=bounced" /var/log/maillog \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u >> /root/unsent.log\
) \
| (/bin/grep "status=deferred" /var/log/maillog \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u >> /root/deferred.log\
) \
| (/bin/grep "status=sent" /var/log/maillog \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u >> /root/sent.log \
) \
| /bin/sed -i "/status=/d" /var/log/maillog
As @alberge noted, you could very likely lose log messages with all of this sed -i processing on the same file.
I propose a different approach:
I would move the maillog to a dated filename (the assumption here is that Postfix will create a new file with the standard name it 'likes' to use, /var/log/maillog).
Then your real goal seems to be to extract various categories of messages into separately named files, i.e. unsent.log, deferred.log, sent.log, AND then discard any lines that don't contain the string status= (although you do that first).
Here's my alternative (please read the whole message; don't copy/paste/execute right away!):
logDate=$(/bin/date +%Y%m%d.%H%M%S)
/bin/mv /var/log/maillog /var/log/maillog.${logDate}
/bin/grep "status=bounced" /var/log/maillog.${logDate} \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u \
>> /root/unsent.log.${logDate}
/bin/grep "status=deferred" /var/log/maillog.${logDate} \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u \
>> /root/deferred.log.${logDate}
/bin/grep "status=sent" /var/log/maillog.${logDate} \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u \
>> /root/sent.log.${logDate}
To test that this code is working, replace the 2nd line ( /bin/mv .... ) with
/bin/cp /var/log/maillog /var/log/maillog.${logDate}
Copy/paste that into a terminal window, confirm that /var/log/maillog.${logDate} was copied correctly, then copy/paste each section one at a time and check that the expected output is created in each of the /root logfiles.
(If you get error messages for any of these blocks, make sure there are NO space/tab characters after the last '\' on each of the continued lines. Or you can fold each of those 3 pipelines back into one line, removing the '\' characters as you go.)
(Note that to create each of the /root logfiles, I don't use any pipe-connected subshell sections. In other situations I do use that sort of technique for harder problems, so don't throw the technique away; just use it when it is really required ;-)
After you confirm that all of this works as you need, extend the script to do a final cleanup:
/bin/rm /var/log/maillog.${logDate}
I've added ${logDate} to each of your output files, but since you're using sort -u >>, you may want to drop that 'extension' from the sub-logfile names (unsent.log, deferred.log, sent.log) and just let those files grow naturally. In either case, at some point you'll have to decide how far back you want to keep this data and develop a plan and method for cleaning up these logfiles when they're no longer useful. Someone mentioned the logrotate package; you might want to look into that as your long-term solution.
This solution avoids creating a lot of extra processes and (mostly) eliminates the possibility of lost log records. You might still lose all or part of a record if Postfix writes to the logfile in the same split-second that you move the file, but your solution has similar problems AND more opportunities for that to happen.
If I have misunderstood the intent of your design with the nested ( .... ) | ( .... ) subshells, sorry! Consider updating your post to explain why you are using that technique.
I hope this helps.

How to make backticks work in a HERE doc?

I have a script2:
# This is script2 that is called by script1.
CURRENT_TOMCAT_PROCESS=`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print $2}'`
echo "---> $CURRENT_TOMCAT_PROCESS"
and I call script2 in script1:
ssh $user@$server 'bash -s' < script2
It works fine. But I'm having trouble making the backticks work in a HERE document:
ssh $user@$server 'bash -s' <<EOF
CURRENT_TOMCAT_PROCESS=`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print \$2}'`
echo "---> $CURRENT_TOMCAT_PROCESS"
EOF
(If I don't assign it to a variable and just print it out it works fine, but when I try to assign it to CURRENT_TOMCAT_PROCESS variable using backticks, it doesn't work.)
How can I make this work?
Thanks,
===============================================================================
I could make it work the following way. There is a lot of escaping involved:
ssh $user@$server 'bash -s' <<EOF
CURRENT_TOMCAT_PROCESS="\`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print \$2}'\`"
echo "---> \$CURRENT_TOMCAT_PROCESS"
EFO
I think it is reasonable to escape, because you want to transfer the '$' to the remote side. You seem to have made a typo in your last result (EFO instead of EOF). I've typed it here again:
TOMCAT_USER=foo
ssh $user@$server 'bash -s' <<EOF
CURRENT_TOMCAT_PROCESS="\`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print \$2}'\`"
echo "---> \$CURRENT_TOMCAT_PROCESS"
EOF
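An alternative that avoids the escaping altogether (a sketch, not from the thread): quote the heredoc delimiter, so the body is sent verbatim and everything expands on the remote side. Note that a quoted delimiter also stops local variables like $TOMCAT_USER from interpolating, so set them inside the heredoc. Shown here with a plain local `bash -s` standing in for `ssh $user@$server 'bash -s'`:

```shell
# With <<'EOF' (quoted delimiter), nothing in the body is expanded locally.
bash -s <<'EOF'
MSG=$(echo remote | tr 'a-z' 'A-Z')   # runs on the receiving shell, no backslashes needed
echo "---> $MSG"
EOF
# prints: ---> REMOTE
```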