This was the original question.
Using perl, how can I detect from the command line if a specified file contains only a specified character (or set of characters), like '0' for example?
I tried
perl -ne 'print if s/(^0*$)/yes/' filename
But it doesn't handle all the cases, for example files with multiple lines or lines containing non-zero characters.
Sample input -
A file containing only zeros -
0000000000000000000000000000000000000000000000000000000000000
output - "yes"
Empty file
output - "no"
File containing zeros but has newline
000000000000000000
000000000000
output - "no"
File containing mixture
0324234-234-324000324200000
output - "no"
-0777 causes $/ to be set to undef, so the whole file is read in one go when you read a "line":
perl -0777ne'print /^0+$/ ? "yes" : "no"' file
or
perl -0777nE'say /^0+$/ ? "yes" : "no"' file # 5.10+
Use \z instead of $ if you want to make sure there's no trailing newline. (A text file should have a trailing newline.)
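For instance, this stricter variant (not from the original answer) prints "no" when the zeros are followed by a trailing newline:
perl -0777nE'say /^0+\z/ ? "yes" : "no"' file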
To print yes if a file contains at least one 0 character and nothing else, and otherwise no, write
perl -0777 -ne 'print /\A0+\z/ ? "yes" : "no"' myfile
I suspect you want a more generic solution than just detecting zeroes, but I haven't got time to write it for you till tomorrow. Anyway, here is what I think you need to do:
1. Slurp your entire file into a single string "s" and get its length (call it "L")
2. Get the first character of the string, using substr(s,0,1)
3. Create a second string that repeats the first character "L" times, using firstchar x L
4. Check the second string is equal to the slurped file
5. Print "No" if not equal else print "Yes"
If your file is big and you don't want to hold two copies in memory, just test character by character using substr(). If you want to ignore newlines and carriage returns, just use "tr" to delete them from "s" before step 2.
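Here is a minimal Perl sketch of those steps, assuming the file fits in memory (save it as, say, allsame.pl, a name chosen only for illustration, and pass the file name as an argument):
use strict;
use warnings;

local $/;                          # slurp mode: read the whole file at once
my $s = <>;                        # step 1: the entire file as one string
$s = '' unless defined $s;         # an empty file reads as undef
$s =~ s/\n\z//;                    # tolerate a single trailing newline
# $s =~ tr/\r\n//d;                # uncomment to ignore all newlines/CRs instead
my $len = length $s;               # step 1: the length "L"

if ($len == 0) {
    print "No\n";                  # empty file: nothing to compare
} else {
    my $first = substr $s, 0, 1;   # step 2: first character
    my $copy  = $first x $len;     # step 3: that character repeated L times
    print $copy eq $s ? "Yes\n" : "No\n";   # steps 4 and 5
}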
Related
I am trying to write a Shell script to edit an input file. Data is structured in the input file as follows:
1000␣␣␣␣␣␣␣␣␣␣␣␣: final time
1000            : print time
0.1             : time step
The alignment is made with whitespaces as emphasized in the first line.
I am currently using sed to replace the parameters (first "word" of each line).
I couldn't find a way to do it without messing the alignment of the labels. I'm open to any suggestions, I don't particularly want to achieve this with sed. It is possible to change the structure of the input file by using tabs for example.
Here's an example of what I would like the script to do:
input file
----------
1000␣␣␣␣␣␣␣␣␣␣␣␣: final time
1000            : print time
0.1             : time step
running the script
------------------
$ script --final-time=100
input file after running the script
-----------------------------------
100␣␣␣␣␣␣␣␣␣␣␣␣␣: final time
1000            : print time
0.1             : time step
The length of the replacement string is not known in advance. It's not fixed and can be up to 6 characters.
With GNU awk:
awk -v value="100" 'BEGIN{FS=OFS=" : "} $2=="final time" {$1=sprintf("%-15s",value)}1' file
Output:
100             : final time
1000            : print time
0.1             : time step
See: 8 Powerful Awk Built-in Variables – FS, OFS, RS, ORS, NR, NF, FILENAME, FNR
This might work for you (GNU sed):
sed -E '/final time/{s/.*/\n&\n100/;:a;s/\n[^:](.*\n)(.)/\2\n\1/;s/\n[^:](.*\n)$/ \n\1/;ta;s/\n//g}' file
In overview, replace the first field by the replacement value a character at a time, making sure to overwrite the original value by spaces if the replacement string is shorter.
If the line contains the required match, prepend a newline to the start of the pattern space and a newline followed by the required replacement string to the end of the pattern space.
Within a loop: if the first character following the first newline is not a colon (:), i.e. the character which marks the end of the first field, replace it with the first character following the second newline and move the first newline to just after that replacement character. If there are no characters left following the second newline and the first character following the first newline is neither a space nor a colon, replace it with a space and again shuffle the first newline along. Otherwise the replacement has succeeded, so remove all the introduced newlines.
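For comparison, here is how the same pad-to-the-original-width idea could be sketched in Perl (the hard-coded 100 and the /final time/ test stand in for real command-line parameters, so treat this as an illustration rather than a finished script):
perl -pe 's/^(\S+)(\s*)(?=:)/sprintf "%-*s", length($1 . $2), 100/e if /final time/' file
The sprintf width is the combined length of the old value and its padding, so the colon stays in the same column as long as the new value fits within that field.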
I have a problem: I have a txt file that has several lines following a three-line pattern that for some reason is unpastable, so I have to describe it. The first line looks like this: ">#1M1U7:00204:00340"; it can have any number after the : but has a fixed number of characters. The next line looks like this: "_F_48_32.0416666667"; it can have any number after the last underscore and can be of different lengths. The last line in the pattern is a DNA sequence. What I want is to join the first two lines together.
I want a script in perl that can fix this for me
Just chomp every first line of the three-line group:
perl -pe 'chomp if 1 == $. % 3' < input > output
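Using the sample strings from the question (the sequence line here is a made-up placeholder), an input group like
>#1M1U7:00204:00340
_F_48_32.0416666667
ACGTACGTACGT
comes out as
>#1M1U7:00204:00340_F_48_32.0416666667
ACGTACGTACGT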
I want to remove duplicate lines from a file, without sorting the file.
Example of why this is useful to me: removing duplicates from Bash's $HISTFILE without changing the chronological order.
This page has a one-liner to do that:
http://sed.sourceforge.net/sed1line.txt
Here's the one-liner:
sed -n 'G; s/\n/&&/; /^\([ -~]*\n\).*\n\1/d; s/\n//; h; P'
I asked a sysadmin and he told me "you just copy the script and it works, don't go philosophising about this", which is fine, but I am asking here since this is a developer forum, and I trust people here might be, like me, suspicious about using things they don't understand:
Could you kindly provide a pseudo-code explanation of what that "black magic" script is doing, please? I tried parsing the incantation in my head but especially the central part is quite hard.
I'll note that this script does not appear to work with my copy of sed (GNU sed 4.1.5) in my current locale. If I run it with LC_ALL=C it works fine.
Here's an annotated version of the script. sed basically has two registers, one is called "pattern space" and is used for (basically) the current input line, and the other, the "hold space", can be used by scripts for temporary storage etc.
sed -n ' # -n: by default, do not print
G # Append hold space to current input line
s/\n/&&/ # Add empty line after current input line
/^\([ -~]*\n\).*\n\1/d # If the current input line is repeated in the hold space, skip this line
# Otherwise, clean up for storing all input in hold space:
s/\n// # Remove empty line after current input line
h # Copy entire pattern space back to hold space
P # Print current input line'
I guess the adding and removal of an empty line is there so that the central pattern can be kept relatively simple (you can count on there being a newline after the current line and before the beginning of the matching line).
So basically, the entire input file (sans duplicates) is kept (in reverse order) in the hold space, and if the first line of the pattern space (the current input line) is found anywhere in the rest of the pattern space (which was copied from the hold space when the script started processing this line), we skip it and start over.
The regex in the conditional can be further decomposed:
^ # Look at beginning of line (i.e. beginning of pattern space)
\( # This starts group \1
[ -~] # Any printable character (in the C locale)
* # Any number of times
\n # Followed by a newline
\) # End of group \1 -- it contains the current input line
.*\n # Skip any amount of lines as necessary
\1 # Another occurrence of the current input line, with newline and all
If this pattern matches, the script discards the pattern space and starts over with the next input line (d).
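A quick sanity check (forcing the C locale as noted above, since the [ -~] range is locale-sensitive; the sample lines are made up):
$ printf 'a\nb\na\nb\nc\n' | LC_ALL=C sed -n 'G; s/\n/&&/; /^\([ -~]*\n\).*\n\1/d; s/\n//; h; P'
a
b
c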
You can get it to work independently of locale by changing [ -~] to [[:print:]]
The code doesn't work for me, perhaps due to some locale setting, but this variant, with [^\n] in place of [ -~], does:
sed -n 'G; s/\n/&&/; /^\([^\n]*\n\).*\n\1/d; s/\n//; h; P'
Let's first translate this by the book (i.e. the sed info page) into something Perlish.
# The standard sed loop
my $hold = "";
while (my $pattern = <>) {
    chomp $pattern;
    $pattern = "$pattern\n$hold";                # G
    $pattern =~ s/(\n)/$1$1/;                    # s/\n/&&/
    if ($pattern =~ /^([^\n]*\n).*\n\1/) {       # /…/
        next;                                    # d
    }
    $pattern =~ s/\n//;                          # s/\n//
    $hold = $pattern;                            # h
    $pattern =~ /^([^\n]*\n?)/; print $1;        # P
}
OK, the basic idea is that the hold space contains all the lines seen so far.
G: At the beginning of each cycle, append that hold space to the current line. Now we have a single string consisting of the current line and all the unique lines which preceded it.
s/\n/&&/: Turn the newline which separates them into a double newline, so that we can treat adjacent and non-adjacent duplicates the same; see the next step.
/^\([^\n]*\n\).*\n\1/: Look through the current text for the following: at the beginning of all the lines (^) look for a first line including its trailing newline (\([^\n]*\n\)), then anything (.*), then a newline (\n), and then that same first line, including its newline, repeated again (\1). If two adjacent lines are the same, the .* in the regular expression will match the empty string, but the two \n will still match thanks to the newline duplication in the preceding step. So basically this asks whether the first line appears again among the other lines.
d: If there is a match, this is a duplicate line. We discard this input, keep the hold space as it is as a buffer of all unique lines seen so far, and continue with the next line of input.
s/\n//: Otherwise, we continue and next turn the double newline back into a single newline.
h: We include the current line in our list of all unique lines.
P: And finally print this new unique line, up to the newline character.
For the actual problem to be solved, here is a simpler solution (or at least it looks simpler) with awk:
awk '!_[$0]++' FILE
In short, _[$0] is a counter of appearances for each unique line. For any line ($0) seen for the second (or later) time, _[$0] >= 1, so !_[$0] evaluates to false and the line is not printed; only its first appearance gets printed.
See https://gist.github.com/ryenus/5866268 (credit goes to a recent forum I visited.)
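Since the rest of this thread leans heavily on Perl, the same idea as the standard Perl one-liner (equivalent logic, not taken from the linked gist):
perl -ne 'print unless $seen{$_}++' FILE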
What I need is:
I need to delete the entire line but need to keep the matching strings.
The matching pattern starts with "Unhandled" and ends with a ":".
I tried the below code which prints the matching pattern, but I need to delete the extra lines from the file.
perl -0777 -ne 'print "Unhandled error at$1\n" while /Unhandled\ error\ at(.*?):/gs' filename
Below is the sample input:
2012-04-09 01:52:13,717 - uhrerror - ERROR - 22866 - /home/shabbir/web/middleware.py process_exception - 217 - Unhandled error at /user/resetpassword/: : {'mod_wsgi.listener_port': '8080', 'HTTP_COOKIE': "__utma=1.627673239.1309689718.1333823126.1333916263.156; __utmz=1.1333636950.152.101.utmgclid=CMmkz934na8CFY4c6wod_R8JbA|utmccn=(not%20set)|utmcmd=(not%20set)|utmctr=non-stick%20kadai%20online; subpopdd=yes; _msuuid_1690zlm11992=FCC09820-3004-413A-97A3-1088EE128CE9; _we_wk_ls_=%7Btime%3A'1322900804422'%7D; _msuuid_lf2uu38ua0=08D1CEFE-3C19-4B9E-8096-240B92BA0ADD; nevermissadeal=True; sessionid=c1e850e2e7db09e98a02415fc1ef490; __utmc=1; __utmb=1.7.10.1333916263; 'wsgi.file_wrapper': , 'HTTP_ACCEPT_ENCODING': 'gzip, deflate'}
The code you gave already provides the requested behaviour.
That said, there's a huge redundant string in your program you can eliminate.
perl -0777nE'say $1 while /(Unhandled error at .*?):/gs' filename
Finally, slurping the entire file seems entirely superfluous.
perl -nE'say $1 if /(Unhandled error at .*?):/g' filename
perl -0777 -i -pe 's/.*?(Unhandled error .*?):.*/$1/g' filename
This will replace error block with matched string in the file.
-0777 : will force Perl to read the whole file in one shot.
-i : means in-place editing of files.
-p : means loop over input records (with -0777 the whole file is a single record), execute the code in single quotes, i.e. 's/.*?(Unhandled error .*?):.*/$1/g', and print the result (the matched strings), which is written back to the file by the -i option.
-e : execute the code given on the command line.
If one match is all you want to keep from the whole string, you could replace the string value with the match afterwards. (i.e. Simply assign the new value)
If you have several matches within the string, the least complicated method may be to store the matches temporarily in an array. Then just discard the original variable if you don't need it anymore.
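A tiny sketch of that idea (here $text is a hypothetical variable holding the slurped file):
my @matches = $text =~ /(Unhandled error at .*?):/g;   # collect every match
$text = join "\n", @matches;                           # keep only the matches, discard the rest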
I would use the -l option to handle line endings (it is less version dependent and prints a newline for each match), and a for loop to print all the matches, not just the first one in $1. There is no need to slurp the file with -0777.
perl -nwle 'print for /Unhandled error at .*?:/g'
Note that with the /g modifier, a capturing parenthesis is not required.
If only one (the first) match is to be printed, /g is redundant and you can just use $1:
perl -nlwe 'print $1 if /(Unhandled error at .*?):/'
I have a file whose every 2nd line is of unequal length. I want to make these lines equal (every 2nd line of the output should be exactly 10 characters long) but with a new identifier on every odd line.
FILE ->
>ZQMK36301EDYQE
ZHZHHEXZZHHZZHHZZXHHHEHHHZZZHHHZHXZHZ
>ZQMK36301EEMJ9
ZZZXHZHHXHHHEZZEEZZHZZZZXEZ
>ZQMK36301EOEM5
ZXHXHZZHEHHHXZEZHXXXHXHHHHXEHHHZHHHH
desired output ->
>ZQMK36301EDYQE
ZHZHHEXZZH
>ZQMK36301EDYQE#2
HZZHHZZXHH
>ZQMK36301EDYQE#3
HEHHHZZZHH
>ZQMK36301EEMJ9
ZZZXHZHHXH
>ZQMK36301EEMJ9#2
HHEZZEEZZH
>ZQMK36301EOEM5
ZXHXHZZHEH
>ZQMK36301EOEM5#2
HHXZEZHXXX
>ZQMK36301EOEM5#3
HXHHHHXEHH
Here, if we take the first line, which is an identifier (>ZQMK36301EDYQE), its 2nd line contains 37 characters. The script should make 3 sequences of equal length (i.e. 10 characters each), and if the remaining characters number fewer than 10, we throw that part away. Each new line of equal length gets an identifier which is the same as the one the part of the sequence came from, but followed by "#" and a number. I want to do this for the whole file. Please help.
Thanks and Best regards,
Vikas
As a one-liner:
perl -nwle '
    $i = 0;
    for my $add (<> =~ /.{10}/g) {
        printf "%s%s\n%s\n", $_, $i++ ? "#$i" : "", $add;
    }' inputfile
-n reads the file line by line and stores each line in $_. -l autochomps the input. We assume the first line of each pair is the header and the second is the data. $i is the counter, so it is reset for each new line pair. The for loop's list is built on the fly by reading the next line with <>, then extracting 10-character-long strings from it with a regex. Then we just print the pieces, making sure not to show the counter while it is still zero (i.e. for the first chunk).
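For readers who prefer a full script over a one-liner, here is an expanded, commented version of the same logic (a sketch; it assumes the input is made of well-formed header/sequence pairs):
#!/usr/bin/perl
use strict;
use warnings;

while (my $header = <>) {
    chomp $header;
    my $seq = <>;                        # the sequence line that follows the header
    last unless defined $seq;
    chomp $seq;
    my $n = 0;
    for my $chunk ($seq =~ /.{10}/g) {   # full 10-character chunks only; any remainder is dropped
        $n++;
        print $n == 1 ? "$header\n" : "$header#$n\n";
        print "$chunk\n";
    }
}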