I'm reading Part Two of the 2htdp book, and I'm curious about the behavior of the read-words/line function that lives under 2htdp/batch-io.
I created two files, the first with a \n at the end of the file and the second without it, like this:
echo -ne "first line\n\nthird line\n" > file1.txt
echo -ne "first line\n\nthird line" > file2.txt
If I read each of the files with read-words/line, this is the result:
(read-words/line "doc/book/htdp2/data/file1.txt")
; => '(("first" "line") () ("third" "line"))
(read-words/line "doc/book/htdp2/data/file2.txt")
; => '(("first" "line") () ("third" "line"))
It looks like the function isn't preserving the information from the last newline character in the first file.
Is that behavior intended?
I'm running DrRacket 6.2.1 on a GNU/Linux box.
Thanks,
I'd say yes. The intention of read-words/line is to produce a list of words for each line read. A line is normally a sequence of characters ending in a newline; the last line of a file can end in end-of-file instead of a newline. If you try this example:
echo -ne "first line\n\nthird line\n\n" > file1.txt
I expect the last line to produce an empty list of words.
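That is, I would expect (untested here, but consistent with the behavior above):
(read-words/line "doc/book/htdp2/data/file1.txt")
; => '(("first" "line") () ("third" "line") ())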
Related
This is for a Bash terminal script on a Mac.
I'm trying to add text ("!!XX!!") into a group of tab-delimited .txt files in a folder, but I only want to add it at the 4th and all following occurrences of the tab in each .txt file, and then only if those cells have text in them. So the end result would be something like this (assuming the 7th cell/field/bit of info is blank). Turn this:
text01
text02
text03
text04
text05
text06
... into this:
text01 [TAB] text02 [TAB] text03 [TAB] text04!!XX!! [TAB] text05!!XX!! [TAB] text06!!XX!! [TAB]
The text marker "!!XX!!" is so that another script in a different system can run on the file and perform special system-compatible/custom line formatting at each occurrence of "!!XX!!", but I don't want to populate the first three tab-delimited fields (because it's not needed there) or the empty fields (because it's not wanted there).
I'm already replacing each line return with a tab, so it's possible to do it there, though my preference is to do it later on the tab-delimited text because of weird issues with the line returns/formatting coming in from .rtf files. Below is what I use to replace each line return with a TAB (and, yes, that is an actual line return and tab in there, which seems to work best because... Macs?):
perl -pi -w -e 's/
/ /g' *.txt;
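An equivalent form with escape sequences (untested here), in case the literal newline and tab prove awkward to copy around:
perl -pi -w -e 's/\n/\t/g' *.txt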
Thanks in advance :)
This post assumes an input file that has lines with tab-separated fields, where each field starting from (and including) the fourth needs to be edited if it has something in it.
One way
perl -F"\t" -wlane'
for (3..$#F) { $F[$_] .= "!XX!" if defined $F[$_] }; print join("\t", @F)
' file
(In the tcsh shell you need to escape those ! with a backslash.) Once you've tested enough, add the -i switch to change the input file in place (-i.bak keeps a backup).
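For example, applied in place to every .txt file at once (a sketch; the .bak copies keep the originals until the output is verified):
perl -i.bak -F"\t" -wlane'for (3..$#F) { $F[$_] .= "!XX!" if defined $F[$_] }; print join("\t", @F)' *.txt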
This uses Perl's -a switch to break input lines on whatever is given with the -F switch (or on whitespace by default); the resulting array is in @F. See the switches in perlrun.
Then it iterates from the fourth field to the last. I use the syntax $#ary for the index of the last element of the array @ary.
I don't know what counts for cells that "have text in them" so above I test a field for defined-ness; thus this will append even for an empty string. Adjust as suitable.
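If "have text in them" should mean non-empty, a length test (a hedged variation on the same loop) would skip empty-string fields as well:
for (3..$#F) { $F[$_] .= "!XX!" if length $F[$_] }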
Or use a regex, which allows more flexibility here. For example,
for (3..$#F) { $F[$_] =~ s/.+\K/!XX!/ }
This matches all characters and then adds !XX! (keeping what it matched, by the \K assertion). Using a regex allows (and requires) you to specify more precisely what is accepted; the pattern shown matches even whitespace alone, but not an empty string. To leave whitespace-only fields untouched, and to strip trailing spaces if any:
for (3..$#F) { $F[$_] =~ s/.+\S\K\s*/!XX!/ };
Again, adjust to your details.
I don't quite understand the discussion of newlines and what is wanted of them; the above one-liner goes line by line. If that's not what you need please clarify. I don't have Macs to test, so I can't comment on all that.
A self-contained example for ready testing and tweaking
echo "t1\tt2\tt3\tt4\t\tt6 \t " |
perl -F"\t" -wlanE'for (3..$#F) { $F[$_] =~ /.+\S\K\s*/XX/ } say for #F'
where I print each field on a separate line for easier inspection. The last tab in input is followed by trailing spaces only -- this results in an empty field, but with no text marker added (as asked for in a comment).
with GNU sed
$ echo text{01..07}$'\t' | sed -E 's/([^\t]+)(\t|$)/\1!!xx!!/4g'
text01 text02 text03 text04!!xx!! text05!!xx!! text06!!xx!! text07!!xx!!
or
$ echo text{01..07}$'\t' | sed -E 's/\t([^\t]+)/\1!!xx!!/3g'
Assuming each text file contains 7 lines, you can do
paste -s *.txt | awk '
BEGIN {FS=OFS="\t"}
{for (i=4; i<=NF; i++) if ($i != "") $i = $i "!!XX!!"; print}
'
Here is an awk:
echo text{01..10}$'\t' |
awk -v OFS=$'\t' '{for(i=1;i<=NF;i++) printf "%s%s", $i, i>=4 ? "XXX\t" : i<NF ? OFS : ORS }'
With perl, I would do this:
echo text{01..10}$'\t' |
perl -lpE '$cnt=0; s/\h+/++$cnt>=4 ? "XXX\t" : "\t"/ge;'
Both print:
text01 text02 text03 text04XXX text05XXX text06XXX text07XXX text08XXX text09XXX text10XXX
I have several thousand text files which might start with
"
Start of text
but not all of them have the same number of line breaks and not all of them have "
I would like to remove " (if it exists) and any line breaks, if any.
(and the ending too but I'll probably figure it out if you show me how to remove it from the start)
End of file...
"
perl is also ok
My attempt would be something like this with the fish shell; awk is probably more performant, though:
if head -1 $file | grep -q '"'
    sed -i 1d $file
end
# repeat to strip up to three leading blank lines
if head -1 $file | grep -q '^[[:space:]]*$'
    sed -i 1d $file
end
if head -1 $file | grep -q '^[[:space:]]*$'
    sed -i 1d $file
end
if head -1 $file | grep -q '^[[:space:]]*$'
    sed -i 1d $file
end
that might actually work I'm going to try it
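A compacted variant of the same idea in a single sed call (untested; assumes GNU sed), deleting a leading quote-only line and any blank lines that follow it:
sed -i '1{/^"$/d}; /./,$!d' $file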
The simplest way to do this is a 2-pass approach where on the first pass you figure out the beginning and ending line numbers for the "good" lines and on the second you print the lines between those numbers:
awk '
NR==FNR { if (NF && !/^"$/) { if (!beg) beg=NR; end=NR } next }
(beg <= FNR) && (FNR <= end)
' file file
For example given this input:
$ cat file
"
Start of text
but not all of them have the same number of line breaks and not all of them have "
I would like to remove " (if it exists) and any line breaks, if any.
(and the ending too but I'll probably figure it out if you show me how to remove it from the start)
End of file...
"
We can do the following using any awk in any shell on every UNIX box:
$ awk 'NR==FNR{if (NF && !/^"$/) {if (!beg) beg=NR; end=NR} next} (beg <= FNR) && (FNR <= end)' file file
Start of text
but not all of them have the same number of line breaks and not all of them have "
I would like to remove " (if it exists) and any line breaks, if any.
(and the ending too but I'll probably figure it out if you show me how to remove it from the start)
End of file...
You can use ed to do it in a single pass, too:
Something like
printf '%s\n' '1g/^"$/.,/^./-1d' '$g/^"$/?^.?+1,$d' w | ed -s "$file"
Translated: If the first line is nothing but a quote, delete it and any following empty lines. If the last line is nothing but a quote, delete all preceding empty lines and it. Finally write the file back to disk.
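Since the question mentions several thousand files, a wrapper loop (a sketch; adjust the glob to your layout) could apply the same ed script to each:
for file in /path/to/files/*.txt; do
    printf '%s\n' '1g/^"$/.,/^./-1d' '$g/^"$/?^.?+1,$d' w | ed -s "$file"
done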
This might work for you (GNU sed):
sed '1{/^"$/d};/\S/!d;:a;${/^"$/Md};/\S/{n;ba};$d;N;ba' file
Delete the first line if it contains a single ".
Delete all empty lines from the start of the file.
Form a loop for the remainder of the file.
Delete the last line(s) if it/they contain a single ".
If the current line(s) is/are not empty, print it/them, fetch the next and repeat.
If the current line(s) is/are the last and empty, delete it/them.
The current line(s) is/are empty so append the next line and repeat.
N.B. This is a single pass solution and allows for empty lines within the body of the file.
Alternative, memory intensive:
sed -Ez 's/^"?\n+//;s/\n+("\n)?$/\n/' file
In addition to the two-pass processing, here's a one-pass:
awk '!/^"*$/{print b $0;f=1;b=""} f&&/^"*$/{b=b $0 ORS}' file
The program consists of two small parts:
Whenever there's content (a line that contains more than just "), print any buffered lines and the current input line, set a flag (f) that content has started, and clear the buffer.
If content has started (f) but the current line has no content, we may have reached the end, so buffer these empty lines. Later, the first rule will print them, or they will be discarded at EOF.
I want to grep some strings spread across multiple lines within some begin and end patterns.
Example:
MediaHelper->fetchStrings( names => [ // here a new line may or may not be
'ubp-firstrun_heading',
'firstrun_text',
'_firstrun-or-start_search',
'installed' // may end here also );
]);
Using perl or grep, how can I get the list of 4 strings here? The begin pattern is MediaHelper->fetchStrings( names => [ and the end pattern is );.
Or any other suggestion using other commands like grep, sed, or awk?
Try this:
sed -n '/MediaHelper->fetchStrings( names =>/,/);/ p' <yourfile>
Or, if you want to skip the delimiting lines, this:
sed -n '/MediaHelper->fetchStrings( names =>/,/);/ {/MediaHelper->fetchStrings( names =>/b; /^);/b; p}' <yourfile>
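If the goal is just the four quoted names themselves, the matched block can be piped through grep -o (available in GNU and BSD grep) to isolate them:
sed -n '/MediaHelper->fetchStrings( names =>/,/);/ p' <yourfile> | grep -o "'[^']*'"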
If I understand your question, you need to match all strings in all lines (and not just the MediaHelper thing).
If this is the case, then sed is the right tool, because it is by default line-oriented.
In our case, if you want to match the string in every line:
sed "s/.*\('.*'\).*/\1/" <your_file>
Hope it helps
Edit: To be more descriptive: first we match the whole line (that's the first and the last .*), and we enclose in parentheses the part of the line we want to keep, which in our case is everything inside single quotes. The \1 before the last delimiter denotes that we want to print the first (in our case also the only) parenthesized group.
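For instance, on one of the sample lines (an illustrative run; the output keeps the quotes, since they sit inside the capturing parentheses):
echo "    'firstrun_text'," | sed "s/.*\('.*'\).*/\1/"
# 'firstrun_text'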
Just process the file in slurp mode instead of line by line:
perl -0777 -ne 'print $1 while m{MediaHelper->fetchStrings\(\s*names\s*=>\s*\[(.*?)\]}gs' file
Explanation:
Switches:
-0777: Slurp mode instead of line by line
-n: Creates a while(<>){..} loop over each input record (with -0777, the whole file is read as a single record).
-e: Tells perl to execute the code on command line.
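To go one step further and print each captured name on its own line, a hedged extension of the same slurp approach (\x27 stands for a single quote, to sidestep shell-quoting issues):
perl -0777 -nE 'if (m{fetchStrings\(\s*names\s*=>\s*\[(.*?)\]}s) { say for $1 =~ /\x27([^\x27]+)\x27/g }' file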
I have a huge list of locations in this form in a text file:
ar,casa de piedra,Casa de Piedra,20,,-49.985133,-68.914673
gr,riziani,Ríziani,18,,39.5286111,20.35
mx,tenextepec,Tenextepec,30,,19.466667,-97.266667
Is there any way with command line to remove everything that isn't between the first and second commas? For example, I want my list to look like this:
casa de piedra
riziani
tenextepec
with Perl
perl -F/,/ -ane 'print $F[1]."\n"' file
Use cut(1):
cut -d, -f2 inputfile
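For instance, on the first sample line:
echo 'ar,casa de piedra,Casa de Piedra,20,,-49.985133,-68.914673' | cut -d, -f2
# casa de piedra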
With perl:
perl -pe 's/^.*?,(.*?),.*/$1/' filename
Breakdown of the above code
perl - the command to use the perl programming language.
-pe - flags.
e means "run this as perl code".
p means:
Set $_ to the first line of the file (given by filename)
Run the -e code
Print $_
Repeat from step 1 with the next line of the file
what -p actually does behind the scenes is best explained here.
s/^.*?,(.*?),.*/$1/ is a regular expression:
s/pattern/replacement/ looks for pattern in $_ and replaces it with replacement
.*? basically means "anything" (it's more complicated than that but outside the scope of this answer)
, is a comma (nothing special)
() capture whatever is in them and save it in $1
.* is another (slightly different) "anything" (this time it's more like "everything")
$1 is what we captured with ()
so the whole thing basically says to search in $_ for:
anything
a comma
anything (save this bit)
another comma
everything
and replace it with the bit it saved. This effectively saves the stuff between the first and second commas, deletes everything, and then puts what it saved into $_.
filename is the name of your text file
To review, the code goes through your file line by line, applies the regular expression to extract your needed bit, and then prints it out.
If you want the result in a file, use this:
perl -pe 's/^.*?,(.*?),.*/$1/' filename > out.txt
and the result goes into a file named out.txt (created in whatever directory your shell is currently in). The > redirection tells the shell to write the command's output to a file instead of the screen.
Also, if it isn't crucial to use the command line, you can just import into Excel (it's in CSV format) and work with it graphically.
With awk:
$ awk -F ',' '{ print $2 }' file
I have a file that contains sequence data, where each new paragraph (separated by two blank lines) contains a new sequence:
#example
ASDHJDJJDMFFMF
AKAKJSJSJSL---
SMSM-....SKSKK
....SK
SKJHDDSNLDJSCC
AK..SJSJSL--HG
AHSM---..SKSKK
-.-GHH
and I want to end up with a file looking like:
ASDHJDJJDMFFMFAKAKJSJSJSL---SMSM-....SKSKK....SK
SKJHDDSNLDJSCCAK..SJSJSL--HGAHSM---..SKSKK-.-GHH
each sequence is the same length (if that helps).
I would also be looking to do this over multiple files stored in different directories.
I have just tried
sed -e '/./{H;$!d;}' -e 'x;/regex/!d' ./text.txt
however this just deleted the entire file :S
any help would be appreciated - it doesn't have to be in sed; if you know how to do it in perl or something else, that's also great.
Thanks.
All you're asking to do is convert a file of blank-line-separated records (RS), where each field is separated by newlines, into a file of newline-separated records where each field is separated by nothing (OFS). Just set the appropriate awk variables and recompile the record:
$ awk '{$1=$1}1' RS= OFS= file
ASDHJDJJDMFFMFAKAKJSJSJSL---SMSM-....SKSKK....SK
SKJHDDSNLDJSCCAK..SJSJSL--HGAHSM---..SKSKK-.-GHH
awk '
/^[[:space:]]*$/ {if (line) print line; line=""; next}
{line=line $0}
END {if (line) print line}
'
perl -00 -pe 's/\n//g; $_.="\n"'
For multiple files:
# adjust your glob pattern to suit,
# don't be shy to ask for assistance
for file in */*.txt; do
newfile="/some/directory/$(basename "$file")"
perl -00 -pe 's/\n//g; $_.="\n"' "$file" > "$newfile"
done
A Perl one-liner, if you prefer:
perl -nle 'BEGIN{$/=""};s/\n//g;print $_' file
The $/ variable is the equivalent of awk's RS variable. When set to the empty string ("") it causes two or more consecutive empty lines to be treated as one empty line. This is the so-called "paragraph mode" of reading. For each record read, all newline characters are removed. The -l switch adds a newline to the end of each output string, thus giving the desired result.
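A quick way to see paragraph mode in action on a toy input (illustrative, not from the original answer):
printf 'AAA\nBBB\n\n\nCCC\nDDD\n' | perl -nle 'BEGIN{$/=""};s/\n//g;print $_'
# AAABBB
# CCCDDD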
Just try to find the double line breaks (\n\n or \r\n\r\n) and replace those first with a special marker like :$:.
After that, replace every remaining line break with an empty string to get each record on one line.
Next, replace the special marker with a simple line break :)
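A concrete (untested) rendering of those three steps as a single perl one-liner, plus a final fix-up for the trailing newline, assuming the marker :$: never occurs in the data:
perl -0777 -pe 's/\n{2,}/:\$:/g; s/\n//g; s/:\$:/\n/g; $_ .= "\n" unless /\n\z/' file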