What is the best way to extract filled data from a static form? - scala

I have some federal PDF forms with filled-in data. Let's say, for example, I-765, and I have the data of this form available in a text format, with duly filled-in details. How can I extract the data from this form with minimum parsing? That is, how can I write a script that identifies the "difference", which in itself is nothing but the filled-in information?
For example, if a line in the empty form contains
SSN: (whitespace) and the actual filled-in form has SSN: ABC!##456
then the filled-in information is nothing but ABC!##456, which is just the difference between the two strings. Is there a known approach I can follow? Any pointers are much appreciated.

If we are talking about Linux tools, then you could try various solutions, like:
$ join -t"=" -a1 -o 0,2.2 <(sort emptyform) <(sort filledform) # "=" is used as delimiter
Or even awk without sorting requirements:
$ awk 'BEGIN{FS=OFS="="}NR==FNR{a[$1]=$2;next}{if ($1 in a) {print;delete a[$1]}} \
END{print "\n Missing fields:";for (i in a) print i,a[i]}' empty filled
Testing:
cat <<EOF >empty
Name=""
Surname=""
Age=""
Address=""
Kids=""
Married=""
EOF
cat <<EOF >filled
Name="George"
Surname="Vasiliou"
Age="42"
Address="Europe"
EOF
join -t"=" -a1 -o 0,2.2 <(sort empty) <(sort filled)
#Output:
Address="Europe"
Age="42"
Kids=
Married=
Name="George"
Surname="Vasiliou"
The awk output:
awk 'BEGIN{FS=OFS="="}NR==FNR{a[$1]=$2;next}{if ($1 in a) {print;delete a[$1]}} \
END{print "\nnot completed fields:";for (i in a) print i,a[i]}' empty filled
Name="George"
Surname="Vasiliou"
Age="42"
Address="Europe"
not completed fields:
Married=""
Kids=""
In the awk version especially, if you remove the print from {if ($1 in a) {print;delete a[$1]}}, the END section will print only the missing fields for you.
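For example, this minimal variant of the command above reports just the fields that were never filled in (here, Kids and Married):
$ awk 'BEGIN{FS=OFS="="}NR==FNR{a[$1]=$2;next}{if ($1 in a) delete a[$1]} \
END{print "Missing fields:";for (i in a) print i,a[i]}' empty filled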
Another alternative, with a nice visual interface, is the diff utility:
$ diff -y <(sort empty) <(sort filled)
Address="" | Address="Europe"
Age="" | Age="42"
Kids="" | Name="George"
Married="" | Surname="Vasiliou"
Name="" <
Surname="" <

Related

Change numbering according to field value by bash script

I have a tab-delimited file like this (without the headers; in the example I use the pipe character as the delimiter for clarity):
ID1|ID2|VAL1
1|1|3
1|1|4
1|2|3
1|2|5
2|2|6
I want to add a new field to this file that changes whenever ID1 or ID2 changes. Like this:
1|1|3|1
1|1|4|1
1|2|3|2
1|2|5|2
2|2|6|3
Is this possible with a one-liner in sed, awk, perl, etc., or should I use a standard programming language (Java) for this task? Thanks in advance for your time.
Here is an awk one-liner:
awk -F\| '$1$2!=a {f++} {print $0,f;a=$1$2}' OFS=\| file
1|1|3|1
1|1|4|1
1|2|3|2
1|2|5|2
2|2|6|3
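One caveat: $1$2 concatenates the two IDs with no separator, so pairs like 1|12 and 11|2 would collide into the same key 112. A small variation that keeps the delimiter inside the key avoids this (same idea, just a safer key):
awk -F\| '{k=$1 FS $2} k!=a {f++} {print $0,f;a=k}' OFS=\| file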
Simple enough with bash, though I'm sure you could figure out a one-line awk:
#!/bin/bash
count=1
while IFS='|' read -r id1 id2 val1; do
    # Can remove the next 3 lines if you're sure you won't have extraneous whitespace
    id1="${id1//[[:space:]]/}"
    id2="${id2//[[:space:]]/}"
    val1="${val1//[[:space:]]/}"
    [[ ( -n $old1 && $old1 -ne $id1 ) || ( -n $old2 && $old2 -ne $id2 ) ]] && ((count+=1))
    echo "$id1|$id2|$val1|$count"
    old1="$id1" && old2="$id2"
done < file
For example
> cat file
1|1|3
1|1|4
1|2|3
1|2|5
2|2|6
> ./abovescript
1|1|3|1
1|1|4|1
1|2|3|2
1|2|5|2
2|2|6|3
Replace IFS='|' with IFS=$'\t' for tab-delimited input.
Using awk (FNR>1 skips the header line; ++a[$1$2]=="1" is true only on the first occurrence of an ID pair, so the counter i is bumped once per group):
awk 'FNR>1{print $0 FS (++a[$1$2]=="1"?++i:i)}' FS=\| file

Awk: Match data in 2 files with duplicate keys

I have 2 files
file1
a^b=-123
a^3=-124
c^b=-129
a^b=-130
and file2
a^b=-523
a^3=-524
a^b=-530
I want to lookup the key using '=' as delimiter and get the following output
a^b^-123^-523
a^b^-130^-530
a^3^-124^-524
When there were no duplicate keys it was easy to do in awk, mapping the first file and looping over the second; however, with the duplicates it's slightly more difficult. I tried something like this:
awk -F"=" '
FNR == NR {
arr[$1 "^" $2] = $2;
next;
}
FNR < NR {
for (i in arr) {
match(i, , /^(.*\^.*)\^([-0-9]*)$/, , ar);
if ($1 == ar[1]) {
if ($2 in load == 0) {
if (ar[2] in l2 == 0) {
l2[ar[2]] = ar[2];
load[$2] = $2;
print i "^" $2
}
}
}
}
}
' file1 file2
This works just fine; however, not surprisingly, it's extremely slow: on a file with about 600K records it ran for 4 hours.
Is there a better and more efficient way to do this in awk or perl? If possible, a one-liner would be a great help.
Thanks.
You might want to look at the join command, which does something very much like what you're doing here, but generates a full database-style join. For example, assuming file1 and file2 contain the data you show above, then the commands
$ sort -o file1.out -t = -k 1,1 file1
$ sort -o file2.out -t = -k 1,1 file2
$ join -t = file1.out file2.out
produces the output
a^3=-124=-524
a^b=-123=-523
a^b=-123=-530
a^b=-130=-523
a^b=-130=-530
The sorts are necessary because, to be efficient, join requires the input file to be sorted on the keys being compared. Note though that this generates the full cross-product join, which appears not to be what you want.
(Note: the following is a very shell-heavy solution, but you could cast it fairly easily into any programming language with dynamic arrays and a built-in sort primitive. Unfortunately, awk isn't one of those, but perl and python are, as is just about every newer scripting language.)
It seems that you really want each instance of a key to be consumed the first time it's emitted in any output. You can get this as follows, again starting with the original contents of file1 and file2.
$ nl -s = -n rz file1 | sort -t = -k 2,2 > file1.out
$ nl -s = -n rz file2 | sort -t = -k 2,2 > file2.out
This decorates each line with the original line number so that we can recover the original order later, and then sorts them on the key for join. The remainder of the work is a short pipeline, which I've broken up into multiple blocks so it can be explained as we go.
join -t = -1 2 -2 2 file1.out file2.out |
This command joins on the key names, now in field two, and emits records like those shown from the earlier output of join, except that each line now includes the line number where the key was found in file1 and file2. Next, we want to re-establish the search order your original algorithm would have used, so we continue the pipeline with
sort -t = -k 2,2 -k 4,4 |
which sorts first on the file1 line number and then on the file2 line numbers. Finally, we need to efficiently emulate the assumption that a particular key, once consumed, cannot be re-used, in order to eliminate the unwanted matches in the original join output.
awk '
BEGIN { OFS="="; FS="=" }
$2 in seen2 || $4 in seen4 { next }
{ seen2[$2]++; seen4[$4]++; print $1,$3,$5 }
'
This ignores every line that references a previously scanned key in either file, and otherwise prints the following
a^b=-123=-523
a^3=-124=-524
a^b=-130=-530
This should be uniformly efficient even for quite large inputs, because the sorts are O(n log n), and everything else is O(n).
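For reference, here is the whole thing assembled from the pieces above into a single pipeline:
$ nl -s = -n rz file1 | sort -t = -k 2,2 > file1.out
$ nl -s = -n rz file2 | sort -t = -k 2,2 > file2.out
$ join -t = -1 2 -2 2 file1.out file2.out |
  sort -t = -k 2,2 -k 4,4 |
  awk '
  BEGIN { OFS="="; FS="=" }
  $2 in seen2 || $4 in seen4 { next }
  { seen2[$2]++; seen4[$4]++; print $1,$3,$5 }
  '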
Try this awk code and see if it is faster than yours (it could be a one-liner if you joined all the lines, but with formatting it is easier to read):
awk -F'=' -v OFS="^" 'NR==FNR{sub(/=/,"^");a[NR]=$0;t=NR;next}
{
    s=$1
    sub(/\^/,"\\^",s)
    for(i=1;i<=t;i++){
        if(a[i]~s){
            print a[i],$2
            delete a[i]
            break
        }
    }
}' file1 file2
With your example it outputs the expected result:
a^b^-123^-523
a^3^-124^-524
a^b^-130^-530
But I think the key here is performance, so give it a try.
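If that is still too slow on 600K records (the loop rescans the surviving entries of file1 for every line of file2, so it is roughly O(n^2) in the worst case), a hash-based approach that queues the duplicate values per key should run in linear time. A sketch of that idea, untested against large data, which prints the same three lines for the sample files:
awk -F'=' '
NR==FNR { q[$1, ++cnt[$1]] = $2; next }   # file1: queue every value under its key
(++used[$1]) <= cnt[$1] {                 # file2: consume the next unused file1 value
    print $1 "^" q[$1, used[$1]] "^" $2
}' file1 file2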

Using the bash sort command with variable-length filenames

I am trying to numerically sort a series of files, output by the ls command, which match either the pattern ABCDE1234A1789.RST.txt or ABCDE12345A1789.RST.txt, by the '789' field.
In the example patterns above, ABCDE is the same for all files; 1234 or 12345 are digits that vary but are always either 4 or 5 digits in length. A1 is the same length for all files, but its value can vary, so unfortunately it can't be used as a delimiter. Everything after the first . is the same for all files. Something like:
ls -l *.RST.txt | sort -k +9.13 | awk '{print $9}' > file-list.txt
will match the shorter filenames but not the longer ones, because of the variable number of characters before the field I want to sort by.
Is there a way to accomplish sorting all the files without first padding the shorter filenames to make them all the same length?
Perl to the rescue! Because everything after the first . is identical across files, the '789' field always starts exactly 11 characters from the end of the name (789.RST.txt is 11 characters long), so substr can slice it out no matter how long the varying digits are:
perl -e 'print "$_\n" for sort { substr($a, -11, 3) cmp substr($b, -11, 3) } glob "*.RST.txt"'
If your perl is more recent (5.10 or newer), you can shorten it to
perl -E 'say for sort { substr($a, -11, 3) cmp substr($b, -11, 3) } glob "*.RST.txt"'
Because of the parts of the filename which you've identified as unchanging, you can actually build a key which sort will use:
$ echo ABCDE{99999,8765,9876,345,654,23,21,2,3}A1789.RST.txt \
| fmt -w1 \
| sort -tE -k2,2n --debug
ABCDE2A1789.RST.txt
_
___________________
ABCDE3A1789.RST.txt
_
___________________
ABCDE21A1789.RST.txt
__
etc.
What this does is tell sort to separate the fields on the character E, then sort on the 2nd field numerically. --debug arrived in coreutils 8.6 and can be very helpful in seeing exactly what sort is doing.
The conventional way to do this in bash is to extract your sort field. Except for the sort command, the following is implemented in pure bash alone:
sort_names_by_first_num() {
    shopt -s extglob
    for f; do
        first_num="${f##+([^0-9])}"     # strip leading non-digits
        first_num=${first_num%%[^0-9]*} # strip everything after the first digit run
        [[ $first_num ]] && printf '%s\t%s\n' "$first_num" "$f"
    done | sort -n | while IFS='' read -r name; do
        name=${name#*$'\t'}
        printf '%s\n' "$name"
    done
}
sort_names_by_first_num *.RST.txt
That said, newline-delimited filenames (as this question seems to call for) are a bad practice: filenames on UNIX filesystems are allowed to contain newlines, so separating names by newlines means your list is unable to contain a substantial subset of the range of valid names. It's much better practice to NUL-delimit your lists. Doing that would look like so:
sort_names_by_first_num() {
    shopt -s extglob
    for f; do
        first_num="${f##+([^0-9])}"     # strip leading non-digits
        first_num=${first_num%%[^0-9]*} # strip everything after the first digit run
        [[ $first_num ]] && printf '%s\t%s\0' "$first_num" "$f"
    done | sort -n -z | while IFS='' read -r -d '' name; do
        name=${name#*$'\t'}
        printf '%s\0' "$name"
    done
}
sort_names_by_first_num *.RST.txt
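A caller can then consume the NUL-delimited stream safely; for example, just to illustrate the read -d '' idiom:
sort_names_by_first_num *.RST.txt |
  while IFS='' read -r -d '' name; do
    printf 'Processing %q\n' "$name"   # the name arrives intact even if it contains newlines
  done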

Remove lines from AWK output

I would like to remove lines that have less than 2 columns from a file:
awk '{ if (NF < 2) print}' test
one two
Is there a way to store these lines in a variable and then remove them with xargs and sed? Something like:
awk '{ if (NF < 2) VARIABLE}' test | xargs sed -i /VARIABLE/d
GNU sed
"I would like to remove lines that have less than 2 columns" - less than 2 means removing the lines that have only one column, which this does by deleting every line that does not contain at least two whitespace-separated fields:
sed -r '/^\s*\S+\s+\S+/!d' file
If you would like to split the input into two files (named "pass" and "fail") based on the condition:
awk '{if (NF > 1 ) print > "pass"; else print > "fail"}' input
If you simply want to filter/remove lines with NF < 2:
awk '(NF > 1){print}' input
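And if the real goal is to modify the file in place (what the xargs/sed idea seems to be reaching for), no variable is needed at all: redirect to a temporary file and move it back, or, with GNU awk 4.1 or newer, use the inplace extension:
$ awk '(NF > 1){print}' test > tmp && mv tmp test
$ gawk -i inplace '(NF > 1){print}' test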

Delete records in a file with Null value in certain fields through Unix

I have a pipe-delimited file (sample below) and I need to delete the records which have a null value in field 2 (email), 4 (mailing-id), or 6 (comm_id). In this sample, rows 2, 3 and 4 should be deleted. The output should be saved to another file. If awk is the best option, please let me know a way to achieve this.
id|email|date|mailing-id|seg_id|comm_id|oyb_id|method
|-fabianz-#yahoo.com|2010-06-23 11:47:00|0|1234|INCLO|1000002|unknown
||2010-06-23 11:47:00|0|3984|INCLO|1000002|unknown
|-maddog-#web.md|2010-06-23 11:47:00|0||INCLO|1000002|unknown
|-mse-#hanmail.net|2010-06-23 11:47:00|0||INCLO|1000002|unknown
|-maine-mei#web.md.net|2010-06-23 11:47:00|0|454|INCLO|1000002|unknown
Here is an awk solution that may help. However, to remove rows 2, 3 and 4 it is necessary to check for null values in fields 2 and 5 only (i.e. not fields 2, 4 and 6 as you have stated). Am I understanding things correctly? Here is the awk to do what you want:
awk -F "|" '{ if ($2 == "" || $5 == "") next; print $0 }' file.txt > results.txt
cat results.txt:
id|email|date|mailing-id|seg_id|comm_id|oyb_id|method
|-fabianz-#yahoo.com|2010-06-23 11:47:00|0|1234|INCLO|1000002|unknown
|-maine-mei#web.md.net|2010-06-23 11:47:00|0|454|INCLO|1000002|unknown
HTH
Steve is right: it is fields 2 and 5 that are missing in the given sample. The email is missing in line two and the seg_id in lines three and four.
This is a slightly simplified version of steve's solution:
awk -F "|" ' $2!="" && $5!=""' file.txt > results.txt
If columns 2, 4 and 6 are the important ones, the solution would be:
awk -F "|" ' $2!="" && $4!="" && $6!=""' file.txt > results.txt
This might work for you (GNU sed). It saves a copy of the line, inserts a newline marker in front of fields 2, 4 and 6, deletes the line if any marker is immediately followed by a | (meaning that field is empty), and otherwise restores the unmarked original:
sed 'h;s/[^|]*/\n&/2;s/[^|]*/\n&/4;s/[^|]*/\n&/6;/\n|/d;x' file.txt > results.txt