I'm interested in using the command line (possibly Perl) to generate a list of all possible IP addresses.
I've done something similar with PHP in the past by using the long2ip function and creating a list from 0 up to the integer 4294967295.
Is there a way to do this in Perl instead, though?
I'm basically just looking for the quickest way to generate a text file that lists all 4,294,967,296 possible IPv4 addresses.
There is no need to use any modules. This is a trivial problem.
for my $i (0..255) {
    for my $j (0..255) {
        for my $k (0..255) {
            for my $l (0..255) {
                printf("%d.%d.%d.%d\n", $i, $j, $k, $l);
            }
        }
    }
}
One-liner time?
perl -MSocket=inet_ntoa -le 'print inet_ntoa(pack "N", $_) for 0..2**32-1'
Source: http://www.perlmonks.org/?node_id=786521 via quick googling.
Perl isn't strictly necessary either, of course. The following generates a quick sed script on the fly and calls it successively.
octets () { sed -n "h;$(for ((i=0; i<256; i++)); do printf "g;s/^/$i./p;"; done)"; }
octets <<<'' | octets | octets | octets | sed 's/\.$//'
The octets function generates 256 copies of its input with a (zero-based) line number and a dot prepended to each. (You could easily append at the end instead, of course.) In the sed scripting language, the h command copies the input to the hold space and g retrieves it back, overwriting whatever we had there before. The C-style for loop and the <<< here string are Bash extensions, so not POSIX shell.
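To see the mechanics on a manageable scale, here is a hypothetical two-value variant of the same function (using sed -n so that only the p-printed copies appear, rather than the pattern space being auto-printed a second time):

```shell
# same trick as octets, but with only the values 0 and 1
octets2 () { sed -n "h;$(for ((i=0; i<2; i++)); do printf "g;s/^/$i./p;"; done)"; }

# two chained stages generate every two-octet combination
printf '\n' | octets2 | octets2 | sed 's/\.$//'
```

This prints 0.0, 1.0, 0.1 and 1.1; each extra stage multiplies the output by the number of values, which is how four chained stages of the full function enumerate all 256^4 addresses.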
I am facing a problem with my shell script (I'm using sh):
I have a file with multiple lines including mail addresses, for example:
abcd
plm
name_aA.2isurnamec@Text.com -> this is a line that satisfies the condition
random efgh
aaaaaa
naaame_aB.3isurnamec@Text.ro -> same (the arrow annotations are not part of the file)
I have used grep to filter the correct mail addresses like this:
grep -E '^[a-z][a-zA-Z_]*.[0-9][a-zA-Z0-9]+@[A-Z][A-Z0-9]{,12}.(ro|com|eu)$' file.txt
I have to write a shell script that checks the file and prints the following (for the above example it would look like this):
"Incorrect:" abcd
"Incorrect:" plm
"Correct:" name_aA.2isurnamec@Text.com
"Incorrect:" random efgh
"Incorrect:" aaaaaa
"Correct:" naaame_aB.3isurnamec@Text.ro
I want to solve this problem using grep or sed, while, if, pipes, etc. I don't want to use lists or other data structures.
I have tried using something like this
grep condition abc.txt | while read -r line ; do
    echo "Processing $line"
    # your code goes here
done
but it only prints the matching lines. I know that I can also print the lines that don't match by using -v with grep, but I want to print the lines in the order they appear in the text file.
I'm having trouble trying to parse each line of the file, or maybe I don't need to parse the lines one by one; I really don't know how to solve it.
If you could help me i would appreciate it.
Thanks
#!/bin/bash
pattern='^[a-z][a-zA-Z_]*\.[0-9][a-zA-Z0-9]+@[A-Z][A-Za-z0-9]{,12}\.(ro|com|eu)$'
while read -r line; do
    if [ "$line" ]; then
        if echo "$line" | grep -E -q "$pattern"; then
            echo "\"Correct:\" $line"
        else
            echo "\"Incorrect:\" $line"
        fi
    fi
done
Invoke like this, assuming the bash script is called filter and the text file, text.txt: ./filter < text.txt.
Note that the full stops in the regular expression are escaped and that the domain name may contain lowercase letters (although I think your regex is too restrictive). Other characters need no escaping because the pattern is in single quotes.
while reads the standard input line by line into $line; the first if skips the empty lines; the second one checks $line against $pattern (-q suppresses grep output).
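For instance, the same logic can be run inline on a two-line sample (with the interval written as {0,12}, since the bare {,12} form is a GNU extension and not portable ERE):

```shell
pattern='^[a-z][a-zA-Z_]*\.[0-9][a-zA-Z0-9]+@[A-Z][A-Za-z0-9]{0,12}\.(ro|com|eu)$'
printf 'abcd\nname_aA.2isurnamec@Text.com\n' |
while read -r line; do
    if echo "$line" | grep -Eq "$pattern"; then
        echo "\"Correct:\" $line"
    else
        echo "\"Incorrect:\" $line"
    fi
done
```

This prints "Incorrect:" abcd followed by "Correct:" name_aA.2isurnamec@Text.com, in the input order.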
I want to use sed to replace the IP addresses in the entries below.
1500.arp1.akaarp.net.00000000.7ac112c6.123456.6 30 IN TXT "198.18.193.23,2509.417\;198.18.193.25,2609.417\;198.18.193.27,2709.417"
1500.arp1.akaarp.net.00000000.7ac112c6.123456.6 30 IN TXT "19.18.19.27,1110.400\;198.18.193.25,2609.417\;198.18.193.27,2709.417"
I tried the following:
sed -i s/"198.18.193.23,2409.417\;198.18.193.25,2609.417\;198.18.193.27,2709.417"/"198.18.19.27,1110.400"/ filename.txt
The above entry works if there is only one ip address in the actual entry. If there are multiple ip addresses separated by regular expressions this doesn't work.
Your question is extremely unclear but if you just want to replace whatever list of IP addresses is between quotes then that's just:
$ sed 's/"[^"]*"/"198.18.19.27,1110.400"/' file
1500.arp1.akaarp.net.00000000.7ac112c6.123456.6 30 IN TXT "198.18.19.27,1110.400"
If that's not what you want then edit your question to clarify your requirements. In particular explain what these 2 sentences in your question mean:
The above entry works if there is only one ip address in the actual entry.
If there are multiple ip addresses separated by regular expressions this doesn't work.
The above was run on this input file:
$ cat file
1500.arp1.akaarp.net.00000000.7ac112c6.123456.6 30 IN TXT "198.18.193.23,2509.417\;198.18.193.25,2609.417\;198.18.193.27,2709.417"
Triple the backslashes and it will work. (By the way, the quotes were misplaced, or you would have to escape them too.)
You have to escape:
the backslashes, once for the shell and once for sed, hence the need for 3 backslashes;
the dots; it still works without escaping them, but only because an unescaped dot matches any character, including a literal dot.
So the quickfix is
sed -i s/"198.18.193.23,2509.417\\\;198.18.193.25,2609.417\\\;198.18.193.27,2709.417"/"198.18.19.27,1110.400"/ file.txt
2 problems:
quotes are completely ignored by sed because the shell consumes them
dots are considered as wildcards
Consequently, this string would be replaced, which is maybe not what you want (I replaced the dots by digits & letters):
1500.arp1.akaarp.net.00000000.7ac112c6.123456.6 30 IN TXT 1987187193723,2509.417\;198718z193v25,2609k417\;198.18.193.27,2709.417
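A minimal demonstration of the wildcard effect, on a made-up string:

```shell
# the unescaped dots match the letters a, b and c, so the substitution
# fires even though the input contains no literal dots at all
echo '198a18b193c23' | sed 's/198.18.193.23/MATCHED/'
```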
A cleaner fix would be
sed -i "s/\"198\.18\.193\.23,2509\.417\\\;198\.18\.193\.25,2609\.417\\\;198\.18\.193\.27,2709\.417\"/\"198.18.19.27,1110.400\"/" file.txt
If you want to randomize output, to replace, say, 1110 by a random number between 1 and 100, do this (no need for shuf):
sed -i "s/\"198\.18\.193\.23,2509\.417\\\;198\.18\.193\.25,2609\.417\\\;198\.18\.193\.27,2709\.417\"/\"198.18.19.27,$(($RANDOM%100+1)).400\"/" file.txt
I have a file of e-mail addresses harvested from Outlook, so that the addresses in the harvested form show up like this:
-A@b.com
-C@d.com
-A@b.com,JOHN DOE, RICHARD ROE,"\O=USERS:SAM"
etc.
What I would like to end up with is a text file that has one validly formed address on each line. So A@b.com would be OK, but "RICHARD ROE" and the "\O=USERS,etc." would not be. Perhaps this could be done with sed or awk?
Here's one way with GNU awk given your posted input file:
$ gawk -v RS='[[:alnum:]_.]+@[[:alnum:]_]+[.][[:alnum:]]+' 'RT{print RT}' file
A@b.com
C@d.com
A@b.com
It just finds simple email addresses, e.g. "bob@the_moon.net" or "Joe.Brown@google.com". Feel free to change the setting of RS if you can figure out an appropriate RE to capture the more esoteric email addresses that are allowed, or post a more representative input file if you have examples. Here's another RE that works by specifying which characters cannot appear in the parts of an email address, rather than those that can:
$ gawk -v RS='[^[:space:][:punct:]]+@[^[:space:][:punct:]]+[.][^[:space:][:punct:]]+' 'RT{print RT}' file
A@b.com
C@d.com
A@b.com
Again it works with your posted sample, but may not with others. Massage to suit...
With other awks you can do the same by setting FS or using match() and looping.
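As a sketch of that portable approach (hypothetical input, same simple-address RE as the first gawk example): match() finds the leftmost address, and the loop prints it and continues after it.

```shell
printf '%s\n' '-A@b.com' '-C@d.com' '-A@b.com,JOHN DOE, RICHARD ROE,"\O=USERS:SAM"' |
awk '{
    s = $0
    # keep printing the leftmost match, then search the remainder
    while (match(s, /[[:alnum:]_.]+@[[:alnum:]_]+[.][[:alnum:]]+/)) {
        print substr(s, RSTART, RLENGTH)
        s = substr(s, RSTART + RLENGTH)
    }
}'
```

Unlike the RS trick, this uses only POSIX awk features (match, RSTART, RLENGTH).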
You can try:
awk -F, '{
    for (i = 1; i <= NF; i++)
        if ($i ~ /@/)
            print $i
}' file
or like this:
awk -F, -f e.awk file
where e.awk is:
{
    for (i = 1; i <= NF; i++)
        if ($i ~ /@/)
            print $i
}
I have a data file that looks like this
15105021
15105043
15106013
15106024
15106035
15105024
15105042
15106015
15106021
15106034
and I need to grep lines that have sequence numbers like 1510603, 1510504
I tried this awk command
awk /[1510603,1510504]/ sourcefile.txt
but it does not work.
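The reason it does not work: in a regular expression, square brackets denote a character class, so /[1510603,1510504]/ matches any line containing a single one of the characters 0, 1, 3, 4, 5, 6 or a comma, not either whole number. A quick check on a number that contains neither sequence:

```shell
# 15105021 contains neither 1510603 nor 1510504, yet the character
# class matches its individual digits, so the line is printed anyway
printf '15105021\n' | awk '/[1510603,1510504]/'
```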
Using egrep with a word boundary on the left-hand side, since the OP wants to match every number that starts with one of the given sequences:
egrep '\b(1510603|1510504)' file
15105043
15106035
15105042
15106034
A shorter awk:
awk '/1510603|1510504/' file
Based on the contents of your file the following should suffice
grep -E '^1510603|^1510504' file
If your grep version does not support the -E flag, try egrep instead of grep
If you insist on awk
awk '/^1510603/ || /^1510504/' file
I think this works:
egrep '1510603|1510504' source
Your question is very poorly stated, but if you want to print all numbers in the file that begin with either 1510603 or 1510504, then you can write this in Perl
perl -ne 'print if /^1510(?:603|504)/' sourcefile.txt
I'm looking for a quick and efficient way to double quote all fields in tab delimited or comma separated text files.
Ideally, this would be a Perl one-liner that I can run from the command-line, but I'm open to any kind of solution.
Use Text::CSV:
perl -MText::CSV -e'
my $c = Text::CSV->new({always_quote => 1, binary => 1, eol => "\n"}) or die;
$c->print(\*STDOUT, $_) while $_ = $c->getline(\*ARGV)' <<'END'
foo,bar, baz qux,quux
apple,"orange",spam, eggs
END
Output:
"foo","bar"," baz qux","quux"
"apple","orange","spam"," eggs"
The always_quote option is the important one here.
If your file does not contain any double quoted strings containing the delimiter, you can use
perl -laF, -ne '$" = q(","); print qq("@F")'
awk -F, -v OFS='","' -v q='"' '{$0=q$0q;$1=$1}7' file
for example, comma sep:
kent $ echo "foo,bar,baz"|awk -F, -v OFS='","' -v q='"' '{$0=q$0q;$1=$1}7'
"foo","bar","baz"
tab sep would be similar.
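For tab-separated input the same idea applies; put a tab inside OFS (awk's -v assignments process the \t escape into a literal tab):

```shell
printf 'foo\tbar\tbaz\n' | awk -F'\t' -v OFS='"\t"' -v q='"' '{$0=q$0q;$1=$1}7'
```

Each field ends up wrapped in double quotes, with the tabs preserved between them.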