I have an ip column in a Postgres database defined as inet.
At the moment, to distinguish IPv4 from IPv6, it looks like I can count colons and/or dots to some degree, but there must be a better way, right?
COLON_COUNT = (length(ip::text) - length(replace(ip::text, ':', '')));
DOT_COUNT = (length(ip::text) - length(replace(ip::text, '.', '')));
What is a clean, good way to determine whether an address is IPv4 or IPv6?
Use the family() function: it returns 4 for IPv4 addresses and 6 for IPv6 addresses.
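For example (my_table is just a placeholder table name; ip is the inet column from the question):
-- family() returns 4 for IPv4 values and 6 for IPv6 values
SELECT ip, family(ip) AS ip_family FROM my_table;

-- e.g. keep only the IPv6 rows
SELECT ip FROM my_table WHERE family(ip) = 6;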
I think if the trailing zeros are not omitted you could just convert the field into a number and check whether it is less than 2^32 (IPv4) or not (IPv6). This means filtering out the colons and dots before the number conversion.
Why doesn't this -bor -bnot give the expected result in PowerShell?
To find the last address in an IPv6 subnet, one needs to do a "binary or" and a "binary not" operation.
The article I'm reading (https://www.codeproject.com/Articles/660429/Subnetting-with-IPv6-Part-1-2) describes it like this:
(2001:db8:1234::) | ~(ffff:ffff:ffff::) = 2001:db8:1234:ffff:ffff:ffff:ffff:ffff
Where | is a "binary or" and
~ is a "binary not"
In powershell however, I try it like:
$mask = 0xffffffff
$someOctet = 0x0000
"{0:x4}" -f ($someOctet -bor -bnot ($mask) )
and I get 0000 instead of ffff
Why is this?
The tutorial is doing a NOT of the entire subnet mask, so ff00 inverts to 00ff (and similarly for longer runs of Fs and 0s); you aren't doing that, so you don't get the same results.
The fully expanded calculation that you show is doing this:
1. (2001:0db8:1234:0000:0000:0000:0000:0000) | ~(ffff:ffff:ffff:0000:0000:0000:0000:0000)
2. (2001:0db8:1234:0000:0000:0000:0000:0000) | (0000:0000:0000:ffff:ffff:ffff:ffff:ffff)
3. = 2001:db8:1234:ffff:ffff:ffff:ffff:ffff
Note how from step 1 to step 2 the NOT inverts the pattern of Fs and 0s, flipping the subnet mask around the boundary between the prefix bits and the host bits.
Then in step 3 the OR leaves the prefix bits of the address unchanged (they are OR'd with zeros, so they are neither zeroed nor turned to ffff) and forces every host bit to 1, which maxes the host part out to the highest IP address within that prefix.
In other words, it makes no sense to do this "an octet at a time". This is a whole IP address (or whole prefix) + whole subnet mask operation.
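You can see both effects directly in PowerShell. This is only an illustration, using the first two 16-bit groups of the example (2001:0db8, mask ffff:0000) packed into ordinary 32-bit integers; note that a literal like 0xffffffff is parsed as the signed 32-bit value -1, i.e. all bits set:
# your original case: the mask already has every bit set,
# so NOT gives 0, and 0 -bor 0 is still 0
$mask = 0xffffffff                          # this literal is the [int] value -1
'{0:x4}' -f (0x0000 -bor (-bnot $mask))     # 0000

# inverting a mask whose host bits are 0 leaves exactly those host bits set,
# and OR-ing with it forces the host bits to 1 while leaving the prefix alone
'{0:x8}' -f (-bnot 0xffff0000)                       # 0000ffff
'{0:x8}' -f (0x20010db8 -bor (-bnot 0xffff0000))     # 2001ffff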
Where the tutorial says:
& (AND), | (OR), ~ (NOT or bit INVERTER): We will use these three bitwise operators in our calculations. I think everybody is familiar -at least from university digital logic courses- and knows how they operate. I will not explain the details here again. You can search for 'bitwise operators' for further information.
If you aren't very familiar with what they do, it would be worth studying them more before trying to apply them to IP subnetting, because you are basically asking why 0 OR (NOT 1) is 0, and the answer is that this is simply how the Boolean OR and NOT operations work.
Edit for your comment
[math]::pow(2,128) is a lot bigger than [decimal]::maxvalue, so I don't think Decimal will do.
I don't know what the recommended way to do it is, but I imagine if you really wanted to do it all within PowerShell with -bnot you'd have to process it with [bigint] (e.g. [bigint]::Parse('20010db8123400000000000000000000', [System.Globalization.NumberStyles]::HexNumber)).
But more likely, you'd do something more long-winded like:
# parse the address and mask into IP address objects,
# which saves you having to expand the short form to the full address yourself
$ip = [ipaddress]::Parse('fe80::1')
$mask = [ipaddress]::Parse('ffff::')
# Convert them into byte arrays, then convert those into BitArrays
$ipBits = [System.Collections.BitArray]::new($ip.GetAddressBytes())
$maskBits = [System.Collections.BitArray]::new($mask.GetAddressBytes())
# ip OR (NOT mask) calculation using BitArray's own methods
$result = $ipBits.Or($maskBits.Not())
# long-winded way to get the resulting BitArray back to an IP
# via a byte array
$byteTemp = [byte[]]::new(16)
$result.CopyTo($byteTemp, 0)
$maxIP = [ipaddress]::new($byteTemp)
$maxIP.IPAddressToString
# fe80:ffff:ffff:ffff:ffff:ffff:ffff:ffff
Why does the System.Net.IPAddress class allow the following strings to be converted to valid IP addresses?
$b = [ipaddress]"10.10.10"
$b.IPAddressToString
#10.10.0.10
$c = [ipaddress]"10.10"
$c.IPAddressToString
#10.0.0.10
$d = [ipaddress]"10"
$d.IPAddressToString
#0.0.0.10
I can see the pattern: the last octet in the string becomes the last octet in the IPAddress object, whatever leading octets appear in the string are used as the leftmost octets of the IPAddress, and zeros fill any unspecified octets in the middle.
But why does it do this? As a user I'd expect it to fail during conversion unless all octets are specified.
Because it allows these conversions, unexpected results like this are possible when checking if a string is a valid IP address:
[bool]("10" -as [ipaddress]) #Outputs True
According to https://msdn.microsoft.com/en-us/library/system.net.ipaddress.parse.aspx?f=255&MSPPError=-2147217396
The number of parts (each part is separated by a period) in ipString determines how the IP address is constructed. A one part address is stored directly in the network address. A two part address, convenient for specifying a class A address, puts the leading part in the first byte and the trailing part in the right-most three bytes of the network address. A three part address, convenient for specifying a class B address, puts the first part in the first byte, the second part in the second byte, and the final part in the right-most two bytes of the network address.
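If you want to reject these short forms when validating input, one workaround (just a sketch; Test-IPv4String is a made-up helper name, not part of .NET) is to require exactly four dot-separated parts in addition to a successful parse:
# hypothetical helper: accept only strings with four dot-separated parts
# that also parse as an IPv4 (not IPv6) address
function Test-IPv4String {
    param([string]$Text)
    $parsed = $null
    return (($Text.Split('.').Count -eq 4) -and
            [System.Net.IPAddress]::TryParse($Text, [ref]$parsed) -and
            ($parsed.AddressFamily -eq 'InterNetwork'))
}

Test-IPv4String '10.10.10.10'   # True
Test-IPv4String '10.10.10'      # False
Test-IPv4String '10'            # False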
I really need your help to understand what dl_type=0x0800 and nw_proto=6 mean in this FlowVisor command:
$ fvctl -f /dev/null add-flowspace dpid1-port4-video-src 1 100 in_port=4,dl_type=0x0800,nw_proto=6,tp_src=9999 video=7
Thank you!
The conventions are the same as for the Open vSwitch ovs-ofctl tool; its manpage has all the information you are looking for. In short, dl_type=0x0800 means the rule matches IPv4 packets and nw_proto=6 narrows that to TCP, so your flowspace matches TCP traffic with source port 9999 arriving on switch port 4.
The plaintext version of the manpage can be found here.
It mentions:
The following shorthand notations are also available:
(...)
tcp Same as dl_type=0x0800,nw_proto=6.
(...)
The full descriptions have more information:
dl_type=ethertype
Matches Ethernet protocol type ethertype, which is specified as
an integer between 0 and 65535, inclusive, either in decimal or
as a hexadecimal number prefixed by 0x (e.g. 0x0806 to match ARP
packets).
nw_proto=proto
ip_proto=proto
When ip or dl_type=0x0800 is specified, matches IP protocol type
proto, which is specified as a decimal number between 0 and 255,
inclusive (e.g. 1 to match ICMP packets or 6 to match TCP packets).
When ipv6 or dl_type=0x86dd is specified, matches IPv6 header
type proto, which is specified as a decimal number between 0 and
255, inclusive (e.g. 58 to match ICMPv6 packets or 6 to match
TCP). The header type is the terminal header as described in
the DESIGN document.
When arp or dl_type=0x0806 is specified, matches the lower 8
bits of the ARP opcode. ARP opcodes greater than 255 are
treated as 0.
When rarp or dl_type=0x8035 is specified, matches the lower 8
bits of the ARP opcode. ARP opcodes greater than 255 are
treated as 0.
When dl_type is wildcarded or set to a value other than 0x0800,
0x0806, 0x8035 or 0x86dd, the value of nw_proto is ignored (see
Flow Syntax above).
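For comparison, ovs-ofctl itself treats the explicit form and the tcp shorthand as the same match (br0 is a placeholder bridge name, and whether fvctl's own parser accepts the shorthand is a separate question, so take this only as an illustration of the equivalence):
# explicit form: IPv4 (dl_type=0x0800) + TCP (nw_proto=6), TCP source port 9999
ovs-ofctl add-flow br0 "in_port=4,dl_type=0x0800,nw_proto=6,tp_src=9999,actions=NORMAL"

# shorthand form: tcp expands to dl_type=0x0800,nw_proto=6
ovs-ofctl add-flow br0 "in_port=4,tcp,tp_src=9999,actions=NORMAL"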
Simple enough: I'd like to split a given IP address into the netid (as defined by the netmask) and the hostid in Perl. Example:
$network = NetAddr::IP->new('192.168.255.255/29') || die "invalid space $_";
Now $network->mask returns 255.255.255.248, but there are no methods in NetAddr::IP to apply the mask and split the address into its netid and hostid portions in the /29 space.
NetAddr::IP::Util mentions the operators to do so, but its documentation is a mess.
At least the netid can be extracted using Net::Netmask:
$netid = Net::Netmask->new('192.168.255.255/29')->base;
This yields 192.168.255.248. Again, there is no method to get the host portion, 0.0.0.7. Maybe the best approach would be to pack/unpack the IPs into 32-bit ints and then simply & them out. That would also make it easier to print the binary representations of IP addresses, which I have found can be really helpful for debugging and documentation purposes.
Use the hostmask() method
$host_wildcard = Net::Netmask->new('192.168.255.255/29')->hostmask;
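If you do want the pack/unpack route floated in the question, a minimal sketch using only the core Socket module might look like this (it assumes the /29 prefix length is known separately, and uses the example address from above):
use Socket qw(inet_aton inet_ntoa);

my $addr   = unpack 'N', inet_aton('192.168.255.255');   # address as a 32-bit integer
my $mask   = 0xFFFFFFFF & ~((1 << (32 - 29)) - 1);        # /29 -> 255.255.255.248
my $netid  = $addr & $mask;
my $hostid = $addr & ~$mask & 0xFFFFFFFF;

printf "netid:  %s\n", inet_ntoa(pack 'N', $netid);       # 192.168.255.248
printf "hostid: %s\n", inet_ntoa(pack 'N', $hostid);      # 0.0.0.7
printf "binary: %032b\n", $addr;                          # handy for debugging/documentation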
The Camel Book (Programming Perl) suggests that v-strings can be used to represent IPv4 addresses:
$ipaddr = 204.148.40.9; # the IPv4 address of oreilly.com
But perldata on the topic of Version Strings states:
Note that using the v-strings for IPv4
addresses is not portable unless you
also use the inet_aton()/inet_ntoa()
routines of the Socket package.
I have two questions:
1) Why is using v-strings for this not portable?
2) What's the "standard" way to convert an IP address from dotted notation to an integer? It seems that unpack "N", <v-string> can cause problems sometimes.
The "standard" way to get the encoded form is inet_aton, which handles dotted IP addresses as well as hostnames -- but what do you need it for? More often than not the best idea is just to skip all of the low-level interfaces that deal with such things and use, e.g., IO::Socket.
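For example, the encoded form from inet_aton and a plain big-endian integer are easy to convert between (a small sketch; Socket is a core module and 1.2.3.4 is just an example address):
use Socket qw(inet_aton inet_ntoa);

my $ip = '1.2.3.4';

# plain unsigned big-endian integer form
my $int = unpack 'N', pack 'C4', split /\./, $ip;
printf "%s -> %u (0x%08x)\n", $ip, $int, $int;            # 1.2.3.4 -> 16909060 (0x01020304)

# the packed form that socket functions expect yields the same integer when unpacked
printf "inet_aton agrees: %u\n", unpack 'N', inet_aton($ip);

# and back again
printf "round trip: %s\n", inet_ntoa(pack 'N', $int);     # 1.2.3.4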
If you're looking to convert to integer, as you say, and not to the form that socket functions expect (they're similar concepts in C, but less so in Perl), then you can go ahead and use pack just fine as long as you're consistent -- the part that's unportable is the format that socket functions accept. For example, unpack "N", pack "C4", split /\./, "1.2.3.4" will get you a nice unsigned big-endian representation of that address (in the form of the number 16909060 == 0x01020304).