How to use IPv6 for kubernetes service ip space? - kubernetes

I work in a company where almost all private ipv4 space is already used, so using 10.254.0.0/16 for service address space is a non-starter. I have carved out a /64 of ipv6 space that I can use, but I can't seem to make it work.
Here's my apiserver config:
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=::"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"
# Address range to use for services
# KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=fc00:dead:beef:cafe::/64"
# Add your own!
KUBE_API_ARGS=""
But when I try to start kube-apiserver.service I get an error about "invalid argument". Is it possible to use IPv6 for kubernetes?

I don't think IPv6 is fully supported. I don't think there is a strong motivation among the developers of the project to add IPv6 support, because the largest group of contributors is Google employees. Google Compute Engine (and thus Google Container Engine) doesn't support IPv6, so it wouldn't benefit Google directly to pay their employees to support IPv6. Best thing to do would probably be to pull in employees of companies that run their hosted product on AWS (as AWS has IPv6 support) such as RedHat, or try to contribute some of the work yourself.
From the linked PR, it looks like Brian Grant (Google) is, for whatever reason, somewhat interested and able to contribute IPv6 support. He'd probably be a good resource to query if you're interested in contributing this functionality to Kubernetes yourself.

AWS has already enabled IPv6 by default for almost all of their major services --
https://aws.amazon.com/blogs/aws/new-ipv6-support-for-ec2-instances-in-virtual-private-clouds/
Recently, IPv6 support has been accepted and is being added one piece at a time; in fact, the Pod implementation has been done so far. k8s is moving towards Services and then the remaining issues.
Currently, the blocker issues are still open, with good use cases --
https://github.com/kubernetes/kubernetes/issues/27398

Related

Using Ignite TcpDiscoveryKubernetesIpFinder in a purely IPv6 environment

Are there any known issues with running the org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder in a purely IPv6 environment? I looked here and it mentions there may be issues with clusters becoming detached but does not offer any specifics. Any information would be appreciated, thanks.
I'm not aware of any IPv6 problems per se, so if your network is configured correctly I would expect it to work.
The problem we typically see when IPv6 is enabled is that it's possible to route to the IPv4 address but not the IPv6 address -- which is why setting preferIPv4Stack works.

Service to Allow for IP Discover Across Subnets

I am working on an embedded software product that runs on an Ubuntu edge computer with multiple network ports.
The software allows the user to change the IP address of the ports via a locally hosted web interface.
In the scenario that a customer changed an IP on one of our devices, but then forgets their setting I am looking for an easy strategy to walk them through detecting the IP.
Ideally this tool would be usable by non-sophisticated customers (we don’t want to walk them through using Wireshark or command line tools).
Is there a service we can setup on our machine that will broadcast its identity across subnets using another protocol like UDP or EtherNet/IP? Then a simple tool the client could install on their computer to ‘scan’ for our devices?
The edge computers also have USB ports if it is easier to broadcast an identity there.
Changing a local IP address to something invalid (=not compatible with its local subnet) generally disables all L3 communication. Limited broadcasts (to 255.255.255.255) still work, but answering to them by unicast most likely won't. The same goes for multicasting - but you could use that for discovery both ways.
Also, the common link-level discovery protocols (like LLDP or CDP) still work since they don't rely on IP.
However, all that is limited to the connected L2 segment at most. Discovery across subnets isn't possible without some kind of infrastructure (discovery sensors, central server, multicast routing, ...). A reasonable way would be dynamic DNS but then again, that requires IP to work.
I think you'd need to take a step back and reevaluate your design. One way would be to verify a user's reconfiguration before it becomes permanent. For instance, you could have a user change the IP setup and then forward the session to the new IP address. If the session isn't continued within five minutes or so on the new address, it reverses to the previous config.
Additionally, some kind of out-of-band management could be useful.
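The "verify the reconfiguration before it becomes permanent" design from the answer above, as an added example that is not part of the original post: the new address is applied provisionally and rolls back unless a session on the new address confirms it within the window. The class name, the injectable `clock` and the 300-second default are assumptions for illustration.

```python
import ipaddress
import time
from typing import Optional


class ConfirmedIpChange:
    """Two-phase IP reconfiguration (hypothetical helper, not a product API):
    propose() applies a new address and starts a rollback timer; the change only
    becomes permanent when confirm() is called from a session that actually
    reached the device on the new address."""

    def __init__(self, current_ip: str, timeout_s: int = 300, clock=time.monotonic):
        self.active = ipaddress.ip_interface(current_ip)
        self.previous: Optional[ipaddress._BaseAddress] = None
        self.deadline: Optional[float] = None
        self.timeout_s = timeout_s
        self.clock = clock  # injectable so the timeout can be simulated offline

    def propose(self, new_ip: str) -> str:
        """Apply new_ip provisionally and remember the old one for rollback."""
        candidate = ipaddress.ip_interface(new_ip)  # raises ValueError on garbage
        self.previous, self.active = self.active, candidate
        self.deadline = self.clock() + self.timeout_s
        return str(self.active)

    def expire_if_due(self) -> bool:
        """Watchdog hook: revert to the previous address once the window lapses."""
        if self.deadline is not None and self.clock() >= self.deadline:
            self.active, self.previous, self.deadline = self.previous, None, None
            return True
        return False

    def confirm(self, session_local_ip: str) -> bool:
        """Make the pending change permanent. Only a session whose local endpoint
        is the *new* address counts as proof of reachability; returns False if
        nothing is pending, the window expired, or the session used another IP."""
        self.expire_if_due()
        if self.deadline is None or self.active.ip != ipaddress.ip_address(session_local_ip):
            return False
        self.deadline = None
        self.previous = None
        return True
```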

Automatically update a domain with an IP address

Over the years, I used No-IP to link a domain to my IP address, and then used No-IP's DUC (Dynamic Update Client) to update my IP, so that the domain will always point to my IP.
That's very handy for running dedicated game servers.
Is there a DUC-equivalent for Google Cloud DNS?
In essence - No - there isn't :(
Unless you're using Google Domains for your domain hosting; then yes - they support just the thing.
Cloud DNS doesn't have that functionality. There are several workarounds like reserving a public IP for your VM, which in my opinion would be the best way to do it. Unless your VM gets deployed using Deployment Manager, then it may require some more scripting.
Similar questions have been raised on Stackoverflow here and here which you might find helpful.
If you're running Linux here you'll find a complete script how to update DNS records after a machine startup.

How to allow egress to maps.googleapi.com

I have microservices running on GKE clusters. They need to communicate with https://maps.googleapis.com/ . All these microservices are running in a cluster which is created in a custom network. Now I want to know: will I need to allow egress for these clusters (nodes), or since it is also a GCP service, is communication allowed by default? If I need to allow a firewall rule for egress, how can I do that for a domain name instead of an IP? I read that the IP may change for maps.googleapis.com. Can you please help me.
GKE works on the same infrastructure as Google Compute Engine.
Unfortunately, it is not possible to add firewall rules with the destination defined as a DNS name.
Although the Google Maps API is a part of Google services, there is no template or anything like that to add it as an exception to the firewall, and the firewall does not know anything about Google services. If you block all egress traffic, access to all APIs will be blocked too.
So, you need to get the IP ranges of the API somehow and add them to the firewall.
I found only one way to get all ranges (using DNS names) here. But you should have:
the Google Maps APIs Premium Plan or a previous Google Maps APIs for Work or Google Maps for Business license.
If you have it, just go to that link where you can get a current list of domains related to Google Maps API.
If not, you can try to allow traffic to all addresses which Google publishes as its CIDR blocks; it might help.
You can get it by nslookup command:
nslookup -q=TXT _spf.google.com 8.8.8.8
And then get all "include" names from the answer, like:
nslookup -q=TXT _netblocks.google.com 8.8.8.8
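The "follow every include: from _spf.google.com and collect the ranges" procedure above, as an added example that is not part of the original answer. The TXT records in SAMPLE_TXT are constructed stand-ins (documentation prefixes, not Google's real ranges); the `nslookup_txt` resolver mirrors the answer's command and needs network access, while `stub_lookup` lets the walk run offline.

```python
import re
import subprocess
from typing import Callable, Dict, List, Set

# Constructed stand-in for DNS TXT answers (not real Google data). The real
# _spf.google.com includes _netblocks.google.com, _netblocks2.google.com and
# _netblocks3.google.com; the CIDRs below are documentation prefixes.
SAMPLE_TXT: Dict[str, List[str]] = {
    "_spf.google.com": ["v=spf1 include:_netblocks.google.com include:_netblocks2.google.com ~all"],
    "_netblocks.google.com": ["v=spf1 ip4:203.0.113.0/24 ip4:198.51.100.0/25 ~all"],
    "_netblocks2.google.com": ["v=spf1 ip6:2001:db8:abcd::/48 ~all"],
}

def stub_lookup(name: str) -> List[str]:
    """Resolver used for offline runs: answers from the constructed table."""
    return SAMPLE_TXT.get(name, [])

def nslookup_txt(name: str, server: str = "8.8.8.8") -> List[str]:
    """Live resolver mirroring the answer's command; parses nslookup's
    'text = "..."' output lines. Requires network access."""
    out = subprocess.run(["nslookup", "-q=TXT", name, server],
                         capture_output=True, text=True, check=False).stdout
    return re.findall(r'text = "([^"]*)"', out)

def spf_netblocks(name: str, lookup: Callable[[str], List[str]] = stub_lookup,
                  max_depth: int = 10) -> Dict[str, List[str]]:
    """Walk include: chains from an SPF record and collect ip4:/ip6: blocks.
    The seen set stops include loops and max_depth bounds recursion (SPF itself
    caps lookups at 10), so a malformed record cannot make the walk spin."""
    seen: Set[str] = set()
    found: Dict[str, List[str]] = {"ip4": [], "ip6": []}

    def walk(n: str, depth: int) -> None:
        if n in seen or depth > max_depth:
            return
        seen.add(n)
        for txt in lookup(n):
            if not txt.startswith("v=spf1"):
                continue
            for term in txt.split():
                if term.startswith("include:"):
                    walk(term[len("include:"):], depth + 1)
                elif term.startswith(("ip4:", "ip6:")):
                    fam, cidr = term.split(":", 1)
                    found[fam].append(cidr)

    walk(name, 0)
    return found
```

The SPF netblocks describe the ranges Google sends mail from, not an authoritative list of API front ends, so the resulting egress allowlist is broad and should be regenerated on a schedule rather than hard-coded once.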

Get Azure public IP address from deployed app

I'm implementing the PASV mode in an FTP server, and I send to the client the IP address and port of the data end point. This is stupid because the IP is actually where the client is already connecting, so there are two options:
1. How could I get the public IP address from a given instance? Not the VIP, but the public one.
2. How could I get the original target IP address that the user used from a Socket object? Considering routers and load balancers in the middle :P
An answer to any of these questions would do, although there is another way that could work... may I get the public IP address doing a DNS look up of myapp.cloudapp.net?
A fourth option would be use the Azure Management API library... but, too much trouble :P.
Cheers.
Not sure if you ever figured this out, but here's my take on it. The individual role instances are all behind the Windows Azure load balancer and have no idea what the original, outward-facing IP address is. Also, there's no Management API call that returns an IP address - Get Deployment returns the URL but not the IP address. I think the only option is going to be a DNS lookup.
Having said that: I don't think you can host a passive ftp server in your role instance (at least not elegantly). You may open up to 25 input endpoints on your role (up from 5 - see my recent blog post about this update), but there's manual work involved in the configuration. I don't know if your ftp application lets you limit your port range to such a small number of ports. Also:
You'd have to define each port as its own input endpoint (this is the manual labor part I mentioned) - input endpoints don't allow a port range to be specified, unlike the internal endpoints.
You'd have to specify the port number that's used internally, and the port numbers would need to be sequential.
One last thing on ftp: you should be able to host an sftp server with no trouble, since all traffic comes through one port.
The hack that I'm contemplating right now is to retrieve http://www.icanhazip.com/. It isn't elegant and is subject to the availability of that service, but it gets the job done. A better solution would be appreciated!