I have microservices running on GKE clusters. They need to communicate with https://maps.googleapis.com/. All of these microservices run in a cluster created in a custom network. Do I need to allow egress for these clusters/nodes, or is communication allowed by default since Maps is also a GCP service? If I do need an egress firewall rule, how can I create it for a domain name instead of an IP? I read that the IP for maps.googleapis.com may change. Can you please help me?
GKE runs on the same infrastructure as Google Compute Engine.
Unfortunately, it is not possible to create firewall rules with the destination defined as a DNS name.
Although the Google Maps API is part of Google's services, there is no template or similar mechanism to add it as a firewall exception; the firewall knows nothing about Google services. If you block all egress traffic, access to all APIs will be blocked as well.
So you need to obtain the API's IP ranges somehow and add them to the firewall.
I found only one way to get all the ranges (using DNS names), described here. But you need to have:
the Google Maps APIs Premium Plan or a previous Google Maps APIs for Work or Google Maps for Business license.
If you have it, just follow that link to get the current list of domains related to the Google Maps API.
If not, you can try allowing traffic to all the address ranges that Google publishes as its CIDR blocks; that might help.
You can get them with the nslookup command:
nslookup -q=TXT _spf.google.com 8.8.8.8
and then look up each "include" name from the answer, for example:
nslookup -q=TXT _netblocks.google.com 8.8.8.8
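As a rough sketch (assuming a custom network named my-custom-net and that HTTPS on port 443 is all the traffic you need; the rule name and the list of _netblocks names are taken from the _spf.google.com answer above), those ranges could then be fed into an egress allow rule with gcloud:

# extract the ip4 blocks from each _netblocks*.google.com answer
# (Google also publishes ip6 blocks; this keeps only IPv4 for brevity)
RANGES=$(for block in _netblocks.google.com _netblocks2.google.com _netblocks3.google.com; do
  nslookup -q=TXT "$block" 8.8.8.8 | grep -oE 'ip4:[^ "]+' | cut -d: -f2
done | paste -sd, -)

# allow HTTPS egress from the nodes to those ranges only
gcloud compute firewall-rules create allow-google-apis-egress \
  --network=my-custom-net \
  --direction=EGRESS \
  --action=ALLOW \
  --rules=tcp:443 \
  --destination-ranges="$RANGES"

Note that these blocks cover far more than the Maps API alone, so this is coarse-grained at best.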
I'm trying to configure network access of a MongoDB cluster to allow connections from an Azure App Service. I found the outbound IP addresses of my App Service in the Azure portal (see Azure docs) and entered them in the IP access list according to the MongoDB Atlas docs. I appended "/32" to the IP addresses to allow only a single host (CIDR notation).
However, when the App Service starts and tries to connect, I get an error telling me to check the IP whitelist of the MongoDB cluster.
This actually seems to be the problem, because adding 0.0.0.0/0 (allow access from anywhere) solves the problem.
What could be the problem here?
I double checked the outbound IP addresses of the Azure App Service and the IP access list from the MongoDB Cluster.
What I did was indeed the answer to another question, so I think I'm missing something...
Actually, /32 is not a valid subnet size in Azure; the minimum size of a subnet is /29.
A /29 restricts your range to only 3 usable IPs (not 8 as you might expect), because Azure reserves the first four addresses and the last one of each subnet for internal routing.
Please also consider that if you are running the MongoDB cluster inside a private network and it is not exposed externally via a network appliance (such as Application Gateway, Load Balancer, Front Door or Traffic Manager), you will need to enable VNet Integration on the Azure Web App side.
If this is your case, navigate to your App in the portal and go to the "Networking" blade.
Here you can add VNet Integration, but keep in mind that in this case the minimum size of your subnet is /28 (you cannot use a smaller subnet).
I had only added the IP addresses listed in the "Outbound IP Addresses" property of my Azure App Service. After also adding the IP addresses listed in the "Additional Outbound IP Addresses" property, the App Service connects to the MongoDB cluster successfully.
This is somewhat surprising to me because the documentation on when outbound IPs change says that the "...set of outbound IP addresses for your app changes when you perform one of the following actions:
Delete an app and recreate it in a different resource group (deployment unit may change).
Delete the last app in a resource group and region combination and recreate it (deployment unit may change).
Scale your app between the lower tiers (Basic, Standard, and Premium), the PremiumV2, and the PremiumV3 tier (IP addresses may be added to or subtracted from the set).
..."
None of the above actions happened. 🙄
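For reference, a quick way to see both sets is the Azure CLI (a sketch; the resource group and app name are placeholders):

# the "Outbound IP Addresses" shown in the portal
az webapp show --resource-group my-rg --name my-app \
  --query outboundIpAddresses --output tsv

# the full superset, i.e. including the "Additional Outbound IP Addresses"
az webapp show --resource-group my-rg --name my-app \
  --query possibleOutboundIpAddresses --output tsv

Whitelisting everything from possibleOutboundIpAddresses avoids exactly the surprise described above.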
Over the years, I used No-IP to link a domain to my IP address, and then used No-IP's DUC (Dynamic Update Client) to update my IP, so that the domain would always point to my IP.
That's very handy for running dedicated game servers.
Is there a DUC-equivalent for Google Cloud DNS?
In essence - No - there isn't :(
Unless you're using Google Domains for your domain hosting; in that case, yes, they support exactly that.
Cloud DNS doesn't have that functionality. There are several workarounds, such as reserving a public IP for your VM, which in my opinion would be the best way to do it, unless your VM gets deployed using Deployment Manager, in which case it may require some more scripting.
Similar questions have been raised on Stack Overflow here and here, which you might find helpful.
If you're running Linux, here you'll find a complete script showing how to update DNS records after machine startup.
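A minimal sketch of such a startup script, assuming a Cloud DNS managed zone called my-zone and a record game.example.com (both placeholders), and that the VM's service account is allowed to edit Cloud DNS:

# current external IP of this VM, from the metadata server
IP=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip")

# current value of the A record, so it can be removed first
# (the remove must match the record's existing TTL, assumed to be 300 here)
OLD=$(gcloud dns record-sets list --zone=my-zone \
  --name=game.example.com. --type=A --format="value(rrdatas[0])")

gcloud dns record-sets transaction start --zone=my-zone
[ -n "$OLD" ] && gcloud dns record-sets transaction remove "$OLD" \
  --zone=my-zone --name=game.example.com. --type=A --ttl=300
gcloud dns record-sets transaction add "$IP" \
  --zone=my-zone --name=game.example.com. --type=A --ttl=300
gcloud dns record-sets transaction execute --zone=my-zone

Run from a startup script or cron job, this effectively gives you the DUC behaviour on plain Compute Engine.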
I built a simple cluster in GKE with two services using this tutorial:
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
After finishing it, I'm able to access my service using the external IP address, so I bought a domain to use with this IP. After setting up an A record in the DNS settings pointing to that IP address, the domain doesn't work; it keeps loading and then shows ERR_CONNECTION_TIMED_OUT. Do I need to do something in the Google console, or how can I make this IP public and reachable through the domain?
Please refer to the official documentation, which describes the steps you need to take to configure domain names with a static IP.
These are the steps you need to cover:
Go to the NETWORKING section of the GCP console, then VPC Network -> External IP addresses, to make sure you are using a static IP address, not an ephemeral one.
Go to Network services -> Cloud DNS. You need to create a DNS zone, entering your domain name in the DNS name field. After creation you will see Add record set, where you create an A record pointing at your external IP address.
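The equivalent gcloud commands would look roughly like this (region, zone name, domain and IP are placeholders; the --addresses flag promotes the service's existing ephemeral IP to a static one):

# promote the load balancer's current ephemeral IP to a static address
gcloud compute addresses create my-service-ip \
  --region=us-central1 --addresses=203.0.113.10

# create a public zone for the domain and point an A record at that IP
gcloud dns managed-zones create my-zone \
  --dns-name=example.com. --description="zone for example.com"
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add 203.0.113.10 \
  --zone=my-zone --name=example.com. --type=A --ttl=300
gcloud dns record-sets transaction execute --zone=my-zone

If you manage DNS in Cloud DNS, also point your registrar's NS records at the zone's name servers; if you manage DNS at your registrar instead, only the A record and the static IP matter.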
There is also a good tutorial on YouTube about setting up a custom domain on GCP. Let me know if it works for you.
I work in a company where almost all private IPv4 space is already used, so using 10.254.0.0/16 for the service address space is a non-starter. I have carved out a /64 of IPv6 space that I can use, but I can't seem to make it work.
Here's my apiserver config:
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=::"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"
# Address range to use for services
# KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=fc00:dead:beef:cafe::/64"
# Add your own!
KUBE_API_ARGS=""
But when I try to start kube-apiserver.service I get an error about "invalid argument". Is it possible to use IPv6 for Kubernetes?
I don't think IPv6 is fully supported. I don't think there is strong motivation among the project's developers to add IPv6 support, because the largest group of contributors is Google employees. Google Compute Engine (and thus Google Container Engine) doesn't support IPv6, so it wouldn't benefit Google directly to pay their employees to support it. The best thing to do would probably be to pull in employees of companies that run their hosted product on AWS (as AWS has IPv6 support), such as Red Hat, or try to contribute some of the work yourself.
From the linked PR, it looks like Brian Grant (Google) is, for whatever reason, somewhat interested and able to contribute IPv6 support. He'd probably be a good resource to query if you're interested in contributing this functionality to Kubernetes yourself.
AWS has already made IPv6 available for almost all of their major services:
https://aws.amazon.com/blogs/aws/new-ipv6-support-for-ec2-instances-in-virtual-private-clouds/
Recently, IPv6 support was accepted and the pieces are being implemented one after another; so far, the Pod-level implementation is done. Kubernetes is moving towards Services next, and then the remaining issues.
Currently, the blocker issues are still open, with good use cases:
https://github.com/kubernetes/kubernetes/issues/27398
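For what it's worth, on recent Kubernetes releases an IPv6 (or dual-stack) Service range is supported, and the corresponding config would look roughly like this. This is a sketch only: the exact flags depend on your version (--address and --port were later replaced by --bind-address and --secure-port), and the apiserver rejects very large IPv6 service ranges, so a full /64 still won't work:

# The address on the local server to listen to.
KUBE_API_ADDRESS="--bind-address=::"
# Address range to use for services -- a /112 carved out of the /64;
# the whole /64 is rejected as too large for a service CIDR.
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=fc00:dead:beef:cafe::/112"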
I have just created a Google Cloud SQL instance. Looking at the access control of my instance, I found that if I want to access my database, I have to authorize my IP address. The problem is that my application will be deployed wherever the clients need it, and even if I knew where they will run the application and authorized their IP addresses, those IPs change at least once every 24 hours because they are not static, so I would have to re-authorize them again and again!
Is there any way to make the instance accessible from any IP?
Thanks
You can whitelist any subnet. You just need to enter it using CIDR notation: http://en.wikipedia.org/wiki/Cidr
In particular, you can whitelist 0.0.0.0/0, which includes all possible IP addresses.
Please note that this is not recommended for security reasons. You want your access to be as restricted as possible.
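If you do open it up, or better, whitelist a narrower range, a hedged gcloud example (the instance name and range are placeholders):

# authorize one range -- or 0.0.0.0/0 to allow everything, which is not recommended
gcloud sql instances patch my-instance \
  --authorized-networks=203.0.113.0/24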
This is an older post, but I noticed it on the sidebar so I figured I would add my 2c.
If you're able to use Cloud SQL Second Generation (currently in Beta), there is a new feature which allows access to the database without having to whitelist any IP addresses in the firewall: https://cloud.google.com/sql/docs/sql-proxy
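Roughly, the proxy runs next to your application and tunnels the connection over an authenticated channel, so the app simply talks to localhost (the instance connection name below is a placeholder):

# run the proxy; the application then connects to 127.0.0.1:3306
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306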
Today, I was looking for a way to set up an MS SQL Server for development purposes and ran into a similar problem (how to allow my laptop to access it).
This guide helps.
In short, you need to add a firewall rule to enable EXTERNAL access to your VM instance on port 1433.
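Something along these lines, as a sketch (rule name, network, target tag and source IP are placeholders; restricting --source-ranges to your own IP is much safer than opening 1433 to the world):

gcloud compute firewall-rules create allow-mssql-dev \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:1433 \
  --source-ranges=198.51.100.25/32 \
  --target-tags=mssql

The VM then needs the matching network tag (mssql) for the rule to apply.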