How to tell where the CDN edge server we're accessing is located? - azure-cdn

After creating a CDN endpoint whose origin is blob storage, performance seems very poor when we access the CDN endpoint from China. Is there any way to tell whether I'm reaching an Asia edge CDN server or a North America edge server?
Can I just use the dig command to get the returned IP and look up that IP's location? (Doing it this way, the IP appears to be from North America.)

Yes, you can find the CDN edge server through an IP location lookup, since in a CDN environment the client always receives the response from the endpoint rather than from the origin server. But you might not see the IP's city location, because the POP city locations of some third-party providers are not disclosed.
In a basic CDN architecture, when a user clicks a URL in the address bar, the local DNS server delegates DNS resolution to a CDN-dedicated DNS server, and that CDN DNS server returns to the user an IP address from the CDN global load-balancing (LB) device. The user then sends the request to the CDN global LB device, which, based on the client IP, the requested content, and the region, points the user to an appropriate zone DNS LB server, and the user sends its request to that zone DNS LB server. The zone DNS server selects a suitable POP server for the global LB server based on these conditions: which POP is nearest to the client, which one has the expected content, and which one can serve the client's request. Eventually, the global LB server responds to the user with the IP address of that POP server.
You can see the DNS resolution with the dig command. I tested this on a Linux Azure VM, using a CDN endpoint on the Standard Microsoft tier.
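For example, a minimal check could look like the sketch below (the endpoint name myendpoint.azureedge.net and the IP are placeholders; any IP geolocation service can be used instead of whois):

    # Resolve the CDN endpoint; the returned A record is the edge IP answering you
    dig +short myendpoint.azureedge.net

    # Look up the owner/country of the returned IP (placeholder shown);
    # city-level POP locations may not be disclosed by every provider
    whois 203.0.113.10 | grep -iE 'country|netname|descr'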
The most effective method is to directly capture the network packets using Wireshark or Network Monitor.
If you are using a Windows machine, you can get the remote address from the Chrome browser when you send an HTTP/HTTPS request to the CDN endpoint: press F12 to open the developer tools, then check Network > Headers > Remote Address, and you will see the edge server's IP address.
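As a command-line alternative to the browser tools, something like the following also shows the IP you actually connected to (the endpoint name is a placeholder):

    # Prints a line such as "Connected to myendpoint.azureedge.net (203.0.113.10) ..."
    curl -sv -o /dev/null https://myendpoint.azureedge.net 2>&1 | grep 'Connected to'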
Besides, if China is a significant market for your customers and they need fast performance, consider using Azure CDN China instead. Azure CDN China delivers content from POPs inside of China by partnering with a number of local providers. You could get more details about China content delivery with Azure CDN.

Related

Using Cloudfront as CDN for my custom server REST API

I have a REST API on a Hetzner server which uses Varnish. I am trying to set up Cloudfront to use as the CDN for it. After reading around, I currently have the following setup:
Hetzner / Varnish
A main API route api.mydomain.com.
Config in Varnish for cdn-api.mydomain.com to also act as a route to the same API.
In the DNS for the domain in Hetzner, for cdn-api.mydomain.com I have added the name servers for Route 53.
Route 53
Hosted zone called cdn-api.mydomain.com.
An A record with name prod.cdn-api.mydomain.com which points to my Cloudfront distribution.
An A record with name cdn-api.mydomain.com which points to the IP address of the server.
Cloudfront Distribution
Has the alternate domain name prod.cdn-api.mydomain.com.
Has the origin domain of cdn-api.mydomain.com
Protocol for origin is HTTP only
What I think should happen
Make a request to prod.cdn-api.mydomain.com.
Route 53 forwards to the Cloudfront distribution.
CloudFront looks to origin cdn-api.mydomain.com.
Origin cdn-api.mydomain.com looks to IP address of Hetzner.
Hetzner receives request, Varnish allows the domain through, sends back data to Cloudfront.
What actually happens
If I make a request straight to cdn-api.mydomain.com from Postman, it works if I turn off SSL.
If I turn on SSL, I get the error SSL Error: Hostname/IP does not match certificate's altnames, saying that cdn-api.mydomain.com is not on the certificates of the server.
If I make a request to prod.cdn-api.mydomain.com, I get the error Error: Exceeded maxRedirects. Probably stuck in a redirect loop. Which may be due to the same certs error.
Cloudflare
As a comparison, we have Cloudflare set up as the CDN for a different domain on the same Hetzner server. It has:
A main API route api.myotherdomain.com
In Hetzner a CNAME for cdn-api.myotherdomain.com with value cdn-api.myotherdomain.com.cdn.cloudflare.net.
In Cloudflare, an A record for cdn-api.myotherdomain.com which points to the IP address of the server.
cdn-api.myotherdomain.com is set up in Varnish as an entry point, but is not on the list of certificates of the server.
This all works fine including with SSL enabled.
It would be good to understand what I'm doing wrong here.
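One way to narrow this down is to check what each hostname resolves to and which names the origin's certificate actually covers (hostnames below are the ones from the question; this is only a diagnostic sketch, not a fix):

    # Should resolve to the CloudFront distribution
    dig +short prod.cdn-api.mydomain.com

    # Should resolve to the Hetzner server IP
    dig +short cdn-api.mydomain.com

    # Which hostnames are on the certificate the origin presents?
    openssl s_client -connect cdn-api.mydomain.com:443 \
        -servername cdn-api.mydomain.com </dev/null 2>/dev/null \
      | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

If cdn-api.mydomain.com is missing from the SAN list, that matches the altnames error Postman reports.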

Google Cloud CDN always serves my static file through only 1 IP

I have my Google Cloud Storage bucket connected to a load balancer with Cloud CDN enabled in Google Cloud, but I really don't understand how Google Cloud CDN works for static files. Checking the log viewer, I can see "statusDetails: response_from_cache" and "cacheHit: true", so I can say that the CDN is working properly.
When I request an image from my CDN-backed bucket from a computer located in Europe, the file is returned from the frontend IP address of my load balancer. The same IP address serves my image when I make the request from a computer located in Asia.
So the same IP address is used to serve my static image regardless of where the request comes from. Checking the log viewer again, I can see that both requests are reported as going through Google Cloud CDN, and again the log viewer tells me the CDN is working properly.
I thought a CDN should serve files from the server nearest to the end user, so what is the point of using Google Cloud CDN if the file is always served from one single IP address for all users over the world?
I have a free Cloudflare account; once I configure my DNS there, the image file goes through the Cloudflare network, and if I run the same test as above, I see my static image file returned from multiple IP addresses close to my end users.
Could somebody help me understand the purpose of using Google Cloud CDN in this case? Did I miss something in the configuration process for Google Cloud CDN?
Thanks a lot in advance.
Google Cloud CDN uses Google's global edge network to serve content closer to users, and it leverages Google Cloud global external HTTP(S) load balancers to provide routing, health checking, and anycast IP support. The HTTP(S) load balancing configuration specifies the frontend IP addresses and ports on which Cloud CDN receives requests and the backends that originate responses to those requests.
Google Cloud CDN has a particular feature, a single anycast IP for the whole network, which lets all content be served through the load balancer frontend IP while still delivering low latency. So rather than having one load balancer per region, you can simplify your architecture and put every instance behind a single global load balancer. It also supports HTTP/2, the latest HTTP protocol, for faster performance. For additional information, you can check here.
Cloud CDN reduces latency by serving assets directly at Google's network edge. To know more about caching with Cloud CDN, refer to the caching-overview docs.
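In other words, seeing one IP everywhere is expected with anycast: the same address is announced from many edge locations, and each request terminates at the edge nearest the client. A rough way to convince yourself of this (cdn.example.com is a placeholder hostname pointing at your load balancer frontend) is to compare latency from different regions rather than the IP:

    # Run this from machines in Europe and in Asia; the remote IP will be the
    # same anycast address, but the total time should reflect the nearby edge
    curl -s -o /dev/null \
         -w 'remote_ip=%{remote_ip} time_total=%{time_total}s\n' \
         https://cdn.example.com/static/image.jpg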

Is Google Cloud Run Service to Service Communication internal like k8s's cluster.local?

Cloud Run provides a domain *.run.app to access the deployed service. I am wondering how Google Cloud Run handles requests from one Cloud Run service to another. Is all the service-to-service communication internal, even when we use a custom domain instead of *.run.app?
The definition of "internal" is not clear.
Your request stays in the Google network. Is that internal or external?
To resolve the custom domain, a DNS resolution request (port 53) is performed on the public network, but the content of the request stays in the Google network and is forwarded after the resolution. Is that internal or external?
So, as long as you use Google services (with the premium network tier), you don't leave the Google network, and thus you can consider this highly secure.
I admit my answer isn't very clear; in the end it all depends on whether or not you trust the Google Cloud network.

How do you configure a domain name for openfire server? Do I just buy a domain and set it as my XMPP domain?

So I am setting up a server for a messaging application which is being developed. I am using Openfire server for this, which I have installed and running on a PC. Right now, the XMPP domain is set to my computer name and the server works on my network, but obviously, as it's a local name, it cannot be accessed from the outside. I am able to access the server from multiple computers on the same network using the Spark messaging client to test it. So, to be able to access my XMPP server from devices outside my network, do I just buy a domain name and set it as my XMPP domain in the Openfire settings?
To answer your question, yes, with the following caveats:
You will either have to host the DNS server yourself or have the DNS provider serve the records for you.
A domain must have a static IP address to point to. A home or typical small business Internet account does not include a static IP (some providers actively prevent home accounts from serving web pages/services).
You must also configure your firewall to allow a mapping to the internal server.
I would recommend using an external provider to handle the network and hosting services for your program.
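If you do go this route, a quick sanity check once the DNS records are in place could look like the sketch below (example.com stands in for the purchased domain; 5222 and 5269 are the standard XMPP client and server-to-server ports the firewall mapping would need to cover):

    # The XMPP domain should resolve to the static public IP
    dig +short example.com

    # Clients and other servers discover the service through SRV records
    dig +short SRV _xmpp-client._tcp.example.com   # normally targets port 5222
    dig +short SRV _xmpp-server._tcp.example.com   # normally targets port 5269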

HTTPS for local IP address

I have a gadget[*] that connects to the user's WiFi network and responds to commands over a simple REST interface. The user uses a web app to control this gadget. The web app is currently served over http, and the app's JavaScript makes AJAX calls to the gadget's local IP address to control it. This scheme works well and I have no issues with it.
[*] By "gadget" I mean an actual, physical IoT device that the user buys and installs within their home, and configures to connect to their home WiFi network
Now, I want to serve this web app over https. I have no issue setting up https on the hosting side. The problem is, now the browser blocks access to the gadget (since the gadget's REST API is over http and not https).
The obvious solution is to have the gadget serve its REST API over https. But how? It has a local IP address and no one will issue a certificate for it. (Even if they did, I'd have to buy a boatload of certificates, one for each possible local IP address.) I could round-trip via the cloud (by adding additional logic on my server side to accept commands from the web app and forward them to the gadget over another connection), but this would increase latencies.
Is there a way around this problem? One possibility that I have in mind is to:
Get a wildcard certificate (say, *.mydomain.com)
Run my own DNS that maps sub-domains to a local IP address following a pattern (For example, 192-168-1-123.mydomain.com would map to 192.168.1.123)
Use the wild-card certificate in all the gadgets
My web app could then make AJAX calls to https://192-168-1-123.mydomain.com instead of http://192.168.1.123 and latencies would remain unaffected aside from the initial DNS lookup
Would this work? It's an expensive experiment to try out (wildcard certificates cost ~$200) and running a DNS server seems like a lot of work. Plus I find myself under-qualified to think through the security implications.
Perhaps there's already a service out there that solves this problem?
While this is a pretty old question, there are still no out-of-the-box solutions for it today.
Just as @Jaffa-the-cake posted in a comment, you can lean on how Plex did it, which Filippo Valsorda explained in his blog:
https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/
This is very similar to what you proposed yourself. You don't even need a wildcard certificate, as you can generate certificates on the fly using Let's Encrypt. (You can still use wildcard certificates if you want, which Let's Encrypt now supports, too.)
Just yesterday I did a manual proof-of-concept for that workflow, that can be automated with the following steps:
Write a Web Service that can create DNS entries for individual devices dynamically and generate matching certificates via Let's Encrypt - this is pretty easy using certbot and e.g. Google Cloud DNS (see the sketch after these steps). I guess Azure, AWS and others have similar offerings, too. When you use certbot's DNS plugins, you don't even need an actual web server running on port 80/443.
On your local device, contact that Web Service to generate a unique DNS entry (e.g. ..yourdns.com) and a certificate for that domain
Use that certificate in your local HTTPS server
Browse to that domain instead of your local IP
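A minimal sketch of the per-device part of that web service, assuming Google Cloud DNS with a hypothetical zone devices-zone, a service-account credentials file, and the made-up name 192-168-1-123.devices.example.com (all placeholders):

    # Create an A record that maps the per-device hostname to its local IP
    # (zone, domain and credentials below are placeholders)
    gcloud dns record-sets create 192-168-1-123.devices.example.com. \
        --zone=devices-zone --type=A --ttl=300 --rrdatas=192.168.1.123

    # Issue a certificate for that name via the DNS-01 challenge;
    # nothing needs to be listening on port 80/443 for this
    certbot certonly --dns-google \
        --dns-google-credentials /path/to/service-account.json \
        -d 192-168-1-123.devices.example.com

The resulting certificate and key are what the gadget's local HTTPS server would then serve.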
Now you will have an HTTPS connection to your local server, using a local IP but a publicly resolved DNS entry.
The downside is that this does not work offline from arbitrary clients. And you need to think of a good security concept to create trust between the client that requests a DNS entry and certificate, and your web service that will generate those.
BTW, do you mind sharing what kind of gadget it is that you are building?
If all you want is to access the device APIs through the web browser, a simple solution would be to proxy all the requests to the device through your web server. This way, even self-signed certs for the devices won't be a problem. The only problem, though, is that the server would have to be on the same network as your devices.
If you are not on the same network, you could write a simple browser plugin (Chrome) to send the API requests to the IoT device, but then the dependency on the app/plugin would be clumsy.