I have a conundrum. I'd like my entire domain to be hosted by a CDN, so the root page, www.mysite.com/, would be served by the CDN. This is fine. However, I'd like to conditionally serve a different page (or redirect) dependent on whether the user-agent string is detected to be mobile (e.g. as on http://detectmobilebrowser.com/). And I'd like this, if possible, to be done server-side.
I know CloudFront can serve two different versions of the same file dependent on the request headers (gzipped or not), but I can't find any documentation stating whether it, or any other CDN, supports switching based on the user agent. Has anyone come across a way of doing this?
Thanks for any much-appreciated help :D, Alec
Your CDN must be able to reply with an HTTP 301 Moved Permanently response based on the result of parsing the User-Agent header when the user tries to access the webpage or object you'd like to switch.
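If the CDN were CloudFront, for example, one way to do this kind of switch is a Lambda@Edge function attached to the viewer-request event. The sketch below is only an illustration, not a complete solution: the mobile regex is heavily simplified and m.example.com is a placeholder for wherever the mobile page lives.

    # Minimal Lambda@Edge viewer-request sketch (Python) for CloudFront.
    # Assumptions: m.example.com is a placeholder mobile host and the regex
    # is a simplified stand-in for a full mobile-detection pattern.
    import re

    MOBILE_RE = re.compile(r'Mobile|Android|iPhone|iPad|BlackBerry|Opera Mini', re.I)

    def handler(event, context):
        request = event['Records'][0]['cf']['request']
        user_agent = request['headers'].get('user-agent', [{'value': ''}])[0]['value']

        if MOBILE_RE.search(user_agent):
            # Short-circuit with a redirect instead of forwarding to the origin.
            # A 302 is used here rather than the 301 mentioned above, so browsers
            # don't permanently cache a device-specific redirect.
            return {
                'status': '302',
                'statusDescription': 'Found',
                'headers': {
                    'location': [{'key': 'Location',
                                  'value': 'https://m.example.com' + request['uri']}]
                }
            }

        # Desktop browsers fall through to the cached origin response.
        return request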
A content delivery network (CDN) is intended primarily to host your static content (images, scripts, media files, documents, etc.) rather than your entire website.
The idea is to lighten the load on your origin server by offloading the static content, and to serve that content closer to your users through a network of servers around the world.
A typical hosting setup for what you would like to do would be to have a page/server hosted at a "normal" provider, detect the user agent (client-side or server-side) and then render the links to the static resources hosted on the CDN based on the user agent.
On your second point: as mentioned before, CDNs are designed to host static files, so server-side detection of the user agent is unlikely to be supported. If you have a hosting environment like the one described above, with a page/server at the provider of your choice plus a CDN, you'll have all the options.
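As a rough sketch of the server-side variant at the origin (Flask is used purely as an example framework; cdn.example.com and the template names are placeholders):

    # Minimal sketch of server-side user-agent detection at the origin.
    # Assumptions: Flask as an example framework, cdn.example.com as a
    # placeholder CDN host, index.html/mobile.html as placeholder templates.
    import re
    from flask import Flask, request, redirect, render_template

    app = Flask(__name__)
    MOBILE_RE = re.compile(r'Mobile|Android|iPhone|iPad|Opera Mini', re.I)
    CDN_BASE = 'https://cdn.example.com'  # static assets live here

    @app.route('/')
    def index():
        ua = request.headers.get('User-Agent', '')
        if MOBILE_RE.search(ua):
            # Either redirect to a mobile page ...
            return redirect('/mobile', code=302)
        # ... or render the desktop page with CDN-hosted asset links.
        return render_template('index.html', cdn_base=CDN_BASE)

    @app.route('/mobile')
    def mobile():
        return render_template('mobile.html', cdn_base=CDN_BASE)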
Some providers (e.g. Media Temple) offer CDN support on top of their normal page/server hosting.
Hope that helps.
I manage a website which is built from a GitHub repository via an action which commits a live version to a certain branch; the webserver routinely checks if there are any updates on this branch and, if so, pulls them down to its public_html directory. This then serves the website on some domain, say example.com.
For various (practically immutable) reasons, there are individual webpages that are "built" from other individual repositories (I say "built" because these repositories are almost always just some .html files and such, with little post-processing, and could thus be served directly via GitHub Pages). I want these to be served at example.com/individual-page. To achieve this, I currently have a GitHub Action which transfers the files via FTP to a directory on the webserver which is symlinked inside public_html, thus making all the files accessible.
However, it now occurs to me that I could "simply" (assuming this is even possible; I imagine it would need some DNS tweaking) activate GitHub Pages on these individual repositories, set with the custom domain example.com, and avoid having to go through FTP. On one hand, it seems conceptually simpler to have public_html on the webserver contain only the files coming from the main website build, and it would be simpler to create new standalone pages from GitHub repositories; on the other hand, it seems like "everything that appears on example.com should be found in the same directory" might be a good idea.
What (if any) is the recommended best practice here: to have these individual pages managed by GitHub Pages with custom domains (since they are basically just web previews of the contents of the repositories), or to continue to transfer everything over to the webserver in one directory?
In other words: is it a "good idea" to partially host your website with GitHub Pages? Is this even possible with the right DNS settings?
(I must admit, I don't really understand what exactly my browser does when I navigate to example.com/individual-page, or what would happen if such a directory existed on my webserver while GitHub Pages was also trying to serve a webpage at the same address, so I guess bonus points if you want to explain the basics!)
The DNS option you describe doesn't work for individual pages.
While you can use a CNAME record to point your domain to another domain, or an A record to point your domain to an IP address, DNS doesn't handle individual pages (as in example.com/a). It would work if each page were, for instance, a.example.com, but that's not a page, it's a whole different website.
What you can do, however, is include those other repositories as submodules of your repository, and then everything works without any DNS magic (except the simple CNAME record, which isn't magic).
It would be a good idea to implement the solution described above, as it's the simplest. That said, as long as your current setup works automatically without issues and the hosting cost isn't a problem, I don't see any need to take the time to implement a new one.
If you only want to serve some files or pages from the submodules, you can add build actions and serve a specific directory.
I must admit, I don't really understand what exactly my browser does when I navigate to example.com/individual-page
Your browser requests the DNS records for your domain (example.com), in this case the A record (since this is the root domain). The A record gives an IP address, which your browser then uses to request the page. As you can see, individual pages aren't handled by DNS at all.
That means a single machine handles all requests for a domain (which doesn't mean it can't delegate some pages to another machine, but that's more complex). It also means "directory conflicts" are impossible, because either your server handles the request or GitHub does. Your browser doesn't check whether the page exists on server A and, if not, try server B.
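To make that concrete, here is roughly what happens under the hood, sketched in Python (example.com stands in for your domain): DNS resolves only the hostname, and the path is sent afterwards in the HTTP request to that one machine.

    # Rough sketch of what a browser does for http://example.com/individual-page.
    # DNS resolves only the hostname; the path travels in the HTTP request.
    import socket
    import http.client

    ip = socket.gethostbyname('example.com')   # step 1: DNS lookup (hostname only)
    print('example.com resolves to', ip)

    conn = http.client.HTTPConnection(ip, 80)  # step 2: connect to that one machine
    conn.request('GET', '/individual-page', headers={'Host': 'example.com'})
    resp = conn.getresponse()
    print(resp.status, resp.reason)            # that server alone decides what '/individual-page' is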
I have a static web app to which I have mapped the domains [domain].se and www.[domain].se. The domain is managed in Azure.
The problem I'm facing is redirecting all calls to [domain].se to www.[domain].se.
Since I couldn't come up with any way to redirect HTTP traffic from [domain].se to www.[domain].se using a static web app (other than setting up an additional standard web app on [domain].se that manages redirects), I enabled the "Enterprise-grade edge" feature (which, by the way, is a very silly name) to be able to use Front Door.
When attempting to add the domain to the Front Door resource, there is an error message telling me that it's already mapped to something (which is correct: the site that I want Front Door to manage).
When trying to remap [domain].se (and www.[domain].se) to the Front Door endpoint (by selecting the Azure resource in the DNS zone manager), the Front Door resource is not available.
The issue could probably be resolved by simply removing the current records from the name server and then adding a CNAME record pointing to the Front Door endpoint.
Two reasons not to do that:
1: It would cause downtime.
2: It feels like a hack. Things generally work better when they are used the way they were intended to be used. I want the platform to recognize which things are connected, in order to avoid future issues.
I am working on the web vitals for a website and I was checking the Network tab in Chrome Developer Tools. The website loads fully, but in the Network tab the number of requests keeps increasing, the resources requested grow to 7.8 MB, and the website has a slider whose requests keep repeating in the network log. How can I check why so many requests are made?
Here is the picture of the network tab of the website.
I see that the resource names are slide-X.jpg. Without seeing the website or its code, I can only guess that there's a carousel on the page that cycles through images. If the images aren't cacheable, they'll continue to be loaded over the network. If they are cacheable, I'd expect to see no network requests at all, or at worst an HTTP 304 "Not Modified" response.
So I'd recommend confirming what kinds of widgets are on the page, like a carousel with repetitive behavior, and checking the cache-control headers of the static content, such as the images, to avoid having to load them each time. Personally, I think carousels are bad UX, so I'd even suggest you consider removing it altogether! Regardless, you should still cache your content more efficiently.
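If you want to check the caching side quickly, you can inspect the response headers of one of the slide images in the DevTools Headers pane, or with a small script like this (the image URL is a placeholder; copy a real one from the Network tab):

    # Quick check of the cache headers on one of the slide images.
    # The URL below is a placeholder; use one copied from the Network tab.
    import requests

    resp = requests.head('https://example.com/images/slide-1.jpg', allow_redirects=True)
    for name in ('Cache-Control', 'Expires', 'ETag', 'Last-Modified'):
        print(f'{name}: {resp.headers.get(name)}')
    # Missing Cache-Control/Expires (or "no-store") would explain why the slides
    # are re-downloaded on every rotation of the carousel.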
My company has an application which can be installed with the Qt Online Installer. The data is stored on our own server, but over time we found that the connection is a bit slow for users on the other side of the world. So the question is: what services, designed for this purpose, could we use to store this data? While investigating, I came across something called a "Content Delivery Network", but I'm not sure whether it fits or not.
Unfortunately, I don't have enough experience in this area, so maybe somebody who knows more could give me some advice. Thank you!
CloudFront on AWS. It depends on what your content is, but you can probably store it on S3 and then use CloudFront to cache it at edge locations across the globe.
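As a minimal sketch of that approach, assuming the installer payloads end up in an S3 bucket behind a CloudFront distribution (the bucket, key and file names below are placeholders):

    # Sketch: upload installer data to S3 with a long cache lifetime so the
    # CloudFront edge locations can keep serving it.
    # Bucket name, object key and local path are placeholders.
    import boto3

    s3 = boto3.client('s3')
    s3.upload_file(
        'repository/packages/component-1.0.0.7z',   # local file from your build/repogen step
        'my-installer-bucket',                       # S3 bucket behind the CloudFront distribution
        'packages/component-1.0.0.7z',               # object key the online installer requests
        ExtraArgs={'CacheControl': 'public, max-age=86400'},
    )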
Your research led you to the right topic, because it sounds like you could benefit from a CDN. CDNs store cached versions of your website, download files, video, etc. on their servers, which usually form a distributed network around the globe; the individual locations are known as Points of Presence (PoPs). When a user requests a file from your website and the site is behind a CDN, the request actually goes to the closest PoP, which serves the file. This improves performance because the user may be very far from your origin server, or your origin server may not have enough resources to answer every request by itself.
How long a CDN caches objects from your site depends on configurable settings. You can tell the CDN how to cache objects using HTTP cache headers. Here is an intro video from Akamai, the largest CDN, with some helpful explanation of HTTP caching headers:
https://www.youtube.com/watch?v=zAxSE1M4yKE
Cheers.
We have a configuration with Pound, Varnish, Apache and TYPO3. Every now and then we have a lot of traffic to the site and the Pound software reaches its limit.
One idea to relieve the load was that all images could be fetched from another domain (which would get its own software).
So the HTML would be requested as http://www.my.domain/folder/page.html, and inside the HTML, images would be referenced as http://images.my.domain/fileadmin/img/image.jpg.
What needs to be done so that:
the editors can work as before (just accessing files from /fileadmin/...),
in the HTML all image (and file) references are generated with the other domain, and
all generated (processed) images can be accessed via the new domain?
I assume you are using Pound as a reverse proxy which handles the TLS.
I suggest reducing the setup to just Varnish and Apache, so Varnish can handle the requests directly.
Kevin Häfeli wrote an article about it some time ago (in German):
https://blog.snowflake.ch/2015/01/28/https-caching-mit-varnish-tlsssl-zertifikat-typo3/
In case you still want to serve the images from another web server, I suggest having a look at the File Abstraction Layer:
https://docs.typo3.org/typo3cms/FileAbstractionLayerReference/Index.html