I have a filemount outside of the page root. Files get linked, but the URLs aren't nice, e.g.
https://domain.tld/index.php?eID=dumpFile&t=f&f=3259&token=bee330f7ab31b608dadba275b28b6a911fb221d7
URLs of files in fileadmin (inside the page root) are perfectly fine. What am I doing wrong?
So, inside my Netlify account (same team) I have two different websites.
The main website is:
https://entertainyou.netlify.app
The detail website is:
https://wtf-lmey.netlify.app
On the main website I have a redirect set in the netlify.toml
[[redirects]]
from="/gameroom/wtf/*"
to = "https://wtf-lmey.netlify.app/:splat"
status = 200
I need the URL in the address bar to remain unchanged, but the detail website should be served.
The redirect is working and the URL stays the same. However, the page served from the second site is not working properly.
It returns several 404s for images, .js files, and so on; only the static HTML is rendered. If I visit the second site directly, everything works just fine.
Both sites are Nuxt applications.
What am I missing here?
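For context: a status = 200 rule is a proxy rewrite, so only requests matching /gameroom/wtf/* are forwarded; root-relative asset URLs in the proxied HTML (such as Nuxt's default /_nuxt/... files) are still requested from the main site, which is one common cause of exactly these 404s. A minimal sketch that also proxies those asset paths, assuming the default Nuxt asset directory:
[[redirects]]
# hypothetical extra rule: forward the second site's build assets as well
from = "/_nuxt/*"
to = "https://wtf-lmey.netlify.app/_nuxt/:splat"
status = 200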
I have a private repository with a Jekyll website.
I use the docs folder as the publishing source in my GitHub Pages settings.
The index.html file and images load, but some assets, such as .js files, are not working. For example /docs/assets/js/_user_variable.js
When I try to open the mentioned URL in a browser tab, I get a 404 error.
If I go to my repository and look at the raw version of this file, I see the same absolute path in the URL bar and the file loads, BUT I noticed an additional URL parameter like ?token=AAB72EQ74V6CXJ6ZYUJCHWLAPAXV6
I'm 100% sure this has worked before, but I haven't looked at the website for more than a year. I guess GitHub has changed things so that some requests require a token, but I'm not sure; I couldn't find anything about that.
So the token is not necessary for index.html and the image files, but it is for other files.
I have read https://guides.github.com/features/pages/ and I cannot find anything about tokens being required.
I would like to access files from https://raw.githubusercontent.com/username/repository_name without using tokens. Is this possible?
When you request the raw file (so the browser reads it as a text file), you need a token, e.g.:
https://raw.githubusercontent.com/username/repository/master/docs/index.html?token=AAB72EUQF7GXPEXY6YZHQHDAPBFOQ
If you use the following URL instead, it works without a token:
https://username.github.io/repository/assets/js/some_variables.js
BUT
When the file name starts with an _ underscore, this last option will give you a 404, because Jekyll skips files and directories whose names start with an underscore when building the site.
So don't use an underscore; see also: https://github.com/jekyll/jekyll/issues/55
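If renaming the file is not an option, Jekyll's include setting in _config.yml can be used to force such files into the build; a minimal sketch, using the file name from the question:
# _config.yml
# force publication of this file even though names starting with "_" are skipped by default
include:
  - _user_variable.js
Renaming the file to drop the underscore remains the simpler fix.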
I have set up GitHub Pages and my site is working fine, but when navigating to the zip file on the site (i.e. mysite.com/example.zip), I get a 404 error page. The files are in the repository, and it seems to work for some files, since I can access a PDF file just fine (i.e. mysite.com/myfile.pdf). I couldn't find any mention of GitHub Pages blocking this, but it seems like it, or am I missing something? Is there a list of which files are served and which are not?
I need to create a redirect from www.domain.com/page to another page on a different domain. I need the first (referring) URL to have no extension (meaning no .html or .asp).
I know how to do it in Apache, but have no clue how to do it in IIS 6.
Is there any simple way of doing this?
Here is an easy solution:
Create a folder for your website www.domain.com and then another subfolder for "page".
Create a website in IIS Manager for www.domain.com.
Expand that website, then right-click your "page" folder and choose Properties.
On the Directory tab, choose the option "A redirection to a URL" and enter the full URL of the target location. You may also want to look at the checkboxes below it if they fit your needs.
Another option you can use outside of IIS is to set up your website and take advantage of the "default document": if you add an index.html file to your website at http://www.domain.com/page , the default document will be served automatically without referencing index.html in the path, and you can do a JavaScript redirect, as shown below.
<script>
  window.location.href = 'http://www.someothersite.com/page.html';
</script>
This may be easier when dealing with a large number of pages.
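A minimal sketch of such an index.html, using the same placeholder target URL as above, with a meta refresh as a fallback for visitors who have JavaScript disabled:
<!DOCTYPE html>
<html>
<head>
  <!-- fallback redirect for clients without JavaScript -->
  <meta http-equiv="refresh" content="0; url=http://www.someothersite.com/page.html">
  <script>
    // JavaScript redirect; note that the address bar will change to the target URL
    window.location.href = 'http://www.someothersite.com/page.html';
  </script>
</head>
<body>
  <p>If you are not redirected automatically, <a href="http://www.someothersite.com/page.html">follow this link</a>.</p>
</body>
</html>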
A robots.txt file is usually just a text file under your site root directory. For example, you can view www.amazon.com/robots.txt. But today, I found a website with a strange robots.txt file. If you just type
http://xli.bugs3.com/robots.txt
it does not show a text file; instead, it still shows the home page of that site.
How can this happen, and why would the webmaster set it up this way?
Assuming a fairly conventional/basic server setup, where it is just files as you say, it could simply be an .htaccess rewrite rule. The rule might be something like "serve the file if it exists on the server, otherwise just serve the index", as in the sketch below.
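A minimal sketch of such a rule in .htaccess, assuming Apache with mod_rewrite enabled (index.php is just a placeholder for whatever script serves the home page):
RewriteEngine On
# if the requested path is not an existing file...
RewriteCond %{REQUEST_FILENAME} !-f
# ...and not an existing directory...
RewriteCond %{REQUEST_FILENAME} !-d
# ...hand the request to the front controller, so /robots.txt falls through to the home page
RewriteRule ^ index.php [L]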
Or it might be an application server like Rails, where there's no direct relationship between the server directory structure and URL pathnames.