I read the GitHub Pages documentation for enabling redirects with the jekyll-redirect-from plugin. I was able to redirect from one page to another, but I have a different requirement: I need to redirect all pages under a starting URL to another URL.
These URLs
www.example.com/abc/def
www.example.com/abc/xyz
should be redirected to
www.example.com
As stated in the documentation for the plugin, you can specify multiple URLs for the redirect_from key.
For example, create an index.md file with:
---
title: index
redirect_from:
- /abc/def/
- /abc/xyz/
---
## Hello, world!
and a _config.yml with:
title: Redirection test
markdown: kramdown
plugins:
- jekyll-feed
- jekyll-redirect-from
Now both /abc/def and /abc/xyz will redirect to your index page.
I have a _redirects file in my netlify directory structure
/
- site
-- _redirects
_redirects
https://example.netlify.com/ https://www.example.com/:splat 301!
https://www.example.com/post/196 https://www.example.com/comic/post-name
Problem:
The first redirection occurs successfully, but the second one returns:
Page Not found
I have followed the documentation at https://www.netlify.com/docs/redirects/ but cannot find the cause of this issue.
I note two potential causes mentioned in the documentation:
You can also add redirect rules to your netlify.toml file.
^^ I have not tried this, but since it says "also", I assume using a _redirects file on its own should be sufficient.
For Jekyll, this requires adding an include parameter to config.yml.
^^ I am not using Jekyll as far as I know, but my project does contain a config.yml file.
From what the docs say, you will not be able to chain redirects on Netlify:
The redirect engine processes the first matching rule it finds, so more specific rules should be listed before more general ones
So in _redirects, list the specific rule (in either absolute or relative form) before the wildcard:
https://example.netlify.com/post/196 https://www.example.com/comic/post-name
/post/196 /comic/post-name
https://example.netlify.com/* https://www.example.com/:splat 301!
You can try it without the first line above to see whether https://example.netlify.com/post/196 still ends up at https://www.example.com/comic/post-name. If it does not, then there is no chaining in Netlify redirects.
Solved by adding:
[[redirects]]
from = "https://www.example.me/post/196"
to = "https://www.example.me/comic/post-name"
status = 200
to netlify.toml
source of solution: https://www.netlify.com/docs/redirects/
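For reference, the forced wildcard rule from _redirects can be written in netlify.toml form as well; a minimal sketch, assuming the same domains as above:
# netlify.toml equivalent of the forced wildcard rule (sketch)
[[redirects]]
  from = "https://example.netlify.com/*"
  to = "https://www.example.com/:splat"
  status = 301
  force = true   # same effect as the trailing "!" in _redirects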
Hey, I am trying to set up Jekyll on GitHub Pages. I have followed this guide exactly: https://help.github.com/articles/setting-up-your-github-pages-site-locally-with-jekyll/.
Any ideas what I am doing wrong here?
You must replace the default url value in the config file: replace example.com and leave baseurl empty:
url: https://kekearif.github.io
baseurl: ""
I have a GitHub repository. The Jekyll site is placed in a folder named Blog, and it works if Blog is the root folder.
I want to serve /Blog/index.html from /index.html.
Blog
_config.yml
# Site settings
title: About the Programing
email: tencet#yandex.com
baseurl: "Blog" # the subpath of your site, e.g. /blog/
url: "http://tencet.github.io/" # the base hostname & protocol for your site
github_username: tencet
source: ./Blog
destination: ./Blog
Is there a solution?
Setting source: 'Blog' in the config makes the GitHub Pages generator build from the Blog folder and put the result at the root.
If you want an independent index at the root and still be able to point to tencet.github.io/blog:
create a blog repository,
put your Jekyll blog in it; it will then be available at tencet.github.io/blog (see the config sketch below).
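A minimal sketch of the _config.yml for that separate blog repository, assuming the repository is named blog (a project page is served under /blog):
# _config.yml in the blog repository (sketch)
baseurl: "/blog"                  # project pages live under /blog
url: "https://tencet.github.io"   # the user's GitHub Pages host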
I have dev.example.com and www.example.com hosted on different subdomains. I want crawlers to drop all records of the dev subdomain but keep them on www. I am using git to store the code for both, so ideally I'd like both sites to use the same robots.txt file.
Is it possible to use one robots.txt file and have it exclude crawlers from the dev subdomain?
You could use Apache rewrite logic to serve a different robots.txt on the development domain:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{HTTP_HOST} ^dev\.example\.com$
RewriteRule ^robots\.txt$ robots-dev.txt
</IfModule>
And then create a separate robots-dev.txt:
User-agent: *
Disallow: /
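You can then check which file each host serves (a quick sanity check, assuming the rewrite rule above is in place):
curl https://dev.example.com/robots.txt   # should return the disallow-all robots-dev.txt
curl https://www.example.com/robots.txt   # should return the regular robots.txt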
Sorry, this is most likely not possible. The general rule is that each subdomain is treated separately, so each would need its own robots.txt file.
Subdomains are often implemented as subfolders with URL rewriting doing the mapping; in that setup you could share a single robots.txt file across subdomains. Here's a good discussion of how to do this: http://www.webmasterworld.com/apache/4253501.htm.
However, in your case you want different behavior for each subdomain, which is going to require separate files.
Keep in mind that if you block Google from indexing the pages under the subdomain, they won't (usually) immediately drop out of the Google index. It merely stops Google from re-indexing those pages.
If the dev subdomain isn't launched yet, make sure it has its own robots.txt disallowing everything.
However, if the dev subdomain already has pages indexed, then you need to use the robots noindex meta tag first (which requires Google to crawl the pages initially in order to read this request), and only set up the robots.txt file for the dev subdomain once the pages have dropped out of the Google index (setting up a Google Webmaster Tools account helps to track this).
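For reference, the noindex request is a meta tag placed in the head of every page on the dev subdomain:
<!-- ask crawlers not to index this page -->
<meta name="robots" content="noindex">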
I want Google to drop all of the records of the dev subdomain but keep the www.
If the dev site has already been indexed, return a 404 or 410 error to crawlers to delist the content.
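Following the rewrite approach shown earlier, one way to do this on Apache is to answer 410 Gone for everything on the dev host; a sketch, assuming the same dev.example.com host name:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{HTTP_HOST} ^dev\.example\.com$
# the G flag makes Apache respond with 410 Gone
RewriteRule ^ - [G]
</IfModule>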
Is it possible to have one robots.txt file that excludes a subdomain?
If your site is completely static, what you're looking for is the non-standard Host directive:
User-agent: *
Host: www.example.com
But if you can use a templating language, it's possible to keep everything in a single file:
User-agent: *
# if the ENVIRONMENT variable is not "production", all robots will be disallowed.
{{ if eq (getenv "ENVIRONMENT") "production" }}
Disallow: /admin/
Disallow:
{{ else }}
Disallow: /
{{ end }}
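The template above uses Go template syntax (getenv), which points to a generator like Hugo; if that is what renders it, robots.txt templating has to be switched on in the site config. A sketch, assuming Hugo:
# config.toml (Hugo): render layouts/robots.txt instead of serving a static file
enableRobotsTXT = true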
I have set up TYPO3 successfully on my local server, but I am having a problem when clicking on any menu item: it shows "URL not found on server".
When I type the URL manually into the browser, the page shows up. The problem only occurs when redirecting after a click on a page item in any frontend page.
That might be related to the domain config or RealURL... or both ;)
Do you use RealURL? Or do you use the standard URL config?
If links to subpages look like index.php?id=12345, you are using the standard config.
My guess is that the local DNS ("hosts file") is not configured correctly.
With the hosts file you can simulate how the web site will appear when it's online, hooked up to a "real/global" DNS. (Not quite, but in a nutshell)
So if you set up TYPO3 to be reached under http://www.example.com/ you need to tell your local DNS ("hosts file") to route requests for www.example.com to your local host, e.g. 127.0.0.1. In that case your hosts file needs an entry like this:
127.0.0.1 www.example.com
What domain do you enter to reach your site? Where do the menu links point to?
If you want to know more about the hosts file, look here:
http://accs-net.com/hosts/how_to_use_hosts.html
If you can log in to the TYPO3 backend (/typo3/) and can access the frontend through /index.php, but not through the generated menu links, then the RewriteRules for mod_rewrite don't apply.
Usually TYPO3's installer should detect this configuration and disable RealURL, which is responsible for generating such nice looking URLs (instead of index.php?id=123). It seems like this failed (or you copied everything afterwards without the .htaccess file?).
Make sure that you have TYPO3's .htaccess file in place in the root directory of your installation. If this is the case, make sure that mod_rewrite is enabled in your Apache config.
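On a Debian/Ubuntu-style Apache, enabling the module looks roughly like this; a sketch, since commands differ per distribution, and .htaccess files additionally need AllowOverride All in the vhost:
# enable the rewrite module and restart Apache (Debian/Ubuntu)
sudo a2enmod rewrite
sudo service apache2 restart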