I have followed this guide to set up my GitHub Pages site. My username is FReina, and I therefore expected my website to be at FReina.github.io. However, I consistently get a 404.
Does anybody know if this is due to the uppercase letters in my username?
The issue is simply caused by latency between your username.github.io repository on GitHub and the actual hosted website. Usually this is only a few minutes, but I have known it to be much longer when the service is busy.
From my PC, your site now looks to be up and displaying a Hello World page. If you still cannot see it, the 404 page may be cached by your browser; try emptying your browser cache (on Chromium-based browsers, including Google Chrome and newer versions of MS Edge, open the developer tools, then click and hold the reload button and select "Empty Cache and Hard Reload").
My Jekyll blog runs fine on my PC, but when I open it from my GitHub repository it's broken. Here's a screenshot of that.
If I click on one of the links, the next page throws a 404. Needless to say, I followed the steps on the GitHub site in order to configure it properly.
baseurl is set, and the paths to CSS and images are correctly built using {{ site.baseurl }}, as sketched below. So locally the blog works fine, but on gh-pages it still isn't.
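To be concrete, the setup looks roughly like this (a simplified sketch; the repo name and file paths here are placeholders, not the real ones):

# _config.yml — project pages are served under /<repo-name>
baseurl: /your-repo-name

<!-- in a layout, every asset path is prefixed with site.baseurl -->
<link rel="stylesheet" href="{{ site.baseurl }}/css/main.css">
<img src="{{ site.baseurl }}/images/logo.png" alt="logo">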
Code is here
One more piece of information: I have a master branch and a gh-pages branch; the one I keep updated is gh-pages, and master has old code. Is that OK?
Any ideas please?
It works! What you saw was an old version of your site.
Two possible explanations:
As @goyllo says, it may be browser cache: F5 or Ctrl+R can help.
GitHub Pages is serving the old version and the new site generation is still pending.
To know whether your last commit has been published, go to https://github.com/goblind/modestoRimba/settings and look under the GitHub Pages box.
If you see "Your site is ready to be published at http://goblind.github.io/modestoRimba", your site generation is still pending.
If you see "Your site is published at http://goblind.github.io/modestoRimba", your site has been published with the last commit.
It's working fine in my browser. One thing worth knowing: GitHub Pages has your browser cache resources (including HTML, CSS, JS, and images) for a better user experience. For example, disconnect your internet and reload the page, and it will still work fine; similarly, if you visit another page, the browser will not load your CSS or JavaScript from your website again but will serve them straight from its cache. So you are getting this error because your browser is using old cached resources; they will update again in a few hours, depending on the cache time.
In short: if your blog works fine on localhost, don't worry, it will work fine on GitHub Pages as well; otherwise GitHub will send you an email notice about the page build failure.
Also, in your _config.yml you have baseurl: /modestorimba; the R should be uppercase.
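That is, assuming the repository is named modestoRimba, the line should read:

# _config.yml
baseurl: /modestoRimba

With that in place, {{ site.baseurl }} expands to /modestoRimba, so generated asset URLs match the paths GitHub Pages actually serves.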
It looks like a recent Chrome update broke this by tightening mixed-content (https/http) security policies, and I read that Firefox plans to do the same.
Here's the issue:
Say I set the Secure Canvas URL of my app to https://themediadudes.com/httpstest/
That page contains only a link to Google:
<a href="http://www.google.com/">Google</a>
When I view the app on Facebook and click the link, nothing happens. An error appears in the console:
[blocked] The page at https://apps.facebook.com/myappname/ ran insecure content from http://www.google.com/.
I understand that having insecure scripts/stylesheets etc. on an https page isn't allowed, but a simple link to a different website shouldn't be blocked, right? I assume Facebook is running some scripts that do something with the page before sending the user there, which causes the error.
If I set the target of the link to _top or _blank, it works; see the sketch below.
Ideally I want to be able to use a JavaScript window.location to send the user to this insecure URL, or header('Location: blah'); in PHP. But neither of those works either. And it looks like this is a bigger problem than that, if even a simple link to an insecure URL doesn't work.
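For reference, here is a simplified sketch of the variants I tried (the URL is just Google, as above; the window.top line is an untested guess on my part):

<!-- works: the target breaks out of the canvas iframe before navigating -->
<a href="http://www.google.com/" target="_top">Google</a>

<!-- blocked: a plain link tries to navigate the https iframe to http -->
<a href="http://www.google.com/">Google</a>

<script>
// blocked: same-frame JavaScript redirect from inside the canvas iframe
window.location = "http://www.google.com/";

// possibly a workaround: navigate the top window instead (untested)
window.top.location = "http://www.google.com/";
</script>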
I thought it might be caused by whatever makes the 'fluid' canvas width and canvas height settings work, but I tried setting both width and height to fixed values and the problem still happens.
Does anybody have a solution or workaround, or can anybody at least shed some more light on this?
Thanks
I've been struggling with a similar issue, and the answer seems to be that it is not possible at all to reference any non-https resources from within your page tab app. Of course, if a Google link is all you require, that is easily resolved (Google has an https version, of course), but referencing external non-https sites will always trigger this warning/block in Chrome.
Additionally, I should add that I have noticed that the 'Page Tab URL' setting requires a URL to a particular page, whereas the 'Canvas URL' needs to point to a directory. This does not seem to be documented, and getting it wrong will also give the insecure-content message in Chrome and prevent the page tab app from loading.
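To illustrate the difference (example.com and the paths are placeholders, not real settings):

Canvas URL:           https://example.com/myapp/          (a directory, with trailing slash)
Secure Page Tab URL:  https://example.com/myapp/tab.php   (a particular page)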
The problem I have is that I own a website where other people can post content, creating new pages on my domain. The problem that occurred today is that all the new post pages created today are malfunctioning: sharing does not load the thumbnail picture, the title, and so on. The weird thing is that all the posts (new pages) created before today are working fine.
What caused an error to occur out of nowhere?
I also cannot debug any of my website's URLs; they all give the same error: "Error parsing input URL, no data was scraped."
The website I'm having problems with is here: http://www.vabameedia.ee/vm/184/h%C3%A4da-ei-anna-h%C3%A4beneda.html
This is one of the pages where nothing on the page reports an error, but Facebook still can't reach it: http://www.vabameedia.ee/vm/178/craig-parks-%C3%BChek%C3%A4eline-krossisoitja.html
For people experiencing the same problem but for different causes: I discovered a few interesting things about how Facebook "scrapes" pages by checking my server's logs during some trials.
First of all: if you never tried to share a page with FB, FB never tried to scrape it, and it will not try to do so if you only put the URL in the Debug tool.
That's the first reason you get the error: it just states that FB has no information on the page; you must "force" it to scrape the page.
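Besides sharing the page, one way I know of to force this programmatically is the Graph API's scrape parameter; a rough sketch, where the token and URL are placeholders you must supply:

// Ask Facebook to (re)scrape a URL via the Graph API.
// APP_TOKEN is a placeholder — a valid app access token is required.
var APP_TOKEN = "APP_ID|APP_SECRET";
var pageUrl = "http://www.example.com/page-to-share.html";

fetch("https://graph.facebook.com/?id=" + encodeURIComponent(pageUrl) +
      "&scrape=true&access_token=" + APP_TOKEN, { method: "POST" })
  .then(function (res) { return res.json(); })
  .then(function (data) { console.log(data); }); // the freshly scraped OG data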
The first time you try to share a page, FB scrapes it (it requests the first 40 KB of the page from your server and analyses the Open Graph tags).
What can happen is that you do not see the image: Facebook Share Dialog does not display thumbnails on first load.
The reason is that FB is still scraping your page and caching the image behind the scenes. The next time, in fact, you also get the image.
How to solve it? Pre-caching: https://developers.facebook.com/docs/sharing/best-practices#precaching
Or simply add:
<meta property="og:image:width" content="450"/>
<meta property="og:image:height" content="298"/>
I was pulling my hair out trying to fix this issue. Hours and hours of troubleshooting to no avail. After speaking with one of our programmers about an unrelated topic, I thought of something to try as a long shot.
Much to my surprise, it worked!!!
This is the reason behind the problem and my solution for it:
When you draft a post in WordPress, it generates a link based on your article's title (unless you manually change it). The title of my article included special characters, but the auto-generated link didn't display them, only hyphens replacing the spaces. Should be fine, right? Wrong! Somewhere, embedded in metadata and code in the WordPress platform, those special characters remain, and they mess up the way Facebook pulls info from the linked article. This is a problem because certain special characters invalidate hyperlinks.
For example:
Article Title: R[eloaded]
Auto-generated hyperlink DISPLAYED in WordPress "Permalink" field: http://www.example.com/reloaded
Actual WordPress Auto-generated hyperlink: http://www.example.com/r[eloaded]
Those brackets will invalidate the link and Facebook will be unable to pull any information (ie pictures) from it.
Solution:
(1) Simply, manually change the WordPress hyperlink address to something that doesn't include any special characters (this will not change the title of your article).
(2) Click "Update" to change the post to include the new hyperlink.
(3) Click "Purge from Cache" in the WordPress window
(4) Refresh your Facebook browser window
(5) Paste the new hyperlink for your article
(6) Enjoy your Facebook post with a preview image and information
Sidenote: Don't pull your hair out over Facebook, it's not worth it. =)
If you're using WordPress, edit the post in question to change the permalink (just alter it slightly), then update the post. Using the new permalink in the Facebook OG debugger should now work.
It's a weird fix, but I think it takes care of a problem caused by special characters being used in the title of a post, which is then used to make the permalink.
It's all a DNS issue. I was having the same problem and resolved it by updating the domain's name servers to the actual name servers.
In my case, my domain pointed to ns1.websterz.net and ns2.websterz.net, and on that server I had a DNS redirect to my other server (where the website is hosted). I just updated the domain's name servers to the actual name servers where my website is hosted. This was an account-migration case: I had forgotten to update the name servers for the new server.
Everything works fine now.
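You can check which name servers a domain actually answers with using dig (yourdomain.com is a placeholder; the output shown is what my misconfigured domain returned):

$ dig NS yourdomain.com +short
ns1.websterz.net.
ns2.websterz.net.

If those are not your hosting provider's actual name servers, update them at your registrar.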
I recently purchased the domain www.iacro.dk from UnoEuro and installed WordPress planning to integrate blogging with Facebook. However, I cannot even get to share a link to the domain.
When I try to share any link on my timeline, it gives the error "The content you're trying to share includes a link that's been blocked for being spammy or unsafe: iacro.dk". Searching, I came across Sucuri SiteCheck which showed that McAfee TrustedSource had marked the site as having malicious content. Strange considering that I just bought it, it contains nothing but WordPress and I can't find any previous history of ownership. But I got McAfee to reclassify it and it now shows up green at SiteCheck. However, now a few days later, Facebook still blocks it. Clicking the "let us know" link in the FB block dialog got me to a "Blocked from Adding Content" form that I submitted, but this just triggered a confirmation mail stating that individual issues are not processed.
I then noticed the same behavior as here and here: when I type in any iacro.dk link on my Timeline, it generates a blank preview with "(No Title)". It doesn't matter if it's the front page, an .htm document or even an image: nothing is returned. So I tried the debugger, which returns the very generic "Error Parsing URL: Error parsing input URL, no data was scraped." Searching on this site, a lot of people suggest that missing "og:" tags might cause no scraping. I installed a WP plugin for that and verified tag generation, but nothing changed. And since FB can't even scrape a plain .htm or .jpg from the domain, I assume tags can be ruled out.
Here someone suggests 301 redirects being a problem, but I haven't set up any redirection; I don't even have a .htaccess file.
So, my questions are: Is this all because of the domain being marked as "spammy"? If so, how can I get the FB ban lifted? However, I have seen examples of other "spammy" sites where the preview is being generated just fine, e.g. http://dagbok.nu described in this question. So if the blacklist is not the only problem, what else is wrong?
This is driving me nuts so thanks a lot in advance!
I don't know the details, but it is a problem that Facebook has with websites hosted on shared servers, i.e. where the server hosting your website also hosts a number of other websites.
I made a website for a client of mine who owns a small business. About three months ago, her site URL was blocked by Facebook for being "Spammy". We launched a pretty impressive "Go Here And Report It As Safe" campaign, but alas, it's still not unblocked.
We made a new domain that mirrored the blocked one. This worked for about an hour. Then lo and behold! It got blocked too.
I was very curious, so I decided to try out the "Object Debugger". When I did, I got this message:
"Error Parsing URL: Error parsing input URL, no data was scraped."
I tried it again a few hours later, and it scraped just fine! Not only did it scrape and show up in debug perfectly, but it also didn't ping as blocked when I posted to my wall! It was amazing.
Sadly, I made an edit to the header file (just took away a meta tag), and now it won't scrape again. And it's blocked again.
The URL in question is enchantedcareers.com.
I feel like maybe the site isn't being blocked as spammy but rather there may be some kind of coding problem. Has anyone else had an OG bug flag a URL as blocked via the link shim?
EDIT: Again, it let me post the link, with full preview and everything. I posted it, and about one minute later the post was removed and it was back to being "spammy".
The URL Debugger only scrapes my URL sporadically (with no page edits being made whatsoever).
I can't find a pattern.
no data was scraped: 9:06
successful scrape: 9:34
no data was scraped: 9:44
successful scrape: 10:04
no data was scraped: 10:08
Edit #2: This is just completely crazy. Our new domain, enchantedcareers.net, which is nothing but my host's default quickstart.html page, is also blocked from being posted. When I try to post the .net domain, it flags both the .net and the .com domain as blocked.
THE .COM DOMAIN ISN'T EVEN TIED TO THE .NET NAME. This domain is straight out of the box. Why is it bringing that domain up when I try to post a new one?
I'm just so confused.
It won't scrape the .net name, either.
Could this be a server thing...?
Your URI comes up clean on a URIBL check, and you're on DreamHost, which is typically reliable, so I wouldn't expect to see your IP address show up in a DNSBL check. I don't see any glaring errors in your page code of the sort that typically cause the Facebook parser to choke.
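For anyone who wants to repeat those checks, a blacklist lookup is just a DNS query (the domain, IP, and zones here are examples):

$ dig +short example.com.multi.uribl.com    # URIBL: domain-based lookup
$ dig +short 4.3.2.1.zen.spamhaus.org       # DNSBL: octets of IP 1.2.3.4 reversed

No answer (NXDOMAIN) means not listed; an answer in 127.0.0.x means the domain or IP is on the list.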
There is one "suspicious" script on your page according to this report. Try removing it, clear your cache, and see if Facebook will parse your URL.