Site results redirect to Google

A site I'm working on has been hacked. The CMS (which I didn't build) was accessed and some files (e.g. "km2jk4.php.jpg") were uploaded in image fields. I have since deleted them (a week ago). Now, when I search for the site on Google, then click the result, it either:
a) simply redirects me to the Google search page
OR
b) shows a download dialogue asking me to download a zip file, with the source domain something like gb.celebritytravelnetwork.com
Clearly the site's been compromised. But if I simply type the URL in the address bar, the site loads fine. This only happens when I click through Google results.
There is no .htaccess file on the server, and this is not a virus on my computer (many other people have reported the same thing happening), so that other question is not relevant.
Any ideas please?
Thanks.

Your source files have been changed.
Check all the files included in the index page; they might be the header and footer includes.
Also try "Fetch as Google" (in Google Webmaster Tools) to see what Googlebot is actually served.
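A quick way to confirm this kind of compromise from the outside: hacked sites often inject a redirect that only fires when the visitor arrives from a search engine, which is exactly why the site looks fine when you type the URL directly. A minimal sketch in Python, assuming the third-party requests package and a placeholder URL, that compares the response with and without a Google referrer:

    import requests

    URL = "http://example.com/"  # placeholder: the affected page

    # Plain request, as if the URL was typed straight into the address bar.
    direct = requests.get(URL, allow_redirects=False)

    # Same request, but pretending we clicked through from a Google result.
    via_google = requests.get(
        URL,
        headers={"Referer": "https://www.google.com/search?q=example"},
        allow_redirects=False,
    )

    # A conditional redirect shows up as a 301/302 (or a different body) here.
    print("direct:    ", direct.status_code, direct.headers.get("Location"))
    print("via Google:", via_google.status_code, via_google.headers.get("Location"))

If the second request returns a redirect to an unfamiliar domain while the first returns 200, the redirect is being injected server-side, typically from a modified include such as the header or footer.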

Related

Facebook showing page not found when sharing link

I'm sharing content from a website and every time I paste the link into Facebook it says 'page not found'.
Sometimes it works when I manually add the 'www.' in front of the URL in the address bar.
EXAMPLE
Shows page not found:
http://roundreviews.co.uk/reviews/speakers/native-union-monocle-speaker/
Works when you manually place www. in front:
www.roundreviews.co.uk/reviews/speakers/native-union-monocle-speaker/
I honestly have no idea why it's doing this; any thoughts on how it can be fixed on the web side?
Also...
I have tried the link below both with and without the www., yet it doesn't work either way, which is all very strange. This is the only link I have tried where neither form works:
www.roundreviews.co.uk/microphones/spark-digital-microphone/
Any help is much appreciated, thanks.
What worked for me was to use the Facebook Debugger, as Goose said.
I saw that the last scrape was about 12 hours ago; it looks like Facebook fetches the page the first time it is shared and then serves it from cache.
What worked for me was to debug the URL, then click "Fetch new scrape information" after the previous information has been shown.
Hope it works!
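If you prefer not to drive the Debugger by hand, the same refresh can be requested through the Graph API's scrape call described in Facebook's sharing documentation. A rough sketch, assuming the third-party requests package and a placeholder app access token:

    import requests

    ACCESS_TOKEN = "APP_ID|APP_SECRET"  # placeholder: your app access token
    PAGE_URL = "http://roundreviews.co.uk/reviews/speakers/native-union-monocle-speaker/"

    # Asking the Graph API to re-scrape the URL refreshes Facebook's cached
    # Open Graph data, just like pressing "Fetch new scrape information".
    resp = requests.post(
        "https://graph.facebook.com/",
        data={"id": PAGE_URL, "scrape": "true", "access_token": ACCESS_TOKEN},
    )
    print(resp.status_code, resp.json())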
For those running across this today, you might find that you also need to verify your domain and link it to your page.
To do this you need to
Set up a Facebook Business Account
Add your page to the business account
Verify your domain (using a DNS TXT record or the HTML verification file Facebook gives you; see the sketch after these steps)
Under domains, connect your page as an asset of that domain
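As a quick check for the verification step, you can confirm that the TXT record Facebook is looking for has actually propagated. A small sketch, assuming the third-party dnspython package and a placeholder domain:

    import dns.resolver  # third-party package: dnspython

    DOMAIN = "example.com"  # placeholder: the domain you are verifying

    # The verification string Facebook gave you should appear among these
    # TXT records once DNS has propagated.
    for record in dns.resolver.resolve(DOMAIN, "TXT"):
        print(record.to_text())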

"Error parsing input URL, no data was scraped" only with new pages on my site

The problem I have is that I own a website where other people can post content, creating new pages on my domain. The problem that occurred today is that all the new post pages created today are malfunctioning: sharing does not load the thumbnail picture, title and so on. The weird thing is that all the posts (new pages) created before today are still working fine.
What could cause an error like this to appear out of nowhere?
I also cannot debug any of the URLs of my website; I get the same error: "Error parsing input URL, no data was scraped".
The website I'm having problems with is here: http://www.vabameedia.ee/vm/184/h%C3%A4da-ei-anna-h%C3%A4beneda.html
This is one of the pages where nothing on the page itself reports an error, but Facebook still can't reach it: http://www.vabameedia.ee/vm/178/craig-parks-%C3%BChek%C3%A4eline-krossisoitja.html
For people experiencing the same problem but for different causes, I discovered a few interesting things about how Facebook "scrapes" pages by checking the server logs while doing some trials.
First of all: if you have never tried to share a page with FB, FB has never tried to scrape it, and it will not try to do so just because you put the URL in the Debug tool.
That's the first reason you get the error: it just means that FB has no information on the page; you must "force" it to scrape the page.
The first time you try to share a page, FB scrapes it (it requests the first 40 KB of the page from your server and analyses the Open Graph tags).
What can happen is that you do not see the image: Facebook Share Dialog does not display thumbnails on first load.
The reason is that FB is still scraping your page and caching the image behind the scenes. The next time, in fact, you get the image as well.
How to solve it? Pre-caching: https://developers.facebook.com/docs/sharing/best-practices#precaching
or simply add the following alongside your existing og:image tag:
<meta property="og:image:width" content="450"/>
<meta property="og:image:height" content="298"/>
I was pulling my hair out trying to fix this issue. Hours and hours of troubleshooting to no avail. After speaking with one of our programmers about an unrelated topic, I thought of something to try as a long shot.
Much to my surprise, it worked!!!
This is the reason behind the problem and my solution for it:
When you draft a post in WordPress it generates a link based on your article's title (unless you manually change it). The title of my article included special characters; however, the auto-generated link didn't display these special characters, only hyphens to replace the spaces. Should be fine, right? Wrong! Somewhere embedded in metadata and code in the WordPress platform those special characters remain, and they mess up the way Facebook pulls info from the article being linked to. This is a problem because certain special characters invalidate hyperlinks.
For example:
Article Title: R[eloaded]
Auto-generated hyperlink DISPLAYED in WordPress "Permalink" field: http://www.example.com/reloaded
Actual WordPress Auto-generated hyperlink: http://www.example.com/r[eloaded]
Those brackets will invalidate the link and Facebook will be unable to pull any information (ie pictures) from it.
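For illustration only (this is a rough sketch, not WordPress's actual sanitize_title() routine), the kind of slug cleanup that avoids the problem looks like this: strip anything outside letters, digits and hyphens before it ever becomes part of the URL.

    import re
    import unicodedata

    def slugify(title):
        """Turn an article title into a URL-safe slug: drop accents,
        remove characters such as [ and ] that break links, and
        collapse whitespace into single hyphens."""
        ascii_title = (
            unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
        )
        cleaned = re.sub(r"[^A-Za-z0-9\s-]", "", ascii_title)
        return re.sub(r"[\s-]+", "-", cleaned).strip("-").lower()

    print(slugify("R[eloaded]"))  # -> "reloaded", nothing left to invalidate the link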
Solution:
(1) Simply, manually change the WordPress hyperlink address to something that doesn't include any special characters (this will not change the title of your article).
(2) Click "Update" to change the post to include the new hyperlink.
(3) Click "Purge from Cache" in the WordPress window
(4) Refresh your Facebook browser window
(5) Paste the new hyperlink for your article
(6) Enjoy your Facebook post with a preview image and information
Sidenote: Don't pull your hair out over Facebook, it's not worth it. =)
If you're using WordPress, edit the post in question to change the permalink (just alter it slightly), then update the post. Using the new permalink in the Facebook OG debugger should now work.
It's a weird fix, but I think it takes care of a problem caused by special characters being used in the title of a post, which is then used to make the permalink.
It's all a DNS issue; I was having the same problem and resolved it by updating the domain's name servers to the actual name servers.
In my case my domain was pointed to ns1.websterz.net and ns2.websterz.net, and on that server I had a DNS redirect to my other server (where the website is hosted). I just updated the domain's name servers to the actual name servers of the server my website is hosted on. This was an account migration; I had forgotten to update the name servers to those of the new server.
Everything works fine now.
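If you want to double-check the result of such a migration, a minimal sketch using only the standard library (with placeholder values for the domain and the expected server IP) confirms that the domain now resolves straight to the hosting server:

    import socket

    DOMAIN = "example.com"        # placeholder: your domain
    EXPECTED_IP = "203.0.113.10"  # placeholder: the IP of the server hosting the site

    # After the name server update propagates, both hostnames should resolve
    # directly to the hosting server rather than to the old redirecting server.
    for host in (DOMAIN, "www." + DOMAIN):
        ip = socket.gethostbyname(host)
        print(host, "->", ip, "(OK)" if ip == EXPECTED_IP else "(unexpected)")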

Redirects in Ektron 8.6.1

Has anyone played with the new redirect feature in Ektron 8.6?
We tested it (in 8.6.0) before upgrading and were happy with it. But when it came time to do the upgrade, Ektron had released 8.6.1, so we upgraded directly to that.
Now we are having trouble with the redirect feature. (Yes, we should have tested everything again in 8.6.1 before upgrading)
Now if we try to add a redirect rule for an existing page in the CMS, it does not work.
But if we create a redirect rule for a page that does not exist, then try to hit that address, the redirect works fine.
We need the redirects to work for existing pages in the CMS.
To clarify what "working" and "not working" means...
If I have an existing page in the CMS with manual alias of "/erc/lucien.apsx", I can create an entry in the redirect table like this...
Adding this entry generates no errors, but when I visit the page, all I see is the regular old page I created. NOT the Google site it should be redirecting to. I do not get any 404 errors.
But if I create a redirect entry for a page that does not already exist, like this...
It works perfectly. If I try to visit the /erc/fake.apsx address, I end up on the Google site, as expected.
(FYI, we create a "fake" page in the CMS for external content so we can attach metadata to it and make it searchable in taxonomies, but then provide a link to the "real" page. I want to use redirects here so users don't have to do this extra click)
I suspect it might be cache related -- the original URL gets cached as an alias, then subsequent requests to that URL are redirected to the quicklink without the need for a db look up. When you add the redirect, it’s probably not clearing the old item from the cache. I'd try an IIS reset after you add the URL redirect and see if that clears up the issue.
An "outside the box" (of Ektron) answer to this is to place the redirect at the web server rather than in the Aliases section of the Ektron CMS.
The server I work on uses IIS and I have this set up for several pages.

Domain blocked and no data scraped

I recently purchased the domain www.iacro.dk from UnoEuro and installed WordPress planning to integrate blogging with Facebook. However, I cannot even get to share a link to the domain.
When I try to share any link on my timeline, it gives the error "The content you're trying to share includes a link that's been blocked for being spammy or unsafe: iacro.dk". Searching, I came across Sucuri SiteCheck which showed that McAfee TrustedSource had marked the site as having malicious content. Strange considering that I just bought it, it contains nothing but WordPress and I can't find any previous history of ownership. But I got McAfee to reclassify it and it now shows up green at SiteCheck. However, now a few days later, Facebook still blocks it. Clicking the "let us know" link in the FB block dialog got me to a "Blocked from Adding Content" form that I submitted, but this just triggered a confirmation mail stating that individual issues are not processed.
I then noticed the same behavior as here and here: When I type in any iacro.dk link on my Timeline it generates a blank preview with "(No Title)". It doesn't matter if it's the front page, a htm document or even an image - nothing is returned. So I tried the debugger which returns the very generic "Error Parsing URL: Error parsing input URL, no data was scraped.". Searching on this site, a lot of people suggest that missing "og:" tags might cause no scraping. I installed a WP plugin for that and verified tag generation, but nothing changed. And since FB can't even scrape plain htm / jpg from the domain, I assume tags can be ruled out.
Here someone suggests 301 Redirects being a problem, but I haven't set up redirection - I don't even have a .htaccess file.
So, my questions are: Is this all because of the domain being marked as "spammy"? If so, how can I get the FB ban lifted? However, I have seen examples of other "spammy" sites where the preview is being generated just fine, e.g. http://dagbok.nu described in this question. So if the blacklist is not the only problem, what else is wrong?
This is driving me nuts so thanks a lot in advance!
I don't know the details, but it is a problem that Facebook has with websites hosted on shared servers, i.e. the server hosting your website also hosts a number of other websites.
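One way to investigate from your side (a hedged sketch, assuming the third-party requests package) is to compare how the server answers a normal browser request with how it answers a request identifying itself as Facebook's crawler; some shared hosts treat the facebookexternalhit user agent differently:

    import requests

    URL = "http://www.iacro.dk/"

    # Compare a normal request with one sent using Facebook's crawler user agent.
    browser = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
    fb_bot = requests.get(
        URL,
        headers={
            "User-Agent": "facebookexternalhit/1.1 "
            "(+http://www.facebook.com/externalhit_uatext.php)"
        },
        timeout=10,
    )

    print("browser:", browser.status_code, len(browser.content), "bytes")
    print("fb bot: ", fb_bot.status_code, len(fb_bot.content), "bytes")

If the crawler request gets an error page or an empty body while the browser request looks normal, the hosting provider, not Facebook, is the place to start.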

Problem getting my domain redirect to update .html files in Dropbox after using iWeb SEO Tool

So I created a nice six-page website, hutchspropertyandtree.co.nr, using freedomain.co.nr via my Dropbox public folder. Everything was working and updating properly until I updated it with iWeb's SEO Tool. I added meta and title tags as well as a description, etc. The PROBLEM is that even though my .html files in Dropbox are correct and show all the new code and tags, when I open my domain hutchspropertyandtree.co.nr it doesn't show any of my recent SEO Tool updates.
I'm thinking that the cheap domain name from .co.nr is the problem. Is it possible that the default tags, titles and keywords entered into the .co.nr website creation boxes are overwriting the newer ones in the HTML within my Dropbox?
But that still doesn't explain why a stat counter code and Google Analytics code, in the footer and header respectively, still do not show up when I view the source in the browser.
PLEASE PLEASE HELP.
It's because the page at hutchspropertyandtree.co.nr uses a frame to show the content from another location. The meta information comes from the page with the frame, not the page in the frame. You should be able to see the content of the frame using an inspector (comes with all browsers these days) or "View frame source", if your browser does that.
Note that any search engine hits to your pages will link to the dropbox URL, not the frame page (that has essentially no content from the viewpoint of a search engine). If you want search engine results to show up under that domain, you'll have to get hosting that lets you point a domain directly to it.
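You can also see the frame without opening a browser inspector; a small sketch, assuming the third-party requests package, fetches the .co.nr page and prints the src of any frame it wraps:

    import re
    import requests

    URL = "http://hutchspropertyandtree.co.nr/"

    # The .co.nr page itself is just a thin wrapper; the real content lives at
    # whatever URL the frame points to (the Dropbox public folder in this case).
    html = requests.get(URL, timeout=10).text
    print(re.findall(r'<i?frame[^>]+src=["\']([^"\']+)', html, flags=re.IGNORECASE))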