Googlebot is not reading dynamic content - AdSense

My website is fully dynamic.
Meta tags, Open Graph tags, and content are all created dynamically on the web pages.
I might be doing something wrong. Please guide me on how to get approved for the Google AdSense program.
Google AdSense gave the reason "Insufficient content" for this.

I think the only real answer is to implement some kind of partial caching. If the needed content is not in the source code of your pages, it won't be indexed.
What exactly do you mean by "fully dynamic" and what parts do you want to be indexed?
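Whatever the details, the key point is that crawlers evaluate the HTML your server actually sends. As an illustration only, here is a minimal sketch, assuming a Java servlet stack (the servlet name, titles, descriptions, and body text are all hypothetical placeholders), of rendering the tags and content on the server so they show up in the page source:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: write the title, meta/Open Graph tags, and main
// content into the initial HTML response, so crawlers see them in the
// page source instead of an empty shell filled in later by JavaScript.
public class ArticleServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Placeholder values; a real app would load these from its
        // database or a server-side cache keyed by the requested URL.
        String title = "Example article";
        String description = "A short summary of the article.";
        String body = "<p>The full article text, rendered server-side.</p>";

        resp.setContentType("text/html;charset=UTF-8");
        PrintWriter out = resp.getWriter();
        out.println("<!DOCTYPE html><html><head>");
        out.println("<title>" + title + "</title>");
        out.println("<meta name=\"description\" content=\"" + description + "\">");
        out.println("<meta property=\"og:title\" content=\"" + title + "\">");
        out.println("</head><body><h1>" + title + "</h1>" + body + "</body></html>");
    }
}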

Related

how to get full news content from an RSS feed in J2EE

I am developing a site which is supposed to aggregate the news content of other sites, something like this, but without redirecting to the host site to read the news content.
The problem is that I don't know the best way to get the complete content. I know I can use each site's RSS feed, but a feed contains only a short description of each news item, not the whole story. I have also read related questions on SO like these:
How to get the full content from the rss feed in javascript
How to extract the full content from a partial content rss
but none of them solved my problem.
So I want to ask: what is the best way to get the whole content of news items from different sites, if it is necessary to go to them directly?
I am sorry for my bad English; if my question is not clear enough, I can explain it further.
Thanks in advance
You could use a web scraping library like boilerpipe to extract content from news sites, but scraping breaks easily (for example, if the target site changes its layout), and there may be legal issues in extracting the full content of other sites and displaying it on yours.
Edit: I tried the boilerpipe API demo, and the library seems very good at extracting articles from web pages.
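For reference, a minimal sketch of boilerpipe usage (the article URL is a placeholder; in practice you would feed it the article links found in each RSS item):

import java.net.URL;
import de.l3s.boilerpipe.extractors.ArticleExtractor;

// Minimal boilerpipe usage: ArticleExtractor strips navigation, ads,
// and other boilerplate, returning just the main article text.
public class NewsExtractor {
    public static void main(String[] args) throws Exception {
        URL articleUrl = new URL("http://example.com/some-news-article");
        String fullText = ArticleExtractor.INSTANCE.getText(articleUrl);
        System.out.println(fullText);
    }
}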

Grouping of website pages in Google search

I have a website, slateone.com. When I search for slateone.com on Google, it returns a result with a group of links shown together as a single entry, whereas when I search for slateone it returns the same links separately, without any grouping.
I tried the same kind of search for popular websites like Facebook, and Google shows the grouped result for both facebook.com and facebook.
Please guide me on this issue.
Thanks in Advance!!
-Vinay
You are actually referring to sitelinks; they are completely automated and not editable.
First of all, your website needs to have the title attribute defined on its link tags. This will help Google's algorithm find them, but note that it does not guarantee sitelinks will appear.
The exact algorithm is known only to Google; the fix I mentioned will make the links more readable for the bots, which should help.
The official information can be found here: https://support.google.com/webmasters/answer/47334?hl=en

output/rendered text cannot be seen in source code

All Facebook social plugins have this feature:
your Facebook name can be seen on the web page, but when you look in the source code you cannot see it.
I need to know why, and how this works.
This feature may be used to defeat plagiarism / text-content parsers.
Example:
https://developers.facebook.com/docs/reference/plugins/comments/
The names of Facebook users do not appear in the source code.
Please kindly enlighten me in detail, thanks...
Sure the names do exist; they're in the iframe's content, which is a separate document served by Facebook rather than part of your page's source. You can see the data coming to your browser on the network tab of the developer tools.

Facebook Graph API SEO Comments and Profanity Filter

I'm trying to integrate the Facebook comments left on our site in a way that the content can be crawled by search engines, and is also readable by people (although I highly doubt there will be many) who don't have JavaScript enabled in their browser.
Currently our Facebook comments are displayed via the Facebook comments social plugin (using the <fb:comments href="MY_URL" num_posts="50" width="665"></fb:comments> tag). This ends up rendering an iframe (which is mostly ignored by search engine crawlers), so the plan is to render this information and format it with basic HTML. To do this, the comments are pulled using the Graph API; the result is then displayed only to crawlers and to people with JavaScript disabled.
This all works nicely using the Graph API call (https://graph.facebook.com/comments/?ids=MY_URL), parsing the JSON result and displaying it on the page. The problem is that the <fb:comments> approach filters its results based on a blacklist we have set up in one of our Facebook Apps. The App ID with the relevant blacklist is stored on the page as metadata (<meta property="fb:app_id" content="APP_ID"/>), which the <fb:comments> control evidently uses to filter the comments.
The problem is that the Graph API method does not filter any results, as I guess no blacklist (or App ID containing a blacklist) is specified. Does anyone know how to specify a Facebook App ID in the API call URL, or of another way to avoid fetching comments that violate the terms of the blacklist?
On a side note, I know the debate about filtering content in comments rages on, but the blacklist is a management decision that I have no influence in changing - just in case anyone felt the need to explain why content filtering is or isn't a good idea!
Any thoughts on a solution?
Unfortunately there's no way to access a filtered list of comments using the API. It might be a reasonable request to have this in the API, so you should file a wishlist item in Facebook's bug tracker.
Otherwise, the only solution I can think of is to implement your own filter on your side when retrieving and displaying the comments from the API (see the sketch after the quote below).
According to the Comments plugin documentation, the filter on Facebook's side is implemented as a simple substring match, so it should be trivial to replicate.
A fairly simple substring or regular-expression match should be able to check each comment against a relatively long word list quickly.
(Unfortunately, the tradeoff here is that implementing the filter is easy, but you'd also need to write an interface so that whoever updates the list of disallowed words can maintain it for both the Facebook plugin and your own filtering.)
Quote from the docs:
The comment is checked via substring matching. This means if you blacklist the word 'at', if the comment contains the sequence 'a' 't' anywhere it will be marked with limited visibility; e.g. if the comment contained the words 'bat', 'hat', 'attend', etc it would be caught.
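To illustrate, here is a minimal sketch of such a substring filter in Java (the class name, word list, and sample comments are hypothetical; in practice each comment pulled from the Graph API JSON would be run through it before rendering):

import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the substring-based blacklist check described
// in the docs quoted above: a comment is suppressed if it contains any
// blacklisted word anywhere as a substring (case-insensitive).
public class CommentBlacklistFilter {
    private final List<String> blacklist;

    public CommentBlacklistFilter(List<String> blacklist) {
        this.blacklist = blacklist;
    }

    public boolean isAllowed(String comment) {
        String lower = comment.toLowerCase();
        for (String word : blacklist) {
            if (lower.contains(word.toLowerCase())) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        CommentBlacklistFilter filter = new CommentBlacklistFilter(Arrays.asList("at"));
        System.out.println(filter.isAllowed("nice hat"));  // false - "at" occurs in "hat"
        System.out.println(filter.isAllowed("well done")); // true
    }
}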
Pretty sure there is no current way of doing this from the Graph API; the only thing I can suggest is taking the blacklist and building your own filter.

Get Google to index links from JavaScript-generated content

On my site I have a directory of things which is generated through jQuery AJAX calls, which subsequently create the HTML.
To my knowledge, Google and other bots aren't aware of DOM changes made after the page load, and won't index the directory.
What I'd like to achieve is to serve the search bots a dedicated page which contains only the links to the things.
Would adding a noscript tag to the directory page be a solution? (In the noscript section, I would link to a page which merely serves the links to the things.)
I've looked at both robots.txt and the robots meta tag, but neither seems to do what I want.
It looks like you stumbled on the answer to this yourself, but I'll post the answer to this question anyway for posterity:
Implement Google's AJAX crawling specification. If links to your page contain #! (a URL fragment starting with an exclamation point), Googlebot will send everything after the ! to the server in the special query string parameter _escaped_fragment_.
You then look for the _escaped_fragment_ parameter in your server code and, if it is present, return static HTML; a sketch follows below.
(I went into a little more detail in this answer.)
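As a rough illustration, here is a minimal hypothetical sketch of the server-side half in a Java servlet (the servlet name, JSP path, and renderStaticHtml helper are all placeholders):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: when Googlebot rewrites example.com/#!directory
// to example.com/?_escaped_fragment_=directory, serve a static HTML
// snapshot; everyone else gets the normal JavaScript-driven page.
public class DirectoryServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String fragment = req.getParameter("_escaped_fragment_");
        if (fragment != null) {
            // Emit plain <a href> links built from the same data the
            // AJAX endpoint normally returns.
            resp.setContentType("text/html;charset=UTF-8");
            resp.getWriter().println(renderStaticHtml(fragment));
        } else {
            req.getRequestDispatcher("/directory.jsp").forward(req, resp);
        }
    }

    // Placeholder: a real implementation would query the directory data.
    private String renderStaticHtml(String fragment) {
        return "<html><body><a href=\"/things/1\">Thing 1</a></body></html>";
    }
}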