How to make Google crawler see asynchronously loaded data - axios

I'm currently building a blog using Nuxt 3 + Axios to load data from a headless CMS.
Until the content is loaded, I display a content placeholder and replace it once the Axios request is done.
The request is launched in the mounted hook.
I tried to preview the page using Google Search Console, but the article content is not present on the page.
I'm worried that this will hurt the site's SEO ranking, and I would like to know whether I need to load the data on the server side, so that it is already there when the page is delivered, and if so, how.
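For reference, a minimal sketch of the server-side approach in Nuxt 3, using the built-in useAsyncData and $fetch instead of axios in mounted. The CMS URL and the title/body fields below are placeholders, not a real API:

    <script setup lang="ts">
    // Runs during server-side rendering, so the article is already part of
    // the HTML that Googlebot receives. The endpoint and the title/body
    // fields are placeholders for your headless CMS.
    const route = useRoute()
    const { data: article } = await useAsyncData(
      `article-${route.params.slug}`,
      () => $fetch(`https://cms.example.com/articles/${route.params.slug}`)
    )
    </script>

    <template>
      <article v-if="article">
        <h1>{{ article.title }}</h1>
        <div v-html="article.body" />
      </article>
    </template>

Because useAsyncData runs during SSR, the first response already contains the content; Nuxt 3's useFetch is an equivalent shorthand for this pattern.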

Related

Is there any way to get the HTML from a web page once the JavaScript is loaded in a Flutter app?

I'm working on a URL preview widget, so I'd like to extract the meta tags from the HTML of a given URL.
However, the problem is that websites like Twitter don't return the entire HTML when they detect that there's no JavaScript engine running (e.g. when doing a plain GET request with the http package).
So, I'd like to know if there's any workaround for these cases, for example, using some kind of headless browser to get the entire HTML.
Thanks!
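One workaround along these lines is to run a headless browser on a small backend and have the app call that instead of fetching the page directly. A minimal sketch in Node/TypeScript using Puppeteer (the function and the service around it are assumptions, not an existing API):

    import puppeteer from 'puppeteer';

    // Fetch a URL with a real JavaScript engine and return the HTML after
    // the page has finished loading, so client-rendered meta tags exist.
    async function renderedHtml(url: string): Promise<string> {
      const browser = await puppeteer.launch();
      try {
        const page = await browser.newPage();
        // 'networkidle0' waits until the page stops making requests.
        await page.goto(url, { waitUntil: 'networkidle0' });
        return await page.content();
      } finally {
        await browser.close();
      }
    }

The Flutter widget would then request the rendered HTML from this backend and extract the meta tags from it.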

Precache HTML pages in Flutter

I have a list of HTML pages that I show in my web_view. The problem is that they load slowly when the network is slow, or don't show at all if the user is offline. So I want to pre-cache all of the URLs before navigating to that page. After that, I want to load the pages from the cache and navigate through them on swipe. (The swipe part is done, with the URLs loading in real time.) My question to you is: how do I pre-cache every URL and load it in my web_view later?
If you are sure that those pages are static, I have a suggestion for you.
Like you said, you can pre-cache them.
Initially, when the internet is available, you fetch the code of those URLs using the http package and store it in local storage, say in a .txt file.
Then, when you want to show the page in your app, since it is a static page, you can read the HTML code from local storage and render it in your app using the html package.
Hope this answers your question.
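To make the flow concrete, here is the same fetch-once, read-later idea sketched in TypeScript; the question targets Flutter/Dart (http and html packages), so treat this purely as an outline of the flow, with made-up paths:

    import { promises as fs } from 'fs';

    // 1) While online: download each page once and store the raw HTML.
    async function precache(url: string, cachePath: string): Promise<void> {
      const res = await fetch(url);
      await fs.writeFile(cachePath, await res.text(), 'utf8');
    }

    // 2) Later, possibly offline: read the stored HTML back and hand it
    //    to whatever renders it (the html package in the Flutter case).
    async function loadCached(cachePath: string): Promise<string> {
      return fs.readFile(cachePath, 'utf8');
    }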

Auto submit form for web crawling

I've got an old ASPX+XML website created by an external agency here. I only have access to sections of the XML as the web.config is locked.
I want to crawl this site to scrape the pages and capture the relational data. I can do a blank search, which returns all the data; from there a web crawler would be fine. However, I cannot find a web crawler that will hit search. I've tried a JavaScript snippet that submits the form on page load, but this still does not work (I guess it's not fast enough).
The URL does not contain the query string (so I can't just do a blank search and copy the results URL, for example).
Any ideas?
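One option: a scriptable headless browser can press the search button itself and wait for the results before scraping, which sidesteps the timing problem of a script injected on page load. A rough sketch with Puppeteer, where the URL and the button selector are placeholders for the real form:

    import puppeteer from 'puppeteer';

    // Open the search page, submit a blank search, and capture the
    // results HTML once the postback has completed.
    async function scrapeBlankSearch(searchUrl: string): Promise<string> {
      const browser = await puppeteer.launch();
      try {
        const page = await browser.newPage();
        await page.goto(searchUrl, { waitUntil: 'networkidle0' });
        // Placeholder selector: point this at the actual search button.
        await Promise.all([
          page.waitForNavigation({ waitUntil: 'networkidle0' }),
          page.click('#btnSearch'),
        ]);
        return await page.content();
      } finally {
        await browser.close();
      }
    }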

Is it possible to add adverts to a custom Facebook Page Tab app?

I need to create a custom Facebook Page Tab app which will show an external site in an iframe. This needs to have adverts on it, but I'm not sure if this is possible, as the site is hosted externally.
I'm not sure if I need to sign up to the Facebook Audience Network to get approved etc. either?
Any help or advice would be great.
Many sites do not allow themselves to be shown in an iframe, and browsers enforce this (via the X-Frame-Options header, for example). Imagine the case where you work hard to create a site and others show all your content in iframes. That is naturally frustrating.
However, there is a candidate solution: suppose you create a page which sends a request to the other site and appends all of its content into the body and head of your page. This is very much possible, so the solution (sketched in code after these steps) is to:
1. Create a page on your site; let's call it outsider.
2. In the server-side code of your outsider page, send a request to the desired page to be shown.
3. You will get the HTML of the page. Process it and include its content in the head and body of outsider. This includes:
3.1. Checking that all the CSS can be reached, as the target page might refer to local CSS which is unreachable from your end. Process the URLs of the CSS files.
3.2. Checking that all the JavaScript can be reached, as the target page might refer to local JS which is unreachable from your end. Process the URLs of the JS files.
3.3. Applying the idea described in 3.1 and 3.2 to other resources, like images, until you are satisfied with the content of outsider.
4. Create an iframe whose source points to outsider. outsider is within your own scope, so it should be shown.
NOTE: If the site that owns the target page does not like the possibility of you showing its content inside iframes, it might protect the page by, let's say, including JavaScript that checks whether the page is inside an iframe. Remove that code while processing the response to your request. If nothing else prevents you from showing the page in an iframe, then you should achieve success.
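A very rough sketch of the outsider page, written here in Node/TypeScript with Express purely as an illustration (a real implementation would rewrite URLs with an HTML parser rather than a regex, and would handle scripts, images, and frame-busting code as described above):

    import express from 'express';

    const app = express();

    // The 'outsider' page: fetch the target page server-side, make its
    // relative CSS/JS/image URLs absolute, and serve the result from our
    // own origin so it can be iframed.
    app.get('/outsider', async (_req, res) => {
      const target = 'https://example.com/page-to-show'; // placeholder URL
      const html = await (await fetch(target)).text();
      const base = new URL(target).origin;
      // Naive version of steps 3.1-3.3: turn root-relative src/href
      // attributes into absolute ones.
      const rewritten = html.replace(/(src|href)="\/(?!\/)/g, `$1="${base}/`);
      res.send(rewritten);
    });

    app.listen(3000);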

SEO and JavaScript Data Load

These days, modern sites are becoming more and more service-oriented, like Facebook/Gmail.
A main page is loaded, and then AJAX requests fetch all sorts of data and add it to the page. This is also something that is promoted in ASP.NET MVC4 with the Web API.
So now let's say we want to create a product category page for an e-shop. It has come to my understanding that the way to go with this implementation is to create a nice layout and create a Web API that will retrieve all data on request.
So we'll have a URL like
/api/Products
that will return a JSON with all of our products, and then we can build on this API by adding filters/paging (/api/Products?sort-by=name, for example) or anything else that will return the filtered JSON, passing it back and forth with AJAX requests and offering the user an excellent experience.
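(The question is about ASP.NET MVC4 Web API; just to make the shape of such an endpoint concrete, here is an equivalent sketch in TypeScript with Express, with stand-in data:)

    import express from 'express';

    const app = express();
    const products = [{ name: 'Banana' }, { name: 'Apple' }]; // stand-in data

    // GET /api/Products?sort-by=name returns the (optionally sorted)
    // product list as JSON, mirroring the URLs above.
    app.get('/api/Products', (req, res) => {
      const result = [...products];
      if (req.query['sort-by'] === 'name') {
        result.sort((a, b) => a.name.localeCompare(b.name));
      }
      res.json(result);
    });

    app.listen(3000);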
My question with this now is what happens with SEO.
So a few years ago, without one-page AJAX/service-oriented sites, we would have
http://website.com/apples/
http://website.com/apples/2/
that would load the list of the apples with pagination.
Now the site would be
http://website.com/apples/
however, it wouldn't load the apples directly; instead it would load a blank page and call the service
/api/apples
that would return a json and then load the data on the site.
I read this article from Google, https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot, which didn't convince me. I really don't want to call the service behind the scenes and then string-replace the content into the page.
Is it possible to have
http://website.com/apples/
call the service
/api/apples
and load the data, while at the same time being Google-friendly?
You have a couple of options. You can use HTML5 pushState to update the URL, but then you will also need to create a version of your site that works without JavaScript turned on.
Another option is to use Google's AJAX crawling specification. I don't know which search providers currently support it, but it should be a good way to at least get into Google's search results.
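For illustration, the pushState option might look something like this in browser-side TypeScript, where the list element and the response shape are assumptions:

    // Load a page of apples from the API and keep the URL crawlable.
    async function showApples(pageNum: number): Promise<void> {
      const res = await fetch(`/api/apples?page=${pageNum}`);
      const apples: { name: string }[] = await res.json();
      document.querySelector('#list')!.innerHTML =
        apples.map(a => `<li>${a.name}</li>`).join('');
      // Update the address bar without reloading, so the URLs users copy
      // (and crawlers follow) are normal ones. The server must also be
      // able to render /apples/2/ itself for the no-JavaScript fallback.
      history.pushState({ pageNum }, '', `/apples/${pageNum}/`);
    }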