I have a single page app working over REST services.
My services have the standard HTTP methods.
When first loading a page, it has many elements, such as menus, dropdowns, and information about the general context (user messages, user alerts, etc.).
Am I supposed to have a REST service to load each page element or should I load all the page data at once?
If I load all the data at once: a non-parametrized GET is supposed to load a list of Stocks, so what should I do to load all of the page context (by context I mean the page objects)?
I mean, will I end up with lots of services just to load a menu, a dropdown list, the number of unread messages?
Write a GET service that serves all the data you need at page load; it will act as a template service that sets up your stage.
That way you could have only two AJAX calls (two REST services): the first to load the context data and the second to load the stock data.
I recommend using promises, so that you give control back to the user only once all the data has loaded correctly.
See the Q library: https://github.com/kriskowal/q
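As a minimal sketch of that approach, assuming jQuery for the AJAX calls, two hypothetical endpoints /api/context and /api/stocks, and placeholder render functions:

    // Wrap jQuery's thenable jqXHR in a Q promise (jQuery itself is an assumption here).
    function getJSON(url) {
      return Q($.getJSON(url));
    }

    // Fire both calls in parallel and hand control to the user only when both resolve.
    Q.all([getJSON('/api/context'), getJSON('/api/stocks')])
      .spread(function (context, stocks) {
        renderMenus(context.menus);   // hypothetical render helpers
        renderAlerts(context.alerts);
        renderStockList(stocks);
      })
      .catch(function (err) {
        showErrorBanner(err);         // also hypothetical
      })
      .done();

The page is then driven by two coarse-grained responses instead of one service per menu, dropdown or message counter.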
I hope it helps.
I'm trying to understand how Next.js does dynamic routing, and I'm a little confused about how to properly implement it on my own website. Basically, I have a database (MySQL) of content that keeps growing, let's say they're blog posts, with images stored in GCS. From what I understand you can create a pages/[id].js file in your pages folder that handles dynamically creating routes for new pages, but in order to get a good SEO score, the Google crawlers need to see your content before any JavaScript or data requests are made. So the pages have to be physically available for the content to appear instantly upon loading. So if I have pages/[id].js and I have content added to the database daily, how are physical content files supposed to spontaneously populate the pages folder? And if page files keep getting created, how do I prevent my disk from running out of space? I'm sure there is something I'm not understanding.
I read on nextjs.org that you can have a function getStaticPaths that needs to return a list of possible values for 'id'. I'm wondering: if my site is live and new content (pages) is constantly being added to the database, each with its own unique id, how is it "aware" of those ids? Do I need to write a program or message-queue system that constantly appends new ids to a file that is read by getStaticPaths? And what if I have to duplicate my site on multiple servers around the world, or a server dies? Do I have to keep track of the file's contents in order to boot up a new server with the same content?
From what I understand, in order for Google to see any sort of content on your website, the page's text (content) needs to be static and quickly available via physical files. Images and everything else can be loaded later, since Google's crawlers mainly care about text. So if every post needs to be a physical file in your app's pages folder, how do new page files get created when the content is added to the database?
TL;DR: My main concern is having my content readily available for Google crawlers in order to get a good score for my website. How do I achieve that if content keeps being added to my database?
As you stated before, you can set up getStaticPaths to provide a list of values for id at build time. If I understand correctly, you are most concerned about what happens to new content added after the initial build.
For this you have to return the fallback key from getStaticPaths.
If the fallback key is false, then all ids not specified at build time will go to a 404, and you'd need to rebuild the app every time you add new content. This is what you don't want.
If you set it to true, then the initial values will be prerendered just as before, but new values will NOT go to a 404. Instead, the first user visiting a path with a new id will trigger the rendering of that new page. This allows you to dynamically check for new content when a request hits an id that wasn't available at build time.
It is interesting here that the first visitor will temporarily see a "fallback" version of the page while Next.js processes the request. On that fallback, you would usually just show a loading spinner. The server then passes the data to the client in order to properly render the full page. So in practice, the user first sees a loading indicator, then the page updates itself with the actual content. Subsequent visitors will get the now-prerendered result immediately.
You may now be worried about crawlers hitting that fallback page and not getting SEO content. This concern has been addressed here: https://github.com/vercel/next.js/discussions/12482
Apart from being able to serve new pages after the build, the fallback strategy has another use: it allows you to prerender only a small subset of your website (like your most visited pages), while the remaining pages are generated only when necessary.
From the docs, under "When is fallback: true useful?":

"You may statically generate a small subset of pages and use fallback: true for the rest. When someone requests a page that's not generated yet, the user will see the page with a loading indicator. Shortly after, getStaticProps finishes and the page will be rendered with the requested data. From now on, everyone who requests the same page will get the statically pre-rendered page.

This ensures that users always have a fast experience while preserving fast builds and the benefits of Static Generation."
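To make this concrete, here is a minimal sketch of what pages/[id].js could look like with fallback: true. The helpers getPopularPostIds and getPostById, and the ../lib/db module, are assumptions standing in for however you query your MySQL database:

    // pages/[id].js
    import { useRouter } from 'next/router';
    import { getPopularPostIds, getPostById } from '../lib/db'; // hypothetical data layer

    export async function getStaticPaths() {
      // Prerender only a small subset (e.g. the most recent posts) at build time.
      const ids = await getPopularPostIds();
      return {
        paths: ids.map((id) => ({ params: { id: String(id) } })),
        fallback: true, // ids added after the build are rendered on first request
      };
    }

    export async function getStaticProps({ params }) {
      const post = await getPostById(params.id);
      if (!post) {
        return { notFound: true }; // unknown id -> 404 (Next.js 10+)
      }
      return { props: { post } };
    }

    export default function Post({ post }) {
      const router = useRouter();
      // The first visitor to a brand-new id sees this fallback state
      // while Next.js generates the page.
      if (router.isFallback) {
        return <div>Loading...</div>;
      }
      return (
        <article>
          <h1>{post.title}</h1>
          <img src={post.imageUrl} alt={post.title} />
          <p>{post.description}</p>
        </article>
      );
    }

Once a new id has been rendered, Next.js caches the result and serves it statically to later visitors, so nothing has to pre-populate your pages folder on disk as the database grows.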
We have a requirement to create a News component. There will be news pages which we will author, each containing a Title, Image & Description. I will store these under one node, say /content/myproject/newsnode/news1, news2, and so on.
On the homepage, I want to show the descriptions of the latest 3 authored news items. For that, I'm thinking of using a news component.
I thought of creating 2 components and mapping them, using the Query Builder to fetch the latest news to show on the homepage: one component for the news page, and one component on the homepage to show the latest 3 news items with title, tile image and a short description.
Is there any other approach to this?
If you are using a Dispatcher, the QueryBuilder servlet is blocked by default, and it should stay blocked for obvious (security) reasons.
Since your question is general, I will try to answer generally and on a high level.
There are two possible options I can think of:
1. Make a servlet that retrieves the last 3 news items' information and exposes it as JSON. Then send an AJAX request from the browser and change the view accordingly with jQuery or your front-end framework of choice (see the client-side sketch after this list).
Advantages: no caching, you'll always get the latest news.
Disadvantages: SEO, if you care about that in this case. Search engines will not index the news on the page, since they are not part of the initial markup (not server-side rendered).
2. Create a service to get the last 3 news items' info, then render them in your component via HTL or JSP. Basically, server-side render them.
Advantages: SEO, for the same reason as above.
Disadvantages: you have to invalidate the cache for your page every time a new news component is added, to make sure your end users get the latest news.
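For option 1, the browser side could look roughly like this with jQuery. The servlet path /bin/myproject/latestnews.json, the container element and the JSON field names are all assumptions; use whatever your servlet actually registers and returns:

    // Fetch the latest news as JSON and render the first 3 items.
    $.getJSON('/bin/myproject/latestnews.json', function (items) {
      var $list = $('#latest-news').empty(); // hypothetical container element
      items.slice(0, 3).forEach(function (item) {
        $list.append(
          $('<article>')
            .append($('<h3>').text(item.title))
            .append($('<img>').attr({ src: item.imagePath, alt: item.title }))
            .append($('<p>').text(item.description))
        );
      });
    });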
Hope this helps.
We're using the reporting APIs to access page views for specific page paths (using a regex match), but we're getting a lot of HTTP 500s and 503s, presumably because the requests are timing out due to the volume of data that needs to be scanned.
So, my question is, is there a way to access aggregate data for a URL? (for example, all page views, or all referrers)
Secondly, since a single resource can have multiple URLs (e.g. if the title forms part of the URL and the title is changed), we would need to be able to get aggregate data for a set of URLs (using a regex, for example).
Is this possible?
These days, modern sites are becoming more and more service-oriented, like Facebook/Gmail.
A main page is loaded, and then AJAX requests fetch all sorts of data and add it to the site. This is also something that is promoted in ASP.NET MVC 4 with the Web API.
So now let's say we want to create a product category page for an e-shop. My understanding is that the way to implement this is to create a nice layout and a Web API that retrieves all the data on request.
So we'll have a URL like
/api/Products
that will return JSON with all of our products. We can then build on this API by adding filters/paging (e.g. /api/Products?sort-by=name) or anything else that returns the filtered JSON, and we can pass data back and forth with AJAX requests, offering the user an excellent experience.
My question with this now is what happens with SEO.
So a few years ago, without one-page AJAX/service-oriented sites, we would have
http://website.com/apples/
http://website.com/apples/2/
that would load the list of apples with pagination.
Now the site would be
http://website.com/apples/
however, it wouldn't load the apples instantly; it would load a blank page and call the service
/api/apples
that would return JSON and then load the data into the page.
I read this article from Google, https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot, which didn't convince me. I really don't want to call the service behind the scenes and then do string replacement.
Is it possible to have
http://website.com/apples/
call the service
/api/apples
and load the data, while at the same time being Google-friendly?
You have a couple of options. You can use HTML5 pushState to update the URL, but then you will also need to create a version of your site that works with JavaScript turned off.
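A rough sketch of the pushState approach, assuming jQuery for the AJAX call; loadApples and renderApples are placeholder names for your own fetch-and-render code:

    // Placeholder fetch: /api/apples is the service from the question.
    function loadApples(page) {
      return $.getJSON('/api/apples', { page: page }); // assumes jQuery
    }

    // Load a page of products via AJAX while keeping the address bar
    // on a crawlable URL like /apples/2/.
    function showPage(page) {
      loadApples(page).then(function (apples) {
        renderApples(apples);
        history.pushState({ page: page }, '', '/apples/' + page + '/');
      });
    }

    // Re-render when the user navigates back/forward.
    window.addEventListener('popstate', function (event) {
      if (event.state && event.state.page) {
        loadApples(event.state.page).then(renderApples);
      }
    });

The same /apples/2/ URLs can then be served with real markup to clients without JavaScript (and to crawlers).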
Another option is to use Google's AJAX crawling specification. I don't know which search providers currently support it, but it should be a good way to at least get into Google's search results.
Please give me an idea about managing data in GWT. I am using GWT in my travel portal project, and my pages depend on data from previous pages, but when I press the browser's refresh button the data is lost. Is there any way to manage this problem?
The GWT History class cannot be used to manage page refreshes (only back/forward navigation).
A click on the refresh button sends a request to the server, and the state of the application is reloaded from the server. That's all; you have to deal with it.
If you don't want to lose your data, you have to find a way to save it on the server when it's needed.
If your users have modern browsers, you can use the HTML5 localStorage feature to store the data in the browser between page refreshes.
Check this thread for supported browsers.
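In plain JavaScript terms the API looks like the sketch below (GWT also ships a Storage wrapper around it); the key name and the helper functions are hypothetical:

    // Save the in-progress state while the user works.
    var formData = collectFormState(); // hypothetical helper
    localStorage.setItem('tripDraft', JSON.stringify(formData));

    // After a refresh, restore it if it was saved.
    var saved = JSON.parse(localStorage.getItem('tripDraft') || 'null');
    if (saved) {
      restoreFormState(saved); // hypothetical helper
    }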
You can encode your data in a URL fragment:

    String location = "ny";
    History.newItem("location=" + location);

This will result in a URL fragment of www.example.com#location=ny. If the browser is then refreshed, you can read the fragment back with History.getToken() and decode it to determine that the location is ny.

For multiple parameters you can create a complex fragment and parse it:

    History.newItem("start=" + startLocation + "&end=" + endLocation);

Then the URL would look like www.example.com#start=newyork&end=boston.
The basic idea is to store some state in the URL fragment (the part of the URL after the #), for example your-site.com/app#page-1.
To listen for changes to the fragment, use GWT's History class. The fragment will change when the user goes back/forward, or refreshes the page.
So you could have your app do different things when the URL has #page-1 vs #page-2, etc.
A more generalized and scalable solution to this is something like gwt-platform's Place architecture (along with Presenters, which are also a good idea for large apps).