I want to disable the cache for a Magento 2.15 CMS page backend

I need to update the CMS page often, hence I want to disable the cache for that particular page. I have tried using XML in the CMS page's "Layout Update XML" field with the following code:
<head>
<meta http-equiv="Cache-Control" content="no-cache"/>
</head>
Still I could not disable the cache.
Thank you

I have also tried the following code in the same "Layout Update XML" field:
<block class="Magento\Cms\Block\Page" name="cms_page" cacheable="false"/>
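Note that the <meta> tag only affects the browser cache; Magento's full page cache (built-in or Varnish) is applied server-side and ignores it. Marking any block on the page as non-cacheable excludes the whole page from the full page cache. A minimal sketch of such a layout update (the block name is illustrative):
<referenceContainer name="content">
    <!-- Any block with cacheable="false" makes Magento skip
         full page caching for the entire page. -->
    <block class="Magento\Framework\View\Element\Template"
           name="cms.page.cache.disable"
           cacheable="false"/>
</referenceContainer>
Bear in mind this disables caching for the whole page on every request, so use it only where the content really must be fresh.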


Using .php with .css, or something better?

I have a login page for my web site. The login file is "index.php"; this is the first page you come to when visiting my site. The rest of my site is HTML, with a style.css file providing the look. Now my question is: how do I get my index.php file to look like the rest of my web site?
Right now, when you come to mydomain.com/index.php, it is just a white page with a login and password box. I would like my login page to look like the rest of my web site. Can someone please point me to how to do this?
I have other .php files that would also need to be linked with the .css, such as register.php and so forth. Thanks guys.
If there is a different/better method of doing what I need, please feel free to chime in. I'm all ears at this point; I've been trying to do this for two days.
You link the stylesheet the same way you would in any other HTML page. You have probably already noticed that every PHP file contains HTML code; just keep the <link> tag outside the PHP brackets:
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" type="text/css" href="styles.css">
</head>
<body>
<?php
// PHP code in here
?>
</body>
</html>
If you don't find the usual HTML markup, search for an include call in the PHP file. The HTML header may be in another PHP file and pulled in from there, like this:
include '_header.php';
You can use the CSS file similarly to how you use it in your HTML files. You can either place the <link> tag in the HTML around your PHP code, or emit it with an echo call from within your PHP code.
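For example, a minimal sketch of the echo approach (the styles.css file name is taken from the earlier answer and may differ in your setup):
<?php
// Emit the stylesheet link from within PHP, e.g. near the top of index.php.
echo '<link rel="stylesheet" type="text/css" href="styles.css">';
?>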
If you're using a login page, though, are you maintaining that security with the rest of your site by using PHP on your other pages?

Indexing an HTML page that redirects onload

I have a pure GWT-based website, and as we know, search engines cannot index pure GWT-based websites. Thus, I have created an alternate web page, shown below, which is stored as a separate HTML file in the war folder. The purpose of this web page is to list and index details regarding my website. This page is never displayed on my website; it is meant only for indexing. The URL leading to this web page is part of sitemap.xml, so I am assuming the HTML below will be indexed because it is part of the sitemap. So here are my questions:
Will the content I give in the div with id "crawler" be indexed, given that it is scheduled for removal onload and that the browser is redirected to another URL on load?
Is there a better way to get content indexed for a pure GWT website which does not have any HTML-based user interface?
I can also have URLs that invoke a servlet and return a response meant for indexing. But then the same URL will be displayed in search results, which is not useful. In other words, I am trying to figure out a way in which the content gets indexed, but when the user clicks the search result he is redirected to the home page instead of being shown the indexed content.
<html>
<head>
<script>
function load() {
    // Remove the crawler-only content, then send real visitors to the site.
    var element = document.getElementById("crawler");
    element.parentNode.removeChild(element);
    window.location.href = 'http://<mysite>.com';
}
</script>
</head>
<body onload="load()">
<div id="crawler">
<CONTENT TO BE INDEXED>......
</div>
</body>
</html>
As you can see, the div (crawler) that contains all the content meant for indexing is removed as soon as the body loads. Apart from this, the page also redirects to the home page of the site on load.
The crawler will read in the entire contents of the page for indexing, so it will have no trouble picking up the portion within the div. The onload is not executed by the crawler prior to reading the page.
A method I have used in the past was to generate static html versions of the pages and reference these through the sitemap.xml. Users landing on the html page would then be directed to the equivalent dynamic page when they click on a link (ie: Buy or Specifications). This worked well for search engine placement with many pages appearing in the top ten.
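A minimal sketch of such a sitemap entry, following the sitemap protocol (the URL and change frequency are illustrative):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <!-- Static, crawlable HTML snapshot of a dynamic GWT page -->
    <url>
        <loc>http://example.com/static/specifications.html</loc>
        <changefreq>weekly</changefreq>
    </url>
</urlset>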
The best way to notify search engines about an undiscoverable website's content is to create an HTML version (as you did). If you serve redirects based on detecting the crawler, search engines will not love you. I think you have to fill your HTML page with the relevant content and add a
<link rel="canonical" href="https://gwtsite.com/exact_url"/>
tag to the page's head section. This tells search engines that the other URL should appear in the SERPs instead of the HTML one.

Disadvantage of redirecting to an error page when JavaScript is disabled

I searched the web and didn't find any website using this technique. When JavaScript is disabled or not supported by the browser, all those websites show a small error box above their main content; no one redirects to an error page. I am using the following code on my site to do this:
<noscript>
    Javascript is disabled.
    <meta http-equiv="refresh" content="0; url=http://www.wrangle.in/jserror.aspx">
</noscript>
But since my research turned up so little usage of this technique on the web, I want to know: is there any disadvantage to it that keeps these websites from using it?
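For comparison, the inline pattern those sites use is simply a <noscript> block in the page body; a minimal sketch (the class name is illustrative):
<noscript>
    <!-- Shown in place, above the main content, instead of redirecting -->
    <div class="js-warning">
        JavaScript is disabled in your browser. Some features of this site may not work.
    </div>
</noscript>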

Browser caching after logout

After logging out of the application, if I press the back button, the pages are still served from the browser cache.
I placed meta tags in the master pages, but it is not working.
I'm not sure which meta tags you're talking about, but normally the following tags would "expire" a page; you can put them in your templates:
<meta http-equiv="pragma" content="no-cache">
<meta http-equiv="cache-control" content="no-cache">
<meta http-equiv="expires" content="0">
Hope this helps.
Like #m1ke said, you will be better off controlling caching by setting the correct HTTP headers rather than meta tags, because, as you have probably discovered yourself, many browsers ignore the caching directives in meta tags.
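Concretely, the response headers to send on pages behind the login are along these lines (the Expires value is a conventional belt-and-braces setting for older caches):
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0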
I barely worry about HTTP headers or caching in my web apps though. I simply set the default caching policy in the web server to "access plus 0 days" (ie. don't cache anything) and then put in specific entries for jpg, png and other assets that I do want cached. All you really need to worry about then is clearing the session on logout and you should be OK.
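For instance, assuming an Apache server with mod_expires (the "access plus 0 days" syntax comes from that module), a sketch of such a policy:
# Default policy: expire everything immediately, i.e. don't cache.
ExpiresActive On
ExpiresDefault "access plus 0 days"
# Specific entries for static assets that should be cached.
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType image/png "access plus 1 month"
ExpiresByType text/css "access plus 1 week"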
I would highly recommend reading the following article on caching: http://www.mnot.net/cache_docs/

Using wget to download dokuwiki pages in plain xhtml format only

I'm currently modifying the offline-dokuwiki[1] shell script to fetch the latest documentation for an application, for automatic embedding within instances of that application. This works quite well, except that in its current form it grabs three versions of each page:
1. The full page, including header and footer
2. Just the content, without header and footer
3. The raw wiki syntax
I'm only actually interested in 2. That version is linked to from the main pages by an HTML <link> tag in the <head>, like so:
<link rel="alternate" type="text/html" title="Plain HTML"
href="/dokuwiki/doku.php?do=export_xhtml&id=documentation:index" />
It is the same URL as the main wiki page, only with 'do=export_xhtml' in the query string. Is there a way of instructing wget to download only these versions, or to automatically add '&do=export_xhtml' to the end of any links it follows? If so, that would be a great help.
[1] http://www.dokuwiki.org/tips:offline-dokuwiki.sh (author: samlt)
DokuWiki accepts the do parameter as an HTTP header as well. You could run wget with the option --header "X-DokuWiki-Do: export_xhtml".
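A sketch of what the adjusted wget call inside the script might look like (the URL, recursion depth, and other options are illustrative):
# Mirror the wiki; the header asks DokuWiki to render every page as
# plain XHTML, equivalent to appending do=export_xhtml to each URL.
wget --recursive --level=5 --no-parent \
     --header "X-DokuWiki-Do: export_xhtml" \
     "http://example.com/dokuwiki/doku.php?id=documentation:index"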