Workbox CacheFirst always fetching from network (after fulfilling from cache)

I have a web application that wants to cache recently-referenced images to support offline use. The images are not expected to ever change, so I have configured my Workbox-based service worker to use the CacheFirst strategy for all requests from the image server:
// service-worker.js
import { clientsClaim } from 'workbox-core';
import { cleanupOutdatedCaches } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { CacheFirst } from 'workbox-strategies';
import { CacheableResponsePlugin } from 'workbox-cacheable-response';
import { ExpirationPlugin } from 'workbox-expiration';

self.skipWaiting();
clientsClaim();
cleanupOutdatedCaches();

registerRoute(
  ({ request }) => request.url.startsWith('https://example.com'),
  new CacheFirst({
    cacheName: 'images',
    plugins: [
      new CacheableResponsePlugin({ statuses: [200] }),
      new ExpirationPlugin({
        maxAgeSeconds: 30 * 24 * 60 * 60, // 30 days
        maxEntries: 100,
      }),
    ],
  })
);
When my application runs, I can see in the Network tab of Chrome's developer tools that the service worker successfully serves these images after the initial load.
So far so good. But then there is another request in the Network tab indicating that the service worker itself is fetching the image from the network (here, satisfied by the browser's disk cache).
My understanding of CacheFirst is that the service worker should not make a network request at all if the cache satisfies the request. What is causing this behavior?

While debugging, I had missed this message from Workbox in the Chrome developer tools console:
The request for '/service-worker.js' returned a response that does not meet the criteria for being cached.
This led me to find information about opaque responses, and some helpful information describing third-party requests, which allowed me to fix this in my environment using the crossorigin attribute in <img> tags (see the sketch after the snippet below):
<img :src="imagePath"
     crossorigin="anonymous"
/>
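For context, here is a minimal sketch (not from the original post) of why the plugin refused to cache the images: a cross-origin request made without CORS, such as one issued by a plain <img> tag, produces an opaque response, and opaque responses report status 0 rather than 200, so the CacheableResponsePlugin({ statuses: [200] }) check rejects them.

// Sketch: a no-cors cross-origin fetch yields an opaque response.
// The URL is a placeholder for any third-party image.
fetch('https://example.com/image.png', { mode: 'no-cors' }).then((response) => {
  console.log(response.type);   // "opaque"
  console.log(response.status); // 0 -- fails the statuses: [200] test, so
                                // CacheFirst serves the response but never caches it
});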
In retrospect, another clue that the original images weren't being cached at all (even though they were being served by the service worker) was that the Cache Storage explorer in Chrome developer tools showed only the Workbox precache, and not my "images" cache that would hold these cached images.

Related

Error - Failed to add 'SAP-Connectivity-Authentication' header for on-premise connectivity

I am connecting an On-premise S/4 HANA with SAP Cloud Platform trial account. I am using SAP Cloud SDK to fetch all Business Partners from S/4 HANA.
My Cloud Connector is set
My Destination at Sub-Account level is set and can ping to my on-premise system
My Service instances - XSUAA/Destination/Connectivity is set with the application
But I have the following error
Failed to add 'SAP-Connectivity-Authentication' header for on-premise connectivity: no JWT bearer found in the 'Authorization' header of the request. Continuing without a header. Connecting to on-premise systems may not be possible
The code which I am using is:
final List<BusinessPartner> businessPartners =
    new DefaultBusinessPartnerService()
        .getAllBusinessPartner()
        .select(BusinessPartner.BUSINESS_PARTNER)
        .execute(destination);
It seems an AppRouter is recommended for authorization and access, so I tried implementing one, but my approuter shows "Not Found".
Approuter app name: approuter-demo
Below is the xs-app.json
{
  "routes": [
    {
      "source": "^/s4ext/(.*)",
      "target": "/s4ext/$1",
      "destination": "******"
    }
  ]
}
The Manifest file is as below:
---
applications:
  - name: approuter-demo
    routes:
      - route: approuter-demo-*****trial.cfapps.eu10.hana.ondemand.com
    path: approuter
    memory: 128M
    env:
      TENANT_HOST_PATTERN: 'approuter-demo-(.*).cfapps.eu10.hana.ondemand.com'
      destinations: '[{"name":"******", "url": "https://s4ext-***.cfapps.eu10.hana.ondemand.com", "forwardAuthToken": true }]'
    services:
      - xsuaa-demo
      - connectivity-demo
      - destination-demo
Kindly guide me. Thanks.
Your destination type might be wrong; the authorization header is set via the destination.
Try other types in SAP CP -> Connectivity.
Reading your question again I can identify two issues:
This error message in your log:
Failed to add 'SAP-Connectivity-Authentication' header for on-premise connectivity: no JWT bearer found in the 'Authorization' header of the request. Continuing without a header. Connecting to on-premise systems may not be possible
It may be that this error message is superfluous, flagging a problem where there is none. In your case this header is possibly not necessary, and the SAP Cloud SDK should not try to add it. In any case, it will not influence the actual connection, so the message is at most confusing, not harmful in the sense of altering functionality.
Still, I am asking you to add the stack trace of this exception to your question to be very sure here.
Your app router shows "Not Found":
Here I am missing more information: what exactly shows "Not Found", and when? Is it that your browser cannot find your app router, or that your app router cannot find the target URL of the application?
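One possible cause of the missing JWT, offered as a hedged sketch rather than a confirmed diagnosis: if no route in xs-app.json requires authentication, the approuter never obtains a token to forward, even with forwardAuthToken enabled on the destination. A standard approuter configuration would enforce a login on the route roughly like this (the authenticationMethod/authenticationType values are taken from the approuter documentation, not from your setup):

{
  "authenticationMethod": "route",
  "routes": [
    {
      "source": "^/s4ext/(.*)",
      "target": "/s4ext/$1",
      "destination": "******",
      "authenticationType": "xsuaa"
    }
  ]
}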

How to debug Alexa flash briefing skills? Not available error

I am building a flash briefing skill for Alexa. The JSON feed seems to be working well; I went over the checklist and everything checks out. But when I enable the skill and start my flash briefing, I only get the "Custom Error Message" I specified in the flash briefing skill definition, with no errors in the CloudWatch logs or anywhere else, and no error when checking the feed elsewhere.
I am using AWS API gateway without authentication and the Content-Type is properly set to application/json and I double checked the response with JSONlint.
This is the URL for the feed:
https://l7kjk6dx49.execute-api.us-east-1.amazonaws.com/prod/postedmessage/feed
Following the suggestion of @Bob, I updated the feed URL and enabled logging. The feed is called properly from my browser, and there seems to be a call when I try to open the flash briefing, with an OK response. From the CloudWatch logs:
2016-11-19 16:15:34  Starting execution for request: 66ac03af-ae73-11e6-8719-1d2a8a213089
2016-11-19 16:15:34  HTTP Method: GET, Resource Path: /postedmessage/feed
2016-11-19 16:15:35  Successfully completed execution
2016-11-19 16:15:35  Method completed with status: 200
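For reference, a minimal flash briefing feed item looks roughly like the following; the field names come from Amazon's documented feed schema, and this sample is purely illustrative, not the asker's actual feed:

{
  "uid": "urn:uuid:00000000-0000-0000-0000-000000000000",
  "updateDate": "2016-11-19T16:00:00.0Z",
  "titleText": "Example briefing title",
  "mainText": "The text Alexa reads aloud for this briefing item.",
  "redirectionUrl": "https://example.com/"
}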

Booted Off Local Server - 302 error

I'll start with the log that I am receiving below:
Dec.15.11.56-Rf: Incoming Request URL: /
Dec.15.11.56-Rf: SECURE GET Path: / From: mlocal.cldeals.com Rewritten: www.cldeals.com
Dec.15.11.56-Rf: Received 302 Found [text/html; charset=UTF-8] response for /
Dec.15.11.56-Rf: Sending 302 text/html; charset=UTF-8 response for /
Dec.15.11.56-Rf: Stats. Total: 0.52088702, Upstream: 0.48212701, Processing: 0.00105600, ProcessingOther: 0.04037500
Basically, when I go to mlocal.cldeals.com, it loads fine. If I click on another page, say mlocal.cldeals.com/products, that loads fine as well. The issue arises when I visit the account page and then try to switch back to the homepage; maybe some type of security issue? When I try to return to mlocal.cldeals.com, the home page, it boots me off and sends me to www.cldeals.com. Is there something I can add to prevent this? Additionally, is this just a local server issue that would go away when I launch on Moovweb's servers? Any help is greatly appreciated.
Thank you.
It looks like the backend response to https://www.cldeals.com is a 302 to http://www.cldeals.com:80/. Not sure why that is the case (see note below *)
curl -v -o /dev/null https://www.cldeals.com
This response contains a hardcoded Location header and your project is passing along the response as is, which is why you are being booted off your local server.
Because the Location header value has a port specified, you'll need to modify your config.json to include this line in the mapping:
{
  "host_map": [
    "$.cldeals.com => www.cldeals.com",
    "$.cldeals.com => www.cldeals.com:80"
  ]
}
This way, the SDK knows to rewrite that specific host:port value... (By default all HTTP requests go through port 80, so that information isn't really necessary)
*This might be a bug in the backend implementation, because once you log in, you should stay in HTTPS mode until you log out. (I can see some pages with personal information being transmitted over plain HTTP.)

Loading store data with rest proxy from server in Sencha Touch 2

I have searched around on the forums and read some other posts. However, I'm not sure how exactly to go about this. I have a store with a proxy that I'm trying to load with data from a server. I have tried both jsonp and rest for the proxy type, without luck. In both cases I get a 403 Forbidden error, followed by an "XMLHttpRequest cannot load" error.
Here's the error that I see in the Chrome console:
Here's my code:
Ext.define('EventsTest.store.Venues', {
    extend: 'Ext.data.Store',
    requires: [
        'Ext.data.proxy.Rest'
    ],
    config: {
        storeId: 'venuesStore',
        model: 'EventsTest.model.Venue',
        proxy: {
            type: 'rest',
            url: 'http://leo.web/pages/api/',
            headers: {
                'x-api-key': 'senchaleotestkey'
            },
            limitParam: false,
            pageParam: false,
            enablePagingParams: false
            /*
            extraParams: {
                latitude: 45.250157,
                longitude: -75.800257,
                radius: 5000
            }
            */
        }
    }
});
Security policy differs between the browser and a device, so even if it fails in the browser it can work on the phone. But then the question is how to manage this while you are developing the app; for that, have a look at this similar question:
How to use json proxy to access remote services during development
Regarding that OPTIONS request which is getting the 403 response, try setting withCredentials: false and useDefaultHeader: false. Details here:
http://docs.sencha.com/touch/2-1/#!/api/Ext.data.Operation-cfg-withCredentials
http://docs.sencha.com/touch/2-1/#!/api/Ext.data.Connection-cfg-useDefaultHeader
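A rough sketch of where those two flags go; the placement is assumed from the Touch 2.1 docs linked above, not tested against this backend:

// useDefaultHeader lives on Ext.data.Connection; its shared instance is
// Ext.Ajax, and the Touch config system generates a setter for it.
Ext.Ajax.setUseDefaultHeader(false); // stop sending X-Requested-With, which triggers a CORS preflight

Ext.define('EventsTest.store.Venues', {
    extend: 'Ext.data.Store',
    requires: ['Ext.data.proxy.Rest'],
    config: {
        storeId: 'venuesStore',
        model: 'EventsTest.model.Venue',
        proxy: {
            type: 'rest',
            url: 'http://leo.web/pages/api/',
            withCredentials: false, // don't attach cookies to the cross-origin request
            headers: { 'x-api-key': 'senchaleotestkey' }
        }
    }
});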
I would suggest reading more about CORS if you want to use remote services; you may choose to enable CORS on your server.
You're running your app on a local domain "sencha.test", but you're trying to access data on "leo.web" - the error is that you're trying to load data across domains, which isn't allowed via AJAX.
You say that JSONP doesn't work... why not? Does your server return valid JSONP?

How to make browser stop caching GWT nocache.js

I'm developing a web app using GWT and am seeing a crazy problem with caching of the app.nocache.js file in the browser even though the web server sent a new copy of the file!
I am using Eclipse to compile the app, which works in dev mode. To test production mode, I have a virtual machine (Oracle VirtualBox) with a Ubuntu guest OS running on my host machine (Windows 7). I'm running lighttpd web server in the VM. The VM is sharing my project's war directory, and the web server is serving this dir.
I'm using Chrome as the browser, but the same thing happens in Firefox.
Here's the scenario:
1. The web page for the app is blank. According to Chrome's "Inspect Element" tool, this is because it is trying to fetch 6E89D5C912DD8F3F806083C8AA626B83.cache.html, which doesn't exist (404 Not Found).
2. I check the war directory, and sure enough, that file doesn't exist.
3. The app.nocache.js on the browser WAS RELOADED from the web server (200 OK), because the file on the server was newer than the browser cache. I verified that the file size and timestamp for the new file returned by the server were correct. (This is info Chrome reports about the server's HTTP response.)
4. However, if I open the app.nocache.js on the browser, the JavaScript is referring to 6E89D5C912DD8F3F806083C8AA626B83.cache.html! That is, even though the web server sent a new app.nocache.js, the browser seems to have ignored it and kept using its cached copy!
5. Go to Google -> GWT Compile in Eclipse and recompile the whole thing.
6. Verify in the war directory that app.nocache.js was overwritten and has a new timestamp.
7. Reload the page from Chrome and verify once again that the server sent a 200 OK response for app.nocache.js.
8. The browser once again tries to load 6E89D5C912DD8F3F806083C8AA626B83.cache.html and fails. The browser is still using the old cached copy of app.nocache.js.
9. I made absolutely certain that nothing in the war directory refers to 6E89D5C912DD8F3F806083C8AA626B83.cache.html (via find and grep).
What is going wrong? Why is the browser caching this nocache.js file even when the server is sending it a new copy?
Here is a copy of the HTTP request/response headers when clicking reload in the browser. In this trace, the server content hasn't been recompiled since the last GET (but note that the cached version of nocache.js is still wrong!):
Request URL:http://192.168.2.4/xbts_ui/xbts_ui.nocache.js
Request Method:GET
Status Code:304 Not Modified
Request Headers
Accept:*/*
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Host:192.168.2.4
If-Modified-Since:Thu, 25 Oct 2012 17:55:26 GMT
If-None-Match:"2881105249"
Referer:http://192.168.2.4/XBTS_ui.html
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Response Headers
Accept-Ranges:bytes
Content-Type:text/javascript
Date:Thu, 25 Oct 2012 20:27:55 GMT
ETag:"2881105249"
Last-Modified:Thu, 25 Oct 2012 17:55:26 GMT
Server:lighttpd/1.4.31
The best way to avoid browser caching is to set the expiration time to now and add the max-age=0 and must-revalidate controls.
This is the configuration I use with apache-httpd
ExpiresActive on
<LocationMatch "nocache">
  ExpiresDefault "now"
  Header set Cache-Control "public, max-age=0, must-revalidate"
</LocationMatch>
<LocationMatch "\.cache\.">
  ExpiresDefault "now plus 1 year"
</LocationMatch>
Your configuration for lighttpd should be:
server.modules = (
  "mod_expire",
  "mod_setenv",
)
...
$HTTP["url"] =~ "\.nocache\." {
  setenv.add-response-header = ( "Cache-Control" => "public, max-age=0, must-revalidate" )
  expire.url = ( "" => "access plus 0 days" )
}
$HTTP["url"] =~ "\.cache\." {
  expire.url = ( "" => "access plus 1 years" )
}
We had a similar issue. We found out that the timestamp of the nocache.js was not updated by the GWT compile, so we had to touch the file on build. We then also applied the fix from @Manolo Carrasco Moñino. I wrote a blog post about this issue: http://programtalk.com/java/gwt-nocachejs-cached-by-browser/
We are using version 2.7 of GWT as the comment also points out.
There are two straightforward solutions (the second is a modified version of the first):
1) Rename your *.html file which has a reference to *.nocache.js, e.g. MyProject.html to MyProject.jsp.
Now find the location of your *.nocache.js script tag in MyProject.html:
<script language="javascript" src="MyProject/MyProject.nocache.js"></script>
Add a dynamic variable as a parameter for the JS file; this will make sure the actual contents are returned from the server every time. Here is an example:
<script language="javascript" src="MyProject/MyProject.nocache.js?dummyParam=<%= "" + new java.util.Date().getTime() %>"></script>
Explanation: dummyParam serves no purpose on the server, but it gives us our intended result: the browser requests the file fresh (a 200 response) instead of reusing its cached copy (304).
Note: If you use this technique, make sure you are pointing to the right .jsp file for loading your application (before this change you were loading your app using the HTML file).
2) If you don't want to use the JSP solution and want to stick with your HTML file, you will need JavaScript to dynamically add the unique parameter value on the client side when loading the nocache file; see the sketch below. That should not be a big deal given the solution above.
I have used the first technique successfully; hope this helps.
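For the second option, here is a minimal sketch (not from the original answer) of adding the cache-busting parameter from client-side JavaScript in a plain HTML host page; the MyProject paths mirror the example above:

<script type="text/javascript">
  // Load the GWT bootstrap script with a unique query string so the
  // browser cannot satisfy the request from its cache.
  var script = document.createElement('script');
  script.src = 'MyProject/MyProject.nocache.js?dummyParam=' + new Date().getTime();
  document.getElementsByTagName('head')[0].appendChild(script);
</script>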
The app.nocache.js on the browser WAS RELOADED from the web server (200 OK), because the file on the server was newer than the browser cache. I verified that file size and timestamp for the new file returned by the server were correct. (This is info Chrome reports about the server's HTTP response)
I wouldn't rely on this. I've seen a bit of strange behaviour in Chrome's dev tools with the network tab in combination with caching (at least, it's not 100% transparent for me). In case of doubt, I usually still consult Firebug.
So probably Chrome still uses the old version. It may have decided long ago that it will never have to reload the resource again. Clearing the cache should resolve this. Then make sure to set the correct caching headers before reloading the page; see e.g. Ideal HTTP cache control headers for different types of resources.
Open the page in incognito mode just to get rid of the cache issue and unblock yourself.
You need to configure the cache time as mentioned in other comments.
After unsuccessfully preventing caching via Apache, I created a bash script that root runs every minute in a cron job on my Linux Tomcat server.
#!/bin/bash
#
# Touches GWT nocache.js files in the Tomcat web app directory to prevent caching.
# Execute this script every minute in a root cron job.
#
cd /var/lib/tomcat7/webapps
find . -name '*nocache.js' | while read file; do
  logger "Touching file '$file'"
  touch "$file"
done