How to add chunk JS files to cache using Workbox in CRA - React/SW - progressive-web-apps

I have added a custom SW file which I inject into the final SW file in an after-build step. In this file, I'm using the Workbox configuration below to cache files, and it works as expected. However, I also want to cache all of the chunk files created by React.lazy (split code for every single route) before they are visited. In other words, I want to get all of the JS files in the build directory (with their dynamic chunk names, which change on every build and are therefore hard to list by hand) and add them to the cache.
workbox.routing.registerRoute(
  /\.(?:js|css|png|jpg|jpeg|svg)$/,
  new workbox.strategies.StaleWhileRevalidate({
    // StaleWhileRevalidate responds from the cache immediately (falling back
    // to the network on a cache miss) and updates the cache from the network
    // in the background; we could also use CacheFirst here.
    cacheName: "custom-assets-cache",
    plugins: [
      new workbox.expiration.Plugin({
        maxEntries: 2000,
        maxAgeSeconds: 7776000 // 3 months
      })
    ]
  })
);
If we had access to Node's FS, it would be easy to list the file names, but as you know, we don't have access to the FS on the front-end side.
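One way to get that list without maintaining it by hand (a sketch, assuming workbox-build is installed; the file names src/custom-sw.js and build/custom-sw.js are placeholders for your own) is to do the listing at build time, where Node's FS is available: workbox-build's injectManifest can glob the build directory and inject the resulting precache manifest into your custom SW as part of the after-build step you already have.

// scripts/inject-manifest.js, run after `react-scripts build`
const { injectManifest } = require("workbox-build");

injectManifest({
  swSrc: "src/custom-sw.js",    // source SW containing the injection point
  swDest: "build/custom-sw.js", // final SW written into the CRA build output
  globDirectory: "build",
  // matches the hashed chunk files without listing them by hand
  globPatterns: ["**/*.{js,css,png,jpg,jpeg,svg}"]
}).then(({ count, size }) => {
  console.log(`Injected a manifest of ${count} files (${size} bytes).`);
});

In the service worker itself, the injected manifest (self.__WB_MANIFEST in Workbox v5+) is then handed to the precaching module, which caches every listed file at SW install time, i.e. before the routes are visited:

workbox.precaching.precacheAndRoute(self.__WB_MANIFEST);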

Related

Get GCS file metadata using Scala

I want to get the creation time of files in GCS. I used the code below:
println(Files
  .getFileAttributeView(Paths.get("gs://datalake-dev/mu/tpu/file.0450138"), classOf[BasicFileAttributeView])
  .readAttributes.creationTime)
The problem is that the Paths.get function replaces // with /, so I get gs:/datalake-dev/mu/tpu/file.0450138 instead of gs://datalake-dev/mu/tpu/file.0450138.
Can anyone help me with this?
Thanks a lot!
I solved the problem by adding the following Java code and then calling the Java function from Scala.
import com.google.cloud.storage.*;
import java.sql.Timestamp;

public class ExtractDate {
    public static String getTime(String fileName) {
        String bucketName = "bucket-data";
        String blobName = "doc/files/" + fileName;
        // Instantiate a client and fetch the bucket
        Storage storage_client = StorageOptions.getDefaultInstance().getService();
        Bucket bucket = storage_client.get(bucketName);
        // getCreateTime() returns the creation time in milliseconds
        Timestamp tmp = new Timestamp(bucket.get(blobName).getCreateTime());
        // return the year of the file's creation date
        return tmp.toString().substring(0, 4);
    }
}
You can use the file_get_contents method to read the contents of the path. From the documentation on Reading and Writing Files:
Read object contents using PHP to fetch an object's custom metadata from Google Cloud Storage. An App Engine PHP 5 app must use the Cloud Storage stream wrapper to write files at runtime. However, if an app needs to read files, and these files are static, you can optionally read static files uploaded with your app using PHP filesystem functions such as file_get_contents.
$fileContents = file_get_contents($filePath);
where the path specified must be relative to the script accessing it.
You must upload the file or files in an application subdirectory when you deploy your app to App Engine, and must configure the app.yaml file so your app can access those files. For complete details, see PHP 5 Application Configuration with app.yaml.
In the app.yaml configuration, notice that if you use a static file or directory handler (static_files or static_dir) you must specify application_readable set to true or your app won't be able to read the files. However, if the files are served by a script handler, this isn't necessary, because these files are readable by script handlers by default.
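For illustration, a minimal app.yaml handler of that shape might look like the following (the /images URL and the static/images directory are placeholder names):

handlers:
- url: /images
  static_dir: static/images
  application_readable: true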

system.js downloads browser.js which is 2 megabytes

I'm trying to convert a require.js project to a system.js-based one.
On the network tab, I see browser.js, which is 2 MB.
I found it is actually npm/babel-core@5.8.38/browser.js
I think this is used to convert (transpile) JavaScript files somehow during development.
How do I do the conversion beforehand (probably when bundling) so that I don't have to download the 2 MB browser.js?
I am working with jspm 0.17, and I can go back to 0.16 if that solves this problem.
You can create a bundle for a module with all its dependencies using systemjs-builder:
var Builder = require('systemjs-builder');
var builder = new Builder();

builder.loadConfig('config.js').then(function() {
  builder.bundle('module.js', 'module.bundle.js', { minify: false });
});
Then load the resulting bundle with a <script> tag before the first time the module is imported.
You can also make a bundle for all source files like this:
builder.bundle('src/*.js', 'bundle.js')
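For example, the page would then look something like this (a sketch; system.js, config.js and module.js stand in for your own files), so that System.import resolves against the already-registered bundle instead of transpiling in the browser:

<script src="system.js"></script>
<script src="config.js"></script>
<script src="module.bundle.js"></script>
<script>
  System.import('module.js');
</script>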

How do I write a Webpack plugin to generate index.js files on demand?

In general, I want to know how to do code-generation/fabrication in a Webpack plugin on demand. I want to generate contents for files that do not exist when they are "required."
Specifically, I want a plugin that, when I require a directory, automatically requires all files in that directory (recursively).
For example, suppose we have the directory structure:
foo/
  bar.js
  baz.js
main.js
And main.js has:
var foo = require("./foo");
// ...
I want webpack to automatically generate foo/index.js:
module.exports = {
  bar: require("./bar"),
  baz: require("./baz")
};
I've read most of the webpack docs. github.com/webpack/docs/wiki/How-to-write-a-plugin has an example of generating assets. However, I can't find an example of how to generate an asset on demand. It seems this should be a Resolver, but resolvers seem to only output file paths, not file contents.
Actually, for your use case:
Specifically, I want a plugin that, when I require a directory, automatically requires all files in that directory (recursively).
you don't need a plugin. See How to load all files in subdirectories using webpack without require statements; a sketch follows.
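For reference, a minimal sketch of that approach using webpack's require.context (the ./foo directory is taken from the question; the name mangling is illustrative):

// main.js: builds the same export map without a generated foo/index.js
var context = require.context("./foo", true, /\.js$/); // true = recurse into subdirectories
var foo = {};
context.keys().forEach(function (key) {
  // "./bar.js" -> "bar"
  foo[key.replace(/^\.\//, "").replace(/\.js$/, "")] = context(key);
});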
Doing code generation/fabrication on demand can be done in JavaScript quite easily; why restrict your code generation to only being applied when "required" by webpack?
As Node.js itself will look for an index.js if you require a directory, you can quite easily generate arbitrary exports:
// index.js generating dynamic exports
var time = new Date();

var dynamicExport = {
  staticFn: function() {
    console.log('Time is:', time);
  }
};

// dynamically create a function as a property on dynamicExport;
// here you could add some file-processing logic that requires stuff
// on demand and exports it accordingly
dynamicExport['dyn' + time.getDay()] = function() {
  console.log('Take this Java!');
};

module.exports = dynamicExport;
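Requiring the directory that contains such an index.js then picks up the generated exports as usual (a usage sketch; the directory name dynamic-dir is illustrative):

var dyn = require('./dynamic-dir'); // Node resolves this to ./dynamic-dir/index.js
dyn.staticFn();                     // logs the captured timestamp
dyn['dyn' + new Date().getDay()](); // the dynamically named export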

Image not showing immediately after uploading in sails.js

In my application, I store uploaded images in the folder ./assets/uploads. I am using easyimage and imagemagick for storing the images.
After uploading an image, the application should show the newly uploaded image. But it is not displayed, even if the page is refreshed. However, when I do sails lift, the image is shown.
How do I show an image immediately after uploading it? Thanks a lot!
It's a totally normal situation, because of the way Sails works with assets.
The thing is that upon sails lift the assets are copied (including the directory structure and symlinks) from the ./assets folder to ./.tmp/public, which becomes publicly accessible.
So, in order to show your images immediately after upload, you basically need to upload them not to ./assets/uploads but to ./.tmp/public/uploads.
The only problem now is that the ./.tmp folder is rewritten each time your application restarts, so storing uploads in ./.tmp/... would mean they are erased on every sails lift. The solution here is to store uploads in, for example, ./uploads and have a symlink ./assets/uploads pointing to ../uploads, as sketched below.
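A sketch of that setup in config/bootstrap.js (the directory names follow the answer above; adjust to your project and Sails version):

// config/bootstrap.js
var fs = require('fs');
var path = require('path');

module.exports.bootstrap = function(done) {
  var uploadsDir = path.join(__dirname, '..', 'uploads');
  var linkPath = path.join(__dirname, '..', 'assets', 'uploads');
  if (!fs.existsSync(uploadsDir)) fs.mkdirSync(uploadsDir);
  // ./assets/uploads -> ../uploads, so Grunt copies the uploads on every lift
  if (!fs.existsSync(linkPath)) fs.symlinkSync('../uploads', linkPath, 'dir');
  return done();
};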
Though this question is pretty old, I would like to add a solution which I just implemented.
Today I spent almost 4 hours trying all the solutions out there, but none helped. I hope this solution saves someone else's time.
WHY are images not available immediately after uploading to any custom directory?
Because, according to the default Sails setup, you cannot access assets directly from the assets directory. Instead, you access the copies that Grunt brings to the .tmp/public directory at the time of sails lift.
THE PROBLEMS
1. (Available but volatile) If you upload a file (say, an image) anywhere inside the .tmp/public directory, it will be erased on the next sails lift.
2. (Unavailable) If you upload a file to any other custom directory, say ./assets/images, the uploaded file will not be available immediately, only after the next sails lift. That doesn't work in practice: you can't restart the server every time a file is uploaded in production.
MY SOLUTION (say I want to keep my images in the ./assets/images dir)
1. Upload the file, say image.ext, to .tmp/public/images/image.ext (available, but volatile).
2. On upload completion, make a copy of the file to ./assets/images/image.ext (future-proof).
CODE
var fs = require('fs'); // required for the copy step below

var uploadToDir = '../public/images';

req.file("incoming_file").upload({
  saveAs: function(file, cb) {
    cb(null, uploadToDir + '/' + file.filename);
  }
}, function whenDone(err, files) {
  if (err) return res.serverError(err);
  if (files.length > 0) {
    var ImagesDirArr = __dirname.split('/'); // path to this controller
    ImagesDirArr.pop();
    ImagesDirArr.pop();
    var path = ImagesDirArr.join('/'); // path to the root of the project
    var _src = files[0].fd; // path of the uploaded file
    // the destination path
    var _dest = path + '/assets/images/' + files[0].filename;
    // not the preferred, but the fastest, way of copying a file
    fs.createReadStream(_src).pipe(fs.createWriteStream(_dest));
    return res.json({ msg: "File saved", data: files });
  }
});
I don't like this solution at all, but it saved my time and works perfectly in both dev and prod environments.
Thanks
Sails uses Grunt to handle asset syncing. By default, the grunt-watch task ignores empty folders, but as long as there's at least one file in a folder, it will always sync it. So the quickest solution here, if you're intent on using the default static middleware to serve your uploaded files, is to just make sure there's always at least one file in your assets/uploads folder when you do sails lift (an empty placeholder like .gitkeep works). As long as that's the case, the uploads folder will always be synced to your .tmp/public folder, and anything uploaded to it subsequently will be automatically copied over and available immediately.
Of course, this will cause all of your uploaded files to be copied into .tmp/public every time you lift Sails, which you probably don't want. To solve this, you can use the symlink trick @bredikhin posted in his answer.
Try this:
1. npm install grunt-sync --save-dev --save-exact
2. Uncomment the line // grunt.loadNpmTasks('grunt-sync'); (usually near the end of the file tasks/config/sync.js).
3. Lift the app again.
I was using Node version 10.15.0 and faced the same problem. I solved it by updating to the current version of Node (12.4.0) and also updated npm and all the node modules. After this, I fixed the vulnerabilities (just run npm audit fix), and the Grunt error that came up while uploading images to the assets/images folder was fixed.
Try out this implementation:
Create a helper to sync the file. An example of the file-sync helper:
// api/helpers/upload-sync.js
const fs = require('fs');

module.exports = {
  friendlyName: 'Upload sync',
  description: '',
  inputs: {
    filename: {
      type: 'string'
    }
  },
  exits: {
    success: {
      description: 'All done.',
    },
  },
  fn: async function ({ filename }) {
    var uploadLocation = sails.config.custom.profilePicDirectory + filename;
    var tempLocation = sails.config.custom.tempProfilePicDirectory + filename;
    // Copy the file to the .tmp folder so that it becomes available immediately.
    // (A stream pipe cannot be awaited; fs.promises.copyFile can.)
    await fs.promises.copyFile(uploadLocation, tempLocation);
    return;
  }
};
Now call this helper to sync your files to the .tmp folder:

// result[0].fd is the uploaded file's path; split off the file name (Windows-style separators)
const fileName = result[0].fd.split("\\").reverse()[0];
// Sync to the .tmp folder
await sails.helpers.uploadSync(fileName);
Reference for the directories, saved in config (e.g. sails.config.custom):

profilePicDirectory: path.join(path.resolve(), "assets/images/uploads/profilePictures/"),
tempProfilePicDirectory: path.join(path.resolve(), ".tmp/public/images/uploads/profilePictures/"),
You can also try:

process.cwd() + filepath

How to serve uploaded files in Play!2 using Scala?

I'm trying to allow users to upload photos to the server and then view them. Uploading happens as described in this guide. Here is the code:
def upload = Action(parse.multipartFormData) { request =>
  request.body.file("picture").map { picture =>
    import java.io.File
    val filename = picture.filename
    val contentType = picture.contentType
    picture.ref.moveTo(new File("/tmp/picture"))
    Ok("File uploaded")
  }.getOrElse {
    Redirect(routes.Application.index).flashing(
      "error" -> "Missing file"
    )
  }
}
It is unclear to me how to serve the uploaded images back to the users who want to see them. Right now I am hosting the server on my own machine, so the code snippet from the guide writes the files to my D: drive, which isn't (and shouldn't be) available from the Internet. As far as I can see, there are two options:
1. Store the photos under the /public folder in my project (the one that is dedicated to assets). See here: http://www.playframework.org/documentation/2.0/Assets
2. Write my own controller that serves images from custom locations on my drive.
For 1, I'm not sure if that is the purpose of assets.
For 2, I have no idea how to write such a controller.
The simple example is:

def index = Action {
  Ok.sendFile(new java.io.File("/tmp/fileToServe.pdf"))
}

There is a "Serving files" section at https://www.playframework.com/documentation/2.4.x/ScalaStream#Serving-files which explains how to serve files.
2.0.3 will feature an external Assets controller which might be (mis)used for this. Writing such a controller is no magic, though: you have a predefined folder where all your uploads are saved, and that's where you read them from. In the database you save the (unique) file name.
A different approach would be to save the uploaded files in the database. We do this with GridFS in MongoDB. A custom controller serves them back to the user. This way your data is stored in one central place, which also makes backups and recoveries simpler.
You can add a new route like this:
GET /myFiles/*file controllers.Assets.at(path="/tmp", file)