SystemJS Cache - Generate timestamp - ejs

I am using Single SPA + SystemJS.
The problem I am having is that my entry assets are being cached by the browser.
Joel Denning suggested a browser-only solution: "Add a query string with the current timestamp to the URL. For example, download ./entry.js?t=1573878549105" (from Joel Denning's post).
I am looking for a way to do this in the EJS file, because if I generate the timestamp in the code it will be evaluated at build time, and I need something that runs on the browser side.
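One idea would be an inline script in the EJS-served HTML, so the timestamp is evaluated in the browser at request time rather than at build time (a rough sketch only; "./entry.js" is just the path from the example above, and system.js must already be loaded at this point):

<script>
  // Computed in the browser on every page load, not at build time.
  System.import('./entry.js?t=' + Date.now());
</script>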

Related

How to handle dynamic list (JSON) of redirects on NextJS + AWS Amplify?

I have a JSON file with 8k+ redirects that I use for my site. This JSON is hosted on a CDN (AWS Cloudfront). And every time one of our products or pages change their path, that JSON is automatically updated with a new redirect, and this happens pretty often (more than once a day).
I want to be able to use that JSON on my NextJS (12.3) project hosted on AWS Amplify.
Ideally I wanted to use the NextJS middleware.js to fetch that JSON and redirect to the proper path.
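For reference, this is roughly the middleware I had in mind (the CDN URL and the JSON field names below are placeholders, not my real ones):

// middleware.js - hypothetical sketch of the desired approach.
import { NextResponse } from "next/server";

export async function middleware(request) {
  // Fetch the redirects JSON from the CDN (placeholder URL).
  const res = await fetch("https://cdn.example.com/redirects.json");
  const redirects = await res.json();

  // Assumed shape: [{ "source": "/old-path", "destination": "/new-path" }, ...]
  const match = redirects.find((r) => r.source === request.nextUrl.pathname);
  if (match) {
    return NextResponse.redirect(new URL(match.destination, request.url));
  }
  return NextResponse.next();
}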
But right now Amplify doesn't support that middleware. They still have an open issue on GitHub for that:
https://github.com/aws-amplify/amplify-js/issues/9145
So I tried to run that in getServerSideProps. But I'd have to replicate it for every URL segment in my project, which wouldn't look great.
Right now I use the native redirects solution on the next.config.js file (https://nextjs.org/docs/api-reference/next.config.js/redirects).
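For reference, the current next.config.js setup looks roughly like this (the local "redirects.json" copy and its { from, to } field names are illustrative):

// next.config.js - sketch of the build-time approach.
const redirectsJson = require("./redirects.json");

module.exports = {
  async redirects() {
    return redirectsJson.map(({ from, to }) => ({
      source: from,
      destination: to,
      permanent: true,
    }));
  },
};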
But this solution isn't great either for two reasons:
This is not dynamic. The JSON file is only fetched when the project is built, so I often have to redeploy my project on Amplify just to update those redirects.
The latency to find the correct path is hurting page load performance (it takes around 400 to 500 ms to run through the 8k+ redirects).
Can anyone help me find a fast and dynamic way to fetch and apply those redirects in NextJS? Or suggest a different way to do this?

Generate multiple sheets in one pdf file

I am trying to generate a pdf from a Tableau workbook which has two sheets using the url method:
E.g: https://TableauServer/views/workbook/sheet1?:format=pdf&parameter=value
I am doing this in a program which issues the request to that URL. The URL works fine for one sheet. But the problem is how to generate one PDF file with both sheets in it?
If you first put your two sheets into a single dashboard and then use the URL for the published dashboard (still using the format=pdf parameter), this should work just fine.
We know it's possible because within Tableau itself, if you download a PDF, it gives you several formatting options, including the option to put all the worksheets in a workbook into a single PDF.
I couldn't find any documentation on it though. What I ended up doing was looking at the network console in the browser (usually F12) when I downloaded the PDF from the browser by clicking the Download button. That showed me the URL end point and the JSON body the server expected in the request payload.
The endpoint URL wasn't too cryptic and ended with "commands/tabsrv/pdf-export-server". The challenge was to take the JSON in the request payload and find the right settings to get it into a single PDF.
This method is a more technical approach but requires only basic coding skills; any language that has functions for HTTP calls will work (I use Python for it).
If you don't mind doing it outside a browser, tabcmd has lots of functionality to control PDF generation at the command line.

SAPUI5 load file in the workspace

I need to upload files in the Workspace:
I don't know which URL I should give as a parameter to my FileUploader. I am working with the SAP WebIDE Personal Edition and my files are located in the following path:
file:///C:/SAPWebIDE/eclipse/serverworkspace/Al/ALine/OrionContent/testApp/webapp/model/
What should I set as the URL here?
var oFileUploader2 = new sap.ui.commons.FileUploader({
    name: "upload2",
    uploadOnChange: false,
    uploadUrl: "???"
});
I think you have misunderstood how the FileUploader works.
The "uploadUrl" parameter should be used to specify a path on the "web server" (e.g. application server, web container) on which your application is hosted. UI5 is a web user interface framework, it does not know how to handle (server-side) upload requests. This means that the server (backend) itself should have some implementation for handing the file upload.
After you select the file and trigger the upload, a POST HTTP request is made to the path specified in this "uploadUrl" parameter. If there is no web server that knows how to handle it, you will invariably get back an error HTTP response.
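For illustration, it would look something like this once you have such a backend endpoint (the "/upload" path is only a placeholder that your server would have to implement):

var oFileUploader = new sap.ui.commons.FileUploader({
    name: "upload2",
    uploadOnChange: false,
    uploadUrl: "/upload" // server-side endpoint that accepts the POST
});
// After the user has selected a file, this triggers the POST to uploadUrl:
oFileUploader.upload();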
Based on the title of your question, I understand that you would want to upload the file inside your workspace. IMO, this does not really make sense, as you are mixing the design-time environment with your run-time environment (i.e. your application should never depend on the IDE).
Nevertheless, you can try to import a file via the import menu (right click on package, import, from file system) and see what URL the request is triggered against (using the dev console). I looked around a little and roughly this is the request URL: http://localhost:[Web IDE Port]/xfer/import/[User Name]-OrionContent/[Project Name]. In the Slug header you would have the file name. You might not be able to make a POST request towards this URL directly (because of XSS / CSS limitations), so you might need to create a route mapping for it.

Maintaining (version + redirect) in S3

So far in our application, the *.js files were served directly from Apache. For example, this was a script include in a JSP page: /foo/v6565/my_script.js. The v6565 in the path is phony, and an internal Apache redirect maps /foo/v6565/my_script.js to /foo/my_script.js.
Whenever my_script.js is updated, the v<xxxx> in the including JSP page is updated as well (an internal tool does this based on the SVN revision of my_script), thus forcing the browser to fetch my_script.js again instead of the cached version. I hope I have explained my current approach clearly.
[A different approach could have been to use /foo/my_script.js?v=5652. However, there was some caching issue (can't remember it) because of which the decision was taken to use /foo/v56564/ instead of adding the version to the query param. I will dig into it, though.]
Now that we are moving all of our *.js files to an S3 bucket, I was wondering what would be a good way of doing this?
The path from the S3 bucket would look like: mybucket.aws.com/js/my_script.js. How do I insert the version tag + redirection for S3? Are there any other standard approaches used when resources are served from S3?
(I've read about page redirects on s3 resources but the redirects are to be written directly on the resources, which is not really applicable in my case)
Thanks.
I think cache busting with ?v=<hash> is pretty much standard now.
It has been advised against, but that's a pretty old resource (though often cited) and I'm not sure if this is still true. Even your trusted StackOverflow is using it with SHA1, so I guess it's good enough for everybody now.
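As a rough illustration, a build step could compute the hash and append it to the S3 URL (a sketch only; the helper name and base URL are made up):

// Sketch of build-time cache busting with a content hash (Node).
const crypto = require("crypto");
const fs = require("fs");

function assetUrl(localPath, s3BaseUrl) {
  const hash = crypto
    .createHash("sha1")
    .update(fs.readFileSync(localPath))
    .digest("hex")
    .slice(0, 8);
  return s3BaseUrl + "/" + localPath + "?v=" + hash;
}

// assetUrl("js/my_script.js", "https://mybucket.aws.com")
//   -> "https://mybucket.aws.com/js/my_script.js?v=<first 8 hex chars of the SHA1>"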

Counting eclipse plugin installations/downloads

I'm currently hosting an Eclipse plugin update site on sourceforge.net. SF.net does not allow access to server logs, but I'd still like to know how many downloads the plugin gets.
Is there an alternative way of gathering them?
I'm not going to have any sort of 'call home' feature in the plugin, so please don't suggest that.
I wrote a blog post about how to track downloads of an Eclipse plug-in update site. What you can do is specify a URL to your server, and every time a download is initiated the update site will send an HTTP HEAD request to that URL, which you can then use to count the number of times the plug-in was downloaded. If you want to track some information about who is downloading the plug-in, you can pass details like the package name, version, and OS, and store them in a database.
http://programmingfortherestofus.blogspot.com/2014/08/tracking-downloads-to-your-eclipse.html
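As a minimal sketch, the counting endpoint on your server could be as small as this (a tiny Node server here; the port and log format are illustrative, not from the blog post):

// Counts the HEAD requests the update site sends for each download.
const http = require("http");
const fs = require("fs");

http.createServer((req, res) => {
  if (req.method === "HEAD") {
    // The query string can carry extra details (package name, version, OS, ...).
    fs.appendFileSync("downloads.log", new Date().toISOString() + " " + req.url + "\n");
  }
  res.end();
}).listen(8080);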
I hope it helps!
It is possible to host the plugin jars in the file release service, and then get your site.xml file to point to them. You need to point at a specific mirror to make it work.
This will tell you how many times people download each file as with a normal file release.
Unfortunately, in practice this is a lot of work to maintain, and tends to be unreliable (I kept getting bug reports saying the update site wasn't working).
You could write a very simple PHP script which just serves up the relevant file and logs the download to a file or DB. Make sure it double-checks that the requested URL is a valid one to serve to the user, of course :)
Once that's in place, you can update the site.xml to point to the correct thing, or you could probably use URL rewriting to intercept requests to your jar file and pass them through the script. I've never tried that on the SF servers, but it might work.
EDIT:
Even better, just have a php script which sends a redirect like this:
<?php
// Name of the file to redirect to, passed as ?file=...
$file = $_GET['file'];
// Validate $file against a list of allowed downloads, then log the access here.
header('Location: ' . $file);
?>
Just a thought: AFAIK, SourceForge does tell you how much data you served. You know the size of your plugin JARs. Divide the data served by the size of your plugin and you get a rough estimate of how many downloads you had.
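For example, if SourceForge reported 10 GB of data transferred and the plugin archive is 2 MB, that works out to roughly 5,000 downloads.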