Why does CGI.pm upload an old revision of a file on a successful new file upload? - perl

I am using CGI.pm version 3.10 for file uploads in Perl. I have a Perl script which uploads the file, and one of my applications keeps track of different revisions of the uploaded document with a check-in/check-out facility.
Steps to reproduce:
Check out (download) a file using my application (which is web based and uses Apache).
Log out of the current user session.
Log in again with the same credentials and then check in (upload) a new file.
Output:
Upload successful
The Perl upload script shows the correct uploaded data
New revision of the file created
The output is correct and expected, except for one case, which is the issue
Issue:
The content of the newly uploaded file is the same as the content of the last uploaded revision in the DB.
I am using a temp folder for copying the new content, and if I print the new content in the upload script it is correct. I have no limit on the CGI upload size. It seems to fail somewhere in the CGI environment, possibly because of the version I am using. I am not using taint mode.
Can anybody help me understand what the possible reason might be?

It sounds like you're getting the old file name stuck in the file upload field. I'm not sure whether that can happen for filefield, but it is a feature for other field types.
Try adding the -nosticky pragma, e.g., use CGI qw(-nosticky :all);. Another pragma to try is -private_tempfiles, which should prevent the user from "eavesdropping" even on their own uploads.
Of course, it could be that you need to localize (my) some variable or add -force to the filefield.
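Something like this minimal sketch of an upload handler with those pragmas turned on (the form field name uploaded_file and the destination path are assumptions, not from your code):

#!/usr/bin/perl
# Minimal sketch only: field name and destination path are assumptions.
use strict;
use warnings;
use CGI qw(-nosticky -private_tempfiles);

my $q  = CGI->new;
my $fh = $q->upload('uploaded_file');      # hypothetical form field name
die "No upload received\n" unless defined $fh;

my $dest = '/tmp/upload_copy.dat';         # hypothetical temp destination
open my $out, '>', $dest or die "Cannot open $dest: $!";
binmode $fh;
binmode $out;
print {$out} $_ while <$fh>;               # copy the uploaded content
close $out or die "Cannot close $dest: $!";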

I found the issue. The destination path of the copied file was not correct. One of my application's events maps the path of the copied file to a different directory, and that path is stored in the user session. This happens only when I run that event just before starting the upload script, which is why it was hard to catch. Since the upload script is designed to pick up the newly copied file from the same path, it always ended up uploading the old file to the DB as another revision, while the newly copied file was left at the new path.
Solved by mapping the correct path before the upload.
Thanks

Related

How to import remote python files using pyscript

PyScript allows one to run Python inside a web browser. I have two Python scripts I wrote that I'd like to use. One way to do this is to copy and paste the Python code held in these files directly into the index.html file, where the index file is part of a GitHub.io page. If possible, however, I would rather load/import them from a remote location. Currently, they reside in the gh-pages branch on GitHub alongside the index.html file.
My question is whether this is possible. Most tutorials show how to load and import a local Python file, which I don't want to do.
Update: This is my current attempt, which I add to the index.html file:
<py-config>
[[fetch]]
from = "https://github.com/etc/blob/gh-pages/"
files = ["myadd.py"]
</py-config>
When I try this I get the error message:
(PY0001): PyScript: Access to local files (using "Paths:" in ) is not available when directly opening a HTML file; you must use a webserver to serve the additional files. See this reference on starting a simple webserver with Python.
I want to avoid starting a server because this is meant to be a client-side-only approach with only a dumb file repo at the other end.
There is a solution, and it's very simple: just use the syntax:
<py-script src="mypythonscript.py"> </py-script>
And it will pick up the file from the GitHub directory.
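For example, a minimal page along these lines (a sketch only: the USER/REPO path, the pyscript.net/latest bundle, and the myadd function name are assumptions; note the script must be fetched through raw.githubusercontent.com, because a github.com/.../blob/... URL returns an HTML page rather than the raw Python source):

<html>
  <head>
    <link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" />
    <script defer src="https://pyscript.net/latest/pyscript.js"></script>
  </head>
  <body>
    <!-- Load the remote script first... -->
    <py-script src="https://raw.githubusercontent.com/USER/REPO/gh-pages/myadd.py"></py-script>
    <!-- ...then call what it defined; py-script tags share one global namespace. -->
    <py-script>
print(myadd(2, 3))  # myadd is a hypothetical function defined in myadd.py
    </py-script>
  </body>
</html>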

File selected in Windows Explorer with Preview Pane locks the file so PowerShell cannot output to that file

I have a scheduled script that outputs a bunch of HTML files with static names to a remote location. I noticed that if I have one of those files selected in Windows Explorer, so that its contents are shown in the Preview Pane, then PowerShell cannot overwrite that file and skips updating it.
This only happens if the output files are in a remote location; it works just fine if the files are local.
How do I force PowerShell to overwrite remote files in this situation? Lots of users work with those reports, and if one of them leaves a Windows Explorer window with one of those files highlighted overnight when the script runs, the file is not going to be updated.
Move the HTML files to a web server; that will solve your problem entirely. IIS setup on Windows Server is Next, Next, Next. You can leave a link to the new file location (https://....) in the old place, so users can easily navigate to the new place. Possibly this link can be automated (not sure, because of modern security standards).
Try [System.IO.File]::Delete($path) just before writing the file (see the sketch after this list). This removes the file's entry from the filesystem but leaves the file open for anyone who currently has it open, so your script writes to a new file with the same name. The old file exists without a name (deleted) but stays open until everyone closes it. Check that it was actually deleted with a refresh!
Try [System.IO.File]::Move($path, $someTrashFullName) just before writing the file; $someTrashFullName probably must be on the same drive. Same as Delete, but it renames the file. Some self-updating software uses this strategy: the file is renamed, but it is still kept open under the new name.
Try replacing the file with a shortcut to some file. You can generate files with different names and change the shortcut programmatically.
Or use HTML files that change location using JS? They read a nearby JSON file (generated by the export script) and look up the new filename there. So the user opens a static, unchanged A.html, and the JS inside looks up the new name in A.json and redirects the user to A-2020-08-11.html. I'm not sure browsers allow JS to read JSON files for pages opened from a network drive.
The only way left is to stop the network share and/or close the open files server-side.
Or maybe have some fun with disabling preview for this folder, or disabling it completely?
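A rough PowerShell sketch of the delete-before-write idea (the UNC path and the report content are placeholders):

# Sketch only: unlink the old file first, then write the new report under the same name.
$path = '\\server\share\report.html'                               # placeholder path
if (Test-Path -LiteralPath $path) {
    [System.IO.File]::Delete($path)   # drops the directory entry; existing open handles keep the old data
}
$html = '<html><body>report body goes here</body></html>'          # placeholder for the generated report
Set-Content -LiteralPath $path -Value $html -Encoding UTF8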
Try with -Force, but to me it seems to be more of a permissions issue:
Remove-Item -Path '\\server\share\file' -Force

How to automatically scan re-uploaded files with some modification in wildfly-10 without a server restart?

I am using a wildfly-10 server. I am providing an option in the UI for the user to upload images or JSP files, and the user can make use of these files in other sections of the application later.
At any point in time I allow only one entry with a particular name. If the user tries to upload a file with a name that already exists, I try to overwrite the existing one with the new file.
In this scenario I am facing the problem below:
I have uploaded an image with the name image1.png.
Now if I change some other image's name to image1.png and upload it, the new image is not visible until I restart the server.
It looks like the older image has been cached by the server and it is still referring to the cached location. When I restart the server, it refreshes the cache with the new content of the file.
Is there any way that I can immediately see the changes in the UI whenever I re-upload the modified file?
I am using a custom folder to store the uploaded files on my server.
Is there a way that I can enable deployment directory scanning for this particular directory only?
You don't have to restart the server; a redeploy of the application should work.
You can define another deployment scanner, or change the directory scanned by the scanner: http://wildscribe.github.io/WildFly/16.0/subsystem/deployment-scanner/scanner/index.html
Another solution would be to create overlays: http://wildscribe.github.io/WildFly/16.0/deployment-overlay/index.html
Thirdly, with exploded deployments WildFly already provides the functionality you have developed: https://wildfly.org/news/2017/09/08/Exploded-deployments/ (note that all jboss-cli operations can be called using the HTTP REST API).
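For instance, an extra scanner or an overlay can be set up from jboss-cli roughly like this (a sketch only: the directory /opt/myapp/uploads, the scanner name, the deployment name myapp.war and the paths are assumptions, and a scanner only picks up content that looks like a deployment):

# jboss-cli.sh --connect, then:
/subsystem=deployment-scanner/scanner=uploads:add(path=/opt/myapp/uploads, scan-interval=5000, auto-deploy-exploded=true)

# Or overlay a single changed file onto an existing deployment and redeploy it:
deployment-overlay add --name=images --content=/images/image1.png=/opt/myapp/uploads/image1.png --deployments=myapp.war --redeploy-affected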

How to properly handle a file upload in Wicket

I have a file upload page that takes a file and parses it.
Order of Events
user uploads file
uploaded file gets copied
copied file gets its encoding checked, with CPDetector
determined encoding from the copied file is used to parse the original uploaded file
FileNotFoundException on Solaris Test Server during BufferedReader creation.
copied file is deleted
uploaded file is parsed/verified
parsed data is saved to a database
uploaded file is deleted (I can't remember if I'm doing this or Tomcat is.)
The whole process works on my Windows 7 workstation. As noted above, it does not work on my Solaris test server. Something (I suspect Tomcat) is deleting the uploaded file before I can finish parsing it.
I've watched the directory during the process, and an uploaded file does indeed get created, but it lasts less than a second before being deleted. Also, it's supposed to go into /opt/tomcat/ but seems to be getting created in the /var/opt/csw/tomcat6/temp/ directory instead.
Thanks for any help
I realize it's probably bad form to answer my own question like this, but I wanted to leave this here in case it helps someone else.
The problem turned out to be how I was accessing the files.
I had hard-coded file paths for Windows, and database-loaded ones for the test server.
I switched those to using System.getProperty("catalina.home") + "/temp/" + filename.
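Here is a minimal sketch of that idea (not my exact code; the class name and file naming are just for illustration):

// Sketch only: copy the Wicket upload into Tomcat's temp directory so the
// parser works on a file we control, independent of the container's own
// temporary upload file.
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import org.apache.wicket.markup.html.form.upload.FileUpload;

public final class UploadCopier {
    public static File copyToCatalinaTemp(FileUpload upload) throws IOException {
        File target = new File(System.getProperty("catalina.home") + "/temp/"
                + upload.getClientFileName());
        try (InputStream in = upload.getInputStream()) {
            Files.copy(in, target.toPath(), StandardCopyOption.REPLACE_EXISTING);
        }
        return target;
    }
}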
I'm also copying the temp file a second time so I end up with:
Order of Events (changed steps are marked with *)
user uploads file
uploaded file gets copied
copied file gets its encoding checked, with CPDetector
* uploaded file gets copied again to ensure a copy survives to be parsed
determined encoding from the copied file is used to parse the original uploaded file
* copy used for encoding detection is deleted
* copy for parse is parsed/verified
parsed data is saved to a database
* parsed file is deleted.
uploaded file is deleted (I'm not sure if I'm doing this or Tomcat is.)

cURL ftp transfer scenario

I'm trying to automate uploading to and downloading from an FTP site using cURL inside MATLAB, but I'm having difficulties. Essentially, I want one computer continuously uploading new files to the FTP site, and since there is a disk quota on the FTP, I want another computer continuously downloading and removing those same files from the FTP site.
Easy enough, but my problem arises from wanting to make sure that I don't download a file that is still being uploaded, which would result in an incomplete file.
First off, is there a way in cURL to make the file unavailable for download from the FTP site until the entire file has been uploaded?
One way around this is that I could upload files to one directory and, once they are finished uploading, transfer them to a "Finished" directory on the FTP site. Then the download program would only look for files inside that "Finished" directory. However, I don't know how to transfer files within an FTP site using cURL.
Is it possible to transfer files between directories on an FTP site using cURL without having to download the file first?
And if anyone else has better ideas on how to perform this task, I'd love to hear them!
Thanks!
You can upload the files using a special name and then rename them when done, and have the download client only download files with that special "upload completed" name style.
Or you can move them between directories just as you say (which is essentially a rename as well, just changing the directory too).
With command-line curl, you can perform "raw" commands after the upload with the -Q option, and you can even find a tiny example in the curl FAQ: http://curl.haxx.se/docs/faq.html#Can_I_use_curl_to_delete_rename
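A rough example of the rename approach with command-line curl (host, credentials and directory names are placeholders):

# Upload under a temporary name, then rename/move it server-side once the
# transfer has completed, so the downloader only ever sees finished files.
curl -T report.dat --user name:password \
     "ftp://ftp.example.com/incoming/report.dat.part" \
     -Q "-RNFR incoming/report.dat.part" \
     -Q "-RNTO finished/report.dat"

The leading dash inside each -Q argument tells curl to send that FTP command after the transfer instead of before it.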