How to properly handle a file upload in Wicket

I have a file upload page that takes a file and parses it.
Order of Events
user uploads file
uploaded file gets copied
copied file gets its encoding checked with CPDetector
determined encoding from the copied file is used to parse the original uploaded file
(this is the step that throws a FileNotFoundException on the Solaris test server, during BufferedReader creation)
copied file is deleted
uploaded file is parsed/verified
parsed data is saved to a database
uploaded file is deleted (I can't remember if I'm doing this or Tomcat is.)
The whole process works on my Windows 7 workstation. As noted above, it does not work on my Solaris test server. Something (I suspect Tomcat) is deleting the uploaded file before I can finish parsing it.
I've watched the directory during the process and an uploaded file does indeed get created, but it lasts less than a second before being deleted. Also, it's supposed to go into /opt/tomcat/ but seems to be getting created in the /var/opt/csw/tomcat6/temp/ directory instead.
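For reference, the upload step is roughly along these lines (a simplified sketch, not my exact code; the component ids and the target directory are placeholders):

import java.io.File;
import org.apache.wicket.markup.html.form.Form;
import org.apache.wicket.markup.html.form.upload.FileUpload;
import org.apache.wicket.markup.html.form.upload.FileUploadField;

// Minimal sketch of a Wicket upload form; the "file" id and the target directory are placeholders.
public class UploadForm extends Form<Void> {

    private final FileUploadField fileField = new FileUploadField("file");

    public UploadForm(String id) {
        super(id);
        setMultiPart(true); // required for file uploads
        add(fileField);
    }

    @Override
    protected void onSubmit() {
        FileUpload upload = fileField.getFileUpload();
        if (upload == null) {
            return; // nothing was uploaded
        }
        try {
            // Copy the upload to a location the application controls instead of relying
            // on the container-managed temp file, which may be cleaned up early.
            File target = new File("/path/we/control", upload.getClientFileName());
            upload.writeTo(target);
            // ... encoding detection and parsing then run against 'target' ...
        } catch (Exception e) {
            error("Could not store the uploaded file: " + e.getMessage());
        }
    }
}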
Thanks for any help

I realize it's probably bad form to answer my own question like this, but I wanted to leave this here in case it helps someone else.
The problem turned out to be how I was accessing the files.
I had hard-coded file paths for Windows and database-loaded ones for the test server.
I switched those to using System.getProperty("catalina.home")+"/temp/" + filename
I'm also copying the temp file a second time, so I end up with the following (see the sketch after the list):
Order of Events (changes are in bold)
user uploads file
uploaded file gets copied
copied file gets its encoding checked with CPDetector
uploaded file gets copied again to ensure a copy survives to be parsed
determined encoding from the copied file is used to parse the original uploaded file
copy used for encoding detection is deleted
copy for parse is parsed/verified
parsed data is saved to a database
parsed file is deleted.
uploaded file is deleted (I'm not sure if I'm doing this or Tomcat is.)
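A rough sketch of the revised flow (simplified; the Wicket and CPDetector specifics are omitted, and detectEncoding()/parseAndSave() are placeholder methods, not my actual code):

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Simplified sketch of the revised flow. detectEncoding() and parseAndSave()
// are placeholders for the CPDetector call and the parsing/DB code.
public class UploadProcessor {

    public void process(Path uploadedFile, String filename) throws IOException {
        Path tempDir = Paths.get(System.getProperty("catalina.home"), "temp");
        Files.createDirectories(tempDir);

        // Copy #1: used only for encoding detection.
        Path encodingCopy = tempDir.resolve(filename + ".enc");
        Files.copy(uploadedFile, encodingCopy, StandardCopyOption.REPLACE_EXISTING);

        // Copy #2: guaranteed to survive long enough to be parsed, even if the
        // container cleans up the original upload early.
        Path parseCopy = tempDir.resolve(filename + ".parse");
        Files.copy(uploadedFile, parseCopy, StandardCopyOption.REPLACE_EXISTING);

        try {
            Charset encoding = detectEncoding(encodingCopy); // CPDetector goes here
            Files.delete(encodingCopy);                      // done with copy #1
            parseAndSave(parseCopy, encoding);               // parse/verify, save to DB
        } finally {
            Files.deleteIfExists(parseCopy);                 // clean up copy #2
        }
    }

    private Charset detectEncoding(Path file) {
        // CPDetector call omitted in this sketch.
        return Charset.defaultCharset();
    }

    private void parseAndSave(Path file, Charset encoding) throws IOException {
        // Parsing/verification and DB save omitted in this sketch.
    }
}

The key point is that both copies live under catalina.home/temp, which resolves correctly on both Windows and Solaris, and that parsing never depends on the container-managed upload surviving.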

Related

How to add a file to FAL which was uploaded via FTP

In a scheduler action I need to add files to FAL (to sys_file) which are already in the storage, uploaded via FTP. The normal way
storage->addFile(....)
which copies the file from a temporary folder to the file storage and adds it to the sys_file table does not work, because the file is already in the fileadmin directory. If I try, I get this error message:
[ERROR] Cannot add a file that is already part of this storage.
How is it possible to add a file to sys_file which is already in fileadmin?
Thanks!
AFAIK addFile() is to be used for files uploaded from a local disk. If the files are already available on the remote server, you should go for addUploadedFile() instead.

Ability to detect the filename of the currently downloaded file and cleanup of the downloaded file

For a downloaded file, we need the ability to grab its filename, as the filename can sometimes be dynamic.
We also need the ability to clean up the downloaded file after the verifications are complete.
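One possible approach (a sketch only, assuming downloads land in a known directory): after the download completes, take the most recently modified file in that directory as the downloaded file, use its name for verification, and delete it afterwards.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.Optional;
import java.util.stream.Stream;

// Sketch: resolve a dynamically named download by taking the newest regular
// file in the download directory, then delete it once verification is done.
public class DownloadedFileHelper {

    public static Optional<Path> newestFile(Path downloadDir) throws IOException {
        try (Stream<Path> files = Files.list(downloadDir)) {
            return files
                    .filter(Files::isRegularFile)
                    .max(Comparator.comparingLong((Path p) -> p.toFile().lastModified()));
        }
    }

    public static void cleanup(Path downloadedFile) throws IOException {
        Files.deleteIfExists(downloadedFile);
    }
}

A verification step would call newestFile(...) once the download finishes (or poll until the file stops changing), read the dynamic name from the returned Path, and call cleanup(...) when done.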

File selected in Windows Explorer with Preview Pane locks the file so PowerShell cannot output to that file

I have a scheduled script that outputs a bunch of HTML files with static names to a remote location. I noticed that if I have one of those files selected in Windows Explorer so that its contents are shown in the Preview Pane, PowerShell cannot overwrite that file and skips updating it.
This only happens if the output files are in a remote location; it works just fine if the files are local.
How do I force PowerShell to overwrite remote files in this situation? Lots of users work with those reports, and if one of them leaves a Windows Explorer window open overnight with one of those files highlighted when the script runs, the file is not going to be updated.
Move the HTML files to a web server; that will solve your problem entirely. IIS setup on Windows Server is next-next-next. You can leave a link to the new file location (https://....) in the old place, so users can easily navigate to the new location. Possibly this link can be automated (not sure, because of modern security standards).
Try [System.IO.File]::Delete($path) just before writing the file. This removes the file entry from the filesystem but leaves the file open for anyone who currently has it open. Your script then writes to a new file with the same name. The old file exists without a name (deleted) but stays open until everyone closes it. Check that it was actually deleted with a refresh!
Try [System.IO.File]::Move($path, $someTrashFullName) just before writing the file. $someTrashFullName probably has to be on the same drive. Same as Delete, but it renames the file. Some self-updating software uses this strategy: the file is renamed but is still kept open under the new name.
Try replacing the file with a shortcut to another file. You can generate files with different names and change the shortcut programmatically.
HTML files that change location using JS? They read a nearby JSON file (generated by the export script) and look up the new filename there. So the user opens a static, unchanged A.html, the JS inside looks up the new name in A.json and redirects the user to A-2020-08-11.html. I'm not sure browsers allow JS to read JSON files when the page is opened from a network drive.
The only way left is to stop the network share and/or close open files server-side.
Maybe have some fun with disabling the Preview Pane for this folder, or disabling it completely?
Try with -Force, but to me it seems more like a permission issue:
Remove-Item -Path '\\server\share\file' -Force

How to get an MD5 checksum of a file on a web site in PowerShell

I need to create a script in PowerShell that validates the hash of a file located on a web site and, if the file has changed, downloads it.
Is there a way to validate the hash of the file without first downloading the file to the local machine?
You need to have the content of a file to create a hash for it, so you always have to download the file first. This is easy in PowerShell: download the file and create the MD5 hash (checksum).
If you own the web site, create the MD5 hash on the server, download just that hash, and compare it to a locally stored hash; if they differ, download the whole file.
I don't think there is a way to do this without downloading the file. An MD5 checksum is generated from the file's contents, so in order to generate it with PowerShell you need the contents. Ergo, you need to download the file.
I would advise generating the checksum directly on the web server via PHP or whatever language you use there. You could maybe save the checksums in a separate metadata file or append them to the original file's name. Then you can compare the checksum without downloading the full file.
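To illustrate the compare-then-download idea in a language-agnostic way, here is a rough Java sketch; the URLs and the .md5 sidecar file are assumptions, not something the site necessarily provides.

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: compare a server-published MD5 (here assumed to live at <file URL>.md5)
// with the MD5 of the local copy; download the full file only when they differ.
public class RemoteHashCheck {

    static String md5Of(Path file) throws IOException, NoSuchAlgorithmException {
        byte[] hash = MessageDigest.getInstance("MD5").digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    static String remoteMd5(String md5Url) throws IOException {
        try (InputStream in = URI.create(md5Url).toURL().openStream()) {
            return new String(in.readAllBytes()).trim();
        }
    }

    public static void main(String[] args) throws Exception {
        String fileUrl = "https://example.com/data.zip"; // assumed URL
        String md5Url = fileUrl + ".md5";                // assumed sidecar hash file
        Path local = Paths.get("data.zip");

        boolean changed = !Files.exists(local)
                || !remoteMd5(md5Url).equalsIgnoreCase(md5Of(local));

        if (changed) {
            try (InputStream in = URI.create(fileUrl).toURL().openStream()) {
                Files.copy(in, local, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}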

Why does CGI.pm upload an old revision of a file on a successful new file upload?

I am using CGI.pm version 3.10 for file uploads in Perl. I have a Perl script which uploads the file, and one of my applications keeps track of different revisions of the uploaded document with a check-in/check-out facility.
Steps to reproduce:
I did a check-out (downloaded a file) using my application (which is web-based and uses Apache).
Logged out of the current user session.
Logged in again with the same credentials and then checked in (uploaded) a new file.
Output:
Upload successful
Perl upload script shows the correct uploaded data
New revision of the file created
The output is correct and expected, except for one case, which is the issue.
Issue:
The content of the newly uploaded file is the same as the content of the last uploaded revision in the DB.
I am using a temp folder for copying the new content, and if I print the new content in the upload script it comes out correct. I have no limit on the CGI upload size. It seems it fails somewhere in the CGI environment; it might be the version I am using. I am not using taint mode.
Can anybody help me understand what the possible reason might be?
Sounds like you're getting the old file name stuck in the file upload field. Not sure if that can happen for filefield, but this is a feature for other field types.
Try adding the -nosticky pragma, e.g., use CGI qw(-nosticky :all);. Another pragma to try is -private_tempfiles, which should prevent the user from "eavesdropping" even on their own uploads.
Of course, it could be that you need to localize (my) some variable or add -force to the filefield.
I found the issue. The reason was that the destination path of the copied file was not correct. This was because one of my application's events maps the path of the copied file to a different directory, and that path is stored in the user session. This happens only when I run that event just before starting the upload script, which is why it was hard to catch. Since the upload script is designed to pick up the newly copied file from the same path, it always ended up uploading the same file to the DB as another revision, while the newly copied file was sitting in the new path.
Solved by mapping the correct path before the upload.
Thanks