Sinatra example code to download a large file

I started using Sinatra recently. Right now I'm using the following code to handle file downloads. It works great for small files, but with large files (> 500MB) the connection drops partway through the transfer.
dpath = "/some root path to file"

get '/getfile/:path' do |path|
  s = path.to_s
  s.gsub!("-*-", "/")
  fn = s.split("/").last
  s = dpath + "/" + s
  send_file s, :filename => fn
end

Two things:
What does your validate method do? If it's trying to load the file into memory, you might be running out of RAM on your server (especially with large files).
Where are you setting fn? It's a local variable inside the get block, and nothing in your code example sets it.
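If the Ruby process itself is what's timing out on big downloads, a common fix is to hand the transfer off to the front-end web server. A minimal sketch, assuming Apache with mod_xsendfile in front (Rack::Sendfile and the X-Sendfile header are assumptions of this example, not anything from the question):

require 'sinatra'
require 'rack/sendfile'

# With mod_xsendfile enabled, send_file only emits an X-Sendfile header
# and Apache streams the large file body itself.
use Rack::Sendfile, "X-Sendfile"

dpath = "/some root path to file"

get '/getfile/:path' do |path|
  s = path.to_s.gsub("-*-", "/")
  send_file File.join(dpath, s), :filename => File.basename(s)
end

Under nginx the equivalent header is X-Accel-Redirect, plus a matching internal location block.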


linker script and changing the flash address

I would like to ask the following question: I'm using an STM32G0xx microcontroller and I want to change the flash address in the linker script automatically, rather than being forced to change it manually every time I generate an application image that should run from a different address. What I did: I wrote an application and flashed it to two different addresses, 0x08001000 and 0x08004800, to have the ability to switch to the other application in case one of them is updated or damaged. It works fine, but for every image I have to change the flash address manually, so I would like to ask whether it can be changed somewhere outside the linker script, such as startup.s?
/* linker script for the first application slot (0x08001000) */
MEMORY
{
  RAM (xrw)  : ORIGIN = 0x20000000, LENGTH = 8K
  FLASH (rx) : ORIGIN = 0x8001000,  LENGTH = 32K
}

/* linker script for the second application slot (0x08004800) */
MEMORY
{
  RAM (xrw)  : ORIGIN = 0x20000000, LENGTH = 8K
  FLASH (rx) : ORIGIN = 0x8004800,  LENGTH = 32K
}
You can create two linker files and compile twice, each time with a different linker script and a different output binary. That gives you the two necessary binaries. How to integrate this into your project depends on your way of working (STM32 IDE, standalone Makefile...), which you did not mention.
As a side note, you should fix the LENGTH values in your linker scripts; that prevents the linker from placing data where the other application lives.
Your first application starts at 4KB (0x1000) and the second starts at 18KB (0x4800), so the LENGTH of the first application should be 18 - 4 = 14KB and the second LENGTH should be 32 - 18 = 14KB (if the total FLASH size is 32KB).
You can write two different linker scripts and apply one or the other in your build environment (with the -T linker flag), or you can use a variable for your ORIGIN and pass it with -Wl,--defsym=<VAR_NAME>=<VAR_VALUE>.
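A minimal sketch of the --defsym route; the symbol name APP_ORIGIN and the file names are invented for the example, and DEFINED() gives the script a fallback so it still links without the flag:

/* app.ld - one script for both slots */
MEMORY
{
  RAM (xrw)  : ORIGIN = 0x20000000, LENGTH = 8K
  FLASH (rx) : ORIGIN = DEFINED(APP_ORIGIN) ? APP_ORIGIN : 0x08001000, LENGTH = 14K
}

# build the second-slot image without editing the script:
arm-none-eabi-gcc $(OBJS) -T app.ld -Wl,--defsym=APP_ORIGIN=0x08004800 -o app_slot2.elf

The same invocation with APP_ORIGIN=0x08001000 (or with no --defsym at all) produces the first-slot image.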

Importing multiple .xlsx files taken from a network of files

I'm a Matlab novice and have been struggling with this particular task for weeks now.
I'm trying to create a nested for loop to load all of my data into Matlab. The code needs to go into the folder for subject 1, enter the folder for the first exercise, load 3 files (EMG, kinetic, and kinematic data), then go back, enter the folder for the second exercise and load the data there, repeat this for all 5 exercises, and then repeat the whole process for all 12 subjects. I have written code that loads the data from just one folder, using information I've read on the internet, but extending it to gather all of the data from all these folders has proven very difficult.
Below is the code I have currently written:
clear all;
Subjects = dir('C:\Users\pricep\Desktop\JuggaData');
Exercise = dir('C:\Users\pricep\Desktop\JuggaData\Subject1');
Trialdata = dir('C:\Users\pricep\Desktop\JuggaData\Subject1\*.xlsx');
subjectnum = numel(Subjects);
exercisenum = numel(Exercise);
datanum = numel(Trialdata);
myData = cell(datanum,1);
for k = 1:subjectnum
    for j = 1:exercisenum
        for i = 1:datanum
            filename = sprintf(Trialdata(i).name);
            myData{k} = importdata(filename);
        end
    end
end
No error message appears, but no data appears either.
As you can tell I'm a complete novice so any help would be greatly appreciated.
Verify that subjectnum, exercisenum and datanum are all above zero; probably one of them is zero, causing nothing to happen. Besides this, there are other issues (see the corrected sketch after this list):
exercisenum is constant, which assumes that every subject has the same number of exercises. That might be wrong.
dir also returns the entries .. (parent directory) and . (current directory), which you don't filter out.
importdata is called with a bare file name, which only works if MATLAB's current folder happens to be the folder that contains the file.
myData = cell(datanum,1); does not allocate enough space, assuming there is more than one subject and exercise, and myData{k} overwrites the same cell on every pass of the inner loops.
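Putting those fixes together, a minimal sketch; the folder layout JuggaData\SubjectX\<exercise>\*.xlsx is assumed from your description:

root = 'C:\Users\pricep\Desktop\JuggaData';
Subjects = dir(root);
Subjects = Subjects([Subjects.isdir] & ~ismember({Subjects.name}, {'.', '..'}));

myData = cell(numel(Subjects), 1);
for k = 1:numel(Subjects)
    subjDir = fullfile(root, Subjects(k).name);
    Exercises = dir(subjDir);
    Exercises = Exercises([Exercises.isdir] & ~ismember({Exercises.name}, {'.', '..'}));
    for j = 1:numel(Exercises)
        exDir = fullfile(subjDir, Exercises(j).name);
        Trialdata = dir(fullfile(exDir, '*.xlsx'));
        for i = 1:numel(Trialdata)
            % absolute path, so MATLAB's current folder does not matter
            myData{k}{j}{i} = importdata(fullfile(exDir, Trialdata(i).name));
        end
    end
end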

Why does this code work successfully with Enumerator.fromFile?

I wrote the file-transfer code as follows:
val fileContent: Enumerator[Array[Byte]] = Enumerator.fromFile(file)
val size = file.length.toString
file.delete // (1) the file is temporary, so it should be deleted
SimpleResult(
  header = ResponseHeader(200, Map(CONTENT_LENGTH -> size, CONTENT_TYPE -> "application/pdf")),
  body = fileContent)
This code works successfully, even when the file is rather large (2.6 MB), but I'm confused: my understanding is that .fromFile() is a wrapper around fromCallback(), and SimpleResult actually reads the file in buffered chunks, yet the file is deleted before that happens.
My naive assumption is that java.io.File.delete waits until the file is released after the chunked reading completes, but I have never heard of the Java File class behaving that way. Alternatively, .fromFile() might have already loaded the whole content into the Enumerator instance, but that would go against the fromCallback() contract, I think.
Does anybody know how this mechanism works?
I'm guessing you are on some kind of Unix system, OS X or Linux for example.
On a Unix-y system you can actually delete a file that is open: any filesystem entry is just a link to the actual file, and so is the file handle you get when you open it. The file contents don't become unreachable/deleted until the last link to them is removed.
So: the file no longer shows up in the filesystem after you do file.delete, but you can still read it through the InputStream that was created in Enumerator.fromFile(file), since that call opened a file handle. (On Linux you can actually find it through the special /proc filesystem which, among other things, lists the file handles of each running process.)
On Windows I think you would get an error, though, so if the app is to run on multiple platforms you should test it on Windows as well.
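You can watch this unlink-while-open behaviour outside Play; a small self-contained sketch (plain java.io, names invented for the demo):

import java.io.{File, FileInputStream, PrintWriter}

object DeleteWhileOpen extends App {
  val file = File.createTempFile("demo", ".txt")
  val out = new PrintWriter(file)
  out.print("still readable after delete")
  out.close()

  val in = new FileInputStream(file) // open handle = extra link to the inode
  println(file.delete())             // true: the directory entry is removed
  println(file.exists())             // false: gone from the filesystem
  // ...yet on a Unix-y system the bytes stay reachable via the open handle
  println(scala.io.Source.fromInputStream(in).mkString)
  in.close()                         // last reference dropped, space reclaimed
}

On Windows the delete call itself typically fails while the stream is open, which matches the caveat above.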

How can I get file size in Perl before processing an upload request?

I want to get the file size, and I'm doing this:
my $filename = $query->param("upload_file");
my $filesize = (-s $filename);
print "Size: $filesize ";
Yet it is not working. Note that I have not uploaded the file yet; I want to check its size before uploading it, to limit it to a maximum of 1 MB.
You can't know the size of something before it is uploaded. But you can check the Content-Length request header sent by the browser, if there is one; then you can decide whether or not you want to believe it. Note that Content-Length is the length of the entire request stream, including other form fields, not just the file upload itself, but it's sufficient to get a ballpark figure for conformant clients.
Since you seem to be running under plain CGI, you should be able to get the request body length in $ENV{CONTENT_LENGTH}.
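For example, a rough pre-check along those lines; the 1 MB cap comes from your requirement, and remember it measures the whole multipart body, not just the file:

my $max_bytes = 1024 * 1024;   # 1 MB ceiling, per the question
if (($ENV{CONTENT_LENGTH} || 0) > $max_bytes) {
    print "Status: 413 Request Entity Too Large\r\n\r\n";
    exit;
}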
You also want to sanity-check against $CGI::POST_MAX possibly already being set (from perldoc CGI):
$CGI::POST_MAX
If set to a non-negative integer, this variable puts a ceiling on the size of
POSTings, in bytes. If CGI.pm detects a POST that is greater than the ceiling,
it will immediately exit with an error message. This value will affect both
ordinary POSTs and multipart POSTs, meaning that it limits the maximum size of
file uploads as well. You should set this to a reasonably high value, such as
1 megabyte.
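A minimal sketch of that check; note that $CGI::POST_MAX must be set before CGI->new parses the request, and the status string shown is what CGI.pm reports when the ceiling is exceeded:

use strict;
use warnings;
use CGI;

$CGI::POST_MAX = 1024 * 1024;   # refuse request bodies over ~1 MB

my $query = CGI->new;
if (my $error = $query->cgi_error) {
    print $query->header(-status => $error);   # e.g. "413 Request entity too large"
    exit;
}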
The uploaded file is stashed in a tmp location on the server when the form is submitted, so check the file size there.
Supply the value for $field yourself:
my $upload_filehandle = $query->upload($field);
my $tmpfilename = $query->tmpFileName($upload_filehandle);
my $file_size = (-s $tmpfilename);
This has nothing to do with Perl.
You are trying to read the size of a file on the user's computer using commands that read files on your server; what you want can't be done with Perl.
This is something that has to be done in the browser, and looking briefly at these questions, it's either very hard or impossible.
Your best bet is to allow the user to start the upload and abort if the file is too big.
If you want to check before you process the request, you might be better off checking on the web page that triggers the request. I don't think the web browser can do it on its own, but if you don't mind Flash, there are many Flash upload tools that can check things like size (as well as file types) and prevent the upload.
A good one to start with is the YUI Uploader. Lots more here: What is the best multiple file JavaScript / Flash file uploader?
Obviously you would want to check on the server side too, but by the time the user has started sending the request to the server, you are already using up your CPU cycles and bandwidth.
Thanks everyone for your replies. I just found out why $filesize = (-s $filename); was not working: I was checking the file size during an Ajax request rather than on an actual page submit, which is why the size came back as zero. I changed it to submit the page and it worked. Thanks.
Just read this post, but while checking the Content-Length is a good approximate pre-check, you could also save the file to a temporary folder and then perform any kind of check on it. If it doesn't meet your criteria, just delete it and don't move it to its final destination.
Look at the Perl documentation for file test operators (-X, perldoc.perl.org) and stat (perldoc.perl.org). Also, you can look at this upload script, which does something similar to what you are trying to do.

How can I validate an image file in Perl?

How would I validate that a .jpg file is a valid image file? We are having files written to a directory via FTP, but we seem to be picking up each file before it has finished being written, creating invalid images. I need to be able to identify when a file is no longer being written to. Any ideas?
Easiest way might just be to write the file to a temporary directory and then move it to the real directory after the write is finished.
Or you could check here.
JPEG::Error
[arguments: none] If the file reference remains undefined after a call to new, the file is to be considered not parseable by this module, and one should issue some error message and go to another file. An error message explaining the reason of the failure can be retrieved with the Error method:
EDIT:
Image::TestJPG might be even better.
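The quoted passage matches Image::MetaData::JPEG's documentation (treat the module name as an assumption), where new returns undef for unparseable input; a sketch of the per-file check:

use strict;
use warnings;
use Image::MetaData::JPEG;

for my $path (glob '/incoming/*.jpg') {
    my $image = Image::MetaData::JPEG->new($path);
    if (!$image) {
        # not parseable as JPEG, possibly still being uploaded: skip it
        warn "Invalid JPEG $path: " . Image::MetaData::JPEG::Error() . "\n";
        next;
    }
    # ...the file parses as a JPEG, safe to hand on for processing...
}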
You're solving the wrong problem, I think.
What you should be doing is figuring out how to tell when whatever FTP daemon you're using is done writing the file; that way, when you come to have the same problem for (say) GIFs, DOCs or MPEGs, you don't have to fix it again.
Precisely how you do that depends rather crucially on which FTP daemon you're running on which OS. Some do, I believe, have hooks you can set to trigger when an upload is done.
If you can run your own FTP daemon, Net::FTPServer or POE::Component::Server::FTP are customizable to do the right thing.
In the absence of that:
1) try tailing the logs with a Perl script that looks for 'upload complete' messages
2) use something like lsof or fuser to check whether anything still has the file open before you try to copy it.
Again looking at the FTP issue rather than the JPG issue.
I check the timestamp on the file to make sure it hasn't been modified in the last X (here, 5) minutes; that way I can be reasonably sure the upload has finished:
# time in seconds that the file was last modified
my $last_modified = (stat("$path/$file"))[9];

# current time in seconds since the epoch (i.e. 1970)
my $epoch_time = time();

# ensure the file has not been modified in the last 5 minutes,
# i.e. that it is not still uploading
unless ( $last_modified >= ($epoch_time - 300) ) {
    # move / edit / whatever
}
I had something similar come up once; more or less what I did was (rewritten here in Perl, with process_image standing in for your own handler):
my $old_size = 0;
my $current_size;
# poll until the size stops changing between checks
while (($current_size = -s $image_file) != $old_size) {
    $old_size = $current_size;
    sleep 10;
}
process_image($image_file);
Have the FTP process set the read-only flag when an upload completes, then only work with files that have the read-only flag set.
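A sketch of that handshake; the directory name is a placeholder, process_image is your own handler, and it assumes the FTP side runs something like chmod a-w on completed uploads:

use strict;
use warnings;

for my $file (glob '/incoming/*.jpg') {
    next if -w $file;      # still writable => upload may still be in progress
    process_image($file);  # hypothetical handler for finished files
}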