ESENT: reading an ESE database file

The page size of the file I am reading is 32768. When I set JET_paramDatabasePageSize to 32768, JetInit returns -1213. When I then set JET_paramRecovery to "Off", JetInit succeeds, but JetAttachDatabase returns -550.
Here is my code:
err = JetSetSystemParameter(&instance, sesid, JET_paramDatabasePageSize, 32768, NULL);
err = JetCreateInstance(&instance, NULL);
err = JetSetSystemParameter(&instance, sesid, JET_paramRecovery, 0, "Off");
err = JetInit(&instance);
err = JetBeginSession(instance, &sesid, NULL, NULL);
err = JetAttachDatabase(sesid, buffer, JET_bitDbReadOnly);
err = JetOpenDatabase(sesid, buffer, NULL, &dbid, JET_bitDbReadOnly);
What's wrong with it? I am running Windows 7 32-bit.

The page size is global to the process (NOT just the instance) and is persisted in the log files and the database, so changing the page size can be annoyingly tricky.
Is there information in the database that you're trying to access? Or did you just experience this during development?
If you saw this during development, then the easiest thing to do is to blow everything away (del *.edb edb*) [assuming that you kept the prefix as "edb"].
Also, are you sure that the database is 32k pages? You can confirm with esentutl.exe -mh <database-name>.
It will be trickier to recover the data if you do care about it and you switched the page size. (I don't know the procedure off the top of my head; I'd have to try a few things out...)
-martin

Related

Siemens S7-1200. TRCV_C. Error code: 893A; Event ID 02:253A

Please help me solve a problem establishing communication between a PC and a 1211C (6ES7-211-1BD30-0XB0, firmware V2.0.2). I feel that I've made a stupid mistake somewhere, but I can't figure out where exactly.
So, I'm using the function TRCV_C...
The configuration seems to be okay:
When I set CONT=1, the connection is established without any problems...
But when I set EN_R=1, I get "error 893A".
That's what I have in my diagnostic buffer (DB9 is the block where the received data is supposed to be written):
There is an explanation given for "893A" in the manuals: "Parameter contains the number of a DB that is not loaded." The diagnostic buffer also says that DB9 is not loaded. But in my case it is loaded! So what should I do?
It seems that the DBs were created or edited manually, which leaves them misaligned with the FB instances. Try removing the DBs and FB instances, then add the FB instances again with automatically created instance DBs and do an offline download.

Is it possible to restore a HD that was removed from VirtualBox without a snapshot?

I've removed the single hard drive (HD) from my VM (VirtualBox 4.0.6) by mistake! :S
I added the HD again, but now it loads with the base configuration (ignoring the state prior to the HD removal and any snapshots).
I can still see the snapshot tree and they are all there, but I didn't take a snapshot prior to the HD removal.
QUESTION: Is it possible to get the last state up and running again?
I removed the disk this way (the screenshot is in Portuguese, but that's not important):
http://4.bp.blogspot.com/-4iCxcHhqe0M/UISuqMenjII/AAAAAAAAAT8/j726P30_QRA/s1600/1.png
I'm surprised that VirtualBox didn't even give me any warning about losing data!
After adding the disk again it behaved as described above. And now (after trying to load the disk directly from the Snapshots folder) it shows some info like this:
http://4.bp.blogspot.com/-BPLxUQxSlx0/UISuswjYd6I/AAAAAAAAAUE/bJ1yfKljUr8/s1600/2.png
[EDIT] - SOLVED:
I've just solved the problem by removing the current HD and adding a new HD from the Snapshots folder (the most recent one; in my case a file named something like fb85e8db-d5f4-46a9-b67b-f8fb39a99a1a.vdi).
There is a .prev file that is created at the VM HDD location.
Sample
<HardDisks>
  <HardDisk uuid="{21940e73-1596-45a5-b37e-44cca5d5f1e6}" location="Windows 9.vmdk" format="VMDK" type="Normal">
    <HardDisk uuid="{df9d8522-cc5d-499c-b974-037d3f8f89fb}" location="Snapshots/{df9d8522-cc5d-499c-b974-037d3f8f89fb}.vmdk" format="VMDK"/>
    <HardDisk uuid="{b707fe0a-f50c-4bc6-b03a-d5a4fe6972fd}" location="Snapshots/{b707fe0a-f50c-4bc6-b03a-d5a4fe6972fd}.vmdk" format="VMDK"/>
  </HardDisk>
</HardDisks>
You can add your own snapshots to that list. They are arranged in a tree, and the last one is what gets loaded, so you have to preserve the tree structure if they were nested.
Each snapshot is incremental, so make sure the parents and children are in the proper order.

Perl CGI gets parameters from a different request to the current URL

This is a weird one. :)
I have a script running under Apache 1.3, with Apache::PerlRun option of mod_perl. It uses the standard CGI.pm module. It's a regularly accessed script on a busy server, accessed over https.
The URL is typically something like...
/script.pl?action=edit&id=47049
Which is then brought into Perl the usual way...
my $action = $cgi->param("action");
my $id = $cgi->param("id");
This has been working successfully for a couple of years. However, we started getting support requests this week from customers who were accessing this script and getting blank pages. We already had a line like the following, which puts the current URL into the form we use for customers to report a problem with a page...
$cgi->url(-query => 1);
And when we view source of the page, the result of that command is the same URL, but with an entirely different query string.
/script.pl?action=login&user=foo&password=bar
A query string that we recognise as being from a totally different script elsewhere on our system.
However crazy it sounds, it seems that when users access a URL with a query string, the query string the script sees is one from a previous request to another script. Of course the script can't handle that action and outputs nothing.
We have some automated test scripts running to see how often this happens, and it's not every time. To throw some extra confusion into the mix, after an Apache restart, the problem seems to initially disappear completely only to come back later. So whatever is causing it is somehow relieved by a restart, but we can't see how Apache can possibly take the request from one user and mix it up with another.
This, it appears, is an interesting combination of Apache 1.3, mod_perl 1.31, CGI.pm and Apache::GTopLimit.
A bug was logged against CGI.pm in May last year: RT #57184
which also references the question "CGI.pm params not being cleared?".
CGI.pm registers a cleanup handler in order to clean up all of its cache... (line 360):
$r->register_cleanup(\&CGI::_reset_globals);
Apache::GTopLimit (like Apache::SizeLimit mentioned in the bug report) also has a handler like this:
$r->post_connection(\&exit_if_too_big) if $r->is_main;
In pre-1.31 mod_perl, post_connection and register_cleanup appear to push onto a stack, while in 1.31 the GTopLimit handler appears to clobber the CGI.pm entry. So if your GTopLimit function fires because the Apache process has grown too large, CGI.pm won't be cleaned up, leaving it open to returning the same parameters the next time you use it.
The solution seems to be to change line 360 of CGI.pm to:
$r->push_handlers( 'PerlCleanupHandler', \&CGI::_reset_globals);
This explicitly pushes the handler onto the list.
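The clobbering behaviour is easy to see outside mod_perl. This is a plain-Perl sketch (no Apache required; the two registration subs and the handler names are just labels standing in for the mod_perl calls above) of why a clobbering setter loses the CGI.pm cleanup while a pushing one keeps it:

```perl
use strict;
use warnings;

my @cleanup_handlers;

# register_cleanup as observed under mod_perl 1.31: replaces the whole list
sub clobbering_register { @cleanup_handlers = ($_[0]) }

# push_handlers-style registration: appends, preserving earlier entries
sub pushing_register { push @cleanup_handlers, $_[0] }

my @ran;
pushing_register(    sub { push @ran, 'CGI::_reset_globals' } );
clobbering_register( sub { push @ran, 'exit_if_too_big' } );

# Run the "cleanup" phase: only exit_if_too_big fires, so CGI.pm's
# globals are never reset and stale parameters survive to the next request.
$_->() for @cleanup_handlers;

print join(',', @ran), "\n";   # exit_if_too_big
```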
Our restart of Apache temporarily resolved the problem because it reduced the size of all the processes and gave GTopLimit no reason to fire.
And we assume it has appeared over the past few weeks because new development work has increased the size of the Apache processes.
All tests so far point to this being the issue, so fingers crossed it is!

How can I get file size in Perl before processing an upload request?

I want to get the file size. I'm doing this:
my $filename=$query->param("upload_file");
my $filesize = (-s $filename);
print "Size: $filesize ";
Yet it is not working. Note that I have not uploaded the file yet; I want to check its size before uploading, so I can limit it to a maximum of 1 MB.
You can't know the size of something before uploading. But you can check the Content-Length request header sent by the browser, if there is one. Then, you can decide whether or not you want to believe it. Note that the Content-Length will be the length of the entire request stream, including other form fields, and not just the file upload itself. But it's sufficient to get you a ballpark figure for conformant clients.
Since you seem to be running under plain CGI, you should be able to get the request body length in $ENV{CONTENT_LENGTH}.
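Under plain CGI that pre-check could look something like this (a sketch only: the 1 MB limit and the 413 response are illustrative choices, and request_too_big is an invented helper name):

```perl
use strict;
use warnings;

my $MAX_BYTES = 1024 * 1024;   # 1 MB ceiling for the whole request body

# Returns true when the request advertises a body larger than the limit.
# Content-Length covers the entire POST, not just the file part, so this
# is only a ballpark pre-check, as noted above.
sub request_too_big {
    my ($content_length) = @_;
    return defined $content_length && $content_length > $MAX_BYTES;
}

# Run this before creating the CGI object and reading the body:
if ( request_too_big( $ENV{CONTENT_LENGTH} ) ) {
    print "Status: 413 Request Entity Too Large\r\n";
    print "Content-Type: text/plain\r\n\r\n";
    print "Upload rejected: request body exceeds 1 MB.\n";
    exit;
}
```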
You also want to sanity-check whether POST_MAX is already set (from perldoc CGI):
$CGI::POST_MAX
If set to a non-negative integer, this variable puts a ceiling on the size of
POSTings, in bytes. If CGI.pm detects a POST that is greater than the ceiling,
it will immediately exit with an error message. This value will affect both
ordinary POSTs and multipart POSTs, meaning that it limits the maximum size of
file uploads as well. You should set this to a reasonably high value, such as
1 megabyte.
The uploaded file is stashed in a temporary location on the server when the form is submitted; check the file size there.
Supply the value for $field:
my $upload_filehandle = $query->upload($field);
my $tmpfilename = $query->tmpFileName($upload_filehandle);
my $file_size = (-s $tmpfilename);
This has nothing to do with Perl.
You are trying to read the size of a file on the user's computer using operators that read files on your server; what you want can't be done with server-side Perl.
This is something that has to be done in the browser, and looking briefly at these questions it's either very hard or impossible.
Your best bet is to allow the user to start the upload and abort if the file is too big.
If you want to check before you process the request, you might be better off checking on the web page that triggers the request. I don't think the web browser can do it on its own, but if you don't mind Flash, there are many Flash upload tools that can check things like size (as well as file type) and prevent the upload.
A good one to start with is the YUI Uploader. Lots more here: What is the best multiple file JavaScript / Flash file uploader?
Obviously you would want to check on the server side too, but by the time the user has started sending the request to the server, you are already using up your CPU cycles and bandwidth.
Thanks everyone for your replies. I just found out why $filesize = (-s $filename); was not working: I was checking the file size while sending an Ajax request rather than when submitting the page, which is why the size came back as zero. I changed it to submit the page and it worked. Thanks.
Just read this post, but while checking the Content-Length is a good approximate pre-check, you could also save the file to a temporary folder and then perform any kind of check on it. If it doesn't meet your criteria, just delete it and don't move it to its final destination.
Look at the Perl documentation for the file test operators (-X) and stat on perldoc.perl.org. Also, you can look at this upload script, which does something similar to what you are trying to do.

How can I prevent GD from running out of memory?

I'm not sure if memory is the culprit here. I am trying to instantiate a GD image from data in memory (it previously came from a database). I try a call like this:
my $image = GD::Image->new($image_data);
$image comes back as undef. The POD for GD says that the constructor will return undef for cases of insufficient memory, so that's why I suspect memory.
The image data is in PNG format. The same thing happens if I call newFromPngData.
This works for very small images, like under 30K. However, slightly larger images, like ~70K will cause the problem. I wouldn't think that a 70K image should cause these problems, even after it is deflated.
This script is running under CGI through Apache 2.0, on OS 10.4, if that matters at all.
Are there any memory limitations imposed by Apache by default? Can they be increased?
Thanks for any insight!
EDIT: For clarification, the GD::Image object never gets created, so clearing out the $image_data from memory isn't really an option.
The GD library eats many bytes per byte of image size; it's well over a 10:1 ratio!
When a user uploads an image to our system, we start by checking the file size before loading it into a GD image. If it's over a threshold (1 Megabyte) we don't use it but instead report an error to the user.
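That threshold check is just a byte-count comparison done before GD ever sees the data. A minimal sketch (the 1 MB figure is the answer's example value, and image_small_enough is an invented name):

```perl
use strict;
use warnings;

my $MAX_IMAGE_BYTES = 1024 * 1024;   # reject anything over 1 MB

# Works for a file on disk (-s $path) or for data already in memory
# (length $image_data), since both just yield a byte count.
sub image_small_enough {
    my ($byte_count) = @_;
    return $byte_count <= $MAX_IMAGE_BYTES;
}

# Only construct the GD object once the size check has passed:
#   if ( image_small_enough( length $image_data ) ) {
#       my $image = GD::Image->new($image_data);
#   }
```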
If we really cared, we could dump it to disk, use the command-line convert tool to rescale it to a sane size, then load the output into GD and remove the temporary file.
convert -define jpeg:size=800x800 tmpfile.jpg -thumbnail '800x800' -
This will scale the image so it fits within an 800x800 square; its longest edge is now 800px, which should load safely. The command above sends the shrunken JPEG to STDOUT. The -define jpeg:size option tells convert not to bother holding the full-size image in memory, but only enough to scale it to 800x800.
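From Perl, shelling out to that pipeline might look roughly like this (a sketch under assumptions: ImageMagick's convert is on PATH, and convert_command/shrink_image are invented helper names):

```perl
use strict;
use warnings;

# Hypothetical helper: builds the argument list for the convert call
# shown above, so the huge original never has to live inside GD.
sub convert_command {
    my ($tmpfile) = @_;
    return ( 'convert', '-define', 'jpeg:size=800x800',
             $tmpfile, '-thumbnail', '800x800', '-' );
}

# Capture the shrunk JPEG from convert's STDOUT (requires ImageMagick):
sub shrink_image {
    my ($tmpfile) = @_;
    open my $out, '-|', convert_command($tmpfile)
        or die "cannot run convert: $!";
    binmode $out;
    local $/;                 # slurp the whole output stream
    my $shrunk = <$out>;
    close $out;
    return $shrunk;
}
```

The shrunk data returned by shrink_image can then be handed straight to GD::Image->new.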
I've run into the same problem a few times.
One of my solutions was simply to increase the amount of memory available to my scripts. The other was to clear the buffer:
Original Script:
$src_img = imagecreatefromstring($userfile2);
imagecopyresampled($dst_img,$src_img,0,0,0,0,$thumb_width,$thumb_height,$origw,$origh);
Edited Script:
$src_img = imagecreatefromstring($userfile2);
imagecopyresampled($dst_img,$src_img,0,0,0,0,$thumb_width,$thumb_height,$origw,$origh);
imagedestroy($src_img);
By destroying the first source image, it freed up enough memory to handle further processing.