JetInit returns -1213 if I change the PageSize - esent

I am trying to read some existing, unmounted ESENT database files (such as Windows.edb). I have been playing around with a few .edb files rather successfully, but when I try to open a database whose PageSize is not 8192 I get an error.
Here's my code (without error-handling):
FError := JetSetSystemParameter(@FInstance, nil, JET_paramDatabasePageSize, FPagesize, nil);
FError := JetCreateInstance(@FInstance, 'EDBInstance');
FError := JetInit(@FInstance);
FError := JetBeginSession(FInstance, @FSessionId, nil, nil);
FError := JetAttachDatabase(FSessionId, FFilename, JET_bitDbReadOnly);
It works fine as long as FPageSize = 8192. Any other value (4096, 32768) fails at the JetInit call, which returns -1213. If I don't set the proper PageSize value for the database, I get the same error at JetAttachDatabase, which I can understand. But I fail to comprehend why JetInit returns an error first. What am I doing wrong? I hope Laurion Burchall is reading this! :-)
I am running Windows 7 64-bit.

There are two possibilities:
The database you are trying to open has an 8 KB page size. Use ESENTUTL /MH <database> to see the page size.
The page size is always persisted in the logfiles, which are created by the JetInit call. If you don't clear out those files between runs then you will get a -1213 error when calling JetInit with a different page size.
If you want to open an existing database read-only then you should turn recovery off (set JET_paramRecovery to "off"). That will prevent any logfiles from being generated, which will avoid a lot of problems.
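If you do need recovery on and just want a clean slate between development runs, the log-clearing step can be sketched like this (Python for illustration; the file names assume the default "edb" log base name and that the logs live in the instance's working directory, so adjust for your JET_paramBaseName and log path):

```python
import glob
import os

def clear_esent_logs(log_dir, base_name="edb"):
    """Delete ESENT transaction logs and the checkpoint file so the next
    JetInit is not pinned to the page size recorded in the old logs."""
    removed = []
    for pattern in (base_name + "*.log", base_name + ".chk"):
        for path in glob.glob(os.path.join(log_dir, pattern)):
            os.remove(path)
            removed.append(path)
    return removed
```

Only do this for throwaway development instances; for a database you care about, let recovery run to a clean shutdown instead of deleting its logs.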

Related

[script:es_extended] SCRIPT ERROR: #es_extended/server/functions.lua:127: attempt to index a nil value (local 'xPlayer')

[ script:es_extended] SCRIPT ERROR: #es_extended/server/functions.lua:127: attempt to index a nil value (local 'xPlayer')
[ script:es_extended] > ref (#es_extended/server/functions.lua:127)
Please help me, I'm really frustrated. That's my FiveM console (txAdmin). Nothing works; ESX is completely broken after a server restart.
I had a look at the source code. My best guess is that the player you are using isn't registered in the MySQL database for some reason.
I am guessing this because of the following:
The immediate cause of the error is that xPlayer is nil in server/functions.lua:127
This is due to the player object not being added to the ESX.Players table in server/main.lua:239
The info necessary to make the player object is taken from MySQL on server/main.lua:115
So the most obvious explanation would be that the user wasn't found in the database. It is also possible that the program could not connect to the database at all, but it looks like the fivem-mysql-async library would raise an error instead of continuing silently, so that is less likely (although this would need testing to discount completely).
Are there any messages in the server logs that might give you a clue as to what's going on?

Update TYPO3 7.6 to 8.7: can't get frontend to work in a local test environment with XAMPP

I am working on updating a TYPO3 7.6 installation to 8.7. I am doing this on my local machine with XAMPP on Windows with PHP 7.2.
I got the backend working. It needed some manual work in the DB, like changing the CType in tt_content for my own content elements as well as filling the colPos.
However, when I call the page on the frontend, all I get is a timeout:
Fatal error: Maximum execution time of 60 seconds exceeded in
C:\xampp\htdocs\typo3_src-8.7.19\vendor\doctrine\dbal\lib\Doctrine\DBAL\Driver\Mysqli\MysqliStatement.php on line 92
(this does not change if I set max_execution_time to 300)
Edit: I added an echo just before line 92 in the above file, this is the function:
public function __construct(\mysqli $conn, $prepareString)
{
    $this->_conn = $conn;
    echo $prepareString . "<br />";
    $this->_stmt = $conn->prepare($prepareString);
    if (false === $this->_stmt) {
        throw new MysqliException($this->_conn->error, $this->_conn->sqlstate, $this->_conn->errno);
    }
    $paramCount = $this->_stmt->param_count;
    if (0 < $paramCount) {
        $this->types = str_repeat('s', $paramCount);
        $this->_bindedValues = array_fill(1, $paramCount, null);
    }
}
What I get is the following statement, repeated thousands of times and always exactly the same:
`SELECT `tx_fed_page_controller_action_sub`, `t3ver_oid`, `pid`, `uid` FROM `pages` WHERE (uid = 0) AND ((`pages`.`deleted` = 0) AND (`pages`.`hidden` = 0) AND (`pages`.`starttime` <= 1540305000) AND ((`pages`.`endtime` = 0) OR (`pages`.`endtime` > 1540305000)))`
Note: I don't have any entry in pages with uid=0, so I am really not sure what this is good for. Does there need to be a page with uid=0?
I enabled slow-query logging in MySQL, but nothing gets logged by it. I don't get any additional PHP error, nor a log entry in TYPO3.
So right now I am a bit stuck and don't know how to proceed.
I enabled general logging for MySQL, and when I call a page on the frontend I see this SQL query executed over and over again:
SELECT `tx_fed_page_controller_action_sub`, `t3ver_oid`, `pid`, `uid` FROM `pages` WHERE (uid = 0) AND ((`pages`.`deleted` = 0) AND (`pages`.`hidden` = 0) AND (`pages`.`starttime` <= 1540302600) AND ((`pages`.`endtime` = 0) OR (`pages`.`endtime` > 1540302600)))
Executing this query manually gives back an empty result (I don't have any entry in pages with uid=0). I don't know if that means anything.
What options do I have? How can I find what's missing / where the error is?
First: give your PHP more time to run.
In php.ini, increase max_execution_time to 240 seconds.
Be aware that 240 seconds is the recommended value for TYPO3 in production mode. If you start the install tool you can run a system check and get information about configuration options that might need optimization.
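For reference, the php.ini change is a single line (in a default XAMPP install the file lives at C:\xampp\php\php.ini; restart Apache afterwards):

```ini
max_execution_time = 240
```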
Second: avoid development mode and use production mode.
Execution is faster, but you will lose the option to debug.
Debugging always costs more time and more memory to prepare all that information; maybe 240 seconds are not enough, and you may even need more memory.
The field tx_fed_page_controller_action_sub comes from an extension; it is not part of the core. Most likely you have flux and fluidpages installed in your system.
Try to deactivate those extensions and proceed without them; reintegrate them later if you still need them. A timeout often means that there is some kind of recursion going on. From my experience with flux, it is possible that a content element has itself set as its own flux_parent and therefore creates an infinite rendering loop that causes a fatal error once max_execution_time is exceeded.
So, in your case I'd try to find the record that is causing this (seems to be a page record) and/or the code that initiates the Query. You do not need to debug in Doctrine itself :)

JetAttachDatabase returns -1213

I am trying to read some existing, unmounted ESE database files. I have been playing around with one .dat file rather successfully, but when I try to open an existing database whose PageSize equals 32768 I get an error.
Here's my code (without error-handling):
FError := JetSetSystemParameter(&FInstance, nil, JET_paramRecovery, 0, 'off');
FError := JetCreateInstance(&FInstance, 'myinstance');
FError := JetInit(&FInstance);
FError := JetBeginSession(FInstance, &FSessionId, nil, nil);
FError := JetAttachDatabase(FSessionId, FFilename, JET_bitDbReadOnly);
It fails at the JetAttachDatabase call, which returns -1213. Am I doing something wrong?
I am running Windows 7 32-bit.
The ESENT engine uses a certain page size by default; if I'm not mistaken it's 4K. You will have to tell the engine that the database you want to open has a different page size. Use something like this:
FError := JetSetSystemParameter(&FInstance, nil, JET_paramDatabasePageSize, 32768, nil);
If you open up different databases all the time, you might want to have your application check and set the page size automatically.
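One way to automate that is to read the page size straight out of the database file header before calling JetSetSystemParameter. The offset used below (a 32-bit little-endian value at byte 236) is taken from the libesedb project's description of the EDB format, not from official documentation, so verify it against esentutl /mh output before relying on it. Python for illustration:

```python
import struct

def ese_page_size(path):
    """Read the page size field from an ESE database file header.

    Assumption: the header stores the page size as an unsigned 32-bit
    little-endian integer at byte offset 236 (per libesedb format notes).
    """
    with open(path, "rb") as f:
        header = f.read(240)
    if len(header) < 240:
        raise ValueError("file too small to contain an ESE header")
    return struct.unpack_from("<I", header, 236)[0]
```

The returned value would then be passed as the lParam of JetSetSystemParameter(JET_paramDatabasePageSize) before JetInit.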

ESENT: read an ESE database file

The page size of the file I am reading is 32768. When I set JET_paramDatabasePageSize to 32768, JetInit returns -1213. Then I set JET_paramRecovery to "Off", and JetInit succeeds. But when I call JetAttachDatabase, it returns -550.
Here is my code:
err = JetSetSystemParameter(&instance, sesid, JET_paramDatabasePageSize, 32768, NULL);
err = JetCreateInstance(&instance, NULL);
err = JetSetSystemParameter(&instance, sesid, JET_paramRecovery, 0, "Off");
err = JetInit(&instance);
err = JetBeginSession(instance, &sesid, NULL, NULL);
err = JetAttachDatabase(sesid, buffer, JET_bitDbReadOnly);
err = JetOpenDatabase(sesid, buffer, NULL, &dbid, JET_bitDbReadOnly);
What's wrong with it? I am running Windows 7 32-bit.
The page size is global to the process (NOT just the instance) and is persisted in the log files and the database, so changing the page size can be annoyingly tricky.
Is there information in the database that you're trying to access? Or did you just experience this during development?
If you saw this during development, then the easiest thing to do is to blow everything away (del *.edb edb*) [assuming that you kept the prefix as "edb"].
Also, are you sure that the database is 32k pages? You can confirm with esentutl.exe -mh <database-name>.
It will be trickier to recover the data if you do care about it and you switched the page size. (I don't know the procedure off the top of my head; I'd have to try a few things out...)
-martin

How can I validate an image file in Perl?

How would I validate that a JPG file is a valid image file? We are having files written to a directory via FTP, but we seem to be picking up a file before it has finished being written, creating invalid images. I need to be able to identify when a file is no longer being written to. Any ideas?
Easiest way might just be to write the file to a temporary directory and then move it to the real directory after the write is finished.
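That idea can be sketched as follows (Python for illustration; it assumes the upload directory and the watched directory are on the same filesystem, where a rename is atomic, so the consumer never sees a half-written file):

```python
import os

def publish(tmp_path, final_dir):
    """Move a fully written file into the watched directory.

    os.replace is atomic when source and destination share a
    filesystem, so readers see either no file or the complete file.
    """
    dest = os.path.join(final_dir, os.path.basename(tmp_path))
    os.replace(tmp_path, dest)
    return dest
```

The uploader writes into a private incoming/ directory and calls publish() only once the transfer is complete.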
Or you could check here.
JPEG::Error
[arguments: none] If the file reference remains undefined after a call to new, the file is to be considered not parseable by this module, and one should issue some error message and go to another file. An error message explaining the reason of the failure can be retrieved with the Error method:
EDIT:
Image::TestJPG might be even better.
You're solving the wrong problem, I think.
What you should be doing is figuring out how to tell when whatever FTPd you're using is done writing the file - that way, when you come to have the same problem for (say) GIFs, DOCs or MPEGs, you don't have to fix it again.
Precisely how you do that depends rather crucially on which FTPd you're running, and on which OS. Some, I believe, have hooks you can set to trigger when an upload is done.
If you can run your own FTPd, Net::FTPServer or POE::Component::Server::FTP are customizable to do the right thing.
In the absence of that:
1) try tailing the logs with a Perl script that looks for 'upload complete' messages
2) use something like lsof or fuser to check whether anything is locking a file before you try to copy it.
Again looking at the FTP issue rather than the JPG issue.
I check the timestamp on the file to make sure it hasn't been modified in the last X (say 5) minutes - that way I can be reasonably sure they've finished uploading.
# time in seconds that the file was last modified
my $last_modified = (stat("$path/$file"))[9];
# get the time in secs since epoch (ie 1970)
my $epoch_time = time();
# proceed only if the file hasn't been modified in the last 5 minutes,
# i.e. it is no longer being uploaded
if ( $last_modified < ($epoch_time - 300) ) {
    # move / edit or whatever
}
I had something similar come up once; more or less what I did was poll the file size until it stopped changing (in Perl, to match the above):
my $old_size = -1;
my $size = -s $image_file;
while ($size != $old_size) {
    $old_size = $size;
    sleep 10;
    $size = -s $image_file;
}
process_image($image_file);
Have the FTP process set the readonly flag, then only work with files that have the readonly flag set.
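The consumer side of that read-only-flag approach can be sketched like this (Python for illustration; it assumes the flag shows up as the owner-write bit being cleared, which is how Windows maps the read-only attribute into os.stat results):

```python
import os
import stat

def ready_files(directory):
    """Yield regular files marked read-only (owner-write bit cleared),
    i.e. files the FTP process has finished uploading."""
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        st = os.stat(path)
        if stat.S_ISREG(st.st_mode) and not (st.st_mode & stat.S_IWRITE):
            yield path
```

The FTP process (or a post-upload hook) sets the read-only flag when a transfer completes; the image processor only ever touches files returned by ready_files().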