I am trying to update a user's profile picture, but every time I get an error like this:
RuntimeException SplFileInfo::getSize(): stat failed for /tmp/php8uXhSg
When I dd() just before the save() call, everything looks fine, but the error is thrown at the moment the record is saved.
Below is the code of my controller:
UserController.php
public function update(UserRequest $request, $slug) {
    if ($request->has('profile')) {
        $profile = $request->profile;
        $extension = $profile->getClientOriginalExtension();
        $profile_name = auth()->user()->username.time().'.'.$extension;
        $path = public_path('storage/uploads/avatars');
        $profile->move($path, $profile_name);
        auth()->user()->profile = $profile_name;
    }
    auth()->user()->save();
    return back()->with('mesg', 'Successfully Uploaded.');
}
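For reference, a minimal sketch of the same controller using Laravel's storeAs() helper instead of a manual move(), so the temp file is only touched once. It assumes the standard "public" disk from config/filesystems.php is set up (and linked with php artisan storage:link):

public function update(UserRequest $request, $slug)
{
    $user = auth()->user();

    if ($request->hasFile('profile')) {
        $profile = $request->file('profile');
        $name = $user->username . time() . '.' . $profile->getClientOriginalExtension();

        // storeAs() moves the uploaded temp file via the chosen disk,
        // so nothing tries to stat() it again afterwards
        $profile->storeAs('uploads/avatars', $name, 'public');
        $user->profile = $name;
    }

    $user->save();

    return back()->with('mesg', 'Successfully Uploaded.');
}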
I also had the same problem some time ago. I checked upload_max_filesize / post_max_size and there should have been no problem, but the problem persisted. Then I checked phpinfo() again to make sure, and I tried replacing the save method with create; when I refreshed, the error was suddenly gone, and when I switched back to the save method it still ran smoothly. I still don't know why.
I have the same problem: uploading a 9 KB file, on WAMP, PHP 7.3.5 and Laravel 6.
I think the problem is not Laravel related, but maybe a PHP/WAMP thing.
Changing upload_max_filesize / post_max_size (thus forcing a restart of PHP) fixed the issue.
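For reference, the relevant php.ini directives look like this (the values are only an example; post_max_size should be at least as large as upload_max_filesize, and PHP must be restarted afterwards):

; example values only - size them for your largest expected upload
upload_max_filesize = 20M
post_max_size = 25M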
I have a Vue PWA and it stopped creating my IndexedDB object stores on first load or upgrade. Here is my code; I am using the latest version of idb (https://github.com/jakearchibald/idb):
await openDB('dbname', 1, {
  upgrade(db, oldVersion, newVersion, transaction) {
    switch (oldVersion) {
      case 0:
        // a placeholder case so that the switch block will
        // execute when the database is first created
        // (oldVersion is 0)
        // falls through
      case 1:
        db.createObjectStore('change_log', {keyPath: 'id'});
        db.createObjectStore('person', {keyPath: 'id'})
          .createIndex('username', 'username');
        break;
    }
  }
});
I have tried multiple browsers, incognito tabs, etc., and the same thing always happens: the database is created, but no object stores. I use developer tools to clear all the data in the PWA and refresh, but the result is the same.
If I increment the version number, the version of my database gets updated in the browser, but the object stores still do not get added.
The upgrade() function does not get called.
I had this happen to me earlier in my development, and I fixed it, but I can't remember how. I feel like it may not actually be a coding issue...
OK, I found the problem. I added a logging mechanism to my app, and there was code running BEFORE my upgrade code that was opening the database to create a log entry. It was therefore creating the database (with no object stores) before my upgrade method was ever called. I changed my code so that every open of the database goes through a call that includes the upgrade method, which solved the problem.
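A sketch of that fix, with a hypothetical getDB() helper as the single code path that opens the database, so the upgrade callback can never be bypassed:

import { openDB } from 'idb';

let dbPromise;

// Every caller (including any logging code) must go through this helper,
// so the upgrade callback below is always registered.
export function getDB() {
  if (!dbPromise) {
    dbPromise = openDB('dbname', 1, {
      upgrade(db, oldVersion) {
        if (oldVersion < 1) {
          // fresh database: create the full version-1 schema
          db.createObjectStore('change_log', {keyPath: 'id'});
          db.createObjectStore('person', {keyPath: 'id'})
            .createIndex('username', 'username');
        }
      }
    });
  }
  return dbPromise;
}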
I am working on updating a TYPO3 7.6 site to 8.7. I am doing this on my local machine with XAMPP on Windows, with PHP 7.2.
I got the backend working. It needed some manual work in the DB, like changing the CType in tt_content for my own content elements, as well as filling colPos.
However, when I call a page on the frontend, all I get is a timeout:
Fatal error: Maximum execution time of 60 seconds exceeded in
C:\xampp\htdocs\typo3_src-8.7.19\vendor\doctrine\dbal\lib\Doctrine\DBAL\Driver\Mysqli\MysqliStatement.php on line 92
(This does not change if I set max_execution_time to 300.)
Edit: I added an echo just before line 92 in the file above; this is the function:
public function __construct(\mysqli $conn, $prepareString)
{
    $this->_conn = $conn;
    echo $prepareString."<br />";
    $this->_stmt = $conn->prepare($prepareString);
    if (false === $this->_stmt) {
        throw new MysqliException($this->_conn->error, $this->_conn->sqlstate, $this->_conn->errno);
    }
    $paramCount = $this->_stmt->param_count;
    if (0 < $paramCount) {
        $this->types = str_repeat('s', $paramCount);
        $this->_bindedValues = array_fill(1, $paramCount, null);
    }
}
What I get is the following statement echoed thousands of times, always exactly the same:
SELECT `tx_fed_page_controller_action_sub`, `t3ver_oid`, `pid`, `uid` FROM `pages` WHERE (uid = 0) AND ((`pages`.`deleted` = 0) AND (`pages`.`hidden` = 0) AND (`pages`.`starttime` <= 1540305000) AND ((`pages`.`endtime` = 0) OR (`pages`.`endtime` > 1540305000)))
Note: I don't have any entry in pages with uid = 0, so I am really not sure what this query is good for. Does there need to be a page with uid = 0?
I enabled slow query logging in MySQL, but nothing gets logged by it. I don't get any additional PHP error, nor a log entry in TYPO3.
So right now I am a bit stuck and don't know how to proceed.
I also enabled general query logging for MySQL, and when I call a page on the frontend I see the exact query shown above executed over and over again.
Executing it manually returns an empty result (again, there is no entry in pages with uid = 0). I don't know if that means anything.
What options do I have? How can I find what's missing / where the error is?
First: give your PHP more time to run.
In the php.ini configuration, increase max_execution_time to 240 seconds.
Be aware that 240 seconds is the value recommended for TYPO3 even in production mode. If you start the install tool you can run a system check and get information about configuration settings that might need optimization.
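For example, in php.ini (followed by a restart of Apache so the new value is picked up):

; give TYPO3 enough time for uncached frontend rendering
max_execution_time = 240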
Second: avoid development mode and use production mode.
Execution is faster, but you lose the option to debug.
Debugging always costs more time and more memory to prepare all that information; maybe 240 seconds are not enough and you even need more memory.
The field tx_fed_page_controller_action_sub comes from an extension; it is not part of the core. Most likely you have flux and fluidpages installed in your system.
Try deactivating those extensions and proceed without them; reintegrate them later if you still need them. A timeout often means that some kind of recursion is going on. From my experience with flux, it is possible for a content element to have itself set as its own flux_parent, creating an infinite rendering loop that ends in a fatal error once max_execution_time is exceeded.
So, in your case I'd try to find the record that is causing this (it seems to be a page record) and/or the code that initiates the query. You do not need to debug inside Doctrine itself :)
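As a hypothetical starting point for finding such a record (the tx_flux_parent column name is an assumption that may differ between flux versions, so check your actual tt_content schema first):

-- hypothetical query: content elements that are their own flux parent
SELECT uid, pid, header, CType, tx_flux_parent
FROM tt_content
WHERE deleted = 0
  AND tx_flux_parent = uid;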
I wrote an APK to test the camera on Android 4.2.2, and it works fine.
However, when I moved this APK to Android 4.4, I ran into a problem with Camera::connect().
The call to Camera::connect() fails and prints this message:
W/AppOps ( 1546): Bad call: specified package TestCamera under uid 1000 but it is really -1
I think the reason may be USE_CALLING_UID, security, or something else I can't figure out.
Please give me some suggestions, thanks!
My APK is very simple: only one activity. In onCreate(), I call a JNI function.
The JNI function just runs the code below:
int cameraId = 0;
String16 clientPackageName("TestToGoService");
sp<Camera> camera = Camera::connect(cameraId, clientPackageName, Camera::USE_CALLING_UID);
if (camera == NULL) {
    ALOGE("camera==NULL.");
    return -1;
}
ALOGV("camera=%p.", camera.get());
What I have tried: if I put the code above into an executable (main()), then Camera::connect() works OK.
I have already added the permissions in AndroidManifest.xml.
Thanks again!
I'm not sure if it's still of any help, but I had the same error in the past. The problem is clientPackageName: it has to be set to the exact package name of your application (which must have the proper camera permissions set in the manifest).
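A sketch of the corrected call, where "com.example.testcamera" is a placeholder for whatever package name your AndroidManifest.xml actually declares:

int cameraId = 0;
// must match the <manifest package="..."> of the app that holds the
// camera permission; "com.example.testcamera" is only a placeholder
String16 clientPackageName("com.example.testcamera");
sp<Camera> camera = Camera::connect(cameraId, clientPackageName,
                                    Camera::USE_CALLING_UID);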
I'm facing a big issue, IMO. First, here's my code:
.bind('uploadSuccess', function(event, file, serverData) {
    if (serverData === 'nofile') {
        var swfu = $.swfupload.getInstance('#form');
        swfu.cancelUpload(file.id); // This part is not working :(
    } else {
        alert('File uploaded');
    }
})
In this part I'm checking the server response (I have strict validation restrictions). Now my question: is it possible to remove an uploaded file from the queue? Basically, if the server returns an error I display an error message, but the file still exists in the queue (I've implemented filename and filesize checks to avoid duplicate uploads) and the user cannot replace it (due to the upload and queue limits).
I tried searching for a solution, but without success. Any ideas?
Regards,
Tom
From the link http://swfupload.org/forum/generaldiscussion/881:
"The cancelUpload(file_id) function allows you to cancel any file you have queued. You just have to keep the file's ID value so you can pass it to cancelUpload when you call it."
Probably you have to keep the file's ID from the moment it is queued, before sending anything to the server.
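A sketch of that idea on top of the jQuery wrapper used above; it assumes the wrapper also exposes SWFUpload's standard fileQueued event, and the queuedIds bookkeeping is hypothetical:

var queuedIds = {};

$('#form')
    .bind('fileQueued', function(event, file) {
        // remember the ID as soon as the file enters the queue
        queuedIds[file.name] = file.id;
    })
    .bind('uploadSuccess', function(event, file, serverData) {
        if (serverData === 'nofile') {
            var swfu = $.swfupload.getInstance('#form');
            // cancel using the ID saved at queue time
            swfu.cancelUpload(queuedIds[file.name]);
            delete queuedIds[file.name];
        } else {
            alert('File uploaded');
        }
    });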
How would I validate that a JPG file is a valid image file? We have files being written to a directory via FTP, but we seem to be picking up each file before it has finished being written, creating invalid images. I need to be able to identify when a file is no longer being written to. Any ideas?
The easiest way might just be to write the file to a temporary directory and then move it to the real directory after the write is finished.
Or you could check here.
JPEG::Error
[arguments: none] If the file reference remains undefined after a call to new, the file is to be considered not parseable by this module, and one should issue some error message and go to another file. An error message explaining the reason of the failure can be retrieved with the Error method:
EDIT:
Image::TestJPG might be even better.
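Going by the documentation quoted above, usage would look roughly like this (untested sketch; it assumes new() takes a filename and that Error() is callable as a package function):

use JPEG;

# per the quoted docs: new() leaves the reference undefined for files
# it cannot parse, and Error() explains why
my $image = JPEG->new("$path/$file");
unless (defined $image) {
    warn "skipping $file: " . JPEG::Error() . "\n";
    next;    # on to the next file in your loop
}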
You're solving the wrong problem, I think.
What you should be doing is figuring out how to tell when whatever FTPd you're using is done writing the file; that way, when you come to have the same problem for (say) GIFs, DOCs or MPEGs, you don't have to fix it again.
Precisely how you do that depends rather crucially on which FTPd you're running on which OS. Some do, I believe, have hooks you can set to trigger when an upload's done.
If you can run your own FTPd, Net::FTPServer or POE::Component::Server::FTP are customizable to do the right thing.
In the absence of that:
1) try tailing the logs with a Perl script that looks for 'upload complete' messages
2) use something like lsof or fuser to check whether anything still has the file open before you try to copy it (see the sketch below).
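For option 2, a minimal sketch using fuser(1) (Linux-specific; -s is silent mode, and an exit status of 0 means some process still has the file open). The copy_file() call stands in for whatever you do next:

# true if any process still has the file open (requires fuser from psmisc)
sub still_open {
    my ($target) = @_;
    system('fuser', '-s', $target);
    return $? == 0;
}

copy_file("$path/$file") unless still_open("$path/$file");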
Again, looking at the FTP issue rather than the JPG issue:
I check the timestamp on the file to make sure it hasn't been modified in the last X (5) minutes; that way I can be reasonably sure the upload has finished.
# time in seconds that the file was last modified
my $last_modified = (stat("$path/$file"))[9];

# current time in seconds since the epoch (i.e. 1970)
my $epoch_time = time();

# ensure the file has not been modified during the last 5 minutes,
# i.e. it is no longer being uploaded
unless ( $last_modified >= ($epoch_time - 300) ) {
    # move / edit / whatever
}
I had something similar come up once; more or less what I did was to poll the file size until it stopped changing between checks (here as Perl; process_image() stands for whatever handling comes next):

my $old_size = -1;    # force at least one sleep before the first comparison
my $current_size;

# keep waiting while the size is still changing between checks
while (($current_size = -s $image_file) != $old_size) {
    $old_size = $current_size;
    sleep 10;
}
process_image($image_file);
Have the FTP process set the readonly flag, then only work with files that have the readonly flag set.