Whenever I try to upload an image larger than 125 KB, I receive an Upload HTTP Error. How can I increase this limit so I can upload hi-res images?
Thank you,
FD
This has nothing to do with Magento and everything to do with your server settings.
You will likely have to bump up post_max_size and upload_max_filesize in your php.ini.
If you're running NGINX, you may also have to increase client_max_body_size.
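For example, a minimal sketch of the relevant directives (10M is illustrative; pick a limit that suits your catalog images):
; php.ini -- post_max_size must be at least as large as upload_max_filesize
upload_max_filesize = 10M
post_max_size = 10M
# nginx.conf, in the http, server, or location block
client_max_body_size 10m;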
Please note, however, that settings and restrictions can vary greatly from one hosting environment to the next. If you're not sure how to alter the config files properly, or do not have the necessary access to do so, then you may have to contact your hosting provider and ask them to do it for you.
First of all, make sure that you have the correct permissions on your media dir, using the command line:
sudo chmod -R 775 [magento_root]/media
If that doesn't help, check your PHP config:
php -i | egrep 'upload_max_filesize|post_max_size|memory_limit'
If you see small values there, you probably need to raise those limits by editing your php.ini file. You can find this file by running the following command:
php -i | grep php.ini
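The output will look something like this (paths vary by distribution); edit the file listed as the loaded configuration:
Configuration File (php.ini) Path => /etc
Loaded Configuration File => /etc/php.ini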
Also, do not forget to restart your Apache/PHP services after making config changes. Usually you can do that by running:
sudo /etc/init.d/apache2 restart
or
sudo service apache2 restart
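If PHP runs under PHP-FPM rather than mod_php, restart that service as well; the service name varies by PHP version, and php7.0-fpm here is just an example:
sudo service php7.0-fpm restart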
Also, I have noticed that mod_security can sometimes cause this kind of issue. Check your [magento_root]/.htaccess file for the following configuration and add it if it's absent:
<IfModule mod_security.c>
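# mod_security 1.x directives; mod_security 2.x uses SecRuleEngine Off instead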
SecFilterEngine Off
SecFilterScanPOST Off
</IfModule>
And, one last thing: try to upload images from another browser/computer. Magento uses a Flash uploader for product images, and we have seen cases where the Flash player caused similar issues on some computers.
You have to change both post_max_size and upload_max_filesize in the php.ini.
And don't forget to restart your server afterwards.
I'm trying to specify a specific CA file for use with a proxy. When I use wget --ca-certificate=file.cer, it works fine. But when I try to put ca_certificate = file.cer in $HOME/.wgetrc, it doesn't work and I get the following error:
Unable to locally verify the issuer's authority.
The docs say that these should both do the same thing, so I don't know what is causing the difference.
I'm on SLES 15 SP1 and using GNU Wget 1.20.3.
According to the Wgetrc Location manual:
If the environmental variable WGETRC is set, Wget will try to load that file. Failing that, no further attempts will be made.
If WGETRC is not set, Wget will try to load $HOME/.wgetrc.
So the first thing to check is whether WGETRC is set. If it is set to something other than $HOME/.wgetrc, then wget will not load the latter.
what is causing the difference.
I am not sure what directory wget resolves file paths against, so I would try using an absolute path rather than a relative one.
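A quick sketch of both checks (the certificate path is an example):
# see whether WGETRC points wget at a different startup file
echo "$WGETRC"
# and in $HOME/.wgetrc, prefer an absolute path:
ca_certificate = /etc/ssl/certs/file.cer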
I am using the following configuration: Ubuntu 16.04, Apache 2, PHP 7.0, ownCloud 10.0.3. I think I made an error when I set up ownCloud. The data directory lives in /var/www/owncloud/data (I believe owncloud.log resides in this folder). I have deployed fail2ban, and the issue I am having is that fail2ban cannot access the data folder because I ran sudo chown -R www-data:www-data /var/www/owncloud/. The only way I can access the log file is through the ownCloud GUI (Settings > General > Log), where I can see the failed login attempts by me. I cannot seem to get fail2ban to read the ownCloud log.
I am new to Ubuntu and ownCloud; can anyone advise how to rectify this issue? ownCloud is working fine, and I am using IP addresses to restrict access to it. fail2ban was supposed to make the server secure so that I could open up ownCloud to the internet.
Regards
Steve
You should change the permissions of the log file so that it can be read by everyone but written only by the PHP process:
chmod 644 /var/www/owncloud/data/owncloud.log
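Once the log is readable, a minimal fail2ban sketch might look like this. This is an assumption-heavy example: the logpath matches the data directory from your question, and the failregex is based on the usual ownCloud "Login failed" message, so verify it against your actual log lines:
# /etc/fail2ban/filter.d/owncloud.conf
[Definition]
failregex = Login failed: '.*' \(Remote IP: '<HOST>'\)
# /etc/fail2ban/jail.local
[owncloud]
enabled  = true
port     = http,https
filter   = owncloud
logpath  = /var/www/owncloud/data/owncloud.log
maxretry = 3
Then restart fail2ban (sudo service fail2ban restart) and check the jail with sudo fail2ban-client status owncloud.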
By the way, I suggest that you migrate from ownCloud to Nextcloud. It is a full replacement, fully open source, with more features and better security. And it has fail2ban-equivalent brute-force protection already built in :-)
I am on a Windows 7 machine and I'm trying to get graphical applications from the CentOS machine displayed on my current screen. When typing xclock, gedit, etc. in the terminal, I am getting the following error:
-bash: xclock: command not found
and this is the result of # vi /etc/ssh/sshd_config:
# Example of overriding settings on a per-user basis
#Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
#GatewayPorts no
#X11Forwarding no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
Also, Xming is running on server:0.0 and I turned X11 forwarding on in PuTTY.
So what's the problem?
sudo yum install xorg-x11-apps
Should cover it!
Do you have an .Xauthority file in your home directory?
I've recently found the answer to my issue, which might be similar to yours. I've seen quite a few open questions about this topic without resolution. You may have a few more things to work through, but SELinux settings ended up being my final hurdle. This, among many other steps, is covered here: ssh X11 forwarding won't work
That aside, you may need to change the Xming settings to match the default DisplayOffset of 10 for CentOS. And after any changes to sshd_config, you'll need to restart the service via:
/etc/init.d/sshd restart
I would like to emphasize that my situation is a non-critical operation within a (hopefully!) securely-managed intranet. I would NOT suggest turning off SELinux at work, or at home if you're hoping to open ports or configure VPN for your home network. Please consider: http://securityblog.org/2006/05/21/software-not-working-disable-selinux/
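To sanity-check the whole chain, here is a quick sketch of what a working session looks like (user and hostname are placeholders; on Windows, PuTTY with X11 forwarding enabled plays the role of ssh -X):
ssh -X user@centos-host
echo $DISPLAY    # should print something like localhost:10.0, matching X11DisplayOffset 10
xclock           # should pop up on your local screen via Xming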
We are running a high-traffic, load-balanced site on CentOS. When I installed haproxy, I used:
make TARGET=linux26 USE_OPENSSL=1 ADDLIB=-lz
make PREFIX=/usr/local/haproxy install
but now I need to add zlib support.
I know that the command for a fresh install would be:
make TARGET=linux26 USE_OPENSSL=1 USE_ZLIB=1 ADDLIB=-lz
make PREFIX=/usr/local/haproxy install
But how do I recompile it into an existing haproxy install without uninstalling first? The site is too high traffic to take it down for even a minute.
I spent 30 minutes Googling for the answer and while I found something that talks about using make clean to do a recompilation, as somewhat of a Linux noob, I thought I should ask the experts how it's supposed to be done and for the exact syntax.
make clean will indeed clean your compilation directory.
My advice would be to:
clean the compilation directory, NOT the install dir
recompile with zlib support
install in another dir
move the currently installed non-zlib haproxy to another path
which translates into bash as:
make clean
make TARGET=linux26 USE_OPENSSL=1 USE_ZLIB=1 ADDLIB=-lz
make PREFIX=/usr/local/haproxy-zlibed install
mv /usr/local/haproxy /usr/local/haproxy-not-zlibed
ln -s /usr/local/haproxy-not-zlibed /usr/local/haproxy
At this point you're in the exact same situation as you were before.
then use symbolic links to switch from your current haproxy to the other:
to use the current haproxy (without zlib):
rm -fr /usr/local/haproxy
ln -s /usr/local/haproxy-not-zlibed /usr/local/haproxy
and restart haproxy your usual way
or, to use the haproxy with zlib support
rm -fr /usr/local/haproxy
ln -s /usr/local/haproxy-zlibed /usr/local/haproxy
and restart haproxy your usual way
That way you can test your new zlib-enabled haproxy and roll back if necessary.
On Linux there's no need to uninstall or even stop a service before you recompile and reinstall.
That's true because of how modern (and even not-so-modern) filesystems work: File contents are attached to inodes and inodes are attached to directory entries (having a 1:0..n relationship). Thus, you can delete the directory entry for a running program, but as long as its inode isn't deallocated (which will never happen so long as it continues running), it still has a file handle on its own executable, and can continue to work.
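You can demonstrate this with any binary (paths are illustrative):
cp /bin/sleep /tmp/mysleep
/tmp/mysleep 300 &       # run the copy in the background
rm /tmp/mysleep          # delete the directory entry; the process keeps running
ls -l /proc/$!/exe       # the link still resolves, shown as '/tmp/mysleep (deleted)'
The inode, and the file contents, are only deallocated when the process exits.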
Now, with HAProxy in particular, there's support for seamless restarts: a new process starts up, tells the old process to drop its listen sockets but keep servicing existing connections, grabs new listen sockets itself, tells the old process whether this succeeded (if it failed, the old process re-grabs its own listen sockets), and then lets the old process shut down once its connections are done. See http://www.mgoff.in/2010/04/18/haproxy-reloading-your-config-with-minimal-service-impact/ for a writeup on the process.
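In practice, that seamless restart is what HAProxy's -sf flag does. A sketch, assuming the binary installed under /usr/local/haproxy and a pid file at /var/run/haproxy.pid:
/usr/local/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg \
  -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
The new process signals the old pids listed after -sf to finish up and exit once their connections drain.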
I am facing a problem downloading some documents programmatically.
For example, this link:
https://www-950.ibm.com/events/wwe/grp/grp019.nsf/vLookupPDFs/Introduction_to_Storwize_V7000_Unified_T3/$file/Introduction_to_Storwize_V7000_Unified_T3.pdf
can be downloaded from a browser, but when I try to get it with wget it doesn't work.
I have tried
wget https://www-950.ibm.com/events/wwe/grp/grp004.nsf/vLookupPDFs/3-Mobile%20Platform%20--%20Truty%20--%20March%208%202012/\$file/3-Mobile%20Platform%20--%20Truty%20--%20March%208%202012.pdf
It gave me this output:
--2012-04-18 17:09:42--
https://www-950.ibm.com/events/wwe/grp/grp004.nsf/vLookupPDFs/3-Mobile%20Platform%20--%20Truty%20--%20March%208%202012/$file/3-Mobile%20Platform%20--%20Truty%20--%20March%208%202012.pdf
Resolving www-950.ibm.com... 216.208.176.98
Connecting to www-950.ibm.com|216.208.176.98|:443... connected.
Unable to establish SSL connection.
Can anyone help me solve this problem? Thanks in advance.
Add the --no-check-certificate flag to your original wget command.
Plus you need to ensure that you are using a proxy.
On Linux:
export http_proxy=http://myproxyserver.com:8080
On Windows:
set http_proxy=http://myproxyserver.com:8080
I also found that on Windows, because this is an https request, I had to set https_proxy as well in order to make it work. So:
set https_proxy=http://myproxyserver.com:8080
Obviously, change the proxy settings to suit your particular situation.
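Putting it all together on Linux (the proxy host and port are placeholders):
export https_proxy=http://myproxyserver.com:8080
wget --no-check-certificate "https://www-950.ibm.com/events/wwe/grp/grp004.nsf/vLookupPDFs/3-Mobile%20Platform%20--%20Truty%20--%20March%208%202012/\$file/3-Mobile%20Platform%20--%20Truty%20--%20March%208%202012.pdf"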