CentOS 6 - Plesk 11 - MaxRequestLen - cannot modify mod_fcgid

I have just installed Plesk 11 on CentOS 6 and I cannot upload files larger than 17-18 MB:
[warn] [client ] mod_fcgid: HTTP request length 16777368 (so far)
exceeds MaxRequestLen (16777216)
I set my php.ini like this:
post_max_size = 150M
file_uploads = On
upload_max_filesize = 128M
memory_limit = 256M
I tried to modify /etc/httpd/conf.d/fcgid.conf
by adding
FcgidMaxRequestLen 30000000
and restarted Apache; I still got the same error.
Here's what I tried:
I found that FcgidMaxRequestLen was also set in this file:
/usr/local/psa/admin/conf/templates/default/domain/domainVirtualHost.php
I changed it, saved, and restarted Apache... same error.
I tried setting FcgidMaxRequestLen to different sizes (1 GB, 20 MB, etc.); still the same error.
I tried to change httpd.conf and add this:
<IfModule mod_fcgid.c>
MaxRequestLen 20000000
</IfModule>
I am restarting Apache after each and every change. There must be some config file that I am missing, but I don't know where.

OK, here is the solution:
http://stuffthatspins.com/2013/01/22/exceeds-maxrequestlen-16777216-plesk-mod_fcgid-unable-to-upload-large-files/
To reconfigure the domain on CentOS:
/usr/local/psa/admin/sbin/httpdmng --reconfigure-domain yourdomain.com
/sbin/service httpd restart
/sbin/service psa restart
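For reference, here is the whole sequence as one sketch, assuming the Plesk 11 template path from above and that the directive appears literally in that file (the 128 MB value is just an example):

# raise the limit in the Plesk template, regenerate the per-domain
# Apache config, and restart Apache (the CentOS 6 service name is httpd)
sed -i 's/FcgidMaxRequestLen [0-9]*/FcgidMaxRequestLen 134217728/' \
    /usr/local/psa/admin/conf/templates/default/domain/domainVirtualHost.php
/usr/local/psa/admin/sbin/httpdmng --reconfigure-domain yourdomain.com
/sbin/service httpd restart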

PostgreSQL 11 Shared Memory Error: could not open shared memory segment "/PostgreSQL.XXXXXXXX": No such file or directory

Shared memory files are getting deleted after some time (~15 hours) in Postgres 11.
2019-07-09 08:46:41 CDT [] [6723]: [1-1] user=,db=,e=58P01 ERROR: could not open shared memory segment "/PostgreSQL.291691635": No such file or directory
2019-07-09 08:46:41 CDT [] [6722]: [1-1] user=,db=,e=58P01 ERROR: could not open shared memory segment "/PostgreSQL.291691635": No such file or directory
2019-07-09 08:46:41 CDT [10.40.0.204(60550)] [13880]: [1-1] user=user_name,db=db_name,e=58P01 ERROR: could not open shared memory segment "/PostgreSQL.291691635": No such file or directory
2019-07-09 08:46:41 CDT [10.40.0.204(60550)] [13880]: [2-1] user=user_name,db=db_name,e=58P01 CONTEXT: parallel worker
2019-07-09 08:46:41 CDT [10.40.0.204(60550)] [13880]: [3-1] user=user_name,db=db_name,e=58P01 STATEMENT: WITH overall_reviewed AS (SQL Query)
GCP VM Config
CPU: 4
RAM: 16 GB
OS: Ubuntu 18.04.1 LTS
Kernel shared memory settings:
kernel.shmmax=8589934592
kernel.shmall=2097152
postgresql.conf:
max_connections = 500
shared_buffers = 4GB
effective_cache_size = 12GB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 4194kB
min_wal_size = 1GB
max_wal_size = 2GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
During startup: no errors/warnings.
After ~15 hours, some of the shared memory files get deleted. I suspect some other process is deleting files in "/dev/shm", but I'm not sure what the root cause is.
Setting dynamic_shared_memory_type = none in postgresql.conf did solve the issue.
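For the record, a minimal sketch of that workaround on the Ubuntu 18.04 setup described above (the config path is the distro default; note that disabling dynamic shared memory in PostgreSQL 11 also disables parallel query):

# in /etc/postgresql/11/main/postgresql.conf
dynamic_shared_memory_type = none
# then restart the server
sudo systemctl restart postgresql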
I got the same problem on Ubuntu 18.04 and PostgreSQL 11, and after some more research I found a solution for us. The error occurred when the backup user, which is the same user as the PG service user, logged into the system. The links below describe how the storage under /dev/shm gets deleted when that same user logs in to the system. So our solution was to change the following:
In /etc/systemd/logind.conf we added the line
RemoveIPC=no
and restarted the service:
systemctl restart systemd-logind.service
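A minimal sketch of the whole change, assuming the stock logind.conf where the option is present but commented out:

sudo sed -i 's/^#\?RemoveIPC=.*/RemoveIPC=no/' /etc/systemd/logind.conf
sudo systemctl restart systemd-logind.service
grep RemoveIPC /etc/systemd/logind.conf    # should now print RemoveIPC=no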
Sources:
https://www.postgresql-archive.org/systemd-deletes-shared-memory-segment-in-dev-shm-Postgresql-NNNNNN-td5883507.html
https://superuser.com/questions/1117764/why-are-the-contents-of-dev-shm-is-being-removed-automatically
We had the same issue, and it turned out that someone had set the postgres user's UID above 1000 (which means the postgres user was no longer a system account). And, as said here:
After hours of searching and reading, I found the culprit.
It's a setting for systemd. The /etc/systemd/logind.conf contains default configuration options, with each of them commented out.
The RemoveIPC option is set to yes by default. That option tells systemd to clean up interprocess communication (IPC) for "user accounts" who aren't logged in.
This does not affect "system accounts".
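A quick, hedged way to check whether you are in this situation (1000 is the usual boundary; the actual ranges live in /etc/login.defs):

id -u postgres                                     # UID of the postgres user
grep -E '^(SYS_)?UID_(MIN|MAX)' /etc/login.defs    # system vs. regular UID ranges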
Met the same issue... when I had two SQL Developer instances open with the same user account; one of them was a remote session that I had completely forgotten to close. I was running aggregation operators like count(*) or max(...), and got the error in both. The error is similar:
ERROR: could not open shared memory segment "/PostgreSQL.798235387":
No such file or directory Where: parallel worker
Solution? I killed the remote session.... XD
And life is peaceful and happy again :D
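If you can't locate the stale client, here is a hedged sketch for spotting and terminating it from SQL (pg_stat_activity and pg_terminate_backend are standard PostgreSQL; the pid below is a placeholder):

SELECT pid, state, client_addr, query_start
FROM pg_stat_activity
WHERE usename = current_user;
-- then, for the forgotten session's pid:
SELECT pg_terminate_backend(12345);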

haproxy ulimit-n computation

I have a vanilla HAProxy 1.8 Alpine Docker image running with maxconn = 2000:
curl -s http://host:port/stats| grep maxsock
maxsock = 4017; maxconn = 2000; maxpipes = 0
Sometimes I get the following Warning in my logs:
[WARNING] 0/0 (0) : [/usr/local/sbin/haproxy.main()] FD limit (4015) too low for maxconn=2000/maxsock=4016. Please raise 'ulimit-n' to 4016 or more to avoid any trouble.
I find it very odd, since I read this in the haproxy doc:
ulimit-n
Sets the maximum number of per-process file-descriptors to <number>. By default, it is automatically computed, so it is recommended not to use this option.
Not sure if it's a bug on haproxy or something I am doing wrong.
What do you think of that?
edit: haproxy is running as root
It depends on the open file descriptor limits (hard and soft); you can check them with ulimit -Hn and ulimit -Sn.
The value is automatically computed, but it depends on the user you run haproxy as: if you run haproxy as root, then even if the computed value is greater than the hard limit, it can be set without a warning.
But if you run as a normal user, the maximum is the hard limit; if the computed value is greater than that, you get the warning.
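A minimal sketch of both checks, assuming the stock haproxy:1.8-alpine image (tag and values are examples, and the container still needs your haproxy.cfg mounted as usual):

docker run --rm haproxy:1.8-alpine sh -c 'ulimit -Hn; ulimit -Sn'   # limits inside the container
docker run -d --ulimit nofile=8192:8192 haproxy:1.8-alpine          # raise the FD limit at run time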

How to configure an Openfire server with HttpUploadComponent for offline file transfer?

I use Openfire with Conversations and would like to implement offline file transfer with HttpUploadComponent. I copied the httpupload folder inside the Openfire folder, as in the screenshot below:
Then I made the configuration below in Openfire:
I also installed Python and configured the config.yml file in the httpupload folder like below:
component_jid: upload.192.168.105.164
component_secret: 1234
component_port: 5275
storage_path : ./var/lib/httpupload/
max_file_size: 20971520 #20MiB
http_address: 0.0.0.0 #use 0.0.0.0 if you don't want to use a proxy
http_port: 8080
get_url : http://192.168.105.164:8080/
put_url : http://192.168.105.164:8080/
expire_interval: 82800 #time in secs between expiry runs (82800 secs = 23 hours). set to '0' to disable
expire_maxage: 2592000 #files older than this (in secs) get deleted by expiry runs (2592000 = 30 days)
user_quota_hard: 104857600 #100MiB. set to '0' to disable rejection on uploads over hard quota
user_quota_soft: 78643200 #75MiB. set to '0' to disable deletion of old uploads over soft quota an expiry runs
allow_web_clients: true #answer OPTIONS requests to allow web clients to upload files
I ran the HttpUpload server as well.
After starting the Python server, go to Openfire > Server Settings > External Components > View the external components; the first line shows whether the session has been created:
After all of this, when I try to send a file from the Android client, it fails and gives me this error:
Where is my problem? Thanks.
In the attached error screenshot, the last word is 403, which indicates that the problem is related to authorization on the HttpUploadComponent end.
I checked the code of this component: on line 83 of https://github.com/siacs/HttpUploadComponent/blob/master/httpupload/server.py it picks the variable "storage_path" from the configuration to decide the directory in which to place the file.
As mentioned in your question, you have set storage_path : ./var/lib/httpupload/
But you are on a Windows machine, and this path is invalid there.
Try giving a valid Windows OS path.
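For example (the directory is an assumption, create it first; forward slashes are fine for Python on Windows):

storage_path : C:/httpupload/storage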

What is wrong with my TOR?

I'm the operator of the XMPP server on darkness.su. The server runs on CentOS 6.
I installed Tor and configured it to provide hidden service access to the server. It was working fine at first, but ever since an update a few months ago it has been giving me these errors:
May 25 14:19:37.060 [warn] Permissions on directory /var/lib/tor/hidden_service are too permissive.
May 25 14:19:37.060 [warn] Failed to parse/validate config: Failed to configure rendezvous options. See logs for details.
May 25 14:19:37.060 [err] Reading config failed--see warnings above.
I tried to check the logs, but I can't find them, and setting one doesn't seem to work. I've tried removing Tor and wiping all its folders, then reinstalling it. Same thing.
I'm installing through yum from the Tor Project's repository.
With chmod 700 on the hidden service directory (owned by the tor user):
Jul 24 21:39:05.573 [warn] Directory /var/lib/tor/hidden_service/ cannot be read: Permission denied
Jul 24 21:39:05.573 [warn] Failed to parse/validate config: Failed to configure rendezvous options. See logs for details.
Jul 24 21:39:05.573 [err] Reading config failed--see warnings above
After changing directory owner to root:
Jul 24 22:11:36.236 [warn] /var/lib/tor/hidden_service/ is not owned by this user (_tor, 496) but by root (0). Perhaps you are running Tor as the wrong user?
Jul 24 22:11:36.236 [warn] Failed to parse/validate config: Failed to configure rendezvous options. See logs for details.
Jul 24 22:11:36.236 [err] Reading config failed--see warnings above.
Permissions on directory /var/lib/tor/hidden_service are too permissive.
This means that too many users have access to this directory. Try changing that:
chmod 700 /var/lib/tor/hidden_service
I assume here that the user running TOR is also the owner of the directory.
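A minimal sketch, using the _tor account that appears in your log (the account name varies by distro: _tor, toranon, debian-tor):

chown -R _tor:_tor /var/lib/tor/hidden_service
chmod 700 /var/lib/tor/hidden_service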
Your initial problem with the permission issues (I had these after cloning a virtual HDD in VirtualBox) was caused by broken SELinux labels. On CentOS this is fixed with:
restorecon -r -v /var/lib/tor
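You can verify the labels before and after with:

ls -dZ /var/lib/tor /var/lib/tor/hidden_service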
It is all about file and directory permissions. I wrote this in my Dockerfile:
FROM osminogin/tor-simple:0.4.6.7
ARG source=.
USER tor
COPY $source/torrc /etc/tor/torrc
# create the hidden service dir with owner-only access
RUN mkdir /var/lib/tor/sc && chmod 700 /var/lib/tor/sc
# copy the persisted hostname and key pair, owned by the tor user
COPY --chown=tor:nogroup $source/private/* /var/lib/tor/sc
RUN chmod -R 400 /var/lib/tor/sc/*
In my sc directory I have the hostname and key pair. After restarting the container, the Tor domain name persists.
sudo chown _tor:_tor /var/lib/tor/site/
fixed it for me.

PrestaShop template, local installation on Windows

I installed PrestaShop locally with EasyPHP.
I installed release prestashop_1.6.1.0 and everything works with the default theme "default-bootstrap".
I bought a new theme on Addons, MyTheme.zip, and I want to install it locally from the back office, but I have a problem. When I uploaded MyTheme.zip I got an error that appeared for a few seconds (page on a white background):
Warning: POST Content-Length of 8547237 bytes exceeds the limit of 8388608 bytes in Unknown on line 0
... Then nothing happens. I return to the back office and the theme has not been added.
I don't understand. Do you have any idea?
Thank you!
8388608 bytes is 8M, the default limit in PHP.
You have to update post_max_size in your php.ini to a larger value.
post_max_size sets the maximum size of POST data allowed. This setting also affects file uploads; to upload large files, this value must be larger than upload_max_filesize.
http://php.net/manual/en/ini.core.php
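For example, in the php.ini that EasyPHP uses (values are examples; restart Apache afterwards):

post_max_size = 16M          ; must be larger than upload_max_filesize
upload_max_filesize = 12M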