"The CSRF token is invalid. Please try to resubmit the form." - Symfony 4 with php-fpm 7.4 and memcached

When I submit a form I get the following error:
"The CSRF token is invalid. Please try to resubmit the form."
It does not always happen; sometimes the form is submitted correctly, and nothing is written to the error logs.
I have adjusted my php.ini, increasing memory and input limits, without result.
php.ini:
upload_max_filesize = 48M
post_max_size = 48M
memory_limit = -1
max_execution_time = -1
max_input_vars = 4000
max_input_time = -1
UPDATE:
I've modified my config file; it seems to work that way, but I'm still testing. I would still like to understand why saving sessions in memcached does not work.
config/packages/framework.yaml
framework:
    session:
        # handler_id: session.handler.memcached   # old
        handler_id: session.handler.native_file
        save_path: "%kernel.root_dir%/../var/cache/sessions"
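For reference, a minimal sketch of how memcached-backed sessions are usually wired in Symfony 4 (the memcached address 127.0.0.1:11211 and the service definitions below are assumptions for illustration, not my actual setup):

config/packages/framework.yaml
framework:
    session:
        # point the session system at the handler service defined in services.yaml
        handler_id: Symfony\Component\HttpFoundation\Session\Storage\Handler\MemcachedSessionHandler

config/services.yaml
services:
    # assumed local memcached instance; adjust host/port to your environment
    Memcached:
        class: Memcached
        calls:
            - [addServer, ['127.0.0.1', 11211]]

    Symfony\Component\HttpFoundation\Session\Storage\Handler\MemcachedSessionHandler:
        arguments: ['@Memcached']

If the CSRF error only appears with the memcached handler, the usual suspects are the php-fpm workers not all reaching the same memcached instance, or the session item being evicted, since a lost session also loses the stored CSRF token.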

Mediawiki 1.37 - VisualEditor - "Error contacting the Parsoid/RESTBase server: (curl error: 28) Timeout was reached" (sometimes not always)

Please help, I am stuck. :)
I've searched related threads, but they could not help me.
My version of MediaWiki is 1.37.2.
While editing a page with VisualEditor, I sometimes get the following error (sometimes it works, sometimes I get the error; it can work 10 times in a row and then fail):
"Error contacting the Parsoid/RESTBase server: (curl error: 28) Timeout was reached"
The error seems to occur regardless of the page size, and it happens on any page.
Note: I do not have this error on another test server with the same configuration.
In the log file I get:
[http] HTTP start: GET https://example.com/wiki/rest.php/example.com/v3/page/html/Language%2FMultiple-languages/129917?redirect=false&stash=true
[http] Error fetching URL "https://example.com/wiki/rest.php/example.com/v3/page/html/Language%2FMultiple-languages/129917?redirect=false&stash=true":
(curl error: 28) Timeout was reached
I also sometimes get a timeout error when using this URL directly in a browser:
https://example.com/wiki/api.php?action=visualeditor&paction=parse&page=Language/Multiple-languages
I never get an error if I run (over SSH):
curl 'https://example.com/wiki/rest.php/example.com/v3/page/html/Language%2FMultiple-languages/129917?redirect=false&stash=true'
or
curl 'https://example.com/wiki/api.php?action=visualeditor&paction=parse&page=Language/Multiple-languages'
My config in LocalSettings.php
wfLoadExtension( 'VisualEditor' );
$wgDefaultUserOptions['visualeditor-enable'] = 1;
$wgDefaultUserOptions['visualeditor-editor'] = "visualeditor";
$wgGroupPermissions['*']['read'] = true;
$wgGroupPermissions['*']['edit'] = true;
$wgGroupPermissions['*']['writeapi'] = true;
Add to hosts file: 127.0.0.1 example.com
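Since the timeout is intermittent, one way to narrow it down (a diagnostic sketch, not a fix) is to time the same rest.php request from the server itself with curl's timing variables, using the URL copied from the log above:

curl -s -o /dev/null \
     -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
     'https://example.com/wiki/rest.php/example.com/v3/page/html/Language%2FMultiple-languages/129917?redirect=false&stash=true'

Note the quotes around the URL: without them the shell treats the & characters as command separators, so an unquoted test can appear to succeed while silently dropping the stash=true parameter.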

Why can `wget` not follow the redirection for a certain website?

wget hangs while accessing the following website, but when I use a browser it is redirected to https://nyulangone.org. Does anybody know why wget does not get redirected in this case? Thanks.
$ wget http://nyumc.org
--2018-02-20 20:27:05-- http://nyumc.org/
Resolving nyumc.org (nyumc.org)... 216.165.125.106
Connecting to nyumc.org (nyumc.org)|216.165.125.106|:80...
When I used wget on the site you mentioned, this is what I got:
--2018-02-21 21:16:38-- http://www.nyumc.org/
Resolving www.nyumc.org (www.nyumc.org)... 216.165.125.112
Connecting to www.nyumc.org (www.nyumc.org)|216.165.125.112|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 179 [text/html]
Saving to: ‘index.html’
index.html 100%[==================================>] 179 --.-KB/s in 0s
2018-02-21 21:16:38 (8.16 MB/s) - ‘index.html’ saved [179/179]
In the index.html file, which bears the logo of NYU Langone Medical Center, it says: "The following URL has been rejected for security concerns. If you believe you have received this message in error, please summit an incident with our helpdesk at 212-263-6868..."
So it may not redirect because the website can detect that you are a bot and not a browser. You could attempt to change the user-agent string and other HTTP headers to avoid detection, but I'm not sure why you wouldn't just point wget at https://nyulangone.org directly. Judging from information on archive.org, nyumc.org has been redirecting to other sites for at least the last 5 years: it was redirecting to http://www.med.nyu.edu until 2016, at which point it started redirecting to https://www.nyulangone.org.
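If you do want to try it, a minimal sketch of the header-spoofing approach (the exact user-agent string here is only an example, and the site may key on other signals as well):

wget --user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0" \
     --header="Accept: text/html" \
     http://nyumc.org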
I hope that helps.

haproxy shows in log file (-1) as the status code

I have a strange status code in my haproxy log file (note that it is not a customized log format; it is the default HTTP log format):
43.56.77.23:55309 [27/Oct/2015:20:14:34.749] front-http mybackend/app 349/0/-1/-1/359 **-1** 0 - - CC-- 1658/1658/21/21/0 0/0 "GET /img/button_bkg.png HTTP/1.1"
What does the -1 status code mean? I tried to find the solution online, but unfortunately I could not find anything that resembles my problem.
Does anyone know what this status code means?
-1 indicates that the status code is not available. The reason is in the termination flags field.
See section 8.5 in the docs.
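Reading your log line against that section (assuming the default HTTP log format): the five timers 349/0/-1/-1/359 are Tq/Tw/Tc/Tr/Tt, so -1 for Tc and Tr means the connection to the backend server was never established and no response was received, and the termination state CC-- says the client aborted (first C) while haproxy was still waiting for the connection to the server (second C). With no response there is no real status code to log, so haproxy writes -1.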

Booted Off Local Server - 302 error

I'll start with the log that I am receiving below:
Dec.15.11.56-Rf: Incoming Request URL: /
Dec.15.11.56-Rf: SECURE GET Path: / From: mlocal.cldeals.com Rewritten: www.cldeals.com
Dec.15.11.56-Rf: Received 302 Found [text/html; charset=UTF-8] response for /
Dec.15.11.56-Rf: Sending 302 text/html; charset=UTF-8 response for /
Dec.15.11.56-Rf: Stats. Total: 0.52088702, Upstream: 0.48212701, Processing: 0.00105600, ProcessingOther: 0.04037500
Basically, when I go to mlocal.cldeals.com, it loads fine. If I click on another page, say mlocal.cldeals.com/products, that loads fine as well. The issue seems to be when I go to the account page and then try to switch back to the homepage; maybe some type of security issue? When I try to switch back to mlocal.cldeals.com, the home page, it boots me off and sends me to www.cldeals.com. Is there something I can add to prevent this from happening? Additionally, is this just a local server issue that would go away when I launch it on Moovweb's server? Any help is greatly appreciated.
Thank you.
It looks like the backend response to https://www.cldeals.com is a 302 to http://www.cldeals.com:80/. Not sure why that is the case (see note below *)
curl -v -o /dev/null https://www.cldeals.com
This response contains a hardcoded Location header and your project is passing along the response as is, which is why you are being booted off your local server.
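A quick sketch of how to see just that header (the same request as the curl command above, only filtered):

curl -s -o /dev/null -D - https://www.cldeals.com | grep -i '^location'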
Because the Location header value has a port specified, you'll need to modify your config.json to include this line in the mapping:
{
  "host_map": [
    "$.cldeals.com => www.cldeals.com",
    "$.cldeals.com => www.cldeals.com:80"
  ]
}
This way, the SDK knows to rewrite that specific host:port value... (By default all HTTP requests go through port 80, so that information isn't really necessary)
*This might be a bug in the backend implementation, because once you log in, you should stay in HTTPS mode until you log out. (I can see some pages with personal information being transmitted over plain HTTP.)

SOAP "error fetching http headers": how do I do suspected solution of disabling keep-alive?

I'm troubleshooting an existing web service. It previously worked just fine, but now SOAP-based requests to the PostgreSQL database result in an "unknown error: Error Fetching http headers" error.
While looking up this problem, I came across the following tip:
When you get errors like "Fatal error: Uncaught SoapFault exception:
[HTTP] Error Fetching http headers" after a few (time-intensive)
SOAP calls, check your webserver config.
Sometimes the webserver's "KeepAlive" setting tends to result in this
error. For SOAP environments I recommend disabling KeepAlive.
Hint: the trick may be to create a dedicated vhost for your
SOAP gateways and disable KeepAlive just for that vhost, because for
normal webpages KeepAlive is a nice speed boost.
I haven't been able to figure out exactly how to disable KeepAlive or where this parameter would be set. I've tried grep -i "keepalive" /usr/share/tomcat5/conf/*, with no result.
Perhaps due to the variability of server environments this is a question for my sysadmin, but I do have root privileges.
Thanks for your help, stack!
In your Tomcat server.xml file, set the maxKeepAliveRequests attribute to 1 on your HTTP connectors to effectively disable keep-alive.
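For example, a minimal sketch of what that looks like in conf/server.xml (the port and other attributes here are placeholders; keep whatever your existing Connector already defines and just add maxKeepAliveRequests):

<!-- allow only one request per connection, i.e. no keep-alive -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxKeepAliveRequests="1" />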
For more information:
http://tomcat.apache.org/tomcat-5.5-doc/config/http.html#Standard_Implementation