Symfony 3 - The CSRF token is invalid. Please try to resubmit the form - csrf

So first of all, I already tried to find another topic related to my issue, as plenty of them already exist :p.
But I can't figure out why Symfony keeps telling me this:
The CSRF token is invalid. Please try to resubmit the form
I'm using a regular form, and the weird part is that it works on another computer but not on this one (the files are the same except the cache/session/log files).
I use both the {{ form_rest(form) }} and {{ form_end(form) }} Twig functions. I increased both the max_input_vars and upload_max_filesize settings in my php.ini.
My database is up to date.
The hidden _token input is displayed in the HTML.
I cleared my app and browser caches.
Since this is a local website, I just gave all my folders/files 777 permissions.
EDIT: If I replace this (in app/config/config.yml):
save_path: "%kernel.root_dir%/../var/sessions/%kernel.environment%"
with
save_path: ~
It works... But I have no idea what the consequences of this are. Sessions are then stored based on the php.ini settings, right? So can I use this setting instead of the previous one without any problems?
Otherwise, I have absolutely no idea of what could cause the error.
Thank you for your help! :)

save_path: "%kernel.root_dir%/../var/sessions/%kernel.environment%"
should be fine. Just make sure that your var/sessions folder exists and is writable:
mkdir -p var/sessions
chmod 755 var/sessions
The same must be true for var/cache and var/logs.
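For completeness, the equivalent commands for those two directories would be:
mkdir -p var/cache var/logs
chmod 755 var/cache var/logs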

I had the same problem.
First I worked with XAMPP, which uses Apache (works fine, no changes required).
Then I moved to Ubuntu with nginx (here was the problem).
I deleted the sessions/cache folders: nothing.
Instead I changed it to save_path: ~ and it works.
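For reference (this is the standard Symfony 3 config layout, not part of the answer above): a null save_path makes Symfony fall back to PHP's own session.save_path from php.ini, so the relevant block in app/config/config.yml ends up looking like:
framework:
    session:
        # ~ (null) means: use session.save_path from php.ini
        save_path: ~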

Related

File not found error when using cyberduck CLI for OneDrive

I want to upload encrypted backups to OneDrive using Cyberduck to avoid local copies. Given a local file called file.txt that I want to upload into the folder Backups in the OneDrive root, I used this command:
duck --username <myUser> --password <myPassword> --upload onedrive://Backups .\file.txt
Transfer incomplete…
File not found. /. Please contact your web hosting service provider for assistance.
It's not even possible to get the directory content using the duck --username <myUser> --password <myPassword> --list onedrive://Backups command. This also causes a File not found error.
What am I doing wrong?
I followed the documentation exactly and have no clue why this is not working. Cyberduck was installed using Chocolatey; the current version is Cyberduck 6.6.2 (28219).
Just testing this out, it looks like OneDrive sets a unique identifier for the root folder. You can find it either by inspecting the value of the cid parameter in the URL of your OneDrive site, or by using the following command:
duck --list OneDrive:///
Note that having three slashes is important. It would appear the first two are part of the protocol prefix and the third specifies that you want the root. The result should look like a unique ID of some sort, e.g. 36d25d24238f8242, which you can then use to upload your files like:
duck --upload onedrive://36d25d24238f8242/Backups .\file.txt
Didn't see any of that in the docs... just tinkering with it. So I might recommend opening a bug with duck to update their docs if this works for you.
What happens if you use the full path to the file? It looks like it is just complaining about not finding the file to upload, so you may be running from a different directory and it needs the full path to the source file.
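For example (the local path here is purely hypothetical; point it at wherever file.txt actually lives):
duck --username <myUser> --password <myPassword> --upload onedrive://Backups "C:\Users\me\Documents\file.txt"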

Unable to reset PSQL password or edit pg_hba.conf file on mac

My problem was originally that I am unable to use PostgreSQL because I do not know the password - nor have I ever made one. I was trying to reset or recover the password and followed various advice in trying to do this.
At first I tried to edit the pg_hba.conf file, which I located by using the following command:
sudo vim /etc/postgresql/9.3/main/pg_hba.conf
But this just took me to a blank screen that I could do nothing with except close the window.
I was told to try:
sudo nano /etc/postgresql/9.3/main/pg_hba.conf
...which was better because this included key commands at the bottom of the page, but the file was blank, and so couldn't be edited.
After going back into this, and I suppose causing some error, I now get this whenever I open it:
E325: ATTENTION
Found a swap file by the name "/var/tmp/pg_hba.conf.swp"
owned by: root dated: Tue Oct 17 15:57:30 2017
file name: /etc/postgresql/9.3/main/pg_hba.conf
modified: YES
user name: root host name: Roberts-MacBook-Pro.local
process ID: 2668
While opening file "/etc/postgresql/9.3/main/pg_hba.conf"
(1) Another program may be editing the same file. If this is the case,
be careful not to end up with two different instances of the same
file when making changes. Quit, or continue with caution.
(2) An edit session for this file crashed.
If this is the case, use ":recover" or "vim -r
/etc/postgresql/9.3/main/pg_hba.conf"
to recover the changes (see ":help recovery").
If you did this already, delete the swap file
"/var/tmp/pg_hba.conf.swp"
to avoid this message.
Swap file "/var/tmp/pg_hba.conf.swp" already exists!
[O]pen Read-Only, (E)dit anyway, (R)ecover, (D)elete it, (Q)uit,
(A)bort:
I tried deleting the .swp file by typing D, but this didn't seem to do anything.
I'm really confused about all of this and I don't really know how I can learn more to understand what I'm doing. When I go to the PostgreSQL website I can read what the pg_hba.conf file should look like, but every time I have managed to open this file, it has been completely empty.
I don't know where to go from here so I would really appreciate advice from anyone who can point me in the right direction, thanks.
As long as PostgreSQL has been started, you can find out which pg_hba.conf you should be editing by running:
ps -ef | grep 'postgres -D'
In my bizarre setup, this returns:
/opt/boxen/homebrew/opt/postgresql/bin/postgres -D /opt/boxen/homebrew/var/postgres
so I know to edit the file:
/opt/boxen/homebrew/var/postgres/pg_hba.conf
to change which connections are allowed to which databases, etc. See also the pg_hba.conf docs for more info.
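For illustration only (these entries are not from the answer above; adjust the auth method to your own security needs), entries in that file use the standard pg_hba.conf format, e.g. to trust local connections without a password while you sort things out:
# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       all                 trust
host    all       all   127.0.0.1/32  trust
Remember to restart or reload PostgreSQL after editing the file.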

wget appends query string to resulting file

I'm trying to retrieve working webpages with wget and this goes well for most sites with the following command:
wget -p -k http://www.example.com
In these cases I will end up with index.html and the needed CSS/JS etc.
HOWEVER, in certain situations the URL will have a query string, and in those cases I get an index.html with the query string appended.
Example
www.onlinetechvision.com/?p=566
Combined with the above wget command will result in:
index.html?page=566
I have tried using the --restrict-file-names=windows option, but that only gets me to
index.html#page=566
Can anyone explain why this is needed and how I can end up with a regular index.html file?
UPDATE: I'm sort of on the fence about taking a different approach. I found out I can take the first filename that wget saves by parsing the output, so the name that appears after Saving to: is the one I need.
However, this name is wrapped in a strange â character. Rather than just removing that hardcoded character, where does it come from?
If you try the --adjust-extension parameter:
wget -p -k --adjust-extension www.onlinetechvision.com/?p=566
you come closer. In the www.onlinetechvision.com folder there will be a file with a corrected extension: index.html#p=566.html or index.html?p=566.html on *NiX systems. It is now simple to change that file to index.html, even with a script.
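As a sketch, that rename can be a one-liner on *NiX (the exact saved name depends on what wget actually produced on your system):
mv "index.html?p=566.html" index.html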
If you are on a Microsoft OS, make sure you have the latest version of wget - it is also available here: https://eternallybored.org/misc/wget/
To answer your question about why this is needed, remember that the web server is likely to return different results based on the parameters in the query string. If a query for index.html?page=52 returns different results from index.html?page=53, you probably wouldn't want both pages to be saved in the same file.
Each HTTP request that uses a different set of query parameters is quite literally a request for a distinct resource. wget can't predict which of these changes is and isn't going to be significant, so it's doing the conservative thing and preserving the query parameter URLs in the filename of the local document.
My solution is to do the recursive crawling outside wget:
get the directory structure with wget (no files)
loop over each directory to get its main entry file (index.html)
This works well with WordPress sites. It could miss some pages, though.
#!/bin/bash
#
# get directory structure (spider mode: directories are created, but no files)
#
wget --spider -r --no-parent http://<site>/
#
# loop through each dir and fetch its index page plus page requisites
#
find . -mindepth 1 -maxdepth 10 -type d | cut -c 3- > ./dir_list.txt
while read line; do
    wget --wait=5 --tries=20 --page-requisites --html-extension --convert-links --execute=robots=off --domains=<domain> --strict-comments "http://${line}/"
done < ./dir_list.txt
The query string is required because of the website design: the site is using the same standard index.html for all content and then using the query string to pull in the content from another page, e.g. with a script on the server side (it may be client side if you look in the JavaScript).
Have you tried using --no-cookies? It could be storing this information via a cookie and pulling it in when you hit the page. This could also be caused by URL rewrite logic, which you will have little control over from the client side.
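Applied to the command from the question, that suggestion would look something like:
wget -p -k --no-cookies "http://www.onlinetechvision.com/?p=566"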
Use the -O or --output-document option. See http://www.electrictoolbox.com/wget-save-different-filename/
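For the URL from the question, that would be something like the following (note that -O only names the main document; combined with -p, wget writes everything it downloads into that single file, so drop the other flags when forcing the name):
wget -O index.html "http://www.onlinetechvision.com/?p=566"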

parallels plesk file permission

I'm trying to install a Joomla site in Parallels Plesk Panel via Akeeba Backup, and I'm facing a file permission issue.
An error occured
Could not open /var/www/vhosts/xyz.com/httpdocs/pearl_new/jquery.min.js for writing.
I searched all over, including the Plesk forum, and found that this is a very common problem. Some suggested installing mod_suphp to solve it. I tried, but I don't know whether it was installed successfully or not.
Then I created a new service plan where, in the hosting parameters, I selected Run PHP as FastCGI.
After that I moved my domain to that service plan. I thought it would solve the problem, but I'm still getting the same error. Can anyone help, please?
On the ssh command line try:
find /var/www/vhosts/xyz.com/httpdocs/ -type f -exec chmod 664 {} \;
find /var/www/vhosts/xyz.com/httpdocs/ -type d -exec chmod 775 {} \;
These will set the permissions correctly for writing by user and group, for files (f) and directories (d). You also need to make sure that apache is in the psacln and psaserv groups in the /etc/group file; the lines should look like this:
psaserv:x:504:apache,psaftp,psaadm
psacln:x:505:apache
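If apache is missing from either group, one way to add it (a sketch, using the group names shown above) is:
usermod -a -G psaserv,psacln apache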
Then you can run the command:
chown -R siteusername.psacln /var/www/vhosts/xyz.com/httpdocs/*
where "siteusername" is the username of the site's files.
Hope this helps.
This is a common issue on Linux for users with shared hosting.
It's simple.
If you have already selected the PHP module with FastCGI, follow these steps:
Open file manager
Make new folder "ABC"
Click "ALL" on right side to view all files on the tree.
Select all files and folders except "plesk-stats"
Select Copy/move button
In the path field, type /httpdocs/abc/
Click Move.
Once all files have moved, open the "abc" folder
Select all files and folders.
Select Copy/move button
In the path field, type /httpdocs/
That's it; the issue is sorted out.
I tried these steps for many clients.
I hope this helps someone.

How to write php so that the correct file can be downloaded over linux command line?

So basically I have a problem where the user will send a request to test.php?getnextfile=1, and I need to process the request, figure out what the next file in line to be downloaded is, and deliver it to the user. The part I'm stuck on is how to get the correct filename to the user (the server knows the correct file name, the user doesn't).
Currently I've tried to use wget on test.php?getnextfile=1, and it doesn't actually save with the correct filename. I also tried a header redirect to the correct file, and that doesn't work either.
Any ideas?
Thanks a lot!
Jason
Since July 2010, this has been impossible in the default wget configuration. In the process of fixing a security bug, they switched the "trust-server-names" option off by default. For more information, see this answer:
https://serverfault.com/questions/183959/how-to-tell-wget-to-use-the-name-of-the-target-file-behind-the-http-redirect
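If you control the wget invocation, you can turn that option back on explicitly; this only helps if test.php redirects to the real file, as the question mentions trying (the URL shown is hypothetical):
wget --trust-server-names "http://yourserver/test.php?getnextfile=1"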
In your PHP script you need to set the Content-Disposition header:
<?php
// Tell the client it is receiving a zip file...
header('Content-type: application/zip');
// ...and which filename to save it under
header('Content-Disposition: attachment; filename="myfile.zip"');
// Stream the file contents to the client
readfile('myfile.zip');
?>
Use curl instead of wget to test it.
curl --remote-name --remote-header-name http://127.0.0.1:8080/download