I upgraded Magento 2, and when I try to access the website I get the error below...
This site can’t be reached. unexpectedly closed the connection. Try Checking the connection, Checking the proxy and the firewall, ERR_CONNECTION_CLOSED
I have applied all the recommended Magento permissions and changed the ownership of the folders and files.
This is the error I found in the Apache error log:
[Thu Mar 31 19:26:40.740419 2022] [authz_core:error] [pid 20493] [client 45.155.204.146:59076] AH01630: client denied by server configuration: /home/.../public_html/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php
Any suggestions?
Thanks
Double-check in your Apache httpd.conf and your VirtualHost conf file (if you have one) that the DocumentRoot for the site is correct.
You may also have to update the access rules:
<Directory />
AllowOverride none
Require all denied
</Directory>
<Directory /Path/to/Your/Site>
AllowOverride none
Require all granted
</Directory>
Apache2 Documentation for the error code
Edit: Wanted to add that the access rules may vary slightly depending on which version of Apache you’re running.
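For example, the older Apache 2.2 form of the same "allow everyone" rule looks different from the Apache 2.4 form shown above (a minimal sketch, reusing the placeholder path):
<Directory /Path/to/Your/Site>
Order allow,deny
Allow from all
</Directory>
On Apache 2.4 use the Require directives instead; mixing the two styles in one section tends to cause problems.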
When changing ownership and permissions on your site's directory hierarchy and files, be careful that you haven't exposed anything sensitive to the public.
I am getting "Curl Error : SSL_CACERT SSL certificate problem: unable to get local issuer certificate" when asking Facebook to scrape my page over https. How can I fix this so that Facebook can scrape my page without errors?
The page is hosted via Apache 2.4 proxying to IIS 10. Apache handles all certificates and IIS is on the local network. My page is running ASP code (so no PHP), so solutions such as editing the php.ini file or adding curl.pem to the PHP folder will not fix my problem ... or so I think.
IIS has no certificate installed.
I do have extension=php_curl.dll enabled -- and extension_dir = 'C:\64bit\php-7.0.6-Win32-VC14-x64\ext' defined in my php.ini file. I followed these steps to install Curl on Windows. And phpinfo.php confirms that cURL is enabled (cURL Information 7.47.1).
My proxy setup in my Apache config file is:
<IfModule mod_proxy.c>
ProxyRequests Off
ProxyPass / http://192.168.1.101:88/com_ssl/
ProxyPassReverse / http://192.168.1.101:88/com_ssl/
RewriteRule ^(.+)$ https://www.domainname.com/$1 [P,L]
</IfModule>
I have no RequestHeader defined in my Apache proxy config file, such as suggested here in Step 10:
RequestHeader set "X-RP-UNIQUE-ID" "%{UNIQUE_ID}e"
RequestHeader set "X-RP-REMOTE-USER" "%{REMOTE_USER}e"
RequestHeader set "X-RP-SSL-PROTOCOL" "%{SSL_PROTOCOL}s"
RequestHeader set "X-RP-SSL-CIPHER" "%{SSL_CIPHER}s"
Is this what is missing to fix the error?
"unable to get local issuer certificate" is almost always the error message you get when the server doesn't provide an intermediate certificate as it should in the TLS handshake, and as WizKid suggests, running the ssllabs test against the server will indeed tell you if that is the case.
If you are using a Node.js server and getting this 'Curl Error SSL_CACERT SSL certificate' error, then you need to add your CA certificate along with your SSL certificate:
var fs = require('fs');
var https = require('https');
var options = {
key: fs.readFileSync('server-key.pem'),
cert: fs.readFileSync('server-crt.pem'),
ca: fs.readFileSync('ca-crt.pem'), // <= Add This
};
https.createServer(options, function (req, res) {
console.log(new Date()+' '+
req.connection.remoteAddress+' '+
req.method+' '+req.url);
res.writeHead(200);
res.end("hello world\n");
}).listen(4433);
This may not have been the case at the time but I will add this info in case others encounter the same issue.
If you are using a CDN like Cloudflare, it is important to set up your SSL before adding the site to Cloudflare, as doing it the other way round can generate issues.
It is also important to ensure that all domains are correctly annotated in the DNS control of Cloudflare; otherwise you may end up serving your main domain via Cloudflare and your subdomain(s) directly from your server. While this won't matter much to the user (the site still shows as secure, is still accessible, and still passes SSL tests), it may flag issues with sharing apps onto social media.
Basically, I replicated the error by splitting the DNS setup as above and got the same flagged error highlighted by the OP. Then I added the DNS for the subdomain into Cloudflare, tested a few hours later (after re-scraping the page in the debugger: https://developers.facebook.com/tools/debug/sharing/?q=https%3A%2F%2Fus.icalculator.info%2Fterminology%2Fus-tax-tables%2F2019%2Fvirginia.html), and, hey presto, the error goes away. So, if you encounter this issue and you use Cloudflare, that is something to check you have set up correctly.
I put my WAMP sites on a virtual server, and from there I want everybody connected to a VPN to have access to them.
If my server's IP address is 172.13.12.156, after choosing the option Put Online in WAMP, I can access one of the sites like this:
http://172.13.12.156/mysite/
The problem is that I want to remove access when someone types just:
http://172.13.12.156
so that they won't be able to see the WAMP panel.
Is this possible?
ADDITIONAL INFO
At this moment I have tried:
<Directory "c:/wamp/www/">
Order Deny,Allow
Deny from all
Allow from 127.0.0.1
</Directory>
<Directory "c:/wamp/www/my_site1/">
# There will be comments here and some options like FollowSymLinks and AllowOverride Options Indexes FollowSymLinks Includes ExecCGI
AllowOverride All
Require all granted
Allow from all
</Directory>
WAMPServer 2.5 uses Apache 2.4
So first of all, don't mix Apache 2.2 syntax with Apache 2.4 syntax in the same section (it confuses Apache very easily). It is better to use just the new Apache 2.4 syntax anyway:
<Directory "c:/wamp/www/">
Require local
</Directory>
<Directory "c:/wamp/www/my_site1/">
Require all granted
# plus any other Options etc that are required by this site
</Directory>
I have two domains that are both hosted on the same server. Because of that, they both serve the same index.html page and share all of the other pages. This means that there are two ways to access every file stored on the server:
domain1/file
And
domain2/file
Is there a way to redirect the user to the corresponding domain1 URL whenever they go to a domain2 URL? The catch is that I only want to redirect when a domain2 URL is visited.
How can I achieve this programmatically?
Just because you have two domains running on one server does not mean they have to share index.html. The way around this is to use Virtual Hosts. You didn't mention which web server you are using, so I'll give you an Apache example:
<VirtualHost *:80>
DocumentRoot /www/example1
ServerName www.example.com
# Other directives here
</VirtualHost>
<VirtualHost *:80>
DocumentRoot /www/example2
ServerName www.example.org
# Other directives here
</VirtualHost>
This allows you to have two directories, each serving as a root path for each domain. You'd put the domain1 files in /www/example1, and the domain2 files in /www/example2, in this example. There are some other configuration options you may need, but again depending on your setup, they could vary greatly.
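If you then want domain2 to send visitors to domain1 rather than serve its own content (which is what the question asks), one way is to give domain2 a VirtualHost that only redirects. A sketch using the placeholder names from above, assuming mod_alias is enabled:
<VirtualHost *:80>
ServerName www.example.org
# Send every request for this domain to the same path on the other domain
Redirect permanent / http://www.example.com/
</VirtualHost>
A request for http://www.example.org/some/page would then be answered with a 301 to http://www.example.com/some/page.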
If you are using IIS, there's a writeup over on Server Fault that has information on how to perform that. (This question probably belongs there anyway).
I cannot send DELETE from my server to the client; POST works perfectly fine. I am using an Ubuntu machine on VirtualBox. In Wireshark I get the output below:
OPTIONS /wm/staticflowentrypusher/json HTTP/1.1
HTTP 73 HTTP/1.1 405 Method Not Allowed (application/json)
Instead of OPTIONS it should be DELETE. I saw that I have to enable it in the Apache2 configuration files, but I cannot figure out which one, and I also cannot find the right setting to enable it. I am using HTML and JavaScript.
Apache doesn't limit the methods unless you have a specific <Limit> section in your config, as in:
<Limit GET POST HEAD>
Allow from all
</Limit>
<Limit PUT DELETE OPTIONS>
Deny from all
</Limit>
So check your .htaccess file in the web directory and also check your config in your Apache conf directory. Otherwise it might be whatever scripting/automation server you're using on the backend that is creating this restriction.
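If you do find such a block, a relaxed version that also permits the missing methods would look something like this (Apache 2.4 syntax; only a sketch, assuming you really want these methods open to everyone):
<Limit GET POST HEAD PUT DELETE OPTIONS>
Require all granted
</Limit>
Remember that even with Apache allowing the method, the backend handling the request still has to implement DELETE (and answer the OPTIONS request shown in the capture) itself.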
I have checked out the posts and made the appropriate changes to the configuration files to make Zend Framework 2 work in my local environment. Everything goes fine, but the redirection to the page when specifying the vhost name does not work correctly. It shows me the home page of the MAMP server with the directory listing.
Here is what I have done till now:
httpd.conf
<VirtualHost *:80>
ServerName newportalcopper.localhost
DocumentRoot /Applications/MAMP/htdocs/NewPortalCopper/public
SetEnv APPLICATION_ENV "development"
<Directory /Applications/MAMP/htdocs/NewPortalCopper/public>
DirectoryIndex index.php
AllowOverride All
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
etc/hosts
127.0.0.1 newportalcopper.localhost localhost
Can someone tell me what I am doing wrong that this particular thing is not working?
Thanks for viewing the post, guys, and for the help given. In the end I was able to sort the problem out.
The main issue was the port number in the case of MAMP. It needed to be 8888 instead of 80. That specifically solved my problem.
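For reference, a minimal sketch of the adjusted vhost (assuming MAMP's default Apache port of 8888; everything else is unchanged from the question):
<VirtualHost *:8888>
ServerName newportalcopper.localhost
DocumentRoot /Applications/MAMP/htdocs/NewPortalCopper/public
SetEnv APPLICATION_ENV "development"
<Directory /Applications/MAMP/htdocs/NewPortalCopper/public>
DirectoryIndex index.php
AllowOverride All
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
The site is then reached at http://newportalcopper.localhost:8888/.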