Cannot issue PURGE request to Varnish cache

I am running Varnish 3 in front of nginx serving multiple WordPress sites, using a default.vcl recommended and used by many large WordPress sites.
default.vcl: http://pastebin.com/KaSdvuRS
I am using W3 Total Cache, which has an option to automatically purge when clearing the cache. I also tried the Varnish HTTP Purge plugin to purge posts/pages when editing them. Neither seemed to work, so I tested an interactive session over SSH with curl.
I am logged into SSH on the varnish/nginx box, and I type the following command to test the varnish purge:
curl -X PURGE http://www.example.com
The result is:
<head>
<title>405 Not allowed.</title>
</head>
<body>
<h1>Error 405 Not allowed.</h1>
<p>Not allowed.</p>
<h3>Guru Meditation:</h3>
<p>XID: 265824636</p>
<hr>
<p>Varnish cache server</p>
</body>
Any ideas what I am missing? This VCL file is very similar to what varnish-cache.org recommends for WordPress, and it is the purge configuration I see recommended everywhere.

Chances are you're connecting to your Varnish box via its public IP, so Varnish also sees a public IP connecting, not a local one. Your ACL for purges currently only allows localhost/127.0.0.1. You may want to extend that list with the public IP address of your server as well.
Alternatively, debug by removing the ACL check and simply allowing purges from everyone, just to rule out the ACL as the culprit.
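For reference, a minimal Varnish 3 sketch of that ACL change (the linked default.vcl isn't reproduced here, so the exact layout is an assumption; 203.0.113.10 stands in for your server's public IP):
acl purge {
    "localhost";
    "127.0.0.1";
    "203.0.113.10"; # assumed public IP that curl connects from
}
sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }
}
sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}
sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}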

Related

frappe.get_url return http:// but not https://

I am using frappe v14; however, when I use frappe.get_url in jinja2, it returns http://xxxx rather than https://xxxxx. May I know if I missed any setup? I am using https://xxxx for visiting the site.
You may have set up SSL manually (not through bench) for your site. You can add a rule in your NGINX config to auto-upgrade requests (if you have HTTPS set up), as sketched below.
Another way is to debug get_url and figure out which key is missing to be set in the site_config so that Frappe can handle the rest for you.
ref: Adding a redirect rule in NGINX
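A minimal sketch of such an upgrade rule, assuming SSL terminates at NGINX (example.com is a placeholder for your site's domain):
server {
    listen 80;
    server_name example.com;  # placeholder; use your site's domain
    # upgrade plain-HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}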

Setting up load-balancer based on authenticated users

I'm trying to set up a load balancer that would redirect certain users to a specific version of an application. So far I was using a Blue/Green deployment strategy (once I made a new version of an app, I created a new environment and redirected traffic there). Now I would like to change this approach. I want to be able to specify users (more experienced or whatever) that would see the new site after authentication, while the others would still be redirected to the old one. If something goes wrong with the new version, all users will see the old version. Currently my load balancing is done in Apache and authentication is done at the application level. So is this even possible? I know I could hardcode it in the application, but what if there is a bug in a new feature and new users are still being redirected there? I would then need to stop the application for all users and roll back to the old version, and that's bad, I guess. I was thinking about using an external CAS, but I didn't find any information on whether it would be possible then. So I would like to ask: is this possible, and are there any tools (maybe some Apache plugin) for that purpose?
Here's a working solution with nginx
create conf.d/balancer.conf
put the code into it (see below)
docker run -p8080:8080 -v ~/your_path/conf.d:/etc/nginx/conf.d openresty/openresty:alpine
use curl to play with it
balancer.conf:
map $cookie_is_special_user $upstream {
    default http://example.com;
    ~^1$ http://scooterlabs.com/echo;
}
server {
    listen 8080;
    resolver 8.8.8.8;
    location / {
        proxy_pass $upstream;
    }
}
testing
curl --cookie "is_special_user=1" http://localhost:8080
It returns the contents of scooterlabs.com/echo, which dumps the request it receives
curl http://localhost:8080
Produces the contents of example.com
explanation
the idea is that the backend app sets a special cookie for the users you treat as special, after they authenticate as usual
of course this only works if both app versions are served on the same domain, so that the cookie is seen by both versions
after that you balance them to the desired server depending on the cookie value
you can easily disable such routing by tweaking your nginx config file
with this approach you can come up with even more complex scenarios, like setting random cookie values in the range 1-10 and then gradually switching over some of the special users in your config file, i.e. starting with those having value 1, then 1-2, etc. (see the sketch below)
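A sketch of that gradual rollout, assuming the app assigns each user a random bucket cookie named canary_bucket with values 1-10 (the cookie name is hypothetical; the upstreams are the ones from the config above):
map $cookie_canary_bucket $upstream {
    default http://example.com;           # old version
    ~^[12]$ http://scooterlabs.com/echo;  # buckets 1-2 get the new version; widen the range to roll out further
}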

keycloak Invalid parameter: redirect_uri

When I try to authenticate a user from my API against Keycloak, it gives me the error Invalid parameter: redirect_uri on the Keycloak page. I have created my own realm apart from master. Keycloak is running on http. Please help me.
What worked for me was adding wildchar '*'. Although for production builds, I am going to be more specific with the value of this field. But for dev purposes you can do this.
The setting is available under: Keycloak admin console -> Realm_Name -> Clients -> Client_Name.
EDIT: DO NOT DO THIS IN PRODUCTION. Doing so creates a large security flaw.
If you are a .NET developer, check the configuration below. In the KeycloakAuthentication options class, set:
CallbackPath = RedirectUri, // this property needs to be set, otherwise Keycloak reports the invalid redirect_uri error
I faced the same error. In my case, the issue was that Valid Redirect URIs was not correct. These are the steps I followed:
First log in to Keycloak as an admin user.
Then select your realm (you may be auto-directed to it).
Select Clients from the left panel.
Then select the relevant client which you configured for your app.
By default you will be on the Settings tab; if not, select it.
My app was running on port 3000, so in my case the correct setting was a Valid Redirect URI of the form http://localhost:3000/*. If your app runs on localhost:3000, your setting should look like that.
If you're getting this error because of a new realm you created
You can directly change the URL in the URL bar to get past this error. In the URL you are redirected to (you may have to look in Chrome dev tools for this URL), change the realm from master to the one you just created, and if you are not using https, make sure the redirect_uri also uses http.
If you're getting this error because you're trying to setup Keycloak on a public facing domain (not localhost)
Step 1)
Follow this documentation to set up a MySQL database (link's broken. If you find some good alternative documentation that works for you, feel free to update this link and remove this message). You may also need to refer to this documentation.
Step 2)
Against the Keycloak database, run the command update REALM set ssl_required = 'NONE' where id = 'master';
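For example, with the MySQL client (the keycloak database and user names are assumptions carried over from the step 1 setup):
mysql -u keycloak -p -D keycloak -e "update REALM set ssl_required = 'NONE' where id = 'master';"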
Note:
At this point, you should technically be able to login, but version 4.0 of Keycloak is using https for the redirect uri even though we just turned off https support. Until Keycloak fixes this, we can get around this with a reverse proxy. A reverse proxy is something we will want to use anyhow to easily create SSL/TLS certificates without having to worry about Java keystores.
Note 2: After writing these instructions, Keycloak came out with their own proxy. They then stopped supporting it and recommended using oauth2 proxy instead. It is lacking some features the Keycloak proxy had, and an unofficial version of that proxy is still being maintained here. I haven't tried either of these proxies, but at this point you might want to stop following my directions and use one of those instead.
Step 3) Install Apache. We will use Apache as a reverse proxy (I tried NGINX, but NGINX had some limitations that got in the way). See yum installing Apache (CentOS 7) and apt-get installing Apache (Ubuntu 16), or find instructions for your specific distro.
Step 4) Run Apache
Use sudo systemctl start httpd (CentOS) or sudo systemctl start apache2 (Ubuntu).
Use sudo systemctl status httpd (CentOS) or sudo systemctl status apache2 (Ubuntu) to check if Apache is running. If you see the words active (running) in green text, or if the last entry reads Started The Apache HTTP Server., then you're good.
Step 5) We will establish an SSL connection with the reverse proxy, and the reverse proxy will then communicate with Keycloak over http. Because this http communication happens on the same machine, you're still secure. We can use Certbot to set up auto-renewing certificates, as sketched below.
If this type of encryption is not good enough and your security policy requires end-to-end encryption, you will have to figure out how to set up SSL through WildFly instead of using a reverse proxy.
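A typical Certbot invocation with the Apache plugin looks like this (assuming Certbot is already installed; example.com is a placeholder):
sudo certbot --apache -d example.com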
Note:
I was never actually able to get https to work properly with the admin portal. Perhaps this was just a bug in the beta version of Keycloak 4.0 that I'm using. You're supposed to be able to set the SSL level to only require it for external requests, but this did not seem to work, which is why we set https to none in step 2. From here on we will continue to use http over an SSH tunnel to manage the admin settings.
Step 6)
Whenever you visit the site via https, you will trigger an HSTS policy which auto-forces http requests to redirect to https. Follow these instructions to clear the HSTS rule from Chrome, and then, for the time being, do not visit the https version of the site again.
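(In current Chrome versions this can typically be done at chrome://net-internals/#hsts, using the "Delete domain security policies" field.)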
Step 7)
Configure Apache. Add the virtual host config in the code block below. If you've never done this, the first thing you'll need to do is figure out where to add this config file.
On RHEL and some other distros
you'll need to find where your httpd.conf or apache2.conf file is located. That config file should load virtual host config files from another folder, such as conf.d.
If you are using Ubuntu or Debian,
your config files will be located in /etc/apache2/sites-available/ and you'll have the extra step of enabling them by running sudo a2ensite name-of-your-conf-file.conf. That creates a symlink in /etc/apache2/sites-enabled/, which is where Apache looks for config files on Ubuntu/Debian (and remember, the config file itself lives in sites-available, slightly different).
All distros
Once you've found the config files, change out, or add, the following virtual host entries in your config files. Make sure you don't override the SSL options that were already generated by certbot. When done, your config file should look something like this.
<VirtualHost *:80>
    RewriteEngine on

    # Change https redirect_uri parameters to http
    RewriteCond %{REQUEST_URI}\?%{QUERY_STRING} ^(.*)redirect_uri=https(.*)$
    RewriteRule . %1redirect_uri=http%2 [NE,R=302]

    # Uncomment to force https (does not currently work)
    #RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI}

    # Forward the requests on to Keycloak
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
<IfModule mod_ssl.c>
    <VirtualHost *:443>
        RewriteEngine on

        # Disable HSTS
        Header set Strict-Transport-Security "max-age=0; includeSubDomains;" env=HTTPS

        # Change https redirect_uri parameters to http
        RewriteCond %{REQUEST_URI}\?%{QUERY_STRING} ^(.*)redirect_uri=https(.*)$
        RewriteRule . %1redirect_uri=http%2 [NE,R=302]

        # Forward the requests on to Keycloak
        ProxyPreserveHost On
        ProxyPass / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/

        # Leave the items added by certbot alone: there should be a ServerName
        # option, a bunch of options to configure the location of the SSL cert
        # files, and an option to include an additional config file
    </VirtualHost>
</IfModule>
Step 8) Restart Apache. Use sudo systemctl restart httpd (CentOs) or sudo systemctl restart apache2 (Ubuntu).
Step 9)
Before you have a chance to try to login to the server, since we told Keycloak to use http, we need to setup another method of connecting securely. This can be done by either installing a VPN service on the keycloak server, or by using SOCKS. I used a SOCKS proxy. In order to do this, you'll first need to setup dynamic port forwarding.
ssh -N -D 9905 user@example.com
Or set it up via Putty.
All traffic sent to local port 9905 will now be securely routed through an SSH tunnel to your server. Note that the SOCKS port is bound on your own machine, so only the usual SSH port needs to be reachable on the server.
Once you have dynamic port forwarding setup, you will need to setup your browser to use a SOCKS proxy on port 9905. Instructions here.
Step 10) You should now be able to login to the Keycloak admin portal. To connect to the website go to http://127.0.0.1, and the SOCKS proxy will take you to the admin console. Make sure you turn off the SOCKS proxy when you're done as it does utilize your server's resources, and will result in a slower internet speed for you if kept on.
Step 11) Don't ask me how long it took me to figure all of this out.
IMPORTANT UPDATE IN KEYCLOAK 18
In Keycloak 18, the redirect_uri parameter for the OpenID Connect logout has been deprecated -> https://www.keycloak.org/docs/latest/upgrading/index.html#openid-connect-logout
Go to Keycloak admin console > SpringBootKeycloak > Clients > login-app page.
Here, in the Valid Redirect URIs section, add
http://localhost:8080/sso/login
This will help resolve the redirect-uri problem.
For me, I had a missing trailing slash / in the value for Valid Redirect URIs
[For Keycloak version 18 or Higher]
None of the mentioned solutions will work if you are using Keycloak 18 or a higher version.
According to the version 18 release notes, Keycloak no longer supports logout with redirect_uri; you need to include post_logout_redirect_uri and id_token_hint as parameters (see the sketch after the link below).
Please check the answer of this question for more information.
keycloak: using react user can login but when I try logout I get a message "Invalid parameter: redirect_uri"
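For illustration, a logout URL under the new scheme looks something like this (host, realm, and token value are placeholders):
https://keycloak.example.com/realms/myrealm/protocol/openid-connect/logout?post_logout_redirect_uri=https%3A%2F%2Fapp.example.com%2F&id_token_hint=<id_token>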
Log in to the Keycloak admin console website, select the realm and its client, then make sure all URIs of the client are prefixed with the protocol, that is, with http:// for example. An example would be http://localhost:8082/*
Another way to solve the issue is to view the Keycloak server console output, locate the line stating that the request was refused, copy the redirect_uri value displayed there, and paste it into the Valid Redirect URIs field of the client in the Keycloak admin console website. The requested URI then becomes one of the acceptable ones.
If you're seeing this problem after you've made a modification to the Keycloak context path, you'll need to make an additional change to a redirect url setting:
Change <web-context>yourchange/auth</web-context> back to
<web-context>auth</web-context> in standalone.xml
Restart Keycloak and navigate to the login page (/auth/admin)
Log in and select the "Master" realm
Select "Clients" from the side menu
Select the "security-admin-console" client from the list that appears
Change the "Valid Redirect URIs" from /auth/admin/master/console/* to
/yourchange/auth/admin/master/console/*
Save and sign out. You'll again see the "Invalid redirect url" message after signing out.
Now, put in your original change
<web-context>yourchange/auth</web-context> in standalone.xml
Restart Keycloak and navigate to the login page (which is now
/yourchange/auth/admin)
Log in and enjoy
I faced the same issue. I rectified it by going to the particular client under the respective realm and adding * after the complete URL in the redirect URL field.
I had the same problem with "localhost" in the redirect URL. Change to 127.0.0.1 in the "Valid Redirect URIs" field of clients config (KeyCloak web admin console). It works for me.
It seems that this problem can occur if you put whitespace in your realm name. I had the name set to Debugging Realm and I got this error. When I changed it to DebuggingRealm, it worked.
You can still have whitespace in the display name. Odd that Keycloak doesn't check for this on admin input.
I faced the same issue as well and fixed it the same way: go to the particular client under the respective realm and add * after the complete URL in the redirect URI field, and the problem will be solved.
Example:
redirect URI: http://localhost:3000/myapp/generator/*
Looking at the exact rewrite was key for me. The wellKnownUrl lookup was returning "http://127.0.01:7070/" and I had specified "http://localhost:7070" ;-)
I faced the Invalid parameter: redirect_uri problem while following the Spring Boot and Keycloak example available at http://www.baeldung.com/spring-boot-keycloak. When adding the client on the Keycloak server, we have to provide the redirect URI for that client so that the Keycloak server can perform the redirection.
When I faced the same error multiple times, I copied the correct URL from the Keycloak server console, provided it in the Valid Redirect URIs field, and it worked fine!
This error is also thrown when your user does not have the expected role delegated in the user definition (set the role for the realm in the dropdown).
The redirect URI in your code (keycloak.init) should be the same as the redirect URI set on the Keycloak server (client -> Valid URI).
Ran into this problem too. After two days of pulling my hair out, I discovered that the URLs in Keycloak are case-sensitive. However, the browser converts the URL to lowercase, which means that uppercase URLs in Keycloak will never work.
e.g. my server name is MYSERVER (hostname returns MYSERVER)
Keycloak URLs are https://MYSERVER:8080/*
Browse to https://myserver:8080 -> fails invalid_url
Browse to https://MYSERVER:8080 -> fails invalid_url
Change Keycloak URLs to https://myserver:8080/*
Browse to https://myserver:8080 -> works
Browse to https://MYSERVER:8080 -> works
We also saw this, but only on certain URLs. After seeing this clue, I realized that the Java URI constructor has to be able to decode it, like so: URI uri = URI.create(redirectUri);
We had a { and } in our URLs, which normally worked fine, but when going through two layers of URL decode/encode, Java decided the { and } were invalid.
We'll be changing our curly braces to something else to get around the double encode/decode issue.
I know other people provided the same answer, but my reputation was not high enough to upvote them. In the redirect menu, mine had a redirect of " 0.0.0.0:8080/* ". I added (actualIP) followed by :8080/* and it worked.
In your client, set the origin of your request. In my case, localhost:3000 (JavaScript client).
If you are using the Authorization Code Flow then the response_type query param must be equal to code. See https://www.keycloak.org/docs/3.3/server_admin/topics/sso-protocols/oidc.html
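For reference, an Authorization Code Flow request then looks something like this (host, realm, client ID, and redirect URI are placeholders):
https://keycloak.example.com/auth/realms/myrealm/protocol/openid-connect/auth?client_id=my-app&response_type=code&scope=openid&redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback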
You need to check the Keycloak admin console for the frontend configuration. It must be wrongly configured for the redirect URL and web origins.
If you're trying to redirect to the keycloak login page after logout (as I was), that is not allowed by default but also needs to be configured in the "Valid Redirect URIs" setting in the admin console of your client.
Check that the value of the redirect_uri parameter is whitelisted for the client that you are using. You can manage the configuration of the client via the admin console.
The redirect URI should match exactly one of the whitelisted redirect URIs, or you can use a wildcard at the end of the URI you want to whitelist. See: https://www.keycloak.org/docs/latest/server_admin/#_clients
Note that using wildcards to whitelist redirect URIs is allowed by Keycloak but is actually a violation of the OpenID Connect specification. See the discussion at https://lists.jboss.org/pipermail/keycloak-dev/2018-December/011440.html
My issue was caused by a wrong client_id (OPENID_CLIENT_ID) defined in the deployment.yaml. Make sure this field matches the client ID configured in Keycloak.
The problem seems related to an invalid value in the Valid Redirect URIs field. You can try one of these tips:
set the same value as the Client ID (if it's a URL), making it end with /* , or
tryingToLearn's response [https://stackoverflow.com/a/51420355/97799] (but beware of security issues).
I ran into this due to a malformed redirect URL in the Keycloak client:
https://http://192.168.1.10/hub/oauth_callback
As soon as I took out https://, the error went away.
I'm using version 20.0.2 and, for me, the solution was to simply add a '+' in the "Valid post logout redirect URIs" field.
As stated in the help balloon, "A value of '+' will use the list of valid redirect uris".
I faced a similar issue because I created a realm with two words and a space in it, e.g. Test Realm; this gave me the error. I put an underscore instead, e.g. Test_Realm, and was good to go.

phpinfo() is not working on my CentOS server

I have PHP installed on my CentOS server. However, when running phpinfo() inside my script to test it, I receive the raw HTML, not the interpreted information.
I can see the folders for PHP. I can even see the php.ini in the etc folder.
But PHP itself does not seem to be working.
I mean my test.php file looks like this:
<?php
phpinfo();
?>
And the response looks like this:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "DTD/xhtml1-transitional.dtd">
<html><head>
<style type="text/css">
body {background-color: #ffffff; color: #000000;}
body, td, th, h1, h2 {font-family: sans-serif;}
pre {margin: 0px; font-family: monospace;}
a:link {color: #000099; text-decoration: none; background-color: #ffffff;}
...
and so on.
What seems to be the problem and how do I solve it?
If I copy the returned HTML, paste it into an HTML file, and open that in a browser, I can see the formatted result, but not by running test.php. I assume PHP is not loaded somehow... even though in the interpreted HTML I can see:
Server API: Apache 2.0 Handler
Virtual Directory Support: disabled
Configuration File (php.ini) Path: /etc/php.ini
Scan this dir for additional .ini files: /etc/php.d
additional .ini files parsed: /etc/php.d/dbase.ini, /etc/php.d/json.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini
PHP API: 20041225
PHP Extension: 20050922
Zend Extension: 220051025
Debug Build: no
Thread Safety: disabled
Zend Memory Manager: enabled
IPv6 Support: enabled
Registered PHP Streams: php, file, http, ftp, compress.bzip2, compress.zlib, https, ftps
and so on...
On this system, there are three websites hosted. Does that have anything to do with this problem?
This happened to me as well. The fix was wrapping it in HTML tags. Then I saved the file as /var/www/html/info.php and opened http://localhost/info.php in the browser. That's it.
<html>
<body>
<?php
phpinfo();
?>
</body>
</html>
Save the page which contains
<?php
phpinfo();
?>
with the .php extension. Something like info.php, not info.html...
You need to update your Apache configuration to make sure it outputs PHP as the type text/html.
The configuration below should work, but some setups differ:
AddHandler php5-script .php
AddType text/html .php
This did it for me (the second answer): Why are my PHP files showing as plain text?
Nothing else worked for me; simply adding this did:
apt-get install libapache2-mod-php5
I just had the same issue and found that a client had disabled this function from their php.ini file:
; This directive allows you to disable certain functions for security reasons.
; It receives a comma-delimited list of function names. This directive is
; *NOT* affected by whether Safe Mode is turned On or Off.
disable_functions = phpinfo;
More information: Enabling and disabling phpinfo() for security reasons
Be sure that the php tag is written without a space, like this:
<?php phpinfo(); ?>
Not like this:
<? php phpinfo(); ?>
Otherwise the server will treat it as a normal word; it will not recognize the language you are writing, so the page will come out blank.
I know it's a silly error... but it happens ^_^
Try creating a php.ini file in the root, write the following line in it, and save it:
disable_functions =
This will enable the phpinfo() function for you if it was disabled by the global PHP configuration.
For people who have no experience building websites (like me): I tried a lot, only to find out that I had used the .html extension instead of .php.
It happened to me as well, on a newly provisioned Red Hat Linux 7 server.
When I ran a PHP page, i.e. info.php, I saw the plain-text PHP script instead of its output.
I just installed PHP:
[root@localhost ~]# yum install php
And then restarted Apache HTTP Server:
[root@localhost ~]# systemctl restart httpd
My solution was uninstalling and reinstalling the PHP instance (7.0 in my case):
sudo apt-get purge php7.0
sudo apt-get install php7.0
After that you will need to restart the Apache service:
sudo service apache2 restart
Finally, verify again in your browser with your localhost or IP address.
I had accidentally set the wrong file permissions. After chmod 644 phpinfo.php, the info showed up as expected.
It may not work for you if you use localhost/info.php.
You may be able to find the clue in the error: look for the port number in the error message. For me it was 80. I changed the address to http://localhost:80/info.php, and then it worked for me.
I had the same problem.
The solution in my case was to set default_mimetype = "text/html" inside the php.ini file.
Another possible answer for Windows 10:
The command httpd -k restart somehow does not work on my machine.
Use the Windows 10 Services panel to restart the relevant service.

ASP.Net MVC 2 on nginx/mono 2.8

I am trying to set up an ASP.NET MVC 2 application in a Linux environment. I've installed Ubuntu 10.10 on VirtualBox, then installed Mono 2.8 from source. After that I installed nginx and configured it as recommended here.
Unfortunately, FastCGI shows me standard error 500 page:
No Application Found
Unable to find a matching application for request:
Host localhost:80
Port 80
Request Path /Default.aspx
Physical Path /var/www/mvc/Default.aspx
My application is located in the /var/www/mvc directory. I've tried creating a stub Default.aspx file and placing it in the root dir of my application, but it didn't help; the same error occurred.
Thanks.
I've been doing some testing with this as well, using all Ubuntu 10.10 binaries.
From what I can make of it, either nginx fails to pass the hostname or the mono server fails to receive it over the FastCGI protocol. Anyhow, the tutorial line:
fastcgi-mono-server2 /applications=www.domain1.xyz:/:/var/www/www.domain1.xyz/ /socket=tcp:127.0.0.1:9000
doesn't work. Removing the hostname makes the thing work:
fastcgi-mono-server2 /applications=/:/var/www/www.domain1.xyz/ /socket=tcp:127.0.0.1:9000
but this of course blocks the use of multiple virtual mono hosts.
Since you are running an ASP.NET MVC 2 application, you should use fastcgi-mono-server4.
Adding the following line to /etc/nginx/fastcgi_params resolved the issue for me. It also allows the use of multiple virtual hosts:
fastcgi_param HTTP_HOST $host;
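For context, a minimal sketch of the matching nginx server block (paths and the port are taken from the question and the usual Mono tutorial; your setup may differ):
server {
    listen 80;
    server_name www.domain1.xyz;
    root /var/www/mvc;
    location / {
        index index.html Default.aspx;
        fastcgi_index Default.aspx;
        fastcgi_pass 127.0.0.1:9000;        # the fastcgi-mono-server4 socket
        include /etc/nginx/fastcgi_params;  # now passes HTTP_HOST as well
    }
}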
Does your application work with xsp (xsp4 if you are using .NET 4.0)? You'll want to make sure that works before you try configuring the connection to another web server, as in the quick check after the link below.
Does nginx know where to find Mono? You most likely have a parallel install, and it won't be on the default paths.
I use apache, but you may still find some of the instructions on my blog useful:
http://tqcblog.com/2010/04/02/ubuntu-subversion-teamcity-mono-2-6-and-asp-net-mvc/
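A quick check with xsp4 might look like this (the root path and port are illustrative):
xsp4 --root /var/www/mvc --port 9001
Then browse to http://localhost:9001/ to confirm the app runs before involving nginx.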
I had this problem just now; I too had been following the document on the Mono site.
I was trying to start the fastcgi-mono-server as it suggested:
sudo fastcgi-mono-server4 /applications=www.domain1.xyz:/:/var/www/www.domain1.xyz/ /socket=tcp:127.0.0.1:9000 &
However, when I did it like that, I got the same problem as you. I changed it to this:
sudo fastcgi-mono-server4 /applications=/:/var/www/www.domain1.xyz/ /socket=tcp:127.0.0.1:9000 &
And it worked (I had to type in www.domain1.xyz/Home/Index to see my MVC page; I haven't worked out how to stop it looking for www.domain1.xyz/default.aspx yet XD).
You need to make sure the domain set in your site config matches the domain passed to the fastcgi server. For example, if your default site (/etc/nginx/sites-enabled/default) has the following config:
server {
...
server_name www.domain1.xyz;
...
}
You would need to pass that domain into the fastcgi server:
sudo fastcgi-mono-server4 /applications=www.domain1.xyz:/:/var/www/www.domain1.xyz/ ...
Then when you access the site it will obviously need to be with that domain you set.