FreeRADIUS 3.0.13 rlm_rest RESTful API authentication

I'm trying to authenticate RADIUS requests against a RESTful API. My virtual server configuration is below:
authorize {
filter_username
filter_password
preprocess
auth_log
if (User-Password) {
update control {
Auth-Type := rest
}
}
}
authenticate {
rest
}
My radiusd -X output is:
(0) Received Access-Request Id 202 from 127.0.0.2:10708 to 127.0.0.2:1812 length 73
(0) User-Name = "bob"
(0) User-Password = "hello"
(0) NAS-IP-Address = 127.0.0.2
(0) NAS-Port = 1
(0) Message-Authenticator = 0xcd622e98255234964d081be2513a0a9c
(0) # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/testserver
(0) authorize {
(0) policy filter_username {
(0) if (&User-Name) {
(0) if (&User-Name) -> TRUE
(0) if (&User-Name) {
(0) if (&User-Name =~ / /) {
(0) if (&User-Name =~ / /) -> FALSE
(0) if (&User-Name =~ /#[^#]*#/ ) {
(0) if (&User-Name =~ /#[^#]*#/ ) -> FALSE
(0) if (&User-Name =~ /\.\./ ) {
(0) if (&User-Name =~ /\.\./ ) -> FALSE
(0) if ((&User-Name =~ /#/) && (&User-Name !~ /#(.+)\.(.+)$/)) {
(0) if ((&User-Name =~ /#/) && (&User-Name !~ /#(.+)\.(.+)$/)) -> FALSE
(0) if (&User-Name =~ /\.$/) {
(0) if (&User-Name =~ /\.$/) -> FALSE
(0) if (&User-Name =~ /#\./) {
(0) if (&User-Name =~ /#\./) -> FALSE
(0) } # if (&User-Name) = notfound
(0) } # policy filter_username = notfound
(0) policy filter_password {
(0) if (&User-Password && (&User-Password != "%{string:User-Password}")) {
(0) EXPAND %{string:User-Password}
(0) --> hello
(0) if (&User-Password && (&User-Password != "%{string:User-Password}")) -> FALSE
(0) } # policy filter_password = notfound
(0) [preprocess] = ok
(0) auth_log: EXPAND /antikor/log/radacct/%{%{Packet-Src-IP-Address}:-%{Packet-Src-IPv6-Address}}/auth-detail-%Y%m%d
(0) auth_log: --> /antikor/log/radacct/127.0.0.2/auth-detail-20170429
(0) auth_log: /antikor/log/radacct/%{%{Packet-Src-IP-Address}:-%{Packet-Src-IPv6-Address}}/auth-detail-%Y%m%d expands to /antikor/log/radacct/127.0.0.2/auth-detail-20170429
(0) auth_log: EXPAND %t
(0) auth_log: --> Sat Apr 29 19:46:26 2017
(0) [auth_log] = ok
(0) if (User-Password) {
(0) if (User-Password) -> TRUE
(0) if (User-Password) {
(0) update control {
(0) Auth-Type := rest
(0) } # update control = noop
(0) } # if (User-Password) = noop
(0) } # authorize = ok
(0) Found Auth-Type = rest
(0) # Executing group from file /usr/local/etc/raddb/sites-enabled/testserver
(0) authenticate {
rlm_rest (rest): Reserved connection (0)
(0) rest: Expanding URI components
(0) rest: EXPAND http://127.0.0.1:8902
(0) rest: --> http://127.0.0.1:8902
(0) rest: EXPAND /test.php?action=authenticate
(0) rest: --> /test.php?action=authenticate
(0) rest: Sending HTTP POST to "http://127.0.0.1:8902/test.php?action=authenticate"
(0) rest: EXPAND {"username":"%{User-Name}", "password":"%{User-Password}"}
(0) rest: --> {"username":"bob", "password":"hello"}
(0) rest: Processing response header
(0) rest: Status : 200 (OK)
(0) rest: Type : json (application/json)
(0) rest: Parsing attribute "control:Cleartext-Password"
(0) rest: EXPAND hello
(0) rest: --> hello
(0) rest: Cleartext-Password := "hello"
(0) rest: Parsing attribute "request:User-Password"
(0) rest: EXPAND hello
(0) rest: --> hello
(0) rest: User-Password := "hello"
(0) rest: Parsing attribute "reply:Reply-Message"
(0) rest: EXPAND Hello bob
(0) rest: --> Hello bob
(0) rest: Reply-Message := "Hello bob"
rlm_rest (rest): Released connection (0)
Need 5 more connections to reach 10 spares
rlm_rest (rest): Opening additional connection (5), 1 of 27 pending slots used
rlm_rest (rest): Connecting to "http://127.0.0.1:8902/test.php"
(0) [rest] = updated
(0) } # authenticate = updated
(0) Failed to authenticate the user
(0) Login incorrect: [bob/hello] (from client antikor-l2tp port 1)
(0) Using Post-Auth-Type Reject
(0) # Executing group from file /usr/local/etc/raddb/sites-enabled/testserver
(0) Post-Auth-Type REJECT {
(0) attr_filter.access_reject: EXPAND %{User-Name}
(0) attr_filter.access_reject: --> bob
(0) attr_filter.access_reject: Matched entry DEFAULT at line 11
(0) [attr_filter.access_reject] = updated
(0) [eap] = noop
(0) policy remove_reply_message_if_eap {
(0) if (&reply:EAP-Message && &reply:Reply-Message) {
(0) if (&reply:EAP-Message && &reply:Reply-Message) -> FALSE
(0) else {
(0) [noop] = noop
(0) } # else = noop
(0) } # policy remove_reply_message_if_eap = noop
(0) } # Post-Auth-Type REJECT = updated
(0) Delaying response for 1.000000 seconds
Waking up in 0.3 seconds.
Waking up in 0.6 seconds.
(0) Sending delayed response
(0) Sent Access-Reject Id 202 from 127.0.0.2:1812 to 127.0.0.2:10708 length 33
(0) Reply-Message = "Hello bob"
I added both the control:Cleartext-Password and request:User-Password attributes to the test.php JSON reply (and also tried them one by one), but the authentication step still fails. The JSON output is as below:
{"control:Cleartext-Password":"hello", "request:User-Password":"hello","reply:Reply-Message":"Hello bob"}
Is my JSON response wrong, and what should it look like for an authentication reply?
Thanks.

The authorize method of the rlm_rest module acts like that of other datastore modules such as rlm_sql, rlm_redis and rlm_couchbase.
It is mainly for retrieving AVPs from a remote source. It can be used as an authentication module, but not in the way you were calling it above (there's an example at the bottom of this answer).
With the way you're calling rlm_rest, in order for the user to be accepted you'll need to list another module that can look at the attributes in the request and at what you got back from your REST API, and figure out what type of authentication to perform. If you're doing plaintext authentication (i.e. no EAP), you can use the pap module.
Your server config would then look something like:
authorize {
rest
pap
}
authenticate {
pap
}
rest.authorize retrieves control:Cleartext-Password which gives the server the "good" password to compare against the password the user sent.
pap.authorize checks to see if request:User-Password exists, and if it does, sets control:Auth-Type pap.
pap.authenticate compares control:Cleartext-Password with request:User-Password and returns ok if they match, or reject if they don't.
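With that flow, the REST endpoint only needs to hand back the known-good password (and, optionally, a Reply-Message). A minimal sketch of the JSON your test.php could return for the authorize call, using the values from your debug output, is:
{"control:Cleartext-Password":"hello", "reply:Reply-Message":"Hello bob"}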
The other way of authenticating a plaintext user in this case is by using HTTP Basic Auth and rlm_rest's authenticate method. The policy for that would look something like this:
authorize {
if (&User-Password) {
update control {
Auth-Type := 'rest'
}
}
}
authenticate {
rest
}

Related

Varnish MISS on some URLs, but HIT on others

We are using Magento 2 with Varnish cache
We only get a Varnish cache HIT on very few /catalogsearch/result/ pages, and we really cannot figure out why we don't get a cache HIT on all /catalogsearch/result/ pages.
Please help us in the right direction :-)
For example, we always get a HIT on this URL:
https://www.babygear.dk/catalogsearch/result/?q=bog
We always get a MISS on a lot of other search queries:
https://www.babygear.dk/catalogsearch/result/?q=black
https://www.babygear.dk/catalogsearch/result/?q=box
Here is our varnish.vcl:
# VCL version 5.0 is not supported, so this says 4.0 even though the Varnish version actually in use is 5
vcl 4.0;
import std;
# The minimal Varnish version is 5.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'
backend default {
.host = "127.0.0.1";
.port = "8080";
.first_byte_timeout = 600s;
}
acl purge {
"127.0.0.1";
}
sub vcl_recv {
if (req.method == "PURGE") {
if (client.ip !~ purge) {
return (synth(405, "Method not allowed"));
}
# To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header
# has been added to the response in your backend server config. This is used, for example, by the
# capistrano-magento2 gem for purging old content from varnish during it's deploy routine.
if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool) {
return (synth(400, "X-Magento-Tags-Pattern or X-Pool header required"));
}
if (req.http.X-Magento-Tags-Pattern) {
ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
}
if (req.http.X-Pool) {
ban("obj.http.X-Pool ~ " + req.http.X-Pool);
}
return (synth(200, "Purged"));
}
if (req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "DELETE") {
/* Non-RFC2616 or CONNECT which is weird. */
return (pipe);
}
# We only deal with GET and HEAD by default
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Bypass shopping cart, checkout
if (req.url ~ "/checkout") {
return (pass);
}
# Bypass health check requests
if (req.url ~ "/pub/health_check.php") {
return (pass);
}
# Set initial grace period usage status
set req.http.grace = "none";
# normalize url in case of leading HTTP scheme and domain
set req.url = regsub(req.url, "^http[s]?://", "");
# collect all cookies
std.collect(req.http.Cookie);
# Compression filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression
if (req.http.Accept-Encoding) {
if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
# No point in compressing these
unset req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
set req.http.Accept-Encoding = "deflate";
} else {
# unknown algorithm
unset req.http.Accept-Encoding;
}
}
# Remove all marketing get parameters to minimize the cache objects
if (req.url ~ "(\?|&)(gclid|ff|fp|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=") {
set req.url = regsuball(req.url, "(gclid|ff|fp|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=[-_A-z0-9+()%.]+&?", "");
set req.url = regsub(req.url, "[?|&]+$", "");
}
# Static files caching
if (req.url ~ "^/(pub/)?(media|static)/") {
# Static files should not be cached by default
#return (pass);
# But if you use a few locales and don't use CDN you can enable caching static files by commenting previous line (#return (pass);) and uncommenting next 3 lines
unset req.http.Https;
unset req.http.X-Forwarded-Proto;
unset req.http.Cookie;
}
return (hash);
}
sub vcl_hash {
if (req.http.cookie ~ "X-Magento-Vary=") {
hash_data(regsub(req.http.cookie, "^.*?X-Magento-Vary=([^;]+);*.*$", "\1"));
}
# For multi site configurations to not cache each other's content
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
# To make sure http users don't see ssl warning
if (req.http.X-Forwarded-Proto) {
hash_data(req.http.X-Forwarded-Proto);
}
if (req.url ~ "/graphql") {
call process_graphql_headers;
}
}
sub process_graphql_headers {
if (req.http.Store) {
hash_data(req.http.Store);
}
if (req.http.Content-Currency) {
hash_data(req.http.Content-Currency);
}
}
sub vcl_backend_response {
set beresp.grace = 3d;
if (beresp.http.content-type ~ "text") {
set beresp.do_esi = true;
}
if (bereq.url ~ "\.js$" || beresp.http.content-type ~ "text") {
set beresp.do_gzip = true;
}
if (beresp.http.X-Magento-Debug) {
set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control;
}
# cache only successfully responses
if (beresp.status != 200) {
set beresp.ttl = 0s;
set beresp.uncacheable = true;
return (deliver);
} elsif (beresp.http.Cache-Control ~ "private") {
set beresp.uncacheable = true;
set beresp.ttl = 44400s;
return (deliver);
}
# validate if we need to cache it and prevent from setting cookie
if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
unset beresp.http.set-cookie;
}
# If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass
if (beresp.ttl <= 0s ||
beresp.http.Surrogate-control ~ "no-store" ||
(!beresp.http.Surrogate-Control &&
beresp.http.Cache-Control ~ "no-cache|no-store") ||
beresp.http.Vary == "*") {
# Mark as Hit-For-Pass for the next 2 minutes
set beresp.ttl = 120s;
set beresp.uncacheable = true;
}
return (deliver);
}
sub vcl_deliver {
if (resp.http.X-Magento-Debug) {
if (resp.http.x-varnish ~ " ") {
set resp.http.X-Magento-Cache-Debug = "HIT";
set resp.http.Grace = req.http.grace;
} else {
set resp.http.X-Magento-Cache-Debug = "MISS";
}
} #else {
# unset resp.http.Age;
# }
# Not letting browser to cache non-static files.
if (resp.http.Cache-Control !~ "private" && req.url !~ "^/(pub/)?(media|static)/") {
set resp.http.Pragma = "no-cache";
set resp.http.Expires = "-1";
set resp.http.Cache-Control = "no-store, no-cache, must-revalidate, max-age=0";
}
unset resp.http.X-Magento-Debug;
unset resp.http.X-Magento-Tags;
unset resp.http.X-Powered-By;
unset resp.http.Server;
unset resp.http.X-Varnish;
unset resp.http.Via;
unset resp.http.Link;
}
sub vcl_hit {
if (obj.ttl >= 0s) {
# Hit within TTL period
return (deliver);
}
if (std.healthy(req.backend_hint)) {
if (obj.ttl + 3000000s > 0s) {
# Hit after TTL expiration, but within grace period
set req.http.grace = "normal (healthy server)";
return (deliver);
} else {
# Hit after TTL and grace expiration
return (miss);
}
} else {
# server is not healthy, retrieve from cache
set req.http.grace = "unlimited (unhealthy server)";
return (deliver);
}
}
It's a bit tough to judge what's really going on, because there's both a Varnish cache in front of Magento and Cloudflare as the CDN.
A no-cache/no-store Cache-Control value
What I am seeing in general for your searches, is the following Cache-Control value:
cache-control: no-store, no-cache, must-revalidate, max-age=0
Based on this value, Varnish will decide not to cache. In the response headers of most of your search results you will see that Age: 0 is set. This means that Varnish doesn't hold the object in cache.
A cached search result that shouldn't be cacheable
However, weirdly enough, https://www.babygear.dk/catalogsearch/result/?q=bog does have an Age header with a value greater than zero:
age: 38948
This means it's been in cache for 38948 seconds. But really, this shouldn't be happening, because the page is not supposed to be cacheable.
Making search results cacheable:
Please make sure you have a Cache-Control header that allows caching, if you want to cache search results.
Example:
Cache-Control: public, s-maxage=3600
Debugging using Varnishlog
If you really want to know what happens behind the scenes in Varnish, you can perform some debugging using the varnishlog binary.
You could run the following command to get debug output:
varnishlog -g request -q "ReqUrl eq '/catalogsearch/result/\?q=bog'"
This will print some very verbose logs on how Varnish treats the URL that causes the hit. You can add this output to your question, and I can try to examine what's going on.
FYI: I wrote a very detailed blog post about varnishlog a couple of years ago. Please go to https://feryn.eu/blog/varnishlog-measure-varnish-cache-performance/ to have a look, and to learn.
What's Cloudflare doing?
All that being said, I have no clue what the impact of Cloudflare is on the cacheability of the website. The varnishlog output will give us some insight, but if those results diverge from reality, Cloudflare is probably getting in our way.
Keep this in mind while debugging.
Inside vcl_recv, add:
# Bypass search requests
if (req.url ~ "/catalogsearch") {
return (pass);
}
Also check the .htaccess file for the Cache-Control settings.
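If you decide search results should be cacheable, here is a rough sketch of forcing such a header at the web-server level, assuming Apache 2.4 with mod_headers enabled; the path match and the one-hour lifetime are only illustrative, and fixing the header inside Magento itself is usually the cleaner route:
<IfModule mod_headers.c>
    # Let shared caches (Varnish) store catalogsearch pages for one hour
    <If "%{REQUEST_URI} =~ m#^/catalogsearch/#">
        Header set Cache-Control "public, s-maxage=3600"
    </If>
</IfModule>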

FreeRADIUS 3.0.20: reply with the rlm_rest response

I am trying to return a REST response to the user as a simple string (Reply-Message). Authentication against a REST API works and I get a JSON-formatted response attribute (called "token"). I've declared this attribute in the raddb/dictionary file.
My question is: how can I access this attribute in the authenticate or post-auth section?
Below is my config (raddb/sites-available/default):
authenticate {
#
# REST authentication
Auth-Type rest {
rest {
updated = 1
}
if (updated) {
%{Token}
ok
}
}
I tried all the possibilities I could think of, like &Token, "%{Token}" and &rest:Token.
My debug output is below:
Ready to process requests
(0) Received Access-Request Id 96 from 127.0.0.1:49260 to 127.0.0.1:1812 length 50
(0) User-Name = "Arya Stark"
(0) User-Password = "verySecret"
(0) # Executing section authorize from file /etc/freeradius/sites-enabled/default
(0) authorize {
(0) [preprocess] = ok
(0) [chap] = noop
(0) [mschap] = noop
(0) [digest] = noop
(0) suffix: Checking for suffix after "#"
(0) suffix: No '#' in User-Name = "Arya Stark", looking up realm NULL
(0) suffix: No such realm "NULL"
(0) [suffix] = noop
(0) eap: No EAP-Message, not doing EAP
(0) [eap] = noop
(0) files: users: Matched entry DEFAULT at line 154
(0) [files] = ok
(0) [expiration] = noop
[logintime] = noop
Not doing PAP as Auth-Type is already set.
(0) [pap] = noop
(0) } # authorize = ok
Found Auth-Type = rest
(0) # Executing group from file /etc/freeradius/sites-enabled/default
(0) Auth-Type rest {
rlm_rest (rest): Reserved connection (0)
(0) rest: Expanding URI components
(0) rest: EXPAND https://172.16.0.5
(0) rest: --> https://172.16.0.5
(0) rest: EXPAND /identityprovider/auth/passwordlogin
(0) rest: --> /identityprovider/auth/passwordlogin
(0) rest: Sending HTTP POST to "https://172.16.0.5/provider/auth/login"
(0) rest: EXPAND { "Username": "%{User-Name}", "Password":"%{User-Password}" }
(0) rest: --> { "Username": "Arya Stark", "Password":"verySecret" }
(0) rest: Processing response header
(0) rest: Status : 200 (OK)
(0) rest: Type : json (application/json)
(0) rest: Parsing attribute "token"
(0) rest: EXPAND eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJBcnlhIFN0YXJrIiwiaWF0IjoxNTgwMTI5......1DuIVMzCI4a1UWUThAce0xnA
(0) rest: --> eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJBcnlhIFN0YXJrIiwiaWF0IjoxNTgwMTI5......1DuIVMzCI4a1UWUThAce0xnA
(0) rest: Token := "eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJBcnlhIFN0YXJrIiwiaWF0IjoxNTgwMTI5......1DuIVMzCI4a1UWUThAce0xnA"
rlm_rest (rest): Released connection (0)
Need 5 more connections to reach 10 spares
rlm_rest (rest): Opening additional connection (5), 1 of 27 pending slots used
rlm_rest (rest): Connecting to "https://172.16.0.5"
(0) [rest] = updated
(0) if (updated) {
(0) if (updated) -> TRUE
(0) if (updated) {
(0) EXPAND %{Token}
(0) -->
(0) [ok] = ok
(0) } # if (updated) = ok
(0) } # Auth-Type rest = ok
(0) # Executing section post-auth from file /etc/freeradius/sites-enabled/default
(0) post-auth {
(0) if (session-state:User-Name && reply:User-Name && request:User-Name && (reply:User-Name == request:User-Name)) {
(0) if (session-state:User-Name && reply:User-Name && request:User-Name && (reply:User-Name == request:User-Name)) -> FALSE
(0) update {
(0) No attributes updated for RHS &session-state:
(0) } # update = noop
(0) [exec] = noop
(0) policy remove_reply_message_if_eap {
(0) if (&reply:EAP-Message && &reply:Reply-Message) {
(0) if (&reply:EAP-Message && &reply:Reply-Message) -> FALSE
(0) else {
(0) [noop] = noop
(0) } # else = noop
(0) } # policy remove_reply_message_if_eap = noop
} # post-auth = noop
(0) Sent Access-Accept Id 96 from 127.0.0.1:1812 to 127.0.0.1:49260 length 0
(0) Finished request
Waking up in 4.9 seconds.
(0) Cleaning up request packet ID 96 with timestamp +17
Ready to process requests
The issue here is that by default any attributes specified in the JSON response are inserted into the reply list.
This can be fixed in two ways. Either change your JSON blob to specify a list qualifier:
{ "request:Token": "<token value>" }
or access the attribute in the reply list:
%{reply:Token}
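Putting that together, here is a minimal sketch of copying the token into the Reply-Message from the post-auth section, assuming you keep the default behaviour so the attribute ends up in the reply list:
post-auth {
    if (&reply:Token) {
        update reply {
            Reply-Message := "%{reply:Token}"
        }
    }
}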

Dovecot IMAP and SMTP suddenly not available

Hello community, and first things first:
dovecot --version
2.2.9
dovecot -n
# 2.2.9: /etc/dovecot/dovecot.conf
# OS: Linux 3.13.0-042stab125.5 x86_64 Ubuntu 14.04.5 LTS
auth_mechanisms = plain login
dict {
sqlquota = mysql:/etc/dovecot/dovecot-dict-sql.conf
}
listen = *,[::]
log_timestamp = "%Y-%m-%d %H:%M:%S "
login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e %c %k
mail_fsync = always
mail_home = /var/vmail/%d/%n
mail_location = maildir:~/
mail_nfs_index = yes
mail_nfs_storage = yes
mail_plugins = quota acl
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
mmap_disable = yes
namespace {
list = yes
location = maildir:%%h/:INDEXPVT=~/Shared/%%u
prefix = Shared/%%u/
separator = /
subscriptions = yes
type = shared
}
namespace inbox {
inbox = yes
location =
mailbox Archiv {
special_use = \Archive
}
mailbox Archive {
auto = subscribe
special_use = \Archive
}
mailbox Archives {
special_use = \Archive
}
mailbox "Deleted Messages" {
special_use = \Trash
}
mailbox Drafts {
auto = subscribe
special_use = \Drafts
}
mailbox Entwürfe {
special_use = \Drafts
}
mailbox "Gelöschte Objekte" {
special_use = \Trash
}
mailbox Gesendet {
special_use = \Sent
}
mailbox Junk {
auto = subscribe
special_use = \Junk
}
mailbox Papierkorb {
special_use = \Trash
}
mailbox Sent {
auto = subscribe
special_use = \Sent
}
mailbox "Sent Messages" {
special_use = \Sent
}
mailbox Trash {
auto = subscribe
special_use = \Trash
}
prefix =
separator = /
}
passdb {
args = /etc/dovecot/dovecot-mysql.conf
driver = sql
}
plugin {
acl = vfile
acl_anyone = allow
acl_shared_dict = file:/var/vmail/shared-mailboxes.db
quota = dict:User quota::proxy::sqlquota
quota_rule2 = Trash:storage=+100%%
sieve = /var/vmail/sieve/%u.sieve
sieve_after = /var/vmail/sieve/global.sieve
sieve_max_script_size = 1M
sieve_quota_max_scripts = 0
sieve_quota_max_storage = 0
}
protocols = imap sieve lmtp pop3
service auth {
unix_listener /var/spool/postfix/private/auth_dovecot {
group = postfix
mode = 0660
user = postfix
}
unix_listener auth-master {
mode = 0600
user = vmail
}
unix_listener auth-userdb {
mode = 0600
user = vmail
}
user = root
}
service dict {
unix_listener dict {
group = vmail
mode = 0660
user = vmail
}
}
service lmtp {
unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0600
user = postfix
}
user = vmail
}
service managesieve-login {
inet_listener sieve {
port = 4190
}
process_min_avail = 2
service_count = 1
vsz_limit = 128 M
}
service managesieve {
process_limit = 256
}
ssl_cert = </etc/ssl/mail/mail.crt
ssl_cipher_list = EDH+CAMELLIA:EDH+aRSA:EECDH+aRSA+AESGCM:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:+CAMELLIA256:+AES256:+CAMELLIA128:+AES128:+SSLv3:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!DSS:!RC4:!SEED:!ECDSA:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
ssl_dh_parameters_length = 2048
ssl_key = </etc/ssl/mail/mail.key
ssl_protocols = !SSLv3 !SSLv2
userdb {
args = /etc/dovecot/dovecot-mysql.conf
driver = sql
}
protocol imap {
mail_plugins = quota imap_quota imap_acl acl
}
protocol lmtp {
auth_socket_path = /var/run/dovecot/auth-master
mail_plugins = quota sieve acl
postmaster_address = postmaster@domain1.com
}
protocol sieve {
managesieve_logout_format = bytes=%i/%o
}
remote 127.0.0.1 {
disable_plaintext_auth = no
}
Mail.err
Nov 13 23:59:06 webdev dovecot: auth: Error: PLAIN(account@domain2.com, XXX.XXX.XXX.XXX,<y869CoDETEST4dHk>): Request 29154.1 timed out after 150 secs, state=1
Mail.log
Nov 13 23:27:54 webdev dovecot: auth: Error: LOGIN(account@domain1.com,IP.IP.IP.IP,<oN4ly+TestDZ6dHk>): Request 28118.1 timed out after 150 secs, state=1
Nov 13 23:27:57 webdev dovecot: auth: Error: PLAIN(account@domain2.com,XXX.XXX.XXX.XXX,<FAxKe+JaatES7tHk>): Request 28120.1 timed out after 150 secs, state=1
Nov 13 23:28:24 webdev dovecot: imap-login: Disconnected: Inactivity during authentication (disconnected while authenticating, waited 180 secs): user=<>, method=LOGIN, rip=ClientIP, lip=ServerIP, TLS: Disconnected, TLSv1.2 with cipher DHE-RSA-AES128-GCM-SHA256 (128/128 bits)
[...]
Nov 13 23:47:15 webdev dovecot: imap-login: Aborted login (no auth attempts in 0 secs): user=<>, rip=84.119.151.17, lip=62.75.185.32
I did not change anything on my client or server-side setup, and I suddenly could not reach the mail server anymore. I can, however, still reach the server via SSH or HTTP.
I hope I provided all the info you need to help me in this situation. I am grateful for every hint, as I don't even have a clue what to look for.
The error messages are talking about a timeout on authentication, and the config shows that authentication uses a MySQL database. For this reason I would check whether the MySQL process is still up, or restart the service (if it's running as a service, which is probably the case).
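A couple of quick checks, sketched for the Ubuntu 14.04 setup shown in the config (service name and log paths may differ on your system):
# Is MySQL up and answering?
service mysql status
mysqladmin -u root -p status

# If not, restart it and watch the Dovecot auth errors
service mysql restart
tail -f /var/log/mail.err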

Private-only Dovecot: local Docker configuration for one user fails login from Apple Mail

I'm trying to make a local Docker Dovecot machine to archive my e-mails. I would like to query them with Apple Mail. I have a simple Ubuntu Docker machine (in a VM with Parallels, because I'm on a Mac).
I have this local.conf:
# A comma separated list of IPs or hosts where to listen in for connections.
# "*" listens in all IPv4 interfaces, "::" listens in all IPv6 interfaces.
# If you want to specify non-default ports or anything more complex,
# edit conf.d/master.conf.
listen = *,::
# Protocols we want to be serving.
protocols = imap
# Static passdb.
# This can be used for situations where Dovecot doesn't need to verify the
# username or the password, or if there is a single password for all users:
passdb {
driver = static
args = password=dovecot
}
# Location for users' mailboxes. The default is empty, which means that Dovecot
# tries to find the mailboxes automatically. This won't work if the user
# doesn't yet have any mail, so you should explicitly tell Dovecot the full
# location.
#
# If you're using mbox, giving a path to the INBOX file (eg. /var/mail/%u)
# isn't enough. You'll also need to tell Dovecot where the other mailboxes are
# kept. This is called the "root mail directory", and it must be the first
# path given in the mail_location setting.
#
# There are a few special variables you can use, eg.:
#
# %u - username
# %n - user part in user@domain, same as %u if there's no domain
# %d - domain part in user@domain, empty if there's no domain
# %h - home directory
#
# See doc/wiki/Variables.txt for full list. Some examples:
#
# mail_location = maildir:~/Maildir
# mail_location = mbox:~/mail:INBOX=/var/mail/%u
# mail_location = mbox:/var/mail/%d/%1n/%n:INDEX=/var/indexes/%d/%1n/%n
#
# <doc/wiki/MailLocation.txt>
#
mail_location = maildir:/var/mail/%n
# System user and group used to access mails. If you use multiple, userdb
# can override these by returning uid or gid fields. You can use either numbers
# or names. <doc/wiki/UserIds.txt>
# mail_uid = CHANGE_THIS_to_your_short_user_name_or_uid
# mail_gid = admin
# SSL/TLS support: yes, no, required. <doc/wiki/SSL.txt>
ssl = no
# Login user is internally used by login processes. This is the most untrusted
# user in Dovecot system. It shouldn't have access to anything at all.
# default_login_user = _dovenull
# Internal user is used by unprivileged processes. It should be separate from
# login user, so that login processes can't disturb other processes.
# default_internal_user = _dovecot
# Setting limits.
default_process_limit = 10
default_client_limit = 50
and I'm getting this from Apple Mail
May 23 07:15:58 Mail[87524] <Debug>: <0x7fe16f021cd0:[Non-authenticated]> Wrote: 1.11 ID ("name" "Mac OS X Mail" "version" "9.3 (3124)" "os" "Mac OS X" "os-version" "10.11.5 (15F34)" "vendor" "Apple Inc.")
May 23 07:15:58 Mail[87524] <Debug>: <0x7fe16f021cd0:[Non-authenticated]> Read: * ID {
name = Dovecot;
}
May 23 07:15:58 Mail[87524] <Debug>: <0x7fe16f021cd0:[Non-authenticated]> Read: 1.11 OK
May 23 07:15:58 Mail[87524] <Debug>: <0x7fe16f021cd0:[Non-authenticated]> Wrote: 3.11 LOGOUT
May 23 07:16:00 Mail[87524] <Debug>: <0x7fe16aa14590:[Disconnected]> Read: * OK [CAPABILITY (
IMAP4REV1,
"LITERAL+",
"SASL-IR",
"LOGIN-REFERRALS",
ID,
ENABLE,
IDLE,
"AUTH=PLAIN",
"AUTH=LOGIN"
)]
May 23 07:16:00 Mail[87524] <Debug>: <0x7fe16aa14590:[Non-authenticated]> Wrote: 1.23 ID ("name" "Mac OS X Mail" "version" "9.3 (3124)" "os" "Mac OS X" "os-version" "10.11.5 (15F34)" "vendor" "Apple Inc.")
May 23 07:16:00 Mail[87524] <Debug>: <0x7fe16aa14590:[Non-authenticated]> Read: * ID {
name = Dovecot;
}
May 23 07:16:00 Mail[87524] <Debug>: <0x7fe16aa14590:[Non-authenticated]> Read: 1.23 OK
May 23 07:16:00 Mail[87524] <Debug>: <0x7fe16aa14590:[Non-authenticated]> Wrote: 3.23 LOGOUT
and this from dovecot (mail.log):
May 23 05:07:22 f8ab3e20742f dovecot: master: Dovecot v2.2.9 starting up (core dumps disabled)
May 23 05:07:22 f8ab3e20742f dovecot: ssl-params: Generating SSL parameters
May 23 05:07:29 f8ab3e20742f dovecot: ssl-params: SSL parameters regeneration completed
May 23 05:07:52 f8ab3e20742f dovecot: imap-login: Aborted login (no auth attempts in 0 secs): user=<>, rip=10.211.55.2, lip=172.17.0.2, session=<IJwtbHszbgAK0zcC>
May 23 05:07:54 f8ab3e20742f dovecot: imap-login: Aborted login (no auth attempts in 0 secs): user=<>, rip=10.211.55.2, lip=172.17.0.2, session=<qsRNbHszdgAK0zcC>
The output of doveconf -n is (so "disable_plaintext_auth = no" is active):
# 2.2.9: /etc/dovecot/dovecot.conf
# OS: Linux 4.4.8-boot2docker x86_64 Ubuntu 14.04.4 LTS aufs
auth_mechanisms = plain login
default_client_limit = 50
default_process_limit = 10
disable_plaintext_auth = no
listen = *,::
mail_location = maildir:/var/mail/%n
namespace inbox {
inbox = yes
location =
mailbox Drafts {
special_use = \Drafts
}
mailbox Junk {
special_use = \Junk
}
mailbox Sent {
special_use = \Sent
}
mailbox "Sent Messages" {
special_use = \Sent
}
mailbox Trash {
special_use = \Trash
}
prefix =
}
passdb {
args = password=dovecot
driver = static
}
protocols = imap
ssl = no
Any suggestions why this login isn't working?
Thanks!
The solution is to fix and configure the following line correctly (from local.conf):
# mail_uid = CHANGE_THIS_to_your_short_user_name_or_uid
How did I find out? Thanks to @Kondybas for the pointer to try another client. I used Thunderbird, and it produced Dovecot log entries (why didn't Apple Mail produce these lines? No clue) saying that Dovecot couldn't switch to the mail_uid user context. I extended the Dovecot Dockerfile and set the user appropriately. Afterwards it worked with Thunderbird, and then with Apple Mail.
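For reference, a minimal sketch of the corrected lines in local.conf; the vmail user/group names are only placeholders for whichever account owns the maildirs under /var/mail inside the container:
# System user and group used to access mails
mail_uid = vmail
mail_gid = vmail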

Socket can't identify protocol (socket leak)

I have a Go 1.5.1 process/app. When I run /usr/sbin/lsof -p on that process, I see a lot of "can't identify protocol" entries.
monitor_ 13105 root 101u sock 0,6 0t0 16960100 can't identify protocol
monitor_ 13105 root 102u sock 0,6 0t0 21552427 can't identify protocol
monitor_ 13105 root 103u sock 0,6 0t0 17565091 can't identify protocol
monitor_ 13105 root 104u sock 0,6 0t0 18476870 can't identify protocol
proc status/limit/fd
[root@Monitor_q ~]# cat /proc/13105/status
Name: monitor_client
State: S (sleeping)
Tgid: 13105
Pid: 13105
PPid: 13104
TracerPid: 0
Uid: 0 0 0 0
Gid: 0 0 0 0
Utrace: 0
FDSize: 16384
Groups:
...
[root@Monitor_q ~]# cat /proc/13105/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 10485760 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 3870 3870 processes
Max open files 9999 9999 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 3870 3870 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
[root@Monitor_q ~]# ll /proc/13105/fd/
lrwx------ 1 root root 64 Dec 7 00:15 8382 -> socket:[52023221]
lrwx------ 1 root root 64 Dec 7 00:15 8383 -> socket:[51186627]
lrwx------ 1 root root 64 Dec 7 00:15 8384 -> socket:[51864232]
lrwx------ 1 root root 64 Dec 7 00:15 8385 -> socket:[52435453]
lrwx------ 1 root root 64 Dec 7 00:15 8386 -> socket:[51596071]
lrwx------ 1 root root 64 Dec 7 00:15 8387 -> socket:[52767667]
lrwx------ 1 root root 64 Dec 7 00:15 8388 -> socket:[52090632]
lrwx------ 1 root root 64 Dec 7 00:15 8389 -> socket:[51739068]
lrwx------ 1 root root 64 Dec 7 00:15 839 -> socket:[22963529]
lrwx------ 1 root root 64 Dec 7 00:15 8390 -> socket:[52023223]
lrwx------ 1 root root 64 Dec 7 00:15 8391 -> socket:[52560389]
lrwx------ 1 root root 64 Dec 7 00:15 8392 -> socket:[52402565]
...
but there is no similar output in netstat -a.
What are these sockets and how can I find out what they do?
monitor_client.go
package main
import (
"crypto/tls"
"encoding/json"
"fmt"
"log"
"net"
"net/http"
nurl "net/url"
"strconv"
"strings"
"syscall"
"time"
)
type Result struct {
Error string `json:"error"`
HttpStatus int `json:"http_status"`
Stime time.Duration `json:"http_time"`
}
//http://stackoverflow.com/questions/20990332/golang-http-timeout-and-goroutines-accumulation
//http://3.3.3.3/http?host=3.2.4.2&servername=a.test&path=/&port=33&timeout=5&scheme=http
func MonitorHttp(w http.ResponseWriter, r *http.Request) {
var host, servername, path, port, scheme string
var timeout int
u, err := nurl.Parse(r.RequestURI)
if err != nil {
log.Fatal(err)
return
}
if host = u.Query().Get("host"); host == "" {
host = "127.0.0.0"
}
if servername = u.Query().Get("servername"); servername == "" {
servername = "localhost"
}
if path = u.Query().Get("path"); path == "" {
path = "/"
}
if port = u.Query().Get("port"); port == "" {
port = "80"
}
if scheme = u.Query().Get("scheme"); scheme == "" {
scheme = "http"
}
if timeout, _ = strconv.Atoi(u.Query().Get("timeout")); timeout == 0 {
timeout = 5
}
//log.Printf("(host)=%s (servername)=%s (path)=%s (port)=%s (timeout)=%d", host, servername, path, port, timeout)
w.Header().Set("Content-Type", "application/json")
res := httptool(host, port, servername, scheme, path, timeout)
result, _ := json.Marshal(res)
fmt.Fprintf(w, "%s", result)
}
func httptool(ip, port, servername, scheme, path string, timeout int) Result {
var result Result
startTime := time.Now()
host := ip + ":" + port
transport := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
DisableKeepAlives: true,
}
dialer := net.Dialer{
Timeout: time.Duration(timeout) * time.Second,
KeepAlive: 0 * time.Second,
}
transport.Dial = func(network, address string) (net.Conn, error) {
return dialer.Dial(network, address)
}
client := &http.Client{
Transport: transport,
}
rawquery := ""
url := fmt.Sprintf("%s://%s%s%s", scheme, host, path, rawquery)
req, err := http.NewRequest("GET", url, nil)
if err != nil {
result.HttpStatus = -1
errs := strings.Split(err.Error(), ": ")
result.Error = errs[len(errs)-1]
result.Stime = time.Now().Sub(startTime) / time.Millisecond
return result
}
req.Header.Set("User-Agent", "monitor worker")
req.Header.Set("Connection", "close")
req.Host = servername
resp, err := client.Do(req)
//https://github.com/Basiclytics/neverdown/blob/master/check.go
if err != nil {
nerr, ok := err.(*nurl.Error)
if ok {
switch cerr := nerr.Err.(type) {
case *net.OpError:
switch cerr.Err.(type) {
case *net.DNSError:
errs := strings.Split(cerr.Error(), ": ")
result.Error = "dns: " + errs[len(errs)-1]
default:
errs := strings.Split(cerr.Error(), ": ")
result.Error = "server: " + errs[len(errs)-1]
}
default:
switch nerr.Err.Error() {
case "net/http: request canceled while waiting for connection":
errs := strings.Split(cerr.Error(), ": ")
result.Error = "timeout: " + errs[len(errs)-1]
default:
errs := strings.Split(cerr.Error(), ": ")
result.Error = "unknown: " + errs[len(errs)-1]
}
}
} else {
result.Error = "unknown: " + err.Error()
}
result.HttpStatus = -2
result.Stime = time.Now().Sub(startTime) / time.Millisecond
return result
}
resp.Body.Close()
result.HttpStatus = resp.StatusCode
result.Error = "noerror"
result.Stime = time.Now().Sub(startTime) / time.Millisecond //spend time (ms)
return result
}
func setRlimit() {
var rLimit syscall.Rlimit
err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit)
if err != nil {
log.Printf("Unable to obtain rLimit", err)
}
if rLimit.Cur < rLimit.Max {
rLimit.Max = 9999
rLimit.Cur = 9999
err = syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rLimit)
if err != nil {
log.Printf("Unable to increase number of open files limit", err)
}
}
}
func main() {
setRlimit()
s := &http.Server{
Addr: ":59059",
ReadTimeout: 7 * time.Second,
WriteTimeout: 7 * time.Second,
}
http.HandleFunc("/http", MonitorHttp)
log.Fatal(s.ListenAndServe())
}
There are a couple of points here.
I was unable to reproduce your behavior; anyway, "can't identify protocol" is usually tied to sockets not being properly closed.
Some commenters suggested that you don't have to create an http.Client inside each handler, and that's true. Simply create it once and reuse it.
Second, I'm not sure why you're creating your own http.Client struct and why you're disabling keep-alives. Can't you just go with http.Get? Simpler code is easier to debug.
Third, I'm not sure why you're overriding the transport's Dial function. Even if you must do it, the documentation (for Go 1.9.2) says:
% go doc http.transport.dial
type Transport struct {
// Dial specifies the dial function for creating unencrypted TCP
connections.
//
// Deprecated: Use DialContext instead, which allows the transport
// to cancel dials as soon as they are no longer needed.
// If both are set, DialContext takes priority.
Dial func(network, addr string) (net.Conn, error)
That comment about deprecation and the lack of dial reuse may point to the source of your problems.
To sum up, if I were in your shoes, I'd do two things:
move client creation to code that executes once, or just use the default client with http.Get
clean up the overriding of the default transport fields; if you must do it, use DialContext as suggested (see the sketch below)
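Here is a minimal sketch of both points combined: one package-level client reused by every handler, and DialContext instead of the deprecated Dial field. It assumes a Go version that has Transport.DialContext (1.7+); the URL and the timeout values are placeholders, not your real ones:
package main

import (
    "fmt"
    "net"
    "net/http"
    "time"
)

// One client for the whole process, created once and shared by all handlers.
// Letting the transport keep and reuse connections (instead of disabling
// keep-alives and rebuilding the client per request) avoids leaking
// half-closed sockets that lsof reports as "can't identify protocol".
var client = &http.Client{
    Timeout: 10 * time.Second, // overall per-request deadline
    Transport: &http.Transport{
        // DialContext (Go 1.7+) replaces the deprecated Dial field and lets
        // the transport cancel dials that are no longer needed.
        DialContext: (&net.Dialer{
            Timeout:   5 * time.Second,
            KeepAlive: 30 * time.Second,
        }).DialContext,
    },
}

func probe(url string) (int, error) {
    resp, err := client.Get(url)
    if err != nil {
        return 0, err
    }
    // Always close the body so the underlying connection can be reused or freed.
    defer resp.Body.Close()
    return resp.StatusCode, nil
}

func main() {
    status, err := probe("http://example.com/")
    fmt.Println(status, err)
}
With keep-alives left on and the body always closed, the transport manages the sockets itself, which should stop half-closed descriptors from accumulating.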
Good luck.
I couldn't reproduce the issue, but here are my 2 cents (no pun intended).
A similar issue was found in SockJS-node, as noted in the article https://idea.popcount.org/2012-12-09-lsof-cant-identify-protocol/; according to it, the issue was observed on FreeBSD, and the cause was "websockets are not being properly cleaned up".
Another test I would like you to do, if you still have hands on the same environment: if possible, post Wireshark logs, just to confirm there is nothing subtle in the network frames that may have caused this.
I am sorry I can't install Go 1.5.1 just to reproduce this issue.
Hope this was helpful.