I have a running ejabberd installation, with http-bind enabled, an nginx proxy, and a mini Jappix XMPP client for web browsers. I also have an external authentication program.
I can connect the same user to this server with different resources if I use the classic port 5222. But:
If a user already has active sessions from classic XMPP clients (Psi, Empathy), I cannot connect that user via http-bind (401 authentication failure).
If the first connection is made via http-bind, no other classic XMPP client can connect afterwards (and the resources are, of course, different). I sometimes get a 401 "already connected" message.
I can only connect the same user in one browser; I cannot connect the same user several times via http-bind (the resources differ on these connections, but I still get a 401).
I am sure the external authentication program is never launched when I get these auth failures.
The max_user_sessions settings are fine (tested with infinity), and when I am not connecting via http-bind I can run parallel sessions. Just in case, I also tested the new resource_conflict setting values, without success (and it is not actually a resource conflict).
Installation: ejabberd 2.1.10 on Debian (from ejabberd-2.1.10-linux-x86-installer.bin; the same problem occurs with the x86_64 version).
Extract of configuration:
{5280, ejabberd_http, [
    {request_handlers, [
        {["http_bind"], mod_http_bind}
    ]},
    %%captcha,
    http_bind,
    %%http_poll,
    web_admin
]}
In the logs, when this happens, I have:
=INFO REPORT==== 2012-01-27 10:18:55 ===
D(<0.335.0>:ejabberd_http_bind:684) : reqlist: [{hbr,154037,
"01775ec6fc089a2b0c84abb80a4b5b7b4bdd958d",
[]},
{hbr,154036,
"01775ec6fc089a2b0c84abb80a4b5b7b4bdd958d",
[{xmlstreamelement,
{xmlelement,
"stream:features",[],
[{xmlelement,
"mechanisms",
[{"xmlns",
"urn:ietf:params:xml:ns:xmpp-sasl"}],
[{xmlelement,
"mechanism",[],
[{xmlcdata,
"PLAIN"}]}]}]}},
{xmlstreamstart,
"stream:stream",
[{"version","1.0"},
{"xml:lang","fr"},
{"xmlns","jabber:client"},
{"xmlns:stream",
"http://etherx.jabber.org/streams"},
{"id","3595609800"},
{"from",
"tchat.example.com"}]}]}]
=INFO REPORT==== 2012-01-27 10:18:55 ===
D(<0.335.0>:ejabberd_http_bind:732) : really sending now: [{xmlelement,
"auth",
[{"xmlns",
"urn:ietf:params:xml:ns:xmpp-sasl"},
{"mechanism",
"PLAIN"}],
[{xmlcdata<<"bGRhcHVzZX(...)3">>}]}]
=INFO REPORT==== 2012-01-27 10:18:55 ===
I(<0.336.0>:ejabberd_c2s:649) : ({socket_state,ejabberd_http_bind,{http_bind,<0.335.0>,{{127,0,0,1},50992}},ejabberd_http_bind}) Failed authentication for foo38@tchat.example.com
=INFO REPORT==== 2012-01-27 10:18:55 ===
D(<0.337.0>:ejabberd_http_bind:916) : OutPacket: [{xmlstreamelement,
{xmlelement,"failure",
[{"xmlns",
"urn:ietf:params:xml:ns:xmpp-sasl"}],
[{xmlelement,
"not-authorized",[],
[]}]}}]
=INFO REPORT==== 2012-01-27 10:18:55 ===
D(<0.337.0>:ejabberd_http_bind:1054) : --- outgoing data ---
<body xmlns='http://jabber.org/protocol/httpbind'><failure xmlns='urn:ietf:params:xml:ns:xmpp-sasl'><not-authorized/></failure></body>
So is this a "feature" of http-bind, making it the only valid resource for a given user while it is active? And if so, how can I run several http-bind sessions for the same user? Any hints?
No, there must be something wrong with your configuration. I have been successfully using ejabberd's http-bind for a long time, and of course you can have multiple connections with different resources, independently of other clients connecting. I also use nginx as a proxy. In your ejabberd.cfg you should have:
{5280, ejabberd_http, [
    http_bind,
    web_admin
]}
and
{modules, [
    {mod_http_bind, []},
    ...
]}.
Also, in your logs I see {"from", "tchat.example.com"}, which seems to indicate a misconfiguration.
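For reference, here is a minimal sketch of the nginx location block I would use in front of the BOSH endpoint; the backend port and the /http_bind path are assumptions based on the listener above, so adjust them to your setup:

# Forward BOSH requests unmodified to ejabberd's 5280 listener.
location /http-bind {
    proxy_pass http://127.0.0.1:5280/http_bind;
    proxy_set_header Host $host;
    # BOSH long-polls: disable buffering and keep the read timeout
    # above the "wait" value your client requests.
    proxy_buffering off;
    proxy_read_timeout 120s;
}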
I have Keycloak running in a Kubernetes cluster. Authentication works, but I need to set up e-mail to be able to send verification and password-reset e-mails.
I have SendGrid set up as an SMTP relay. These settings (host, port and API key) work when I send mail using the SendGrid Java client. However, when pressing Test connection in Keycloak I get:
[Error] Failed to load resource: the server responded with a status of 500 ()
[Debug] Remove message (services.js, line 14)
[Debug] Added message (services.js, line 15)
[Error] Can't find variable: error
https://<domain>/auth/resources/ong8v/admin/keycloak/js/controllers/realm.js:76 – "Possibly unhandled rejection: {}"
[Debug] Remove message (services.js, line 14)
There isn't much to go on here. I have an e-mail address set up for the currently logged-in user. I also tried resetting the password in case the Test connection functionality was broken, but that didn't work either.
The Realm Settings used for e-mail are as follows:
host: smtp.sendgrid.net
port: 587
from: test@<domain>
Enable StartTLS: true
Username: "apikey"
Password: <api key>
Any idea what could be wrong, or how to find out? For instance, maybe I can get a more meaningful error message somehow.
Edit:
I got the server logs.
Failed to send email: com.sun.mail.util.MailConnectException: Couldn't connect to host, port: smtp.sendgrid.net, 587; timeout 10000;
nested exception is: java.net.SocketTimeoutException: connect timed out
Edit 2:
I've tried sending mail over Telnet using the exact same settings, and that works. So apparently it is something with Keycloak or its underlying Java libraries that causes issues sending e-mail.
It turns out that Keycloak works fine: the e-mails were being blocked by the hosting provider.
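A quick way to check for this kind of blocking from inside the cluster (rather than from your workstation) is to try the SMTP port from a throwaway pod; the pod name and image here are arbitrary:

# Start a temporary pod and open the SMTP port directly.
kubectl run smtp-test --rm -it --image=busybox --restart=Never -- \
  telnet smtp.sendgrid.net 587

If this also times out, the traffic is being filtered somewhere between the cluster and the relay, which matches the SocketTimeoutException above.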
Trying to set up a 3-node MongoDB replica set on Ubuntu 18.04, MongoDB version 4.0.18:
gl1 192.168.1.30
gl2 192.168.1.31
gl3 192.168.1.33
Using an internal CA on the same network to create the certs, I created two certs per node: one for the server MongoDB is installed on (GL1, GL2, GL3), used as the PEMKeyFile, and one for the clusterFile (mongo1, mongo2, mongo3). Each CAFile is set up listing the respective RSA key, PEMKeyFile and root CA for its server. The mongod services are running fine (according to systemctl) using the individual certs (PEMKeyFile and clusterFile).
net:
  port: 27017
  bindIp: 0.0.0.0
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/ssl/MongoDB.pem
    CAFile: /opt/ssl/ca.pem
    clusterFile: /opt/ssl/mongo.pem
    allowConnectionsWithoutCertificates: true

# replication
replication:
  replSetName: rs0
I get the following error when I try rs.add("192.168.1.31:27017"):
"errmsg" : "Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: 192.168.1.30:27017; the following nodes did not respond affirmatively: gl2.domain.com:27017 failed with stream truncated",
"code" : 74,
"codeName" : "NodeNotFound",
In the mongod.log on node 192.168.1.31 the following is logged:
2020-05-22T18:20:48.161+0000 E NETWORK [conn4] SSL peer certificate validation failed: unsupported certificate purpose
2020-05-22T18:20:48.161+0000 I NETWORK [conn4] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: unsupported certificate purpose. Ending connection from 192.168.1.30:55002 (connection id: 4)
I have read in an old Google Groups post (https://groups.google.com/forum/#!msg/mongodb-user/EmESxx5KK9Q/xH6Ul7fTBQAJ) that the clusterFile and PEMKeyFile had to be different certificates. I did that, however, and it is still throwing errors. I have done a lot of searching on this and am not seeing much to confirm that this is how it is done, but that post is the only place I have found with a similar error message, and it seems logical that it should work. I am also not sure how I can verify that my clusterFile is actually being used; it is indeed a separate certificate, with an FQDN for each node.
All three nodes have their hosts files updated to find each other (gl1, mongo1, etc.), and I can ping between all the nodes, so networking is up. I have also verified that the firewall (ufw and iptables) is not blocking 27017 or anything else at this point. Previously I tried a self-signed CA and certs but kept running into errors because they were self-signed, which is why I went the internal-CA route.
The "purpose" is also known as "extended key usage".
Openssl x509v3 Extended Key Usage gives some example code for setting the purposes.
As pointed out by Joe, the documentation states that the certificates must either have no extended key usage at all, or the one in the PEMKeyFile must include server auth and the one in the clusterFile must include client auth.
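To check what each certificate actually declares, and therefore which one trips the "unsupported certificate purpose" check, you can dump the X.509v3 extensions; the paths below are the ones from the config in the question:

# The cluster certificate must include "TLS Web Client Authentication"
# (clientAuth) if the extension is present at all.
openssl x509 -in /opt/ssl/mongo.pem -noout -text | grep -A1 'Extended Key Usage'
# The server certificate must include "TLS Web Server Authentication"
# (serverAuth) if the extension is present at all.
openssl x509 -in /opt/ssl/MongoDB.pem -noout -text | grep -A1 'Extended Key Usage'

When issuing from an internal CA, signing the PEMKeyFile certificate with extendedKeyUsage = serverAuth and the clusterFile certificate with extendedKeyUsage = clientAuth (or omitting the extension entirely) satisfies the check.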
We are developing a Boot-Admin dashboard using the codecentric library spring-boot-admin-server, version 1.4.5.
Some of the applications register themselves with the server via Eureka, and some directly using spring-boot-admin-starter-client version 1.4.5.
All components are deployed in a PCF environment and communicate over HTTPS. In either case, the applications are able to register themselves with the admin server, but they show up as OFFLINE only. There are no errors reported in the logs of any of the components, viz. admin-server, admin-client, eureka-server, eureka-client.
The only application showing as UP is the admin server itself.
The application properties for the spring-boot-admin-client app to run in PCF are:
spring:
  application:
    name: bootadmin-ms-charlie
  boot:
    admin:
      url: https://bootadmin-dashboard.abc.intl.com
ssl:
  trust_store:
    path: classpath:ssl/sslcacert.jks
    password: a-password
As the result is the same for both methods of registration, I have skipped the config for the apps registering via the Eureka path, to keep it simple.
The same setup works perfectly fine locally, where the admin dashboard shows all the applications as expected.
Is there any configuration that needs to be done in specific to Cloud Foundry?
Or any obvious mistake that I might have made?
Any suggestions are most welcome.
---EDIT---
Here are the logs from the SBA server showing that the communication between server and client is working okay. If these logs give any indication of an error, please point it out.
OUT 2017-01-23 05:15:15.139 DEBUG 10 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : DispatcherServlet with name 'dispatcherServlet' processing POST request for [/api/applications]
OUT 2017-01-23 05:15:15.151 DEBUG 10 --- [nio-8080-exec-1] m.m.a.RequestResponseBodyMethodProcessor : Read [class de.codecentric.boot.admin.model.Application] as "application/json;charset=UTF-8" with [org.springframework.http.converter.json.MappingJackson2HttpMessageConverter#7df33a9f]
OUT 2017-01-23 05:15:15.163 DEBUG 10 --- [nio-8080-exec-1] o.s.w.s.m.m.a.HttpEntityMethodProcessor : Written [Application [id=3805ee6a, name=bootadmin-ms-charlie, managementUrl=http://23fcf304-82d6-44cd-7fce-2a5027de9f21:8080, healthUrl=http://23fcf304-82d6-44cd-7fce-2a5027de9f21:8080/health, serviceUrl=http://23fcf304-82d6-44cd-7fce-2a5027de9f21:8080]] as "application/json" using [org.springframework.http.converter.json.MappingJackson2HttpMessageConverter#7df33a9f]
OUT 2017-01-23 05:15:15.166 DEBUG 10 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Null ModelAndView returned to DispatcherServlet with name 'dispatcherServlet': assuming HandlerAdapter completed request handling
OUT 2017-01-23 05:15:15.166 DEBUG 10 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Successfully completed request
OUT bootadmin-dashboard.abc-intl.com - [23/01/2017:05:15:15.140 +0000] "POST /api/applications HTTP/1.1" 201 302 308 "-" "Java/1.8.0_121" 60.16.25.20:43224 x_forwarded_for:"10.10.10.10" x_forwarded_proto:"https" vcap_request_id:a40159e4-543f-40e0-627e-e8f1e7688b99 response_time:0.034164523 app_id:adcc8a33-83f4-448d-9ae2-bf2a2b16ea72
OUT 2017-01-23 05:15:18.719 DEBUG 10 --- [ updateTask1] o.s.web.client.RestTemplate : Created GET request for "http://23fcf304-82d6-44cd-7fce-2a5027de9f21:8080/health"
OUT 2017-01-23 05:15:18.722 DEBUG 10 --- [ updateTask1] o.s.web.client.RestTemplate : Setting request Accept header to [application/json, application/*+json]
The client logs are all clean. The client only throws a "Failed to register" warning when the server is down.
Based on the discussion in https://github.com/codecentric/spring-boot-admin/issues/399, it turns out that the properties below are vital for SBA clients to work with the dashboard on Cloud Foundry, or any container-based architecture:
spring:
  boot:
    admin:
      client:
        management-url: <complete management URL for the client>
        health-url: <complete health endpoint URL for the client>
        service-url: <complete root/service URL for the client>
This is due to the fact that when a client registers itself with the SBA server, it uses the runC container ID to form its service URL. Such a URL is not valid for the Cloud Foundry router, so the communication between the SBA dashboard and the client fails later on, causing the client to show as OFFLINE.
The other approach is to go with container IPs, using spring.boot.admin.client.prefer-ip=true. That would list all containers/CF instances on SBA, but it would not give a clear picture of the overall health of the complete app from a site/AZ. Also, connecting directly to containers is discouraged in CF by cloud-native and twelve-factor principles. A concrete example of the explicit-URL approach is sketched below.
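For illustration, assuming the client app is reachable via a CF route such as bootadmin-ms-charlie.abc.intl.com (a hypothetical hostname based on the app name in the question), the explicit URLs would look roughly like this:

spring:
  boot:
    admin:
      url: https://bootadmin-dashboard.abc.intl.com
      client:
        # Route-based URLs that the CF router can resolve, instead of
        # the runC-container-ID URLs the client would derive itself.
        service-url: https://bootadmin-ms-charlie.abc.intl.com
        management-url: https://bootadmin-ms-charlie.abc.intl.com
        health-url: https://bootadmin-ms-charlie.abc.intl.com/health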
I followed the instructions for creating a set of SSL files with a self-signed certificate according to the RabbitMQ docs.
For now I am using it only for the management plugin, by configuring rabbitmq.config like this:
{rabbitmq_management, [
    {http_log_dir, "/tmp/rabbit-mgmt"},
    {rates_mode, basic},
    {listener, [
        {port, 7357},
        {ssl, true},
        {ssl_opts, [
            {cacertfile, "/path/to/ca_certificate.pem"},
            {certfile, "/path/to/server_certificate.pem"},
            {keyfile, "/path/to/server_key.pem"},
            {verify, verify_peer},
            {fail_if_no_peer_cert, false}
        ]}
    ]}
]}
The server starts, and the HTTPS port seems open; however, the connection fails as soon as a request is received, with:
=ERROR REPORT==== 25-Sep-2015::14:25:33 ===
application: mochiweb
"Accept failed error"
"{error,{options,{cacertfile,\"/path/to/ca_certificate.pem\",\n {error,eacces}}}}"
=ERROR REPORT==== 25-Sep-2015::14:25:33 === {mochiweb_socket_server,295,{acceptor_error,{error,accept_failed}}}
I tried chown and chgrp on the folders holding all the certificate files created by following the documentation, but I still get the same access error.
The problem was related to file permissions: the folders were all granting rabbitmq read access, but they sat inside another folder without access.
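Two quick checks make this failure mode visible (using the placeholder paths from the config above):

# namei walks every component of the path and prints its mode and
# owner, so a non-traversable parent directory shows up immediately.
namei -mo /path/to/ca_certificate.pem
# Confirm the rabbitmq user itself can read the file.
sudo -u rabbitmq head -c 1 /path/to/ca_certificate.pem

Note that the eacces in the Erlang error applies to the whole path lookup: every parent directory needs the execute (traverse) bit for the rabbitmq user, not just the folder that directly contains the certificates.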
When I start the server with 'sails lift', it shows:
debug: --------------------------------------------------------
debug: :: Mon May 05 2014 10:59:17 GMT+0700 (ICT)
debug:
debug: Environment : development
debug: Port : 1337
debug: --------------------------------------------------------
info: handshake authorized AUcEOqQtYzXw0jBMiSbp
info: handshake authorized t9Y7k4zozlyXd1nwiSbq
info: transport end (socket end)
info: transport end (undefined)
I wonder what those last two lines are:
info: transport end (socket end)
info: transport end (undefined)
TL;DR: If you don't want to see those messages, close any open pages/tabs that were connected to your Sails app before you re-lift it.

Those messages come from socket.io. It appears that you're lifting Sails with two open tabs/windows that were formerly connected via websockets to a running Sails instance, and reconnected once the server started again. However, something unexpected happened after the sockets reconnected, and they closed their connection. This can happen for any number of reasons; for example, an old socket connection may try to reconnect and resume a session which no longer exists, if you're using the memory store for sessions in Sails (which is the default for development mode).

It's nothing to be concerned about; just make sure you refresh your pages after restarting Sails and all should be well. It's pretty rare that you'll need to maintain state for a web page between server reboots, but if you do, you can do it with a combination of the onConnect and onDisconnect methods in config/sockets.js and some front-end logic.
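Here is a minimal sketch of that hook pair, based on the config/sockets.js API from the Sails version of that era (roughly v0.10); the logging bodies are illustrative only:

// config/sockets.js
module.exports.sockets = {

  // Runs every time a client socket connects.
  onConnect: function (session, socket) {
    // e.g. re-associate the socket with any state kept for this session
    sails.log.info('socket connected:', socket.id);
  },

  // Runs every time a client socket disconnects -- the counterpart of
  // the "transport end" lines in the lift output above.
  onDisconnect: function (session, socket) {
    // e.g. persist any per-page state before the socket goes away
    sails.log.info('socket disconnected:', socket.id);
  }
};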
Those messages are coming from socket.io. It appears that you're lifting Sails with two open tabs / windows that were formerly connected via websockets to a running Sails instance, and reconnected once the server started again. However, something unexpected happened after the sockets reconnected, and they closed their connection. This can happen for any number of reasons; for example, an old socket connection may try to reconnect and resume a session which no longer exists, if you're using the memory store for sessions in Sails (which is the default for development mode). It's nothing to be concerned about; just make sure you refresh your pages after restarting Sails and all should be well. It's pretty rare that you'll need to maintain state for a web page between server reboots, but if you do, you can do it with a combination of the onConnect and onDisconnect methods in config/sockets.js and some front-end logic.