Dovecot mail_crypt: LMTP fails with "Failed to initialize user: mail_crypt_global_private_key_password unset, no password to decrypt the key"

I have been using Postfix+Dovecot successfully for a while now, until I recently tried the mail_crypt plugin. I followed what is described here: https://doc.dovecot.org/configuration_manual/mail_crypt_plugin/ and went for global keys as described here: https://doc.dovecot.org/configuration_manual/mail_crypt_plugin/#global-keys
"A good solution for environments where no user folder sharing is needed is to generate per-user EC key pair and encrypt that with something derived from user's password."
I am setting mail_crypt_global_private_key, mail_crypt_global_public_key, and mail_crypt_save_version from user_query, and userdb_mail_crypt_global_private_key_password from password_query. mail_crypt seems to work fine in IMAP (I saved a message as a draft and it is stored encrypted on disk), but LMTP complains: "mail_crypt_global_private_key_password unset, no password to decrypt the key". As you can see in the logs below, it was able to set all the other mail_crypt_* settings from user_query. However, the password is provided via password_query, and I assume LMTP does not run password_query. How else can I provide the password to LMTP? (Why does LMTP need the password at all; can it not encrypt with just the public key?) Is my approach correct to begin with?
-- Dovecot Configurations --
# using doveconf -n
# 2.3.19.1 (9b53102964): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.19 (4eae2f79)
# OS: Linux 5.15.0-57-generic x86_64 Ubuntu 20.04.5 LTS
# Hostname: mailserver-dovecot-7c9ff7b94b-8ldrr
auth_mechanisms = plain login
auth_verbose = yes
auth_verbose_passwords = yes
debug_log_path = /dev/stdout
haproxy_trusted_networks = 192.168.0.0/16 10.10.10.0/24 10.10.30.0/24 172.17.0.1/16
hostname = imap.mailserver.k8s.local pop.mailserver.k8s.local
info_log_path = /dev/stdout
listen = *
log_path = /dev/stdout
mail_debug = yes
mail_gid = 1000
mail_home = /var/vmail/mailboxes/%d/%n
mail_location = maildir:~/:LAYOUT=fs
mail_plugins = quota mail_crypt
mail_privileged_group = mail
mail_uid = 1000
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext imapsieve vnd.dovecot.imapsieve
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    auto = subscribe
    special_use = \Drafts
  }
  mailbox Sent {
    auto = subscribe
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Spam {
    auto = subscribe
    autoexpunge = 30 days
    special_use = \Junk
  }
  mailbox Trash {
    auto = subscribe
    autoexpunge = 30 days
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  imapsieve_mailbox1_before = file:/var/vmail/sieve/global/learn-spam.sieve
  imapsieve_mailbox1_causes = COPY APPEND FLAG
  imapsieve_mailbox1_name = Spam
  imapsieve_mailbox2_before = file:/var/vmail/sieve/global/learn-ham.sieve
  imapsieve_mailbox2_causes = COPY APPEND FLAG
  imapsieve_mailbox2_from = Spam
  imapsieve_mailbox2_name = *
  mail_crypt_save_version = 0
  quota = maildir:User quota
  quota_exceeded_message = User %u has exhausted allowed storage space.
  quota_rule = Junk:ignore
  quota_rule2 = Trash:storage=+100M
  quota_warning = storage=90%% quota-warning 90 %u %d
  quota_warning2 = storage=80%% quota-warning 80 %u %d
  sieve = file:~/sieve;active=~/.dovecot.sieve
  sieve_before = /var/vmail/sieve/global/spam-global.sieve
  sieve_global = /var/vmail/sieve/global/
  sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.debug
  sieve_pipe_bin_dir = /var/vmail/sieve/global
  sieve_plugins = sieve_imapsieve sieve_extprograms
}
protocols = " imap lmtp sieve pop3"
service auth {
  inet_listener {
    port = 25252
  }
}
service imap-login {
  inet_listener imap {
    haproxy = yes
  }
  inet_listener imaps {
    haproxy = yes
    ssl = yes
  }
}
service lmtp {
  executable = lmtp -L
  inet_listener lmtp {
    address = 0.0.0.0
    port = 24
  }
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
}
service pop3-login {
  inet_listener pop3 {
    haproxy = yes
  }
  inet_listener pop3s {
    haproxy = yes
  }
}
ssl = required
ssl_cert = </etc/dovecot/certs/tls.crt
ssl_client_ca_dir = /etc/ssl/certs
ssl_key = # hidden, use -P to show it
ssl_prefer_server_ciphers = yes
userdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
protocol lmtp {
  info_log_path = /dev/stdout
  log_path = /dev/stdout
  mail_plugins = quota mail_crypt sieve
  postmaster_address = <hidden>
}
protocol imap {
  mail_plugins = quota mail_crypt quota imap_quota imap_sieve
}
-- Dovecot Configurations Ends --
-- Password Query --
password_query = \
SELECT username, domain, password, \
'%{sha256:password}' AS userdb_mail_crypt_global_private_key_password \
FROM mailbox \
WHERE username='%u';
-- Password Query Ends--
-- User Query --
user_query = SELECT CONCAT('*:bytes=', 1024) as quota_rule, \
private_key AS mail_crypt_global_private_key, \
public_key AS mail_crypt_global_public_key, \
mail_crypt_save_version AS mail_crypt_save_version \
FROM mailbox \
WHERE username='%u';
-- User Query Ends --
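One idea I am considering, though I am not sure it is the right approach: since LMTP apparently performs only a userdb lookup, anything it needs has to come from user_query. The plaintext password is not available during a pure userdb lookup, so '%{sha256:password}' cannot be computed there; instead I could withhold the password-protected private key from LMTP entirely, so that only the public key is loaded and used to encrypt incoming mail. A sketch, assuming %s expands to the service name in Dovecot SQL queries and that a NULL column value makes Dovecot skip that field:
-- Hypothetical User Query (untested) --
user_query = SELECT CONCAT('*:bytes=', 1024) as quota_rule, \
CASE WHEN '%s' = 'lmtp' THEN NULL ELSE private_key END AS mail_crypt_global_private_key, \
public_key AS mail_crypt_global_public_key, \
mail_crypt_save_version AS mail_crypt_save_version \
FROM mailbox \
WHERE username='%u';
-- Hypothetical User Query Ends --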
-- Debug Logs --
--- Load Inbox ---
imap-login: Info: Login: user=<someone#example.com>, method=PLAIN, rip=192.168.49.1, lip=192.168.49.2, mpid=241, TLS, session=<oaoI9sLxVKXAqDEB>
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Loading modules from directory: /usr/lib/dovecot/modules
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Module loaded: /usr/lib/dovecot/modules/lib10_mail_crypt_plugin.so
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Module loaded: /usr/lib/dovecot/modules/lib10_quota_plugin.so
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Module loaded: /usr/lib/dovecot/modules/lib11_imap_quota_plugin.so
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Module loaded: /usr/lib/dovecot/modules/lib95_imap_sieve_plugin.so
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Added userdb setting: plugin/mail_crypt_global_private_key=LS0tLS1CRUd.....LS0tLS0K
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Added userdb setting: plugin/mail_crypt_global_private_key_password=<hidden>
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Added userdb setting: plugin/mail_crypt_global_public_key=LS0tLS1CRUd.....LS0tCg==
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Added userdb setting: plugin/mail_crypt_save_version=2
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Added userdb setting: plugin/quota_rule=*:bytes=1024000000
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Effective uid=1000, gid=1000, home=/var/vmail/mailboxes/example.com/someone
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: mail_crypt_plugin: mail_crypt_curve setting missing - generating EC keys disabled
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Quota root: name=User quota backend=maildir args=
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Quota rule: root=User quota mailbox=* bytes=1024000000 messages=0
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Quota rule: root=User quota mailbox=Trash bytes=+104857600 messages=0
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Quota warning: bytes=921600000 (90%) messages=0 reverse=no command=quota-warning 90 someone#example.com example.com
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Quota warning: bytes=819200000 (80%) messages=0 reverse=no command=quota-warning 80 someone#example.com example.com
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Quota grace: root=User quota bytes=102400000 (10%)
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: open(/proc/self/io) failed: Permission denied
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Namespace inbox: type=private, prefix=, sep=, inbox=yes, hidden=no, list=yes, subscriptions=yes location=maildir:~/:LAYOUT=fs
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: fs: root=/var/vmail/mailboxes/example.com/someone, index=, indexpvt=, control=, inbox=/var/vmail/mailboxes/example.com/someone, alt=
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: quota: quota_over_flag check: quota_over_script unset - skipping
imap(someone#example.com)<241><oaoI9sLxVKXAqDEB>: Debug: Mailbox INBOX: Mailbox opened
--- Load Inbox Ends ---
--- Lmtp ---
lmtp(248): Info: Connect from 172.17.0.1
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: auth-master: userdb lookup(someone#example.com): Started userdb lookup
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: auth-master: conn unix:/var/run/dovecot/auth-userdb: Connecting
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: auth-master: conn unix:/var/run/dovecot/auth-userdb (pid=143,uid=0): Client connected (fd=18)
imap(someone#example.com)<247><WlggG8PxEOvAqDEB>: Debug: Mailbox Sent: Purging (new file_seq=1673195172): creating cache
imap(someone#example.com)<247><WlggG8PxEOvAqDEB>: Debug: Mailbox Sent: Purging finished, file_seq changed 0 -> 1673195172, size=0 -> 388, max_uid=0
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: auth-master: userdb lookup(someone#example.com): auth USER input: someone#example.com quota_rule=*:bytes=1024000000 mail_crypt_global_private_key=LS0tLS1CRUd.....LS0tLS0K mail_crypt_global_public_key=LS0tLS1CRUd.....LS0tCg== mail_crypt_save_version=2
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: auth-master: userdb lookup(someone#example.com): Finished userdb lookup (username=someone#example.com quota_rule=*:bytes=1024000000 mail_crypt_global_private_key=LS0tLS1CRUd.....LS0tLS0K mail_crypt_global_public_key=LS0tLS1CRUd.....LS0tCg== mail_crypt_save_version=2)
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Added userdb setting: plugin/mail_crypt_global_private_key=LS0tLS1CRUd.....LS0tLS0K
imap(someone#example.com)<247><WlggG8PxEOvAqDEB>: Debug: duplicate db: Initialize
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Added userdb setting: plugin/mail_crypt_global_public_key=LS0tLS1CRUd.....LS0tCg==
imap(someone#example.com)<247><WlggG8PxEOvAqDEB>: Debug: sieve: Pigeonhole version 0.5.19 (4eae2f79) initializing
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Added userdb setting: plugin/mail_crypt_save_version=2
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Added userdb setting: plugin/quota_rule=*:bytes=1024000000
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Effective uid=1000, gid=1000, home=/var/vmail/mailboxes/example.com/someone
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: mail_crypt_plugin: mail_crypt_curve setting missing - generating EC keys disabled
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Quota root: name=User quota backend=maildir args=
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Quota rule: root=User quota mailbox=* bytes=1024000000 messages=0
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Quota rule: root=User quota mailbox=Trash bytes=+104857600 messages=0
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Quota warning: bytes=921600000 (90%) messages=0 reverse=no command=quota-warning 90 someone#example.com example.com
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Quota warning: bytes=819200000 (80%) messages=0 reverse=no command=quota-warning 80 someone#example.com example.com
lmtp(someone#example.com)<248><e2dcD6TuumP4AAAALzF/Qw>: Debug: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Quota grace: root=User quota bytes=102400000 (10%)
lmtp(248): Error: lmtp-server: conn 172.17.0.1:6376 [1]: rcpt someone#example.com: Failed to initialize user: mail_crypt_plugin: mail_crypt_global_private_key: mail_crypt_global_private_key_password unset, no password to decrypt the key
lmtp(248): Info: Disconnect from 172.17.0.1: Logged out (state=READY)
--- Lmtp Ends ---
-- Debug Logs Ends --
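To double-check what LMTP actually receives, the userdb-only view of the account (which should match LMTP's lookup, since LMTP never runs password_query) can be printed with doveadm. A quick check, assuming doveadm runs on the same host as the Dovecot configuration above:
$ doveadm user someone@example.com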

Related

TLS/SSL in pgbouncer - FATAL TLS setup failed: failed to load CA

I'm trying to set up pgbouncer to require a TLS/SSL connection from the applications connecting to it, but it throws an error "FATAL TLS setup failed: failed to load CA"
This is my pgbouncer.ini:
[databases]
* = host=${postgres_host} port=5432
[pgbouncer]
# Do not change these settings:
listen_addr = 0.0.0.0
auth_file = /etc/pgbouncer/userlist.txt
auth_type = trust
client_tls_sslmode = require
client_tls_key_file = /etc/pgbouncer/server.key
client_tls_cert_file = /etc/pgbouncer/server.crt
server_tls_sslmode = verify-ca
server_tls_ca_file = /etc/root.crt.pem
# These are defaults and can be configured
# please leave them as defaults if you are
# uncertain.
listen_port = 5432
unix_socket_dir =
user = postgres
pool_mode = transaction
max_client_conn = 100
ignore_startup_parameters = extra_float_digits
admin_users = postgres
# Please add any additional settings below this line
But when I run it, it throws this error, which doesn't seem correct, since a CA root file should not be needed:
FATAL TLS setup failed: failed to load CA: No such file or directory
P.S. It threw the error also before I added server_tls_sslmode = verify-ca.
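For reference, the "failed to load CA" message appears to be raised while pgbouncer validates its TLS settings at startup, so every file referenced by the client_tls_* / server_tls_* options must exist inside the container at that moment. A quick sanity check, assuming the paths from the pgbouncer.ini above:
$ ls -l /etc/pgbouncer/server.key /etc/pgbouncer/server.crt /etc/root.crt.pem
$ openssl x509 -in /etc/root.crt.pem -noout -subject -dates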

TLS encrypted PostgreSQL connection not possible

I would like to establish a TLS-encrypted connection to a PostgreSQL 11 database, using Tokio as the framework, Deadpool as the connection pooler, and rustls as the TLS library.
I developed/modified the following code:
use std::fs::File;
use std::io::BufReader;
use rustls::ClientConfig;
use tokio_postgres::NoTls;
use tokio_postgres_rustls::MakeRustlsConnect;

let pool = if let Some(ca_cert) = settings.db_ca_cert {
    // Build a rustls client config whose root store trusts the given CA.
    let mut tls_config = ClientConfig::new();
    let cert_file = File::open(&ca_cert)?;
    let mut buf = BufReader::new(cert_file);
    tls_config.root_store.add_pem_file(&mut buf).map_err(|_| {
        anyhow::anyhow!("failed to read database root certificate: {}", ca_cert)
    })?;
    let tls = MakeRustlsConnect::new(tls_config);
    settings.pg.create_pool(tls)?
} else {
    // No CA cert configured: fall back to an unencrypted connection.
    settings.pg.create_pool(NoTls)?
};
My test scenario is taken from here:
PostgreSQL 11 docker container (including TLS turned on)
TLS was already tested successfully with the psql client
I now get the following error message and can't explain the problem. I already checked the access rights and other parameters.
/usr/local/bin/cargo run --color=always
Finished dev [unoptimized + debuginfo] target(s) in 0.20s
Running `target/debug/tokio-postgres-rustls-connection-pool-demo`
DEBUG tokio_postgres_rustls_connection_pool_demo > settings: Settings { pg: Config { user: Some("postgres"), password: Some("postgres"), dbname: Some("postgres"), options: Some("sslrootcert=/xxx/tokio-postgres-rustls-connection-pool-demo/docker/files/cert/ca.pem"), application_name: None, ssl_mode: None, host: Some("127.0.0.1"), hosts: None, port: Some(6432), ports: None, connect_timeout: None, keepalives: None, keepalives_idle: None, target_session_attrs: None, channel_binding: None, manager: None, pool: None }, db_ca_cert: None }
Error: Backend(Error { kind: Connect, cause: Some(Os { code: 2, kind: NotFound, message: "No such file or directory" }) })
I looked at the logs of the database and could identify the following error:
[86] LOG: XX000: could not accept SSL connection: Success
[86] LOCATION: be_tls_open_server, be-secure-openssl.c:408
How can I solve the problem?

How to implement modules in ejabberd?

I am new to the XMPP server ejabberd. I installed ejabberd on Ubuntu from this link: https://docs.ejabberd.im/admin/installation/#install-on-linux. I am using the default ejabberd.yml file, which is in the ejabberd-20.07/conf folder. Here is my ejabberd.yml file:
hosts:
  - "faiqkhan-VirtualBox"
loglevel: 4
log_rotate_size: 10485760
log_rotate_count: 1
certfiles:
  - "/home/faiqkhan/ejabberd-20.07/conf/server.pem"
  ## - "/etc/letsencrypt/live/localhost/fullchain.pem"
  ## - "/etc/letsencrypt/live/localhost/privkey.pem"
ca_file: "/home/faiqkhan/ejabberd-20.07/conf/cacert.pem"
listen:
  -
    port: 5222
    ip: "::"
    module: ejabberd_c2s
    max_stanza_size: 262144
    shaper: c2s_shaper
    access: c2s
    starttls_required: false
  -
    port: 5269
    ip: "::"
    module: ejabberd_s2s_in
    max_stanza_size: 524288
  -
    port: 5443
    ip: "::"
    module: ejabberd_http
    tls: true
    request_handlers:
      "/admin": ejabberd_web_admin
      "/api": mod_http_api
      "/bosh": mod_bosh
      "/captcha": ejabberd_captcha
      "/upload": mod_http_upload
      "/ws": ejabberd_http_ws
      "/oauth": ejabberd_oauth
  -
    port: 5280
    ip: "::"
    module: ejabberd_http
    request_handlers:
      "/admin": ejabberd_web_admin
  -
    port: 1883
    ip: "::"
    module: mod_mqtt
    backlog: 1000
s2s_use_starttls: optional
acl:
  local:
    user_regexp: ""
  loopback:
    ip:
      - 127.0.0.0/8
      - ::1/128
      - ::FFFF:127.0.0.1/128
  admin:
    user:
      - "admin@faiqkhan-VirtualBox"
access_rules:
  local:
    allow: local
  c2s:
    deny: blocked
    allow: all
  announce:
    allow: admin
  configure:
    allow: admin
  muc_create:
    allow: local
  pubsub_createnode:
    allow: local
  trusted_network:
    allow: local
api_permissions:
  "console commands":
    from:
      - ejabberd_ctl
    who: all
    what: "*"
  "admin access":
    who:
      access:
        allow:
          - acl: loopback
          - acl: admin
      oauth:
        scope: "ejabberd:admin"
        access:
          allow:
            - acl: loopback
            - acl: admin
    what:
      - "*"
      - "!stop"
      - "!start"
  "public commands":
    who:
      ip: 127.0.0.1/8
    what:
      - status
      - connected_users_number
shaper:
  normal: 1000
  fast: 50000
shaper_rules:
  max_user_sessions: 10
  max_user_offline_messages:
    5000: admin
    100: all
  c2s_shaper:
    none: admin
    normal: all
  s2s_shaper: fast
max_fsm_queue: 10000
acme:
  contact: "mailto:admin@faiqkhan-VirtualBox"
  ca_url: "https://acme-v02.api.letsencrypt.org/directory"
modules:
  mod_adhoc: {}
  mod_admin_extra: {}
  mod_announce:
    access: announce
  mod_avatar: {}
  mod_blocking: {}
  mod_bosh: {}
  mod_caps: {}
  mod_carboncopy: {}
  mod_client_state: {}
  mod_configure: {}
  mod_disco: {}
  mod_fail2ban: {}
  mod_http_api: {}
  mod_http_upload:
    put_url: https://@HOST@:5443/upload
  mod_last: {}
  mod_mam:
    ## Mnesia is limited to 2GB, better to use an SQL backend
    ## For small servers SQLite is a good fit and is very easy
    ## to configure. Uncomment this when you have SQL configured:
    ## db_type: sql
    assume_mam_usage: true
    default: never
  mod_mqtt: {}
  mod_muc:
    access:
      - allow
    access_admin:
      - allow: admin
    access_create: muc_create
    access_persistent: muc_create
    access_mam:
      - allow
    default_room_options:
      allow_subscription: true  # enable MucSub
      mam: false
  mod_muc_admin: {}
  mod_offline:
    access_max_user_messages: max_user_offline_messages
  mod_ping: {}
  mod_privacy: {}
  mod_private: {}
  mod_proxy65:
    access: local
    max_connections: 5
  mod_pubsub:
    access_createnode: pubsub_createnode
    plugins:
      - flat
      - pep
    force_node_config:
      ## Avoid buggy clients to make their bookmarks public
      storage:bookmarks:
        access_model: whitelist
  mod_push: {}
  mod_push_keepalive: {}
  mod_register:
    ## Only accept registration requests from the "trusted"
    ## network (see access_rules section above).
    ## Think twice before enabling registration from any
    ## address. See the Jabber SPAM Manifesto for details:
    ## https://github.com/ge0rg/jabber-spam-fighting-manifesto
    ip_access: all
  mod_roster:
    versioning: true
  mod_s2s_dialback: {}
  mod_shared_roster: {}
  mod_stream_mgmt:
    resend_on_timeout: if_offline
  mod_vcard: {}
  mod_vcard_xupdate: {}
  mod_version:
    show_os: false
  mod_stanza_ack: {}
I tried the code from the linked question Ejabberd return message to sender hook / message receipts and added my module to the ejabberd.yml file, on the last line of the config above. I created the mod_stanza_ack.erl file and compiled it using the command
./erlc mod_stanza_ack.erl
which produced mod_stanza_ack.beam. I copied mod_stanza_ack.beam to the ejabberd-20.07/lib/ejabberd-20.07/ebin folder where all the other module files are present. Then I started the ejabberd server with
./ejabberdctl live
to view the logs. The module works for me, but on the server side it always crashes with this error:
Hook user_send_packet crashed when running mod_stanza_ack:on_user_send_packet/1:
** exception error: undefined function mod_stanza_ack:on_user_send_packet/1
   in function  ejabberd_hooks:safe_apply/4 (src/ejabberd_hooks.erl, line 236)
   in call from ejabberd_hooks:run_fold1/4 (src/ejabberd_hooks.erl, line 217)
   in call from ejabberd_c2s:handle_authenticated_packet/2 (src/ejabberd_c2s.erl, line 484)
   in call from xmpp_stream_in:process_authenticated_packet/2 (src/xmpp_stream_in.erl, line 714)
   in call from xmpp_stream_in:handle_info/2 (src/xmpp_stream_in.erl, line 404)
   in call from p1_server:handle_msg/8 (src/p1_server.erl, line 696)
   in call from proc_lib:init_p_do_apply/3 (proc_lib.erl, line 249)
Did I miss something? Or am I using deprecated functions?

Well, that example source code is six years old, and the ejabberd development API has changed since then. I've updated the example; this compiles and starts correctly with ejabberd 20.07:
-module(mod_stanza_ack).
-behaviour(gen_mod).

-include("xmpp.hrl").
-include("logger.hrl").
-include("translate.hrl").

-export([start/2, stop/1, mod_options/1, mod_doc/0, depends/2]).
-export([on_user_send_packet/1]).

start(Host, _Opts) ->
    ?INFO_MSG("mod_stanza_ack starting", []),
    ejabberd_hooks:add(user_send_packet, Host, ?MODULE, on_user_send_packet, 0),
    ok.

stop(Host) ->
    ?INFO_MSG("mod_stanza_ack stopping", []),
    ejabberd_hooks:delete(user_send_packet, Host, ?MODULE, on_user_send_packet, 0),
    ok.

on_user_send_packet({#presence{to = To, from = From} = Packet, C2SState}) ->
    ?INFO_MSG("mod_stanza_ack a presence has been sent coming from: ~p", [From]),
    ?INFO_MSG("mod_stanza_ack a presence has been sent to: ~p", [To]),
    ?INFO_MSG("mod_stanza_ack a presence has been sent with the following packet:~n ~p", [Packet]),
    {Packet, C2SState};
on_user_send_packet({#iq{to = To, from = From} = Packet, C2SState}) ->
    ?INFO_MSG("mod_stanza_ack a iq has been sent coming from: ~p", [From]),
    ?INFO_MSG("mod_stanza_ack a iq has been sent to: ~p", [To]),
    ?INFO_MSG("mod_stanza_ack a iq has been sent with the following packet:~n ~p", [Packet]),
    {Packet, C2SState};
on_user_send_packet({#message{to = To, from = From} = Packet, C2SState}) ->
    ?INFO_MSG("mod_stanza_ack a message has been sent coming from: ~p", [From]),
    ?INFO_MSG("mod_stanza_ack a message has been sent to: ~p", [To]),
    ?INFO_MSG("mod_stanza_ack a message has been sent with the following packet:~n ~p", [Packet]),
    {Packet, C2SState}.

depends(_Host, _Opts) ->
    [].

mod_options(_Host) ->
    [].

mod_doc() ->
    #{desc =>
          ?T("This is an example module.")}.
Following your detailed step-by-step installation guide, I hit two problems, which I describe here together with their solutions:
1. Compilation lacks the header files.
I copied mod_stanza_ack.erl to ejabberd-20.07/bin, and then ran this command:
$ ./erlc mod_stanza_ack.erl
mod_stanza_ack.erl:4: can't find include file "xmpp.hrl"
mod_stanza_ack.erl:5: can't find include file "logger.hrl"
mod_stanza_ack.erl:6: can't find include file "translate.hrl"
mod_stanza_ack.erl:12: undefined macro 'INFO_MSG/2'
mod_stanza_ack.erl:17: undefined macro 'INFO_MSG/2'
mod_stanza_ack.erl:22: undefined macro 'INFO_MSG/2'
mod_stanza_ack.erl:47: undefined macro 'T/1'
mod_stanza_ack.erl:8: function mod_doc/0 undefined
mod_stanza_ack.erl:8: function start/2 undefined
mod_stanza_ack.erl:8: function stop/1 undefined
mod_stanza_ack.erl:9: function on_user_send_packet/1 undefined
The solution is simple: provide the paths to the header files:
$ ./erlc -I ../lib/ejabberd-20.07/include/ -I ../lib/xmpp-1.4.9/include/ -I ../lib/fast_xml-1.1.43/include/ mod_stanza_ack.erl
This way the file is compiled correctly.
2. The ?INFO_MSG calls in the source code do not produce log messages in the ejabberd log file or in the "ejabberdctl live" console.
This is because we didn't tell the compiler to use the LAGER library. The solution is quite simple: include -DLAGER in the module compilation. So this is the complete compilation call:
$ ./erlc -I ../lib/ejabberd-20.07/include/ -I ../lib/xmpp-1.4.9/include/ -I ../lib/fast_xml-1.1.43/include/ -DLAGER mod_stanza_ack.erl
Now copy the resulting mod_stanza_ack.beam alongside all the other ejabberd beam files, enable the module in ejabberd.yml, and restart ejabberd; everything will work as expected.
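For completeness, enabling the module is just an entry under modules in ejabberd.yml, exactly as on the last line of the config in the question:
modules:
  mod_stanza_ack: {}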

Failed to start Kafka after enabling Kerberos "SASL authentication failed"

Kafka version: kafka_2.1.1(binary)
When enabling Kerberos I followed the official documentation (https://kafka.apache.org/documentation/#security_sasl_kerberos) closely.
When I start Kafka, I get the following errors:
[2019-02-23 08:55:44,622] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null.
at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:279)
at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:242)
at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:805)
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:94)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1145)
[2019-02-23 08:55:44,625] ERROR [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2019-02-23 08:55:44,746] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
I use almost the default krb5.conf.
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
default_realm = EXAMPLE.COM
default_ccache_name = KEYRING:persistent:%{uid}
[realms]
EXAMPLE.COM = {
# kdc = kerberos.example.com
# admin_server = kerberos.example.com
kdc = localhost
admin_server = localhost
}
[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
The JAAS file I pass to Kafka is as below:
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/localhost.keytab"
    principal="kafka/localhost@EXAMPLE.COM";
};

// Zookeeper client authentication
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/localhost.keytab"
    principal="kafka/localhost@EXAMPLE.COM";
};
I also set the ENV as below:
"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf -Dzookeeper.sasl.client.username=kafka"
I have googled a lot of posts but made no progress. I guess the problem may be the "localhost" I used when creating the entries in Kerberos, but I'm not quite sure how to work around that. My goal is to set up a local Kafka+Kerberos testing environment.
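To rule out a broken keytab, a quick check (using the keytab path and principal from the JAAS file above) is to list the principals the keytab contains and try to obtain a ticket with it:
$ klist -kt /etc/security/keytabs/localhost.keytab
$ kinit -kt /etc/security/keytabs/localhost.keytab kafka/localhost@EXAMPLE.COM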
In our case, the krb5 Kerberos config file wasn't being read properly. If you're supplying the keytab through YAML, it needs to be removed first. This was with the IBM JDK, though, and we had to set the JAAS config location explicitly with System.setProperty("java.security.auth.login.config", JaasConfigFileLocation); our JAAS file looked like this:
KafkaClient {
    com.ibm.security.auth.module.Krb5LoginModule required
    useDefaultKeytab=false
    credsType=both
    principal="xkafka@xxx.NET"
    useKeytab="/opt/apps/xxxr/my.keytab";
};

Why can't I connect to Gremlin-Server?

Abstract
I'm trying to set up a Titan/Cassandra/Gremlin-Server stack in Docker (v1.13.0). The problem I'm facing is that applications trying to connect to Gremlin-Server on the default port 8182 are reporting errors (details below).
First, here is some relevant version information:
Cassandra v2.2.8
Titan v1.0.0 (Hadoop 1)
Gremlin 3.2.3
Setup
Setup takes place in a Dockerfile in order to be reproducible. It assumes that a Cassandra container already exists, running a cassandra.yaml in which start_rpc has been set to true.
The Dockerfile is as follows:
FROM openjdk:alpine
ENV TITAN 'titan-1.0.0-hadoop1'
RUN apk update && apk add bash unzip && rm -rf /var/cache/apk/* \
&& adduser -S -s /bin/bash -D srg \
&& wget -O /tmp/$TITAN.zip http://s3.thinkaurelius.com/downloads/titan/$TITAN.zip \
&& unzip /tmp/$TITAN.zip -d /opt && ln -s /opt/$TITAN /opt/titan \
&& rm /tmp/*.zip \
&& chown -R srg /opt/$TITAN/ \
&& /opt/titan/bin/gremlin-server.sh -i org.apache.tinkerpop gremlin-python 3.2.3
COPY conf/gremlin-server/* /opt/$TITAN/conf/gremlin-server/
USER srg
WORKDIR /opt/titan
EXPOSE 8182
CMD ["bin/gremlin-server.sh", "conf/gremlin-server/srg.yaml"]
The astute reader will note that I am copying custom configuration files into the container, namely a Gremlin-Server configuration file (srg.yaml) and a titan graph properties file (srg.properties).
srg.yaml
host: localhost
port: 8182
threadPoolWorker: 1
gremlinPool: 8
scriptEvaluationTimeout: 30000
serializedResponseTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
graphs: {
  graph: conf/gremlin-server/srg.properties
}
plugins:
  - aurelius.titan
scriptEngines: {
  gremlin-groovy: {
    imports: [java.lang.Math],
    staticImports: [java.lang.Math.PI],
    scripts: [scripts/empty-sample.groovy]},
  gremlin-jython: {},
  gremlin-python: {},
  nashorn: {
    imports: [java.lang.Math],
    staticImports: [java.lang.Math.PI]}}
serializers:
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { useMapperFromGraph: graph }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { useMapperFromGraph: graph }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { useMapperFromGraph: graph }}
processors:
  - { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
metrics: {
  consoleReporter: {enabled: true, interval: 180000},
  csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
  jmxReporter: {enabled: true},
  slf4jReporter: {enabled: true, interval: 180000},
  gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
  graphiteReporter: {enabled: false, interval: 180000}}
threadPoolBoss: 1
maxInitialLineLength: 4096
maxHeaderSize: 8192
maxChunkSize: 8192
maxContentLength: 65536
maxAccumulationBufferComponents: 1024
resultIterationBatchSize: 64
writeBufferLowWaterMark: 32768
writeBufferHighWaterMark: 65536
ssl: {
  enabled: false}
srg.properties
gremlin.graph=com.thinkaurelius.titan.core.TitanFactory
storage.backend=cassandrathrift
storage.hostname=cassandra # refers to the linked container
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
# Start elasticsearch inside the Titan JVM
index.search.backend=elasticsearch
index.search.directory=db/es
index.search.elasticsearch.client-only=false
index.search.elasticsearch.local-mode=true
Execution
The container is run with the following command: docker run -ti --rm=true --link test.cassandra:cassandra -p 8182:8182 titan.
Here is the log output for Gremlin-Server:
0 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer -
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
297 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Configuring Gremlin Server from conf/gremlin-server/srg.yaml
439 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics ConsoleReporter configured with report interval=180000ms
448 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics CsvReporter configured with report interval=180000ms to fileName=/tmp/gremlin-server-metrics.csv
557 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics JmxReporter configured with domain= and agentId=
561 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
1750 [main] INFO com.thinkaurelius.titan.core.util.ReflectiveConfigOptionLoader - Loaded and initialized config classes: 12 OK out of 12 attempts in PT0.148S
1972 [main] INFO com.thinkaurelius.titan.diskstorage.cassandra.thrift.CassandraThriftStoreManager - Closed Thrift connection pooler.
1990 [main] INFO com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration - Generated unique-instance-id=ac1100031-ad2d5ffa52e81
2026 [main] INFO com.thinkaurelius.titan.diskstorage.Backend - Configuring index [search]
2386 [main] INFO org.elasticsearch.node - [Lunatik] version[1.5.1], pid[1], build[5e38401/2015-04-09T13:41:35Z]
2387 [main] INFO org.elasticsearch.node - [Lunatik] initializing ...
2399 [main] INFO org.elasticsearch.plugins - [Lunatik] loaded [], sites []
6471 [main] INFO org.elasticsearch.node - [Lunatik] initialized
6472 [main] INFO org.elasticsearch.node - [Lunatik] starting ...
6477 [main] INFO org.elasticsearch.transport - [Lunatik] bound_address {local[1]}, publish_address {local[1]}
6507 [main] INFO org.elasticsearch.discovery - [Lunatik] elasticsearch/u2StmRW1RsyEHw561yoNFw
6519 [elasticsearch[Lunatik][clusterService#updateTask][T#1]] INFO org.elasticsearch.cluster.service - [Lunatik] master {new [Lunatik][u2StmRW1RsyEHw561yoNFw][ad2d5ffa52e8][local[1]]{local=true}}, removed {[Lunatik][kKyL9UE-R123LLZTTrsVCw][ad2d5ffa52e8][local[1]]{local=true},}, reason: local-disco-initial_connect(master)
6908 [main] INFO org.elasticsearch.http - [Lunatik] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.17.0.3:9200]}
6909 [main] INFO org.elasticsearch.node - [Lunatik] started
6923 [elasticsearch[Lunatik][clusterService#updateTask][T#1]] INFO org.elasticsearch.gateway - [Lunatik] recovered [0] indices into cluster_state
7486 [elasticsearch[Lunatik][clusterService#updateTask][T#1]] INFO org.elasticsearch.cluster.metadata - [Lunatik] [titan] creating index, cause [api], templates [], shards [5]/[1], mappings []
8075 [main] INFO com.thinkaurelius.titan.diskstorage.Backend - Initiated backend operations thread pool of size 4
8241 [main] INFO com.thinkaurelius.titan.diskstorage.Backend - Configuring total store cache size: 94787290
8641 [main] INFO com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog - Loaded unidentified ReadMarker start time 2017-01-21T16:31:28.750Z into com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller#3520958b
8642 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] was successfully configured via [conf/gremlin-server/srg.properties].
8643 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
14187 [main] INFO com.jcabi.manifests.Manifests - 108 attributes loaded from 264 stream(s) in 185ms, 108 saved, 3371 ignored: ["Agent-Class", "Ant-Version", "Archiver-Version", "Bnd-LastModified", "Boot-Class-Path", "Build-Date", "Build-Host", "Build-Id", "Build-Java-Version", "Build-Jdk", "Build-Job", "Build-Number", "Build-Time", "Build-Timestamp", "Build-Version", "Built-At", "Built-By", "Built-OS", "Built-On", "Built-Status", "Bundle-ActivationPolicy", "Bundle-Activator", "Bundle-BuddyPolicy", "Bundle-Category", "Bundle-ClassPath", "Bundle-Classpath", "Bundle-Copyright", "Bundle-Description", "Bundle-DocURL", "Bundle-License", "Bundle-Localization", "Bundle-ManifestVersion", "Bundle-Name", "Bundle-NativeCode", "Bundle-RequiredExecutionEnvironment", "Bundle-SymbolicName", "Bundle-Vendor", "Bundle-Version", "Can-Redefine-Classes", "Change", "Class-Path", "Created-By", "DynamicImport-Package", "Eclipse-AutoStart", "Eclipse-BuddyPolicy", "Eclipse-SourceReferences", "Embed-Dependency", "Embedded-Artifacts", "Export-Package", "Extension-Name", "Extension-name", "Fragment-Host", "Git-Commit-Branch", "Git-Commit-Date", "Git-Commit-Hash", "Git-Committer-Email", "Git-Committer-Name", "Gradle-Version", "Gremlin-Lib-Paths", "Gremlin-Plugin-Dependencies", "Gremlin-Plugin-Paths", "Ignore-Package", "Implementation-Build", "Implementation-Build-Date", "Implementation-Title", "Implementation-URL", "Implementation-Vendor", "Implementation-Vendor-Id", "Implementation-Version", "Import-Package", "Include-Resource", "JCabi-Build", "JCabi-Date", "JCabi-Version", "Java-Vendor", "Java-Version", "Main-Class", "Main-class", "Manifest-Version", "Maven-Version", "Module-Email", "Module-Origin", "Module-Owner", "Module-Source", "Originally-Created-By", "Os-Arch", "Os-Name", "Os-Version", "Package", "Premain-Class", "Private-Package", "Require-Bundle", "Require-Capability", "Scm-Connection", "Scm-Revision", "Scm-Url", "Specification-Title", "Specification-Vendor", "Specification-Version", "Tool", "X-Compile-Source-JDK", "X-Compile-Target-JDK", "hash", "implementation-version", "mode", "package", "url", "version"]
14842 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded gremlin-jython ScriptEngine
15540 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded nashorn ScriptEngine
16076 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded gremlin-python ScriptEngine
16553 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded gremlin-groovy ScriptEngine
17410 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor - Initialized gremlin-groovy ScriptEngine with scripts/empty-sample.groovy
17410 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized GremlinExecutor and configured ScriptEngines.
17419 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - A GraphTraversalSource is now bound to [g] with graphtraversalsource[standardtitangraph[cassandrathrift:[cassandra]], standard]
17565 [main] INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Configured application/vnd.gremlin-v1.0+gryo with org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0
17566 [main] INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Configured application/vnd.gremlin-v1.0+gryo-stringd with org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0
17808 [main] INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Configured application/vnd.gremlin-v1.0+json with org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0
17811 [main] INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Configured application/json with org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0
17958 [gremlin-server-boss-1] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Gremlin Server configured with worker thread pool of 1, gremlin pool of 8 and boss thread pool of 1.
17959 [gremlin-server-boss-1] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Channel started at port 8182.
1/21/17 4:34:20 PM =============================================================
-- Meters ----------------------------------------------------------------------
org.apache.tinkerpop.gremlin.server.GremlinServer.errors
count = 0
mean rate = 0.00 events/second
1-minute rate = 0.00 events/second
5-minute rate = 0.00 events/second
15-minute rate = 0.00 events/second
180564 [metrics-logger-reporter-thread-1] INFO org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics - type=METER, name=org.apache.tinkerpop.gremlin.server.GremlinServer.errors, count=0, mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second
Symptoms
So far, everything appears to be working as intended. The logs indicate that I am able to load srg.properties and bind the data structure to a variable called graph.
The problem appears when I try to connect to the Gremlin-Server instance over the exported port 8182, for example using gremlin-python:
# executed via python 3.6.0 on the host machine, i.e. not inside of Docker
from gremlin_python import statics
from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import *
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
graph = Graph()  # instantiate the Graph first; 'graph' was previously undefined
g = graph.traversal().withRemote(DriverRemoteConnection('ws://localhost:8182/gremlin','graph'))
produces the following exception ...
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
<ipython-input-10-59ad504f29b4> in <module>()
----> 1 g = graph.traversal().withRemote(DriverRemoteConnection('ws://localhost:8182/','g'))
/Users/lthibault/.pyenv/versions/3.6.0/lib/python3.6/site-packages/gremlin_python/driver/driver_remote_connection.py in __init__(self, url, traversal_source, username, password, loop, graphson_reader, graphson_writer)
41 self._password = password
42 if loop is None: self._loop = ioloop.IOLoop.current()
---> 43 self._websocket = self._loop.run_sync(lambda: websocket.websocket_connect(self.url))
44 self._graphson_reader = graphson_reader or GraphSONReader()
45 self._graphson_writer = graphson_writer or GraphSONWriter()
/Users/lthibault/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tornado/ioloop.py in run_sync(self, func, timeout)
455 if not future_cell[0].done():
456 raise TimeoutError('Operation timed out after %s seconds' % timeout)
--> 457 return future_cell[0].result()
458
459 def time(self):
/Users/lthibault/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tornado/concurrent.py in result(self, timeout)
235 return self._result
236 if self._exc_info is not None:
--> 237 raise_exc_info(self._exc_info)
238 self._check_done()
239 return self._result
/Users/lthibault/.pyenv/versions/3.6.0/lib/python3.6/site-packages/tornado/util.py in raise_exc_info(exc_info)
HTTPError: HTTP 599: Stream closed
Suspecting a problem specific to this library, I tried two further tests:
1) Attempt to connect to the websocket port with nc:
$ nc -z -v localhost 8182
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src ::1 port 58627
dst ::1 port 8182
rank info not available
TCP aux info available
Connection to localhost port 8182 [tcp/*] succeeded!
2) attempt to connect to Gremlin-Server using a different client library, namely go-gremlin
Test case:
package main

import (
    "fmt"
    "log"

    "github.com/go-gremlin/gremlin"
)

func main() {
    if err := gremlin.NewCluster("ws://localhost:8182/gremlin"); err != nil {
        log.Fatal(err)
    }
    data, err := gremlin.Query(`graph.V()`).Exec()
    if err != nil {
        log.Fatalf("Query error: %s", err)
    }
    fmt.Println(string(data))
}
Output:
$ go run cmd/test/main.go
2017/01/21 14:47:42 Query error: unexpected EOF
exit status 1
Current Conclusions & Questions
From the previous tests, I conclude that this is an application-level problem (i.e. a problem at the websocket / ws protocol level, not in the host or container networking stack). Indeed, nc reports that the socket connection succeeds, but both the Python and Go client libraries complain of an ostensibly inappropriate (empty) response from the server.
I have tried removing the /gremlin path from the websocket URL both in gremlin-python and in go-gremlin, to no avail.
My question is: where do I go from here? Any suggestions or diagnostic paths would be most appreciated!
The main problem is that the host in your Gremlin Server configuration is set to the default, which is localhost. This will only allow connections from the server itself. You need to change the value to an external IP of the server or 0.0.0.0.
The other issue is that the gremlin-python server plugin was made available with Apache TinkerPop 3.2.2. Titan 1.0.0 uses TinkerPop 3.0.1. I doubt that the gremlin-python 3.2.3 plugin will work with Titan 1.0.0.
Update: Consider using JanusGraph 0.1.1 which uses TinkerPop 3.2.3. JanusGraph was forked from Titan, so the code is basically the same with updated dependencies.
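For example, applying the first fix in srg.yaml (a minimal sketch; rebuild the image or restart the container afterwards so the -p 8182:8182 port mapping is reachable from the host):
host: 0.0.0.0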