Ejabberd - ejabberd_auth_external:failure:103 External authentication program failed when calling 'check_password' - xmpp

I already have a users table with an authentication key and wanted to authenticate against it. I first tried the SQL authentication method, but because my schema has a different structure I kept getting errors, so I switched to the external authentication method. The technologies and OS used in my application are:
Node.JS
Ejabberd as XMPP server
MySQL Database
React-Native (Front-End)
OS - Ubuntu 18.04
I set up external authentication as described in https://docs.ejabberd.im/admin/configuration/#external-script and used the PHP script https://www.ejabberd.im/files/efiles/check_mysql.php.txt as an example. However, I am getting the error shown below in error.log. In ejabberd.yml I have the following configuration:
...
host_config:
  "example.org.co":
    auth_method: [external]
    extauth_program: "/usr/local/etc/ejabberd/JabberAuth.class.php"
    auth_use_cache: false
...
Also, is there an external auth script available in JavaScript?
Below are the relevant parts of error.log and ejabberd.log.
error.log
2019-03-19 07:19:16.814 [error]
<0.524.0>@ejabberd_auth_external:failure:103 External authentication
program failed when calling 'check_password' for admin@example.org.co:
disconnected
ejabberd.log
2019-03-19 07:19:16.811 [debug] <0.524.0>@ejabberd_http:init:151 S:
[{[<<"api">>],mod_http_api},{[<<"admin">>],ejabberd_web_admin}]
2019-03-19 07:19:16.811 [debug]
<0.524.0>@ejabberd_http:process_header:307 (#Port<0.13811>) http
query: 'POST' <<"/api/register">>
2019-03-19 07:19:16.811 [debug]
<0.524.0>@ejabberd_http:process:394 [<<"api">>,<<"register">>] matches
[<<"api">>]
2019-03-19 07:19:16.811 [info]
<0.364.0>@ejabberd_listener:accept:238 (<0.524.0>) Accepted connection
::ffff:ip -> ::ffff:ip
2019-03-19 07:19:16.814 [info]
<0.524.0>@mod_http_api:log:548 API call register
[{<<"user">>,<<"test">>},{<<"host">>,<<"example.org.co">>},{<<"password">>,<<"test">>}]
from ::ffff:ip
2019-03-19 07:19:16.814 [error]
<0.524.0>@ejabberd_auth_external:failure:103 External authentication
program failed when calling 'check_password' for admin@example.org.co:
disconnected
2019-03-19 07:19:16.814 [debug]
<0.524.0>@mod_http_api:extract_auth:171 Invalid auth data:
{error,invalid_auth}
Any help regarding this topic will be appreciated.

1) Your auth_method configuration looks good.
2) Here is a Python script I've used and adapted as an external authentication program for ejabberd.
#!/usr/bin/python
import sys
from struct import unpack, error

def openAuth(args):
    (user, server, password) = args
    # Implement your interactions with your service / database here.
    # Return True if the password is valid, False otherwise.
    return True

def openIsuser(args):
    (user, server) = args
    # Implement your interactions with your service / database here.
    # Return True if the user exists, False otherwise.
    return True

def from_ejabberd():
    # ejabberd writes a 2-byte big-endian length followed by
    # "operation:user:host[:password]" on stdin.
    input_length = sys.stdin.read(2)
    (size,) = unpack('>h', input_length)
    return sys.stdin.read(size).split(':')

def to_ejabberd(result):
    # The reply is a 2-byte length (always 2) followed by 1 (success) or 0 (failure).
    if result:
        sys.stdout.write('\x00\x02\x00\x01')
    else:
        sys.stdout.write('\x00\x02\x00\x00')
    sys.stdout.flush()

def loop():
    switcher = {
        "auth": openAuth,
        "isuser": openIsuser,
        "setpass": lambda args: True,
        "tryregister": lambda args: False,
        "removeuser": lambda args: False,
        "removeuser3": lambda args: False,
    }
    while True:
        data = from_ejabberd()
        to_ejabberd(switcher.get(data[0], lambda args: False)(data[1:]))

if __name__ == "__main__":
    try:
        loop()
    except error:
        # struct.error is raised when ejabberd closes stdin; exit quietly.
        pass
I didn't write the from_ejabberd() and to_ejabberd() communication functions myself, and unfortunately I can't find the original source any more.
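I don't have a JavaScript version at hand, but since you are already using Node.js, the same stdin/stdout protocol can be sketched roughly as follows (untested; checkPassword and userExists are placeholder stubs you would back with queries against your MySQL users table):
#!/usr/bin/env node
// Rough sketch of an ejabberd extauth script in Node.js (untested).
// ejabberd writes a 2-byte big-endian length followed by
// "operation:user:host[:password]" on stdin and expects a 2-byte length
// (always 2) plus 0x0001 (success) or 0x0000 (failure) on stdout.

let buffer = Buffer.alloc(0);

process.stdin.on('data', (chunk) => {
  buffer = Buffer.concat([buffer, chunk]);
  while (buffer.length >= 2) {
    const size = buffer.readUInt16BE(0);
    if (buffer.length < 2 + size) break; // wait for the full packet
    const [op, user, host, password] = buffer.slice(2, 2 + size).toString().split(':');
    buffer = buffer.slice(2 + size);

    let ok = false;
    if (op === 'auth') ok = checkPassword(user, host, password);
    else if (op === 'isuser') ok = userExists(user, host);

    process.stdout.write(Buffer.from([0x00, 0x02, 0x00, ok ? 0x01 : 0x00]));
  }
});

// Placeholder stubs -- replace with lookups against your own schema.
function checkPassword(user, host, password) { return true; }
function userExists(user, host) { return true; }
Make the script executable (chmod +x) and point extauth_program at it, just as with the PHP or Python versions.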

Related

Google Cloud Function works in emulator but errs on deploy to Firebase: Unexpected token p in JSON at position 1

I have a cloud function that is meant to delete a post with its subcollection of comments. It properly deletes the post in the emulator. However, when I try to deploy the cloud function to Firebase the following error occurs:
i functions: updating Node.js 16 function
recursiveDelete(us-central1)...
Function failed on loading user code. This is likely due to a bug in
the user code. Error message:
Error: please examine your function logs to see the error cause:
https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs.
Additional troubleshooting documentation can be found at
https://cloud.google.com/functions/docs/troubleshooting#logging.
Functions deploy had errors with the following functions:
recursiveDelete(us-central1)
As instructed, I checked the logs in Google Cloud and found:
SyntaxError: Unexpected token p in JSON at position 1
at .JSON.parse
at .parse (
/layers/google.nodejs.functions-framework/functions-framework/node_modules/body-parser/lib/types/json.js:89
)
This is the function:
const functions = require("firebase-functions");
const firebase_tools = require('firebase-tools');
const admin = require('firebase-admin');

admin.initializeApp();

/**
 * Initiate a recursive delete of documents at a given path.
 * @param {string} data.path the document or collection path to delete.
 */
exports.recursiveDelete = functions
    .runWith({
      timeoutSeconds: 540,
      memory: '2GB'
    })
    .https.onCall(async (data, context) => {
      const path = data[0].path;
      console.log(`Running cloud function recursiveDelete on ${path}`);
      await firebase_tools.firestore
        .delete(path, {
          project: process.env.GCP_PROJECT,
          recursive: true,
          yes: true,
          token: '...',
          force: true
        });
      return {
        path: path
      };
    });
This is how I'm calling it:
static Future<void> delete(String ref) async {
  if (FirebaseAuth.instance.currentUser == null) {
    throw Exception('Must be logged in');
  }
  await FirebaseFunctions.instance
      .httpsCallable('recursiveDelete')
      .call([{'path': 'posts/$ref'}])
      .then((value) => logger("Post deleted: ${value.data}"))
      .catchError((error) => logger.error("Failed to delete post: $error"));
}
In the emulator, deletion works fine and the output is:
I/flutter (26907): [Post] [INFO] Post deleted: {path:
posts/Gk4TeWEgZmm0QUcaqTrk}
So, what's wrong with it?
Full stack-trace:
=== Deploying to 'sightings-dev'...
i deploying functions
i functions: ensuring required API cloudfunctions.googleapis.com is enabled...
i functions: ensuring required API cloudbuild.googleapis.com is enabled...
✔ functions: required API cloudfunctions.googleapis.com is enabled
✔ functions: required API cloudbuild.googleapis.com is enabled
✔ artifactregistry: required API artifactregistry.googleapis.com is enabled
i functions: preparing codebase default for deployment
i functions: preparing functions directory for uploading...
i functions: packaged /Users/strijdhaftig/sightings/frontend/flutter/sightings/functions (55.13 KB) for uploading
✔ functions: functions folder uploaded successfully
i functions: updating Node.js 16 function recursiveDelete(us-central1)...
Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation.
Functions deploy had errors with the following functions:
recursiveDelete(us-central1)
i functions: cleaning up build files...
Error: There was an error deploying functions
Having trouble? Try again or contact support with contents of firebase-debug.log

Error: failed to connect to database: password authentication failed in Rust

I am trying to connect to a Postgres database in Rust using the sqlx crate.
main.rs:
use dotenv;
use sqlx::Pool;
use sqlx::PgPool;
use sqlx::query;

#[async_std::main]
async fn main() -> Result<(), Error> {
    dotenv::dotenv().ok();
    pretty_env_logger::init();

    let url = std::env::var("DATABASE_URL").unwrap();
    dbg!(url);
    let db_url = std::env::var("DATABASE_URL")?;
    let db_pool: PgPool = Pool::new(&db_url).await?;

    let rows = query!("select 1 as one").fetch_one(&db_pool).await?;
    dbg!(rows);

    let mut app = tide::new();
    app.at("/").get(|_| async move { Ok("Hello Rustacean!") });
    app.listen("127.0.0.1:8080").await?;
    Ok(())
}

#[derive(thiserror::Error, Debug)]
enum Error {
    #[error(transparent)]
    DbError(#[from] sqlx::Error),
    #[error(transparent)]
    IoError(#[from] std::io::Error),
    #[error(transparent)]
    VarError(#[from] std::env::VarError),
}
Here is my .env file:
DATABASE_URL=postgres://localhost/twitter
RUST_LOG=trace
Error log:
error: failed to connect to database: password authentication failed for user "ayman"
--> src/main.rs:19:16
|
19 | let rows = query!("select 1 as one").fetch_one(&db_pool).await?;
| ^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error: aborting due to previous error
error: could not compile `backend`.
Note:
There exists a database called twitter.
I have included the macros feature in sqlx's dependency:
sqlx = {version="0.3.5", features = ["runtime-async-std", "macros", "chrono", "json", "postgres", "uuid"]}
Am I missing some level of authentication for connecting to the database? I could not find it in the docs for the sqlx::query! macro.
The reason it is unable to authenticate is that you must provide credentials when connecting to the database.
There are two ways to do it.
Option 1: Change your URL to contain the credentials. For instance:
DATABASE_URL=postgres://localhost?dbname=mydb&user=postgres&password=postgres
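Applied to the .env file from the question, that could look something like the following (the user name "ayman" is taken from the error message; the password is a placeholder you would replace with the real one):
DATABASE_URL=postgres://ayman:your_password@localhost/twitter
RUST_LOG=trace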
Option 2: Use PgConnectOptions. For instance:
use sqlx::postgres::{PgConnectOptions, PgPool, Postgres};
use sqlx::Pool;

let pool_options = PgConnectOptions::new()
    .host("localhost")
    .port(5432)
    .username("dbuser")
    .database("dbtest")
    .password("dbpassword");
let pool: PgPool = Pool::<Postgres>::connect_with(pool_options).await?;
Note: The sqlx version that I am using is sqlx = {version="0.5.1"}
For more information, refer to the docs: https://docs.rs/sqlx/0.5.1/sqlx/postgres/struct.PgConnectOptions.html#method.password
Hope this helps you.

Can we connect to a public API endpoint instead of localhost using the Dredd tool?

I tried to use a public endpoint (e.g. api.openweathermap.org/data/2.5/weather?lat=35&lon=139) instead of localhost while configuring Dredd and ran the tool, but I am not able to connect to the endpoint through Dredd. It throws Error: getaddrinfo EAI_AGAIN.
However, when I try to connect to the endpoint using Postman, I can connect successfully.
There is no difference between calling a local or a remote endpoint, although some remote endpoints have authorization requirements.
This is an example of Dredd calling an external endpoint:
dredd.yml configuration file fragment
...
blueprint: doc/api.md
# endpoint: 'http://api-srv:5000'
endpoint: https://private-da275-notes69.apiary-mock.com
As you can see, the only change is the endpoint in the Dredd configuration file (created using dredd init).
But, as I mentioned, sometimes you'll need to provide authorization through a header or a query-string parameter.
Dredd has hooks that allow you to change things before each transaction. For instance, say you'd like to add an api-key parameter to each URL before executing the request; this code handles that:
hook.js
// Writing Dredd Hooks In Node.js
// Ref: http://dredd.org/en/latest/hooks-nodejs.html
var hooks = require('hooks');

hooks.beforeEach(function (transaction) {
  hooks.log('before each');
  // add the query parameter to each transaction here
  let paramToAdd = 'api-key=23456';
  if (transaction.fullPath.indexOf('?') > -1)
    transaction.fullPath += '&' + paramToAdd;
  else
    transaction.fullPath += '?' + paramToAdd;
  hooks.log('before each fullpath: ' + transaction.fullPath);
});
Code at Github gist
Save this hook file anywhere in your project and then run Dredd, passing the hook file:
dredd --hookfiles=./hook.js
That's it, after execution the log will show the actual URL used in the request.
info: Configuration './dredd.yml' found, ignoring other arguments.
2018-06-25T16:57:13.243Z - info: Beginning Dredd testing...
2018-06-25T16:57:13.249Z - info: Found Hookfiles: 0=/api/scripts/dredd-hoock.js
2018-06-25T16:57:13.263Z - hook: before each
2018-06-25T16:57:13.264Z - hook: before each fullpath: /notes?api-key=23456
"/notes?api-key=23456"
2018-06-25T16:57:16.095Z - pass: GET (200) /notes duration: 2829ms
2018-06-25T16:57:16.096Z - hook: before each
2018-06-25T16:57:16.096Z - hook: before each fullpath: /notes?api-key=23456
"/notes?api-key=23456"
2018-06-25T16:57:16.788Z - pass: POST (201) /notes duration: 691ms
2018-06-25T16:57:16.788Z - hook: before each
2018-06-25T16:57:16.789Z - hook: before each fullpath: /notes/abcd1234?api-key=23456
"/notes/abcd1234?api-key=23456"
2018-06-25T16:57:17.113Z - pass: GET (200) /notes/abcd1234 duration: 323ms
2018-06-25T16:57:17.114Z - hook: before each
2018-06-25T16:57:17.114Z - hook: before each fullpath: /notes/abcd1234?api-key=23456
"/notes/abcd1234?api-key=23456"
2018-06-25T16:57:17.353Z - pass: DELETE (204) /notes/abcd1234 duration: 238ms
2018-06-25T16:57:17.354Z - hook: before each
2018-06-25T16:57:17.354Z - hook: before each fullpath: /notes/abcd1234?api-key=23456
"/notes/abcd1234?api-key=23456"
2018-06-25T16:57:17.614Z - pass: PUT (200) /notes/abcd1234 duration: 259ms
2018-06-25T16:57:17.615Z - complete: 5 passing, 0 failing, 0 errors, 0 skipped, 5 total
2018-06-25T16:57:17.616Z - complete: Tests took 4372ms

Fiware Orion - pepProxy

I'm part of a team that is developing an application that uses the Fiware GEs as part of the Smart-AgriFood accelerator.
We are using the Orion Context Broker to gather the data provided by the sensor network, and we intend to use the PEP Proxy to authenticate the sensor nodes for access to the Orion instance. We have tried the following PEP Proxies:
https://github.com/telefonicaid/fiware-orion-pep
https://github.com/ging/fi-ware-pep-proxy
We have only had success with the second implementation (fi-ware-pep-proxy) of the proxy. With fiware-orion-pep we haven't been able to connect to the Keystone global instance (account.lab.fi-ware.org); we have tried both account.lab... and cloud.lab... My questions are:
1) Is the Keystone (IDM) instance for authentication account.lab or cloud.lab, and which ports and addresses should be used?
2) Is fiware-orion-pep prepared to authenticate against account.lab.fi-ware.org? Here is why I ask:
This one works with the curl command at >> cloud.lab.fiware.org:4730/v2.0/tokens
{
    "auth": {
        "passwordCredentials": {
            "username": "<my_user>",
            "password": "<my_password>"
        }
    }
}
This one doesn't work with the curl command at >> account.lab.fi-ware.org:5000/v3/auth/tokens
{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "name": "<my_domain>"
                    },
                    "name": "<my_user>",
                    "password": "<my_password>"
                }
            }
        }
    }
}
3) Which implementation should I be using to authenticate the devices or other calls to the Orion instance?
Here are the configurations that I used:
fiware-orion-pep
config.authentication = {
    checkHeaders: true,
    module: 'keystone',
    user: '<my_user>',
    password: '<my_password>',
    domainName: '<my_domain>',
    retries: 3,
    cacheTTLs: {
        users: 1000,
        projectIds: 1000,
        roles: 60
    },
    options: {
        protocol: 'http',
        host: 'account.lab.fiware.org',
        port: 5000,
        path: '/v3/role_assignments',
        authPath: '/v3/auth/tokens'
    }
};
fi-ware-pep-proxy (this one works); I have set the listening port to 1026 in the source code:
var config = {};
config.account_host = 'https://account.lab.fiware.org';
config.keystone_host = 'cloud.lab.fiware.org';
config.keystone_port = 4731;
config.app_host = 'localhost';
config.app_port = '10026';
config.username = 'pepProxy';
config.password = 'pepProxy';
// in seconds
config.chache_time = 300;
config.check_permissions = false;
config.magic_key = undefined;
module.exports = config;
Thanks in advance for the time ... :)
There are currently some differences in how both PEP Proxies authenticate and validate against the global instances, so they do not behave in exactly the same way.
The one in telefonicaid/fiware-orion-pep was developed to fulfill the PEP Proxy requirements (authentication and validation against a Keystone and Access Control) in individual projects with their own Keystone and Keypass (a flavour of Access Control) installations, and so it evolved faster than the one in ging/fi-ware-pep-proxy and in a slightly different direction. As an example, the former supports multitenancy using the fiware-service and fiware-servicepath headers, while the latter is transparent to those mechanisms. This development direction meant also that the functionality slightly differs from time to time from the one in the global instance.
That being said, the concrete answer:
- Both PEP Proxies should be able to contact the global instance. If one doesn't, please file a bug in the issues of the GitHub repository and we will fix it as soon as possible.
- The ging/fi-ware-pep-proxy was specifically designed for accessing the global instance, so you should be able to use it as expected.
Please, if you try to proceed with the telefonicaid/fiware-orion-pep take note also that:
- the configuration flag authentication.checkHeaders should be false, as the global instance does not currently support multitenancy.
- the current stable release (0.5.0) is about to change to the next version (probably today), so some of the problems may be solved by the update.
Hope this clarifies some of your doubts.
[EDIT]
1) I have already installed telefonicaid/fiware-orion-pep (v0.6.0) from source and from the RPM package created by following the tutorial available on GitHub. When creating the RPM package, it is created with the name pep-proxy-0.4.0_next-0.noarch.rpm.
2) Here is the configuration that I used:
/opt/fiware-orion-pep/config.js
var config = {};

config.resource = {
    original: {
        host: 'localhost',
        port: 10026
    },
    proxy: {
        port: 1026,
        adminPort: 11211
    }
};

config.authentication = {
    checkHeaders: false,
    module: 'keystone',
    user: '<##################>',
    password: '<###################>',
    domainName: 'admin_domain',
    retries: 3,
    cacheTTLs: {
        users: 1000,
        projectIds: 1000,
        roles: 60
    },
    options: {
        protocol: 'http',
        host: 'cloud.lab.fiware.org',
        port: 4730,
        path: '/v3/role_assignments',
        authPath: '/v3/auth/tokens'
    }
};

config.ssl = {
    active: false,
    keyFile: '',
    certFile: ''
};

config.logLevel = 'DEBUG'; // List of component

config.middlewares = {
    require: 'lib/plugins/orionPlugin',
    functions: [
        'extractCBAction'
    ]
};

config.componentName = 'orion';
config.resourceNamePrefix = 'fiware:';
config.bypass = false;
config.bypassRoleId = '';

module.exports = config;
/etc/sysconfig/pepProxy
# General Configuration
############################################################################
# Port where the proxy will listen for requests
PROXY_PORT=1026
# User to execute the PEP Proxy with
PROXY_USER=pepproxy
# Host where the target Context Broker is located
# TARGET_HOST=localhost
# Port where the target Context Broker is listening
# TARGET_PORT=10026
# Maximum level of logs to show (FATAL, ERROR, WARNING, INFO, DEBUG)
LOG_LEVEL=DEBUG
# Indicates what component plugin should be loaded with this PEP: orion, keypass, perseo
COMPONENT_PLUGIN=orion
#
# Access Control Configuration
############################################################################
# Host where the Access Control (the component who knows the policies for the incoming requests) is located
# ACCESS_HOST=
# Port where the Access Control is listening
# ACCESS_PORT=
# Host where the authentication authority for the Access Control is located
# AUTHENTICATION_HOST=
# Port where the authentication authority is listening
# AUTHENTICATION_PORT=
# User name of the PEP Proxy in the authentication authority
PROXY_USERNAME=XXXXXXXXXXXXX
# Password of the PEP Proxy in the Authentication authority
PROXY_PASSWORD=XXXXXXXXXXXXX
In the files above I have tried the following parameters:
Keystone instance: account.lab.fiware.org or cloud.lab.fiware.org
User: pep or pepProxy or "user from fiware account"
Pass: pep or pepProxy or "user password from account"
Port: 4730, 4731, 5000
The result is the same as before... telefonicaid/fiware-orion-pep is unable to authenticate:
log file at /var/log/pepProxy/pepProxy
time=2015-04-13T14:49:24.718Z | lvl=ERROR | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=VALIDATION-GEN-003] Error connecting to Keystone authentication: KEYSTONE_AUTHENTICATION_ERROR: There was a connection error while authenticating to Keystone: 500
time=2015-04-13T14:49:24.721Z | lvl=DEBUG | corr=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | trans=71a34c8b-10b3-40a3-be85-71bd3ce34c8a | op=/v1/updateContext | msg=response-time: 50745 statusCode: 500
result from the client console
{
"message": "There was a connection error while authenticating to Keystone: 500",
"name": "KEYSTONE_AUTHENTICATION_ERROR"
}
Am I doing something wrong here?

How do I resolve a MongoDB timeout error when connecting via the Scala Play! framework?

I am connecting to MongoDB while using the Scala Play! framework. I end up getting this timeout error:
! @6j672dke5 - Internal server error, for (GET) [/accounts] ->
play.api.Application$$anon$1: Execution exception[[MongoTimeoutException: Timed out while waiting to connect after 10000 ms]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10-2.2.1.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10-2.2.1.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:165) [play_2.10-2.2.1.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:162) [play_2.10-2.2.1.jar:2.2.1]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library-2.10.4.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) [scala-library-2.10.4.jar:na]
Caused by: com.mongodb.MongoTimeoutException: Timed out while waiting to connect after 10000 ms
at com.mongodb.BaseCluster.getDescription(BaseCluster.java:131) ~[mongo-java-driver-2.12.3.jar:na]
at com.mongodb.DBTCPConnector.getClusterDescription(DBTCPConnector.java:396) ~[mongo-java-driver-2.12.3.jar:na]
at com.mongodb.DBTCPConnector.getType(DBTCPConnector.java:569) ~[mongo-java-driver-2.12.3.jar:na]
at com.mongodb.DBTCPConnector.isMongosConnection(DBTCPConnector.java:370) ~[mongo-java-driver-2.12.3.jar:na]
at com.mongodb.Mongo.isMongosConnection(Mongo.java:645) ~[mongo-java-driver-2.12.3.jar:na]
at com.mongodb.DBCursor._check(DBCursor.java:454) ~[mongo-java-driver-2.12.3.jar:na]
Here is my Scala code for connecting to the database:
//models.scala
package models.mongodb
//imports
package object mongoContext {
  //context stuff
  val client = MongoClient(current.configuration.getString("mongo.host").toString())
  val database = client(current.configuration.getString("mongo.database").toString())
}
Here is the actual model that is making the connection:
//google.scala
package models.mongodb
//imports
case class Account(
  id: ObjectId = new ObjectId,
  name: String
)

object AccountDAO extends SalatDAO[Account, ObjectId](
  collection = mongoContext.database("accounts")
)

object Account {
  def all(): List[Account] = AccountDAO.find(MongoDBObject.empty).toList
}
Here's the Play! framework MongoDB conf information:
# application.conf
# mongodb connection details
mongo.host="localhost"
mongo.port=27017
mongo.database="advanced"
MongoDB is running on my local machine; I can connect to it by typing mongo in a terminal window. Here's the relevant part of the conf file:
# mongod.conf
# Where to store the data.
# Note: if you run mongodb as a non-root user (recommended) you may
# need to create and set permissions for this directory manually,
# e.g., if the parent directory isn't mutable by the mongodb user.
dbpath=/var/lib/mongodb
#where to log
logpath=/var/log/mongodb/mongod.log
logappend=true
#port = 27017
# Listen to local interface only. Comment out to listen on all interfaces.
#bind_ip = 127.0.0.1
So what's causing this timeout error and how do I fix it? Thanks!
I figured out that I needed to change:
val client = MongoClient(current.configuration.getString("mongo.host").toString())
val database = client(current.configuration.getString("mongo.database").toString())
to:
val client = MongoClient(conf.getString("mongo.host"))
val database = client(conf.getString("mongo.database"))
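For context on why this fixes it: Play's current.configuration.getString returns an Option[String], so calling .toString() on it most likely produced the literal host name "Some(localhost)", which the driver could never reach, hence the connection timeout. Reading the values through a plain Typesafe Config object (conf above) gives you the String directly. If you would rather keep using Play's Configuration, a rough equivalent is to unwrap the Option instead (the defaults shown here are just the values from application.conf):
val client = MongoClient(current.configuration.getString("mongo.host").getOrElse("localhost"))
val database = client(current.configuration.getString("mongo.database").getOrElse("advanced"))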