Connect Kamailio with PostgreSQL

I have a specific situation where I need to connect Kamailio to a PostgreSQL DB rather than MySQL. Can someone please provide the steps for that? I have tried multiple suggestions from the forum, but they failed.
Problem faced: whenever Kamailio's kamdbctl creates the database in PostgreSQL, it keeps asking for the password and ultimately fails.
Ubuntu version: 16.04 LTS
Kamailio: 5.0
I have done following things so far:
1. Included the Postgres modules
2. Modified kamailio.cfg and added following lines:
#!ifdef WITH_PGSQL
# - database URL - used to connect to database server by modules such
# as: auth_db, acc, usrloc, a.s.o.
#!ifndef DBURL
#!define DBURL "postgres://kamailio:password@localhost/kamailio"
#!endif
#!endif
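For completeness, step 1 (including the Postgres modules) amounts to something like this in kamailio.cfg - a sketch, assuming the db_postgres module is installed (on Ubuntu it comes in the kamailio-postgres-modules package):
#!define WITH_PGSQL
#!ifdef WITH_PGSQL
loadmodule "db_postgres.so"
#!endif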
This is my kamctlrc file:
# The Kamailio configuration file for the control tools.
#
# Here you can set variables used in the kamctl and kamdbctl setup
# scripts. Per default all variables here are commented out, the control tools
# will use their internal default values.
## your SIP domain
SIP_DOMAIN=sip.<DOMAIN>.net
## chrooted directory
# $CHROOT_DIR="/path/to/chrooted/directory"
## database type: MYSQL, PGSQL, ORACLE, DB_BERKELEY, DBTEXT, or SQLITE
# by default none is loaded
#
# If you want to setup a database with kamdbctl, you must at least specify
# this parameter.
DBENGINE=PGSQL
## database host
DBHOST=localhost
## database port
# DBPORT=3306
## database name (for ORACLE this is TNS name)
DBNAME=kamailio
# database path used by dbtext, db_berkeley or sqlite
# DB_PATH="/usr/local/etc/kamailio/dbtext"
## database read/write user
DBRWUSER="kamailio"
## password for database read/write user
DBRWPW="password"
## database read only user
DBROUSER="kamailioro"
Thanks in advance!

Finally, we figured out the issue. It was a small mistake in the .pgpass file, which was causing the authentication problem.
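For anyone hitting the same thing: .pgpass lives in the home directory of the user running kamdbctl and must be chmod 600, otherwise libpq ignores it. The format is hostname:port:database:username:password, one entry per line. A sketch matching the kamctlrc above (the last line, for the postgres superuser that kamdbctl uses to create the database, is an assumption - adjust the user and password to your setup):
localhost:5432:kamailio:kamailio:password
localhost:5432:kamailio:kamailioro:password
localhost:5432:*:postgres:your_superuser_password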

Related

Access remote database federation DB2

I have 2 systems, system A and system B, and both are DB2 servers. I want to be able to access system B's database from system A. Both have a database called TESTDB. I am trying to run the following command to create a server:
CREATE WRAPPER "drdawrapper"
LIBRARY 'libdb2drda.so'
OPTIONS (DB2_FENCED 'Y'
);
db2 "CREATE SERVER "PRD_SERVER_SSL_FLEX" TYPE DB2/UDB VERSION '11' WRAPPER "drdawrapper" AUTHORIZATION "xyz" PASSWORD "xyz" OPTIONS (DB2_CONCAT_NULL_NULL 'Y',DB2_VARCHAR_BLANKPADDED_COMPARISON 'Y',DBNAME 'TESTDB',HOST '169.62.253.230',NO_EMPTY_STRING 'N',PORT '50001',SECURITY 'SSL',STRING_UNITS 'S');"
But I keep getting:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL1101N Remote database "TESTDB" on node "<unknown>" could not be accessed
with the specified authorization id and password. SQLSTATE=08004
Node directory:
db2 list node directory
Node Directory
Number of entries in the directory = 1
Node 1 entry:
Node name = TESTNODE
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = 123.21.23.12
Service name = 50001
The credentials are correct. I am not sure what node is it looking for. Any pointers?
Your question is more about configuration than programming.
As you appear to be encrypting the federated connection it can be wise to first verify that the encrypted connection works at the command-line, separately from federation. This irons out a lot of the detail and is easier to troubleshoot. After you get that working, you can then begin on encrypting the federated connection.
Please follow the detailed instructions in the Db2 documentation (choose the correct Db2 version).
You have to know in advance which kind of SSL/TLS trust verification you want: either a single certificate (the client trusts the server; simplest and easiest) or multiple certificates (both sides trust the other; more setup, arguably more secure), because this determines the configuration.
Ensure both of your Db2 instances and databases are properly configured for SSL.
Catalog the remote node locally with security SSL (db2 catalog tcpip node ... remote ... server ... security ssl).
Catalog the remote database locally on the new node name (db2 catalog database ... at node ...), followed by db2 terminate.
Verify a command-line connect to the remote database using the federated credentials, using the configured db2dsdriver.cfg if using the SSLSERVERCERTIFICATE method, or using the keystore/stash configuration (db2 connect to remotedb user ... using ...). Use the same userid/password that you will use later in the create server command.
Once that command-line connect works, you can proceed with the encrypted federation link, via db2 create wrapper... and db2 create server....
There's no need to use quotes around the wrapper name; just let it fold. Otherwise the quotes are just redundant noise, although they are not a mistake.
Inside the script, in the create server command options, instead of AUTHORIZATION "xyz" PASSWORD "xyz" use AUTHORIZATION \"xyz\" PASSWORD \"xyz\" (i.e. escape the quotes).
For one-sided trust, use SSL_SERVERCERTIFICATE in the create server options clause and ensure the value is accurate (fully qualified path to the remote-db2instance-certificate-file), and that the file/directory permissions are valid.
For mutual trusts, use both SSL_KEYSTORE and SSL_KEYSTASH keywords with correct values, in the create server options clause (having previously ensured your keystores are properly populated, as verified by a command-line connect above).
You may also want to consider create user mapping depending on the requirements.
Finally you can create your nicknames, and test out the federated link by querying those nicknames.
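Putting the steps above together, the sequence looks roughly like this (a sketch for the single-certificate case; node/alias names and the certificate path are placeholders):
db2 catalog tcpip node testnode remote 169.62.253.230 server 50001 security ssl
db2 catalog database testdb as rmtdb at node testnode
db2 terminate
db2 connect to rmtdb user xyz using xyz
db2 "CREATE WRAPPER drdawrapper LIBRARY 'libdb2drda.so' OPTIONS (DB2_FENCED 'Y')"
db2 "CREATE SERVER PRD_SERVER_SSL_FLEX TYPE DB2/UDB VERSION '11' WRAPPER drdawrapper AUTHORIZATION \"xyz\" PASSWORD \"xyz\" OPTIONS (DBNAME 'TESTDB', HOST '169.62.253.230', PORT '50001', SECURITY 'SSL', SSL_SERVERCERTIFICATE '/path/to/server_cert.arm')"
Only after the plain db2 connect succeeds is it worth debugging the create server step.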

PostgREST on Google Cloud SQL: unix socket URI format?

Any of you with experience with PostgREST and Cloud SQL?
I have my SQL instance ready with open access (0.0.0.0/0) and I can access it with a local PostgREST using the Cloud SQL Proxy app.
Now I want to run PostgREST from an instance of the same project, but I can't find a URI format for PostgREST that supports the Cloud SQL format, as Google Cloud SQL uses only unix sockets like /cloudsql/INSTANCE_CONNECTION_NAME.
Config 1
db-uri = "postgres://postgres:password@/unix(/cloudsql/INSTANCE_CONNECTION_NAME)/mydatabase"
db-schema = "api"
jwt-secret = "OOcJ7VoSY1mXqod4MKtb9WCCwt9erJkRQ2tzYmLb4Xe="
db-anon-role = "web_anon"
server-port=3000
Returns {"details":"could not translate host name \"unix(\" to address: Unknown host\n","code":"","message":"Database connection error"}
Config 2
db-uri = "postgres://postgres:password@/mydatabase?unix_socket=/cloudsql/INSTANCE_CONNECTION_NAME"
db-schema = "api"
jwt-secret = "OOcJ7VoSY1mXqod4MKtb9WCCwt9erJkRQ2tzYmLb4Xe="
db-anon-role = "web_anon"
server-port=3000
The parser rejects the unix_socket query parameter:
{"details":"invalid URI query parameter: \"unix_socket\"\n","code":"","message":"Database connection error"}
Config 3
db-uri = "postgres://postgres:password@/mydatabase"
db-schema = "api"
jwt-secret = "OOcJ7VoSY1mXqod4MKtb9WCCwt9erJkRQ2tzYmLb4Xe="
db-anon-role = "web_anon"
server-port=3000
server-unix-socket= "/cloudsql/INSTANCE_CONNECTION_NAME"
server-unix-socket appears to only take a socket lock file path. Feeding it /cloudsql/INSTANCE_CONNECTION_NAME makes PostgREST try to delete the file:
postgrest.exe: /cloudsql/INSTANCE_CONNECTION_NAME: DeleteFile "/cloudsql/INSTANCE_CONNECTION_NAME": invalid argument (The filename, directory name, or volume label syntax is incorrect.)
Documentation
Cloud SQL Doc
https://cloud.google.com/sql/docs/mysql/connect-run
PostgREST
http://postgrest.org/en/v6.0/configuration.html
https://github.com/PostgREST/postgrest/issues/1186
https://github.com/PostgREST/postgrest/issues/169
Environment
PostgreSQL version:11
PostgREST version: 6.0.2
Operating system: Win10 and Alpine
First you have to add the Cloud SQL connection to the Cloud Run instance:
https://cloud.google.com/sql/docs/postgres/connect-run#configuring
After that, the DB connection will be available in the service on a Unix domain socket at path /cloudsql/<cloud_sql_instance_connection_name> and you can set the PGRST_DB_URI environment variable to reflect that.
Here's the correct format:
postgres://<pg_user>:<pg_pass>@/<db_name>?host=/cloudsql/<cloud_sql_instance_connection_name>
e.g.
postgres://postgres:postgres@/postgres?host=/cloudsql/project-id:zone-id-1:sql-instance
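Applied to your Config 1, the whole postgrest.conf would then look like this (a sketch - the instance connection name and credentials are placeholders):
db-uri = "postgres://postgres:password@/mydatabase?host=/cloudsql/project-id:zone-id-1:sql-instance"
db-schema = "api"
jwt-secret = "OOcJ7VoSY1mXqod4MKtb9WCCwt9erJkRQ2tzYmLb4Xe="
db-anon-role = "web_anon"
server-port = 3000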
According to Connecting with Cloud SQL, the example is:
postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_sock=/cloudsql/<INSTANCE_CONNECTION_NAME>/.s.PGSQL.5432
Then you can try with (just as @marian.vladoi mentioned):
db-uri = "postgres://postgres:password@/mydatabase?unix_socket=/cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432"
Keep in mind that the instance connection name should include:
ProjectID:Region:InstanceID
For example: myproject:myregion:myinstance
Anyway, you can find more options here for connecting from external applications and from within Google Cloud.
I tried many variations but couldn't get it to work out of the box, so I'll post this workaround.
FWIW I was able to use an alternate socket location with PostgREST locally, but when trying to use the Cloud SQL location it doesn't seem to interpret it right - perhaps the colons in the socket path are throwing it off?
In any case, as @Steve_Chávez mentions, this approach does work: db-uri = "postgres://user:password@/dbname", which defaults to the default socket location (/run/postgresql/.s.PGSQL.5432). So in the Docker entrypoint we can symlink this location to the actual socket injected by Cloud Run.
First, add the following to the Dockerfile (above USER 1000):
RUN mkdir -p /run/postgresql/ && chown postgrest:postgrest /run/postgresql/
Then add an executable file at /etc/entrypoint.sh containing:
#!/bin/bash
set -eEuo pipefail
CLOUDSQL_INSTANCE_NAME=${CLOUDSQL_INSTANCE_NAME:-PROJECT_REGION_INSTANCE_NAME}
POSTGRES_SOCKET_LOCATION=/run/postgresql
ln -s /cloudsql/${CLOUDSQL_INSTANCE_NAME}/.s.PGSQL.5432 ${POSTGRES_SOCKET_LOCATION}/.s.PGSQL.5432
postgrest /etc/postgrest.conf
Change the Dockerfile entrypoint to CMD /etc/entrypoint.sh. Then add CLOUDSQL_INSTANCE_NAME as an env var in Cloud Run. The PGRST_DB_URI env var is like so: postgres://authenticator:password@/postgres
An alternative approach, if you don't like this, would be to connect via a serverless VPC connector.
I struggled with this too.
I ended up using a keyword/value one-liner for the db-uri env variable:
host=/cloudsql/project-id:zone:instance-id user=user port=5432 dbname=dbname password=password
However, I have PostgREST running on Cloud Run, which lets you specify the instance connection name via:
INSTANCE_CONNECTION_NAME=/cloudsql/project-id:zone:instance-id
Maybe you can host it there and end up doing it serverless - I'm not sure where you are running it currently.
https://cloud.google.com/sql/docs/mysql/connect-run

Informix dbserver connections in sqlhosts via perl

I want to add a new Informix server entry into sqlhosts, but I'm not quite sure how it will impact the existing connection.
Currently sqlhosts contains only one server entry...
dbserver onsoctcp 111.111.111.20 7101
The database handle is created within an existing Perl module (db is a database on the server)...
my $dsn = "DBI:Informix:db";
my $dbh = DBI->connect($dsn,"user","password");
Notice that "dbserver" is never referenced.
I want to add a test server to sqlhosts. Something like this...
dbserver onsoctcp 111.111.111.20 7101
dbserver_test onsoctcp 111.111.111.21 7101
With only one entry in sqlhosts, everything has been working fine. But my connection never references the server name in sqlhosts.
So, my question(s)...
Does Informix just try to use the only one available?
Will adding a second server entry in sqlhosts force me to include the server name in the connection string?
Thanks!
The Informix client uses environment variables to resolve hosts and other configuration; check that INFORMIXDIR is set to the path where the Informix CSDK is installed (I assume it is), and set INFORMIXSERVER to point to the new entry in sqlhosts. See this article in the IBM knowledge base.
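A sketch of the environment-variable route (assuming DBD::Informix is installed and dbserver_test is the new sqlhosts entry):
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Point the client at the test server from sqlhosts;
# this must be set before DBI->connect is called.
$ENV{INFORMIXSERVER} = 'dbserver_test';

my $dbh = DBI->connect('DBI:Informix:db', 'user', 'password')
    or die "Cannot connect: $DBI::errstr";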
Alternatively, use the db@server data source format (single quotes, so Perl does not interpolate @server):
my $dbh = DBI->connect('DBI:Informix:db@server', 'user', 'password');
Maybe it is a permissions issue? From the documentation:
Note that you might also be able to connect to other databases not
listed by DBI->data_sources using other notations to identify the
database. For example, you can connect to "dbase@server" if "server"
appears in the sqlhosts file and the database "dbase" exists on the
server and the server is up and you have permission to use both the
server and the database on the server and so on. Also, you might not
be able to connect to every one of the databases listed if you have
not been given at least connect permission on the database. However,
the databases listed by the DBI->data_sources method certainly exist,
and it is legitimate to try connecting to those sources.
http://search.cpan.org/~johnl/DBD-Informix-2013.0521/Informix.pm

How do I get started if I want to use PostgreSQL for local use?

Good day,
Currently I use MS Access at home for several databases (for personal use).
At work, I use PostgreSQL, which is infinitely better. I want to start using Postgres for my personally used databases, but I don't know where to start.
I've tried reading the documentation, but still don't know how to start. I don't have a server at home; is it possible I can just make a local database/tablespace? Or would I have to host a virtual server?
Note that I am willing to use other open source databases if there is an easy option out there - MS Access is just so... terrible.
Thanks,
So, it seems you have Windows at home. You just need to download the full installer for PostgreSQL:
http://www.postgresql.org/download/windows/
After installation it will automatically register the Postgres server as a service on the local machine. That means the server will always run in the background, but you can disable that later, or just uninstall.
After that, you can use pgAdmin (included in the default installation package) or other client tools to access the DB engine.
UPD: in pgAdmin, create a connection with these settings:
'localhost' as hostname;
port - 5432;
user, database - postgres (for testing purposes only - you should create your own user and tables with restricted rights later).
The password for postgres (that is, the DB admin user) is set during the installation process.
Server settings are stored here:
"C:\Program Files\PostgreSQL\9.3\data"
pg_hba.conf - Client Authentication Configuration File
postgresql.conf - Configuration File
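As a sketch of that last step (names are placeholders), once connected as postgres you can create your own restricted user and database from pgAdmin's query tool or psql:
-- create a regular user and a database owned by it
CREATE ROLE myuser LOGIN PASSWORD 'mypassword';
CREATE DATABASE mydb OWNER myuser;
-- then connect as: psql -h localhost -p 5432 -U myuser -d mydb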

Moving ejabberd default DB to MySQL shows authentication failure

I am trying to set up an ejabberd server on my Amazon EC2 Ubuntu instance.
With the default DB provided by ejabberd, I can easily set up my connection. But I need to replace the Mnesia DB with MySQL. I found some tutorials on the internet and from them worked out a solution, which I will explain step by step.
I am using ejabberd 2.1.11. I made the following changes to the ejabberd.cfg file:
Commented the following line:
{auth_method, internal}
Uncommented this:
{auth_method, odbc}
Configured my MySQL DB
{odbc_server, {mysql, "localhost", "students", "root", ""}}. %% no password set
Change mod_last to mod_last_odbc
Change mod_offline to mod_offline_odbc
Change mod_roster to mod_roster_odbc
Change mod_private to mod_private_odbc
Change mod_privacy to mod_privacy_odbc
Change mod_pubsub to mod_pubsub_odbc
Change mod_vcard to mod_vcard_odbc
Then I installed the ejabberd-mysql driver from the following link:
http://stefan-strigler.de/2009/01/14/ejabberd-mysql-drivers-for-debian-and-ubuntu/
After making all these changes I restarted my ejabberd server.
Then I tried to log in to my ejabberd server. It shows me the login prompt.
After entering the credentials it takes a long time and then displays an authentication failure.
Any help on the topic is appreciated.
Let's dig into the problem.
Your setup is working, which means your config file is fine. But then:
Why does auth fail?
What schema do you have in your students database?
If you have a proper schema installed, is the user present in your DB's users table?
Have you also updated conf/odbc.ini with the proper MySQL details?
Even if both conditions are met, I'd advise you to set a MySQL password and try again.
Let me know if that helps or not.
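On the schema point: the ODBC modules expect ejabberd's MySQL schema to be loaded into the students database. A sketch (the schema file path is an assumption - in the 2.1.x source tree it usually sits under src/odbc/mysql.sql):
mysql -u root students < /path/to/ejabberd/src/odbc/mysql.sql
mysql -u root -e "SELECT username FROM students.users;"
If the users table is empty, register a user first (see the note at the end about ejabberdctl register).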
Update:
Update your config with {loglevel, 5}, then hit the login and tail all the log files.
odbc.ini
[ejabberd]
Driver = MySQL
DATABASE = students
PWD =
SERVER = localhost
SOCKET = /tmp/mysql.sock
UID = root
One major but basic part that one can easily miss is that data previously stored in the Mnesia database will no longer be available with your new configuration, so you again have to create an admin user to access your admin account, like this (the host argument is your XMPP domain):
./ejabberdctl register admin yourdomain.com "password"