I am trying to set up an ejabberd server on my Amazon EC2 Ubuntu instance.
With the default database provided by ejabberd (Mnesia), I can easily set up my connection, but I need to replace Mnesia with MySQL. From several tutorials I found on the internet I pieced together a solution, which I will explain step by step.
I am using ejabberd 2.1.11. I made the following changes in the ejabberd.cfg file:
Commented out the following line:
{auth_method, internal}.
Uncommented this:
{auth_method, odbc}.
Configured my MySQL DB (no password set):
{odbc_server, {mysql, "localhost", "students", "root", ""}}.
Changed mod_last to mod_last_odbc
Changed mod_offline to mod_offline_odbc
Changed mod_roster to mod_roster_odbc
Changed mod_private to mod_private_odbc
Changed mod_privacy to mod_privacy_odbc
Changed mod_pubsub to mod_pubsub_odbc
Changed mod_vcard to mod_vcard_odbc
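For reference, the relevant entries in the modules section of ejabberd.cfg then look roughly like this (an excerpt only; the other modules and their option lists stay as they were, and mod_pubsub_odbc keeps its existing pubsub options):

{modules,
 [
  {mod_last_odbc,    []},
  {mod_offline_odbc, []},
  {mod_privacy_odbc, []},
  {mod_private_odbc, []},
  {mod_pubsub_odbc,  [...]},
  {mod_roster_odbc,  []},
  {mod_vcard_odbc,   []},
  ...
 ]}.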
Then I installed the ejabberd-mysql driver from the following link:
http://stefan-strigler.de/2009/01/14/ejabberd-mysql-drivers-for-debian-and-ubuntu/
After making all these changes I restarted my ejabberd server.
Then I tried to log in to my ejabberd server. It shows me the login prompt, but after entering the credentials it takes a long time and then displays "authentication failed".
Any help on the topic is appreciated.
Let's dig into the problem.
Your setup is working, which means your config file is fine. But then why does auth fail?
What schema do you have in your students database?
If you have the proper schema installed, is the user present in your DB's users table? (See the sketch just below for a quick way to check both.)
Have you also updated conf/odbc.ini with the proper MySQL details?
If both conditions are met, then I'd advise you to set a MySQL password and try again.
Let me know if that helps or not.
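A minimal sanity check from the shell, assuming the standard ejabberd MySQL schema (the mysql.sql file shipped in the ejabberd sources) was loaded into the students database:

mysql -u root students -e "SHOW TABLES;"
mysql -u root students -e "SELECT username FROM users;"

An empty SHOW TABLES means the schema was never loaded (mysql -u root students < mysql.sql fixes that); an empty users table means the account you are logging in with doesn't exist on the ODBC side yet.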
Update:
Update your config with {loglevel, 5}, then hit the login and tail all the log files.
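For example (the log location is an assumption; Debian/Ubuntu packages usually put the logs under /var/log/ejabberd):

tail -f /var/log/ejabberd/ejabberd.log /var/log/ejabberd/sasl.log

The ODBC/MySQL error that shows up at loglevel 5 usually tells you whether the problem is the connection, the schema, or the credentials.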
odbc.ini
[ejabberd]
Driver = MySQL
DATABASE = students
PWD =
SERVER = localhost
SOCKET = /tmp/mysql.sock
UID = root
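You can check this DSN outside ejabberd with unixODBC's isql tool (assuming unixODBC and the MySQL ODBC driver are installed):

isql -v ejabberd root ""

If isql can't connect, ejabberd never will, so fix the DSN or the MySQL grants first.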
One major point that is easy to miss: data that was previously stored in the Mnesia database will no longer be available under your new configuration, so you have to create an admin user again to access your admin account. Note that ejabberdctl register also takes the virtual host name (replace yourdomain.com with yours):
./ejabberdctl register admin yourdomain.com "password"
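You can confirm the account exists afterwards (again with yourdomain.com standing in for your virtual host):

./ejabberdctl registered_users yourdomain.com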
Related
I am using the pg lib in a Strapi application. Initially it creates the Postgres connection using the correct PostgreSQL username (postgres), database name (strapi_db) and password (postgres), but after login it changes to connecting with my Windows 10 username (rayappan.a, with rayappan.a as the database too). This seems strange to me because I never configured it anywhere to use my Windows credentials for the PostgreSQL connection. Please can anyone tell me how to fix this username connection issue?
Regards,
Rayappan Antoniraj
Take a look at https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING. It states that the user name:
"Defaults to be the same as the operating system name of the user running the application."
And the database name:
"Defaults to be the same as the user name."
So it seems a new connection is being made with these parameters not set.
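A quick way to see both the fallback and the fix from the shell (values taken from the question):

psql
psql "host=localhost dbname=strapi_db user=postgres password=postgres"

The first call falls back to the OS user name for both user and database; the second overrides every default explicitly. The fix in the Strapi app is the same idea: make sure every connection it opens after login carries explicit user and database settings instead of relying on libpq defaults.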
I have a pgAdmin (dpage/pgadmin4:4.29) container running in Kubernetes. As the master user I have added database connections and shared them, and I can disconnect/reconnect to the database without a password.
But for the additional users I have created, the passwords are not getting saved, even though they selected the Save Password option at the time of connection. pgAdmin keeps asking for the password when connecting to the DB.
What am I missing in my setup?
NobleUplift's issue of the "Save Password" checkbox being disabled can be caused by certain config flags in the config.py file. Remembering passwords for SSH tunneling is disabled by default, for example.
You can re-enable the checkbox by writing
ALLOW_SAVE_TUNNEL_PASSWORD = True # SSH tunnel password saving, default False
ALLOW_SAVE_PASSWORD = True # database password saving, default True
to a new config_local.py file in the same directory where you find your main config.py file. pgAdmin discourages writing to the main config.py file directly. See the docs for more details about the preferred config files and where to find them. (I found mine under "pgAdmin 4\v6\web\config.py", not where the docs said.)
This seems to be where the 'disable password saving' idea came from.
Apologies, roy, for not answering your question directly, but I didn't have the ability to respond directly to Noble. However, the config docs also mention that the ENHANCED_COOKIE_PROTECTION flag can interfere with Kubernetes (and other auth settings), which might be worth a look.
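If you want to try this quickly inside the dpage/pgadmin4 container, something like the following should work (the /pgadmin4/ path is an assumption based on that image's layout, and <pgadmin-pod> is a placeholder; changes made this way are lost on restart, so mount config_local.py via a ConfigMap for a permanent fix):

kubectl exec -it <pgadmin-pod> -- sh -c 'printf "ALLOW_SAVE_PASSWORD = True\nALLOW_SAVE_TUNNEL_PASSWORD = True\n" > /pgadmin4/config_local.py'

Then restart the pod so pgAdmin picks up the file.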
For a system that I am using, I run into the problem pasted in the title when I try to drop a database and recreate it. More specifically, this is the exact problem that I am facing:
Couldn't drop staging_databse : #<PG::ConnectionBad: FATAL: no pg_hba.conf entry for host xxx.xx.xxxx.xxx, user "ruby", database "postgres", SSL off
I've done some research regarding this problem and it seems that the solution is simply to turn on SSL. I've consulted the PostgreSQL documentation on pg_hba.conf, but I'm unable to find this configuration file.
I ran locate postgresql and noticed that there is a postgresql-client-9.2 installed on the system. From what I have determined, I won't find this pg_hba.conf file because the client doesn't ship it. I've also looked for the postgresql.conf file with the SSL settings according to the official documentation, but this file is not included either.
Finally, the documentation shows me this configuration option of the pgsql 9.2 client, quoted below:
libpq reads the system-wide OpenSSL configuration file. By default,
this file is named openssl.cnf and is located in the directory reported by
openssl version -d. This default can be overridden by setting environment
variable OPENSSL_CONF to the name of the desired configuration file.
However, this too isn't on my system. I've run the Linux find command and this file doesn't seem to be there. I've run out of leads, and I have a sneaking suspicion that I am overlooking something very simple. Are there any other leads I can go on? Thanks.
Based on this message and the rest of the context:
FATAL: no pg_hba.conf entry for host xxx.xx.xxxx.xxx, user "ruby", database "postgres", SSL off
It looks plausible that:
you're connecting to a remote PostgreSQL instance which you don't administrate, because you're acting as a developer, not an admin.
the db management layer tries to connect to the database named postgres in order to drop another database (staging_database). This is indeed necessary, because we can't drop a database while we're connected to it (in fact, a database can't be dropped while anyone is connected to it).
the admin policy established by the remote pg_hba.conf is such that your login and IP address together are not allowed to connect to the database named postgres.
Taken together, these facts imply that you're missing the rights needed to drop your database, even if only indirectly.
At this point you want to submit the problem to the admin responsible for that PostgreSQL server.
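For reference, what the admin would add on the server side is a single pg_hba.conf line along these lines (a hypothetical sketch; the address is kept masked as in the question, and the auth method depends on the server's policy):

# TYPE   DATABASE  USER  ADDRESS             METHOD
host     postgres  ruby  xxx.xx.xxxx.xxx/32  md5

Using hostssl instead of host would additionally require the client to connect with SSL turned on, which matches the "SSL off" hint in the error message.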
I am trying to connect to my MongoDB deployed in the Jelastic cloud.
If I try to use the test database already provided in the MongoDB node in Jelastic, it works fine. But if I create my own database and try to access the collections created in it, I get the following exception:
com.mongodb.MongoException: unauthorized db:appdb lock type:-1 client:192.168.1.53
Why is this happening? How can I resolve it?
I am reading the configuration from a file mydb.cfg
host=mongodb-***.jelastic.servint.net
dbname=appdb
user=admin
password=*****
In the Rock Mongo web interface, pick the targeted DB and go to 'More' in the config panel.
This will show you the list of users having rights for the DB.
Did you set the user and rights for your custom database?
Try to check the configuration under the 'Authentication' section.
Anyway, the admin user should supposedly have rights to all DBs. You can try to figure this issue out with the Jelastic community.
I had the same issue when connecting to a MongoDB custom-named database.
In order to succeed with the connection, I created a user for my custom-named database.
(I added the provided admin user with its password to the authorized users, as the image showed.)
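In case it helps, on an older 2.x MongoDB (which this era of Jelastic likely shipped), creating such a user from the mongo shell looks roughly like this (appuser/appPassword are placeholders; on MongoDB 2.6+ you would use db.createUser instead of db.addUser):

mongo mongodb-***.jelastic.servint.net/admin -u admin -p
> use appdb
> db.addUser("appuser", "appPassword")

Then connect to appdb with those credentials instead of the admin ones.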
I stopped my DB using db2stop force, then started it again, did a backup, and restarted it. After that I cannot connect to the DB from a client anymore. Using the command
db2 connect to "dbname" using "user"
I get:
SQL30082N Security processing failed with reason "42" ("ROOT CAPABILITY REQUIRED"). SQLSTATE=08001
The password and username are correct. When I'm on the server, connecting using the command
db2 connect to "dbname"
or
db2 connect to "dbname" user "user"
or
db2 connect to "dbname" user db2inst1
works just fine.
I'm really confused. Any help is much appreciated.
Thanks.
What I have tried so far:
db2 get dbm cfg | grep -i auth

 GSS Plugin for Local Authorization          (LOCAL_GSSPLUGIN) =
 Server Connection Authentication            (SRVCON_AUTH) = NOT_SPECIFIED
 Database manager authentication             (AUTHENTICATION) = SERVER
 Cataloging allowed without authority        (CATALOG_NOAUTH) = NO
 Trusted client authentication               (TRUST_CLNTAUTH) = CLIENT
 Bypass federated authentication             (FED_NOAUTH) = NO
I also tried switching authentication to CLIENT, using
db2 update dbm cfg using authentication client
but that did not help.
Update:
Despite the age of this question, it would be wonderful to have a solid answer to it. Hi locojay, how did you manage? :-)
I'm having the SQL30082N reason code 24 issue on my Windows PC, and today we experienced the same issue on an AIX server.
I googled for a couple of hours and found only one happy answer, related to having users with the same name both on the server and on the client.
IMO it does not apply to me, as I'm running in a VBox that's isolated from the domain (no network).
My case: I installed DB2 as user db2admin, no security. Then I granted DBADM to VIRTUALUSR01 and gave this user a password.
db2 connect to TheBase
works fine. But
db2 connect to TheBase user VIRTUALUSR01 using TheRightPassword
returns SQL30082N with reason code 24.
Using client authentication is generally a Bad Idea(TM). That's because you now rely on machines that you may not control for authentication. If I wanted to subvert your system, I could create a new user locally, say, db2inst1 or VIRTUALUSR01 or Administrator, with a password I know, and then, use that to wreak havoc on the database. If, however, no one in your organisation has root/administrator authority over their own machines, client authentication can be made to work. But all it takes is someone plugging in their own personal laptop, and your database could be at risk.
Instead, check the permissions of the files. If you've installed as root, ~db2inst1/sqllib/security/db2c[hk]pw (assuming an instance ID of db2inst1) should be setuid root. If not, run db2iupdt against your instance (./db2iupdt db2inst1), which should fix the permissions.
If you've installed without root authority ("non-root install"), which I doubt, since you seem to have had this working, you would need to read the DB2 documentation on non-root installations and their limitations - I don't use non-root installs myself, so I'm not so familiar with them. However, there should be a set-root script that you can use to enable setuid root which, of course, you have to run as root.
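A quick way to check and repair the permissions described above (a sketch; the install path /opt/ibm/db2/V9.7/instance is an example and will differ by version and platform):

ls -l ~db2inst1/sqllib/security/db2chpw ~db2inst1/sqllib/security/db2ckpw
# healthy entries look like -r-sr-xr-x ... root ... (the s is the setuid bit)
cd /opt/ibm/db2/V9.7/instance
./db2iupdt db2inst1    # run as root if the setuid bit is missing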
I had the same problem and solved it in the following way.
The problem occurs because of the /etc/shadow file. If the user's password hash was created with SHA, then DB2 cannot authenticate or authorize that user. You need MD5 for hashing that user's password.
If you are using Fedora or Red Hat Linux, first change the password hashing method with:
# authconfig --passalgo=md5 --update
Then drop and recreate the user:
# userdel userName
# useradd userName
# passwd userName
If you are using AIX or a Linux distro without authconfig, that route won't work. So instead of passwd userName, issue this command (the -1 flag makes openssl emit an MD5 crypt hash rather than the old DES format):
# usermod --password `openssl passwd -1 desiredPassword`
After that, the password hash belonging to userName will be generated with MD5.
Now grant database privileges to that user:
# su - db2inst1
(db2inst1)$ db2 connect to databaseName
(db2inst1)$ db2 GRANT DBADM with dataaccess with accessctrl on database to user userName
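You can then verify the fix by connecting with the new credentials:

db2 connect to databaseName user userName using desiredPassword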
I hope it works for you too.
Thanks to Honza for his solution.
Solutions to specific problem causes described previously in this message are:
1. Run DB2IUPDT <InstName> to update the instance.
2. Ensure that the username created is valid. Review the DB2 General Naming Rules.
3. Ensure that catalog information is correct.