I am using IAM authentication to connect to Amazon RDS. Since the auth token expires every 15 minutes, I am using HikariConfigMXBean to update the credentials every 14 minutes (1 minute before the token actually expires).
I have been referring to code from other folks, and I have seen people call softEvictConnections() after refreshing the credentials.
As per the documentation, softEvictConnections() removes all pre-existing connections, and the pool then creates new connections using the fresh credentials.
When I tested this, I verified that an older connection created with the old auth token (which has now expired) continues to work.
For reference, below is the piece of code:
import com.zaxxer.hikari.HikariConfigMXBean;
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.HikariPoolMXBean;

void updateHikariCredentials(HikariDataSource dataSource, String userName, String password)
{
    // Update the username and password used for new connections.
    HikariConfigMXBean configBean = dataSource.getHikariConfigMXBean();
    configBean.setUsername(userName);
    configBean.setPassword(password);

    HikariPoolMXBean pool = dataSource.getHikariPoolMXBean();
    if (pool != null) {
        pool.softEvictConnections(); // <-- Why is this needed?
    }
}
I am trying to understand: why do the existing connections need to be evicted?
I already set maxConnectionAge for my connections. Is there any additional advantage to forcefully evicting old connections on a password update that I am missing?
I am trying to understand: why do the existing connections need to be evicted? [...] Is there any additional advantage to forcefully evicting old connections on a password update that I am missing?
There isn't.
Authentication happens when the database connection is established. Once established, the connection is not re-authenticated. By throwing away all established connections, you're just adding overhead.
The goal of short-lived credentials is to prevent an attacker from exfiltrating credentials and then using them to open a connection. If they manage to open that connection during the time that the credentials are valid, there's nothing you can do to stop them accessing your data short of manually killing the back-end process.
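If it helps, here is a minimal sketch of a refresh loop that only swaps the pool's password and never evicts anything. It assumes the AWS SDK for Java v2 (software.amazon.awssdk:rds) is on the classpath; the region, host, and user values are placeholders for your setup:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.zaxxer.hikari.HikariDataSource;

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rds.RdsUtilities;
import software.amazon.awssdk.services.rds.model.GenerateAuthenticationTokenRequest;

void scheduleTokenRefresh(HikariDataSource dataSource, String host, int port, String userName)
{
    RdsUtilities rdsUtilities = RdsUtilities.builder()
            .region(Region.US_EAST_1) // placeholder region
            .credentialsProvider(DefaultCredentialsProvider.create())
            .build();

    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(() -> {
        // Generate a fresh IAM auth token (valid for 15 minutes).
        String token = rdsUtilities.generateAuthenticationToken(
                GenerateAuthenticationTokenRequest.builder()
                        .hostname(host)
                        .port(port)
                        .username(userName)
                        .build());

        // Only new connections authenticate with this; established connections
        // keep working, so no softEvictConnections() call is needed.
        dataSource.getHikariConfigMXBean().setPassword(token);
    }, 0, 14, TimeUnit.MINUTES);
}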
Related
I am using PostgreSQL for my web application. I have put session.close() in each Hibernate call in the application code, but the Postgres connections are not closing, which breaks the application once the connection count reaches the threshold. I need to restart my Postgres service to get it up again.
I am using Apache Tomcat as the web server.
How can I solve this? Please suggest a permanent fix.
Code snippet below:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
try {
    // ... application logic ...
    tx.commit();
} finally {
    if (session != null) session.close();
}
Are you using connection pooling, or connecting directly to the database through Hibernate? If you are using connection pooling, please check its configuration first. And if you could post some of your code, it will be easier for us to debug and find the issue.
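For reference, the usual pattern is to make close() unconditional so the connection always goes back to the pool, even when an exception is thrown. A minimal sketch, assuming Hibernate 5.2+ (where Session implements AutoCloseable); building the SessionFactory is elided:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

void doWork(SessionFactory sessionFactory)
{
    try (Session session = sessionFactory.openSession()) {
        Transaction tx = session.beginTransaction();
        try {
            // ... application logic ...
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback(); // don't leave an open transaction on the connection
            throw e;
        }
    } // session.close() runs here, returning the connection to the pool
}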
In the PostgreSQL documentation (https://www.postgresql.org/docs/10/libpq-connect.html), it is said that multiple hosts can be specified in a single connection string, such that all the hosts are tried in order until one of them succeeds.
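For example (hypothetical host names), the libpq URI form with a fallback host looks like:
postgresql://server1:5432,server2:5432/mydb?target_session_attrs=any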
But when I tried to implement the same setting in the <connectionStrings> tag in my ASP.NET web.config file, it throws a "no such host name" error. I am using the Npgsql provider to connect to the PostgreSQL database.
I need to add multiple server names in the connection string, such that if server #1 fails it immediately tries the next server (#2), and so on in order until one succeeds.
Can you please suggest how multiple hosts can be provided in the connection string?
The Npgsql driver does not currently support this functionality. The issue tracking this is https://github.com/npgsql/npgsql/issues/732; I'm still hoping we can get this into the next release, but there's a lot going on.
Load balancing and failover are available in Npgsql version 6. At the time of writing, v6 is in preview.
Simple failover example (server2 is only used if a connection could not be established to server1):
Host=server1,server2;Username=test;Password=test
Example with load balancing (round robin, I guess):
Host=server1,server2,server3,server4,server5;Username=test;Password=test;Load Balance Hosts=true;Target Session Attributes=prefer-standby
https://www.npgsql.org/doc/failover-and-load-balancing.html
(I'm afraid I'm probably about to reveal myself as completely unfit for the task at hand!)
I'm trying to set up a Redshift cluster and database to help manage data for a class/group project.
I have a dc2.large cluster running with either default options or what looked like the most generic choices in the couple of places where I was forced to make entries.
I have downloaded Aginity (Win64), as it is described as being specialized for Redshift. That said, I can't find any instructions for connecting using it. The connection dialog requests the following:
Server: using the endpoint for my cluster (less :57xx at the end).
UserID: the Master username for the database defined for the cluster.
Password: to match the UserID
SSL Mode (Disable, Allow, Prefer, Require): trying various options
Database: as named in cluster setup
Port: as defined in cluster setup
I can't get it to connect ("failed to establish connection") and don't know if I'm entering something wrong in Aginity or if I haven't set up my cluster properly.
Message: Failed to establish a connection to 'abc1234-smtm.crone7m2jcwv.us-east-1.redshift.amazonaws.com'.
Type : Npgsql.NpgsqlException
Source : Npgsql
Trace : at Npgsql.NpgsqlClosedState.Open(NpgsqlConnector context, Int32 timeout)
at Npgsql.NpgsqlConnector.Open()
at Npgsql.NpgsqlConnection.Open()
at Aginity.MPP.Common.BaseDataProvider.get_Connection()
at Aginity.MPP.Common.BaseDataProvider.CreateCommand(String commandText, CommandType commandType, IDataParameter[] commandParams)
at Aginity.MPP.Common.BaseDataProvider.ExecuteReader(String commandText, CommandType commandType, IDataParameter[] commandParams)
--- Inner Exception: ---
......
It seems there is not enough information going into Aginity to authorize a connection to my cluster - no account credentials are supplied. For UserID, am I meant to enter the ID of a valid user? Can I use the root account? What would the ID look like? I have set up a user with FullAccess to S3 and Redshift, then entered the UserID in this format:
arn:aws:iam::600123456789:user/john
along with the matching password, but that hasn't worked either.
The only training/tutorial I have been able to find and do on this is the intro that AWS directs you to, at https://qwiklabs.com/focuses/2366, which uses a web-based client (pgweb) that I can't find outside of the tutorial.
Any advice on what I am doing wrong, and how to do it right?
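A plain JDBC connection is one way to test the same values outside Aginity. A minimal sketch, assuming the Amazon Redshift JDBC driver is on the classpath; the port, database name, and credentials below are placeholders for the values from your cluster setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RedshiftConnectionCheck
{
    public static void main(String[] args) throws Exception
    {
        // Endpoint from the cluster console; port and database are placeholders.
        String url = "jdbc:redshift://abc1234-smtm.crone7m2jcwv.us-east-1.redshift.amazonaws.com:5439/mydb?ssl=true";
        try (Connection conn = DriverManager.getConnection(url, "masteruser", "MasterPassword1");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1"))
        {
            rs.next();
            System.out.println("Connected: " + rs.getInt(1)); // auth and networking are fine
        }
    }
}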
Well, I think I got it working - I haven't had a chance to see if I can actually create a table yet, but it seems to be connected. I had to allow inbound traffic from outside the VPC, via an inbound rule on the cluster's security group.
I'm guessing there's a better way than opening it up to all IP addresses, but I don't know the users' (fellow team members') IPs, and aren't they all subject to change depending on the device they're using to connect?
How does one go about getting inside the VPC to connect that way, presumably more securely?
I tried to establish a connection to SQL DB using DataWorks Forge, and I got the following error.
Unable to establish a connection using the supplied values.Check that all values are correct and try again. Internal Details: Failed to send the request to the handler: The agent at yp-iis-dataworks-ga-wdc01-2-12-0-0-5-vm5:31531 is not available.; nested exception is: com.ibm.iis.prs.exception.CommunicationException: Failed to send the request to the handler: The agent at yp-iis-dataworks-ga-wdc01-2-12-0-0-5-vm5:31531 is not available.
I input the values based on VCAP_SERVICES and double-checked them. How can I troubleshoot this?
Connection name sqldb1
Host 75.126.155.1xx
Database SQLDB
User user06xxx
Port 50000
Password xxx
Today I did the same thing as when I posted this question, and I could establish the connection successfully.
As Nigel mentioned, the DataWorks service status was green when I posted. But maybe there were some issues, and they have been fixed now.
We have a Mongo Database for testing purposes on a cloud server.
Recently, this server almost ran out of space (97% of disk used), which resulted in Mongo writes failing. I decided to resize the server to have more free space.
An important detail is that I set the auth parameter in the config to true, so each client had to authenticate before using the db. I thought this was normal. I created a user with the following command, which worked:
db.addUser({ user: "username", pwd: "password", roles: [ "userAdminAnyDatabase" ] });
Now what happened is that, after the resize and the Mongo restart, I cannot get any reads or writes to the database unless I set the auth = false parameter in the config. I couldn't even add a user from localhost.
The other interesting thing was that I switched auth off and recreated the same user - it succeeded, which means the user got lost!
OK, I have lost the user after a restart. That's bad. What's worse is that I still can't get this user to auth from the remote clients.
I have no idea why this is happening or what went wrong.
The data I originally intended to create still exists: count() returns 111090914, which is about what is expected. I can also do find(), so the data is OK.