I have a Java EE app that runs in Tomcat, uses Tomcat connection pooling over a PostgreSQL database, and uses OpenJPA for ORM.
For some operations, I need access to the LargeObjectManager API from PostgreSQL, which requires a reference to the actual PostgreSQL connection object.
So I'm trying to drill into the connection object I get from OpenJPA to reach the actual PostgreSQL one. But I'm stopped by a 'com.sun.proxy.$ProxyNN' object, and I'm not sure how I can get further.
Here is what I'm doing now:
OpenJPAEntityManager oem = OpenJPAPersistence.cast(em);
// get open JPA connection object
Connection c = oem.getConnection();
// all the way down to what's no longer OpenJPA
c = ((DelegatingConnection) c).getInnermostDelegate();
And that's where I stop. The returned object is below. I can't call ProxyConnection methods on it, and I don't even know who creates these proxies (but I bet it's Tomcat DBCP).
c = {com.sun.proxy.$Proxy5#1793}"ProxyConnection[PooledConnection[Pooled connection wrapping physical connection org.postgresql.jdbc4.Jdbc4Connection#a7a434]]"
h = {org.apache.tomcat.jdbc.pool.ProxyConnection#5671}"ProxyConnection[PooledConnection[Pooled connection wrapping physical connection org.postgresql.jdbc4.Jdbc4Connection#a7a434]]"
connection = {org.apache.tomcat.jdbc.pool.PooledConnection#5673}"PooledConnection[Pooled connection wrapping physical connection org.postgresql.jdbc4.Jdbc4Connection#a7a434]"
poolProperties = {org.apache.tomcat.jdbc.pool.PoolProperties#5676}"ConnectionPool[defaultAutoCommit=false; defaultReadOnly=null; defaultTransactionIsolation=-1; defaultCatalog=null; driverClassName=null; maxActive=100; maxIdle=100; minIdle=10; initialSize=10; maxWait=30000; testOnBorrow=false; testOnReturn=false; timeBe...
connection = {com.sun.proxy.$Proxy0#5677}"Pooled connection wrapping physical connection org.postgresql.jdbc4.Jdbc4Connection#a7a434"
h = {org.postgresql.ds.jdbc23.AbstractJdbc23PooledConnection$ConnectionHandler#5686}
xaConnection = null
abandonTrace = null
timestamp = 1372621172141
lock = {java.util.concurrent.locks.ReentrantReadWriteLock#5678}"java.util.concurrent.locks.ReentrantReadWriteLock#9267fe[Write locks = 0, Read locks = 0]"
discarded = false
lastConnected = 1372620679402
lastValidated = 1372620679398
parent = {org.apache.tomcat.jdbc.pool.ConnectionPool#5674}
attributes = {java.util.HashMap#5679} size = 2
handler = {org.apache.tomcat.jdbc.pool.ProxyConnection#5671}"ProxyConnection[PooledConnection[Pooled connection wrapping physical connection org.postgresql.jdbc4.Jdbc4Connection#a7a434]]"
released = {java.util.concurrent.atomic.AtomicBoolean#5680}"false"
suspect = false
driver = null
pool = {org.apache.tomcat.jdbc.pool.ConnectionPool#5674}
properties = null
next = null
useEquals = true
Tomcat: 7.0.22
PostgreSQL JDBC: 9.2-1002
OpenJPA: 2.1.1
Give this a try. It might require some tweaking, but I believe it should work as is, or at least give you some useful pointers to where the proxies that need to be unwrapped are.
//unwrap proxy invocation handler
org.apache.tomcat.jdbc.pool.ProxyConnection tomcatProxy = (ProxyConnection) Proxy.getInvocationHandler(c);
org.apache.tomcat.jdbc.pool.PooledConnection tomcatPooledConnection = tomcatProxy.getConnection(); //if this doesn't work try getDelegateConnection!
Connection connection = tomcatPooledConnection.getConnection();
Then try:
org.postgresql.jdbc4.Jdbc4Connection jdbc4conn = (org.postgresql.jdbc4.Jdbc4Connection) connection;
jdbc4conn.getLargeObjectAPI();
If the above cast fails, the remaining wrapper is itself a JDK proxy, so pull out its invocation handler as well (your debugger output shows it is an org.postgresql.ds.jdbc23.AbstractJdbc23PooledConnection$ConnectionHandler):
InvocationHandler psHandler = Proxy.getInvocationHandler(connection);
If that still doesn't work, more info on these other objects and their attributes would be really helpful. But the idea here is to get hold of the invocation handlers behind the JDK proxies, and the connection behind the Tomcat proxies.
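The general technique, independent of Tomcat or PostgreSQL, is that a `com.sun.proxy.$ProxyNN` object is a JDK dynamic proxy, and `Proxy.getInvocationHandler` hands you the handler object that holds the real delegate. A minimal self-contained sketch, with a hypothetical `ForwardingHandler` standing in for Tomcat's `ProxyConnection`:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class UnwrapDemo {
    interface Greeter { String greet(); }

    // Hypothetical stand-in for a pool's invocation handler: it holds the
    // real object and forwards every method call to it.
    static class ForwardingHandler implements InvocationHandler {
        final Object delegate;
        ForwardingHandler(Object delegate) { this.delegate = delegate; }
        @Override
        public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
            return m.invoke(delegate, args);
        }
    }

    public static void main(String[] args) {
        Greeter real = () -> "hello";
        Greeter proxy = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                new ForwardingHandler(real));

        // The proxy's class name looks like $ProxyNN, and calls are forwarded
        System.out.println(proxy.getClass().getName());
        System.out.println(proxy.greet()); // hello

        // The unwrap move: ask the proxy for its handler, then read the
        // delegate out of it -- the same step Proxy.getInvocationHandler(c)
        // performs against Tomcat's ProxyConnection above.
        ForwardingHandler h = (ForwardingHandler) Proxy.getInvocationHandler(proxy);
        System.out.println(h.delegate == real); // true
    }
}
```

The same two moves (cast the handler, then read its field or getter for the wrapped object) repeat at each layer of the onion until you reach the physical connection.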
Problem:
So I am trying to connect to a PostgreSQL DB with Scala Slick v3.3.3, and it fails to find the relation (table) users in schema 'one' within the 'onetest' database.
I have the following Table setup:
CREATE SCHEMA one;
CREATE TABLE one.users (
...
);
and the table definition:
class UsersTable(tag: Tag) extends Table[UserRequest](tag, Some("one"), "users") {
...
}
with database configuration:
onedbtest = {
profile = "slick.jdbc.PostgresProfile$"
db = {
dataSourceClass = "org.postgresql.ds.PGSimpleDataSource" //Simple datasource with no connection pooling. The connection pool has already been specified with HikariCP.
driver = "slick.driver.PostgresDriver$"
serverName = "localhost"
portNumber = "5432"
databaseName = "onetest"
user = onetestuser
password = "password"
connectionPool = disabled
}
}
and when running (with necessary imports):
dbConfig.db.run((usersTable += createUserRequest).asTry)
Why can it not find the relations (tables) in the db?
Note: the error does not appear (tests pass) when keepAliveConnection = true is added to the config for DB initialisation; however, it then writes to another db called "one" (the dev environment), and it stops working when connectionPool = disabled is added. It should work with the connectionPool attribute added, but it doesn't. It is strange that it references another DB when that db isn't defined anywhere in the code. I am using sbt.version = 1.3.13 and scalaVersion := "2.12.6". sbt clean compile and rebuilding does not solve caching issues. I have also killed all processes to stop any open connections and used db.close where necessary.
I swapped dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
with dataSourceClass = "slick.jdbc.DatabaseUrlDataSource".
This enables the application to actually close connections, not just return them to the connection pool. That in turn allows the application to initialise a new connection to the relevant db and pick up any modifications to the db.
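For reference, a minimal shape of the swapped config; the properties block is my assumption of how DatabaseUrlDataSource is usually wired (driver plus JDBC url), adapted from the question's values:

```
onedbtest = {
  profile = "slick.jdbc.PostgresProfile$"
  db = {
    # assumed wiring: DatabaseUrlDataSource takes a JDBC driver + url pair
    dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
    properties = {
      driver = "org.postgresql.Driver"
      url = "jdbc:postgresql://localhost:5432/onetest"
      user = "onetestuser"
      password = "password"
    }
  }
}
```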
I am trying to persist my Orion data into the public cosmos.lab.fi-ware.org instance using Cygnus.
Cygnus is up and running and the HDFSSink part of my /usr/cygnus/conf/agent_1.conf looks like this:
# OrionHDFSSink configuration
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
cygnusagent.sinks.hdfs-sink.type = com.telefonica.iot.cygnus.sinks.OrionHDFSSink
cygnusagent.sinks.hdfs-sink.enable_grouping = false
cygnusagent.sinks.hdfs-sink.backend_impl = rest
cygnusagent.sinks.hdfs-sink.hdfs_host = cosmos.lab.fi-ware.org
cygnusagent.sinks.hdfs-sink.hdfs_port = 14000
cygnusagent.sinks.hdfs-sink.hdfs_username = myUsernameInCosmosLabInstance
cygnusagent.sinks.hdfs-sink.hdfs_password = myPasswordInCosmosLabInstance
cygnusagent.sinks.hdfs-sink.oauth2_token = myTokenForCosmosLabInstance
cygnusagent.sinks.hdfs-sink.hive = true
cygnusagent.sinks.hdfs-sink.hive.server_version = 2
cygnusagent.sinks.hdfs-sink.hive.host = cosmos.lablfi-ware.org
cygnusagent.sinks.hdfs-sink.hive.port = 10000
cygnusagent.sinks.hdfs-sink.hive.db_type = default-db
I add a new subscription with Cygnus as the reference endpoint and I send an update to previously created NGSIEntity, but nothing appears in my cosmos.lab.fi-ware.org instance.
When looking at /var/log/cygnus/cygnus.log I can't find anything useful, apart from some Java errors.
I am using Orion v. 0.28 and Cygnus v. 0.13.
As the log is saying:
Could not open connection to jdbc:hive2://cosmos.lablfi-ware.org:10000/default: java.net.UnknownHostException: cosmos.lablfi-ware.org
You must configure the right Hive endpoint:
cygnusagent.sinks.hdfs-sink.hive.host = cosmos.lab.fiware.org
Instead of:
cygnusagent.sinks.hdfs-sink.hive.host = cosmos.lablfi-ware.org
NOTE: You may have noticed I've used cosmos.lab.fiware.org. Both cosmos.lab.fiware.org and cosmos.lab.fi-ware.org are valid, but the first one is preferred.
To find the data that Orion was persisting in my Cosmos global instance:
From Hadoop:
# hive
hive> select * from myUsernameInCosmosLabInstance_def_serv_def_servpath_room1_room_column;
Alternative method:
# hadoop fs -ls /user/myUsernameInCosmosInstance/def_serv/def_servpath/Room1_Room/Room1_Room.txt
I'm having a problem with lots of connections being opened to the mongo db.
The readme on the Github page for the C# driver gives the following code:
using MongoDB.Bson;
using MongoDB.Driver;
var client = new MongoClient("mongodb://localhost:27017");
var server = client.GetServer();
var database = server.GetDatabase("foo");
var collection = database.GetCollection("bar");
collection.Insert(new BsonDocument("Name", "Jack"));
foreach(var document in collection.FindAll())
{
Console.WriteLine(document["Name"]);
}
At what point does the driver open the connection to the server? Is it at the GetServer() method or is it the Insert() method?
I know that we should have a static object for the client, but should we also have a static object for the server and database as well?
Late answer... but the server connection is created at this point:
var client = new MongoClient("mongodb://localhost:27017");
Everything else is just getting references for various objects.
See: http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-csharp-driver/
While using the latest MongoDB drivers for C#, the connection happens at the actual database operation, e.g. at collection.Find() or collection.InsertOne().
//code for initialization
//for a localhost connection the url and port could also be omitted (new MongoClient())
var client = new MongoClient("mongodb://localhost:27017/");
var db = client.GetDatabase("TestDb");
var collection = db.GetCollection<Model>("testCollection");

//code for db operations: the connection happens here
//your find operation
var models = collection.Find(Builders<Model>.Filter.Empty).ToList();
//your insert operation
collection.InsertOne(model);
I found this out after I stopped my mongod server and debugged the code with a breakpoint. Initialization happened smoothly, but an error was thrown at the db operation.
Hope this helps.
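The lazy behaviour described above boils down to: constructing the client only records the settings, and the socket is opened on the first operation. A purely illustrative sketch (in Java, not the real driver; all names invented):

```java
// Illustrative only: mimics a driver whose client construction is cheap
// and whose connection is opened lazily at the first operation.
public class LazyClientDemo {
    static class FakeClient {
        final String connectionString;
        boolean connected = false; // no I/O has happened yet

        FakeClient(String connectionString) {
            // like `new MongoClient(...)`: just store the settings
            this.connectionString = connectionString;
        }

        String find() {
            if (!connected) {
                // a real driver would open the socket here, and this is
                // where it fails if the server is down
                connected = true;
            }
            return "result";
        }
    }

    public static void main(String[] args) {
        FakeClient client = new FakeClient("mongodb://localhost:27017");
        System.out.println(client.connected); // false: constructing is free
        client.find();
        System.out.println(client.connected); // true: first operation connects
    }
}
```

This is why the breakpoint experiment behaves as reported: construction succeeds with the server down, and the error surfaces only at the operation.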
I'm finding this incredibly frustrating. I'm trying to use the InventoryFacadeClient to call either the Change or Sync web services to update product availability. The issue I'm facing is that I can't seem to instantiate all of the required DataTypes to populate the request.
It's quite confusing, I wanted to call ChangeInventory but can't compose the request, and started down SyncProductAvailability but again, can't compose the request.
The problem below is that the ProductIdentifierType is null, and there's no corresponding createProductIdentifierType on the factory... I'm not sure what I'm missing here; the factory seems to be half baked.
If someone could help me complete this code, that would be great.
public void setUp() throws Exception {
String METHOD_NAME = "setUp";
logger.info("{} entering", METHOD_NAME);
super.setUp();
InventoryFacadeClient iClient = super.initializeInventoryClient(false);
InventoryFactory f = com.ibm.commerce.inventory.datatypes.InventoryFactory.eINSTANCE;
com.ibm.commerce.inventory.facade.datatypes.InventoryFactory cf = iClient.getInventoryFactory();
CommerceFoundationFactory fd = iClient.getCommerceFoundationFactory();
// we must have customised the SyncProductAvailability web service to
// handle ATP inventory model.
SyncProductAvailabilityDataAreaType dataArea = f.createSyncProductAvailabilityDataAreaType();
SyncProductAvailabilityType sat = f.createSyncProductAvailabilityType();
sat.setDataArea(dataArea);
DocumentRoot root = cf.createDocumentRoot();
sat.setVersionID(root.getInventoryAvailabilityBODVersion());
ProductAvailabilityType pat = f.createProductAvailabilityType();
ProductIdentifierType pid = pat.getProductIdentifier();
I found the answer to this on another forum. I was missing the right CommerceFoundationFactory - the class the ProductIdentifierType is created from is:
com.ibm.commerce.foundation.datatypes.CommerceFoundationFactory fd2 = com.ibm.commerce.foundation.datatypes.CommerceFoundationFactory.eINSTANCE;
ProductIdentifierType pid = fd2.createProductIdentifierType();
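The gotcha is the EMF factory-per-package convention these generated classes follow: each factory singleton can only create the types declared in its own package, so ProductIdentifierType lives on the foundation factory rather than the inventory one. A stripped-down sketch of that pattern (all names hypothetical, not the real Commerce classes):

```java
// Hypothetical sketch of the EMF factory-per-package convention: each
// factory only knows the types of its own package, so asking the wrong
// singleton simply offers no create method for the type you need.
public class FactoryPerPackageDemo {
    static class ProductIdentifier {}
    static class ProductAvailability {}

    // stands in for com.ibm.commerce.inventory.datatypes.InventoryFactory.eINSTANCE
    static class InventoryFactory {
        static final InventoryFactory eINSTANCE = new InventoryFactory();
        ProductAvailability createProductAvailabilityType() { return new ProductAvailability(); }
        // note: no createProductIdentifierType() here
    }

    // stands in for com.ibm.commerce.foundation.datatypes.CommerceFoundationFactory.eINSTANCE
    static class FoundationFactory {
        static final FoundationFactory eINSTANCE = new FoundationFactory();
        ProductIdentifier createProductIdentifierType() { return new ProductIdentifier(); }
    }

    public static void main(String[] args) {
        // each type comes from the factory of the package that declares it
        ProductAvailability pat = InventoryFactory.eINSTANCE.createProductAvailabilityType();
        ProductIdentifier pid = FoundationFactory.eINSTANCE.createProductIdentifierType();
        System.out.println(pat != null && pid != null); // true
    }
}
```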
We have two different query strategies that we'd ideally like to operate in conjunction on our site without opening redundant connections. One strategy uses the enterprise library to pull Database objects and Execute_____(DbCommand)s on the Database, without directly selecting any sort of connection. Effectively like this:
Database db = DatabaseFactory.CreateDatabase();
DbCommand q = db.GetStoredProcCommand("SomeProc");
using (IDataReader r = db.ExecuteReader(q))
{
List<RecordType> rv = new List<RecordType>();
while (r.Read())
{
rv.Add(RecordType.CreateFromReader(r));
}
return rv;
}
The other, newer strategy, uses a library that asks for an IDbConnection, which it Close()es immediately after execution. So, we do something like this:
DbConnection c = DatabaseFactory.CreateDatabase().CreateConnection();
using (QueryBuilder qb = new QueryBuilder(c))
{
return qb.Find<RecordType>(ConditionCollection);
}
But, the connection returned by CreateConnection() isn't the same one used by the Database.ExecuteReader(), which is apparently left open between queries. So, when we call a data access method using the new strategy after one using the old strategy inside a TransactionScope, it causes unnecessary promotion -- promotion that I'm not sure we have the ability to configure for (we don't have administrative access to the SQL Server).
Before we go down the path of modifying the query-builder-library to work with the Enterprise Library's Database objects ... Is there a way to retrieve, if existent, the open connection last used by one of the Database.Execute_______() methods?
Yes, you can get the connection associated with a transaction. Enterprise Library internally manages a collection of transactions and their associated database connections, so if you are in a transaction you can retrieve the connection associated with a database using the static TransactionScopeConnections.GetConnection method:
using (var scope = new TransactionScope())
{
IEnumerable<RecordType> records = GetRecordTypes();
Database db = DatabaseFactory.CreateDatabase();
DbConnection connection = TransactionScopeConnections.GetConnection(db).Connection;
}
public static IEnumerable<RecordType> GetRecordTypes()
{
Database db = DatabaseFactory.CreateDatabase();
DbCommand q = db.GetStoredProcCommand("GetLogEntries");
using (IDataReader r = db.ExecuteReader(q))
{
List<RecordType> rv = new List<RecordType>();
while (r.Read())
{
rv.Add(RecordType.CreateFromReader(r));
}
return rv;
}
}
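The mechanism this relies on (one connection cached per ambient transaction and handed back to whoever asks) can be sketched without Enterprise Library. The names below are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of the pattern behind TransactionScopeConnections: a registry
// keyed by the ambient transaction, so every caller inside that transaction
// reuses one connection instead of opening a second one (which is what
// triggers promotion to a distributed transaction).
public class TxConnectionRegistryDemo {
    static class FakeConnection {}

    static final Map<String, FakeConnection> byTransaction = new HashMap<>();

    static FakeConnection getConnection(String transactionId) {
        // first caller in the transaction creates the connection;
        // everyone after that gets the same instance back
        return byTransaction.computeIfAbsent(transactionId, id -> new FakeConnection());
    }

    public static void main(String[] args) {
        FakeConnection first = getConnection("tx-1");  // "old strategy" query
        FakeConnection second = getConnection("tx-1"); // "new strategy" query
        System.out.println(first == second);           // true: no second connection
    }
}
```

As long as both query strategies resolve their connection through the same per-transaction registry, the TransactionScope never sees a second connection and never escalates.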