OrientDB: Cannot execute the request because an asynchronous operation is in progress

When I'm querying my OrientDB (2.2.18) more than once like this:
def orientService = new OrientGraphFactory(url, username, password).setupPool(1, 50)
orientService.getNoTx().command(new OCommandSQL(query)).execute().toList()
I get this error:
Cannot execute the request because an asynchronous operation is in progress. Please use a different connection
DB name="MyDB"
How can I avoid it? Thanks!
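One common cause (an assumption on my part; the thread doesn't confirm it) is that the graphs borrowed from the pool are never released, so later requests reuse a connection whose previous operation is still pending. With OrientDB's Java Graph API the usual pattern is to acquire a graph per unit of work and shut it down afterwards; a minimal sketch, using the url, username, password, and query from the question:
import com.orientechnologies.orient.core.sql.OCommandSQL;
import com.tinkerpop.blueprints.impls.orient.OrientGraphFactory;
import com.tinkerpop.blueprints.impls.orient.OrientGraphNoTx;

// Create the factory once and reuse it across requests.
OrientGraphFactory factory =
    new OrientGraphFactory(url, username, password).setupPool(1, 50);

OrientGraphNoTx graph = factory.getNoTx(); // borrows a pooled connection
try {
    Iterable<?> result = graph.command(new OCommandSQL(query)).execute();
    // consume 'result' here, while the graph is still open
} finally {
    graph.shutdown(); // releases the connection back to the pool
}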

Related

How to handle multiple MongoDB database connections

I have a multi-tenant application, i.e., each customer has their own database.
My stack is NestJS with MongoDB, and to route HTTP requests to the right database I use the lib nestjs-tenancy, which scopes the connection based on the request subdomain.
But this lib does not work when I need to execute something asynchronous, like a queue job (bull). So in the queue consumer I create a new connection to the database I need to access, extracting the database info from the queue job data:
try {
  await mongoose.connect(
    `${this.configService.get('database.host')}/${
      job.data.tenant
    }?authSource=admin`,
  );
  const messageModel = mongoose.model(
    Message.name,
    MessageSchema,
  );
  // do a lot of stuff with messageModel
  await messageModel.find()
  await messageModel.save()
  ....
} finally {
  await mongoose.connection.close();
}
I have two different jobs that may run at the same time, against the same database.
I'm noticing that I sometimes get errors about the connection, like:
MongoExpiredSessionError: Cannot use a session that has ended
MongoNotConnectedError: Client must be connected before running operations
So I'm certainly doing something wrong. PS: I use await in all my operations.
Any clue what it could be?
Should I use const conn = mongoose.createConnection() instead of mongoose.connect()?
Should I always close the connection, or leave it "open"?
EDIT:
After some testing, I'm sure that I can't use mongoose.connect(). Changing it to mongoose.createConnection() solved the issues. But I'm still unsure whether I need to close the connection. If I close it, I know I'm not overloading Mongo with a lot of connections, but at the same time every job will create a new connection, and I have hundreds of jobs running at once against different databases.

Vert.x MySQL client asynchronous and synchronous

I need help with the Vert.x MySQL client v4.1.0 Java API. I have used the two snippets below; however, I am getting inconsistent results with number 2, as it returns nulls even when there is a database record. Is it not a blocking call?
/* 1. asynchronous */
mysql.query(sql).execute(asr -> {
    if (asr.succeeded()) {
        RowSet<Row> rowset = asr.result();
    } else {
        // log and handle error
    }
});
/* 2. ??? synchronous */
RowSet<Row> rowset = mysql.query(sql).execute().result();
Both the .reactivex and the raw Vert.x MySQL client variants are asynchronous by design, providing a Handler base type as the way to consume SQL query results:
io.vertx.sqlclient.Query#execute(Handler<AsyncResult> handler)
Execute the query.
Vert.x also provides API portability to common patterns such as Java's Future; here is the corresponding method signature:
io.vertx.sqlclient.Query#execute()
Like execute(Handler) but returns a Future of the asynchronous result
The Reactivex client library port provides an rxified method as well, returning query results as a Single publisher:
io.vertx.reactivex.sqlclient.Query#rxExecute()
Execute the query.
With the above considerations in mind, when calling:
RowSet<Row> rowset = mysql.query(sql).execute().result();
You are starting the SQL query and then immediately reading the Future with result(). Vert.x Futures are non-blocking: result() does not wait for completion; it returns the value only if the Future has already been resolved, and null otherwise. That is why snippet 2 intermittently yields null even when the record exists. To get the rows reliably, consume the result through a Handler, through Future methods such as onSuccess/onComplete, or through rxExecute().
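A minimal sketch of the callback style (pool and sql stand in for the client and statement from the question; onSuccess/onFailure are standard Vert.x 4 Future methods):
import io.vertx.sqlclient.Pool;
import io.vertx.sqlclient.Row;
import io.vertx.sqlclient.RowSet;

// Attach callbacks to the Future instead of reading result() eagerly.
static void runQuery(Pool pool, String sql) {
    pool.query(sql)
        .execute()
        .onSuccess((RowSet<Row> rows) -> {
            // act on the rows once they have actually arrived
            System.out.println("rows: " + rows.size());
        })
        .onFailure(err -> err.printStackTrace()); // log and handle the error
}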

Elixir / Elixir-mongo collection find breaks on Enum.to_list

This is my first go-around with Elixir, and I'm trying to make a simple web scraper that saves into MongoDB.
I've installed the elixir-mongo package and am able to insert into the database correctly. Sadly, I'm not able to retrieve the values that I have put into the DB.
Here is the error that I am getting:
** (Mix) Could not start application jobboard: exited in: JB.start(:normal, [])
    ** (EXIT) an exception was raised:
        ** (ArgumentError) argument error
            (elixir) lib/enum.ex:1266: Enum.reduce/3
            (elixir) lib/enum.ex:1798: Enum.to_list/1
            (jobboard) lib/scraper.ex:8: JB.Scraper.scrape/0
            (jobboard) lib/jobboard.ex:26: JB.start/2
            (kernel) application_master.erl:272: :application_master.start_it_old/4
If I understand the source correctly, then the mongo library should implement reduce here:
https://github.com/checkiz/elixir-mongo/blob/13211a0c0c9bb5fed29dd2faf7a01342b4e97eb4/lib/mongo_find.ex#L78
Here are the relevant sections of my code:
# JB.Scraper
def scrape do
  urls = JB.ScrapedUrls.unscraped_urls
end

# JB.ScrapedUrls
def unscraped_urls do
  MongoService.find(%{scraped: false})
end

# MongoService
def find(statement) do
  collection |> Mongo.Collection.find(statement) |> Enum.to_list
end

defp collection do
  mongo = Mongo.connect!
  db = mongo |> Mongo.db("simply_hired_urls")
  db |> Mongo.Db.collection("urls")
end
As a bonus, if anyone can tell me how I can get around connecting to Mongo every time I make a new call, that would be awesome. :) I'm still figuring out FP.
Thanks!
Jon
I haven't used this library, but I made a simple attempt with a simplified version of your code.
I've started with
Mongo.connect!
|> Mongo.db("test")
|> Mongo.Db.collection("foo")
|> Mongo.Collection.find(%{scraped: true})
|> Enum.to_list
This worked fine. Then I suspected that the problem occurs when too many connections are open, so I ran this test repeatedly, and it eventually failed with the same error you got, consistently on the 2037th attempt to open a connection. Looking at the MongoDB log, I can tell that it can't open another connection:
[initandlisten] can't create new thread, closing connection
To fix this, I simply closed the connection after converting the results to a list, using Mongo.Server.close/1. That fixed the problem.
As you noted yourself, this is not an optimal way of communicating with the database, and you'd be better off if you could reuse the connection for multiple queries.
A standard way of doing this is to hold on to the connection in a process, such as GenServer or an Agent. The connection becomes a part of the process state, and you can run multiple queries in that process over the same connection.
Obviously, if multiple client processes use a single database process, all queries will be serialized, and the database process then becomes a performance bottleneck. To deal with this, you could open a pool of processes, each one managing a distinct database connection. This can be done in a simple way with the poolboy library.
My suggestion is that you try implementing a single GenServer based process that maintains the connection and runs queries. Then see if your code works correctly, and when it does, try to use poolboy to be able to deal with concurrent requests efficiently.

Wrap Slick queries in Futures

I am trying to query a MySQL database asynchronously using Slick. The following code template, which I use to query about 90k rows in a for comprehension, seems to be working initially, but the program consumes several gigabytes of RAM and fails without warning after around 200 queries.
import scala.slick.jdbc.{StaticQuery => Q}

def doQuery(): Future[List[String]] = future {
  val q = "select name from person"
  db withSession {
    Q.query[String](q).list
  }
}
I have tried setting up connections both using the fromURL method and also using a c3p0 connection pool. My question is: Is this the way to do asynchronous calls to the database?
Async is still an open issue for Slick.
You could try using Iterables and streaming the data instead of storing it in memory, with a solution similar to this one: Treating an SQL ResultSet like a Scala Stream.
Just omit the .toStream call at the end: it would cache the data in memory, while an Iterable will not.
If you want an async version of iterable you could look into Observables.
It turns out that this is a non-issue (actually a bug in my code, which opened a new database connection for each query). In my experience, you can wrap DB queries in Futures as shown above and compose them later with Scala Async or Rx, as shown here. All that is required for good performance is a large thread pool (2x the number of CPUs in my case) and an equally large connection pool.
Slick 3 (Reactive Slick) looks like it might address this.

Re-using sessions in ScalaQuery?

I need to do small (but frequent) operations on my database from one of my API methods. When I try wrapping each of them in withSession, I get terrible performance.
db withSession {
  SomeTable.insert(a,b)
}
Running the above example 100 times takes 22 seconds. Running them all in a single session is instantaneous.
Is there a way to re-use the session in subsequent function invocations?
Do you have some type of connection pooling in place (see JDBC Connection Pooling: Connection Reuse?)? If not, you'll be opening a new connection for every withSession(...), and that is a very slow approach. See http://groups.google.com/group/scalaquery/browse_thread/thread/9c32a2211aa8cea9 for a description of how to use c3p0 with ScalaQuery.
If you use a managed resource from an application server you'll usually get this for "free", but in stand-alone servers (for example Jetty) you'll have to configure the pooling yourself.
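For illustration, a minimal c3p0 setup in plain Java (the driver class, URL, credentials, and pool sizes are all examples, not values from the question); ScalaQuery can then be handed the resulting DataSource so that each session borrows a pooled connection instead of opening a new one:
import java.beans.PropertyVetoException;
import javax.sql.DataSource;
import com.mchange.v2.c3p0.ComboPooledDataSource;

// Build a pooled DataSource once at startup and reuse it for every session.
static DataSource pooledDataSource() throws PropertyVetoException {
    ComboPooledDataSource ds = new ComboPooledDataSource();
    ds.setDriverClass("com.mysql.jdbc.Driver"); // example driver
    ds.setJdbcUrl("jdbc:mysql://localhost/test"); // example URL
    ds.setUser("user");
    ds.setPassword("password");
    ds.setMinPoolSize(5);
    ds.setMaxPoolSize(20);
    return ds;
}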
I'm probably stating the obvious, but you could just put more calls inside the withSession block, like:
db withSession {
  SomeTable.insert(a,b)
  SomeOtherTable.insert(a,b)
}
Alternatively, you can create an implicit session, do your business, then close it when you're done:
implicit val session = db.createSession
SomeTable.insert(a,b)
SomeOtherTable.insert(a,b)
session.close