OrientDB multithread ConcurrentModificationException and other errors

I am writing an application that writes data to a graph in OrientDB (v2.2.3). The graph is structured something like the following:
I have threads that add vertices to C vertices, and each C vertex has an independent thread which is responsible for adding D vertices along with their edges.
Each thread works in a separate transaction. I have been getting various errors and exceptions like the following:
com.orientechnologies.orient.core.exception.OStorageException: Error on commit
at com.orientechnologies.orient.client.remote.OStorageRemote.baseNetworkOperation(OStorageRemote.java:253)
at com.orientechnologies.orient.client.remote.OStorageRemote.networkOperation(OStorageRemote.java:189)
at com.orientechnologies.orient.client.remote.OStorageRemote.commit(OStorageRemote.java:1271)
at com.orientechnologies.orient.core.tx.OTransactionOptimistic.doCommit(OTransactionOptimistic.java:549)
at com.orientechnologies.orient.core.tx.OTransactionOptimistic.commit(OTransactionOptimistic.java:109)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.commit(ODatabaseDocumentTx.java:2665)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.commit(ODatabaseDocumentTx.java:2634)
at com.tinkerpop.blueprints.impls.orient.OrientTransactionalGraph.commit(OrientTransactionalGraph.java:175)
at JSONManager$.commitGrap2(JSONManager.scala:371)
at JSONManager$$anonfun$main$2$$anon$1.run(JSONManager.scala:87)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
.....
Caused by: java.util.ConcurrentModificationException
at java.util.LinkedHashMap$LinkedHashIterator.nextNode(LinkedHashMap.java:711)
at java.util.LinkedHashMap$LinkedValueIterator.next(LinkedHashMap.java:739)
at com.orientechnologies.orient.client.remote.OStorageRemote$28.execute(OStorageRemote.java:1284)
at com.orientechnologies.orient.client.remote.OStorageRemote$28.execute(OStorageRemote.java:1271)
at com.orientechnologies.orient.client.remote.OStorageRemote$2.execute(OStorageRemote.java:192)
at com.orientechnologies.orient.client.remote.OStorageRemote.baseNetworkOperation(OStorageRemote.java:224)
... 12 more
UPDATE: here is the code:
val t: Runnable = new Runnable {
  override def run(): Unit = {
    graph = factory.getTx
    saveDUnits(dUnit, graph)
    commitGrap(graph)
    graph.shutdown()
  }
}
pool.execute(t)

def commitGrap(graph: OrientGraph): Unit = {
  var retryCount = 0
  while (retryCount < 10) {
    try {
      graph.commit()
      retryCount = 11 // success: force the retry loop to exit
    } catch {
      case e: Exception =>
        println("Commit Error")
        e.printStackTrace()
        var sleepTime = 50
        if (retryCount > 5) {
          sleepTime = 6000
        }
        Thread.sleep(sleepTime)
    }
    retryCount = retryCount + 1
  }
}

Finally I found my mistake: the problem was in how the OrientGraphFactory instance is created. The non-thread-safe factory is created like the following:
var factory: OrientGraphFactory = new OrientGraphFactory("remote:106.140.20.233/test", "root", "123")
The thread-safe, pooled factory is created like the following:
var factory: OrientGraphFactory = new OrientGraphFactory("remote:106.140.20.233/test", "root", "123").setupPool(1, 20)
I had missed adding .setupPool(1, 20).
That's it.
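For anyone hitting the same thing, here is a minimal sketch of the pattern that works (connection details are placeholders; the vertex class and properties are illustrative): create one pooled factory, and let each worker borrow and release its own transactional graph:
import com.tinkerpop.blueprints.impls.orient.{OrientGraph, OrientGraphFactory}

// One shared, pooled (and therefore thread-safe) factory for the whole app.
val factory: OrientGraphFactory =
  new OrientGraphFactory("remote:localhost/test", "root", "123").setupPool(1, 20)

// Each worker borrows its own OrientGraph from the pool...
val worker: Runnable = new Runnable {
  override def run(): Unit = {
    val graph: OrientGraph = factory.getTx
    try {
      graph.addVertex("class:C", "name", "c1") // placeholder work
      graph.commit()
    } finally {
      graph.shutdown() // ...and returns it to the pool when done
    }
  }
}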

Related

Is there a way to avoid cold start with Cloud SQL and Cloud Functions (using JVM/Scala)? [duplicate]

This question already has answers here:
How can I keep Google Cloud Functions warm? (closed as a duplicate)
I have implemented a Cloud Function that accesses a Postgres DB per the documentation, like this...
import java.util.Properties
import java.sql.Connection
import javax.sql.DataSource

import com.google.cloud.functions.{HttpFunction, HttpRequest, HttpResponse}
import com.zaxxer.hikari.{HikariConfig, HikariDataSource}
import io.github.cdimascio.dotenv.Dotenv

class CoreDataSource {
  def getConnection = {
    println("Getting the connection")
    CoreDataSource.getConnection
  }
}

object CoreDataSource {
  var pool: Option[DataSource] = None

  def getConnection: Option[Connection] = {
    if (pool.isEmpty) {
      println("Getting the datasource")
      pool = getDataSource
    }
    if (pool.isEmpty) {
      None
    } else {
      println("Reusing the connection")
      Some(pool.get.getConnection)
    }
  }

  def getDataSource: Option[DataSource] = {
    Class.forName("org.postgresql.Driver")
    var dbName, dbUser, dbPassword, dbUseIAM, ssoMode, instanceConnectionName = ""

    val dotenv = Dotenv
      .configure()
      .ignoreIfMissing()
      .load()

    dbName = dotenv.get("DB_NAME")
    println("DB Name " + dbName)
    dbUser = dotenv.get("DB_USER")
    println("DB User " + dbUser)
    dbPassword = Option(dotenv.get("DB_PASS")).getOrElse("ignored")
    dbUseIAM = Option(dotenv.get("DB_IAM")).getOrElse("true")
    println("dbUseIAM " + dbUseIAM)
    ssoMode = Option(dotenv.get("DB_SSL")).getOrElse("disable") // TODO: Should this be enabled by default?
    println("ssoMode " + ssoMode)
    instanceConnectionName = dotenv.get("DB_INSTANCE")
    println("instanceConnectionName " + instanceConnectionName)

    val jdbcURL: String = String.format("jdbc:postgresql:///%s", dbName)
    val connProps = new Properties

    connProps.setProperty("user", dbUser)
    // Note: a non-empty string value for the password property must be set.
    // While this property is ignored when connecting with the Cloud SQL
    // Connector using IAM auth, leaving it empty causes driver-level
    // validation to fail.
    if (dbUseIAM.equals("true")) {
      println("Using IAM, password is ignored")
      connProps.setProperty("password", "ignored")
    } else {
      println("Using manual auth, password must be provided")
      connProps.setProperty("password", dbPassword)
    }
    connProps.setProperty("sslmode", ssoMode)
    connProps.setProperty("socketFactory", "com.google.cloud.sql.postgres.SocketFactory")
    connProps.setProperty("cloudSqlInstance", instanceConnectionName)
    connProps.setProperty("enableIamAuth", dbUseIAM)

    // Initialize connection pool
    val config = new HikariConfig
    config.setJdbcUrl(jdbcURL)
    config.setDataSourceProperties(connProps)
    config.setMaximumPoolSize(10)
    config.setMinimumIdle(4)
    config.addDataSourceProperty("ipTypes", "PUBLIC,PRIVATE") // TODO: Make configurable
    println("Config created")

    val pool: DataSource = new HikariDataSource(config) // Do we really need Hikari here if it doesn't need pooling?
    println("Returning the datasource")
    Some(pool)
  }
}

class DoSomething() {
  val ds = new CoreDataSource

  def getUserInformation(): String = {
    println("Getting user information")
    val connOpt = ds.getConnection
    if (connOpt.isEmpty) throw new Error("No Connection Found")
    ...
  }
}

class SomeClass extends HttpFunction {
  override def service(httpRequest: HttpRequest, httpResponse: HttpResponse): Unit = {
    httpResponse.setContentType("application/json")
    httpResponse.getWriter.write(
      GetCorporateInformation.corp.getUserInformation()
    )
  }
}

object GetCorporateInformation {
  val corp = new DoSomething()
}
And I deploy like this...
gcloud functions deploy identity-corporate --entry-point ... --min-instances 2 --runtime java17 --trigger-http --no-allow-unauthenticated --set-secrets '...'
But when first deployed (and after sitting idle for a while) the function takes 25 seconds to return, causing all kinds of issues with SLAs. After the "cold start" it returns quickly, but at least in dev I can't really make sure someone is always hitting it.
Is there a way to mitigate this, or do I need to use a VM to make sure it isn't destroyed? Or is there a way to do this without the overhead of pooling?
Since functions are stateless, the execution environment is sometimes initialized from scratch, which is called a cold start. You can minimize the impact of cold starts by setting a minimum number of instances (note that this reduces them but does not eliminate them), or you can create a scheduled "warmer" that runs every few minutes and calls your high-priority function to keep it warm.
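As a sketch of the warmer approach (the job name, schedule, URI, and service account below are placeholders for your own setup), a Cloud Scheduler job can invoke the function every few minutes:
gcloud scheduler jobs create http warm-identity-corporate \
  --schedule="*/5 * * * *" \
  --uri="https://REGION-PROJECT.cloudfunctions.net/identity-corporate" \
  --http-method=GET \
  --oidc-service-account-email="invoker@PROJECT.iam.gserviceaccount.com"
Since the function is deployed with --no-allow-unauthenticated, the service account used for OIDC needs the Cloud Functions invoker role.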

MongoDB reactive template transactions

I've been using MongoDB for my open source project for more than a year now, and recently I decided to try out transactions. After writing some tests for methods that use transactions, I found that they throw some strange exceptions and I can't figure out what the problem is. So I have a method delete that uses a custom coroutine context and a mutex:
open suspend fun delete(photoInfo: PhotoInfo): Boolean {
  return withContext(coroutineContext) {
    return@withContext mutex.withLock {
      return@withLock deletePhotoInternalInTransaction(photoInfo)
    }
  }
}
It then calls a method that executes some deletion:
//FIXME: doesn't work in tests
//should be called from within locked mutex
private suspend fun deletePhotoInternalInTransaction(photoInfo: PhotoInfo): Boolean {
  check(!photoInfo.isEmpty())

  val transactionMono = template.inTransaction().execute { txTemplate ->
    return@execute photoInfoDao.deleteById(photoInfo.photoId, txTemplate)
      .flatMap { favouritedPhotoDao.deleteFavouriteByPhotoName(photoInfo.photoName, txTemplate) }
      .flatMap { reportedPhotoDao.deleteReportByPhotoName(photoInfo.photoName, txTemplate) }
      .flatMap { locationMapDao.deleteById(photoInfo.photoId, txTemplate) }
      .flatMap { galleryPhotoDao.deleteByPhotoName(photoInfo.photoName, txTemplate) }
  }.next()

  return try {
    transactionMono.awaitFirst()
    true
  } catch (error: Throwable) {
    logger.error("Could not delete photo", error)
    false
  }
}
Here I have five operations that delete data from five different collections. Here is an example of one of them:
open fun deleteById(photoId: Long, template: ReactiveMongoOperations = reactiveTemplate): Mono<Boolean> {
  val query = Query()
    .addCriteria(Criteria.where(PhotoInfo.Mongo.Field.PHOTO_ID).`is`(photoId))

  return template.remove(query, PhotoInfo::class.java)
    .map { deletionResult -> deletionResult.wasAcknowledged() }
    .doOnError { error -> logger.error("DB error", error) }
    .onErrorReturn(false)
}
I want the whole operation to fail if any of the deletions fails, which is why I use a transaction.
Then I have some tests for a handler that uses this delete method:
@Test
fun `photo should not be uploaded if could not enqueue static map downloading request`() {
  val webClient = getWebTestClient()
  val userId = "1234235236"
  val token = "fwerwe"

  runBlocking {
    Mockito.`when`(remoteAddressExtractorService.extractRemoteAddress(any())).thenReturn(ipAddress)
    Mockito.`when`(banListRepository.isBanned(Mockito.anyString())).thenReturn(false)
    Mockito.`when`(userInfoRepository.accountExists(userId)).thenReturn(true)
    Mockito.`when`(userInfoRepository.getFirebaseToken(Mockito.anyString())).thenReturn(token)
    Mockito.`when`(staticMapDownloaderService.enqueue(Mockito.anyLong())).thenReturn(false)
  }

  kotlin.run {
    val packet = UploadPhotoPacket(33.4, 55.2, userId, true)
    val multipartData = createTestMultipartFile(PHOTO1, packet)

    val content = webClient
      .post()
      .uri("/v1/api/upload")
      .contentType(MediaType.MULTIPART_FORM_DATA)
      .body(BodyInserters.fromMultipartData(multipartData))
      .exchange()
      .expectStatus().is5xxServerError
      .expectBody()

    val response = fromBodyContent<UploadPhotoResponse>(content)
    assertEquals(ErrorCode.DatabaseError.value, response.errorCode)
    assertEquals(0, findAllFiles().size)

    runBlocking {
      assertEquals(0, galleryPhotoDao.testFindAll().awaitFirst().size)
      assertEquals(0, photoInfoDao.testFindAll().awaitFirst().size)
    }
  }
}

@Test
fun `photo should not be uploaded when resizeAndSavePhotos throws an exception`() {
  val webClient = getWebTestClient()
  val userId = "1234235236"
  val token = "fwerwe"

  runBlocking {
    Mockito.`when`(remoteAddressExtractorService.extractRemoteAddress(any())).thenReturn(ipAddress)
    Mockito.`when`(banListRepository.isBanned(Mockito.anyString())).thenReturn(false)
    Mockito.`when`(userInfoRepository.accountExists(userId)).thenReturn(true)
    Mockito.`when`(userInfoRepository.getFirebaseToken(Mockito.anyString())).thenReturn(token)
    Mockito.`when`(staticMapDownloaderService.enqueue(Mockito.anyLong())).thenReturn(true)
    Mockito.doThrow(IOException("BAM"))
      .`when`(diskManipulationService).resizeAndSavePhotos(any(), any())
  }

  kotlin.run {
    val packet = UploadPhotoPacket(33.4, 55.2, userId, true)
    val multipartData = createTestMultipartFile(PHOTO1, packet)

    val content = webClient
      .post()
      .uri("/v1/api/upload")
      .contentType(MediaType.MULTIPART_FORM_DATA)
      .body(BodyInserters.fromMultipartData(multipartData))
      .exchange()
      .expectStatus().is5xxServerError
      .expectBody()

    val response = fromBodyContent<UploadPhotoResponse>(content)
    assertEquals(ErrorCode.ServerResizeError.value, response.errorCode)
    assertEquals(0, findAllFiles().size)

    runBlocking {
      assertEquals(0, galleryPhotoDao.testFindAll().awaitFirst().size)
      assertEquals(0, photoInfoDao.testFindAll().awaitFirst().size)
    }
  }
}

@Test
fun `photo should not be uploaded when copyDataBuffersToFile throws an exception`() {
  val webClient = getWebTestClient()
  val userId = "1234235236"
  val token = "fwerwe"

  runBlocking {
    Mockito.`when`(remoteAddressExtractorService.extractRemoteAddress(any())).thenReturn(ipAddress)
    Mockito.`when`(banListRepository.isBanned(Mockito.anyString())).thenReturn(false)
    Mockito.`when`(userInfoRepository.accountExists(userId)).thenReturn(true)
    Mockito.`when`(userInfoRepository.getFirebaseToken(Mockito.anyString())).thenReturn(token)
    Mockito.`when`(staticMapDownloaderService.enqueue(Mockito.anyLong())).thenReturn(true)
    Mockito.doThrow(IOException("BAM"))
      .`when`(diskManipulationService).copyDataBuffersToFile(Mockito.anyList(), any())
  }

  kotlin.run {
    val packet = UploadPhotoPacket(33.4, 55.2, userId, true)
    val multipartData = createTestMultipartFile(PHOTO1, packet)

    val content = webClient
      .post()
      .uri("/v1/api/upload")
      .contentType(MediaType.MULTIPART_FORM_DATA)
      .body(BodyInserters.fromMultipartData(multipartData))
      .exchange()
      .expectStatus().is5xxServerError
      .expectBody()

    val response = fromBodyContent<UploadPhotoResponse>(content)
    assertEquals(ErrorCode.ServerDiskError.value, response.errorCode)
    assertEquals(0, findAllFiles().size)

    runBlocking {
      assertEquals(0, galleryPhotoDao.testFindAll().awaitFirst().size)
      assertEquals(0, photoInfoDao.testFindAll().awaitFirst().size)
    }
  }
}
Usually the first test passes, and the following two fail with the following exception:
17:09:01.228 [Thread-17] ERROR com.kirakishou.photoexchange.database.dao.PhotoInfoDao - DB error
org.springframework.data.mongodb.UncategorizedMongoDbException: Command failed with error 24 (LockTimeout): 'Unable to acquire lock '{8368122972467948263: Database, 1450593944826866407}' within a max lock request timeout of '5ms' milliseconds.' on server 192.168.99.100:27017.
And then:
Caused by: com.mongodb.MongoCommandException: Command failed with error 246 (SnapshotUnavailable): 'Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1545661357, 23). Collection minimum is Timestamp(1545661357, 24)' on server 192.168.99.100:27017.
And:
17:22:36.951 [Thread-16] WARN reactor.core.publisher.FluxUsingWhen - Async resource cleanup failed after cancel
com.mongodb.MongoCommandException: Command failed with error 251 (NoSuchTransaction): 'Transaction 1 has been aborted.' on server 192.168.99.100:27017.
Sometimes two of them pass and the last one fails.
It looks like only the first transaction succeeds and any following one fails, and I guess the reason is that I have to manually close it (or the ClientSession). But I can't find any info on how to close transactions/sessions. Here is one of the few examples I could find where transactions are used with the reactive template, and I don't see them doing anything additional to close the transaction/session.
Or maybe it's because I'm mocking a method to throw an exception inside the transaction? Maybe it's not being closed in that case?
The client sessions/transactions are closed properly. However, it appears that index creation in the tests acquires a global lock, which causes the next transaction's lock request to fall behind and time out.
Basically, you have to manage your index creation so it doesn't interfere with transactions from the client.
One quick fix would be to increase the lock timeout by running the command below in the shell:
db.adminCommand( { setParameter: 1, maxTransactionLockRequestTimeoutMillis: 50 } )
In production you can look at the transient transaction error label and retry the operation.
More here: https://docs.mongodb.com/manual/core/transactions-production-consideration/#pending-ddl-operations-and-transactions
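In application code the retry can key off that label. A minimal Scala sketch, assuming the MongoDB Java driver's error-label API (the helper name and backoff are illustrative, not part of the driver):
import com.mongodb.MongoException

// Retries an operation while the server labels the failure as a transient
// transaction error, per the production considerations linked above.
def retryOnTransientTxError[T](maxRetries: Int)(op: () => T): T = {
  var attempt = 0
  while (true) {
    try {
      return op()
    } catch {
      case e: MongoException
          if e.hasErrorLabel(MongoException.TRANSIENT_TRANSACTION_ERROR_LABEL) && attempt < maxRetries =>
        attempt += 1
        Thread.sleep(50L * attempt) // simple linear backoff before retrying
    }
  }
  sys.error("unreachable")
}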
You could also check the connection options according to your driver:
val connection = MongoConnection(List("localhost"))
val db = connection.database("plugin")
...
connection.askClose()
You could look into the askClose() method; hope this is helpful.

Akka dispatcher not configured exception in Play/Scala application

I am doing a disk-intensive operation and I want to use my own thread pool for it, not the default one.
I read the following link, and I am facing the exact same problem:
Akka :: dispatcher [%name%] not configured, using default-dispatcher
But my config file is slightly different, and the suggestion there is not working for me.
My application.conf in Play has the following:
jpa-execution-context {
  thread-pool-executor {
    core-pool-size-factor = 10.0
    core-pool-size-max = 10
  }
}
And then in my test code I do the following, but I get an exception. Here is the test method:
private def testContext(): Future[Int] = {
  val system = ActorSystem.create()
  val a = ActorSystem.create()
  implicit val executionContext1 = system.dispatchers.lookup("jpa-execution-context")
  Future { logger.error("inside my new thread pool wonderland"); 10 }(executionContext1)
}
Here is the exception:
akka.ConfigurationException: Dispatcher [jpa-execution-context] not configured
I think you forgot a few elements in your configuration:
jpa-execution-context {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    core-pool-size-factor = 10.0
    core-pool-size-max = 10
  }
}
Doc link: http://doc.akka.io/docs/akka/current/scala/dispatchers.html#types-of-dispatchers
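For reference, a minimal self-contained sketch of the corrected setup (the inline config mirrors what would go in application.conf; the system name and log message are arbitrary):
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory
import scala.concurrent.{ExecutionContext, Future}

object DispatcherLookupExample extends App {
  // Same dispatcher definition, now with the missing `type` and `executor` keys.
  val config = ConfigFactory.parseString(
    """
    jpa-execution-context {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-factor = 10.0
        core-pool-size-max = 10
      }
    }
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("example", config)
  implicit val ec: ExecutionContext = system.dispatchers.lookup("jpa-execution-context")

  Future { println("running on jpa-execution-context") }
}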

Exception / error handling in an Akka Stream

I've defined the following Duct:
val augmenter1 = new Augmenter1
val augmenter2 = new Augmenter2
val augmenter3 = new Augmenter3

val defaultEventAugmenterPipeline: Duct[Event, Event] = Duct[Event].
  map(augmenter1.augment).
  map(augmenter2.augment).
  map(augmenter3.augment)
and Flow:
Flow(eventConsumer).append(defaultEventAugmenterPipeline).onComplete(materializer) { ... }
and an Augmenter looks like this:
class Augmenter1 extends Augmenter[Event] {
  def augment(e: Event): Event = {
    if (someCondition)
      e.addAugmentation(...)
    else
      throw new Exception("someCondition not met!")
    e
  }
}
Now, if the condition that leads to the exception in Augmenter1 is met, the flow simply terminates (successfully) at the first instance of the exception, without throwing anything.
I'd like to be able to do two things: catch the exception up the chain, and skip to the next event.
My question: what is the proper way to deal with errors/exceptions in a flow?
Thanks
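One generic way to express the "catch and skip" behavior in plain Scala, independent of the streams API (a sketch only; the augmenter here is just a function that may throw), is to wrap each augmentation in a Try and keep only the successes:
import scala.util.{Failure, Success, Try}

// Applies a throwing augmenter to each event; a failed event is logged and
// skipped instead of terminating the whole pipeline.
def augmentOrSkip[A](events: Seq[A])(augment: A => A): Seq[A] =
  events.flatMap { e =>
    Try(augment(e)) match {
      case Success(augmented) => Some(augmented)
      case Failure(ex) =>
        println(s"skipping event: ${ex.getMessage}")
        None
    }
  }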

Appending List[Database]

I am new to Scala and I am writing a simple RSS reader.
I have a class Manager for managing feeds and content.
package lib

import scala.xml._
import java.net.URL
import net.liftweb.couchdb.{CouchDB, Database}
import dispatch.{Http, StatusCode}

/**
 * @author smix
 *
 * Feeds manager
 */
object Manager {
  var db = List[Database]()

  /*
   * Initialize CouchDb databases
   */
  def init = {
    this.appendDb(new Database("myserver.com", 5984, "content"))
  }

  /*
   * Append a new database to the databases list
   */
  private def appendDb(database: Database): Unit = {
    database :: db
    // Strange exception if database has been already created
    /* try {
      this.db.head.createIfNotCreated(new Http())
    } catch {
      case e: java.lang.NoClassDefFoundError => {}
    } */
  }

  /*
   * Fetch articles from feed by url
   */
  def fetchItems(feedUrl: String): List[scala.xml.Elem] = {
    val rssFeed = XML.load((new URL(feedUrl)).openConnection.getInputStream)
    val items = rssFeed \ "channel" \ "item"
    val articles: List[scala.xml.Elem] = List()
    for (item <- items) {
      item :: articles
    }
    articles
  }
}
I want to store content in CouchDB, so I need to keep a list of Couch databases (feeds, articles, etc.). I wrote the class above, but when I call appendDb I get an error:
Exception in thread "main" java.lang.NoClassDefFoundError: lib/Manager$
at collector$.main(collector.scala:5)
at collector.main(collector.scala)
Caused by: java.lang.ClassNotFoundException: lib.Manager$
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
... 2 more
When I rewrote the db definition as var db = List[Int]() and the first line of appendDb as 1 :: this.db, the project ran fine... Strange.
Also, it is interesting why I am getting an exception when I call createIfNotCreated for an already-existing database (the commented-out try-catch block in appendDb).
The exception indicates that you're missing some classes (one or more JAR files, presumably) when you run your program, though they're either irrelevant to compiling it or they are available then.
You should also note that the first line in appendDb accomplishes nothing. It builds a new List by consing database onto the front of the List referred to by db, but the resulting value is discarded. Perhaps you meant this:
db = database :: db
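For completeness, a minimal sketch of the corrected method; note that fetchItems discards its cons result the same way, and one idiomatic fix (assuming the same imports as the original) is:
private def appendDb(database: Database): Unit = {
  db = database :: db // reassign so the new list is actually kept
}

// fetchItems builds and discards `item :: articles` on every iteration;
// collecting the matching nodes directly avoids the mutable accumulation.
def fetchItems(feedUrl: String): List[scala.xml.Elem] = {
  val rssFeed = XML.load(new URL(feedUrl).openConnection.getInputStream)
  (rssFeed \ "channel" \ "item").collect { case e: scala.xml.Elem => e }.toList
}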