I'm getting this error when I try to insert 17,000 vertices into the DB. The vertices are grouped into multiple trees, and a commit occurs when a tree has been fully read/stored. The first tree has 2,300 vertices, the second has 5,500, and it is at this point that it fails.
java.lang.IllegalStateException: Cannot begin a transaction while a hook is executing
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.begin(ODatabaseDocumentTx.java:2210)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.begin(ODatabaseDocumentTx.java:2192)
at com.tinkerpop.blueprints.impls.orient.OrientTransactionalGraph.ensureTransaction(OrientTransactionalGraph.java:229)
at com.tinkerpop.blueprints.impls.orient.OrientTransactionalGraph.commit(OrientTransactionalGraph.java:177)
at net.odbogm.SessionManager.commit(SessionManager.java:351)
at com.quiencotiza.utilities.SetupInicial.loadRubros(SetupInicial.java:180)
at com.quiencotiza.utilities.SetupInicial.initDatabase(SetupInicial.java:48)
at com.quiencotiza.utilities.SetupInicial.main(SetupInicial.java:41)
It's a single-threaded app that loads the database with the initial records.
I have upgraded to 2.2.4 but I get the same error.
Thanks
Marcelo
Well, I solved the problem. It seems to be something related to activateOnCurrentThread(), but I don't know why it happened. What does that exception mean? Why is it thrown?
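In case it helps anyone else, this is roughly what the fix looks like at the document-API level. A minimal sketch; the URL and credentials are placeholders, and the tree-loading body is elided:
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;

public class TreeLoader {
    public static void main(String[] args) {
        // URL and credentials are placeholders for illustration.
        ODatabaseDocumentTx db =
                new ODatabaseDocumentTx("remote:localhost/mydb").open("admin", "admin");
        try {
            // In OrientDB 2.x a database instance is bound to a thread; re-activate
            // it on the current thread before any transactional work, since a hook
            // or another thread may have switched the active instance.
            db.activateOnCurrentThread();
            db.begin();
            // ... create the vertices of one tree here ...
            db.commit();
        } finally {
            db.close();
        }
    }
}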
I know it's an old topic, but maybe this will help someone.
I had the same problem: a lot of threads with many queries and updates.
So I switched to working with one thread (a SingleThreadExecutor in Java), and that solved it; see the sketch below.
I guess there is a bug in the hook locking.
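For completeness, the workaround looks roughly like this (class name and task body are illustrative):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleThreadDbRunner {
    public static void main(String[] args) {
        // Funnel all database work through one thread so the database
        // instance is always used from the thread it was activated on.
        ExecutorService dbExecutor = Executors.newSingleThreadExecutor();

        dbExecutor.submit(() -> {
            // ... run queries and updates against the graph here ...
        });

        // Allow the JVM to exit once all submitted work is done.
        dbExecutor.shutdown();
    }
}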
Related
I have created a model that generates a product which is cycled through a list of machines. Technically the product list is for a single-day run, but I run the model for long durations to stabilise the model output.
The model runs properly for months, until around 20 months, then suddenly stops without any error message, as shown in the screenshot. I do not know how to debug this since I do not know where the error comes from.
Has anyone encountered something similar who could advise on how to approach this issue? Could it be a memory overload issue?
Without more details it's hard to pinpoint the exact reason, but this generally happens when the run is stuck in an infinite while loop or similar. Check every loop where such a scenario is possible; it's likely that one (or more) of them is causing the issue.
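One cheap way to find the culprit is to add a guard counter to each suspect loop so it fails loudly instead of hanging silently. A minimal sketch, where productFinished() and processNextMachine() stand in for your own loop condition and body:
// Guarded loop: throws instead of spinning forever.
int guard = 0;
while (!productFinished()) {            // stand-in for your loop condition
    processNextMachine();               // stand-in for your loop body
    if (++guard > 1_000_000) {          // arbitrary ceiling, far above any normal run
        throw new IllegalStateException(
                "Loop exceeded 1,000,000 iterations - probably stuck");
    }
}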
What does error 208 mean? The query:
dependencies
| where type == "SQL" and success == "False"
| summarize count() by resultCode
is giving me 4,500+ items in the last hour alone, and I can't seem to find any solid documentation about this.
Details:
The frequency of the error rises with concurrency, meaning 1,000 concurrent requests will generate more errors than 1,000 sequential ones.
My application is ASP.NET MVC 4 on .NET Framework 4.6, using the latest EF.
The error is intermittent; performing a given operation won't always result in the error.
I don't think this error means "Invalid Object Name" (as per other threads), because I can see EF auto-retrying, and eventually the call goes through and the whole request returns successfully (otherwise I would have A LOT of missed phone calls...).
The error occurs on both async and sync requests.
I got in touch with MS support, and according to them this is caused by Entity Framework. Apparently EF keeps looking for two tables (__MigrationHistory and EdmMetadata) that I deliberately deleted. Although that makes sense, I don't know why the error does not show up in our in-house tests (the tables are not present in the in-house dev environment either...).
The above answer is correct; however, I'd like to add some additional information:
You need to have the __MigrationHistory table, and it has to be populated correctly. EdmMetadata is an old table that was replaced by __MigrationHistory, so there is no need to worry about that one.
Just adding the __MigrationHistory table did not solve the issue completely (I went from 5 of the 208 exceptions down to 3).
However, keep in mind that populating the __MigrationHistory table will leave your DbContext out of sync if the latest migration is not inserted into it!
The best way to get this is to issue the
Update-Database -Script
command and copy the CREATE/INSERT/UPDATE statements from there.
I am trying to import all the logs of the running jobs into a table in Postgres. I am using the tLogCatcher and tStatCatcher components and joining them to create a table with all the available data.
The job looks like this:
Inside the tMap I am joining the two sources, from the log catcher and the stat catcher, on the pid and the job name, and trying to merge the results so they are combined in one table:
However, whenever the job fails, I get nulls in the log catcher output, even though there are error messages:
[statistics] connecting to socket on port 3696
[statistics] connected
2017-02-03 13:51:07|PR7710|PR7710|PR7710|6981|NASIA|Master_ETL_Job|_52dYEJUvEeaqS8phzVFskQ|0.1|Default||begin||
Exception in component tFileInputDelimited_1
java.io.FileNotFoundException: /Users/nasiantalla/Documents/keychain.csv (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at org.talend.fileprocess.TOSDelimitedReader.<init>(TOSDelimitedReader.java:88)
at org.talend.fileprocess.FileInputDelimited.<init>(FileInputDelimited.java:164)
at nasia.master_etl_job_0_1.Master_ETL_Job.tFileInputDelimited_1Process(Master_ETL_Job.java:796)
at nasia.master_etl_job_0_1.Master_ETL_Job.runJobInTOS(Master_ETL_Job.java:6073)
at nasia.master_etl_job_0_1.Master_ETL_Job.main(Master_ETL_Job.java:5879)
2017-02-03 13:51:08|PR7710|PR7710|PR7710|NASIA|Master_ETL_Job|Default|6|Java Exception|tFileInputDelimited_1|java.io.FileNotFoundException:/Users/nasiantalla/Documents/keychain.csv (No such file or directory)|1
2017-02-03 13:51:08|PR7710|PR7710|PR7710|6981|NASIA|Master_ETL_Job|_52dYEJUvEeaqS8phzVFskQ|0.1|Default||end|failure|890
[statistics] disconnected
Job Master_ETL_Job endet am 13:51 03/02/2017. [exit code=1]
And the data I get in my table looks like this:
Do you see something I might have missed? I tried all the different join types in the tMap, but it doesn't seem to work, and I don't understand why.
Thanks in advance!
tStatCatcher and tLogCatcher do not work when joined with a tMap. I cannot give a definitive answer as to why, but I think it's related to the special functionality involved in 'catching' the errors and stats, and is likely a timing issue. The log catcher, for instance, will only catch an error, while the stat catcher can catch statistics on every component.
I recommend writing to separate tables and joining on those tables to produce reports. As a matter of fact, Talend has this functionality built in, so you do not even need to place your own tStatCatcher and tLogCatcher components in each job.
You must first create the AMC database structure, then go to File --> Edit project settings --> Job settings --> Stats and logs and choose the 'On database' option. Talend will then automatically log stats, errors and flows to the AMC database, and you can report off that database.
There are three reasons for that:
tLogCatcher does not provide logs if there is no tDie or tWarn, and I think this is your case.
tLogCatcher and tStatCatcher do not necessarily provide their data at the same time, because they are triggered by different events, so the join will not match.
From a functional perspective, joining the two flows does not make sense; they are fully independent.
I recommend that you dump these flows into different tables, which can be achieved implicitly, without using any component and without development; see here.
I am using the Play! framework and have a difficulty with the following scenario.
I have a server process which has a 'read-only' transaction. This is to prevent any possible database locks during execution, as it is a complicated procedure. There are one or two records to be stored, but I do that in a Job, as I found that doing it in the main thread could result in a deadlock under higher load.
However, on one occasion I need to create an object and subsequently use it.
When I create the object using a Job, wait for the resulting id (via a returned Promise) and then search the database for it, it cannot be found.
Is there an easy way to have JPA search 'afresh' in the DB at this point? I implemented a 5-second pause to test, so I am sure it is not because the procedure hadn't finished yet.
Check if there is a transaction wrapped around your INSERT, and if there is one, check that the transaction is committed.
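If the insert is committed but the reader still can't see it, the reading EntityManager may be serving stale first-level-cache state. A rough sketch of the pattern in plain JPA; the persistence unit name and MyEntity are made up:
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// "myUnit" is a placeholder persistence unit name.
EntityManagerFactory emf = Persistence.createEntityManagerFactory("myUnit");

// Writer side (the Job): create the object in its own, committed transaction.
EntityManager writer = emf.createEntityManager();
writer.getTransaction().begin();
MyEntity created = new MyEntity();   // MyEntity is an illustrative entity
writer.persist(created);
writer.getTransaction().commit();    // nothing is visible to other sessions before this
Long id = created.getId();
writer.close();

// Reader side (the long-running 'read-only' context): if you reuse a
// long-lived EntityManager, clear() it so the lookup goes to the database
// instead of the stale persistence context. If the reading transaction uses
// snapshot isolation, it may also need to be restarted to see new rows.
EntityManager reader = emf.createEntityManager();
reader.clear();
MyEntity found = reader.find(MyEntity.class, id);
reader.close();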
I'm now working with Quartz and have found many times that child jobs are not fired automatically after the mother job has already executed. I investigated the log generated by log4j in the Quartz library, and we found a missing-trigger problem in the table qrtz_simple_triggers. The mother job inserts the trigger into qrtz_triggers and should insert the corresponding row into qrtz_simple_triggers immediately afterwards. In my case, however, there was a delay of about 1 second before the simple trigger was completely inserted, and in that window a thread from the thread pool UPDATEd the trigger status in qrtz_triggers from 'WAITING' to 'ACQUIRED' while the mother job had not yet finished the insert into qrtz_simple_triggers. So the scheduler thread could not find the simple trigger in the table and stopped working (for that child job).
My point is: how can I prevent a case like this? I think the two INSERT statements are not in the same transaction. I'm now investigating that, and I suppose the solution is to merge those statements into one transaction. Could you give me more ideas?
Thanks in advance,
Stop :)
If you are using Spring to manage transactions, then check this post.
Setting
org.quartz.jobStore.class = org.springframework.scheduling.quartz.LocalDataSourceJobStore
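As a sketch of the Spring wiring (bean setup is illustrative, not taken from the post above): handing the DataSource to SchedulerFactoryBean makes Spring use LocalDataSourceJobStore, so scheduling calls made inside a Spring-managed transaction should write the qrtz_triggers and qrtz_simple_triggers rows in that one transaction:
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class QuartzConfig {

    @Bean
    public SchedulerFactoryBean scheduler(DataSource dataSource,
                                          PlatformTransactionManager txManager) {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        // Giving Spring the DataSource switches the job store to
        // LocalDataSourceJobStore, so both trigger inserts commit
        // (or roll back) together with the surrounding transaction.
        factory.setDataSource(dataSource);
        factory.setTransactionManager(txManager);
        return factory;
    }
}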