Entity Framework Extended throws DynamicProxy exception

When trying to do bulk updates using EntityFramework.Extended I get one of two exceptions.
Following the example, I tried:
context.ProcessJobs.Where(job => true).Update(job => new ProcessJob
{
    Status = ProcessJobStatus.Processing,
    StatusTime = DateTime.Now,
    LogString = "Processing"
});
I got the following exception:
'EntityFramework.Reflection.DynamicProxy' does not contain a definition for 'InternalQuery'
...
System.Core.dll!System.Dynamic.UpdateDelegates.UpdateAndExecute1(System.Runtime.CompilerServices.CallSite site, object arg0) + 0x153 bytes
EntityFramework.Extended.dll!EntityFramework.Extensions.ObjectQueryExtensions.ToObjectQuery(System.Linq.IQueryable query) + 0x2db bytes
EntityFramework.Extended.dll!EntityFramework.Extensions.BatchExtensions.Update(System.Linq.IQueryable source, System.Linq.Expressions.Expression> updateExpression) + 0xe9 bytes
Based on a GitHub issue, I tried:
var c = ((IObjectContextAdapter) context).ObjectContext.CreateObjectSet<ProcessJob>();
c.Update(job => new ProcessJob
{
    Status = ProcessJobStatus.Processing,
    StatusTime = DateTime.Now,
    LogString = "Processing"
});
This results in the following exception (probably the same error as reported here):
'EntityFramework.Reflection.DynamicProxy' does not contain a definition for 'EnsureMetadata'
...
EntityFramework.Extended.dll!EntityFramework.Mapping.ReflectionMappingProvider.FindMappingFragment(System.Collections.Generic.IEnumerable itemCollection, System.Data.Entity.Core.Metadata.Edm.EntitySet entitySet) + 0xc1e bytes
EntityFramework.Extended.dll!EntityFramework.Mapping.ReflectionMappingProvider.CreateEntityMap(System.Data.Entity.Core.Objects.ObjectQuery query) + 0x401 bytes
EntityFramework.Extended.dll!EntityFramework.Mapping.ReflectionMappingProvider.GetEntityMap(System.Data.Entity.Core.Objects.ObjectQuery query) + 0x58 bytes
EntityFramework.Extended.dll!EntityFramework.Mapping.MappingResolver.GetEntityMap(System.Data.Entity.Core.Objects.ObjectQuery query) + 0x9f bytes
EntityFramework.Extended.dll!EntityFramework.Extensions.BatchExtensions.Update(System.Linq.IQueryable source, System.Linq.Expressions.Expression> updateExpression) + 0x1c8 bytes
I tried the latest version for EF5, and then upgraded to EF6 to see whether the latest version works, but I get the same problem with both. We use Code First.
I am not sure how to proceed. I've started trying to understand how the EntityFramework.Extensions code works, but I am wondering whether I will have to fall back to using a stored procedure or SQL, neither of which is ideal for our setup.
Does anyone know what these problems are, or have any ideas about how to work out what is going on?

It turns out that you can ignore this error. I had the debugger option to break on thrown CLR exceptions turned on. I followed the code through, then downloaded the source and started debugging.
It seems that the exception thrown initially is expected, and the library retries with some other options. Unfortunately I didn't have time to look into the exact cause because I ran into another problem, but that's the subject of a different question.

Related

How can I achieve SKIP LOCKED functionality using a pessimistic lock?

I want to implement SKIP LOCKED. I am using Postgres 9.6.17 and the following repository method:
@Lock(LockModeType.PESSIMISTIC_WRITE)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "-2")})
@Query("Select d from Demo d where d.id in (?1)")
List<Demo> getElementByIds(List<Long> ids);
I am making the same DB call from two services at the same time (parallel curl requests to both services, each of which makes the DB call). From one service I am passing ids 1 to 4, and from the other I am passing ids 1 to 7.
If the first service takes a lock on rows 1 to 4, the second service has to wait until the first service releases its lock; ideally, the second service should skip those rows and return rows 5 to 7.
From the first service I am calling it like this:
List<Long> ids = new ArrayList<>();
ids.add(1L);
ids.add(2L);
ids.add(3L);
ids.add(4L);
List<Demo> demos = demoRepo.getElementByIds(ids);
try {
    Thread.sleep(500);
} catch (Exception e) {
}
logger.info("current time: " + System.currentTimeMillis());
And from the second service I am calling it like this:
List<Long> ids = new ArrayList<>();
ids.add(1L);
ids.add(2L);
ids.add(3L);
ids.add(4L);
ids.add(5L);
ids.add(6L);
ids.add(7L);
try {
    Thread.sleep(100);
} catch (Exception e) {
}
logger.info("current time: " + System.currentTimeMillis());
List<Demo> demos = demoRepo.getElementByIds(ids);
logger.info("current time: " + System.currentTimeMillis());
But both queries always return the same rows; the second one simply waits for the other service to release its lock instead of skipping the locked rows.
The Spring Data JPA version I am using:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <version>2.2.5.RELEASE</version>
</dependency>
I have also tried setting spring.jpa.javax.persistence.lock.timeout=-2 at the application level, but that is not working either.
Both methods seem to behave like plain PESSIMISTIC_WRITE.
Please suggest how I can achieve SKIP LOCKED functionality.
The queries seem to be correct.
Make sure you are using a Postgres dialect recent enough to support the SKIP LOCKED functionality.
Considering the version of Postgres you are using, the dialect below should be used:
org.hibernate.dialect.PostgreSQL95Dialect
You can refer to this link for more information.
The answer from RAVI SHANKAR is correct. I have tested it and it really worked. You need to specify the dialect version.
For example, in Spring Boot:
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQL95Dialect
Also, the code will be more readable if you use constants instead of strings:
@QueryHint(name = AvailableSettings.JPA_LOCK_TIMEOUT, value = "" + LockOptions.SKIP_LOCKED)
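For reference, here is a minimal sketch of the complete repository, assuming the Demo entity and demoRepo repository from the question; the dialect property and the -2 lock-timeout hint are the two pieces that make Hibernate emit FOR UPDATE SKIP LOCKED. Treat it as an illustration under those assumptions, not a drop-in fix.

// application.properties (a 9.5+ dialect is required for SKIP LOCKED)
// spring.jpa.database-platform=org.hibernate.dialect.PostgreSQL95Dialect

import java.util.List;
import javax.persistence.LockModeType;
import javax.persistence.QueryHint;
import org.hibernate.LockOptions;
import org.hibernate.cfg.AvailableSettings;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.jpa.repository.QueryHints;
import org.springframework.data.repository.Repository;

public interface DemoRepo extends Repository<Demo, Long> {

    // PESSIMISTIC_WRITE plus a lock timeout of -2 (LockOptions.SKIP_LOCKED)
    // is translated to SELECT ... FOR UPDATE SKIP LOCKED by the 9.5+ dialects.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @QueryHints({@QueryHint(name = AvailableSettings.JPA_LOCK_TIMEOUT,
                            value = "" + LockOptions.SKIP_LOCKED)})
    @Query("select d from Demo d where d.id in (?1)")
    List<Demo> getElementByIds(List<Long> ids);
}

Note that each caller also needs to run inside its own transaction (for example @Transactional on the calling service method) so that the pessimistic lock is actually held while the rows are processed.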

DxlImporter inside a loop throws error "DXL importer operation failed"

I have a Java agent which loops through a view and gets the attachment from each document. The attachment is a .dxl file containing the document's XML data. I extract the file to a temp directory and try to import the extracted .dxl as soon as it has been extracted.
The problem is that it only imports the first document's attachment in the loop and then throws this error in the Java debug console:
NotesException: DXL importer operation failed
at lotus.domino.local.DxlImporter.importDxl(Unknown Source)
at JavaAgent.NotesMain(Unknown Source)
at lotus.domino.AgentBase.runNotes(Unknown Source)
at lotus.domino.NotesThread.run(Unknown Source)
My Java agent code is:
public class JavaAgent extends AgentBase {

    static DxlImporter importer = null;

    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();
            // (Your code goes here)
            // Get current database
            Database db = agentContext.getCurrentDatabase();
            View v = db.getView("DXLProcessing_mails");
            DocumentCollection dxl_tranfered_mail = v.getAllDocumentsByKey("dxl_tranfered_mail");
            Document dxlDoc = dxl_tranfered_mail.getFirstDocument();
            while (dxlDoc != null) {
                RichTextItem rt = (RichTextItem) dxlDoc.getFirstItem("body");
                Vector allObjects = rt.getEmbeddedObjects();
                System.out.println("File name is " + allObjects.get(0));
                EmbeddedObject eo = dxlDoc.getAttachment(allObjects.get(0).toString());
                if (eo.getFileSize() > 0) {
                    eo.extractFile(System.getProperty("java.io.tmpdir") + eo.getName());
                    System.out.println("Extracted File to " + System.getProperty("java.io.tmpdir") + eo.getName());
                    String filePath = System.getProperty("java.io.tmpdir") + eo.getName();
                    Stream stream = session.createStream();
                    if (stream.open(filePath) & (stream.getBytes() > 0)) {
                        System.out.println("In If" + System.getProperty("java.io.tmpdir"));
                        importer = session.createDxlImporter();
                        importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
                        System.out.println("Break Point");
                        importer.importDxl(stream, db);
                        System.out.println("Imported Sucessfully");
                    } else {
                        System.out.println("In else" + stream.getBytes());
                    }
                }
                dxlDoc = dxl_tranfered_mail.getNextDocument();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
The code executes until it prints "Break Point" and then throws the error, but the attachment does get imported the first time.
On the other hand, if I hard-code the filePath to a specific .dxl file on the file system, it imports the DXL as a document into the database with no errors.
I am wondering whether the issue is that the stream passed in is not fully processed before the next loop iteration executes.
Any kind of suggestion will be helpful.
I can't see any part where your while loop would move on from the first document.
Usually you would have something like:
Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);
dxlDoc.recycle();
dxlDoc = nextDoc;
Near the end of the loop to advance it to the next document. As your code currently stands it looks like it would never advance, and always be on the first document.
If you do not know about the need to 'recycle' Domino objects, I suggest you search for blog posts and articles that explain why it is necessary.
It is a little complicated, but basically the Java objects are just a 'wrapper' for the objects in the C API.
Whenever you create a Domino object (such as a Document, View, DocumentCollection, etc.), a memory handle is allocated in the underlying 'C' layer. This needs to be released (recycled), and it will eventually be released when the session is recycled. However, when you are processing in a loop it is much more important to recycle, as you can easily exhaust the available memory handles and cause a crash.
Also, it's possible you may need to close (and recycle) each Stream after you have finished importing each file.
Lastly, double check that the extracted file that is causing the exception is definitely a valid DXL file. It could simply be that some of the attachments are not valid DXL and will always throw an exception.
You could put a try/catch within the loop to handle that scenario (and report the problem files), which will allow the agent to continue without halting; see the sketch below.
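A minimal sketch of what the body of the while loop could look like with those changes. It is meant to sit inside the agent's existing try/catch and reuses session, db and dxl_tranfered_mail from the question; for brevity only the stream and importer are shown being closed and recycled, and the other per-iteration objects (RichTextItem, EmbeddedObject, ...) should be recycled the same way.

Document dxlDoc = dxl_tranfered_mail.getFirstDocument();
while (dxlDoc != null) {
    Stream stream = null;
    DxlImporter importer = null;
    try {
        RichTextItem rt = (RichTextItem) dxlDoc.getFirstItem("body");
        Vector allObjects = rt.getEmbeddedObjects();
        EmbeddedObject eo = dxlDoc.getAttachment(allObjects.get(0).toString());
        String filePath = System.getProperty("java.io.tmpdir") + eo.getName();
        eo.extractFile(filePath);

        stream = session.createStream();
        if (stream.open(filePath) && stream.getBytes() > 0) {
            importer = session.createDxlImporter();
            importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
            importer.importDxl(stream, db);
        }
    } catch (NotesException e) {
        // report the problem file and keep going instead of halting the agent
        System.out.println("Import failed for doc " + dxlDoc.getUniversalID() + ": " + e.text);
    } finally {
        // close and recycle per-iteration objects so handles are not exhausted
        if (stream != null) {
            stream.close();
            stream.recycle();
        }
        if (importer != null) {
            importer.recycle();
        }
    }
    // advance to the next document and recycle the one we just processed
    Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);
    dxlDoc.recycle();
    dxlDoc = nextDoc;
}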

XmlDeserialization fails in medium trust level

We have our site hosted at medium trust level and the hosting provider has refused to give us full trust. Our code tries to deserialize data using the following code snippet but fails with a ReflectionPermission error. When debugging I get the error "There is an error in XML document (71, 6)." It works perfectly fine in full trust. Can someone please advise how I can solve this problem before we decide to move to a full-trust hosting provider?
public static T Decrypt<T>(Stream stream)
{
    Rijndael rij = Rijndael.Create();
    rij.Key = key;
    rij.IV = iv;

    T obj = default(T); // assigns null if T is a reference type, or 0 (zero) for value types

    using (CryptoStream cs = new CryptoStream(stream, rij.CreateDecryptor(), CryptoStreamMode.Read))
    {
        using (GZipStream zs = new GZipStream(cs, CompressionMode.Decompress))
        {
            XmlSerializer xs = new XmlSerializer(typeof(T));
            obj = (T)xs.Deserialize(zs);
            zs.Close();
        }
        cs.Close();
    }
    return obj;
}
Open the project properties and set "Generate serialization assembly" to "on". This will make the compiler generate serialization assemblies at compile-time instead of on the fly. Just make sure to deploy the serialization assemblies.
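The MSBuild property behind that project-properties option is GenerateSerializationAssemblies; a minimal sketch of setting it directly in the .csproj (the property-group placement here is illustrative):

<PropertyGroup>
  <!-- Pre-generate the XmlSerializer assembly at build time instead of at runtime -->
  <GenerateSerializationAssemblies>On</GenerateSerializationAssemblies>
</PropertyGroup>

The build then produces an <AssemblyName>.XmlSerializers.dll next to your main assembly, and that is the file that needs to be deployed with the site.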

Invalid attempt to call FieldCount when reader is closed

The error above occurs when I try to do a dataReader.Read on the data received from the database. I know there are two rows in there, so it isn't because no data actually exists.
Could the CommandBehavior.CloseConnection be causing the problem? I was told you had to pass this right after an ExecuteReader. Is this correct?
try
{
    _connection.Open();

    using (_connection)
    {
        SqlCommand command = new SqlCommand("SELECT * FROM Structure", _connection);
        SqlDataReader dataReader = command.ExecuteReader(CommandBehavior.CloseConnection);

        if (dataReader == null) return null;

        var newData = new List<Structure>();

        while (dataReader.Read())
        {
            var entity = new Structure
            {
                Id = (int)dataReader["StructureID"],
                Path = (string)dataReader["Path"],
                PathLevel = (string)dataReader["PathLevel"],
                Description = (string)dataReader["Description"]
            };
            newData.Add(entity);
        }

        dataReader.Close();
        return newData;
    }
}
catch (SqlException ex)
{
    AddError(new ErrorModel("An SqlException error has occured whilst trying to return descendants", ErrorHelper.ErrorTypes.Critical, ex));
    return null;
}
catch (Exception ex)
{
    AddError(new ErrorModel("An error has occured whilst trying to return descendants", ErrorHelper.ErrorTypes.Critical, ex));
    return null;
}
finally
{
    _connection.Close();
}
}
Thanks in advance for any help.
Clare
When you use a using block in C#, the connection is automatically closed after the closing } of the using. That is why the reader reports itself as closed when you try to read: the data is being read after the connection has been disposed. Read the data before the using block ends, or open and close the connection manually instead of wrapping it in a using.
Your code, as displayed, is fine. I've taken it into a test project, and it works. It's not immediately clear why you get this message with the code shown above. Here are some debugging tips/suggestions; I hope they're valuable for you.
Create a breakpoint on the while (dataReader.Read()). Before it enters its codeblock, enter this in your Immediate or Watch Window: dataReader.HasRows. That should evaluate to true.
While stopped on that Read(), open your Locals window to inspect all the properties of dataReader. Ensure that the FieldCount is what you expect from your SELECT statement.
When stepping into this Read() iteration, does a Structure object get created at all? What's the value of dataReader["StructureID"] and all the others in the Immediate Window?
It's not the CommandBehavior.CloseConnection causing the problem. That simply tells the connection to also close itself when you close the datareader.
When I got that error, it happened to be a command timeout problem (I was reading some large binary data). As a first attempt, I increased the command timeout (not the connection timeout!) and the problem was solved.
Note: while attempting to find the problem, I tried to listen to the (Sql)Connection's StateChange event, but it turned out that the connection never fell into a "broken" state.
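A minimal sketch of that first attempt, reusing the _connection and query from the question (the 120-second value is just an example):

// CommandTimeout is in seconds and applies to command execution, not to opening the connection
using (var command = new SqlCommand("SELECT * FROM Structure", _connection))
{
    command.CommandTimeout = 120;
    using (SqlDataReader dataReader = command.ExecuteReader())
    {
        while (dataReader.Read())
        {
            // map rows to Structure objects as in the question
        }
    }
}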
Same problem here. I tested all the above solutions:
- increase the command timeout
- close the connection after the read
Here's the code:
1 objCmd.Connection.Open()
2 objCmd.CommandTimeout = 3000
3 Dim objReader As OleDbDataReader = objCmd.ExecuteReader()
4 repeater.DataSource = objReader
5 CType(repeater, Control).DataBind()
6 objReader.Close()
7 objCmd.Connection.Dispose()
Moreover, at line 4 objReader has Closed = False
I got this exception while using the VS.NET debugger and trying to examine some IQueryable results. Bad decision because the IQueryable resulted in a large table scan. Stopping and restarting the debugger and NOT trying to preview this particular IQueryable was the workaround.

Lucene.net read past EOF error during IndexWriter creation

I'm trying to implement Lucene.Net in my C# application.
At this point I'm still at the very start: creating an index.
I use the following code:
var directory = new Lucene.Net.Store.SimpleFSDirectory(new System.IO.DirectoryInfo("d:\\tmp\\lucene-index\\"));
var analyzer = new Lucene.Net.Analysis.Standard.StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
var writer = new Lucene.Net.Index.IndexWriter(directory, analyzer, true, Lucene.Net.Index.IndexWriter.MaxFieldLength.UNLIMITED);
I get an IOException on the writer initialization line.
The error message is "Read past EOF" and it occurs in the IndexInput class, in the ReadInt() method.
The code does produce some files in the lucene-index directory (segments.gen and write.lock), but both are 0 bytes.
I tried to Google this problem but I can't find any good info about it.
Is there a Lucene.Net expert here who can help me?
Here's some code that I've used before. I think the problem that you are experiencing is with the SimpleFSDirectory.
var writer = new IndexWriter("SomePath", new StandardAnalyzer());
writer.SetMaxBufferedDocs(100);
writer.SetRAMBufferSizeMB(256);
// add your document here
writer.AddDocument( ... );
writer.Flush();
// the Optimize method is optional and is used by lucene to combine multiple index files
writer.Optimize();
writer.Close();
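If you would rather keep the directory-based constructor from the question, one thing worth trying (an assumption on my part, not a confirmed fix) is letting FSDirectory.Open choose the directory implementation instead of newing up SimpleFSDirectory directly:

var directory = Lucene.Net.Store.FSDirectory.Open(new System.IO.DirectoryInfo("d:\\tmp\\lucene-index\\"));
var analyzer = new Lucene.Net.Analysis.Standard.StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
// create: true builds a fresh index, overwriting any leftover files from earlier failed runs
var writer = new Lucene.Net.Index.IndexWriter(directory, analyzer, true, Lucene.Net.Index.IndexWriter.MaxFieldLength.UNLIMITED);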