Error Using DbExecutionStrategy About Streaming but Streaming not used - entity-framework

I'm trying to use a custom DbExecutionStrategy that implements retries, but I'm getting the following error:
An exception of type 'System.InvalidOperationException' occurred in EntityFramework.dll but was not handled in user code
Additional information: Streaming queries are not supported by the configured execution strategy 'MyDbExecutionStrategy'. See http://go.microsoft.com/fwlink/?LinkId=309381 for additional information.
If you follow that link, it explains that EF6 doesn't use streaming by default, but that you can turn it on with AsStreaming():
By default, EF6 and later versions will buffer query results rather than streaming them. If you want to have results streamed you can use the AsStreaming method to change a LINQ to Entities query to streaming.
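For reference, opting a query into streaming would look something like this (I'm not doing this anywhere; AsStreaming is EF6's QueryableExtensions extension method in the System.Data.Entity namespace):
// For reference only: how a LINQ to Entities query would opt in to streaming in EF6.
// Requires: using System.Data.Entity;
var latest = dataModel.Set<DeploymentLog>()
    .OrderByDescending(d => d.DeploymentTimestamp)
    .AsStreaming()      // switches this one query from buffering to streaming
    .FirstOrDefault();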
However, the call that is being made does not use streaming and I don't have any code that calls AsStreaming().
dataModel.Set<DeploymentLog>().OrderByDescending(d => d.DeploymentTimestamp).FirstOrDefault()
I've copied the code and the strategy into a separate command-line app that I created to test it, and it works fine there. But in my web application I'm getting that error.
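For context, the strategy is a retrying strategy of this general shape (a simplified, generic sketch rather than my exact code; the retry count, delay, and retry condition are illustrative), registered through a DbConfiguration:
// Generic sketch of a retrying execution strategy; names and limits are illustrative.
// Requires: using System; using System.Data.Entity; using System.Data.Entity.Infrastructure; using System.Data.SqlClient;
public class MyDbExecutionStrategy : DbExecutionStrategy
{
    public MyDbExecutionStrategy()
        : base(maxRetryCount: 3, maxDelay: TimeSpan.FromSeconds(5)) { }

    protected override bool ShouldRetryOn(Exception exception)
    {
        return exception is SqlException;   // simplified transient-error check
    }
}

public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration()
    {
        SetExecutionStrategy("System.Data.SqlClient", () => new MyDbExecutionStrategy());
    }
}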
Any ideas why? Is there some kind of setting that turns on streaming for all queries?

Related

How to force a Pipeline's status to be set to Failed

I'm using Copy Data.
When there is some data error, I would export those rows to a blob.
But in this case, the pipeline's status is still Succeeded. I want to set it to Failed. Is that possible?
When there is some data error.
It depends on what error you mean here.
1. If you mean a common incompatibility or mismatch error, ADF has a built-in Copy Activity feature named Fault tolerance which supports the following three scenarios:
Incompatibility between the source data type and the sink native type.
Mismatch in the number of columns between the source and the sink.
Primary key violation when writing to SQL Server/Azure SQL Database/Azure Cosmos DB.
If you configure it to log the incompatible rows, you can find the log file at this path: https://[your-blob-account].blob.core.windows.net/[path-if-configured]/[copy-activity-run-id]/[auto-generated-GUID].csv.
If you want to abort the job as soon as any error occurs, you can configure the copy activity's fault tolerance settings accordingly.
Please see this case: Fault tolerance and log the incompatible rows in Azure Blob storage
2. If you are talking about your own logic for the data error, such as some business logic, I'm afraid ADF can't detect that for you, though I think it's also a common requirement. However, you could follow this case (How to control data failures in Azure Data Factory Pipelines?) as a workaround. The main idea is to use a Custom Activity to divert the bad rows before the Copy Activity executes. In the Custom Activity, you can upload the bad rows into Azure Blob Storage with the .NET SDK as you see fit.
Update:
Since you want to log all incompatible rows and force the job to fail at the same time, I'm afraid that cannot be implemented in the Copy Activity directly.
However, one idea is to use an If Condition activity after the Copy Activity to judge whether its output contains rowsSkipped. If so, output False; then you will know some data was skipped and you can check it in the blob storage (see the expression sketch below).
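For illustration, the If Condition expression could be something along these lines (a sketch only: the activity name 'Copy Data' is an assumption, and rowsSkipped only appears in the copy output when fault tolerance is configured):
@greater(activity('Copy Data').output.rowsSkipped, 0)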

Extended events not capturing Entity Framework queries (read/updates)

I added an Extended Events session to track the SQL calls that are slowing down my system and leading to Timeout exceptions, among other issues:
CREATE EVENT SESSION [longrunning_statements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed(
WHERE ([duration]>(2000000) AND [database_id]=(9)))
ADD TARGET package0.event_file(SET filename=N'c:\capture\xe_longrunning_statement.xel',metadatafile=N'c:\capture\xe_longrunning_statement.xem')
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
GO
But I noticed it does not register SQL Server updates/read queries/procedure calls from Entity Framework; it only logged the SQL queries I ran from SSMS.
Any ideas are appreciated.
UPDATE:
I use EF 6.1, which I think uses batching to save data.
Instead of trying to capture sqlserver.sql_statement_completed, I would capture sqlserver.sql_batch_completed and sqlserver.rpc_completed for queries and stored procedures issued by applications/APIs (a sketch follows below).
If that does not work, then removing the filters, or at least the duration filter (as Andrey suggests in the comments), would probably give us more insight into why the queries are not being captured.
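For illustration, the reworked session could look roughly like this (the session name, file paths and filter values are simply carried over from the original definition above; drop the duration filter while diagnosing if needed):
CREATE EVENT SESSION [longrunning_statements] ON SERVER
ADD EVENT sqlserver.rpc_completed(
WHERE ([duration]>(2000000) AND [sqlserver].[database_id]=(9))),
ADD EVENT sqlserver.sql_batch_completed(
WHERE ([duration]>(2000000) AND [sqlserver].[database_id]=(9)))
ADD TARGET package0.event_file(SET filename=N'c:\capture\xe_longrunning_statement.xel',metadatafile=N'c:\capture\xe_longrunning_statement.xem')
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
GO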

Apache geode failing with OOM exception

I am trying to create a REST application over Apache Geode. The application works well with limited data, but when I need to get the complete data set (~0.8M), it fails with an OOM exception on the server.
Exception :
HTTP GET Error: 500
REST OQL Response: {"cause":"Handler dispatch failed; nested exception is java.lang.OutOfMemoryError: Java heap space"}
I tried the same approach with a cache client and it works seamlessly, but we need to use REST to integrate with our application.
Any ideas for working around this?
I am thinking about the following approaches:
Can we break the data up on the server side and use something like "Range" with Apache Geode? I tried this quickly, but it did not work well.
Can we start getting the data into smaller buffers at the client side and start reading buffer by buffer?
Is it possible to get data out from Geode as a data-stream?
Thanks,
Abhay
Would it be possible for you to share the stack trace for the OOM you are getting? Are you saying the results are 0.8 megabytes? That doesn't seem like it should cause an OOME.
You can get ranges of data using OQL queries, but if your data set is really that small it seems like something else strange is going on with the REST API.
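For illustration, a limited result set can be requested through the developer REST API with an ad-hoc OQL query along these lines (a sketch: the region name /myRegion is a placeholder, and the query string must be URL-encoded in practice):
GET /gemfire-api/v1/queries/adhoc?q=SELECT * FROM /myRegion LIMIT 100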

How does DSpace process a query in JSPUI?

How is any query processed in DSpace, and how is data managed between the front end and PostgreSQL?
As in every other webapp running in a servlet container like Tomcat, the file WEB-INF/web.xml controls how a query is processed. In the case of DSpace's JSPUI you'll find this file in [dspace-install]/webapps/jspui/WEB-INF/web.xml. The JSPUI defines several filters, listeners and servlets to process a request.
The filters are used to report that the JSPUI is running, to ensure that restricted areas can be seen by authenticated users or even by authenticated administrators only, and to handle content negotiation.
The listeners ensure that DSpace has started correctly. During its start DSpace loads the configuration, opens the database connections that it uses in a connection pool, lets Spring do its IoC magic and so on.
To begin with, the most important parts for seeing how a query is processed are the servlets and the servlet-mappings. A servlet-mapping defines which servlet is used to process a request with a specific request path: e.g. all requests to example.com/dspace-jspui/handle/* will be processed by org.dspace.app.webui.servlet.HandleServlet, while all requests to example.com/dspace-jspui/submit will be processed by org.dspace.app.webui.servlet.SubmissionController.
The servlets use their Java code ;-) and the DSpace Java API to process the request. You'll find most of it in the dspace-api module (see [dspace-source]/dspace-api/src/main/java/...) and a smaller part in the dspace-services module ([dspace-source]/dspace-services/src/main/java/...). Within the DSpace Java API there are two important classes if you're interested in the communication with the database:
One is org.dspace.core.Context. The context contains information about whether and which user is logged in, an initialized and connected database connection (if all went well) and a cache. The methods Context.abort(), Context.commit() and Context.complete() are used to manage the database transaction. That is the reason why almost all methods that manipulate the database require a Context as a method parameter: it controls the database connection and the database transaction.
The other one is org.dspace.storage.rdbms.DatabaseManager. The DatabaseManager is used to handle database queries, updates, deletes and so on. Every DSpaceObject contains a TableRow object which holds the information about the object as stored in the database. Inside the DSpaceObject classes (e.g. org.dspace.content.Item, org.dspace.content.Collection, ...) the TableRow may be manipulated and the changes stored back to the database by using DatabaseManager.update(Context, DSpaceObject). The DatabaseManager provides several methods to send SQL queries to the database and to update, delete, insert or even create data in the database. Just take a look at its API or search for "SELECT" in the DSpace source to get an example.
In JSPUI it is important to use Context.commit() if you want to commit the database state. If a request is processed and Context.commit() was not called, the transaction will be aborted and the changes get lost. If you call Context.complete(), the transaction will be committed, the database connection will be freed and the context is marked as finished. After you have called Context.complete(), the context cannot be used for a database connection any more.
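For illustration, the typical pattern looks roughly like this (a simplified sketch against the pre-6.x API described above; the item id and metadata values are placeholders):
// Simplified sketch; imports: org.dspace.core.Context, org.dspace.content.Item
int itemId = 123;                            // placeholder id
Context context = null;
try {
    context = new Context();                 // opens a database connection from the pool
    Item item = Item.find(context, itemId);  // reads through the Context's connection
    item.addMetadata("dc", "description", null, null, "example value");
    item.update();                           // stores the changed TableRow data
    context.complete();                      // commits the transaction and frees the connection
} catch (Exception e) {
    if (context != null && context.isValid()) {
        context.abort();                     // rolls back, the changes are lost
    }
}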
DSpace is quite a huge project and a lot more could be written about its ORM, the initialization of the database and so on. But this should already help you start developing for DSpace. I would recommend reading the "Architecture" part of the DSpace manual: https://wiki.duraspace.org/display/DSDOC5x/Architecture
If you have more specific questions, you are always invited to ask them here on Stack Overflow or on our mailing lists (http://sourceforge.net/p/dspace/mailman/): dspace-tech (for any question about DSpace) and dspace-devel (for questions regarding the development of DSpace).
It depends on the version of DSpace you are running, along with your configuration.
In DSpace 4.0 or above, by default, the DSpace JSPUI uses Apache Solr for all searching and browsing. DSpace performs all indexing and querying of Solr via its Discovery module. The Discovery (Solr) based search/indexing classes are available under the "org.dspace.discovery" package.
In earlier versions of DSpace (3.x or below), by default, the DSpace JSPUI uses Apache Lucene directly. In these older versions, DSpace called Lucene directly for all indexing and searching. The Lucene based search/indexing classes are available under the "org.dspace.search" package.
In both situations, queries are passed directly to either Solr or Lucene (again depending on the version of DSpace). The results are parsed and displayed within the DSpace UI.

Logging logic and data errors in MVC3 with Elmah

I have a Service layer in my MVC3 app, which plays the role of a Repository among other things, as a layer between my Data layer and the actual web application. I have coded all my GetById methods to be robust, using FirstOrDefault and not just First, because the Id is passed in a URL and cannot be guaranteed to be a valid Id.
I now find myself doing a FirstOrDefault and then only proceeding if the result is not null. I would like to log the event when it is null, and then proceed to do nothing, etc. Now, I am already using Elmah to log unhandled exceptions, and I have very little experience with exception handling in MVC3, but it occurs to me that it might be better to use a simple First, with Elmah logging the exception if no entity is found.
How should I approach this scenario, where an invalid Id is quite definitely a logic error, but not a low-level CLR exception? This is not like when somebody is asked to enter an Id and no entity is found for their search term, which is a normal logic result.
Generating exceptions can be expensive. Your initial approach of validating user input is more robust. I would recommend using a logging framework such as NLog (http://nlog-project.org) to log the case where an invalid ID is passed in.
If you would like to keep all of your log messages in Elmah, then you can decide to write directly to Elmah's error log instead of bubbling up an exception (see the sketch below).
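For illustration, a minimal sketch of logging the "not found" case directly to Elmah without throwing (the entity, field and context names are hypothetical; ErrorSignal relies on the current HTTP context, which is available during an MVC3 request):
// Hypothetical service-layer method: record the invalid Id in Elmah, keep it a normal code path.
public Widget GetById(int id)
{
    var widget = _db.Widgets.FirstOrDefault(w => w.Id == id);
    if (widget == null)
    {
        Elmah.ErrorSignal.FromCurrentContext()
            .Raise(new InvalidOperationException("No Widget found for Id " + id));
    }
    return widget;
}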