Caching Issue with Teiid Virtual DB with OData Query - jboss

I have a business scenario where, whenever a new record is loaded into a DB table,
a) a notification is sent to the client, conveying that the data is loaded and ready for querying;
b) upon receiving the notification, the client makes an OData query to the JBoss virtual DB. OData is supported by the Teiid VDB.
The problem: new records (inserted via a manual/automated SQL script) are not returned in the OData query response. For the first 5 minutes it always returns the cached result, because OData has a default cache time of 5 minutes.
We want Teiid to always return all the records, including newly inserted ones.
I tried the following options, but they are not working as expected (https://developer.jboss.org/wiki/AHowToGuideForMaterializationcachingViewsInTeiid):
1) Cache hints
/*+ cache(ttl:300000) */ select * from Source.UpdateProduct
2) OPTION NOCACHE
**** This works when I make a JDBC query to the DB.
Please suggest: how can I turn off this caching for OData REST queries?

I think the Teiid documentation at https://docs.jboss.org/author/display/TEIID/OData+Support could help.
You don't specify which version of Teiid you use, so I've linked the documentation for the most current version.
When you go through that docs page, at the bottom there is a Configuration section listing several configurable options.
Doesn't the skiptoken-cache-time option serve your need? Try setting it to a lower value or zero and see if that helps. Just locate the odata war, open it, and change the WEB-INF/web.xml file.
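For illustration, the entry inside WEB-INF/web.xml would look something like this (a sketch; match the parameter style, context-param or servlet init-param, that your version's web.xml already uses):

    <init-param>
        <param-name>skiptoken-cache-time</param-name>
        <!-- milliseconds; try 0 (or a low value) instead of the 5-minute default -->
        <param-value>0</param-value>
    </init-param>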
Jan

Related

Couchbase Sync-Gateway View Index Failure

I'm using Couchbase to power the back end of my mobile app and am experiencing a strange error when using views.
I have a view set up to fetch a specific document type and am querying that view via the Sync-Gateway admin API. Normally it works well but I've found that if a document has been recently added to the database then the view query will return 0 results on the first request. The second identical request will then return the expected response.
I suspect that the new document hasn't been indexed by Couchbase yet and that the query triggers a re-indexing of documents. What I'm wondering is whether there's a way of notifying Couchbase that I'm about to query the view, so it can prepare the documents in advance. I don't want to have to perform two requests for each query.
Has anyone else come across this issue?
Any solutions?
By default, Sync Gateway allows using a "stale" index, meaning the index won't necessarily be rebuilt before a query is processed.
To override this, add stale=false to your query.
(Allowed options are false, ok, and update_after. The default is update_after.)
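For example, when querying the view through the Sync Gateway admin API (the database, design document and view names here are placeholders):

    curl "http://localhost:4985/mydb/_design/mydesign/_view/byType?stale=false"

With stale=false the index is brought up to date before results are returned, so a freshly added document shows up on the first request.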

Strongloop API response limit over Oracle Database

I've just started using Strongloop to define a REST api over my oracle database.
Everything works fine when I check my API using "localhost:3000/explorer".
For instance, when I send a "get" to list all persons, the server answers with the list of people in the PERSONS table.
The issue is that the server does not return all the records in the table.
It returns only 100 records, even though the table contains more than 100.
Am I missing something?
I found the solution, in case someone faces the same issue.
The problem is that in loopback-connector-oracle, the maximum number of rows is set to 100.
To change the maximum number of rows you should:
1- In the "datasources.json" file, set the property "maxRows" to the number you want, for instance "maxRows": 1000 (see the sketch below).
2- Replace the file \node_modules\loopback-connector-oracle\lib\oracle.js with the file oracle.js.
3- Restart your API; it will now return more than 100 records.
See this link for more details about the issue
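For illustration, a minimal datasources.json sketch with maxRows set (the data source name and connection properties are placeholders for your own Oracle settings):

    {
      "oracleDb": {
        "name": "oracleDb",
        "connector": "oracle",
        "host": "localhost",
        "port": 1521,
        "database": "XE",
        "user": "scott",
        "password": "tiger",
        "maxRows": 1000
      }
    }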
I don't think there is any such limit; by default it will fetch all the records.
Please check your table/database settings.

SpagoBI + Firebird DataSource (The result set is closed)

I am using SpagoBI version 3.6.0, Jaybird-2.2.2JDK_1.7 and Firebird 2.5 (x64). I set up a datasource and the test is OK.
I set up a dataset and the preview shows the correct list of columns, only there is no data. Access via some other SQL viewer shows the data.
The error message in the Catalina log is:
org.firebirdsql.jdbc.FBSQLException: The result set is closed
Does anybody have an idea what I did wrong?
After some testing the solution to your problem is to specify the connection property defaultHoldable=true in the connection URL of the datasource, so for example:
jdbc:firebirdsql://localhost/database?defaultHoldable=true
As commented earlier, you also need to upgrade to Jaybird 2.2.7; otherwise you will be confronted with bugs JDBC-304 and/or JDBC-305.
I haven't checked the code of SpagoBI, but it looks like SpagoBI assumes that result sets are always holdable over commit and executes its queries using auto-commit. It should either not use auto-commit, or check DatabaseMetaData.getResultSetHoldability() and/or Connection.getHoldability() and explicitly request holdable result sets.
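For reference, this is roughly what that looks like in plain JDBC (a sketch of what a client such as SpagoBI could do, not its actual code):

    import java.sql.*;

    public class HoldableExample {
        public static void main(String[] args) throws SQLException {
            // defaultHoldable=true makes result sets holdable over commit by default
            String url = "jdbc:firebirdsql://localhost/database?defaultHoldable=true";
            try (Connection con = DriverManager.getConnection(url, "SYSDBA", "masterkey")) {
                // Or request holdability explicitly for a single statement:
                Statement stmt = con.createStatement(
                        ResultSet.TYPE_FORWARD_ONLY,
                        ResultSet.CONCUR_READ_ONLY,
                        ResultSet.HOLD_CURSORS_OVER_COMMIT);
                try (ResultSet rs = stmt.executeQuery("SELECT 1 FROM RDB$DATABASE")) {
                    while (rs.next()) {
                        System.out.println(rs.getInt(1)); // still readable after a commit
                    }
                }
            }
        }
    }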

How does DSpace process a query in JSPUI?

How is a query processed in DSpace, and how is data managed between the front end and PostgreSQL?
Like every other webapp running in a servlet container like Tomcat, the file WEB-INF/web.xml controls how a query is processed. In the case of DSpace's JSPUI you'll find this file in [dspace-install]/webapps/jspui/WEB-INF/web.xml. The JSPUI defines several filters, listeners and servlets to process a request.
The filters are used to report that the JSPUI is running, to ensure that restricted areas can be seen by authenticated users or even by authenticated administrators only, and to handle Content Negotiation.
The listeners ensure that DSpace has started correctly. During its start DSpace loads the configuration, opens the database connections that it uses in a connection pool, lets Spring do its IoC magic and so on.
For the beginning the most important part to see how a query is processed are the servlets and the servlet-mappings. A servlet-mapping defines which servlet is used to process a request with a specific request path: e.g. all requests to example.com/dspace-jspui/handle/* will be processed by org.dspace.app.webui.servlet.HandleServlet, all requests to example.com/dspace-jspui/submit will be processed by org.dspace.app.webui.servlet.SubmissionController.
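For example, the corresponding entries in the JSPUI's web.xml look roughly like this (the servlet-name is whatever the file declares; treat this as a sketch):

    <servlet>
        <servlet-name>handle</servlet-name>
        <servlet-class>org.dspace.app.webui.servlet.HandleServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>handle</servlet-name>
        <url-pattern>/handle/*</url-pattern>
    </servlet-mapping>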
The servlets use their Java code ;-) and the DSpace Java API to process the request. You'll find most of it in the dspace-api module (see [dspace-source]/dspace-api/src/main/java/...) and some smaller parts in the dspace-services module ([dspace-source]/dspace-services/src/main/java/...). Within the DSpace Java API there are two important classes if you're interested in the communication with the database:
One is org.dspace.core.Context. The context contains information about whether and which user is logged in, an initialized and connected database connection (if all went well) and a cache. The methods Context.abort(), Context.commit() and Context.complete() are used to manage the database transaction. That is the reason why almost all methods that manipulate the database request a Context as a method parameter: it controls the database connection and the database transaction.
The other one is org.dspace.storage.rdbms.DatabaseManager. The DatabaseManager is used to handle database queries, updates, deletes and so on. All DSpaceObjects contain a TableRow object which holds the information of the object as stored in the database. Inside the DSpaceObject classes (e.g. org.dspace.content.Item, org.dspace.content.Collection, ...) the TableRow may be manipulated and the changes stored back to the database by using DatabaseManager.update(Context, DSpaceObject). The DatabaseManager provides several methods to send SQL queries to the database and to update, delete, insert or even create data in the database. Just take a look at its API, or search for "SELECT" in the DSpace source to get an example.
In the JSPUI it is important to call Context.commit() if you want to commit the database state. If a request is processed and Context.commit() was not called, the transaction will be aborted and the changes get lost. If you call Context.complete() the transaction will be committed, the database connection will be freed and the context is marked as finished. After you have called Context.complete() the context cannot be used for a database connection any more.
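Put together, the typical pattern looks roughly like this (a sketch against the pre-6.x API described above; exact method signatures may differ between DSpace versions):

    Context context = new Context();
    try {
        // Query the database; TableRow objects carry the raw rows
        TableRowIterator rows = DatabaseManager.query(context,
                "SELECT * FROM item WHERE in_archive = true");
        while (rows.hasNext()) {
            TableRow row = rows.next();
            // ... read or change columns, then write the changes back:
            DatabaseManager.update(context, row);
        }
        // Commit the transaction, free the connection, finish the context
        context.complete();
    } catch (SQLException e) {
        // Abort the transaction; all uncommitted changes are lost
        context.abort();
    }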
DSpace is quite a huge project and a lot more could be written about its ORM, the initialization of the database and so on. But this should already help you to start developing for DSpace. I would recommend that you read the "Architecture" part of the DSpace manual: https://wiki.duraspace.org/display/DSDOC5x/Architecture
If you have more specific questions you are always invited to ask them here on Stack Overflow or on our mailing lists (http://sourceforge.net/p/dspace/mailman/): dspace-tech (for any question about DSpace) and dspace-devel (for questions regarding the development of DSpace).
It depends on the version of DSpace you are running, along with your configuration.
In DSpace 4.0 or above, by default, the DSpace JSPUI uses Apache Solr for all searching and browsing. DSpace performs all indexing and querying of Solr via its Discovery module. The Discovery (Solr) based search/indexing classes are available under the "org.dspace.discovery" package.
In earlier versions of DSpace (3.x or below), by default, the DSpace JSPUI uses Apache Lucene directly. In these older versions, DSpace called Lucene directly for all indexing and searching. The Lucene based search/indexing classes are available under the "org.dspace.search" package.
In both situations, queries are passed directly to either Solr or Lucene (again depending on the version of DSpace). The results are parsed and displayed within the DSpace UI.

How to inspect every query going to DB from Zend Framework

I have a complex reporting application that allows clients to login and view reports for their client data. There are several sections of the application where there are database calls, using various controllers. I need to make sure that client A doesn't get client B's information via header manipulation.
The system authenticates users and assigns them a clientID and a roleID. If your roleID > 1, that means you work for the company hosting the data, and you can see all client info. I want to create a catch-all that basically works like this:
if ($roleID > 1) {
    // ...send query to database
} else {
    if (/* ...does this query select a record with a clientID other than my $auth->clientID? */) {
        // do not execute query
    } else {
        // execute query
    }
}
The problem is, I want this to run for every query that goes to the server. How can I place this code as a "roadblock" between the application and the DB? I already use Zend_Profiler to look at queries, so I know it is somehow possible, but I cannot discern this from the Profiler code...
I can always write an authentication function and pass selected queries that way, but this catch-all would be easier to implement across all of the calls and would be future-proof. Any help is appreciated.
It's an application design fault.
You should use a 'service architecture': the only entry point for queries would be a service, with all checks inside it.
If this is something you want to run on every query, I'd suggest extending Zend_Db_Select and overriding either the query() or assemble() function to add in your logic. You'll also want to add a way for it to be aware of your $auth object.
Another option is to extend your database adapter so you can intercept the queries directly. IMO, you should try to do this at the application level though.
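A minimal PHP sketch of the first approach (the subclass name, the static $auth hookup and the clientID column are assumptions for illustration; only Zend_Db_Select, where() and assemble() come from Zend itself):

    class My_Db_Select extends Zend_Db_Select
    {
        /** Set once at bootstrap to your auth object (assumption) */
        public static $auth;

        private $scoped = false;

        public function assemble()
        {
            // Force every query from non-staff users to be scoped to their own client.
            // Guard flag avoids adding the condition twice if assemble() runs again.
            if (!$this->scoped && self::$auth !== null && self::$auth->roleID <= 1) {
                $this->where('clientID = ?', self::$auth->clientID);
                $this->scoped = true;
            }
            return parent::assemble();
        }
    }

You would then have your models build their selects with My_Db_Select (e.g. by overriding the adapter's select() factory method) so the check runs everywhere. Note this simple version assumes every queried table has a clientID column.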
Depending on your database server, you can put a trace on the DB side.
Here's an example for Oracle:
http://orafaq.com/wiki/SQL_Trace
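For instance, a session-level trace in Oracle can be switched on with plain SQL (details on finding and reading the resulting trace file are in the link above):

    ALTER SESSION SET sql_trace = TRUE;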