HSQLDB and in-memory files - persistence

Is it possible to set up HSQLDB so that the files with the db information are written into memory instead of using actual files? I want to use HSQLDB to export some data structures together with Hibernate mappings. It is, however, not possible to write temporary files, so I need to generate the files in memory and return a stream with their contents as a response.
Setting HSQLDB to use nio does not seem to be a solution, because there is no way to get hold of those files before they are written to the filesystem.
What I'm thinking of is a protocol handler for HSQLDB, but I haven't found a suitable solution yet.
To put it another way: a hack solution would be to pass HSQLDB a stream or several streams. During its operation it would then write data into those streams. After all data is written, the user of the db could use those streams to send the contents back over the network.

Yes, of course, we use it all the time for integration testing.
Use a URL like jdbc:hsqldb:mem:aname; see the HSQLDB documentation for more details.
DbUnit offers a handy database dump method as part of its package:
// database connection ("jdbc:hsqldb:mem:aname" gives a purely in-memory db)
Class driverClass = Class.forName("org.hsqldb.jdbcDriver");
Connection jdbcConnection = DriverManager.getConnection(
        "jdbc:hsqldb:sample", "sa", "");
IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);

// full database export; write() accepts any OutputStream, so a
// ByteArrayOutputStream would keep the export entirely in memory
IDataSet fullDataSet = connection.createDataSet();
FlatXmlDataSet.write(fullDataSet, new FileOutputStream("full.xml"));
See the DbUnit FAQ for more details. Of course there are routines to restore the data, as that is actually the purpose of the package: preparing a test database for integration testing. Usually we do this with an annotation, but you'll have to use the API for that.
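Putting the two together, here is a minimal sketch (assuming DbUnit is on the classpath; the class and method names are mine) that dumps a purely in-memory HSQLDB instance into a ByteArrayOutputStream, so nothing ever touches the filesystem and the result can be handed back as a response stream:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.xml.FlatXmlDataSet;

public class InMemoryExport {
    public static InputStream exportDatabase() throws Exception {
        // purely in-memory HSQLDB instance; no files are created
        Connection jdbc = DriverManager.getConnection(
                "jdbc:hsqldb:mem:aname", "sa", "");
        IDatabaseConnection connection = new DatabaseConnection(jdbc);

        // dump the full database into a byte buffer instead of a file
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        FlatXmlDataSet.write(connection.createDataSet(), buffer);
        connection.close();

        // a stream the caller can send back over the network
        return new ByteArrayInputStream(buffer.toByteArray());
    }
}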

Connect to a DBIx::Class database without repeating the connection details?

DBIx::Class::Manual::Intro suggests connecting to the database as follows:
my $schema = MyApp::Schema->connect(...)
explicitly providing connection details such as the password.
I want to connect to the same database from multiple different scripts, and it would be unwise to code the same connection parameters into each of the programs separately.
What is the "official" way to create a connection method with fixed connection details?
I realize that I can write something like this:
package MyApp::Schema;
use strict;
use warnings;
use base qw/DBIx::Class::Schema/;

sub my_connect {
    my $class = shift;
    return $class->SUPER::connect(...);
}
1;
Is this approach recommended?
I realize that providing different connection details may be useful for testing scripts, but in reality we do not yet use testing scripts, so this is currently irrelevant for our team.
Put your connection details in a config file and create a utility that reads the config and returns the connection, either like you showed or as a factory-type function. Make the config dependent on the environment and you'll get testing capabilities for free.

How does DSpace process a query in JSPUI?

How is a query processed in DSpace, and how is data managed between the front end and PostgreSQL?
Like every other webapp running in a servlet container such as Tomcat, the file WEB-INF/web.xml controls how a query is processed. In the case of DSpace's JSPUI you'll find this file in [dspace-install]/webapps/jspui/WEB-INF/web.xml. The JSPUI defines several filters, listeners and servlets to process a request.
The filters are used to report that the JSPUI is running, to ensure that restricted areas can be seen by authenticated users (or by authenticated administrators only), and to handle content negotiation.
The listeners ensure that DSpace has started correctly. During its start DSpace loads the configuration, opens the database connections it uses in a connection pool, lets Spring do its IoC magic and so on.
To begin with, the most important parts for seeing how a query is processed are the servlets and the servlet-mappings. A servlet-mapping defines which servlet is used to process a request with a specific request path: e.g. all requests to example.com/dspace-jspui/handle/* will be processed by org.dspace.app.webui.servlet.HandleServlet, while all requests to example.com/dspace-jspui/submit will be processed by org.dspace.app.webui.servlet.SubmissionController.
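An abbreviated sketch of what such an entry looks like in web.xml (the servlet-name here is illustrative; check your own web.xml for the exact entries):

<servlet>
    <servlet-name>handle</servlet-name>
    <servlet-class>org.dspace.app.webui.servlet.HandleServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>handle</servlet-name>
    <url-pattern>/handle/*</url-pattern>
</servlet-mapping>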
The servlets use their Java code ;-) and the DSpace Java API to process the request. You'll find most of it in the dspace-api module (see [dspace-source]/dspace-api/src/main/java/...) and a smaller part in the dspace-services module ([dspace-source]/dspace-services/src/main/java/...). Within the DSpace Java API there are two important classes if you're interested in the communication with the database:
One is org.dspace.core.Context. The context contains information about whether and which user is logged in, an initialized and connected database connection (if all went well) and a cache. The methods Context.abort(), Context.commit() and Context.complete() are used to manage the database transaction. That is the reason why almost all methods that manipulate the database request a Context as a method parameter: it controls the database connection and the database transaction.
The other one is org.dspace.storage.rdbms.DatabaseManager. The DatabaseManager is used to handle database queries, updates, deletes and so on. Every DSpaceObject contains a TableRow object which holds the information of the object as stored in the database. Inside the DSpaceObject classes (e.g. org.dspace.content.Item, org.dspace.content.Collection, ...) the TableRow may be manipulated and the changes stored back to the database using DatabaseManager.update(Context, DSpaceObject). The DatabaseManager provides several methods to send SQL queries to the database and to update, delete, insert or even create data in the database. Just take a look at its API or search for "SELECT" in the DSpace source to find an example.
In the JSPUI it is important to call Context.commit() if you want to commit the database state. If a request is processed and Context.commit() was not called, the transaction will be aborted and the changes get lost. If you call Context.complete() the transaction will be committed, the database connection will be freed and the context is marked as finished. After you have called Context.complete() the context cannot be used for a database connection any more.
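A minimal sketch of how the two classes work together (this assumes the pre-DSpace-6 DatabaseManager API described above; the SQL and class name are only illustrative):

import org.dspace.core.Context;
import org.dspace.storage.rdbms.DatabaseManager;
import org.dspace.storage.rdbms.TableRow;

public class QueryExample {
    public static void main(String[] args) throws Exception {
        Context context = new Context();
        try {
            // the Context supplies the pooled database connection
            TableRow row = DatabaseManager.querySingle(context,
                    "SELECT count(*) AS cnt FROM item");
            System.out.println("items: " + row.getLongColumn("cnt"));

            // commit the transaction, free the connection, finish the context
            context.complete();
        } catch (Exception e) {
            // abort rolls the transaction back and the changes get lost
            context.abort();
            throw e;
        }
    }
}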
DSpace is quite a huge project and a lot more could be written about its ORM, the initialization of the database and so on. But this should already help you start developing for DSpace. I would recommend you read the "Architecture" part of the DSpace manual: https://wiki.duraspace.org/display/DSDOC5x/Architecture
If you have more specific questions you are always invited to ask them here on Stack Overflow or on our mailing lists (http://sourceforge.net/p/dspace/mailman/): dspace-tech (for any question about DSpace) and dspace-devel (for questions regarding the development of DSpace).
It depends on the version of DSpace you are running, along with your configuration.
In DSpace 4.0 or above, by default, the DSpace JSPUI uses Apache Solr for all searching and browsing. DSpace performs all indexing and querying of Solr via its Discovery module. The Discovery (Solr-based) search/indexing classes are available under the "org.dspace.discovery" package.
In earlier versions of DSpace (3.x or below), by default, the DSpace JSPUI calls Apache Lucene directly for all indexing and searching. The Lucene-based search/indexing classes are available under the "org.dspace.search" package.
In both situations, queries are passed directly to either Solr or Lucene (again depending on the version of DSpace). The results are parsed and displayed within the DSpace UI.

What kind of int storage is this?

We have a Firebird database for a (very crappy) application, and the app's front end, but nothing in between (i.e. no source code).
There is a field in the database that is stored as -2086008209 but in the front end is represented as 63997.
Examples:
Database      Front-End
  758038959       44093
 1532056691       61409
   28401112       65866
 -712038758       40712
  936488434       43872
 -688079579       48567
 1796491935       39437
 1178382500       30006
 1419373703       66069
 1996421588       48454
  890825339       46313
 -820234748       45206
What kind of storage is this? The aim for us here is to access the application's back-end data and bypass the front-end GUI altogether, so I need to know how to decode this field in order to get appropriate values from it. It is stored as an int in Firebird (I don't know if Firebird has signed/unsigned ints, but this is showing as signed when we select it).
It is not, as far as I can tell, de-normalised. The generator GEN_CONTACTS_ID has 66241 against it, which at a glance looks accurate.
I work with an application that stores bitmaps in integers (just don't ask). If you express these values in that form, do you get something useful or consistent?
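For instance, a quick throwaway check of that theory (the value pairs are copied from the table in the question) is to print both numbers in binary and look for the displayed value embedded in a fixed range of bits:

public class BitPatterns {
    public static void main(String[] args) {
        // raw database value vs. front-end display, from the question
        int[][] pairs = {
            {  758038959, 44093 },
            { 1532056691, 61409 },
            { -712038758, 40712 },
        };
        for (int[] p : pairs) {
            // two's-complement binary of both values, right-aligned,
            // to spot a shared or shifted bit pattern
            System.out.printf("%32s -> %17s%n",
                    Integer.toBinaryString(p[0]),
                    Integer.toBinaryString(p[1]));
        }
    }
}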
My impression is that the problem is in the front end. If what is stored in the DB is -2086008209, then what is stored in the DB is -2086008209. To understand better how the application is manipulating the data, try storing other numbers in the DB and see how they are displayed.
Did you come to this realization by logging SQL? If you haven't, you may serve yourself well by using the Firebird Trace API to capture that SQL: http://www.firebirdfaq.org/faq95/. An easier tool for parsing the Trace API output is this commercial product: http://www.upscene.com/products.fbtm.index.php.
I've used these tools and other techniques (triggers etc.) to find what an application is using/changing in the database.
Of course, if the SQL statement is select * from table, then these tools would not help much.

Read a SQL Server database using Mirth Connect, convert it into XML format and vice versa

I have a requirement where I have to read data from a local SQL Server database and first map it to an XML file provided by a third-party organization, which has its own database. Then, once I have a proper mapping of fields, I have to transform the data from the SQL Server database to XML format and vice versa.
So far, I am able to connect to the SQL Server database in Mirth Connect; however, I don't know what steps are required in channels and transformers to carry out the task of reading the data, mapping the corresponding fields to the XML format provided by the third party, and finally writing to the provided XML file, and vice versa.
In short, if I can get details of creating such a channel in Mirth Connect, where I can read the SQL Server database and map the fields to the corresponding XML file, I guess I can write to it. The same applies if I go from the XML format to the SQL Server database. Can someone tell me how to accomplish this?
For database field mapping, what's the best way to map fields between two entirely different databases? Is there any tool which can help?
Also, once the task of transforming the data from one end to the other is accomplished, is there any way of validating in Mirth Connect that the data was correctly moved from one to the other?
If you want to process one row at a time, the normal Database Reader will work fine; just set the data type under Summary to XML for all steps. Set a destination of Channel Writer to nowhere and run it once to see what it does in the Dashboard. You can copy and paste that as an example into your message template so you can map variables.
If you want to work with an entire result set at one time in the Transformer steps, I find it easier to create a custom reader and append "FOR XML RAW, ELEMENTS" to the end of my Microsoft SQL query.
Something like:
// build connection
var dbConn = DatabaseConnectionFactory.createDatabaseConnection(
    'com.microsoft.sqlserver.jdbc.SQLServerDriver',
    'jdbc:sqlserver://servername:1433;databaseName=dbname;integratedSecurity=true;',
    '', ''); // this uses the MS JDBC driver and auth dll

// query results with XML output from the server's 'FOR XML' statement at the end
var result = dbConn.executeCachedQuery("SELECT col1 AS FirstColumn, col2 AS SecondColumn FROM [dbname].[dbo].[table1] WHERE [processed] = 'False' FOR XML RAW, ELEMENTS");

// make sure we are at the top of the results
result.beforeFirst();

// wrap the XML; a namespace etc. is not required
var XMLresult = '<message>';

// the XML is broken up across several rows in one column; re-combine it
while (result.next()) {
    XMLresult += result.getString(1);
}
XMLresult += '</message>';

dbConn.close();
return XMLresult;

How to receive data from different data sources and transmit it through JMS

The project is required to receive a lot of data (possibly the historical weather data of one state) from different data sources, like zip files or data files on a website. The data format is not fixed; the files might be .txt, .pdf, or .xml.
Since it is specified that JMS and JPA should be used for the implementation, I am thinking of using a JMS ObjectMessage to transfer the data to the application server. The advantage of ObjectMessage is that it can carry the data as an object, so I can store it as a persistent object in memory and use JPA to access it for later querying.
I am looking for a simple and extendable way to do this with JMS and JPA. The data source, data format, and data size might change in the future.
Create a client (a sample test class) which reads data from the FSL/INGRA file, creates text JMS messages and sends them to your server.
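A minimal sketch of such a client (this assumes an ActiveMQ broker at its default URL and a made-up queue name weather.data; swap in whatever JMS provider and destination your server actually uses):

import java.nio.file.Files;
import java.nio.file.Paths;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class WeatherDataSender {
    public static void main(String[] args) throws Exception {
        // hypothetical broker URL and queue name; adjust for your setup
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("weather.data");
            MessageProducer producer = session.createProducer(queue);

            // read the source file and send its contents as a TextMessage
            String payload = new String(Files.readAllBytes(Paths.get(args[0])));
            TextMessage message = session.createTextMessage(payload);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}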