Transaction for database query - Rhodes

I have been using Rhom to store user data, but if the app hits an error while inserting (for some parsing reason, say), the partial data still gets inserted into the db.
Do we have any database transaction concept for Rhom database queries?

Fortunately, yes, Rhodes does, as shown below:
db = ::Rho::RHO.get_src_db("Model")
db.start_transaction
begin
  # your logic goes here
  db.commit   # commit only if every statement succeeded
rescue
  db.rollback # on any error, undo everything since start_transaction
end

Related

Get transaction ID from Doctrine

I want to get the transaction ID of the current running transaction.
Here is my code:
$con = $entityManager->getConnection();
$con->beginTransaction();
$entity = new Entity(); .....
$entityManager->persist($entity);
$entityManager->flush();
$con->commit();
I can't find any method to get the ID... Only running native SQL can solve this, but I don't think that's the proper way.
I'm assuming you're using the default settings of Doctrine, so it will use PHP PDO underneath. It looks like PDO has no ability to resolve the transaction ID - maybe because it's different for each DBMS, so it's not ANSI SQL.
Take a look at the PDO::beginTransaction() documentation: it returns just a boolean. There is also no other function to retrieve the ID.
You have to execute raw SQL, which may not be that bad. I know many people think that an ORM/DBAL will let them change the DB engine in the future, but - from my experience, YMMV - I have always ended up relying on some engine-specific behaviour. Even running SQLite for testing instead of MySQL failed at some point because of small differences in how nulls and default values are handled.
To fetch the transaction ID in PostgreSQL:
$con = $entityManager->getConnection();
$query = $con->executeQuery('SELECT txid_current()');
$transactionId = $query->fetchOne();

What could cause Firebird to silently turn calculated fields into "normal" fields?

I'm using Firebird 2.5.8 to store information for software I designed.
A customer contacted me today to report multiple errors that I couldn't understand, so I used the "IBExpert" tool to inspect their database.
To my surprise, all the calculated fields had been transformed into "standard" fields. This is clearly visible in the "DDL" tab of the database tool, which displays table definitions as SQL code.
For instance, the following table definition:
CREATE TABLE TVERSIONS (
...
PARENTPATH COMPUTED BY (((SELECT TFILES.FILEPATH FROM TFILES WHERE ID = TVERSIONS.FILEID))),
....
ISCOMPLETE COMPUTED BY ((((SELECT TBACKUPVERSIONS.ISCOMPLETE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION)))),
CDATE COMPUTED BY (((SELECT TBACKUPVERSIONS.SERVERSTARTDATE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION))),
DDATE COMPUTED BY (((SELECT TBACKUPVERSIONS.SERVERSTARTDATE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.DVERSION))),
...
);
has been "changed" in the client database into this:
CREATE TABLE TVERSIONS (
...
PARENTPATH VARCHAR(512) CHARACTER SET UTF8 COLLATE UNICODE,
...
ISCOMPLETE SMALLINT,
CDATE TIMESTAMP,
DDATE TIMESTAMP,
...
);
How can such a thing be possible?
I've been using Firebird for more than 10 years, and I've never seen such behavior until now. Is it possible that it's a corruption of the RDB$FIELDS.RDB$COMPUTED_SOURCE fields?
What would you advise?
To summarize the discussion on firebird-support (and comments above):
The likely cause of this happening is that the database was backed up and restored using gbak, and the restore did not complete successfully. If this happens, gbak will have ended in an error, and the database is in single shutdown state (which means only SYSDBA or the database owner is allowed to create one connection). If the database is not currently in single shutdown mode, someone used gfix to bring the database online again in normal state.
When a database is restored using gbak, calculated fields are initially created as normal fields (though their values are not part of the backup). After data is restored successfully, those fields are altered to be calculated fields. If there are any errors before or during redefinition of the calculated fields, the restore will fail, and the database will be in single shutdown state, and the calculated fields will still be "normal" fields.
I recommend doing a structural comparison of the database to check if calculated fields are the only problem, or if other things (e.g. constraints) are missing. A simple way to do this is to export the DDL of the database and a "known-good" database, for example using ISQL (command line option -extract), and comparing them with a diff tool.
Then either fix the existing database by executing the necessary DDL to restore calculated fields (and other things), or create a new empty database, and move the data from the old to the new (using a datapump tool).
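For illustration, here is a minimal sketch of that DDL fix using Java and the Jaybird JDBC driver (the connection string, credentials and column are placeholders; Firebird does not allow altering a regular column into a computed one, so the damaged column is dropped and re-added):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RestoreComputedFields {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; Jaybird is assumed to be on the classpath.
        try (Connection con = DriverManager.getConnection(
                "jdbc:firebirdsql://localhost:3050/customer.fdb", "SYSDBA", "masterkey");
             Statement st = con.createStatement()) {
            // The damaged column now holds plain data, so drop it first...
            st.executeUpdate("ALTER TABLE TVERSIONS DROP ISCOMPLETE");
            // ...then re-add it with its original COMPUTED BY definition.
            st.executeUpdate("ALTER TABLE TVERSIONS ADD ISCOMPLETE COMPUTED BY "
                    + "((SELECT TBACKUPVERSIONS.ISCOMPLETE FROM TBACKUPVERSIONS "
                    + "WHERE ID = TVERSIONS.CVERSION))");
        }
    }
}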
Also check if any data is missing. By default, gbak restores the data in a single transaction, so in that case either all data is present or all data is missing. However, gbak also has a "transaction-per-table" mode (-ONE_AT_A_TIME or -O), which could mean some tables have data, and others have no data.

How does read-through work in Ignite?

My cache is empty, so SQL queries return null.
Read-through means that on a cache miss, Ignite will automatically go down to the underlying db (or persistent store) to load the corresponding data.
If new data is inserted into the underlying db table, do I have to bring down the cache server to load the newly inserted data from the db table, or will it sync automatically?
Does it work the same as Spring's @Cacheable, or does it work differently?
It looks to me like the answer is no. Cache SQL queries don't work since there is no data in the cache, but when I tried cache.get I got the following results:
case 1:
System.out.println("data == " + cache.get(new PersonKey("Manish", "Singh")).getPhones());
result ==> data == 1235
case 2:
PersonKey per = new PersonKey();
per.setFirstname("Manish");
System.out.println("data == " + cache.get(per).getPhones());
throws the following error (posted only as screenshots; the stack trace shows Cassandra rejecting a null lastname)
Read-through semantics can be applied when there is a known set of keys to read. This is not the case with SQL, so if your data is in an arbitrary 3rd-party store (RDBMS, Cassandra, HBase, ...), you have to preload the data into memory prior to running queries.
However, Ignite provides native persistence storage [1], which eliminates this limitation. It allows you to use any Ignite API without having anything in memory, and this includes SQL queries as well. Data will be fetched into memory on demand while you're using it.
[1] https://apacheignite.readme.io/docs/distributed-persistent-store
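As a rough sketch of enabling it (API names as of Ignite 2.x; treat the details as an assumption):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistentNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Turn on native persistence for the default data region.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);
        ignite.cluster().active(true); // persistent clusters start deactivated
    }
}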
When you insert something into the database and it is not in the cache yet, then get operations will retrieve the missing values from the DB, provided readThrough is enabled and a CacheStore is configured.
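A minimal configuration sketch (PersonKey and Person are the types from the question; PersonStore is a hypothetical CacheStore implementation backed by your db):

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<PersonKey, Person> ccfg = new CacheConfiguration<>("personCache");
ccfg.setReadThrough(true); // cache misses on get() fall through to the CacheStore
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));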
But currently it doesn't work this way for SQL queries executed on the cache. You should call loadCache first; then the values will appear in the cache and will be available for SQL.
When you perform your second get, the exact combination of firstname and lastname is sought in the DB. It is converted into a CQL query containing a lastname=null condition, and it fails because lastname cannot be null.
UPD:
To get all records that have firstname column equal to 'Manish' you can first do loadCache with an appropriate predicate and then run an SQL query on cache.
// Load all entries whose firstname is 'Manish' from the store into the cache.
cache.loadCache((k, v) -> v.firstname.equals("Manish"));
// Now the SQL query can see the loaded entries.
SqlFieldsQuery qry = new SqlFieldsQuery("select firstname, lastname from Person where firstname='Manish'");
try (FieldsQueryCursor<List<?>> cursor = cache.query(qry)) {
    for (List<?> row : cursor)
        System.out.println("firstname: " + row.get(0) + ", lastname: " + row.get(1));
}
Note that loadCache is a heavy operation that requires running over all records in the DB, so it shouldn't be called too often. You can provide null as the predicate; then all records will be loaded from the database.
Also, to make SQL run fast on the cache, you should mark the firstname field as indexed in the QueryEntity configuration.
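For example (continuing the hypothetical ccfg from the sketch above):

import java.util.Collections;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;

QueryEntity entity = new QueryEntity(PersonKey.class, Person.class);
entity.setIndexes(Collections.singletonList(new QueryIndex("firstname"))); // speeds up where firstname=...
ccfg.setQueryEntities(Collections.singletonList(entity));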
In your case 2, have you tried specifying lastname as well? By your stack trace it's evident that Cassandra expects it to be not null.

Spring: store data in JdbcTemplate (H2 db) permanently

I am starting to learn Spring and ran into some issues with spring-jdbc.
First, I tried running the example from this guide: https://spring.io/guides/gs/relational-data-access/ and it worked. Then I commented out the lines that drop and re-create the tables (http://pastebin.com/zcJHsL1P) so as not to overwrite the data, but just read it from the db and show it. However, Spring showed me this error:
Table "CUSTOMERS" not found; SQL statement: ...
So, my question is: what should I do to store my database permanently? I don't want to create a new database every time; I want to create it once and keep updating it.
P.S. I used the H2 database. Maybe the problem lies in this db?
That piece of code looks like you are "prototyping" something, so it's easier to automatically create a new database (schema, tables, data) on the fly, execute and/or test whatever you want to... and finish the execution.
If you want to persist your data and only modify/update it, either use H2 with the "file layout" or use MySQL, PostgreSQL, etcetera.
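For example, with plain spring-jdbc you could point the DataSource at a file-based H2 URL; the path below is only an illustration:

import org.springframework.jdbc.datasource.DriverManagerDataSource;

// A jdbc:h2:file: URL stores the database on disk, so it survives restarts.
DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName("org.h2.Driver");
dataSource.setUrl("jdbc:h2:file:./data/customers"); // hypothetical location
dataSource.setUsername("sa");
dataSource.setPassword("");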
By the way, the reason you are getting Table "CUSTOMERS" not found; SQL statement: ... is that you are using H2 as an in-memory database, so every time you start your application you need to re-create the tables and populate them with data.

JPA: How to call a stored procedure

I have a stored procedure in my project under sql/my_prod.sql;
there I have my function delete_entity.
In my entity
@NamedNativeQuery(name = "delete_entity_prod",
query = "{call /sql/delete_entity(:lineId)}")
and I call it
Query query = entityManager.createNamedQuery("delete_entity_prod")
    .setParameter("lineId", lineId);
I followed this example: http://objectopia.com/2009/06/26/calling-stored-procedures-in-jpa/
but it does not execute the delete and it does not send any error.
I haven't found clear information about this. Am I missing something? Maybe I need to load my_prod.sql first? But how?
JPA 2.1 standardized stored procedure support, if you are able to use it; there are examples here: http://en.wikibooks.org/wiki/Java_Persistence/Advanced_Topics#Stored_Procedures
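A minimal sketch of the JPA 2.1 route (it assumes the procedure has already been created in the database under the name delete_entity, and that lineId is numeric):

import javax.persistence.ParameterMode;
import javax.persistence.StoredProcedureQuery;

StoredProcedureQuery spq = entityManager.createStoredProcedureQuery("delete_entity");
spq.registerStoredProcedureParameter("lineId", Long.class, ParameterMode.IN);
spq.setParameter("lineId", lineId);
spq.execute(); // actually runs the procedure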
This is actually the way you create the query:
Query query = entityManager.createNamedQuery("delete_entity_prod")
    .setParameter("lineId", lineId);
To call it you must execute:
query.executeUpdate();
Of course, the DB must already contain the procedure. So if you have it defined in your SQL file, have a look at Executing SQL Statements from a Text File (this is for MySQL, but other database systems use a similar approach to execute scripts).
There is no error shown because the query is never executed - just an instance of Query is created. The query can be executed by calling executeUpdate:
query.executeUpdate();
Then the next problem will arise: writing stored procedures to a file is not enough - procedures live in the database, not in files. So the next thing to do is to check that you have a correct script for creating the stored procedure at hand (maybe that is currently the content of sql/my_prod.sql) and then use it to create the procedure via a database client.
Not all JPA implementations support calling stored procedures, but I assume Hibernate is used under the hood, because it is also used in the linked tutorial.
It may be the case that the current
{call /sql/delete_entity(:lineId)}
is the right syntax for calling a stored procedure in your database, but it looks rather suspicious because of the /sql/ part. If it turns out that this is incorrect syntax, then:
Consult the manual for the correct syntax
Test it via a database client
Use it as the value of the query attribute in the NamedNativeQuery annotation
All of that, for the MySQL+Hibernate combination, is explained for example here.
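For instance, with MySQL the mapping would typically look something like this (a hedged sketch; it assumes the procedure exists in the database itself, and note that named parameters in native queries are a Hibernate extension, so check your provider):

@NamedNativeQuery(name = "delete_entity_prod",
                  query = "{call delete_entity(:lineId)}")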