(Full disclosure: while I'm a fairly experienced .NET dev, T-SQL is still new to me.)
I'm trying to implement tSQLt on a fairly massive DB, with 10 years of data, and I'm running into roadblock after roadblock.
First, it was SchemaBinding that was the problem, which led me here: http://tech.4pi.si/2015/01/tsqlt-faketable-fails-with-error-cannot.html
Now that that's in place, the error I'm getting is:
(Error) Cannot drop the index 'indexName', because it does not exist or you do not have permission.
I have confirmed that the index does exist, and I'm fairly certain I have permissions on my local instance, which next led me to think it might be something similar to this: An explicit DROP INDEX is not allowed even when the constraint is dropped
Do any of you know of a solution for faking Schema-bound tables? Is it possible to make this work? Am I just SOL?
Thanks.
I was recently working on a Perl project that required me to use DBIx::Class as the ORM to interact with a database. One of the things I found most annoying and time-consuming was trying to debug and understand what was actually happening.
I was especially frustrated with an error I was getting, Column 'XXXXXX' in where clause is ambiguous, and I eventually figured out what was causing it. It came down to the fact that I was requesting columns from two different tables which were joined on the XXXXXX attribute, and in the WHERE clause the column wasn't being aliased. This led to DBIx::Class not knowing which column to use.
The most frustrating thing was not knowing what DBIx::Class was doing, leading me to have many doubts about where the error was coming from.
How can I efficiently debug this kind of DBIx::Class error?
You enable debugging by setting the DBIC_TRACE environment variable to 1 or a filename.
That's documented at the very top of DBIx::Class::Manual::Troubleshooting.
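For example, a minimal sketch (the schema class and connection details here are placeholders, not from the original post):

```perl
# Turn on query tracing before the schema does any work.
# Equivalent from the shell: DBIC_TRACE=1 perl myscript.pl
BEGIN { $ENV{DBIC_TRACE} = 1 }                   # trace to STDERR
# BEGIN { $ENV{DBIC_TRACE} = '1=/tmp/dbic.log' } # or send the trace to a file

use My::Schema;    # hypothetical DBIx::Class schema class

my $schema = My::Schema->connect('dbi:mysql:mydb', 'user', 'pass');

# The same switch is available at runtime on the storage object:
$schema->storage->debug(1);
```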
So I knew what the error was, but I didn't know exactly where it was coming from. If this were plain old SQL I would've simply added the aliases myself, but I didn't know how to do that in DBIx::Class. Moreover, I had no clue what SQL query was actually being executed, which made things even worse.
That's when I found out about the as_query method (https://metacpan.org/pod/DBIx::Class::ResultSet#as_query) which, when logged, prints out the SQL query that DBIx::Class executes. LIFE CHANGER. Saved me from so much trouble and gave me exactly what I needed. Shortly after I was able to solve the issue.
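As a rough illustration (the resultset and column names are invented for illustration), dumping as_query also shows the fix for the ambiguity itself: qualify the column with its alias, me being the default alias for the resultset's own table:

```perl
use Data::Dumper;

my $rs = $schema->resultset('Order')->search(
    # Prefix the column with an alias ('me' is the default alias for the
    # resultset's own table) so the generated WHERE clause is unambiguous
    # after the join.
    { 'me.customer_id' => 42 },
    { join => 'customer' },
);

# as_query returns a reference to an arrayref: [ $sql_with_placeholders, @bind_values ]
my ($sql, @bind) = @{ ${ $rs->as_query } };
warn Dumper($sql, \@bind);
```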
Moral of the story: if you're having trouble seeing what DBIx::Class is doing, use this and save yourself from countless headaches.
I've been working on a module that works pretty well with MySQL, but when I run the unit tests under PostgreSQL (using Travis) I get an error.
The module itself is here: https://github.com/silvercommerce/taxable-currency
An example failed build is here: https://travis-ci.org/silvercommerce/taxable-currency/jobs/546838724
I don't have a huge amount of experience with PostgreSQL, so I'm not really sure why this might be happening. The only thing I can think of that might be causing it is that I'm manually setting the IDs in my fixtures file, and maybe PostgreSQL doesn't support this?
If this is not the case, does anyone have an idea what might be causing this issue?
Edit: I have looked into this again, and the errors appear to be caused by this assertion, which should be finding the Tax Rate vat but instead finds the Tax Rate reduced.
I am guessing there is an issue in my logic that is causing the incorrect rate to be returned, though I am unsure why...
In the end it appears that Postgres has different default sorting from MySQL (https://www.postgresql.org/docs/9.1/queries-order.html). The line of interest is:
The actual order in that case will depend on the scan and join plan types and the order on disk, but it must not be relied on
In the end I didn't really need to test a list with multiple items, so instead I just removed the additional items.
If you are working on something that needs to support MySQL and Postgres though, you might need to consider defining a consistent sort order as part of your query.
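For example (table and column names invented for illustration), spelling the order out removes the dependence on each engine's default behaviour:

```sql
-- MySQL often happens to return rows in primary-key/insertion order,
-- while PostgreSQL returns whatever order the chosen plan produces.
-- An explicit ORDER BY makes both engines agree.
SELECT id, title, rate
FROM   tax_rate
ORDER  BY title ASC, id ASC;
```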
Firstly, please excuse my relative inexperience with Hibernate; I've only really used it in fairly standard cases, and certainly never in a scenario where I had to manage the primary keys (@Id) myself, which is where I believe my problem lies.
Outline: I'm bulk-loading Facebook profile information through FB's batch APIs and need to mirror this information in a local database. All of that is fine, but it runs into trouble when I attempt to do it in parallel.
Imagine a message queue processing batches of friend data in parallel, with lots of the same shared Likes and References between the friends; that's where my problem lies.
I run into repeated Hibernate ConstraintViolationExceptions caused by duplicate PK entries: one transaction tries to flush its session after determining an entity is transient, when in fact another transaction has already made the same determination and beaten the first to committing, resulting in the below:
Duplicate entry '121528734903' for key 'PRIMARY'
And so the ConstraintViolationException is raised.
I've managed to just about overcome this by removing all cascading from the parent entity, performing atomic writes of one record per transaction, and essentially just catching any exceptions and ignoring them when they occur, since I'd know another transaction had already done the job. But I'm not very happy with this solution and can't imagine it's the most efficient use of Hibernate.
I'd welcome any suggestions as to how I could improve the architecture…
Currently using: Hibernate 3.5.6 / Spring 3.1 / MySQL 5.1.30
Addendum: at the moment I'm using a Hibernate merge(), which initially checks for the existence of a row and will either merge (update) or insert depending on existence. The problem is that, even with an isolation level of READ_UNCOMMITTED, sometimes the wrong determination is made, i.e. two transactions decide the same thing, and I've got an exception again.
Locking doesn't really help me either, optimistic or pessimistic, as the condition is only a problem in the initial insert case and there's no row to lock, which makes it very difficult to handle the concurrency...
I must be missing something, but I've done the reading. My worry is that, not being able to leave Hibernate to manage the PKs, I'm kinda scuppered: it checks for existence too early in the session, and come time to synchronise, the session state is invalid.
Anyone with any suggestions for me? Thanks.
Take this with a large grain of salt as I know very little about Hibernate, but it sounds like what you need to do is specify that the default MySQL INSERT statement is instead made an INSERT IGNORE statement. You might want to take a look at @SQLInsert in Hibernate; I believe that's where you would need to specify the exact insert statement that should be used. I'm sorry I can't help with the syntax, but I think you can probably find what you need by looking at the Hibernate documentation for @SQLInsert and, if necessary, the MySQL documentation for INSERT IGNORE.
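For what it's worth, here's a rough, unverified sketch of the direction (the entity, table, and column names are invented, and the parameter order should be checked against the Hibernate documentation for custom SQL):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.SQLInsert;

// Hypothetical entity whose primary key is the Facebook-assigned id,
// managed by the application rather than a generator.
@Entity
@Table(name = "likes")
// The column order in the custom SQL must match the order in which Hibernate
// binds parameters (properties first, identifier last); confirm it against
// the insert statement Hibernate logs for this entity.
@SQLInsert(sql = "INSERT IGNORE INTO likes (name, id) VALUES (?, ?)")
public class Like {

    @Id
    private Long id;

    private String name;

    // getters and setters omitted for brevity
}
```

With INSERT IGNORE, MySQL silently skips the row that would violate the primary key, so whichever transaction loses the race simply inserts nothing instead of blowing up.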
I'm writing code that uses the node-mongodb-driver directly in Node.js. I've set up a collection in my database that uses my own _id space, where each unique document I have is guaranteed to have a unique _id. That said, because of the way items get added to the database, there's a good chance that the same item will be inserted into the collection more than once, which means trying to use the same _id more than once.
What I'm doing right now to avoid any problems is to call collection.findOne({_id: ID}) before inserting, to make sure that I don't try to insert docs that are already in the collection. However, since I'm adding lots of documents at a time and it needs to be asynchronous, I'm saving a large number of variables so that when findOne()'s callback is called, I can insert the right variable (if applicable).
I realized, however, that I could do away with saving variables if I just didn't bother to check whether a document already exists, and went ahead and inserted it anyway. If there is already a document in the collection with the same _id, I'll just end up getting an error saying that the _id already exists, and the code will keep running. Coding it like this would both decrease the running time of my software (fewer functions are called) and reduce the RAM it takes up (far fewer variables are saved).
However, I wanted to see if anybody thought there is any reason not to do this. When a function like insert() returns an error, is there anything bad happening, or that could be happening, that I might not be aware of?
Best, and thanks, Sami
Your basic idea is sound, and you're right that findOne() does not solve the concurrency problem. But there are some wrinkles.
is there anything bad that's happening or could be happening that I might not be aware?
The first problem is that the insert may not be failing because of a duplicate error. Maybe it's failing because the DB is down, or for some other reason. So ensure that you're checking the error reason and handling it appropriately.
Normally you don't want to throw lots of exceptions, as they tend to be expensive. So watch that you're not doing this duplicate insert too often.
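Something along these lines, for instance (the surrounding collection handling is assumed, and error code 11000 is MongoDB's duplicate-key error), separates the benign duplicate from a real failure:

```javascript
// Sketch against a recent node-mongodb-native driver; older callback-style
// versions expose the same err.code field on the error object.
async function insertIfNew(collection, doc) {
  try {
    await collection.insertOne(doc);
    return true;                  // newly inserted
  } catch (err) {
    if (err.code === 11000) {     // E11000: duplicate key (our _id already exists)
      return false;               // already present - safe to ignore
    }
    throw err;                    // anything else (DB down, etc.) is a real failure
  }
}
```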
The second problem is tied to the insert data.
If server 1 generates an insert and server 2 generates an insert for the same document, do they generate the same insert statement?
If the answer is yes, then you're probably doing the right thing.
If the answer is no, then you may want to look at the upsert command. This does not work for all cases, but it may work for yours.
Additionally, there's the findAndModify command. Instead of throwing an exception, it can return the modified object. It has a larger learning curve, but it may be the best option.
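Hedged sketches of both, assuming a reasonably recent driver (findOneAndUpdate is the driver-level wrapper around findAndModify):

```javascript
// Upsert: insert the document if its _id is new, otherwise update it in place.
const { _id, ...fields } = doc;
await collection.updateOne({ _id }, { $set: fields }, { upsert: true });

// findAndModify-style: the same upsert, but the resulting document is
// returned rather than an error being raised on duplicates.
const result = await collection.findOneAndUpdate(
  { _id },
  { $set: fields },
  { upsert: true, returnDocument: 'after' }
);
```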
I'm working on an iPhone application with a few data relationships (Author -> Books for example). When a user deletes an Author object from the application, I have a few SQLite triggers that run on the delete to remove any books from the database that have a foreign key matching the Author's primary key.
I'm also using a trigger to insert some data when a new item is created.
I can't shake the feeling that this might be bad design or lead to some problems down the road that I'm not thinking of. That said, should I rely on code in my app to handle propagating the deletes like this, when the database has the capability built in to handle it?
What say you?
True. Use the built-in capabilities of the database as much as possible. At least try to start off like that, and only compromise when things really demand it.
I would make use of the database's features to ensure relational integrity, especially with respect to updates/deletes. There are cases where I might use a trigger to insert some additional data (auditing comes to mind), though I would tend to avoid this and insert all of the data from my application. If you are doing multiple inserts, though, make sure to wrap them all in a single transaction so that you don't end up with a partial insert, which could lead to loss of relational integrity.
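For the multi-insert case, something like this (table names made up) keeps the writes all-or-nothing:

```sql
-- Either both rows are written or neither is, so a failure halfway through
-- can't leave an author row without its related data.
BEGIN TRANSACTION;
INSERT INTO author (id, name)             VALUES (1, 'A. Author');
INSERT INTO audit_log (author_id, action) VALUES (1, 'created');
COMMIT;
```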
I like the idea of using the database's built-in functionality (I'm not familiar with how it works), but I would worry: if I went back to the code a year from now, would I remember how it worked, given that the code isn't right in front of me?
I imagine that if you add plenty of comments to remind yourself how it works now, then if anything goes wrong in the future, at least you won't need to relearn the database features when you need to do some debugging.
You're a few steps ahead of me: I recently learned about how to do that stuff with triggers and I am tempted to use them myself.
Based on the other answers here, it seems like a philosophical choice. It would probably be fine to use either triggers or code, but it's best to be consistent: don't use triggers for cascading deletes on one table but then C code for another table.
Since you tagged the question iphone, I think the most important difference would be relative performance of C code versus a trigger. You'd probably have to code both and experiment to determine the difference, if any.
Another thing that comes to mind is that, of all the horror stories that I read on thedailywtf.com, about half of them seem to involve database triggers.
Unfortunately, SQLite does NOT support ON DELETE CASCADE and the like. From the SQLite documentation:
http://www.sqlite.org/omitted.html
FOREIGN KEY constraints are parsed but are not enforced. However, the equivalent constraint enforcement can be achieved using triggers. The SQLite source tree contains source code and documentation for a C program that will read an SQLite database, analyze the foreign key constraints, and generate appropriate triggers automatically.
There is some support for triggers but it is not complete. Missing subfeatures include FOR EACH STATEMENT triggers (currently all triggers must be FOR EACH ROW), INSTEAD OF triggers on tables (currently INSTEAD OF triggers are only allowed on views), and recursive triggers - triggers that trigger themselves.
Therefore, the only way to get ON DELETE CASCADE behaviour in SQLite is to code it with triggers.
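A minimal sketch of such a trigger, using the Author/Books relationship from the question (table and column names assumed):

```sql
-- Emulate ON DELETE CASCADE: whenever an author row is deleted,
-- remove every book that referenced it.
CREATE TRIGGER delete_authors_books
AFTER DELETE ON author
FOR EACH ROW
BEGIN
    DELETE FROM book WHERE book.author_id = OLD.id;
END;
```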
Kind regards,
Code goes in your app.
Triggers are code. The functionality goes in your app, not in the database.
I think that databases should be used for data, not processing. I think apps should be used for processing, not data.
Database processing features merely muddy the water.