tSQLt - create a separate database for unit testing - T-SQL

I have started using tSQLt, and my question is: is it possible to have a separate database containing just the testing stuff (tables, stored procedures, assemblies, etc.)?
This testing database will sit on the same instance as the actual/target database.
If I try to fake a table I get the following error:
FakeTable could not resolve the object name, 'target_db.dbo.Sometable'
Has anyone had any experience with this?
Thanks.

As you discovered, this isn't currently possible, as the mocking procedures don't accept three-part names. This has been covered on the user feedback forum for SQL Test (Redgate's product that acts as a front end to tSQLt) at: http://sqltest.uservoice.com/forums/140716-sql-test-forum/suggestions/2421628-reduce-the-footprint
Dennis Lloyd, one of the authors of the tSQLt framework, wrote towards the end of that thread that support for a separate 'tSQLt' database was something they would keep under consideration.
There is also a related issue about mocking remote objects at http://sqltest.uservoice.com/forums/140716-sql-test-forum/suggestions/2423449-being-able-to-mock-fake-remote-objects
I hope that helps,
Dave

You can now do this, so long as the tSQLt framework is in the other database:
EXEC tSQLt.FakeTable '[dbo].[Position]';
EXEC OtherDB.tSQLt.FakeTable '[dbo].[PositionArchive]';
Source
This means that you can at least put your tests where you want them, though you have to install the framework in the actual database under test. That's not perfect, but it's better than nothing.
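A minimal sketch of how that can fit together, assuming tSQLt is installed in both databases, the tests live in the local test database, and OtherDB is the database under test (the test class, test name and table names are illustrative):
EXEC tSQLt.NewTestClass 'tests';
GO
CREATE PROCEDURE tests.[test PositionArchive receives archived rows]
AS
BEGIN
    -- Fake a table in the current (test) database.
    EXEC tSQLt.FakeTable '[dbo].[Position]';
    -- Fake a table in the other database, through its own copy of tSQLt.
    EXEC OtherDB.tSQLt.FakeTable '[dbo].[PositionArchive]';
    -- ... arrange test data, call the code under test, assert ...
END;
GO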

Related

Postgres AUTONOMOUS_TRANSACTION equivalent on the same DB

I'm currently working on a Spring Batch application that should insert some logs in case a certain type of error happens. The problem is that if the batch job fails, it automatically rolls back everything done, and that's perfect, but it also rolls back the error logs.
I need to achieve something similar to the AUTONOMOUS_TRANSACTION of Oracle while using PostgreSQL (14).
I've looked at dblink, and it seems to be the only thing close to an alternative, but I have found some problems:
I need to avoid a hard-coded connection string because the database host/port/name changes between environments; is that possible? I need to persist the data in the same database, so technically I don't need to connect to any other database, just use the calling connection.
Is it possible to create a function/procedure that takes care of all of this, so that I only have to call it from the Java side? Maybe that way I could pass the connection data as a parameter, in case avoiding it entirely is not possible.
In a best case scenario I would be able to do something like:
dblink_exec(text sql);
which, without connection arguments, would run against the same database where it is being executed.
The problem is that I need this done without specifying any connection data. It will live inside a function in the executing database, in the same schema; that function is promoted from one environment to the next, and the code needs to be identical, so any host/user/password must be avoided since they change per environment. And since everything happens in the same database and schema, they can technically be inferred.
Thanks in advance!
At the moment I haven't tried anything; I'm trying to gather some information first.
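For what it's worth, the usual workaround is a loopback dblink connection whose connection string is inferred from the session itself. A minimal sketch, assuming the dblink extension is installed and that a local connection as the current user is allowed without a password; the error_log table and function name are made up:
create extension if not exists dblink;
create or replace function log_autonomously(p_message text)
returns void
language plpgsql
as $$
declare
    -- Nothing environment-specific is hard-coded: the connection string is
    -- built from the calling session.
    v_conn text := format('dbname=%s user=%s', current_database(), current_user);
begin
    -- The insert runs in its own connection and transaction, so it survives
    -- a rollback of the calling transaction.
    perform dblink_exec(
        v_conn,
        format('insert into error_log(message, logged_at) values (%L, now())', p_message));
end;
$$;
Whether the password can really be omitted depends on your authentication setup (pg_hba.conf), so treat this as a starting point rather than a drop-in solution.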

Deploying DB2 user-defined functions in order of dependency

We have about 200 user-defined functions in DB2. These UDFs are generated by Data Studio into a single script file.
When we create a new database, we need to run the script file several times because some UDFs depend on other UDFs and cannot be created until the functions they reference exist.
Is there a way to generate the script file so that the deployment order takes these dependencies into account? Or is there some other technique to arrange the order efficiently?
Many thanks in advance.
That problem should only happen if the setting of auto_reval is not correct. See "Creating and maintaining database objects" for details.
Db2 allows objects to be created in an "unsorted" order. Only when an object is used (accessed) are it and its dependent objects checked. This behavior was introduced a long time ago; only some old, migrated databases still keep auto_reval=disabled, and some environments might set it via configuration scripts.
If you still run into issues, try setting auto_reval=DEFERRED_FORCE.
The db2look system command can generate DDL ordered by object creation time with the -ct option, so that can help if you don't want to use the auto_reval method.
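A rough sketch of both options from the command line, assuming a database named MYDB (the name is illustrative). To defer the dependency checks:
db2 UPDATE DB CFG FOR MYDB USING auto_reval DEFERRED_FORCE
Or to extract the DDL ordered by object creation time:
db2look -d MYDB -e -ct -o udfs_in_create_order.sql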

Change Properties of multiple diagrams in Enterprise Architect

I would like to change the properties of multiple diagrams together rather than clicking on them one by one. Does anyone know how this can be achieved?
You can use the scripting facility of Enterprise Architect to loop over the diagrams you would like to change and update them.
See this section of the manual to get help.
There are a bunch of example scripts included with EA, either in the local scripts or in the EAScriptLib MDG.
Another source of examples is my Github repository: https://github.com/GeertBellekens/Enterprise-Architect-VBScript-Library
You could write SQL to manipulate your database directly. t_diagram.PDATA holds a long, cryptic string in which one part is ScalePI=0; (the default, meaning no scaling). You can alter that to ScalePI=1; (meaning scale to one page).
String manipulation functions vary from database to database, so you need to write your own statement, which you can execute in a script using
Repository.Execute("UPDATE t_diagram ...")
Note that you should test this in a sandbox first, since an invalid SQL statement can easily corrupt your whole repository.
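As a rough sketch (the REPLACE function shown here is the SQL Server flavour; other repository backends may need a different string function, and you should only run it on a copy of the repository):
UPDATE t_diagram
SET PDATA = REPLACE(PDATA, 'ScalePI=0;', 'ScalePI=1;')
WHERE PDATA LIKE '%ScalePI=0;%'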

Arquillian Persistence Extension - Long execution time, is it normal?

I'm writing some tests with Arquillian for the persistence layer in my app. I would like to use the Persistence Extension for database population, etc. The problem is that one test takes about ~15-25 seconds. Is that normal? Or am I doing something wrong? I've tried running these tests against a local Postgres database (~10 sec per test), a remote Postgres database (~15 sec per test) and HSQLDB in a local container (~15 sec per test).
Thanks in advance
P.S. When I'm not using the Persistence Extension, 12 tests take about ~11 sec (which is acceptable), but then I have to persist and delete entities from the code (hard to maintain and manage).
I am going to guess you are using APE (Arquillian Persistence Extension) v1.0.0a6. If that is the case, what you are experiencing is the result of refactoring done between alpha5 and alpha6, against which I filed the following ticket: https://issues.jboss.org/browse/ARQ-1440
You could try using 1.0.0a5, which has some different issues that you might encounter and need to work around, but it has roughly 300% better performance than alpha6.

Managing database changes

I'm starting to move more logic into the database, using triggers, views, functions, CTEs, etc. When plv8/json comes out for postgres, I can see myself putting lots of logic in there.
I'm having problems with the "standard" way of doing database migrations in Sequel and ActiveRecord. Both Sequel and ActiveRecord let you put arbitrary SQL code into timestamped files. When each file is run, a schema_versions table is updated with the filename (or the timestamp in the filename), which keeps a record of which migrations have been applied to the current database.
If a lot of coding is being done at the database level, that means that modifications to existing views, functions, etc. follow the pattern below:
Migration 1 defines a function and a view that uses that function.
-- Migration 1
create function calculate(x int) returns int as $$
select x + 1;
$$ language sql;
create view foos as (
select something, calculate(something) from a_table
);
Requirements change, and I need to change the function's signature. In Migration 2 I have to drop all objects that depend on calculate, and recreate them by copying their entire body -- even if most of the other code didn't change!
-- Migration 2
-- Have to drop all views and functions that depend on the
-- `calculate(int)` function.
drop view foos;
-- I could do `drop function calculate(int) cascade` instead,
-- but I might accidentally drop some objects that wouldn't get recreated below.
drop function calculate(int);
create function calculate(x bigint) returns bigint as $$
select x + 1;
$$ language sql;
-- Now I have to recreate foos.
create view foos as (
select something, calculate(something) from a_table
);
If I'm building a system based on views and functions and triggers, my migrations would be filled with duplicated code, and it's difficult to find the latest version of the code. You might say "don't do that!", but for my purposes (e-commerce, shipping, transactions), I'm finding it's a lot easier and faster to have the database ensure the integrity of the data by doing the logic inside the database.
You can (of course) dump the current database schema (which includes all the code definitions), but I think you lose comments. And you wouldn't generally want to edit a giant file that contains the whole schema.
Any ideas on how to solve this problem?
My best idea is to have the SQL code contained in its own canonical files (app/sql/orders/shipping.sql, app/sql/orders/creation.sql, etc.). Everyone develops directly against these. Whenever it's time for a release, you'd make a new migration file, look at all the code that changed since the previous release, figure out the dependency chain of the database objects that need to be dropped and recreated, and then copy the SQL from the canonical files into a new Sequel/ActiveRecord migration file. But it's a pain. :/
Thoughts are very welcome. I hope I explained this well enough, I'm cutting back on my caffeine intake and I'm a little groggy atm.
Oh, I asked a similar question on Stack Overflow: Changing the type of a column used in other views. The answer was a function that let me pass in:
sql code to run
database views to drop and recreate
The function would retrieve the view definitions, drop the views, run the SQL code, then recreate the view definitions (in reverse order of dropping). Perhaps a system of functions like this would help solve the problem of having to copy/paste SQL code into the migration files.
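For illustration, a hedged sketch of such a helper (the function name is made up, and real-world concerns like grants, comments on views and materialized views are ignored):
create or replace function run_with_views_dropped(p_sql text, p_views text[])
returns void
language plpgsql
as $$
declare
    v_defs text[] := '{}';
    v_view text;
    i int;
begin
    -- Save each view's current definition, then drop it.
    foreach v_view in array p_views loop
        v_defs := v_defs || format('create view %s as %s',
                                   v_view, pg_get_viewdef(v_view::regclass));
        execute format('drop view %s', v_view);
    end loop;
    -- Run the migration code (e.g. replace a function).
    execute p_sql;
    -- Recreate the views in reverse order of dropping.
    for i in reverse coalesce(array_length(p_views, 1), 0) .. 1 loop
        execute v_defs[i];
    end loop;
end;
$$;
Usage would then look something like:
select run_with_views_dropped(
    'drop function calculate(int);
     create function calculate(x bigint) returns bigint as $f$ select x + 1; $f$ language sql;',
    array['foos']);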
I'd recommend Liquibase.
You create files that track the changes to your database, and these are applied to the database in the correct migration order.
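A hedged sketch of what that can look like with Liquibase's SQL-formatted changelog, reusing the objects from the migrations above (the author/id values are illustrative):
--liquibase formatted sql
--changeset alice:1 splitStatements:false
create function calculate(x int) returns int as $$
select x + 1;
$$ language sql;
--rollback drop function calculate(int);
--changeset alice:2
create view foos as (
select something, calculate(something) from a_table
);
--rollback drop view foos;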
You might find David E. Wheeler's blog posts interesting, starting from here:
http://justatheory.com/computers/databases/simple-sql-change-management.html
My rate of database change is fairly small but I tend to be careless and make small changes to the schema directly, so I've had to come up with a fair bit of infrastructure to catch when I've done so. The basic elements are:
A makefile that can rebuild a development database from scratch
A set of schema-files separated into "modules" (lookups_schema.sql, lookup_data.sql)
A set of update files that transition from one revision to the next
I don't usually have the corresponding downgrade scripts; some people do
A script to populate my database with a plausible amount of test data
Crucially, a test suite via pgTAP that checks my various functions, views and also the upgrade scripts (a minimal example follows this list). The upgrade tests can be run against a live database too.
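A minimal pgTAP example, reusing the calculate/foos objects from earlier (the test descriptions are illustrative):
begin;
select plan(2);
select has_view('foos', 'view foos should exist');
select is(calculate(1), 2, 'calculate() should add one');
select * from finish();
rollback;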
If you have a separate instance of PostgreSQL set up with fsync turned off, or running on a ramdisk, etc., then rebuilding the whole DB and populating it can take seconds (if you don't have too much test data).
Start with #1, #2, then add #6 (pgTAP is very cool), then the rest. The crucial thing is a test suite that checks your in-database code.
There are tools that try to automate schema changes for you, but they are really only good at adding a new column to a table and that sort of thing. Once you have code in your db then they're not much help.