Conditional Scala Play Evolutions - scala

I would like to implement an evolution that applies only if a condition is met in a Scala Play Framework application. The condition is that the application is running in a certain environment.
I have this evolution right now:
# payments SCHEMA
# --- !Ups
INSERT INTO table1 (id, provider_name, provider_country, provider_code, status, flag)
VALUES (10, 'XXXXX', 'XX', 'XXXXX', '1', '0');
# --- !Downs
DELETE FROM table2
WHERE id = 10;
I want the evolution to run only if this condition is met:
if(config.env == 'dev'){
//execute evolution
}
How do I achieve this? Is this a function of the evolution or the application logic?

One approach might be to use a stored procedure in conjunction with a database-backed app 'setting'. Assume your app has an appSetting table for storing application settings:
create table appSetting (
name varchar(63) not null primary key,
value varchar(255)
) ;
-- insert into appSetting values ('environment','dev');
Then, something along the following lines would create a tmpLog table (or insert a value into table1) only if appSetting has a value of 'dev' for setting 'environment' at the time of running the evolution:
# --- !Ups
create procedure doEvolution31()
begin
declare environment varchar(31);;
select value
into environment
from appSetting
where name='environment'
;;
if (environment='dev') then
create table tmpLog (id int not null primary key, text varchar(255));;
-- or INSERT INTO table1 (id, provider_name, provider_country, provider_code, status, flag) VALUES (10, 'XXXXX', 'XX', 'XXXXX', '1', '0');
end if;;
end
;
call doEvolution31();
# --- !Downs
drop procedure doEvolution31;
drop table if exists tmpLog;
-- or delete from table2 where id=10;
You don't mention which database you are using; the above is MySQL syntax. There might be a way to get a config value into the stored procedure, perhaps via some sbt magic, but I think we would use the above if we had such a requirement. (BTW, the double semicolons escape a single semicolon so that the individual statements inside the procedure are not executed while the procedure itself is being created.)

Why do you need it at all? Don't you use a separate database for each environment, as the documentation suggests?
If you do, then you probably have different database configurations, probably in different files. That probably looks something like this:
# application.conf
db.default {
driver=com.mysql.jdbc.Driver
url="jdbc:mysql://localhost/playdb"
username=playdbuser
password="a strong password"
}
# dev.conf
db.dev {
driver=com.mysql.jdbc.Driver
url="jdbc:mysql://localhost/playdb"
username=playdbuser
password="a strong password"
}
# staging.conf
db.staging {
driver=com.mysql.jdbc.Driver
url="jdbc:mysql://localhost/playdb"
username=playdbuser
password="a strong password"
}
# prod.conf
db.prod {
driver=com.mysql.jdbc.Driver
url="jdbc:mysql://localhost/playdb"
username=playdbuser
password="a strong password"
}
Actually, nothing stops you from pointing them all at the same database, but don't: use a proper database per environment. Assuming you are using the JDBC connector and the Play evolutions plugin, just put your evolution in the right directory and you'll achieve what you want.
The other question then becomes: "How do I use the proper database per environment?" And the answer depends strongly on your choice of DI.

Related

Case-Sensitive Linked-Server merge script - Set IDENTITY_INSERT <table> ON not working

I am doing an experimental script to do a SQL comparison (COLLATED as case-sensitive) and I am having issues with SET IDENTITY_INSERT <Table> ON.
I have switched on this option and disabled foreign key checks, but it still seems to be complaining about the latter.
Here are the steps I followed:
1 - I created a linked server
EXEC sp_addlinkedserver @Server=N'xxx.xxx.xxx.xxx', @srvproduct=N'SQL Server'
2 - I added the login credentials
EXEC master.dbo.sp_addlinkedsrvlogin
@rmtsrvname = N'xxx.xxx.xxxx.xxx',
@locallogin = NULL ,
@useself = N'False',
@rmtuser = N'xxxxxxxxxxx',
@rmtpassword = N'xxxxxxxxxxx'
3 - In the same batch, I set IDENTITY_INSERT, disabled foreign key checks, and ran the following merge script. Note, the deferred query returns an XML field, which is disallowed over distributed servers, so I cast it to NVARCHAR(MAX).
SET IDENTITY_INSERT [DATABASE1].[dbo].[TABLE1] ON
ALTER TABLE [DATABASE1].[dbo].[TABLE1] NOCHECK CONSTRAINT ALL
MERGE [DATABASE1].[dbo].[TABLE1]
USING OPENQUERY([xxx.xxx.xxx.xxx], 'SELECT S.ID, S.EventId, S.SnapshotTypeID, CAST(S.Content AS NVARCHAR(MAX)) AS Content FROM [DATABASE1].[dbo].[TABLE1] AS S') AS S
ON (CAST([DATABASE1].[dbo].[TABLE1].Content AS NVARCHAR(MAX)) = S.Content)
WHEN NOT MATCHED BY TARGET
THEN INSERT VALUES (S.ID, S.EventId, S.SnapshotTypeID, CAST(S.Content AS XML))
WHEN MATCHED
THEN UPDATE SET [DATABASE1].[dbo].[TABLE1].EventId = S.EventId,
[DATABASE1].[dbo].[TABLE1].SnapshotTypeID = S.SnapshotTypeID,
[DATABASE1].[dbo].[TABLE1].Content = S.Content
COLLATE Latin1_General_CS_AS;
GO
The error message I am getting reads as follows:
Msg 8101, Level 16, State 1, Line 4
An explicit value for the identity column in table 'Database1.dbo.Table' can only be specified when a column list is used and IDENTITY_INSERT is ON.
How can I fix this? As I mentioned, this script is only an experiment for one of the systems I am writing. I am probably reinventing the wheel somewhere, but it's all about learning in this exercise.
An explicit value for the identity column in table 'Database1.dbo.Table' can only be specified when a column list is used and IDENTITY_INSERT is ON.
You have no column list. As the error says, an explicit value for the identity column can only be specified when the INSERT uses an explicit column list.
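In other words, the insert branch of the MERGE needs the columns spelled out. A sketch of the corrected branch, reusing the placeholder names from the question:
WHEN NOT MATCHED BY TARGET
THEN INSERT (ID, EventId, SnapshotTypeID, Content)
VALUES (S.ID, S.EventId, S.SnapshotTypeID, CAST(S.Content AS XML))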

Postgres Rules Preventing CTE Queries

Using Postgres 9.3:
I am attempting to automatically populate a table when an insert is performed on another table. This seems like a good use for rules, but after adding the rule to the first table, I am no longer able to perform inserts into the second table using the writable CTE. Here is an example:
CREATE TABLE foo (
id INT PRIMARY KEY
);
CREATE TABLE bar (
id INT PRIMARY KEY REFERENCES foo
);
CREATE RULE insertFoo AS ON INSERT TO foo DO INSERT INTO bar VALUES (NEW.id);
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
INSERT INTO foo SELECT * FROM a
When this is run, I get the error
"ERROR: WITH cannot be used in a query that is rewritten by rules
into multiple queries".
I have searched for that error string, but am only able to find links to the source code. I know that I can perform the above using row-level triggers instead, but it seems like I should be able to do this at the statement level. Why can I not use the writable CTE, when queries like this can (in this case) be easily re-written as:
INSERT INTO foo SELECT * FROM (VALUES (1), (2)) a
Does anyone know of another way that would accomplish what I am attempting to do other than 1) using rules, which prevents the use of "with" queries, or 2) using row-level triggers? Thanks,
        
TL;DR: use triggers, not rules.
Generally speaking, prefer triggers over rules, unless rules are absolutely necessary. (Which, in practice, they never are.)
Using rules introduces heaps of problems which will needlessly complicate your life down the road. You've run into one here. Another (major) one is, for instance, that the number of affected rows will correspond to that of the very last query -- if you're relying on FOUND somewhere and your query is incorrectly reporting that no rows were affected by a query, you'll be in for painful bugs.
Moreover, there's occasional talk of deprecating Postgres rules outright:
http://postgresql.nabble.com/Deprecating-RULES-td5727689.html
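For the tables in the question, a row-level trigger that replaces the rule might look like this (a sketch only; the trigger and function names are made up):
-- Drop the rule first, then replace it with a trigger
DROP RULE insertFoo ON foo;

CREATE FUNCTION copy_foo_to_bar() RETURNS trigger AS $$
BEGIN
    INSERT INTO bar VALUES (NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_foo
AFTER INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE copy_foo_to_bar();

-- The writable CTE from the question now works
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
INSERT INTO foo SELECT * FROM a;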
Like the other answer, I definitely recommend using INSTEAD OF triggers rather than RULEs.
However, if for some reason you don't want to change existing VIEW RULEs and still want to use WITH, you can do so by wrapping the VIEW in a stored procedure:
create function insert_foo(int) returns void as $$
insert into foo values ($1)
$$ language sql;
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
SELECT insert_foo(a.column1) from a;
This could be useful when using some legacy db through some system that wraps statements with CTEs.

Activerecord-import & serial column in PostgreSQL

I am in the process of upgrading a Rails 2.3.4 project to Rails 3.1.1. The old version used ar-extensions to handle a data import. I pulled out ar-extensions and replaced it with activerecord-import, which I understand has exactly the same interfaces.
My call looks like this:
Student.import(columns, values)
Both args are valid arrays holding the correct data, but I get a big fat error!
The error stack looks like this:
NoMethodError (You have a nil object when you didn't expect it!
You might have expected an instance of Array.
The error occurred while evaluating nil.split):
activerecord (3.1.1) lib/active_record/connection_adapters/postgresql_adapter.rb:828:in 'default_sequence_name'
activerecord (3.1.1) lib/active_record/base.rb:647:in `reset_sequence_name'
activerecord (3.1.1) lib/active_record/base.rb:643:in `sequence_name'
activerecord-import (0.2.9) lib/activerecord-import/import.rb:203:in `import'
Looking through the code, it seems as though activerecord-import calls ActiveRecord, which in turn looks for the name and next value of the Postgres sequence.
So activerecord-import looks for the sequence_name:
lib/activerecord-import/import.rb:203
# Force the primary key col into the insert if it's not
# on the list and we are using a sequence and stuff a nil
# value for it into each row so the sequencer will fire later
-> if !column_names.include?(primary_key) && sequence_name && connection.prefetch_primary_key?
column_names << primary_key
array_of_attributes.each { |a| a << nil }
end
It calls active record ...
lib/active_record/base.rb:647:in `reset_sequence_name'
# Lazy-set the sequence name to the connection's default. This method
# is only ever called once since set_sequence_name overrides it.
def sequence_name #:nodoc:
-> reset_sequence_name
end
def reset_sequence_name #:nodoc:
-> default = connection.default_sequence_name(table_name, primary_key)
set_sequence_name(default)
default
end
The code errors when serial_sequence returns nil and default_sequence_name tries to split it.
lib/active_record/connection_adapters/postgresql_adapter.rb
# Returns the sequence name for a table's primary key or some other specified key.
def default_sequence_name(table_name, pk = nil) #:nodoc:
-> serial_sequence(table_name, pk || 'id').split('.').last
rescue ActiveRecord::StatementInvalid
"#{table_name}_#{pk || 'id'}_seq"
end
def serial_sequence(table, column)
result = exec_query(<<-eosql, 'SCHEMA', [[nil, table], [nil, column]])
SELECT pg_get_serial_sequence($1, $2)
eosql
result.rows.first.first
end
When I execute pg_get_serial_sequence() directly against the database I get no value returned:
SELECT pg_get_serial_sequence('student', 'id')
But I can see that in the database there is a sequence called student_id_seq
I am using the following versions of Ruby, Rails, pg, etc.:
Rails 3.1.1
Ruby 1.9.2
Activerecord-import 0.2.9
pg 0.12.2
psql (9.0.5, server 9.1.3)
I have migrated the database from MySQL to PostgreSQL; I don't think this has any bearing on the problem, but I thought I'd better add it for completeness.
I can't work out why this isn't working!
Summary of your description:
The table student exists.
The column id exists.
The sequence student_id_seq exists.
pg_get_serial_sequence('student', 'id') still returns NULL.
Two possible explanations:
1) The sequence is not linked to the column.
Column default and the tie between column and sequence are independent features. The mere existence of a fitting sequence does not mean it does what you presume. If you create a column as serial you get the whole package, though. Read the details in the manual.
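For reference, a serial column is roughly shorthand for the following (a sketch of the standard PostgreSQL expansion):
CREATE SEQUENCE student_id_seq;
CREATE TABLE student (
    id integer PRIMARY KEY DEFAULT nextval('student_id_seq')
);
ALTER SEQUENCE student_id_seq OWNED BY student.id;
-- only with all three pieces in place does pg_get_serial_sequence('student', 'id') find the sequence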
To fix this (and if you are sure that's how it should be), you can mark the sequence as "owned by" student.id:
ALTER SEQUENCE student_id_seq OWNED BY student.id;
Also check if the column default is set as expected:
SELECT column_name, column_default
FROM information_schema.columns
WHERE table_name = 'student'
-- AND table_schema = 'your_schema' -- if needed
If not, repair:
ALTER TABLE student ALTER COLUMN id SET DEFAULT nextval('student_id_seq');
2) A mixup of host address / port / database / schema / capitalization of the table name.
It happens all the time. Make sure you check the same database that your app connects to, with the same user or at least the same search_path. Make sure the objects are in the schema where you expect them, and that there isn't, for instance, another student table in another schema that got mixed up.
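A couple of quick diagnostic queries can help rule this out (a sketch; adjust the table name if yours differs):
-- Which database / schema / user am I actually connected as?
SELECT current_database(), current_schema(), current_user;
SHOW search_path;

-- Where do tables named 'student' (any capitalization) actually live?
SELECT table_schema, table_name
FROM information_schema.tables
WHERE lower(table_name) = 'student';

-- Does the sequence link show up now?
SELECT pg_get_serial_sequence('student', 'id');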

the max function requires 1 argument(s)

I wrote this very simple SP in SQL 2008:
Create procedure dbo.GetNextID
(
@TableName nvarchar(50),
@FieldName nvarchar(50)
)
AS
BEGIN
exec('select isnull(max('+@FieldName+'),0)+1 as NewGeneratedID from '+ @TableName);
END
When I execute this procedure in Visual Studio SQL Express and pass a table name and a field name, it works fine. But when I try to add this SP as a query in a QueryTableAdapter in my ADO DataSet, I receive this error before clicking the Finish button:
the max function requires 1 argument(s)
Can anyone help me with this?
I guess that VS tries to determine a field list by executing the SP. But as it does not know what to pass to the SP, it uses empty parameters. Now, of course, your select statement fails.
You could try adding the following to your SP:
IF ISNULL(@TableName,'') = '' SET @TableName = '<Name of a test table>';
IF ISNULL(@FieldName,'') = '' SET @FieldName = '<Name of some field>';
Use the names of some field and table that do exist here (for example names that you'd use from your application, too).
Alternatively you could add the following above the exec:
IF (ISNULL(@TableName, '') = '') OR (ISNULL(@FieldName, '') = '')
BEGIN
SELECT -1 AS NewGeneratedId
RETURN 0
END
EDIT
On a side note, I'd like to warn you about concurrency issues that I see coming up from what your code does. If this code is supposed to return a unique ID for a new record in some table, I'd redesign this as follows:
Create a table NumberSeries where each row contains a unique name, a possible range for IDs and the current ID value.
Create a stored procedure that uses UPDATE ... OUTPUT to update the current ID for a number series and retrieve it in one step.
That way you can make sure that creating a new ID is a single operation that does not cause concurrency problems.
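A minimal sketch of that approach (the table, procedure, and series names here are invented for illustration):
CREATE TABLE dbo.NumberSeries
(
    Name         VARCHAR(50) NOT NULL PRIMARY KEY,
    MinValue     BIGINT NOT NULL,
    MaxValue     BIGINT NOT NULL,
    CurrentValue BIGINT NOT NULL
);

INSERT INTO dbo.NumberSeries (Name, MinValue, MaxValue, CurrentValue)
VALUES ('InvoiceId', 1, 999999999, 0);
GO

CREATE PROCEDURE dbo.GetNextValue
    @SeriesName VARCHAR(50)
AS
BEGIN
    -- Increment and return the new value in a single atomic statement
    UPDATE dbo.NumberSeries
    SET CurrentValue = CurrentValue + 1
    OUTPUT inserted.CurrentValue AS NewGeneratedID
    WHERE Name = @SeriesName
      AND CurrentValue < MaxValue;
END
GO

-- Usage:
EXEC dbo.GetNextValue @SeriesName = 'InvoiceId';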

Sequence Generators in T-SQL

We have an Oracle application that uses a standard pattern to populate surrogate keys. We have a series of extrinsic rows (that have specific values for the surrogate keys) and other rows that have intrinsic values.
We use the following Oracle trigger snippet to determine what to do with the Surrogate key on insert:
IF :NEW.SurrogateKey IS NULL THEN
SELECT SurrogateKey_SEQ.NEXTVAL INTO :NEW.SurrogateKey FROM DUAL;
END IF;
If the supplied surrogate key is null then get a value from the nominated sequence, else pass the supplied surrogate key through to the row.
I can't seem to find an easy way to do this in T-SQL. There are all sorts of approaches, but none of them use the notion of a sequence generator the way Oracle and other SQL-92 compliant DBs do.
Anybody know of a really efficient way to do this in SQL Server T-SQL? By the way, we're using SQL Server 2008 if that's any help.
You may want to look at IDENTITY. This gives you a column for which the value will be determined when you insert the row.
This may mean that you have to insert the row, and determine the value afterwards, using SCOPE_IDENTITY().
There is also an article on simulating Oracle Sequences in SQL Server here: http://www.sqlmag.com/Articles/ArticleID/46900/46900.html?Ad=1
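For completeness, a minimal IDENTITY sketch (the table and column names are invented):
CREATE TABLE dbo.Widget
(
    SurrogateKey INT IDENTITY(1,1) PRIMARY KEY,
    Name         VARCHAR(50) NOT NULL
);

INSERT INTO dbo.Widget (Name) VALUES ('first widget');

-- The value SQL Server just generated in this scope
SELECT SCOPE_IDENTITY() AS NewSurrogateKey;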
IDENTITY is one approach, although it will generate unique identifiers only at a per-table level.
Another approach is to use unique identifiers, in particular NEWSEQUENTIALID(), which ensures the generated ID is always bigger than the last. The problem with this approach is that you are no longer dealing with integers.
The closest way to emulate the Oracle method is to have a separate table with a counter field, and a stored procedure that reads this field, increments it, and returns the value.
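To also mirror the Oracle trigger from the question (only fill the key when it wasn't supplied), such a counter can be consumed from an INSTEAD OF INSERT trigger. This is only a sketch; dbo.MyTable, its columns, and dbo.SurrogateCounter are hypothetical:
CREATE TABLE dbo.SurrogateCounter (CurrentValue BIGINT NOT NULL);
INSERT INTO dbo.SurrogateCounter VALUES (0);
GO

CREATE TRIGGER trg_MyTable_SurrogateKey ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @base BIGINT, @rows INT;
    SELECT @rows = COUNT(*) FROM inserted WHERE SurrogateKey IS NULL;

    -- Reserve a block of @rows values in one atomic statement
    IF @rows > 0
        UPDATE dbo.SurrogateCounter
        SET @base = CurrentValue = CurrentValue + @rows;

    -- Rows that supplied their own surrogate key pass straight through
    INSERT INTO dbo.MyTable (SurrogateKey, SomeColumn)
    SELECT SurrogateKey, SomeColumn
    FROM inserted
    WHERE SurrogateKey IS NOT NULL;

    -- Rows without a key get values from the reserved block
    INSERT INTO dbo.MyTable (SurrogateKey, SomeColumn)
    SELECT @base - @rows + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), SomeColumn
    FROM inserted
    WHERE SurrogateKey IS NULL;
END
GO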
Here is a way to do it using a table to store your last sequence number. The stored proc is very simple; most of the stuff in there is because I'm lazy and don't like surprises should I forget something. So, here it is:
----- Create the sequence value table.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[SequenceTbl]
(
[CurrentValue] [bigint]
) ON [PRIMARY]
GO
-----------------Create the stored procedure
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE procedure [dbo].[sp_NextInSequence](@SkipCount BigInt = 1)
AS
BEGIN
BEGIN TRANSACTION
DECLARE @NextInSequence BigInt;
IF NOT EXISTS
(
SELECT
CurrentValue
FROM
SequenceTbl
)
INSERT INTO SequenceTbl (CurrentValue) VALUES (0);
SELECT TOP 1
@NextInSequence = ISNULL(CurrentValue, 0) + 1
FROM
SequenceTbl WITH (HoldLock);
UPDATE SequenceTbl WITH (UPDLOCK)
SET CurrentValue = @NextInSequence + (@SkipCount - 1);
COMMIT TRANSACTION
RETURN @NextInSequence
END;
GO
--------Use the stored procedure in Sql Manager to retrive a test value.
declare @NextInSequence BigInt
exec @NextInSequence = sp_NextInSequence;
--exec @NextInSequence = sp_NextInSequence <skipcount>;
select NextInSequence = @NextInSequence;
-----Show the current table value.
select * from SequenceTbl;
The astute will notice that there is an optional parameter for the stored proc. This allows the caller to reserve a block of IDs when more than one record needs a unique ID: using @SkipCount, the caller need only make a single call for however many IDs are needed.
The entire "IF EXISTS...INSERT INTO..." block can be removed if you remember to insert a record when the table is created. If you also remember to insert that record with a value (your seed value - a number which will never be used as an ID), you can also remove the ISNULL(...) portion of the select and just use CurrentValue + 1.
Now, before anyone makes a comment, please note that I am a software engineer, not a dba! So, any constructive criticism concerning the use of "Top 1", "With (HoldLock)" and "With (UPDLock)" is welcome. I don't know how well this will scale but this works OK for me so far...