I have created a default:
CREATE DEFAULT dbo.environment AS 'UAT'
I have created a user-defined type:
CREATE TYPE [dbo].[Environment] FROM [varchar](50) NOT NULL
Then I used it as the default for the user-defined type dbo.Environment
(using Management Studio rather than a stored procedure, so I see dbo.environment in the type's "Default" property, but it can also be done with: sp_bindefault 'dbo.environment', 'dbo.Environment')
Now I declare a variable and select it:
declare @env dbo.environment
SELECT @env as env
The result is NULL; I would expect 'UAT'.
How are defaults supposed to work? Am I doing something wrong?
Making use of the default is possible with a table, as in this code:
create table dbo.test (id int, env dbo.environment)
insert into test (id) values(1)
select * from test
but is it possible without a table? (It does not work with a table variable either :( )
I was thinking of providing a semi-global variable this way.
SQL Server 2008 R2
I have created a very simple function in Db2 on Cloud (Db2oC) as below, which has one UPDATE SQL statement and one SELECT SQL statement, along with MODIFIES SQL DATA. But I still get the error below, even though I have specified MODIFIES SQL DATA. I did GRANT ALL on the TEST table to my user id, and also GRANT EXECUTE ON FUNCTION to my user id to be on the safe side. Can you please help explain what the issue could be?
I have simply invoked the function using a SELECT statement like below:
SELECT TESTSCHEMA.MAIN_FUNCTION() FROM TABLE(VALUES(1));
SQL Error [38002]: User defined routine "TESTSCHEMA.MAIN_FUNCTION"
(specific name "SQL201211013006981") attempted to modify data but was
not defined as MODIFIES SQL DATA.. SQLCODE=-577, SQLSTATE=38002,
DRIVER=4.27.25
CREATE OR REPLACE FUNCTION MAIN_FUNCTION()
RETURNS VARCHAR(20)
LANGUAGE SQL
MODIFIES SQL DATA
BEGIN
DECLARE val VARCHAR(20);
UPDATE TEST t SET t.CONTENT_TEXT = 'test value' WHERE t.ID = 1;
SELECT CONTENT_TEXT INTO val FROM TEST WHERE ID = 1;
RETURN val;
END;
Appreciate your help.
With the MODIFIES SQL DATA clause, the usage of the function is restricted on Db2-LUW.
These restrictions do not apply for user defined functions that do not modify data.
For your specific example, that UDF will work when used as the sole expression on the right-hand side of an assignment statement in a compound-SQL (compiled) statement.
For example:
--#SET TERMINATOR #
create or replace variable my_result varchar(20) default null#
begin
    set my_result = main_function();
end#
Consider using stored procedures to modify table contents instead of user-defined functions.
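For instance, here is a minimal sketch of the same logic as a stored procedure (the procedure name and OUT parameter are illustrative, not from the original):
--#SET TERMINATOR #
-- hypothetical SQL PL rewrite of the UDF as a procedure
CREATE OR REPLACE PROCEDURE MAIN_PROCEDURE(OUT val VARCHAR(20))
LANGUAGE SQL
MODIFIES SQL DATA
BEGIN
    UPDATE TEST t SET t.CONTENT_TEXT = 'test value' WHERE t.ID = 1;
    SELECT CONTENT_TEXT INTO val FROM TEST WHERE ID = 1;
END#
It would be invoked with CALL MAIN_PROCEDURE(?) rather than from a SELECT.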
Alternatively, you could avoid using a function altogether and just use a single "data-change statement":
SELECT CONTENT_TEXT
FROM NEW TABLE(
UPDATE TEST t
SET t.CONTENT_TEXT = 'test value'
WHERE t.ID = 1
)
I found in several places that one can drop a schema in DB2 along with all of its contents (indexes, SPs, triggers, sequences, etc.) using
CALL SYSPROC.ADMIN_DROP_SCHEMA('schema_name', NULL, 'ERRORSCHEMA', 'ERRORTAB');
However, I am getting the following error while using this command:
1) [Code: -469, SQL State: 42886] The parameter mode OUT or INOUT is not valid for a parameter in the routine named "ADMIN_DROP_SCHEMA" with specific name "ADMIN_DROP_SCHEMA" (parameter number "3", name "ERRORTABSCHEMA").. SQLCODE=-469, SQLSTATE=42886, DRIVER=4.22.29
2) [Code: -727, SQL State: 56098] An error occurred during implicit system action type "2". Information returned for the error includes SQLCODE "-469", SQLSTATE "42886" and message tokens "ADMIN_DROP_SCHEMA|ADMIN_DROP_SCHEMA|3|ERRORTABSCHEMA".. SQLCODE=-727, SQLSTATE=56098, DRIVER=4.22.29
Can anyone suggest what's wrong here? I tried looking in several places but didn't get any ideas. It doesn't seem to be an authorization issue. I am using DB2 version 11.5.
You are using the ADMIN_DROP_SCHEMA procedure parameters incorrectly, assuming you are CALLing the procedure from SQL and not the CLP.
The third and fourth parameters cannot be literals (despite the documentation giving such an example); instead they must be host variables, because the procedure requires them to be input/output parameters.
If the stored procedure completes without errors, it sets these parameters to NULL, so your code should check for this.
If the stored procedure detects errors, it creates and adds rows to the specified table and leaves the values of these parameters unchanged; you must then query that table to list the error(s). You should drop this table before calling the stored procedure, otherwise the procedure will fail with -601.
Example:
--#SET TERMINATOR #
drop table errschema.errtable#
set serveroutput on#
begin
    declare v_errschema varchar(20) default 'ERRSCHEMA';
    declare v_errtab varchar(20) default 'ERRTABLE';
    CALL SYSPROC.ADMIN_DROP_SCHEMA('SOMESCHEMA', NULL, v_errschema, v_errtab);
    if v_errschema is null and v_errtab is null
    then
        call dbms_output.put_line('The admin_drop_schema reported success');
    else
        call dbms_output.put_line('admin_drop_schema failed and created/populated table '||rtrim(v_errschema)||'.'||rtrim(v_errtab));
    end if;
end#
You can use global variables if you would like to use ADMIN_DROP_SCHEMA outside of compound SQL, e.g.:
CREATE OR REPLACE VARIABLE ERROR_SCHEMA VARCHAR(128) DEFAULT 'SYSTOOLS';
CREATE OR REPLACE VARIABLE ERROR_TAB VARCHAR(128) DEFAULT 'ADS_ERRORS';
DROP TABLE IF EXISTS SYSTOOLS.ADS_ERRORS;
CALL ADMIN_DROP_SCHEMA('MY_SCHEMA', NULL, ERROR_SCHEMA, ERROR_TAB);
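Since ADMIN_DROP_SCHEMA sets both parameters to NULL on success, you can then check the outcome afterwards, for example:
VALUES (ERROR_SCHEMA, ERROR_TAB);
-- both NULL means success; otherwise query SYSTOOLS.ADS_ERRORS for details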
I would like to implement an evolution that applies only if a condition is met in a Scala Play Framework application. The condition is that the application should be in a certain environment.
I have this evolution right now:
# payments SCHEMA
# --- !Ups
INSERT INTO table1 (id, provider_name, provider_country, provider_code, status, flag)
VALUES (10, 'XXXXX', 'XX', 'XXXXX', '1', '0');
# --- !Downs
DELETE FROM table2
WHERE id = 10;
I want the evolution to run only if this condition is met:
if(config.env == 'dev'){
//execute evolution
}
How do I achieve this? Is this a function of the evolution or the application logic?
One approach might be to use a stored procedure in conjunction with a db-based app 'setting'. Assume your app has an appSetting table for storing app settings.
create table appSetting (
name varchar(63) not null primary key,
value varchar(255)
) ;
-- insert into appSetting values ('environment','dev');
Then, something along the following lines would create a tmpLog table (or insert a value into table1) only if appSetting has a value of 'dev' for setting 'environment' at the time of running the evolution:
# --- !Ups
create procedure doEvolution31()
begin
    declare environment varchar(31);;
    select value
      into environment
      from appSetting
     where name='environment';;
    if (environment='dev') then
        create table tmpLog (id int not null primary key, text varchar(255));;
        -- or INSERT INTO table1 (id, provider_name, provider_country, provider_code, status, flag) VALUES (10, 'XXXXX', 'XX', 'XXXXX', '1', '0');
    end if;;
end
;
call doEvolution31();
# --- !Downs
drop procedure doEvolution31;
drop table if exists tmpLog;
-- or delete from table2 where id=10;
You don't mention which db you are using. The above is MySQL syntax. There might be a way to get a config value into the stored proc, perhaps via some sbt magic, but I think we would use the above if we had such a requirement. (BTW the double semicolons escape a single semicolon so that the individual statements of the procedure are not executed while the procedure is being created.)
Why do you need this at all? Don't you use a separate db for each environment, as the documentation recommends?
If you do, then you probably have different db configurations, probably in different files, looking something like this:
# application.conf
db.default {
driver=com.mysql.jdbc.Driver
url="jdbc:mysql://localhost/playdb"
username=playdbuser
password="a strong password"
}
# dev.conf
db.dev {
driver=com.mysql.jdbc.Driver
url="jdbc:mysql://localhost/playdb"
username=playdbuser
password="a strong password"
}
# staging.conf
db.staging {
driver=com.mysql.jdbc.Driver
url="jdbc:mysql://localhost/playdb"
username=playdbuser
password="a strong password"
}
# prod.conf
db.prod {
driver=com.mysql.jdbc.Driver
url="jdbc:mysql://localhost/playdb"
username=playdbuser
password="a strong password"
}
Actually nothing stops you from making it the same db, but don't: just use a proper db per environment. Assuming you are using the JDBC connector and the Play Evolutions plugin, just put your evolution in the right directory and you'll achieve what you want.
The other question is then: "How do I use the proper db per environment?" And the answer strongly depends on your choice of DI.
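For example, with per-environment configs like the ones above, the Play Evolutions plugin picks up scripts from a directory named after each database key (this is the standard layout; the file names are illustrative):
conf/evolutions/default/1.sql   -- applied to db.default
conf/evolutions/dev/1.sql       -- applied to db.dev; a place for the dev-only INSERT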
How can I set a default for column B to be the value in column A?
I know it is possible in Microsoft SQL Server:
http://www.ideaexcursion.com/2010/04/19/default-column-value-to-identity-of-different-column/
Is it possible in PostgreSQL?
The linked example shows how to initialize one column with the value of the identity column of the same table.
That is possible in Postgres:
create table identdefault
(
a serial not null,
b int not null default currval('identdefault_a_seq')
);
serial will create a sequence in the background named tablename_column_seq, so we know that the sequence for identdefault.a will be named identdefault_a_seq, and we can access its last value through the currval function.
Running:
insert into identdefault default values;
insert into identdefault default values;
insert into identdefault default values;
select *
from identdefault
will output:
a | b
--+--
1 | 1
2 | 2
3 | 3
This seems to only work with Postgres 9.4; when I tried it with 9.3 (on SQLFiddle) I got an error. But in that case it is possible as well: you just can't use the serial "shortcut" and need to create the sequence explicitly:
create sequence identdefault_a_seq;
create table identdefault
(
a int not null default nextval('identdefault_a_seq'),
b int not null default currval('identdefault_a_seq')
);
insert into identdefault default values;
insert into identdefault default values;
insert into identdefault default values;
If you want a definition identical to the one with the serial column, you just need to make the sequence belong to the column:
alter sequence identdefault_a_seq owned by identdefault.a;
SQLFiddle: http://sqlfiddle.com/#!15/0aa34/1
The answer to the much broader question "How can I set a default for column B to be the value in column A?" is unfortunately: no, you can't (see klin's comment).
You can't do it with an actual DEFAULT, but it's trivial with a BEFORE trigger.
CREATE OR REPLACE FUNCTION whatever() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
NEW.onecol := NEW.othercol;
RETURN NEW;
END;
$$;
CREATE TRIGGER whatever_tg
BEFORE INSERT ON mytable
FOR EACH ROW EXECUTE PROCEDURE whatever();
"You cannot do that" and postgres don't go well together. There's almost always a way you can do that (whatever "that" turns out to be).
The question is more like: How do you want to do it?
One way that is nice to DB admins would be: create a BEFORE trigger and manipulate the new row before it is written.
If your rules for computing that new column are very fancy, turn to one of the embedded languages (like PL/Perl).
So: Is it possible? Of course it is.
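For illustration only, here is a hypothetical PL/Perl version of such a trigger function (it assumes the plperl extension is available; the column names onecol/othercol are borrowed from the previous answer):
CREATE OR REPLACE FUNCTION fancy_default() RETURNS trigger LANGUAGE plperl AS $$
    # $_TD->{new} is the row about to be written
    $_TD->{new}{onecol} = $_TD->{new}{othercol};
    return 'MODIFY';   # tell Postgres to use the modified row
$$;
It would be attached with CREATE TRIGGER exactly as in the previous answer.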
I am using PostgreSQL 9.4, and while writing functions I want to use self-defined error codes (int). However, I may want to change the exact numeric values later.
For instance
-1 means USER_NOT_FOUND.
-2 means USER_DOES_NOT_HAVE_PERMISSION.
I can define these in a table codes_table(code_name::text, code_value::integer) and use them in functions as follows:
(SELECT codes_table.code_value FROM codes_table WHERE codes_table.code_name = 'USER_NOT_FOUND')
Is there another way to do this? Maybe global variables?
Postgres does not have global variables.
However you can define custom configuration parameters.
To keep things clear define your own parameters with a given prefix, say glb.
This simple function will make it easier to place the parameter in queries:
create or replace function glb(code text)
returns integer language sql as $$
select current_setting('glb.' || code)::integer;
$$;
set glb.user_not_found to -1;
set glb.user_does_not_have_permission to -2;
select glb('user_not_found'), glb('user_does_not_have_permission');
User-defined parameters are local to the session, so the parameters should be defined at the beginning of each session.
Building on @klin's answer, there are a couple of ways to persist a configuration parameter beyond the current session. Note that these require superuser privileges.
To set a value for all connections to a particular database:
ALTER DATABASE db SET abc.xyz = 1;
You can also set a server-wide value using the ALTER SYSTEM command, added in 9.4. It only seems to work for user-defined parameters if they have already been SET in your current session. Note also that this requires a configuration reload to take effect.
SET abc.xyz = 1;
ALTER SYSTEM SET abc.xyz = 1;
SELECT pg_reload_conf();
Pre-9.4, you can accomplish the same thing by adding the parameter to your server's postgresql.conf file. In 9.1 and earlier, you also need to register a custom variable class.
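For example, hypothetical entries in postgresql.conf (the custom_variable_classes line is required only on 9.1 and earlier):
# postgresql.conf
custom_variable_classes = 'abc'    # only needed on 9.1 and earlier
abc.xyz = 1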
You can use a trick and declare your variables as a 1-row CTE, which you then CROSS JOIN to the rest. See example:
WITH
variables AS (
SELECT 'value1'::TEXT AS var1, 10::INT AS var2
)
SELECT t.*, v.*
FROM
my_table AS t
CROSS JOIN variables AS v
WHERE t.random_int_column = var2;
PostgreSQL does not support global variables at the DB level. Why not add them yourself:
CREATE TABLE global_variables (
    key text NOT NULL PRIMARY KEY,
    value text
);
INSERT INTO global_variables (key, value) VALUES ('error_code_for_spaceship_engine', '404');
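Reading a value back is then a plain lookup, casting to whatever type you need:
SELECT value::int AS error_code
FROM global_variables
WHERE key = 'error_code_for_spaceship_engine';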
If values of different types are needed, consider using JSON as the type for value, but then deserialization code is required for each type.
You can use this:
-- note: this assumes the schema exists, e.g. CREATE SCHEMA globals;
CREATE OR REPLACE FUNCTION globals.maxCities()
RETURNS integer AS
$$ SELECT 100 $$ LANGUAGE sql IMMUTABLE;
... and then use globals.maxCities() directly in your code.
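A quick check; since the function is marked IMMUTABLE, the planner can fold the call into a constant:
SELECT globals.maxCities();  -- returns 100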