TSQL Default Minimum DateTime

Using Transact SQL is there a way to specify a default datetime on a column (in the create table statement) such that the datetime is the minimum possible value for datetime values?
create table atable
(
atableID int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
Modified datetime DEFAULT XXXXX??????
)
Perhaps I should just leave it null.

As far as I am aware, no function exists to return this; you will have to hard-code it.
Casting from values such as 0 to get a minimum date will give you 1900-01-01, not the datetime minimum.
As suggested previously, this is best left set to NULL (use ISNULL when reading if you need to). If you are worried about keeping it correct, you could even set a trigger on the table to maintain the modified date on edits, as sketched below.
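For example, a minimal sketch of such a trigger against the atable definition from the question (the trigger name is illustrative, and GETDATE() assumes local server time is wanted):
CREATE TRIGGER trg_atable_SetModified ON atable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Stamp every updated row with the current time.
    -- Assumes RECURSIVE_TRIGGERS is OFF (the default), so this UPDATE
    -- does not re-fire the trigger.
    UPDATE a
    SET Modified = GETDATE()
    FROM atable AS a
    INNER JOIN inserted AS i ON i.atableID = a.atableID;
END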
If you have your heart set on getting the minimum possible date then:
create table atable
(
atableID int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
Modified datetime DEFAULT '1753-01-01'
)

I agree with the sentiment in "don't use magic values". But I would like to point out that there are times when it's legit to resort to such solutions.
There is a price to pay for making columns nullable. A query like "get all records that haven't been modified since the start of 2010" must also include those that have never been modified, so a nullable column forces [modified] < @cutoffDate OR [modified] IS NULL, and that OR branch tends to push the database engine into a scan instead of an index seek. That can be a problem.
In practice, one should go with NULL if this does not introduce a practical, real-world performance penalty. But it can be difficult to know unless you have some idea of realistic data volumes, both today and in the foreseeable future. You also need to know whether a large proportion of the records will have the special value; if so, there's no point in indexing it anyway.
In short, by default/rule of thumb one should go for NULL. But if there is a huge number of records, the data is frequently queried, and only a small proportion of the records have the NULL/special value, there can be a significant performance gain in locating records this way (provided of course one creates the index!), and IMHO this can at times justify the use of "magic" values, as the sketch below illustrates.
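A hedged sketch (index name and cutoff date are illustrative): with a magic minimum as the default, never-modified rows carry '1753-01-01' and fall inside the plain range predicate, so the OR branch disappears and an index seek becomes possible.
CREATE INDEX IX_atable_Modified ON atable (Modified);

-- Never-modified rows hold the magic minimum, so they satisfy the range
-- predicate and no OR [Modified] IS NULL branch is needed:
SELECT atableID
FROM atable
WHERE Modified < '2010-01-01';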

"Perhaps I should leave it null"
Don't use magic numbers - it's bad practice - if you don't have a value leave it null
Otherwise if you really want a default date - use one of the other techniques posted to set a default date

Unless you are building a DB to track historical dates more than a century ago, using
Modified datetime DEFAULT ((0))
is perfectly safe and sound, and it allows more elegant queries than '1753-01-01' and more efficient ones than NULL.
However, since the first Modified value is simply the time at which the record was inserted, you can use:
Modified datetime NOT NULL DEFAULT (GETUTCDATE())
which avoids the whole issue and makes your inserts easier and safer: you don't insert the value at all and SQL Server does the housework :-)
With that in place you can still have elegant and fast queries by using 0 as a practical minimum, since it is guaranteed to always be lower than any insert-generated GETUTCDATE() value; see the sketch below.
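A hedged sketch of both pieces together (the table name is illustrative):
CREATE TABLE audit_demo
(
    audit_demoID int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
    Modified datetime NOT NULL DEFAULT (GETUTCDATE())
);

INSERT INTO audit_demo DEFAULT VALUES;  -- SQL Server fills in Modified

-- 0 converts implicitly to 1900-01-01, a practical floor below any
-- insert-generated GETUTCDATE() value:
SELECT audit_demoID, Modified
FROM audit_demo
WHERE Modified > 0;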

Sometimes you inherit brittle code that already expects magic values in a lot of places. Everyone is correct: you should use NULL if possible. However, as a shortcut to make sure every reference to that value is the same, I like to put "constants" (for lack of a better name) in a SQL scalar function and then call that function whenever I need the value. That way, if I ever want to update them all to something else, I can do so easily. And if I want to change the default value going forward, there is only one place to update.
The following code creates the function, then a table that uses it for the default DateTime value. It then inserts into and selects from the table without specifying a value for Modified, and finally cleans up after itself. I hope this helps.
-- CREATE FUNCTION
CREATE FUNCTION dbo.DateTime_MinValue ( )
RETURNS DATETIME
AS
BEGIN
DECLARE @dateTime_min DATETIME ;
SET @dateTime_min = '1/1/1753 12:00:00 AM' ;
RETURN @dateTime_min ;
END ;
GO
-- CREATE TABLE USING FUNCTION FOR DEFAULT
CREATE TABLE TestTable
(
TestTableId INT IDENTITY(1, 1)
PRIMARY KEY CLUSTERED ,
Value VARCHAR(50) ,
Modified DATETIME DEFAULT dbo.DateTime_MinValue()
) ;
-- INSERT VALUE INTO TABLE
INSERT INTO TestTable
( Value )
VALUES ( 'Value' ) ;
-- SELECT FROM TABLE
SELECT TestTableId ,
Value ,
Modified
FROM TestTable ;
-- CLEANUP YOUR DB
DROP TABLE TestTable ;
DROP FUNCTION dbo.DateTime_MinValue ;

I think your only option here is a constant. With that said - don't use it - stick with nulls instead of bogus dates.
create table atable
(
atableID int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
Modified datetime DEFAULT '1/1/1753'
)

I think this would work...
create table atable
(
atableID int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
Modified datetime DEFAULT ((0))
)
Edit: this is wrong. The minimum SQL Server datetime value is 1/1/1753; my solution yields 1/1/1900 00:00:00. Other answers have the correct minimum date...

Related

Using DDL, how can the default value for a Numeric(8,0) field be set to today's date as YYYYMMDD?

I'm trying to convert this, which works:
create_timestamp for column
CREATETS TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
to something that works like this, but this code is not working:
date_created for column
DTCREATE NUMERIC(8,0) NOT NULL DEFAULT VARCHAR_FORMAT(CURRENT_TIMESTAMP, 'YYYYMMDD'),
Can anyone advise DDL to accomplish what I'm going for? Thank you.
When asking for help with Db2, always specify your Db2-server platform (z/OS, i-series, Linux/Unix/Windows) and Db2-server version, because the answer can depend on these facts.
The default-clause for a column does not support the syntax you expect, and that is the reason you get a syntax error.
It can be a mistake to store a date as a numeric, because it causes no end of hassle for programmers, reporting tools, and data exchange; it is usually a mistake based on false assumptions.
If you want to store a date (not a timestamp) then use the column datatype DATE which lets you use:
DTCREATE DATE NOT NULL DEFAULT CURRENT DATE
How you, or future programmers, choose to render the value of a date in the SQL output is a different matter.
Alternatively, you may use a BEFORE INSERT trigger to emulate a DEFAULT clause with such an otherwise unsupported expression.
CREATE TRIGGER MYTAB_BIR
BEFORE INSERT ON MYTAB
REFERENCING NEW AS N
FOR EACH ROW
WHEN (N.DATE_CREATED IS NULL)
SET N.DATE_CREATED = INT(VARCHAR_FORMAT(CURRENT_TIMESTAMP, 'YYYYMMDD'));

How to add a date column which is 7 days later than an existing column in a Postgres table? [duplicate]

Does PostgreSQL support computed / calculated columns, like MS SQL Server? I can't find anything in the docs, but as this feature is included in many other DBMSs I thought I might be missing something.
Eg: http://msdn.microsoft.com/en-us/library/ms191250.aspx
Postgres 12 or newer
STORED generated columns are introduced with Postgres 12 - as defined in the SQL standard and implemented by some RDBMS including DB2, MySQL, and Oracle. Or the similar "computed columns" of SQL Server.
Trivial example:
CREATE TABLE tbl (
int1 int
, int2 int
, product bigint GENERATED ALWAYS AS (int1 * int2) STORED
);
fiddle
VIRTUAL generated columns may come with one of the next iterations. (Not in Postgres 15, yet).
Related:
Attribute notation for function call gives error
Postgres 11 or older
Up to Postgres 11 "generated columns" are not supported.
You can emulate VIRTUAL generated columns with a function using attribute notation (tbl.col) that looks and works much like a virtual generated column. That's a bit of a syntax oddity which exists in Postgres for historic reasons and happens to fit the case. This related answer has code examples:
Store common query as column?
The expression (looking like a column) is not included in a SELECT * FROM tbl, though. You always have to list it explicitly.
Can also be supported with a matching expression index - provided the function is IMMUTABLE. Like:
CREATE FUNCTION col(tbl) ... AS ... -- your computed expression here
CREATE INDEX ON tbl(col(tbl));
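A hypothetical concrete instance of that pattern, assuming a table tbl(int1 int, int2 int) with no real column named total:
CREATE FUNCTION total(tbl) RETURNS bigint
LANGUAGE sql IMMUTABLE AS
$$ SELECT $1.int1::bigint * $1.int2 $$;

SELECT t.*, t.total FROM tbl t;    -- attribute notation invokes the function
CREATE INDEX ON tbl (total(tbl));  -- the matching expression index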
Alternatives
Alternatively, you can implement similar functionality with a VIEW, optionally coupled with expression indexes. Then SELECT * can include the generated column.
"Persisted" (STORED) computed columns can be implemented with triggers in a functionally equivalent way.
Materialized views are a related concept, implemented since Postgres 9.3.
In earlier versions one can manage MVs manually.
YES you can!! The solution should be easy, safe, and performant...
I'm new to postgresql, but it seems you can create computed columns by using an expression index, paired with a view (the view is optional, but makes life a bit easier).
Suppose my computation is md5(some_string_field), then I create the index as:
CREATE INDEX some_string_field_md5_index ON some_table(MD5(some_string_field));
Now, any queries that act on MD5(some_string_field) will use the index rather than computing it from scratch. For example:
SELECT MAX(some_field) FROM some_table GROUP BY MD5(some_string_field);
You can check this with explain.
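For instance (a hedged sketch; the exact plan depends on the planner's cost estimates):
EXPLAIN
SELECT MAX(some_field)
FROM some_table
GROUP BY MD5(some_string_field);
-- The plan should reference some_string_field_md5_index rather than
-- recomputing MD5 for every row.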
However at this point you are relying on users of the table knowing exactly how to construct the column. To make life easier, you can create a VIEW onto an augmented version of the original table, adding in the computed value as a new column:
CREATE VIEW some_table_augmented AS
SELECT *, MD5(some_string_field) as some_string_field_md5 from some_table;
Now any queries using some_table_augmented will be able to use some_string_field_md5 without worrying about how it works; they just get good performance. The view doesn't copy any data from the original table, so it is good memory-wise as well as performance-wise. Note, however, that you can't update/insert into a view, only into the source table; but if you really want, I believe you can redirect inserts and updates to the source table using rules (I could be wrong on that last point, as I've never tried it myself).
Edit: it seems that if the query involves competing indices, the planner engine may sometimes not use the expression index at all. The choice seems to be data-dependent.
One way to do this is with a trigger!
CREATE TABLE computed(
one SERIAL,
two INT NOT NULL
);
CREATE OR REPLACE FUNCTION computed_two_trg()
RETURNS trigger
LANGUAGE plpgsql
SECURITY DEFINER
AS $BODY$
BEGIN
NEW.two = NEW.one * 2;
RETURN NEW;
END
$BODY$;
CREATE TRIGGER computed_500
BEFORE INSERT OR UPDATE
ON computed
FOR EACH ROW
EXECUTE PROCEDURE computed_two_trg();
The trigger is fired before the row is updated or inserted. It changes the field that we want to compute of NEW record and then it returns that record.
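A quick usage check: the NOT NULL column still needs a placeholder value on insert, which the trigger then overwrites.
INSERT INTO computed (two) VALUES (0);
SELECT * FROM computed;  -- two = one * 2, as set by the trigger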
PostgreSQL 12 supports generated columns:
PostgreSQL 12 Beta 1 Released!
Generated Columns
PostgreSQL 12 allows the creation of generated columns that compute their values with an expression using the contents of other columns. This feature provides stored generated columns, which are computed on inserts and updates and are saved on disk. Virtual generated columns, which are computed only when a column is read as part of a query, are not implemented yet.
Generated Columns
A generated column is a special column that is always computed from other columns. Thus, it is for columns what a view is for tables.
CREATE TABLE people (
...,
height_cm numeric,
height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);
db<>fiddle demo
Well, I'm not sure if this is what you mean, but Postgres normally supports "dummy" ETL syntax.
I created an empty column in a table and then needed to fill it with calculated values depending on other values in the row.
UPDATE table01
SET column03 = column01*column02; /*e.g. for multiplication of 2 values*/
It is so basic I suspect it is not what you are looking for.
Obviously it is not dynamic; you run it once. But there is no obstacle to putting it into a trigger.
Example on creating an empty virtual column
,(SELECT *
From (values (''))
A("virtual_col"))
Example on creating two virtual columns with values
SELECT *
From (values (45,'Completed')
, (1,'In Progress')
, (1,'Waiting')
, (1,'Loading')
) A("Count","Status")
order by "Count" desc
I have code that works and uses the term calculated. I'm not on pure PostgreSQL, though; we run on PADB.
Here is how it's used:
create table some_table as
select category,
txn_type,
indiv_id,
accum_trip_flag,
max(first_true_origin) as true_origin,
max(first_true_dest ) as true_destination,
max(id) as id,
count(id) as tkts_cnt,
(case when calculated tkts_cnt=1 then 1 else 0 end) as one_way
from some_rando_table
group by 1,2,3,4 ;
A lightweight solution with Check constraint:
CREATE TABLE example (
discriminator INTEGER DEFAULT 0 NOT NULL CHECK (discriminator = 0)
);

Does Postgres support virtual columns? [duplicate]


make a column in sybase default to the current date/time of row insertion

I have a sybase 15 DB and for one of my tables, I want to make a column default to the current date/time of the row insert. Is this possible?
In a Sybase text, the following is said:
ALTER TABLE sales_order
MODIFY order_date DEFAULT CURRENT DATE
On my DB this doesn't do anything, as CURRENT DATE is not recognized.
Using getDate() is a valid solution; you must have had a syntax error. Try it like this:
create table test_tbl (
date_data DATETIME default getDate() NOT NULL
)
Try using getDate() instead
... DEFAULT GETDATE() is correct. The case is irrelevant; mixed case may suggest a Java method, but this is a straight T-SQL function. Please post the exact error message if you want further assistance.
Also, the ALTER TABLE method sets the default for future INSERTs only; if you want the existing data changed, you need to UPDATE (fine for small tables) or unload/reload the table (necessary for large ones).
Watch the NULL/NOT NULL setting: you do not want to change that without understanding it. Again, the existing-versus-future issue needs addressing. NOT NULL prevents NULL being explicitly passed as an INSERT value.
CURRENT_DATE is a SQL standard that isn't universally adopted.
As noted elsewhere, the getdate() T-SQL function should be used instead.
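For changing the default on an existing column, a hedged sketch: ASE uses REPLACE rather than MODIFY for this, though the exact syntax may vary by version, so check your docs.
ALTER TABLE sales_order REPLACE order_date DEFAULT getdate()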

Sequence Generators in T-SQL

We have an Oracle application that uses a standard pattern to populate surrogate keys. We have a series of extrinsic rows (that have specific values for the surrogate keys) and other rows that have intrinsic values.
We use the following Oracle trigger snippet to determine what to do with the Surrogate key on insert:
IF :NEW.SurrogateKey IS NULL THEN
SELECT SurrogateKey_SEQ.NEXTVAL INTO :NEW.SurrogateKey FROM DUAL;
END IF;
If the supplied surrogate key is null then get a value from the nominated sequence, else pass the supplied surrogate key through to the row.
I can't seem to find an easy way to do this in T-SQL. There are all sorts of approaches, but none of them uses the notion of a sequence generator the way Oracle and other SQL-92-compliant DBs do.
Anybody know of a really efficient way to do this in SQL Server T-SQL? By the way, we're using SQL Server 2008 if that's any help.
You may want to look at IDENTITY. This gives you a column for which the value will be determined when you insert the row.
This may mean that you have to insert the row and determine the value afterwards, using SCOPE_IDENTITY(), as sketched below.
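A minimal sketch of that pattern (table and column names are illustrative):
INSERT INTO atable (SomeColumn) VALUES ('example');
SELECT SCOPE_IDENTITY() AS NewId;  -- identity generated by the insert above, current scope only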
There is also an article on simulating Oracle Sequences in SQL Server here: http://www.sqlmag.com/Articles/ArticleID/46900/46900.html?Ad=1
Identity is one approach, although it will generate unique identifiers at a per table level.
Another approach is to use unique identifiers, in particular using NEWSEQUENTIALID(), which ensures that each generated id is bigger than the last. The problem with this approach is that you are no longer dealing with integers.
The closest way to emulate the Oracle method is to have a separate table with a counter field, and then write a stored procedure that reads it, increments it, and returns the value.
Here is a way to do it using a table to store your last sequence number. The stored proc is very simple; most of the stuff in there is because I'm lazy and don't like surprises should I forget something. So, here it is:
----- Create the sequence value table.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[SequenceTbl]
(
[CurrentValue] [bigint]
) ON [PRIMARY]
GO
-----------------Create the stored procedure
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE procedure [dbo].[sp_NextInSequence](@SkipCount BigInt = 1)
AS
BEGIN
BEGIN TRANSACTION
DECLARE @NextInSequence BigInt;
IF NOT EXISTS
(
SELECT
CurrentValue
FROM
SequenceTbl
)
INSERT INTO SequenceTbl (CurrentValue) VALUES (0);
SELECT TOP 1
@NextInSequence = ISNULL(CurrentValue, 0) + 1
FROM
SequenceTbl WITH (HoldLock);
UPDATE SequenceTbl WITH (UPDLOCK)
SET CurrentValue = @NextInSequence + (@SkipCount - 1);
COMMIT TRANSACTION
RETURN @NextInSequence
END;
GO
--------Use the stored procedure in SQL Manager to retrieve a test value.
declare @NextInSequence BigInt
exec @NextInSequence = sp_NextInSequence;
--exec @NextInSequence = sp_NextInSequence <skipcount>;
select NextInSequence = @NextInSequence;
-----Show the current table value.
select * from SequenceTbl;
The astute will notice that there is an (optional) parameter for the stored proc. This allows the caller to reserve a block of IDs when more than one record needs a unique id: using the SkipCount, the caller need make only a single call for however many IDs are needed.
The entire "IF EXISTS...INSERT INTO..." block can be removed if you remember to insert a record when the table is created. If you also remember to insert that record with a value (your seed value - a number which will never be used as an ID), you can also remove the ISNULL(...) portion of the select and just use CurrentValue + 1.
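A hedged sketch of that simplified variant, assuming the seed row is inserted once when the table is created:
INSERT INTO SequenceTbl (CurrentValue) VALUES (0);  -- seed value; never used as an ID
GO
CREATE PROCEDURE [dbo].[sp_NextInSequence_Simple] (@SkipCount BigInt = 1)
AS
BEGIN
    DECLARE @NextInSequence BigInt;
    BEGIN TRANSACTION;
    SELECT TOP 1 @NextInSequence = CurrentValue + 1
    FROM SequenceTbl WITH (HOLDLOCK);
    UPDATE SequenceTbl WITH (UPDLOCK)
    SET CurrentValue = @NextInSequence + (@SkipCount - 1);
    COMMIT TRANSACTION;
    RETURN @NextInSequence;
END;
GO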
Now, before anyone makes a comment, please note that I am a software engineer, not a DBA! So, any constructive criticism concerning the use of TOP 1, WITH (HOLDLOCK), and WITH (UPDLOCK) is welcome. I don't know how well this will scale, but it works OK for me so far...