T-SQL: Disable default FLOAT rounding when using Table with its View

Good day!
I have a problem with float rounding when I insert data into a table through its view.
The data NEEDS to be stored as varchar in the table.
Table:
CREATE TABLE [dbo].[TestTableFloat_E]
(
[id] [int] IDENTITY(1,1) NOT NULL,
[FloatField] [varchar](100) NOT NULL
)
View:
ALTER VIEW [dbo].[TestTableFloat]
AS
SELECT id
,Convert(float, FloatField) as FloatField
FROM dbo.TestTableFloat_E
The data I select from the table through its view has to be of type float (the FloatField column).
I can't insert data directly into the table; the task rules require all inserts to go through the view. So I created a trigger to handle the insert:
CREATE TRIGGER [dbo].[tr_TestTableFloatInsert]
ON [dbo].[TestTableFloat]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
insert into TestTableFloat_E (FloatField)
select FloatField
FROM INSERTED
END
Actions: I try to insert data into TestTableFloat_E (the table) through its view (TestTableFloat); the trigger fires and inserts the data into the table.
Problem: When I insert a float number, it gets rounded, which I don't want:
insert TestTableFloat (FloatField)
select 123.123456
I get 123.123 in the table TestTableFloat_E. I need it not to round; I have to get 123.123456.
What can it be?

You can have it store more digits, but it's going to switch to scientific notation:
CREATE TRIGGER [dbo].[tr_TestTableFloatInsert]
ON [dbo].[TestTableFloat]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
insert into TestTableFloat_E (FloatField)
select CONVERT(varchar(100),FloatField,2)
FROM INSERTED
END
Which ends up storing 1.231234560000000e+002 into the table.
From CAST and CONVERT:
When expression is float or real, style can be one of the values shown in the following table. Other values are processed as 0.
0 (default) - A maximum of 6 digits. Use in scientific notation, when appropriate.
1 - Always 8 digits. Always use in scientific notation.
2 - Always 16 digits. Always use in scientific notation.
Insert the usual caveats here about the futility of expecting a float and a decimal representation of the same value to always be exactly convertible, and about using inappropriate data types to store particular kinds of data.
It would appear the best data type for this data would be decimal with appropriate precision and scale, which is neither of the types you're working with. But you claim that those are the "required" types. For example, if the view uses a decimal instead of a float, the stored varchar(100) value is exactly as expected.
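For illustration, here is a minimal sketch of that decimal-based variant (the decimal(18, 6) precision and scale are assumptions; pick whatever fits your data):
ALTER VIEW [dbo].[TestTableFloat]
AS
SELECT id
,Convert(decimal(18, 6), FloatField) as FloatField
FROM dbo.TestTableFloat_E
and in the INSTEAD OF INSERT trigger:
insert into TestTableFloat_E (FloatField)
select CONVERT(varchar(100), FloatField)
FROM INSERTED
Inserting 123.123456 through this view then stores the text '123.123456', because INSERTED.FloatField is already decimal and the decimal-to-varchar conversion preserves its digits.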

when does a (stored) GENERATED COLUMN get regenerated?

Is it regenerated on any update to the row (which would be rather dumb, and I would then expect a performance warning on the documentation page), or is it smart enough to analyze the generation expression and only regenerate the computed column when the input column(s) have changed?
From the documentation it's rather clear
A stored generated column is computed when it is written (inserted or updated) and occupies storage as if it were a normal column. A virtual generated column occupies no storage and is computed when it is read. Thus, a virtual generated column is similar to a view and a stored generated column is similar to a materialized view (except that it is always updated automatically).
So it seems that the generated always column is generated always.
Below is a small test case to verify.
We define an immutable function (generation expressions require immutable functions; the label is a deliberate lie here, since pg_sleep is volatile) and use it in the formula, with pg_sleep inside, to see when the function is called:
create or replace function wait10s(x numeric)
returns int
as $$
SELECT pg_sleep(10);
select x::int as result; -- cast needed: the function is declared to return int
$$ language sql IMMUTABLE;
Table DDL
create table t
(col1 numeric,
col2 numeric,
gen_col numeric generated always as ( wait10s(col2) ) STORED
);
Insert
As expected, we wait 10 seconds:
insert into t (col1, col2) values (1,1);
Update of a column used in the formula
update t set col2 = 2;
Again, the expected wait.
Update of a column NOT used in the formula
update t set col1 = 2;
No wait, so it seems there is an optimizing step that calls the formula only when necessary.
This makes perfect sense, but of course you should take it with care as this behavior is not documented and may change...

Inserting data in SQL Server table not showing error on both condition

I used a procedure to insert data into a table with a fixed column size, but the data was trimmed and inserted successfully without any error.
I lost some content from that variable.
This code snippet does not show any error:
declare @temp varchar(5)
declare @tm table (a varchar(5))
set @temp = 'abcdefghijkl'
insert into @tm
values(@temp)
select * from @tm
But this code snippet is showing this error:
String or binary data would be truncated
declare @temp1 varchar(5)
declare @tm1 table (a varchar(5))
insert into @tm1
values('abcdefghijkl')
select * from @tm1
The fact that the second code snippet is raising an error is a good thing.
It prevents you from corrupting data by mistake.
In the first code snippet, however, SQL Server will silently trim the string due to implicit conversion rules.
Whenever you attempt to populate a variable with data of a different data type, SQL Server will attempt to implicitly convert it to the data type of the variable.
This is well documented in the Converting Character Data section of the char and varchar (Transact-SQL) page:
When character expressions are converted to a character data type of a different size, values that are too long for the new data type are truncated.
This does not happen when inserting into a table, provided ANSI_WARNINGS is set to ON (which is the default).
When ANSI_WARNINGS is set to ON, you get the
String or binary data would be truncated
error message.
When it's set to OFF, however, the implicit conversion will silently truncate the data:
set ansi_warnings off;
declare @temp1 varchar(5)
declare @tm1 table (a varchar(5))
insert into @tm1
values('abcdefghijkl')
select * from @tm1
Result:
a
abcde
Note: The ansi_warnings setting has no effect on the implicit conversion when assigning a variable - the value will always be truncated, regardless of the ansi_warnings state.
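If you want oversized input to fail loudly instead of being silently truncated into a variable, one option is an explicit length check. A minimal sketch (the 5-character limit mirrors the examples above):
declare @input varchar(100) = 'abcdefghijkl'
declare @temp varchar(5)
if len(@input) > 5
raiserror('Value would be truncated to 5 characters.', 16, 1)
else
set @temp = @input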

Show all numeric rows or vice-versa postgresql

I have a table named "temp_table" with a column named "temp_column" of type varchar. The problem is that "temp_column" must be of type integer. If I just alter the column to type integer, it will generate an error, since some rows have non-numeric data in them.
I want a query that shows all rows where "temp_column" has non-numeric values (or the other way around), and to update or SET those values accordingly. I'm having a hard time since ISNUMERIC is not available in PostgreSQL.
How can I do this?
This will show all rows where you have non-integer values in that column. It uses a regular expression to find all values that have anything else than just numbers in it:
select *
from temp_table
where temp_column ~ '[^0-9]';
This can also be used in an UPDATE statement:
update temp_table
set temp_column = null
where temp_column ~ '[^0-9]';
This will also filter out "numeric" values like 3.14 as those aren't integers.
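Once the offending rows are cleaned up (set to null as above, or corrected), the column type itself can be changed. A minimal sketch, assuming the remaining values are plain digit strings:
alter table temp_table
alter column temp_column type integer
using temp_column::integer;
Rows where temp_column is null simply stay null after the cast.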

How can I generate a unique string per record in a table in Postgres?

Say I have a table like posts, which has typical columns like id, body, created_at. I'd like to generate a unique string with the creation of each post, for use in something like a url shortener. So maybe a 10-character alphanumeric string. It needs to be unique within the table, just like a primary key.
Ideally there would be a way for Postgres to handle both of these concerns:
generate the string
ensure its uniqueness
And they must go hand-in-hand, because my goal is to not have to worry about any uniqueness-enforcing code in my application.
I don't claim the following is efficient, but it is how we have done this sort of thing in the past.
CREATE FUNCTION make_uid() RETURNS text AS $$
DECLARE
  new_uid text;
  done bool;
BEGIN
  done := false;
  WHILE NOT done LOOP
    new_uid := md5(''||now()::text||random()::text);
    done := NOT exists(SELECT 1 FROM my_table WHERE uid=new_uid);
  END LOOP;
  RETURN new_uid;
END;
$$ LANGUAGE PLPGSQL VOLATILE;
make_uid() can be used as the default for a column in my_table. Something like:
ALTER TABLE my_table ADD COLUMN uid text NOT NULL DEFAULT make_uid();
md5(''||now()::text||random()::text) can be adjusted to taste. You could consider encode(...,'base64') except some of the characters used in base-64 are not URL friendly.
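If you want something shorter and URL-friendly, one option (a sketch reusing the md5 generator above) is to re-encode the digest as base 64 and swap out the unfriendly characters:
-- 16 md5 bytes -> 24 base-64 chars; swap '+' and '/', strip '=' padding
select rtrim(translate(encode(decode(md5(''||now()::text||random()::text), 'hex'), 'base64'), '+/', '-_'), '=');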
All existing answers are WRONG because they are based on a SELECT while generating a unique code per table record. Let us assume that we need a unique code per record while inserting: imagine two concurrent INSERTs happening at the same time (which happens more often than you think); the same code is generated for both inserts, because at the moment of the SELECT that code did not exist in the table. One instance will INSERT and the other will fail.
First let us create table with code field and add unique index
CREATE TABLE my_table
(
code TEXT NOT NULL
);
CREATE UNIQUE INDEX ON my_table (lower(code));
Then we need a function or procedure (you can also use the code inside a trigger) in which we 1. generate a new code, 2. try to insert a new record with the new code, and 3. if the insert fails, try again from step 1:
CREATE OR REPLACE PROCEDURE my_table_insert()
AS $$
DECLARE
  new_code TEXT;
BEGIN
  LOOP
    new_code := LOWER(SUBSTRING(MD5(''||NOW()::TEXT||RANDOM()::TEXT) FOR 8));
    BEGIN
      INSERT INTO my_table (code) VALUES (new_code);
      EXIT; -- success, leave the loop
    EXCEPTION WHEN unique_violation THEN
      NULL; -- collision, generate a new code and retry
    END;
  END LOOP;
END;
$$ LANGUAGE PLPGSQL;
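You would then call it with CALL my_table_insert(); (procedures and CALL require PostgreSQL 11 or later).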
This is a guaranteed error-free solution, unlike the other solutions in this thread.
Use a Feistel network. This technique works efficiently to generate unique random-looking strings in constant time without any collision.
For a version with about 2 billion possible strings (2^31) of 6 letters, see this answer.
For a 63 bits version based on bigint (9223372036854775808 distinct possible values), see this other answer.
You may change the round function as explained in the first answer to introduce a secret element to have your own series of strings (not guessable).
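For reference, here is a minimal sketch of the idea, modeled on the well-known pseudo_encrypt example from the PostgreSQL wiki (the round-function constants are the wiki's; change them to get your own private series):
CREATE OR REPLACE FUNCTION pseudo_encrypt(value int) RETURNS int AS $$
DECLARE
  l1 int; l2 int;
  r1 int; r2 int;
  i int := 0;
BEGIN
  -- split the input into two 16-bit halves
  l1 := (value >> 16) & 65535;
  r1 := value & 65535;
  WHILE i < 3 LOOP -- three Feistel rounds
    l2 := r1;
    r2 := l1 # ((((1366 * r1 + 150889) % 714025) / 714025.0) * 32767)::int;
    l1 := l2;
    r1 := r2;
    i := i + 1;
  END LOOP;
  RETURN ((r1 << 16) + l1);
END;
$$ LANGUAGE plpgsql STRICT IMMUTABLE;
Feeding it successive values of a plain sequence yields non-repeating, random-looking integers, which can then be base-36 encoded into short strings.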
The easiest way is probably to use a sequence to guarantee uniqueness
(and append a fixed x-digit random number after the sequence value):
CREATE SEQUENCE test_seq;
CREATE TABLE test_table (
id bigint NOT NULL DEFAULT (nextval('test_seq')::text || (LPAD(floor(random()*100000000)::text, 8, '0')))::bigint,
txt TEXT
);
insert into test_table (txt) values ('1');
insert into test_table (txt) values ('2');
select id, txt from test_table;
However, this wastes a huge part of the id space. (Note: the max bigint is 9223372036854775807; if you append an 8-digit random number, the sequence part can only go up to about 92233720368, i.e. roughly 92 billion records. Though 8 digits are probably more than necessary. Also check the max number for your programming environment!)
Alternatively, you can use varchar for the id and convert the above number with to_hex(), or change it to base 36 like below (but for base 36, try not to expose it to customers, to avoid the odd funny string showing up!):
PostgreSQL: Is there a function that will convert a base-10 int into a base-36 string?
Check out a blog by Bruce. This gets you part way there. You will have to make sure it doesn't already exist. Maybe concat the primary key to it?
Generating Random Data Via Sql
"Ever need to generate random data? You can easily do it in client applications and server-side functions, but it is possible to generate random data in sql. The following query generates five lines of 40-character-length lowercase alphabetic strings:"
SELECT
    (
        SELECT string_agg(x, '')
        FROM (
            SELECT chr(ascii('a') + floor(random() * 26)::integer)
            FROM generate_series(1, 40 + b * 0)
        ) AS y(x)
    )
FROM generate_series(1, 5) AS a(b);
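The seemingly pointless b * 0 term correlates the inner subquery with the outer row, forcing it to be re-evaluated (and a fresh random string generated) for each of the five rows; without it, the scalar subquery would be computed once and repeated.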
Use the primary key in your data. If you really need an alphanumeric unique string, you can use base-36 encoding. In PostgreSQL you can use this function.
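A minimal sketch of what such a base36_encode function can look like (assuming non-negative bigint input):
CREATE OR REPLACE FUNCTION base36_encode(n bigint) RETURNS text AS $$
DECLARE
  alphabet text := '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ';
  result text := '';
BEGIN
  IF n = 0 THEN
    RETURN '0';
  END IF;
  WHILE n > 0 LOOP
    -- take the least significant base-36 digit and prepend it
    result := substr(alphabet, (n % 36)::int + 1, 1) || result;
    n := n / 36;
  END LOOP;
  RETURN result;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;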
Example:
select base36_encode(generate_series(1000000000,1000000010));
GJDGXS
GJDGXT
GJDGXU
GJDGXV
GJDGXW
GJDGXX
GJDGXY
GJDGXZ
GJDGY0
GJDGY1
GJDGY2

TSQL Default Minimum DateTime

Using Transact SQL is there a way to specify a default datetime on a column (in the create table statement) such that the datetime is the minimum possible value for datetime values?
create table atable
(
atableID int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
Modified datetime DEFAULT XXXXX??????
)
Perhaps I should just leave it null.
As far as I am aware, no function exists to return this; you will have to hard-code it.
Attempting to cast from values such as 0 to get a minimum date will default to 01-01-1900.
As suggested previously, it is best left set to NULL (use ISNULL when reading if you need to); or, if you are worried about setting it correctly, you could even set a trigger on the table to set the modified date on edits.
If you have your heart set on getting the minimum possible date then:
create table atable
(
atableID int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
Modified datetime DEFAULT '1753-01-01'
)
I agree with the sentiment in "don't use magic values". But I would like to point out that there are times when it's legit to resort to such solutions.
There is a price to pay for making columns nullable. A query like "get all records that haven't been modified since the start of 2010" has to include the records that have never been modified, so with a nullable column we're forced to write [modified] < @cutoffDate OR [modified] IS NULL, and that OR can push the database engine into a table scan instead of a simple index seek. This can be a problem.
In practice, one should go with NULL if it does not introduce a practical, real-world performance penalty. That can be difficult to know, unless you have some idea of realistic data volumes, both today and in the foreseeable future. You also need to know whether a large proportion of the records will have the special value - if so, there's no point in indexing it anyway.
In short, by default/rule of thumb one should go for NULL. But if there is a huge number of records, the data is frequently queried, and only a small proportion of the records have the NULL/special value, there could be a significant performance gain in locating records based on this information (provided, of course, one creates the index!), and IMHO this can at times justify the use of "magic" values.
"Perhaps I should leave it null"
Don't use magic numbers - it's bad practice - if you don't have a value leave it null
Otherwise if you really want a default date - use one of the other techniques posted to set a default date
Unless you are doing a DB to track historical times more than a century ago, using
Modified datetime DEFAULT ((0))
is perfectly safe and sound and allows more elegant queries than '1753-01-01' and more efficient queries than NULL.
However, since the first Modified value is the time at which the record was inserted, you can use:
Modified datetime NOT NULL DEFAULT (GETUTCDATE())
which avoids the whole issue and makes your inserts easier and safer - as in you don't insert it at all and SQL does the housework :-)
With that in place you can still have elegant and fast queries by using 0 as a practical minimum, since it's guaranteed to always be lower than any insert-generated GETUTCDATE().
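A minimal sketch of that arrangement (the 2010 cutoff is just an example):
create table atable
(
atableID int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
Modified datetime NOT NULL DEFAULT (GETUTCDATE())
)
-- 0 converts to 1900-01-01, below any insert-generated GETUTCDATE(),
-- so it works as a practical minimum in range queries:
select atableID
from atable
where Modified between CONVERT(datetime, 0) and '2010-01-01'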
Sometimes you inherit brittle code that is already expecting magic values in a lot of places. Everyone is correct: you should use NULL if possible. However, as a shortcut to make sure every reference to that value is the same, I like to put "constants" (for lack of a better name) in a scalar function and call that function whenever I need the value. That way, if I ever want to update them all to be something else, I can do so easily; or if I want to change the default value moving forward, I only have one place to update it.
The following code creates the function and a table that uses it for the default DateTime value, then inserts into and selects from the table without specifying a value for Modified, and finally cleans up after itself. I hope this helps.
-- CREATE FUNCTION
CREATE FUNCTION dbo.DateTime_MinValue ( )
RETURNS DATETIME
AS
BEGIN
DECLARE @dateTime_min DATETIME ;
SET @dateTime_min = '1/1/1753 12:00:00 AM' ;
RETURN @dateTime_min ;
END ;
GO
-- CREATE TABLE USING FUNCTION FOR DEFAULT
CREATE TABLE TestTable
(
TestTableId INT IDENTITY(1, 1)
PRIMARY KEY CLUSTERED ,
Value VARCHAR(50) ,
Modified DATETIME DEFAULT dbo.DateTime_MinValue()
) ;
-- INSERT VALUE INTO TABLE
INSERT INTO TestTable
( Value )
VALUES ( 'Value' ) ;
-- SELECT FROM TABLE
SELECT TestTableId ,
Value ,
Modified
FROM TestTable ;
-- CLEANUP YOUR DB
DROP TABLE TestTable ;
DROP FUNCTION dbo.DateTime_MinValue ;
I think your only option here is a constant. With that said - don't use it - stick with nulls instead of bogus dates.
create table atable
(
atableID int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
Modified datetime DEFAULT '1/1/1753'
)
I think this would work...
create table atable
(
atableID int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
Modified datetime DEFAULT ((0))
)
Edit: This is wrong...The minimum SQL DateTime Value is 1/1/1753. My solution provides a datetime = 1/1/1900 00:00:00. Other answers have the correct minimum date...