Inserting data into a SQL Server table does not show an error in both cases - sql-server-2008-r2

I used a procedure to insert data into a table with a fixed column size, but the data was trimmed and inserted successfully without any error, so I lost some content from that variable.
This code snippet does not show any error:
declare @temp varchar(5)
declare @tm table (a varchar(5))
set @temp = 'abcdefghijkl'
insert into @tm
values (@temp)
select * from @tm
But this code snippet shows this error:
String or binary data would be truncated
declare @temp1 varchar(5)
declare @tm1 table (a varchar(5))
insert into @tm1
values ('abcdefghijkl')
select * from @tm1

The fact that the second code snippet is raising an error is a good thing.
It prevents you from corrupting data by mistake.
In the first code snippet, however, SQL Server will silently trim the string due to implicit conversion rules.
Whenever you attempt to populate a variable with data that has a different data type, SQL Server will attempt to implicitly convert the data to the data type of the variable.
This is well documented in the Converting Character Data section of the char and varchar (Transact-SQL) page:
When character expressions are converted to a character data type of a different size, values that are too long for the new data type are truncated.
This does not happen when inserting into a table, provided ANSI_WARNINGS is set to ON (which is the default state).
When ANSI_WARNINGS is set to ON, you get the
String or binary data would be truncated
error message.
When it's set to OFF, however, the implicit conversion will silently truncate the data:
set ansi_warnings off;
declare @temp1 varchar(5)
declare @tm1 table (a varchar(5))
insert into @tm1
values ('abcdefghijkl')
select * from @tm1
Result:
a
abcde
Note: The ANSI_WARNINGS state does not affect the implicit conversion when setting a variable's value - the value will always be truncated, regardless of the ANSI_WARNINGS state.
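A minimal sketch to verify that last point: even with ANSI_WARNINGS explicitly ON, the assignment truncates silently.
set ansi_warnings on;
declare @v varchar(5)
set @v = 'abcdefghijkl'  -- no error raised here
select @v                -- returns 'abcde'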


when does a (stored) GENERATED COLUMN get regenerated?

Is it on any update to the row (which would be rather dumb, and I would then expect a performance warning on the documentation page), or is it smart enough to analyze the generation expression and only regenerate the computed column when the input column(s) have changed?
From the documentation it's rather clear:
A stored generated column is computed when it is written (inserted or updated) and occupies storage as if it were a normal column. A virtual generated column occupies no storage and is computed when it is read. Thus, a virtual generated column is similar to a view and a stored generated column is similar to a materialized view (except that it is always updated automatically).
So it seems that the generated always column is generated always.
Below is a small test case to verify this.
We define an immutable function to use in the generation expression, with pg_sleep inside so we can see whether the function was called:
create or replace function wait10s(x numeric)
returns int
as $$
SELECT pg_sleep(10);       -- makes each invocation observable
SELECT x::int as result;   -- cast so the result matches the declared return type
$$ language sql IMMUTABLE;
Table DDL
create table t
(col1 numeric,
col2 numeric,
gen_col numeric generated always as ( wait10s(col2) ) STORED
);
Insert
As expected, we wait 10 seconds:
insert into t (col1, col2) values (1, 1);
Update of a column used in the formula
update t set col2 = 2;
Again, the expected wait.
Update of a column NOT used in the formula
update t set col1 = 2;
No wait, so it seems there is an optimization step that calls the formula only when necessary.
This makes perfect sense, but of course you should treat it with care, as this behavior is not documented and may change...

How do I insert a decimal in postgres sql

I am trying to insert a decimal into a numeric column, but I keep getting an error.
Below is my statement:
INSERT INTO blse VALUES (2082.7, 'Total Nonfarm' ,'Alabama','01/31/2020');
It basically says I need to cast this statement. I do not know what I am doing wrong; I am a beginner taking this class.
It is highly recommended that you specify the columns in an INSERT statement:
INSERT INTO tab (col1, col2, col3)
VALUES (val1, val2, val3);
That way, you can be certain what value is inserted where.
Since you didn't do that, the first value in the VALUES clause gets inserted into the first table column, which is of type date. That causes the error you observe.
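For illustration, a corrected statement might look like the sketch below; the column names here are hypothetical, so substitute the ones from your actual table definition:
INSERT INTO blse (employment, industry, state, report_date)
VALUES (2082.7, 'Total Nonfarm', 'Alabama', DATE '2020-01-31');
Writing the date as DATE '2020-01-31' (ISO format) also removes any ambiguity about whether '01/31/2020' is month-first or day-first.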

Find what row holds a value which cannot be cast to integer

I have been doing some heavy rearranging of data tables, which has gone well so far.
In one table with more than 50000 rows I have a text column whose content should be numbers only.
Now I would like to convert it to an integer column.
So:
ALTER TABLE mytable ALTER COLUMN mycolumn TYPE integer;
That produces error 42804: datatype_mismatch
By reading the docs I found a solution:
ALTER TABLE mytable ALTER COLUMN mycolumn TYPE integer USING (TRIM(mycolumn)::integer);
But I am aware the data may not be correct, since this "masks" errors, and there is a possibility the column was edited (by hand). Then again, maybe only a trailing space was added, or some other minor edit was made.
I have a backup of the data.
How would I find exactly which cell of the given column contains an error, i.e. which value cannot be cast to int, with a handy query suitable for use from pgAdmin?
Please share that query, if it is not too complicated.
Expanding on @dystroy's answer, this function should cough up the precise value of any offending rows:
CREATE OR REPLACE FUNCTION convert_to_integer(v_input text)
RETURNS INTEGER AS $$
BEGIN
RETURN v_input::INTEGER;
EXCEPTION WHEN OTHERS THEN
-- Abort and report exactly which value failed to cast
RAISE EXCEPTION 'Invalid integer value: "%"', v_input;
END;
$$ LANGUAGE plpgsql;
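Running the function over the whole column will then abort at the first bad value and name it in the error message:
SELECT convert_to_integer(mycolumn) FROM mytable;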
Original answer:
If the following works:
ALTER TABLE mytable
ALTER COLUMN mycolumn TYPE integer USING (TRIM(mycolumn)::integer);
Then you should probably be able to run the following to locate the trash:
select mycolumn from mytable
where mycolumn::text <> (TRIM(mycolumn)::integer)::text;
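If you would rather list every offending row at once instead of stopping at the first one, a pattern match is a handy alternative. A minimal sketch, assuming a valid value is nothing but optionally signed digits once trimmed:
SELECT ctid, mycolumn
FROM mytable
WHERE TRIM(mycolumn) !~ '^[+-]?[0-9]+$';
The ctid pseudo-column is included only to help locate the physical row from pgAdmin.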

Upon insert, how can I programmatically convert a null value to the column's default value?

I have a table for which I have provided default values for some of its columns. I want to create a function, with arguments corresponding to the columns, that will insert a record into the table after changing any null value to the column's default value. I don't want to programmatically construct the query based on which arguments are null. Essentially, I would like something like:
INSERT into Table (c1, c2, c3, c4)
Values (coalesce(somevar, DEFAULT(c1)), ...)
Is this possible? I've seen that MySQL can do this. Does Postgres offer anything similar?
I am using version 9.1.
UPDATE: This question provides some interesting solutions, but unfortunately the results are always text. I would like to get the default value in its true data type so that I can use it for inserting. I have tried to find a way to cast the default value from text to its data type (which is also provided as text), but I can't find one:
SELECT column_name, column_default, data_type
FROM information_schema.columns
WHERE (table_schema, table_name) = ('public', 'mytable')
AND column_name = 'mycolumn'
ORDER BY ordinal_position;
The above returns column_default and data_type as text, so how can I cast column_default to the type named by data_type? If I could do that, my problem would be solved.
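For illustration of what that cast would involve: one conceivable route is dynamic SQL, where the cast expression is assembled as a string and executed. This is only a sketch, and it assumes (hypothetically) that mycolumn is numeric:
DO $$
DECLARE
cast_sql text;
result numeric;  -- hypothetical: assumes mycolumn's type is numeric
BEGIN
SELECT format('SELECT (%s)::%s', column_default, data_type)
INTO cast_sql
FROM information_schema.columns
WHERE (table_schema, table_name) = ('public', 'mytable')
AND column_name = 'mycolumn';
EXECUTE cast_sql INTO result;  -- evaluates the default in its true type
RAISE NOTICE 'typed default: %', result;
END
$$;
The catch is that a plpgsql variable still needs a declared type, so this does not remove the need to know the column's type in advance.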
If the table definition accepts an INSERT with the default values for all columns, the two-step method below may work:
CREATE FUNCTION insert_func(c1 typename, c2 typename, ...)
RETURNS VOID AS $$
DECLARE
r record;
BEGIN
INSERT into the_table default values returning *,ctid INTO r;
UPDATE the_table set
col1=coalesce(c1,r.col1),
col2=coalesce(c2,r.col2),
...
WHERE the_table.ctid=r.ctid;
END;
$$ language plpgsql;
The trick is to get all the default values into the r record variable while inserting and use them subsequently in an UPDATE to replace any non-default value. ctid is a pseudo-column that designates the internal unique ID of the row that has just been inserted.
Caveat: this method won't work if some columns have a default null value AND a non-null check (or any check that implies that the default value is not accepted), since the first INSERT would fail.
I've worked around my problem with a solution similar to Daniel's, by creating a temp table with the LIKE and INCLUDING DEFAULTS clauses in order to match my row type. Then I use
INSERT INTO temp_table (c1, c2, ...) VALUES(x1, DEFAULT, ..)
using the DEFAULT keyword for whatever columns I am interested in. Then I insert into the real table by selecting from the temporary one and using
VALUES( x1, coalesce(x2, temp_table.c2), ...).
I don't like it, but it works OK: I can choose on which columns I would like to do this "null-replace-with-default" check, and it could work for many rows in one pass if I overload my function to accept a record array. A rough sketch of the workaround is shown below.
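A minimal sketch of that workaround, with hypothetical names (the_table with columns c1 and c2, and x1, x2 standing for the function's arguments):
-- The temp table inherits the_table's column defaults
CREATE TEMP TABLE tmp (LIKE the_table INCLUDING DEFAULTS);
-- DEFAULT resolves to the default copied from the_table
INSERT INTO tmp (c1, c2) VALUES (x1, DEFAULT);
-- Copy into the real table, preferring the caller's value when it is not null
INSERT INTO the_table (c1, c2)
SELECT c1, coalesce(x2, c2)
FROM tmp;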

T-SQL: Disable default FLOAT rounding when using Table with its View

Good day!
Here is a problem with float rounding when I want to insert data into a table using its view.
The data NEEDS to be stored as varchar in the table.
Table:
CREATE TABLE [dbo].[TestTableFloat_E]
(
[id] [int] IDENTITY(1,1) NOT NULL,
[FloatField] [varchar](100) NOT NULL
)
View:
ALTER VIEW [dbo].[TestTableFloat]
AS
SELECT id
,Convert(float, FloatField) as FloatField
FROM dbo.TestTableFloat_E
The data that I select from the table through its view has to be of type float (field FloatField).
I can't insert data directly into the table (this is a task rule); I have to do it through the view. So I created a trigger to insert the data:
CREATE TRIGGER [dbo].[tr_TestTableFloatInsert]
ON [dbo].[TestTableFloat]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
insert into TestTableFloat_E (FloatField)
select FloatField
FROM INSERTED
END
Actions: I try to insert data into TestTableFloat_E (the table) through its view (TestTableFloat); the trigger fires and inserts the data into the table.
Problem: When I insert a float number, I get rounding that I don't want:
insert TestTableFloat (FloatField)
select 123.123456
I get 123.123 in the table. I need it not to round; it has to be 123.123456.
What could the cause be?
You can have it store more digits, but it's going to switch to scientific notation:
CREATE TRIGGER [dbo].[tr_TestTableFloatInsert]
ON [dbo].[TestTableFloat]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
insert into TestTableFloat_E (FloatField)
select CONVERT(varchar(100),FloatField,2)
FROM INSERTED
END
Which ends up storing 1.231234560000000e+002 into the table.
From CAST and CONVERT:
When expression is float or real, style can be one of the values shown in the following table. Other values are processed as 0.
0 (default) - A maximum of 6 digits. Use in scientific notation, when appropriate.
1 - Always 8 digits. Always use in scientific notation.
2 - Always 16 digits. Always use in scientific notation.
Insert the usual caveats here about the futility of expecting a float and a decimal representation of the same value to always convert exactly, and about using inappropriate data types to store particular kinds of data.
It would appear the best data type for this data would be decimal with appropriate precision and scale, which is neither of the types you're working with. But you claim that those are the "required" types. For example, if the view uses a decimal instead of a float, the stored varchar(100) value is exactly as expected, as sketched below.
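A minimal sketch of that variant; the precision and scale in decimal(18, 6) are illustrative and should be sized to the actual data:
ALTER VIEW [dbo].[TestTableFloat]
AS
SELECT id
,Convert(decimal(18, 6), FloatField) as FloatField
FROM dbo.TestTableFloat_E
With the trigger left as in the question, inserting 123.123456 through the view now stores '123.123456' in the varchar column: no rounding and no scientific notation, because decimal-to-varchar conversion preserves every digit.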