How to identify truncated columns in SQL Server 2016 - tsql

I have been experimenting with the code below, and it seems it does not work.
DBCC TRACEON (460);
DECLARE @aa TABLE (name varchar(5));
INSERT INTO @aa
SELECT '1234567890';
Error:
String or binary data would be truncated
Expected error:
String or binary data would be truncated in table @aa, column name. Truncated value: '1234567890'
According to https://www.procuresql.com/blog/2018/09/26/string-or-binary-data-get-truncated/ SQL Server 2019 will be able to identify the columns that have been truncated, and the same behavior can be enabled in SQL Server 2016 using trace flag 460.
In terms of roles, I have "public", "processadmin", and "sysadmin".
Looking at sys.messages, I think the patch for this feature is present, based on message_id = 2628:
+------------+-------------------------------------------------------------------------------------------------------+
| message_id | text                                                                                                  |
+------------+-------------------------------------------------------------------------------------------------------+
| 2628       | String or binary data would be truncated in table '%.*ls', column '%.*ls'. Truncated value: '%.*ls'.  |
| 8152       | String or binary data would be truncated.                                                             |
+------------+-------------------------------------------------------------------------------------------------------+
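For reference, the rows above can be pulled with a query along these lines (a minimal sketch, filtering to the English messages):
SELECT message_id, [text]
FROM sys.messages
WHERE message_id IN (2628, 8152)
  AND language_id = 1033; -- 1033 = US English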
Details:
Microsoft SQL Server 2016 Standard (64-bit)
Version : 13.0.5149.0
Is Clustered : False
Is HADR Enabled : False
Is XTP Supported : True

The new error message hasn't yet been back-ported to SQL Server 2016. From this post (emphasis mine):
This new message is also backported ... (and in an upcoming SQL Server 2016 SP2 CU) ...
This CU has not been delivered yet. The most recent, CU5 (13.0.5264.1), was released in January and did not include it.
And just a small correction, you need to opt in to this behavior (via the trace flag) even in the SQL Server 2019 CTPs. The reason is that a different error number is produced, and this could break existing applications and unit tests that behave based on the error number raised. This will be documented as a breaking change when SQL Server 2019 is released, but I'm sure it will still bite some people when they upgrade.
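To check whether an instance already carries the back-port, comparing its build number against the CU release notes is the quickest test. A minimal sketch (note that ProductUpdateLevel can return NULL on some servicing trains):
SELECT SERVERPROPERTY('ProductVersion')     AS build_number,      -- e.g. 13.0.5264.1 = 2016 SP2 CU5
       SERVERPROPERTY('ProductLevel')       AS service_pack,
       SERVERPROPERTY('ProductUpdateLevel') AS cumulative_update;
-- Once on a patched build, the opt-in can also be made instance-wide:
-- DBCC TRACEON (460, -1);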

How does a SQL Server 2017-only function run on a lower compatibility level database?

My LOCAL, DEV & PROD servers are at v2017, but the databases are at compatibility level 2012 (110).
A co-worker claims he's having issues getting the latest code because STRING_AGG is not supported. I'm going to assume he is running something lower than SSMS 2017.
I assumed I'd have to raise the compatibility level before deployment, but that does not seem to be the case.
Why does this work? Is this an unsupported "feature"?
My DEV server version is 2017 and the compatibility_level is 110:
DROP TABLE IF EXISTS test
CREATE TABLE test (id int IDENTITY(1,1), dt date DEFAULT(GETDATE()), theData VARCHAR(20) NULL)
INSERT INTO [dbo].[test]([theData])
VALUES(NULL),('some data'),('old data'),(NULL)
SELECT STRING_AGG(t.[theData],', ') [testing string_agg method]
FROM [dbo].[test] AS [t]
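For what it's worth, the level in play can be confirmed with a query like the one below (a minimal sketch; [MyDatabase] is a placeholder). Most new built-in functions are gated by the server version rather than the database compatibility level (STRING_SPLIT, which requires level 130, is a notable exception), which is consistent with STRING_AGG working here.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();
-- Raising the level (140 = SQL Server 2017) would look like:
-- ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 140;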

How to easily determine version of .fdb-file (Firebird database)

When looking at a .fdb database of proprietary software (probably using Firebird Embedded), how can I determine which version of Firebird I need to set up?
The only way I can currently imagine is having a look with a hex viewer at the 'ODS-version' field, which is part of a page header (and most likely also used as the format for the file header), and then somehow, by digging through repository history, finding out which Firebird release supports which ODS version. The ODS version, at least nowadays, is encoded as stated below.
Related docs: https://firebirdsql.org/file/documentation/reference_manuals/reference_material/Firebird-Internals.pdf
Related code:
https://github.com/FirebirdSQL/firebird/blob/3dd6a2f5366e0ae3d0e6793ef3da02f0fd05823a/src/jrd/ods.h
and
inline USHORT DECODE_ODS_MAJOR(USHORT ods_version)
{
return ((ods_version & 0x7FF0) >> 4);
}
inline USHORT DECODE_ODS_MINOR(USHORT ods_version)
{
return (ods_version & 0x000F);
}
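To illustrate the arithmetic, here is a worked example in T-SQL. The input 0x80B2 is an assumed raw header word: the 0x8000 bit is the Firebird flag defined in ods.h, and the remaining bits encode ODS 11.2.
DECLARE @ods_version int = 0x80B2;                -- assumed raw value read from the header
SELECT (@ods_version & 0x7FF0) / 16 AS ods_major, -- 0x00B0 >> 4 = 11
       @ods_version & 0x000F        AS ods_minor; -- 2, i.e. ODS 11.2 (Firebird 2.5)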
Is there really no easier way to determine the required Firebird version, e.g. with some CLI tool?
If you have a Firebird installation at hand, you can use gstat to check the ODS for a database. For example:
gstat -h <path-to-your-database>
If the ODS version of the database is supported by the version of gstat, you'll get something like:
Database "D:\DATA\DB\FB4\FB4TESTDATABASE.FDB"
Gstat execution time Sat Mar 17 18:08:09 2018
Database header page information:
Flags 0
Generation 308
System Change Number 0
Page size 16384
ODS version 13.0
Oldest transaction 393
Oldest active 394
Oldest snapshot 394
Next transaction 395
Sequence number 0
Next attachment ID 150
Implementation HW=AMD/Intel/x64 little-endian OS=Windows CC=MSVC
Shadow count 0
Page buffers 0
Next header page 0
Database dialect 3
Creation date Jan 6, 2017 14:05:48
Attributes force write
Variable header data:
*END*
Here ODS version 13.0 means it is a Firebird 4 database.
If the gstat version does not support the ODS version of the database, you will get an error like the following (e.g., in this case using a Firebird 4 gstat on a Firebird 2.5/ODS 11.2 database):
Wrong ODS version, expected 13, encountered 11
This has its downsides though: it doesn't report the ODS minor version, and, for example, using a Firebird 2.0 (ODS 11.0) or 2.1 (ODS 11.1) gstat to access a Firebird 2.5 (ODS 11.2) database will lead to the unhelpful error message:
Wrong ODS version, expected 11, encountered 11
The quickest route is to use a Firebird 2.5 gstat, as this will allow you to pinpoint the exact ODS version between 10 (Firebird 1) and 11.2 (Firebird 2.5), while the error message will allow you to pinpoint if you need a newer version (e.g., ODS 12 is Firebird 3, ODS 13 is Firebird 4).
However, you will also need to look at the Implementation line of the gstat output. Firebird database files have platform-specific storage (although this has been reduced since Firebird 2.0). For example, in Firebird 1.5 and earlier (ODS 10), a database from a 32-bit Firebird cannot be accessed by a 64-bit Firebird. A Firebird database from a little-endian platform (the most common) cannot be read on a big-endian platform (and vice versa).
Within these limitations, a Firebird 2.5 installation can read databases with ODS 10 through 11.2. Firebird 3 can only read ODS 12, and Firebird 4 only ODS 13.
If there are platform mismatches (e.g., old 32/64-bit or little/big-endian differences) or unsupported ODS versions, you will need a transportable backup (gbak) to convert and/or upgrade.
For an overview of ODS versions and accompanying Firebird (or InterBase) version, see All Firebird and InterBase On-Disk-Structure (ODS) versions.
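As an aside: if some Firebird server at hand can actually open the database, the monitoring tables (available since Firebird 2.1) expose the ODS directly, which avoids parsing gstat output:
SELECT MON$ODS_MAJOR, MON$ODS_MINOR
FROM MON$DATABASE;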

Attempted to delete invisible tuple + Postgres

I have a system where we perform a large number of insert and update queries (some upserts as well).
I occasionally see an error in my logs that states:
PG::ObjectNotInPrerequisiteState: ERROR: attempted to delete invisible tuple
INSERT INTO call_records(plain_crn,efd,acd,slt,slr,ror,raw_processing_data,parsed_json,timestamp,active,created_at,updated_at) VALUES (9873,2016030233,'R',0,0,'PKC01','\x02000086000181f9000101007 ... ')
What I fail to understand is that even though no delete query is performed (the error appears on an insert statement), the error is still thrown.
I have been googling around this issue, but found no conclusive evidence of why this happens.
Version of Postgres:
database=# select version();
version
--------------------------------------------------------------------------------------------------------------
PostgreSQL 9.5.2 on x86_64-apple-darwin14.5.0, compiled by Apple LLVM version 7.0.0 (clang-700.1.76), 64-bit
(1 row)
Any clue?

Need query to get function execution stats in SQL Server 2008 R2

Like the query below, is there any query which can return the execution stats of a function in SQL Server using one of the sys.dm_exec_* DMVs?
SELECT TOP 1
    d.object_id,
    d.database_id,
    OBJECT_NAME(d.object_id, d.database_id) AS [proc name],
    d.cached_time,
    d.last_execution_time,
    d.total_elapsed_time,
    (d.total_elapsed_time / d.execution_count) / 1000 AS [avg_elapsed_time],
    d.last_elapsed_time / 1000 AS [last_elapsed_time],
    d.execution_count,
    *
FROM sys.dm_exec_procedure_stats AS d
WHERE OBJECT_NAME(d.object_id, d.database_id) = 'ssp_StoredProcedureName'
ORDER BY d.last_execution_time DESC;
You can't get exact function execution stats in versions below SQL Server 2016. But from SQL Server 2016 onwards, we have sys.dm_exec_function_stats:
Applies to: SQL Server (SQL Server 2016 Community Technology Preview 3.2 (CTP 3.2) through current version), Azure SQL Database, Azure SQL Data Warehouse Public Preview.
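Since sys.dm_exec_function_stats mirrors the columns of sys.dm_exec_procedure_stats, the query above carries over almost unchanged; a minimal sketch ('fn_ScalarFunctionName' is a placeholder):
SELECT TOP 1
    OBJECT_NAME(d.object_id, d.database_id) AS [function name],
    d.cached_time,
    d.last_execution_time,
    (d.total_elapsed_time / d.execution_count) / 1000 AS [avg_elapsed_time],
    d.last_elapsed_time / 1000 AS [last_elapsed_time],
    d.execution_count
FROM sys.dm_exec_function_stats AS d
WHERE OBJECT_NAME(d.object_id, d.database_id) = 'fn_ScalarFunctionName'
ORDER BY d.last_execution_time DESC;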

PostgreSQL timezone error with DbSchema

I want to set my PostgreSQL server's time zone to 'Europe/Berlin', but I am getting an error:
SET time zone 'Europe/Berlin';
ERROR: invalid value for parameter "TimeZone": "Europe/Berlin"
But the real issue is with DbSchema: when I want to connect to my DB I get the error
FATAL: invalid value for parameter "TimeZone": "Europe/Berlin"
DbSchema works when I connect to my local DB, but not with my NAS (Synology) DB.
Any idea?
Found a way to solve the problem: you have to start Java with the proper time zone.
In my case, my server is GMT, so I had to add the argument -Duser.timezone=GMT.
For DbSchema, edit the file DbSchema.bat or DbSchema.sh:
Find the declaration of SWING_JVM_ARGS.
Add the argument -Duser.timezone=GMT at the end of the line.
Start DbSchema with this script DbSchema.bat or DbSchema.sh.
I think your solution is only a workaround for the actual problem concerning the zoneinfo on the Synology DiskStation.
I got exactly the same error when trying to connect to the Postgres database on my DiskStation. The query select * from pg_timezone_names; gives you all the time zone names PostgreSQL is aware of.
There are 87 entries, all starting with "Timezone":
name | abbrev | utc_offset | is_dst
------------------------+--------+------------+--------
Timezone/Kuwait | AST | 03:00:00 | f
Timezone/Nairobi | EAT | 03:00:00 | f
...
The configured Postgres timezonesets contain many more entries, so there must be another source from which Postgres builds this view at startup. I discovered that there is a compile option --with-system-tzdata=DIRECTORY that tells Postgres to obtain its values from the system zoneinfo.
I looked in /usr/share/zoneinfo and found one subdirectory called Timezone with exactly 87 entries. And there obviously was no subdirectory called Europe (with a time zone file called Berlin). I did not quickly find a way to update the tzdata on the DiskStation, either automatically or manually by unpacking tzdata2016a.tar.gz and building it (make not found...). As a quick fix I copied the Berlin time zone file from another Linux system, and the problem was solved, so that I can now connect via Java/JDBC using the correct time zone "Europe/Berlin"!
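After copying the file into place, a quick way to verify that the server now knows the zone, and then to apply it, would be along these lines:
SELECT name, abbrev, utc_offset, is_dst
FROM pg_timezone_names
WHERE name = 'Europe/Berlin';
SET TIME ZONE 'Europe/Berlin';
SHOW timezone;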