Can compatibility level be changed for a query in SQL Server 2019? - sql-server-2019

In the past I have changed SQL Server compatibility level using an Alter Database statement. Is there a way to do it for an individual query rather than setting it for the complete database?

Starting with later versions of SQL Server (I believe SQL Server 2017 and later), you can add this as a query hint:
SELECT * FROM my_table OPTION (USE HINT ('QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_100'))
for example, to run in 2008 mode. Or you could change the 100 to 150 to run in 2019 compatibility mode, or anywhere in between.

SELECT *
FROM FOO
OPTION(USE HINT('QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_150'))
QUERY_OPTIMIZER_COMPATIBILITY_LEVEL hint docs
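If you're not sure what level a database currently runs at (and therefore which hint value reproduces today's behavior), a quick check against sys.databases works; a minimal sketch:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME(); -- just the current database
Note the hint only affects optimizer behavior for that single statement; the database's actual compatibility level is left untouched.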

Related

Logging slow queries on Google Cloud SQL PostgreSQL instances

The company I work for uses Google Cloud SQL to manage their SQL databases in production.
We're having performance issues and I thought it'd be a good idea (among other things) to see/monitor all queries above a specific threshold (e.g. 250ms).
By looking at the PostgreSQL documentation I think log_min_duration_statement seems like the flag I need.
log_min_duration_statement (integer)
Causes the duration of each completed statement to be logged if the statement ran for at least the specified number of milliseconds. Setting this to zero prints all statement durations.
But judging from the Cloud SQL documentation, I see that it is only possible to set a narrow set of database flags (as in, per DB instance), and as you can see from here, log_min_duration_statement is not among those supported flags.
So here comes the question: how do I log/monitor my slow PostgreSQL queries with Google Cloud SQL? If that is not possible, what kind of tools/methodologies would you suggest I use to achieve a similar result?
April 3, 2019 UPDATE
It is now possible to log slow queries on Google Cloud SQL PostgreSQL instances, see https://cloud.google.com/sql/docs/release-notes#april_3_2019:
database_flags = [
  {
    name  = "log_min_duration_statement"
    value = "1000"
  },
]
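(The flag can also be set without Terraform, e.g. through the Cloud Console or gcloud's --database-flags option.) Once it is applied you can verify it from any SQL session; a quick sketch:
SHOW log_min_duration_statement;
-- or equivalently:
SELECT setting FROM pg_settings WHERE name = 'log_min_duration_statement';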
Once you enable log_min_duration_statement, you can view the logs using Stackdriver Logging. Select Cloud SQL Database -> cloudsql.googleapis.com/postgres.log and you will see log lines like this:
[103402]: [9-1] db=cloudsqladmin,user=cloudsqladmin LOG: duration: 11.211 ms statement: [YOUR SQL HERE]
References:
Full list of supported flags (CTRL+F for log_min_duration_statement): https://cloud.google.com/sql/docs/postgres/flags#postgres-l
Issue tracker: https://issuetracker.google.com/issues/74578509#comment54
PostgreSQL docs: https://www.postgresql.org/docs/9.6/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT
Monitoring slow PostgreSQL queries for Cloud SQL instances is currently not available. As you mention, the log_min_duration_statement flag is currently not supported by Cloud SQL.
Right now, work is being done on adding this feature to Cloud SQL, and you can keep track of the progress through this link. You can click the star icon in the top left corner to get email notifications whenever significant progress is made.
There is a way to log slow queries through the pg_stat_statements extension which is supported by Cloud SQL.
Since Cloud SQL doesn't grant superuser rights to any user, you need a workaround.
First, you need to enable the extension with
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
then you can check slow queries with a query like
SELECT pd.datname,
       us.usename,
       pss.userid,
       pss.query AS SQLQuery,
       pss.rows AS TotalRowCount,
       (pss.total_time / 1000) AS TotalSecond,
       ((pss.total_time / 1000) / pss.calls) AS TotalAverageSecond
FROM pg_stat_statements AS pss
INNER JOIN pg_database AS pd
        ON pss.dbid = pd.oid
INNER JOIN pg_user AS us
        ON pss.userid = us.usesysid
ORDER BY TotalAverageSecond DESC
LIMIT 10;
As the postgres user you can look at all the slow queries, but since that user is not a superuser you will see <insufficient privilege> for all other users' queries.
To get around this limitation you can install the extension on the other databases too (normally only the postgres user has rights to install extensions) and check the query texts as the owner of each database.
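For example, a sketch using psql (the database and role names here are made up):
\c appdb app_owner
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
Repeat that for each database whose query texts you need to read. (For reference: on a self-managed PostgreSQL server the module must also be preloaded via shared_preload_libraries = 'pg_stat_statements' in postgresql.conf plus a restart; Cloud SQL handles that part for its supported extensions.)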
Not ideal by any measure, but what we do is run something like this on a cron once a minute and log out the result:
SELECT EXTRACT(EPOCH FROM now() - query_start) AS seconds, query
FROM pg_stat_activity
WHERE state = 'active' AND now() - query_start > interval '1 seconds' AND query NOT LIKE '%pg_stat_activity%'
ORDER BY seconds DESC LIMIT 20
You'd need to fiddle with the query to get millisecond granularity, and even then it'll only catch queries that overlap with your cron frequency, but probably better than nothing?
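For what it's worth, here is a lightly adapted sketch of the same query with a millisecond threshold (250 ms, to match the number in the question):
SELECT EXTRACT(EPOCH FROM now() - query_start) * 1000 AS ms, query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '250 milliseconds'
  AND query NOT LIKE '%pg_stat_activity%'
ORDER BY ms DESC LIMIT 20;
The sampling caveat still applies: anything that starts and finishes between two cron runs is invisible.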

Query referencing aliased column in ORDER BY in SQL Server 2012

I have a SQL Select query that's embedded in a piece of C# code which I don't want to change. My problem is that the query executes fine on SQL Server 2008 but not on 2012.
The offending line of code is:
Select code as SiteCode from TimeSheetContracts S order by S.SiteCode
Executed in a database on SQL Server 2008 it works fine. The same database upgraded to SQL Server 2012 errors with the following...
Msg 207, Level 16, State 1, Line 2
Invalid column name 'SiteCode'.
If I edit the query to be
Select code as SiteCode from TimeSheetContracts S order by SiteCode
it works fine. Can anyone explain this?
There is no column in TimeSheetContracts called SiteCode, so a reference to S.SiteCode is not valid. Aliasing in ORDER BY became stricter after SQL Server 2000, when the syntax was a little more forgiving. The only way S.SiteCode would have worked on your SQL Server 2008 instance is if your database was in COMPATIBILITY_LEVEL = 80 (go ahead and try it on a different database at 90 or greater). Once you move to SQL Server 2012, 80 is no longer an option. On a 2005, 2008 or 2008 R2 instance, try this:
CREATE DATABASE floob;
GO
USE floob;
GO
CREATE TABLE dbo.SalesOrderHeader(SalesOrderID INT);
GO
SELECT SalesOrderID AS ID FROM dbo.SalesOrderHeader AS h ORDER BY h.ID; -- fails
GO
ALTER DATABASE floob SET COMPATIBILITY_LEVEL = 80;
GO
SELECT SalesOrderID AS ID FROM dbo.SalesOrderHeader AS h ORDER BY h.ID; -- works
GO
USE master;
GO
DROP DATABASE floob;
If you want to use the column alias, you'll need to (and always should have) use just the alias. If you want to use the table alias prefix, you'll need to use S.code.
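In other words, either of these should work on 2012 (a quick sketch against the table from the question):
Select code as SiteCode from TimeSheetContracts S order by SiteCode -- sort by the alias
Select code as SiteCode from TimeSheetContracts S order by S.code -- sort by the real column via the table alias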

SQL Server OpenQuery() behaving differently than a direct query from TOAD

The following query works efficiently when run directly against Oracle 11 using TOAD (with native Oracle drivers)
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date('12/31/9999','mm/dd/yyyy')
and rgn_nm = 'Boston'
) ...
;
The exact same query "never" returns if passed from SQL Server 2008 to the same Oracle database via openquery(). SQL Server has a link to the Oracle database using an Oracle Provider OLE DB driver.
select * from openquery( servername, '
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
The query doesn't return in a reasonable amount of time, and the user kills the query. I don't know if it would eventually return with the correct result.
This result where the direct TOAD query works efficiently and the openquery() version "never" returns is reproducible.
A small modification to the openquery() gives the correct efficient result: Change eff_endt to trunc(eff_endt).
That is well and good, but it doesn't seem like the change should be necessary.
openquery() is supposed to be pass through, so how can there be a difference between the TOAD and openquery() behavior?
The reason we care is because we frequently develop complex queries with TOAD directly accessing Oracle. Once we have the query functioning and optimized, we convert it to an openquery() string for use in a SQL Server application. It is extremely aggravating to have a query suddenly fail with openquery() when we know it worked as a direct query. Then we have to search for a work-around through trial and error.
I would like to see the Oracle trace files for the two scenarios, but the Oracle server is within another organization, and we are not getting cooperation from the Oracle DBAs.
Does anyone know of any driver, or TOAD, or ??? issues that could account for the discrepancy? Is there any way to eliminate the problem such that both methods always give the same result?
I know you asked this a while ago but I just came across your question.
I agree, they should be the same. Obviously there is a difference. We need to find out where the difference is.
I am thinking out loud as I type...
What happens if you specify just a few columns instead of select * from openquery?
How many rows are supposed to be returned?
What if, in the Oracle select, you limit the returned rows?
How quickly does the openquery time out?
Are TOAD and SS on the same machine? Are you RDPing into the SS box and running TOAD from there?
Are they using the same drivers? Including bitness (32/64) and version?
Are they using the same account on Oracle?
It is interesting that using trunc() makes a difference. I assume [eff_endt] is one of the returned fields?
I am wondering if SS is getting all the rows back but choking on the date conversions. The date type in Oracle may need to be converted to a SQL Server date type before SS shows it to you.
What if you insert the rows from the openquery into a table where the date field is just an (n)varchar? I am thinking SS might just dump the date it gets back from Oracle into that text field without trying to convert it.
something like:
insert into mytable(f1,f2,f3,datetimeX)
select f1,f2,f3,datetimeX from openquery( servername, '
select f1,f2,f3,datetimeX from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
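The staging table for that test could be as simple as this (the column types are guesses; the point is only that datetimeX lands as plain text):
CREATE TABLE dbo.mytable (
    f1 nvarchar(100),
    f2 nvarchar(100),
    f3 nvarchar(100),
    datetimeX nvarchar(50) -- take whatever Oracle sends, no date conversion
);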
What if TOAD or SS is modifying the query statement before sending it to Oracle? You could fire up Wireshark and see what TOAD and SS are actually sending.
I would be very curious if you get this resolved. I link SS to Oracle often and have not run into this issue.
Here are basic things you can check for to see what the database is doing after it receives the query. First, check that the execution plans are the same in TOAD as when the query runs using openquery. You could plan the query yourself in TOAD using:
explain plan set statement_id = 'openquery_test' for <your query here>;
select *
from table(dbms_xplan.display(statement_id => 'openquery_test'));
then have someone initiate the query using openquery() and have someone with permissions to view v$ tables to run:
select sql_id from v$session where username = '<user running the query>';
(If there's more than one connection with the same user, you'll have to find an additional attribute to isolate the row representing the session running the query.)
select *
from table(dbms_xplan.display_cursor('<value from query above>'));
If those look the same then I'd move on to checking database waits and see what it's stuck on.
select se.username
, sw.event
, sw.p1text
, sw.p2text
, sw.p3text
, sw.wait_time_micro/1000000 as seconds_in_wait
, sw.state
, sw.time_since_last_wait_micro/1000000 as seconds_since_last_wait
from v$session se
inner join
v$session_wait sw
on se.sid = sw.sid
where se.username = '<user running the query>'
;
(again, if there's more than one session with the same username, you'll need another attribute to whittle it down to the one you're interested in.)
If the plans are different, then you need to find out why, or if they're the same, look into what it's waiting on (e.g. SQL*Net message to client ?) and why.
I noticed a difference using OLEDB through MS Access (2013) connecting to Oracle 10g & 11g tables, in that it did not always recognize indexes or primary keys on the Oracle tables properly. The same query through an MS Access 2000 database (using odbc) worked fine / had no problem with indexes & keys. The only way I found to fix the OLEDB version was to include all of the key fields in the SELECT -- which was not a satisfying answer, but it's all I could find. This might be an option to try through SSMS / OpenQuery(...) as well.
Besides that... you can try some alternatives to OPENQUERY, such as:
4-part names: SELECT ... FROM Server..Schema.Table
Execute AT: EXEC ('select...') at linked server
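For instance, a sketch of the Execute AT form using the linked server name from the question (note this requires the linked server's "RPC Out" option to be enabled):
EXEC ('select ocd from rptofc where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'') and rgn_nm = ?', 'Boston') AT servername;
Since the command string is executed on the Oracle side as-is, this can behave more like your direct TOAD session than OPENQUERY does.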
But as for why the OLEDB provider works differently than the native Oracle Provider -- the providers are not identical, and the native provider would be more likely to pave-over Oracle quirks than the more generic OLEDB provider would.

Selecting 2 tables in the same editor

In SQL I can write 2 separate SELECT statements and execute them at the same time, such as:
SELECT * FROM Table_Name
SELECT * FROM Another_Table
Is it possible to do this in PostgreSQL?
It sounds to me like you're really asking a PGAdmin question. In SQL Server's query analyzer / Enterprise Manager if you execute two select statements, two result sets will be displayed in the output area.
PGAdmin doesn't do this; you need two separate query windows to see both result sets. I know of no other free GUI admin tool for PostgreSQL, but maybe someone else will.
EDIT:
I've recently started using Squirrel-SQL and DBeaver. Either of these may be more what you're looking for. I prefer DBeaver myself.
Brian

SQL: OPENQUERY Not returning all rows

I have the following, which queries a linked server I have to talk to.
SELECT
*
FROM
OPENQUERY(DWH_LINK, 'SELECT * FROM TABLEA ')
It will typically return most of the data, but some rows are missing.
The linked server is connecting to an Oracle client.
Is this a problem anyone has encountered with OPENQUERY?
I had exactly the same problem.
The root cause is that you've set up your linked server using ODBC instead of OLE DB.
Here's how I fixed it:
Delete the linked server from SQL Server
Right click on the "Linked Servers" folder and select "New Linked Server..."
Linked Server: enter anything; this will be the name of your new linked server
Provider: Select "Oracle Provider for OLE DB"
Product Name: enter "Oracle" (without the double quotes)
Data Source: enter the alias from your TNSNAMES.ORA file. In my case, it was "ABC.WORLD" (without the double quotes)
Provider String: leave it blank
Location: leave it blank
Catalog: leave it blank
Now go to the "Security" tab, and click the last radio button that says "Be made using this security context:" and enter the username & password for your connection
That should be it!
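If you'd rather script those steps than click through SSMS, here is a sketch with placeholder names (the linked server name, TNS alias, and credentials are all made up):
EXEC sp_addlinkedserver
    @server = N'ORA_LINK', -- any name you like
    @srvproduct = N'Oracle',
    @provider = N'OraOLEDB.Oracle', -- Oracle Provider for OLE DB
    @datasrc = N'ABC.WORLD'; -- alias from your TNSNAMES.ORA
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'ORA_LINK',
    @useself = 'FALSE',
    @rmtuser = N'your_user',
    @rmtpassword = N'your_password';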
This seems to be related to the underlying provider capabilities, and others have also run into this and similar size/row limitations. One possible work-around would be to implement an iterative/looping query with some filtering built in to pull back a certain number of rows at a time. With Oracle, I think this might be done using rownum (not very familiar with Oracle).
So something like
--Not tested sql, just winging it syntax-wise
--Note: a bare "rownum > N" predicate in Oracle filters out every row,
--so the paging needs a subquery that aliases rownum first
SELECT * FROM OPENQUERY(DWH_LINK, 'SELECT * FROM (SELECT t.*, rownum AS rn FROM TABLEA t) WHERE rn BETWEEN 1 AND 500')
SELECT * FROM OPENQUERY(DWH_LINK, 'SELECT * FROM (SELECT t.*, rownum AS rn FROM TABLEA t) WHERE rn BETWEEN 501 AND 1000')
SELECT * FROM OPENQUERY(DWH_LINK, 'SELECT * FROM (SELECT t.*, rownum AS rn FROM TABLEA t) WHERE rn ...')
BOL:
link
This is subject to the capabilities of the OLE DB provider. Although the query may return multiple result sets, OPENQUERY returns only the first one.
I had this same problem using the Oracle 10 instant client and ODBC. I used this client as I am connecting to an Oracle 10gR2 database. I opened a ticket with Microsoft support and they suggested using the Oracle 11 instant client. Surprise! Uninstalling the 10g instant client, installing the 11g instant client and rebooting resolved the issue.
Ken
I had exactly the same problem with SQL Server 2014 getting data from SQL Server 2000 through OPENQUERY. Because of an ODBC compatibility problem, I had to keep the generic OLE DB for ODBC driver. Moreover, the problem occurred only with a non-admin SQL account.
So finally, the solution I found was to add SET ROWCOUNT 0:
SELECT * FROM OPENQUERY(DWH_LINK, 'SET ROWCOUNT 0 SELECT * FROM TABLEA ')
It seems the rowcount might have been changed somewhere in the SQL procedure (or for this user session), so setting it to 0 forces it to return all rows.