When writing code in SSMS 17.6, I notice that IntelliSense does not work inside aggregate functions. I always start with my table names so that IntelliSense will suggest the column names for me, which works fine. But when I have, for example,
select max(
from table
IntelliSense does not work inside the aggregate.
Is there a fix for this or is this just the way the system works?
I got it: it works if you close the parenthesis of the aggregate first and then type inside it.
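For example (the table and column names here are hypothetical), type the closing parenthesis first:
select max()
from Sales.Orders
Then put the cursor back between the parentheses; IntelliSense will now suggest the columns of Sales.Orders.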
I was creating a function, following an example from a database class, which included declaring a variable (base_salary) and using a SELECT INTO to calculate its value later.
However, I did not realize I had used a different order for the syntax (SELECT ... FROM ... INTO base_salary), and the function could be used later without any visible issues (values worked as expected).
Is there any difference in using the "SELECT ... FROM ... INTO" syntax order? I tried looking it up in the PostgreSQL documentation but found nothing about it, and a Google search did not provide any meaningful information either. The only related thing I found was in the MySQL documentation, which mentioned supporting the different order only in an older version.
There is no difference. From the docs of pl/pgsql:
The INTO clause can appear almost anywhere in the SQL command. Customarily it is written either just before or just after the list of select_expressions in a SELECT command, or at the end of the command for other command types. It is recommended that you follow this convention in case the PL/pgSQL parser becomes stricter in future versions.
Notice that in plain (non-procedural) SQL there is also a SELECT INTO command, which works like CREATE TABLE AS; in that version the INTO must come right after the SELECT clause.
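A minimal sketch of both forms, using a hypothetical employees table:
-- PL/pgSQL: INTO assigns to a variable; both placements are accepted
DO $$
DECLARE
    base_salary numeric;
BEGIN
    SELECT salary INTO base_salary FROM employees WHERE id = 1;  -- conventional order
    SELECT salary FROM employees WHERE id = 1 INTO base_salary;  -- also accepted
END $$;
-- Plain SQL: INTO creates a new table (like CREATE TABLE AS)
-- and must come right after the select list
SELECT salary INTO employees_backup FROM employees;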
I always use SELECT ... INTO ... FROM; I believe that is the standard supported notation:
https://www.w3schools.com/sql/sql_select_into.asp
I would recommend using this order, in case future versions stop supporting the other one, as you mentioned.
I have a DB2 data source and an Oracle 12c target.
The Oracle database has a DB link to the DB2 defined, which works in general.
Now I have a huge table in the DB2 which has a timestamp column (let's call it ROW_CHANGED) for row changes. I want to retrieve rows which have changed after a particular time.
Running
SELECT * FROM lib.tbl WHERE ROW_CHANGED >'2016-08-01 10:00:00'
on the DB2 returns exactly one row after about 90 seconds, which is fine.
Now I try the same query from Oracle via the DB link:
SELECT * FROM lib.tbl@dblink_name WHERE ROW_CHANGED > TO_TIMESTAMP('2016-08-01 10:00:00')
This runs for hours and ends up in a timeout.
I read some Oracle docs and found distributed query optimization tips, but most of them refer to joining a local table to a remote one, which is not my case.
In my desperation, I have tried the DRIVING_SITE hint, without effect.
Now I wonder where the WHERE part of the query gets evaluated. Since I have to use Oracle syntax rather than DB2 syntax for the query, is it possible that Oracle first copies the full table and applies the WHERE clause afterwards? I did some research but did not find anything which would help me here.
ROW_CHANGED is a hidden column in the DB2, if that matters.
Thanks in advance for any hint.
Update
Thanks to all for the help. I'll share what did the trick for me.
First of all, I used TO_TIMESTAMP since the DB2 column is also a timestamp (not a date), and I expected this to circumvent implicit conversions.
Without the explicit conversion I ran into ORA-28534: Heterogeneous Services preprocessing error, and I have no hope of getting the DB configuration changed within a reasonable time.
The explain plan, by the way, did not reveal much. It showed a FULL hint and no conversion on the predicates. Oddly, it showed the ROW_CHANGED column as DATE; I wonder why.
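For anyone checking the same thing, the distributed plan can be inspected like this; the Remote SQL Information section of the output shows what Oracle actually sends to the remote side:
EXPLAIN PLAN FOR
SELECT * FROM lib.tbl@dblink_name
WHERE ROW_CHANGED > TO_TIMESTAMP('2016-08-01 10:00:00');

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);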
I have tried Justin's suggestion to use a bind variable, but I got ORA-28534 again. The next thing I did was to wrap it in a PL/SQL block (it will run in a stored procedure later anyway).
declare
  -- implicit string-to-timestamp conversion; relies on the session default
  -- format being DD.MM.YY HH24:MI:SS
  v_tmstmp TIMESTAMP := '01.08.16 10:00:00';
begin
  INSERT INTO ORAUSER.TMP_TBL (SRC_PK, ROW_CHANGED)
  SELECT SRC_PK, ROW_CHANGED
  FROM lib.tbl@dblink_name
  WHERE ROW_CHANGED > v_tmstmp;
end;
This executed in the same time as on DB2 itself. The date format is DD.MM.YY here since that is the session default, unfortunately.
When changing the variable assignment to
v_tmstmp TIMESTAMP := TO_TIMESTAMP('01.08.16 10:00:00','DD.MM.YY HH24:MI:SS');
I got the same problem as before.
Meanwhile the DB2 operators have created an index on the ROW_CHANGED column, which I had requested earlier that day. This seems to have solved the problem in general; even my original query finishes in no time now.
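Such an index would look something like this on the DB2 side (the index name is hypothetical):
CREATE INDEX lib.tbl_row_changed_ix ON lib.tbl (ROW_CHANGED);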
If you are actually using an Oracle-specific conversion function like to_timestamp, that forces the predicate to be evaluated on the Oracle side. Oracle isn't going to know how to convert a built-in function like to_timestamp into an exactly equivalent function call in DB2.
If you used a bind variable, that would be more likely to get evaluated on the DB2 side. But that may be complicated by the data type mapping between the two databases: there may not be a perfect mapping between one engine's date and another engine's timestamp data type. If this were a numeric column, a bind variable would be almost certain to get pushed. In this case, it probably involves playing around a bit to figure out exactly what data type to use for your variable so that it works for your framework, Oracle, and DB2.
If using a bind variable doesn't work, you can force the predicate to be evaluated on the remote server using the dbms_hs_passthrough package. That lets you send a query verbatim to the remote server which allows you to do things like use functions defined in your DB2 database. That's a bit of overkill in this situation, hopefully, but it's nice to have the hammer as your backup if the simpler solution doesn't work quickly enough.
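A minimal sketch of the pass-through approach, reusing the link and table names from the question (the SQL string is sent verbatim, so it can use DB2 syntax; the literal format is the one that already worked on DB2):
DECLARE
  c     PLS_INTEGER;
  nrows PLS_INTEGER;
  v_pk  NUMBER;
BEGIN
  -- open a cursor on the DB2 side and parse DB2-syntax SQL there
  c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@dblink_name;
  DBMS_HS_PASSTHROUGH.PARSE@dblink_name(c,
    'SELECT SRC_PK FROM lib.tbl WHERE ROW_CHANGED > ''2016-08-01 10:00:00''');
  LOOP
    nrows := DBMS_HS_PASSTHROUGH.FETCH_ROW@dblink_name(c);
    EXIT WHEN nrows = 0;
    DBMS_HS_PASSTHROUGH.GET_VALUE@dblink_name(c, 1, v_pk);
    -- process v_pk here
  END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@dblink_name(c);
END;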
How can I use autocomplete in proc SQL queries?
For example, when I use proc print I can use autocomplete for libnames, tables and fields.
How can I do the same in proc SQL?
Thanks
In Enterprise Guide 5.1, you can autocomplete table names only in PROC SQL, as far as I can tell. So:
proc sql;
select * from sashelp.class
where age > 13;
quit;
When you start typing sashelp it will not autocomplete for you, but when you start typing .class it will (assuming your SAS session is connected, and you wait a bit for it to search the metadata; in my experience I almost always have to type the ., backspace over it, and type it again for it to work properly). You also cannot autocomplete age.
In EG 7.1 (but not 6.1), you can also autocomplete the libname (sashelp). The WHERE clause, however, still does not autocomplete. PROC SQL syntax is a bit more complex for the autocomplete parser to handle, so it is probably left out for that reason.
As Raphael notes in comments, ctrl+L is the hotkey for Autocomplete Library; this works anywhere in an editor window (including in open code), so this would be a workaround for EG 5.1/6.1 not having this by default in PROC SQL.
However, the key combination for Data Set Variable is ctrl+shift+V, and this does not work in PROC SQL (though it does work in a data step, even in code where autocomplete doesn't normally trigger). My guess is that in SQL it is harder to parse which dataset a variable comes from, and SAS hasn't worked out this functionality yet.
I need to generate a query like this:
SELECT DISTINCT ON (article.code) article.code, article.title
First I tried to build it via the ORM distinct method, passing it a list of fields, but it won't work. Second, I tried to build it via sqlalchemy.sql.select, and it also generates an SQL query like this:
SELECT DISTINCT article.code, article.title
I need SELECT DISTINCT ON (article.code)...
I looked at the source code and found, in sqlalchemy.dialects.postgresql.base.PGCompiler.get_select_precolumns, code for generating DISTINCT ON constructs.
But this method is not called. Instead, another method is called - sqlalchemy.sql.compiler.get_select_precolumns - which has code only for generating DISTINCT, not DISTINCT ON. Maybe I should configure my session so that the proper method is called?
This bug report suggests that DISTINCT ON works correctly in SQLAlchemy 0.7+. I think an upgrade is in order, unless you've uncovered a bug in 0.7.
Workarounds:
Volunteer to help get the 0.7 package ready for Ubuntu.
Download and install from source.
Rewrite queries to avoid DISTINCT ON. I'm not sure whether that's possible in the most general case, but one such rewrite is sketched below.
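For the last option, a sketch of one such rewrite (it assumes you want one title per code, picked by some deterministic order, and needs PostgreSQL 8.4+ for window functions):
SELECT code, title
FROM (
    SELECT article.code, article.title,
           ROW_NUMBER() OVER (PARTITION BY article.code
                              ORDER BY article.title) AS rn
    FROM article
) ranked
WHERE rn = 1;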
Can anybody help me understand the expected format of data for creating MVA (multi-value) attributes in Sphinx?
I have a MySQL function which returns a string of comma-separated integers, collated with GROUP_CONCAT, as a blob. I have two further MVA attributes which collate the results of a JOIN statement, with GROUP_CONCAT, as a blob (as generated by ThinkingSphinx). These are all included in the sql_query in my sphinx.conf.
I've tried running the SQL on a small result set in the console, and it works: for all the MVA columns, the results are a blob containing data such as:
2432,35345,342347,8975,453645
and so on. The two MVA attributes generated with the JOIN/GROUP_CONCAT combination index correctly. However, the MVA attribute generated with the MySQL function causes the indexing to fail silently (seemingly little or no data is indexed), despite the query working absolutely fine in the console.
So the data format seems to be identical, but Sphinx is rejecting one of the columns. Does anybody know of any gotchas with defining MVA attributes which might help me debug this?
I've never used thinking-sphinx (being a PHP shop here), but I don't think you should be group_concat'ing your results. From a working example in one of my sphinx.conf files:
sql_attr_multi = uint categories from query; SELECT entry_id, cat_id FROM exp_category_posts
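To contrast the two source types (table and column names here are hypothetical): a comma-separated column produced by the main query needs a 'from field' declaration, while a separate statement uses 'from query' as above:
# MVA parsed out of a comma-separated column in the main query
sql_query      = SELECT d.id, d.title, GROUP_CONCAT(t.tag_id) AS tag_ids \
    FROM docs d LEFT JOIN doc_tags t ON t.doc_id = d.id GROUP BY d.id
sql_attr_multi = uint tag_ids from field
# MVA fetched by its own statement
sql_attr_multi = uint tag_ids from query; SELECT doc_id, tag_id FROM doc_tags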
I solved this problem eventually. It was happening because of something which seemed unrelated: a 'sql_attr_str2ordinal' attribute which seemed to affect the SQL query/indexing in ways I don't fully understand.
See: http://www.sphx.org/forum/view.html?id=2867
Fortunately, in my case I was able to remove it entirely, and indexing now seems to work.