Salesforce ADO.NET Source in SSDT 2013: SOQL Statement Using Date Literals Not Returning Results

I am having an issue with a date literal not returning any results when I run an SOQL query. The query is as follows:
Select * From dbo.Case WHERE CreatedDate = YESTERDAY
With the query, I would like to obtain case data from the previous day. I know there is data available that was created the previous day. When I preview the query, though, the results are an empty set with no error message.
A different wrinkle that makes this not quite a strict SOQL issue is that I am trying to use this query as the SQL command on an ADO.NET connection, using the CData ADO.NET driver to connect to a Salesforce.com instance. My goal is to be able to build SSDT packages that will allow me to stage the data from Salesforce into our SQL Server for processing there.
I have similar issues using the LAST_N_DAYS date literal as well.
I believe I should be writing SOQL in the SQL command text field for the ADO.NET source, but I am not 100% sure about that. I know for certain that I cannot use T-SQL, because it does not recognize GETDATE().
Any guidance on how to pull the records from Case for the previous day or where the query I am using might be wrong would be greatly appreciated.

Found an answer. The following SQL command will pull the data from the previous day:
Select * From dbo.Case Where CreatedDate = 'YESTERDAY'
Wrapping the date literal in single quotes makes YESTERDAY evaluate as expected.
Likewise, the following SQL statement will get the last 30 days of data.
Select * From dbo.Case Where CreatedDate = 'LAST_N_DAYS:30'
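For anyone wiring this up outside of the SSDT designer, here is a minimal sketch of issuing the same command through the CData ADO.NET provider from C#. The namespace and class names (System.Data.CData.Salesforce, SalesforceConnection, SalesforceCommand) and the connection string keys are assumptions based on the provider's standard ADO.NET pattern, so check them against the driver's documentation:
using System;
using System.Data.CData.Salesforce; // assumed provider namespace

class PreviewYesterdayCases
{
    static void Main()
    {
        // Illustrative connection string; substitute your own credentials and security token.
        var connStr = "User=me@example.com;Password=...;Security Token=...";
        using (var conn = new SalesforceConnection(connStr))
        using (var cmd = new SalesforceCommand(
            "SELECT * FROM dbo.Case WHERE CreatedDate = 'YESTERDAY'", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader["Id"]); // each Case created yesterday
                }
            }
        }
    }
}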
Thanks to anyone who researched and attempted the question! :)

Related

Converting SQL query with FORMAT command to use in entity framework core

I have an SQL query:
SELECT
FORMAT(datetime_scrapped, 'MMMM-yy') [date],
count(FORMAT(datetime_scrapped, 'MMMM-yy')) as quantity
FROM scrap_log
GROUP BY FORMAT(datetime_scrapped, 'MMMM-yy')
It basically summarises all the entries in the scrap_log table by month/year and counts how many entries are in each month/year. It returns two columns (date and quantity). But I need to execute this in an ASP.NET Core API using Entity Framework Core. I tried using .FromSqlRaw(), but this expects all columns to be returned and so doesn't work.
I can find plenty of info on how to implement group by and count etc. in EF, but I cannot find anything for the FORMAT(datetime, 'MMMM-yy') part. Please could somebody explain to me how to do this?
EDIT: It seems I am going about this the wrong way in terms of efficiency. I will look into alternative solutions based on the comments already made. Thanks for the fast response.
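One possible EF Core shape for this, sketched under the assumption of a DbContext with a DbSet<ScrapLog> whose entity exposes a DatetimeScrapped property (names here are illustrative, not from the post): group by the year and month components, which EF Core can translate to SQL, and produce the "MMMM-yy" label on the client afterwards.
// Grouping by Year/Month translates to a GROUP BY in SQL; Count() becomes COUNT(*).
var summary = context.ScrapLog
    .GroupBy(s => new { s.DatetimeScrapped.Year, s.DatetimeScrapped.Month })
    .Select(g => new { g.Key.Year, g.Key.Month, Quantity = g.Count() })
    .ToList();

// The "MMMM-yy" label is then formatted in memory, outside the SQL translation.
var rows = summary.Select(x => new
{
    Date = new DateTime(x.Year, x.Month, 1).ToString("MMMM-yy"),
    x.Quantity
});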

Oracle DB link - where clause evaluation

I have a DB2 data source and an Oracle 12c target.
The Oracle has a DB link to the DB2 defined, which is working in general.
Now I have a huge table in the DB2 which has a timestamp column (let's call it ROW_CHANGED) for row changes. I want to retrieve rows which have changed after a particular time.
Running
SELECT * FROM lib.tbl WHERE ROW_CHANGED > '2016-08-01 10:00:00'
on the DB2 returns exactly one row after about 90 seconds, which is fine.
Now I try the same query from the Oracle via the DB link:
SELECT * FROM lib.tbl@dblink_name WHERE ROW_CHANGED > TO_TIMESTAMP('2016-08-01 10:00:00')
This runs for hours and ends up in a timeout.
I read some Oracle docs and found distributed query optimization tips, but most of them refer to joining a local table to a remote one, which is not my case.
In my desperation, I have tried the DRIVING_SITE hint, without effect.
Now I wonder when the WHERE part of the query is evaluated. Since I have to use Oracle syntax rather than DB2 syntax for the query, is it possible that Oracle first copies the full table and applies the WHERE clause afterwards? I did some research but did not find anything that would help me in this direction.
ROW_CHANGED is a hidden column in the DB2, if that matters.
Thanks for any hint in advance.
Update
Thanks to all for the help. I'll share what did the trick for me.
First of all, I used TO_TIMESTAMP since the DB2 column is also a timestamp (not a date), and I expected to circumvent implicit conversions that way.
Without the explicit conversion I ran into ORA-28534 (Heterogeneous Services preprocessing error), and I have no hope of touching the DB configuration within a reasonable time.
The explain plan, by the way, did not reveal much. It showed a FULL hint and no conversion on the predicates. Indeed, it showed the ROW_CHANGED column as DATE; I wonder why.
I tried Justin's suggestion to use a bind variable; however, I got ORA-28534 again. The next thing I did was wrap it in a PL/SQL block (it will run in a stored procedure later anyway).
declare
v_tmstmp TIMESTAMP := '01.08.16 10:00:00';
begin
INSERT INTO ORAUSER.TMP_TBL (SRC_PK,ROW_CHANGED)
SELECT SRC_PK,ROW_CHANGED
FROM lib.tbl@dblink_name
WHERE ROW_CHANGED > v_tmstmp;
end;
This executed in the same time as in DB2 itself. The date format is DD.MM.YY here since that is the default, unfortunately.
When changing the variable assignment to
v_tmstmp TIMESTAMP := TO_TIMESTAMP('01.08.16 10:00:00','DD.MM.YY HH24:MI:SS');
I got the same problem as before.
Meanwhile, the DB2 operators have created an index on the ROW_CHANGED column, which I had requested earlier that day. This seems to have solved the problem in general. Even my original query finishes in no time now.
If you are actually using an Oracle-specific conversion function like to_timestamp, that forces the predicate to be evaluated on the Oracle side. Oracle isn't going to know how to convert a built-in function like to_timestamp into an exactly equivalent function call in DB2.
If you used a bind variable, that would be more likely to get evaluated on the DB2 side. But that may be complicated by the data type mapping between the two databases: there may not be a perfect mapping between one engine's date and another engine's timestamp data type. If this were a numeric column, a bind variable would be almost certain to get pushed. In this case, it will probably involve playing around a bit to figure out exactly what data type to use for your variable that works for your framework, Oracle, and DB2.
If using a bind variable doesn't work, you can force the predicate to be evaluated on the remote server using the dbms_hs_passthrough package. That lets you send a query verbatim to the remote server which allows you to do things like use functions defined in your DB2 database. That's a bit of overkill in this situation, hopefully, but it's nice to have the hammer as your backup if the simpler solution doesn't work quickly enough.
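Purely for illustration, here is how the bind-variable approach could look from a .NET client using Oracle's managed ADO.NET driver (Oracle.ManagedDataAccess.Client). This is an assumption layered on top of the answer, since the question runs the query inside PL/SQL; the connection string and column list are illustrative:
using System;
using Oracle.ManagedDataAccess.Client; // assumed client library

class ChangedRows
{
    static void Main()
    {
        // Illustrative connection string for the Oracle instance that owns the DB link.
        var connStr = "User Id=orauser;Password=...;Data Source=ORCL";
        using (var conn = new OracleConnection(connStr))
        using (var cmd = conn.CreateCommand())
        {
            conn.Open();
            // The timestamp is supplied as a bind variable, giving the optimizer a
            // chance to push the predicate to the DB2 side instead of filtering
            // after a full copy of the remote table.
            cmd.CommandText =
                "SELECT src_pk, row_changed FROM lib.tbl@dblink_name WHERE row_changed > :ts";
            cmd.Parameters.Add("ts", OracleDbType.TimeStamp).Value =
                new DateTime(2016, 8, 1, 10, 0, 0);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader["src_pk"]); // process each changed row
                }
            }
        }
    }
}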

How to execute a raw sql query from LINQ to Entities 4.5?

Problem
I need to execute a raw SQL query from LINQ to Entities and retrieve the result. The query returns the current date/time from the SQL Server instance, and looks like this:
SELECT GETDATE()
[Edit]
I'm using a data model that was created database-first.
[/Edit]
What I've Tried
I've researched this issue on the interwebz and been unable to find a technique to do this. I was able to learn how to do this using LINQ to SQL, but since I'm not using that, it's of no help.
Here's what you are after:
var time = context.Database.SqlQuery<DateTime>("SELECT GETDATE()").FirstOrDefault();
You can read more about raw SQL and EF here: http://msdn.microsoft.com/en-us/data/jj592907.aspx

Entity Framework: Convert.ToDecimal not supported, any ideas? EF gives an error

I have been doing queries in EF and everything has been working great, but now I have two fields in the database that are actually CHAR. They hold a date, but in the form of a number. In SQL Server Management Studio I can do date1 >= date2, for example, and I can also check whether a number I have falls between these two dates.
It's nothing unusual: basically a field that represents a date (the number grows as the date does)...
Now in EF, when I try to do >= it states that you can't do this on a string. OK, I understand, it's C#, so I tried doing Convert.ToDecimal(date1), but it gives me an error saying that it's not supported.
I have no option of changing the database fields; they are set in stone. :-(
The way I got it to work was to request the data, do a .ToList(), and then use Convert.ToDecimal(), and it works, but of course this does the work in memory! That defeats the object of EF, i.e. being able to build up the query with IQueryable.
Another way I got it to work was to pass the SQL query to SqlQuery on the DbContext, but again I lose a lot of EF functionality.
Can anyone help?
I am really stuck
As you say that you tried >=, I assume it would work for you if you could do that in plain SQL. And that is possible by doing:
String.Compare(date1, date2) >= 0
EF is smart enough to translate that into a >= operator.
The advantage is that you do not need to compare converted values, so indexes can be used in execution plans.
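A minimal sketch of what that looks like in a query (the context, entity, and property names are made up for illustration, not taken from the question):
// String.Compare in the predicate is translated by EF into a plain >= comparison
// on the CHAR column, so the filter runs in the database rather than in memory.
string lowerBound = "20200101"; // same fixed-width format as the stored values
var rows = context.Records
    .Where(r => String.Compare(r.Date1, lowerBound) >= 0)
    .ToList();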
First of all, you can at least enable deferred execution of the query by using AsEnumerable() instead of ToList(). This won't change the fact that the database would need to return all the records when you do in fact execute the query, however.
To let the database perform the filtering, you need your query to be compatible with SQL. Since you can't do ToDecimal() in SQL, you need to work with strings directly by converting your myvar to a string that is in the same format as dateStart and dateEnd, then form your query.
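A short sketch of that idea, assuming the CHAR columns hold dates in a fixed-width yyyyMMdd style (the exact format is not stated in the question, so adjust the format string; the names are again illustrative):
// Convert the value being tested into the same string format as the stored
// columns, then compare strings so EF can translate the whole filter to SQL.
string myVarStr = myDate.ToString("yyyyMMdd");
var matches = context.Records
    .Where(r => String.Compare(r.DateStart, myVarStr) <= 0
             && String.Compare(r.DateEnd, myVarStr) >= 0)
    .ToList();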

Why does SQL Server 2000 treat SELECT test.* and SELECT t.est.* the same?

I butter-fingered a query in SQL Server 2000 and added a period in the middle of the table name:
SELECT t.est.* FROM test
Instead of:
SELECT test.* FROM test
And the query still executed perfectly. Even SELECT t.e.st.* FROM test executes without issue.
I've tried the same query in SQL Server 2008, where it fails (error: the column prefix does not match with a table name or alias used in the query). Out of pure curiosity I have been trying to figure out how SQL Server 2000 handles the table names in a way that would allow the butter-fingered query to run, but I haven't had much luck so far.
Do any SQL gurus know why SQL Server 2000 ran the query without issue?
Update: The query appears to work regardless of the interface used (e.g. Enterprise Manager, SSMS, OSQL), and as Jhonny pointed out below, it bizarrely even works when you try:
SELECT TOP 1000 dbota.ble.* FROM dbo.table
Maybe table names are constructed from a naive concatenation of prefix and base name.
't' + 'est' == 'test'
And maybe in later versions of SQL Server the distinction is made more semantically and enforced more rigorously.
{ owner = t, table = est } != { table = test }
SQL Server 2005 and up has a "proper" implementation of schemas; SQL 2000 and earlier did not. The details escape me (it's been years since I used SQL 2000); all I recall clearly is that you'd be nuts to create anything that wasn't owned by "dbo". It all ties into users and object ownership, but the 2000-and-earlier model was pretty confusticated. Hopefully someone will read up on BOL, do some experimentation, and post their results here.
S-SQL reference manual:
"[dot] Can be used to combine multiple names into a name of the form A.B to refer to a column in a table, or a table in a schema. Note that you calso just use a symbol with a dot in it."
So I think if you referenced tblTest as tblT.est it would work OK as long as there isn't a column called 'est' in tblTest.
If it can't find a column name referenced with the dot, I imagine it checks the parent of the object.
I found a reference to it being a bug
Note: as a result of a comparison algorithm bug in SQL Server 2000, dot symbols themselves have no effect on matching, so "dbo.t" will successfully match with tables "dbot", "d.b.o.t", etc.
from the linked page
It's been fixed in SQL Server 2005. From the same link, under "Changes introduced in SQL Server 2005":
Dot-related comparison bug has been fixed.
Is it in the "Open table" view of SSMS or via Enterprise Manager or via an SSMS Query Window?
There is/was a SQL Server 2005 issue with SSMS so how you run the query affects how it behaves.
This is a bug.
It has to do with the internal representation of column names in SQL Server 2000 that leaked out.
You will also not be able to create a column whose table+column concatenation collides with that of another table's column: for example, if you have tables User and UserDetail, you won't be able to have columns DetailAge and Age in those tables, respectively.