WCF Data Services removes milliseconds from DateTime when projecting - entity-framework

I am seeing some strange behavior when using WCF Data Services 5.6.
In my case, I have a table with one column set to Concurrency=Fixed; this column holds a date-time value from the database, updated each time the row is edited.
If I just retrieve the entity, this column has the correct value, including milliseconds. But if I do a projection, the milliseconds are removed.
Here is the issue at a glance:
====================================================================
void Main()
{
    var b = from p in TABLE1 where p.ID == 100 select p;
    b.Dump();
}
The request in this case is: http://xxxx/Data.svc/TABLE1(100M)
and the data returned from the service is:
<d:COL1 m:type="Edm.DateTime">2015-02-16T12:13:52.972</d:COL1>
====================================================================
As you can see, the time is returned with milliseconds: .972.
In the other case:
void Main()
{
    var tmp = from p in TABLE1 where p.ID == 100 select
        new TABLE1()
        {
            ID = p.ID,
            COL1 = p.COL1
        };
    var a1 = tmp.ToList();
    a1.Dump();
}
The request in this case is: http://xxxx/Data.svc/TABLE1(100M)?$select=ID,COL1
and the data returned is:
<d:COL1 m:type="Edm.DateTime">2015-02-16T12:13:52</d:COL1>
====================================================================
The time is returned without milliseconds.
Does anybody have the same problem? Maybe it's a bug in WCF Data Services, or in the model?

OK, it seems I found an answer to this, or at least a way to avoid the problem.
First I traced the SQL generated by the framework. In the first case I get
SELECT ID, COL1 FROM TABLE1
and in the second case I get
SELECT ID, CAST(COL1 AS DATETIME) FROM TABLE1
which causes the problem.
Then I tried updating EF to version 6, WCF Data Services to version 5.6.3,
and Oracle ODP to the latest release, and I tried the Oracle managed driver; no success.
Then I played a little with the table definition, and I noticed that my COL1, of type TIMESTAMP in the database and DateTime in the model, was defined as NOT NULL.
If I remove that constraint from the database definition, I get the right value, with milliseconds.
So maybe this is a bug in the Oracle driver, but it is probably a bug in WCF Data Services. At least this workaround lets me use concurrency in my case.
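The effect of that CAST can be sketched outside the database: casting to a seconds-precision date-time type silently drops the fractional seconds, so the value returned for the concurrency column no longer matches what is stored. A minimal Python illustration, where the truncation step stands in for CAST(COL1 AS DATETIME):

```python
from datetime import datetime

# Value as stored in the TIMESTAMP column (fractional seconds intact)
stored = datetime(2015, 2, 16, 12, 13, 52, 972000)

# What a cast to a seconds-precision DATETIME hands back
cast_to_datetime = stored.replace(microsecond=0)

print(stored.isoformat())            # 2015-02-16T12:13:52.972000
print(cast_to_datetime.isoformat())  # 2015-02-16T12:13:52

# An optimistic-concurrency check comparing the two values fails
assert stored != cast_to_datetime
```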

Related

How to increment a value using one command using entity framework

How can I transform this SQL query into an EF LINQ command?
"update dbo.table set col1 = col1 + 1 where Id = 27"
I want to execute this query in one command to avoid concurrency problems
in case another client modifies the record at the same time.
I am looking to do this using EF, but in one command.
I tried this, but I am looking for a better solution:
context.table1.FromSqlInterpolated($"update dbo.table set col1= col1+ 1 where Id=27").FirstOrDefaultAsync();
I would propose using linq2db.EntityFrameworkCore (note that I'm one of its creators).
Then you can do that with ease:
await context.table1.Where(x => x.Id == 27)
.Set(x => x.Col1, prev => prev.Col1 + 1)
.UpdateAsync();
There are ways to update a column without querying first, but the problem you have is that the update is based on the existing value.
Entity Framework can't help you there. You can only do what you want with a direct SQL statement.
Even the original SQL statement should be executed within a transaction if you want to be sure no other change can occur between reading and updating the value. It is one SQL statement, but the database still has to read the value, increment it, and store it.
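The "one command" point above can be seen at the SQL level: a single UPDATE that references the old value lets the database do the read-increment-write itself, so no concurrent writer can slip in between the read and the write. A minimal sketch using Python's sqlite3 (the table and column names are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (Id INTEGER PRIMARY KEY, col1 INTEGER)")
conn.execute("INSERT INTO tbl (Id, col1) VALUES (27, 5)")

# One statement: the engine reads, increments and stores the value itself,
# rather than the application reading it and writing back a stale copy.
conn.execute("UPDATE tbl SET col1 = col1 + 1 WHERE Id = ?", (27,))

new_value = conn.execute("SELECT col1 FROM tbl WHERE Id = 27").fetchone()[0]
print(new_value)  # 6
```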

Group By and Count query fails to translate from Linq to SQL

I have created the LINQ query shown below in LINQPad. It fails when using EF Core 3.0. I know about the issues around Group By, and that EF Core now fails when a translation cannot be made rather than pulling all the data back and doing the work client-side. My problem is that I have so far not worked out whether it is possible to write what I want in a way that gets my query to run.
My table has a CorrelationId, which has the same value only for related records. I need a query that returns just the records whose CorrelationId exists only once. If a CorrelationId appears more than once, that indicates there are related records, and none of those should be returned. The query must run on the server, so pulling back all the data is not an option.
Here is my LINQ query, which fails with the "...could not be translated. Either rewrite the query in a form that can be translated..." error.
from x in History
where
(
from d in History
group d by new {d.CorrelationId} into g
where g.Count() == 1
select new {Id = g.Key.CorrelationId}
).Contains(new { Id = x.CorrelationId })
select new {Id = x.Id, FileName = x.FileName}
Try using Any instead of Contains:
.Any(y => y.Id == x.CorrelationId)
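The SQL this query needs to reach boils down to a grouped subquery that keeps only the CorrelationIds occurring exactly once. A small sketch with Python's sqlite3 (table layout invented for the demo) showing that target shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE History (Id INTEGER, CorrelationId INTEGER, FileName TEXT)")
conn.executemany(
    "INSERT INTO History VALUES (?, ?, ?)",
    [(1, 100, "a.txt"), (2, 100, "b.txt"), (3, 200, "c.txt")],
)

# Keep only rows whose CorrelationId appears exactly once
rows = conn.execute(
    """
    SELECT h.Id, h.FileName
    FROM History h
    WHERE h.CorrelationId IN (
        SELECT CorrelationId FROM History
        GROUP BY CorrelationId
        HAVING COUNT(*) = 1
    )
    """
).fetchall()

print(rows)  # [(3, 'c.txt')]
```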

DbContext Remove generates DbUpdateConcurrencyException when DATETIME includes milliseconds

I have a "legacy" table that I'm managing in a web app. It uses a compound key rather than an auto-increment integer, which I register during OnModelCreating:
modelBuilder.Entity<EmployeeLog>().HasKey(table => new { table.EmployeeID, table.DateCreated });
If it matters, EmployeeID is a VARCHAR and DateCreated is a DATETIME.
When I attempt to delete a record:
...
// the UI stores the date/time without milliseconds, use a theta join to bracket the time
var model = await _context.EmployeeLogs.FirstOrDefaultAsync(l => l.EmployeeID == driverID && (l.DateCreated >= date && l.DateCreated < date.AddSeconds(1)));
_context.Remove(model);
var result = await _context.SaveChangesAsync();
I get an error when SaveChangesAsync is called:
DbUpdateConcurrencyException: Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded.
The model is correctly identified and contains the expected data.
It appears that the issue is related to the granularity of the date. If milliseconds are included, for example 2021-04-27 08:06:33.193, then the error is generated. If the date/time does not include milliseconds, for example 2021-04-27 08:06:33.000, the deletion works correctly.
I can truncate the milliseconds when creating the record, but I'd like to know whether there is a way to handle this if the record already contains milliseconds.
** edit **
I don't have any control over the vendor's database decisions, so I need a solution that addresses that reality.
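One likely culprit, offered as an assumption since the vendor schema is not shown: the legacy DATETIME type stores time in 1/300-second ticks, so SQL Server keeps millisecond values rounded to a .000/.003/.007 pattern, and the in-memory DateTime EF puts in the DELETE's WHERE clause may no longer be byte-for-byte equal to the stored key. A small Python sketch of that rounding:

```python
def to_sqlserver_datetime_ms(ms: int) -> int:
    """Round a millisecond value the way the legacy DATETIME type does:
    values are stored in 1/300-second ticks (about 3.33 ms)."""
    ticks = round(ms * 300 / 1000)    # nearest 1/300-second tick
    return round(ticks * 1000 / 300)  # back to milliseconds

print(to_sqlserver_datetime_ms(193))  # .193 is representable, stays 193
print(to_sqlserver_datetime_ms(196))  # .196 is stored as .197
print(to_sqlserver_datetime_ms(999))  # .999 rolls into the next second
```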

SQL: Change the datetime to the exact string returned

See below for what is returned in my automated test for this query:
Select visit_date
from patient_visits
where patient_id = '50'
AND site_id = '216'
ORDER by patient_id
DESC LIMIT 1
08:52:48.406 DEBUG Executing : Select visit_date from patient_visits
where patient_id = '50' AND site_id = '216' ORDER by patient_id DESC
LIMIT 1 08:52:48.416 TRACE Return: [(datetime.date(2017, 2, 17),)]
When I run this in Workbench I get
2017-02-17
How can I make the query return this instead of the datetime.date bit above? Is some formatting needed?
What you get back from the database is Python's datetime.date object; that happens because the DB connector drivers cast the DB records to their corresponding Python counterparts. Trust me, it's much better this way than plain strings the user would have to parse and cast himself later.
Assuming the result of this query is stored in a variable ${record}, there are a couple of steps to get to it in the form you want.
First, the response is (pretty much always) a list of tuples; as in your case it will always be a single record, go for the first list member, and then its first tuple member:
${the_date}= Set Variable ${record[0][0]}
Now ${the_date} is the datetime.date object; there are at least two ways to get its string representation.
1) With strftime() (the pythonic way):
${the_date_string}= Evaluate $the_date.strftime('%Y-%m-%d') datetime
Here's a link to the strftime directives.
2) Using the fact it's a regular object, access its attributes and construct the result as you'd like:
${the_date_string}= Set Variable ${the_date.year}-${the_date.month}-${the_date.day}
Note that this ^ way, you'd most certainly lose the leading zeros in the month and day.
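That zero-padding loss can be shown in plain Python; adding a format spec to each attribute restores it (the %Y-%m-%d route above avoids the problem entirely):

```python
from datetime import date

d = date(2017, 2, 17)

# Naive attribute concatenation drops the leading zeros
plain = f"{d.year}-{d.month}-{d.day}"
print(plain)  # 2017-2-17

# Zero-padding each attribute restores the canonical form
padded = f"{d.year:04d}-{d.month:02d}-{d.day:02d}"
print(padded)  # 2017-02-17
```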

Converting complex query with inner join to tableau

I have a query like this, which we use to generate data for our custom dashboard (A Rails app) -
SELECT AVG(wait_time) FROM (
SELECT TIMESTAMPDIFF(MINUTE,a.finished_time,b.start_time) wait_time
FROM (
SELECT max(start_time + INTERVAL avg_time_spent SECOND) finished_time, branch
FROM mytable
WHERE name IN ('test_name')
AND status = 'SUCCESS'
GROUP by branch) a
INNER JOIN
(
SELECT MIN(start_time) start_time, branch
FROM mytable
WHERE name IN ('test_name_specific')
GROUP by branch) b
ON a.branch = b.branch
HAVING avg_time_spent between 0 and 1000)t
GROUP BY week
Now I am trying to port this to Tableau, and I have not been able to find a way to represent this data there. I am stuck on how to represent the inner GROUP BY in a calculated field. I could try to just use a custom SQL data source, but I am already using another data source.
columns in mytable -
start_time
avg_time_spent
name
branch
status
I think this could be achieved with the new Level of Detail expressions, but unfortunately I am stuck on version 8.3.
Save custom SQL for rare cases. This doesn't look like a rare case. Let Tableau generate the SQL for you.
If you simply connect to your table, then you can usually write calculated fields to get the information you want. I'm not exactly sure why you have test_name in one part of your query but test_name_specific in another, so ignoring that, here is a simplified example of a similar query.
You could define a calculated field called worst_case_test_time as
datediff('minute', min(start_time), max(dateadd('second', avg_time_spent, start_time)))
which seems close to what your original query says.
It would help if you explained what exactly you are trying to compute. It appears to be some sort of worst case bound for avg test time. There may be an even simpler formula, but its hard to know without a little context.
You could filter on status = "Success" and avg_time_spent < 1000, and place branch and WEEK(start_time) on say the row and column shelves.
P.S. Your query seems a little off. Don't you need an aggregation function like MAX or AVG after the HAVING keyword?