I am having trouble with an EF method returning duplicate rows of data. When I run it, in my example, it returns four rows from a database view, but the fourth row contains details from the third row.
The same query in SSMS returns four individual rows with the correct details. I have read somewhere about EF having optimization problems when there is no identity column. But is there any way to alter the code below to force EF to read all records with all of their details?
public List<vs_transactions> GetTransactionList(int cID)
{
using (StagingDataEntities db = new StagingDataEntities())
{
var res = from trans in db.vs_transactions
where trans.CreditID == cID
orderby trans.ActionDate descending
select trans;
return res.ToList();
}
}
Found the solution :) MergeOption.NoTracking
public List<vs_transactions> GetTransactionList(int cID)
{
using (StagingDataEntities db = new StagingDataEntities())
{
db.vs_transactions.MergeOption = MergeOption.NoTracking;
var res = from trans in db.vs_transactions
where trans.CreditID == cID
orderby trans.ActionDate descending
select trans;
return res.ToList();
}
}
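For reference, a minimal sketch of the same idea on the newer DbContext API (EF 4.1+), where the equivalent switch is the AsNoTracking() extension method; this assumes StagingDataEntities derives from DbContext and exposes vs_transactions as a DbSet, which may not match the generated model here:
// Sketch only: assumes StagingDataEntities derives from DbContext and exposes
// vs_transactions as a DbSet (requires: using System.Data.Entity; using System.Linq;).
public List<vs_transactions> GetTransactionList(int cID)
{
    using (StagingDataEntities db = new StagingDataEntities())
    {
        return db.vs_transactions
                 .AsNoTracking()                       // don't merge results into tracked entities
                 .Where(t => t.CreditID == cID)
                 .OrderByDescending(t => t.ActionDate)
                 .ToList();
    }
}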
Our current system uses lazy loading by default (it is something I am going to disable, but it can't be done right now).
For this basic query I want to return two tables, CustomerNote and Note.
This is my query
using (var newContext = new Entities(true))
{
newContext.Configuration.LazyLoadingEnabled = false;
var result = from customerNotes in newContext.CustomerNotes.Include(d=>d.Note)
join note in newContext.Notes
on customerNotes.NoteId equals note.Id
where customerNotes.CustomerId == customerId
select customerNotes;
return result.ToList();
}
My result, however, only contains the data from the CustomerNote table.
The linked entities Customer and Note are both null; what am I doing wrong here?
I got it working with the following, which is much simpler than what I've found elsewhere:
Context.Configuration.LazyLoadingEnabled = false;
var result = Context.CustomerNotes.Where<CustomerNote>(d => d.CustomerId == customerId)
.Include(d=>d.Note)
.Include(d=>d.Note.User);
return result.ToList();
This returns my CustomerNote table, related Notes and related Users from the Notes.
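For illustration only, a small usage sketch of the result above (no names beyond those already used in the question):
// With lazy loading disabled, the navigation properties are populated only
// because of the Include() calls above; enumerating issues a single query.
foreach (var customerNote in result.ToList())
{
    var note = customerNote.Note;        // filled by .Include(d => d.Note)
    var author = customerNote.Note.User; // filled by .Include(d => d.Note.User)
}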
What you want to achieve is called eager loading.
var customerNotes = newContext.CustomerNotes.Include(t => t.Note).ToList();
This should work; I don't really understand the query (keyword) syntax. If the code above doesn't work, try this:
var customerNotes = newContext.CustomerNotes.Include(t => t.Note).Select(t => new {
Note = t.Note,
Item = t
}).ToList();
I am using Scala 2.11.8.
I am trying to read queries from my property file. Each query set has multiple parts (explained below), and the queries must execute in a certain sequence.
Code:
import com.typesafe.config.ConfigFactory
object ReadProperty {
def main(args : Array[String]): Unit = {
val queryRead = ConfigFactory.load("testqueries.properties").getConfig("select").getStringList("caseInc").toArray()
val localRead = ConfigFactory.load("testqueries.properties").getConfig("select").getStringList("caseLocal").toArray.toSet
queryRead.foreach(println)
localRead.foreach(println)
}
}
Property file content:
select.caseInc.2 = Select emp_salary, emp_dept_id from employees
select.caseLocal.1 = select one
select.caseLocal.3 = select three
select.caseRemote.2 = Select e1.emp_name, d1.dept_name, e1.salary from emp_1 e1 join dept_1 d1 on(e1.emp_dept_id = d1.dept_id)
select.caseRemote.1 = Select * from departments
select.caseInc.1 = Select emp_id, emp_name from employees
select.caseLocal.2 = select two
select.caseLocal.4 = select four
Output:
Select emp_id, emp_name from employees
Select emp_salary, emp_dept_id from employees
select one
select two
select three
select four
As we can see in the output, the result is sorted. In the property file, I have numbered the queries in the sequence in which they should run (passing caseInc, caseLocal as arguments).
With getStringList() I always get the list sorted by the sequence numbers I provide.
Even when I tried using toArray() and toArray().toSet I got sorted output.
So far, so good.
But how can I be sure that it will always return the order I have provided in the property file? I am confused because I cannot find anything in the API documentation that says the returned list will be sorted.
I think you can rely on this. Looking into the code of DefaultTransformer, you can see the following piece of logic:
} else if (requested == ConfigValueType.LIST && value.valueType() == ConfigValueType.OBJECT) {
// attempt to convert an array-like (numeric indices) object to a
// list. This would be used with .properties syntax for example:
// -Dfoo.0=bar -Dfoo.1=baz
// To ensure we still throw type errors for objects treated
// as lists in most cases, we'll refuse to convert if the object
// does not contain any numeric keys. This means we don't allow
// empty objects here though :-/
AbstractConfigObject o = (AbstractConfigObject) value;
Map<Integer, AbstractConfigValue> values = new HashMap<Integer, AbstractConfigValue>();
for (String key : o.keySet()) {
int i;
try {
i = Integer.parseInt(key, 10);
if (i < 0)
continue;
values.put(i, o.get(key));
} catch (NumberFormatException e) {
continue;
}
}
if (!values.isEmpty()) {
ArrayList<Map.Entry<Integer, AbstractConfigValue>> entryList = new ArrayList<Map.Entry<Integer, AbstractConfigValue>>(
values.entrySet());
// sort by numeric index
Collections.sort(entryList,
new Comparator<Map.Entry<Integer, AbstractConfigValue>>() {
@Override
public int compare(Map.Entry<Integer, AbstractConfigValue> a,
Map.Entry<Integer, AbstractConfigValue> b) {
return Integer.compare(a.getKey(), b.getKey());
}
});
// drop the indices (we allow gaps in the indices, for better or
// worse)
ArrayList<AbstractConfigValue> list = new ArrayList<AbstractConfigValue>();
for (Map.Entry<Integer, AbstractConfigValue> entry : entryList) {
list.add(entry.getValue());
}
return new SimpleConfigList(value.origin(), list);
}
}
Note how the keys are parsed as integer values and then sorted using Integer.compare.
A newbie question: I am using Entity Framework 4.0. The backend database has a function that returns a subset of records based on time.
An example of working code:
var query = from rx in context.GetRxByDate(tencounter,groupid)
select rx;
var result = context.CreateDetachedCopy(query.ToList());
return result;
I need to verify that a record does not exist in the database before inserting a new one. Before applying the "Any" filter, I would like to populate context.Rxes with a subset of the larger backend database using the "GetRxByDate()" function above.
I do not know how to populate "Rxes" before performing any further filtering, since Rxes is defined as
IQueryable<Rx> Rxes
and does not allow "Rxes =.. ". Here is what I have so far:
using (var context = new EnityFramework())
{
if (!context.Rxes.Any(c => c.Cform == rx.Cform ))
{
// Insert new record
Rx r = new Rx();
r.Trx = realtime;
context.Add(r);
context.SaveChanges();
}
}
I am fully prepared to kick myself since I am sure the answer is simple.
All help is appreciated. Thanks.
Edit:
If I do it this way, "Any" seems to return the opposite results of what is expected:
var g = context.GetRxByDate(tencounter, groupid).ToList();
if (g.Any(c => c.Cform == rx.Cform)) {....}
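For what it's worth, putting the two fragments from the question together, the described check would look roughly like the sketch below (all names are taken from the question; untested). Note that the original condition is !...Any(...), so dropping the ! in the edited version would indeed invert the result.
// Sketch only, reusing the names from the question (requires: using System.Linq;).
using (var context = new EnityFramework())
{
    // Materialize the subset returned by the database function first.
    var subset = context.GetRxByDate(tencounter, groupid).ToList();

    // Keep the negation: insert only when no matching record exists.
    if (!subset.Any(c => c.Cform == rx.Cform))
    {
        Rx r = new Rx();
        r.Trx = realtime;
        context.Add(r);
        context.SaveChanges();
    }
}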
I'm updating a piece of Qt software to make it compatible with both SQLite and PostgreSQL.
I have a C++ method that is used to count the elements of a given table with given clauses.
In SQLite, the following worked and gave me a number N (the count).
SELECT COUNT(*) FROM table_a
INNER JOIN table_b
ON table_b.fk_table_a = table_a.id
WHERE table_a.start_date_time <> 0
ORDER BY table_a.creation_date_time DESC
With PostgreSQL (I'm using 9.3), I get the following error:
ERROR: column "table_a.creation_date_time" must appear in the
GROUP BY clause or be used in an aggregate function
LINE 5: ORDER BY
table_a.creation_date_time DESC
If I add GROUP BY table_a.creation_date_time, it gives me a table with N rows.
I've read a lot about how different DBMSs allow you to omit columns from the GROUP BY clause. Now I'm just confused.
For those who are curious, the C++ method is:
static int count(const QString &table, const QString &clauses = QString(""))
{
int success = -1;
if (!table.isEmpty())
{
QString statement = QString("SELECT COUNT(*) FROM ");
statement.append(table);
if (!clauses.isEmpty())
{
statement.append(" ").append(clauses) ;
}
QSqlQuery query;
if(!query.exec(statement))
{
qWarning() << query.lastError();
qWarning() << statement;
}
else
{
if (query.isActive() && query.isSelect() && query.first())
{
bool ok = false;
success = query.value(0).toInt(&ok);
if (ok == false)
{
success = -1;
return success;
}
}
}
}
return success;
}
If you're just doing a COUNT(*) on the table in order to get a single scalar result, then surely having the ORDER BY present is redundant?
Solution
Remove the redundant ORDER BY to get "standard" query behavior across multiple DBMSs.
I'm new to BIRT and need an answer to the following question:
How do I compare two data rows in one data set in BIRT and then print the result to the document?
I am assuming you have a reason for not using a self-join query to bring in the data. One simple thing you could do is have two identical data sets and then create a new joint data set from the two.
With an Oracle DB, you could easily achieve this with pure SQL using the "Analytic Function" LAG (see the Oracle documentation for details).
Independent of the DB, with BIRT you could use a variable last_row:
Create some computed columns to hold the results of your comparisons, e.g. "FIRST_COLUMN_CHANGED" as a boolean.
afterOpen event:
last_row = null;
onFetch event (please note: I'm not sure whether the actual data columns start at 0 or 1):
if (last_row != null) {
if (last_row[0] == row[0]) {
row["FIRST_COLUMN_CHANGED"] = false;
} else {
row["FIRST_COLUMN_CHANGED"] = true;
}
} else {
// do computations for the first record.
row["FIRST_COLUMN_CHANGED"] = true;
}
// Copy the current row to last_row
last_row = {};
// modify depending on the number of columns
for (var i=0; i<10; i++) {
last_row[i] = row[i];
}