I have a Postgres DB and am using ObjectionJS as my ORM.
I have the following ObjectionJS code to patch two columns of a user_system table:
public static async DeleteSystem(systemId: number, context: Context): Promise<UserSystem> {
  const system = await UserSystem.query()
    .context(context)
    .patch({
      state: PointInTimeState.inactive,
      receiveNotifications: false
    })
    .where({ system_id: systemId })
    .throwIfNotFound()
    .debug()
  // "as any" for now because the system variable is a number at this point
  // (i.e. NOT of type Promise<UserSystem> as in the signature)
  return system as any
}
Questions
Is there a way in which I could return all the rows that were not affected by this patch?
If so, how can I do it without having to write two separate queries (one to update and one to re-query the new data) to the back end?
As far as I know, writing a CTE that combines the two queries into one is the only way.
With Objection / Knex that can be done with https://knexjs.org/#Builder-with
In the with query you do the .patch(...).where(...).returning('id'), and in the main query you select all rows from the table that are not in the id set returned by the first query.
Something like this (I'm not sure if this works exactly like this with Objection):
UserSystem.query().context(context)
  .with('patchedIds',
    UserSystem.query()
      .patch({
        state: PointInTimeState.inactive,
        receiveNotifications: false
      })
      .where({ system_id: systemId })
      .returning('id')
  )
  .whereNotIn('id', builder => {
    builder.from('patchedIds').select('id')
  })
Related
List<Guid> list = new List<Guid>();
I have a list of Guid. What is the best way to check whether all the Guids exist in an EF Core table?
I am currently using the code below, but the performance is very bad. Assume the User table has 1 million records.
For example:
public async Task<bool> IsIdListValid(IEnumerable<int> idList)
{
    var validIds = await _context.User.Select(x => x.Id).ToListAsync();
    return idList.All(x => validIds.Contains(x));
}
The performance is bad because you are reading each row of the table into memory and then iterating through it (ToList materializes the query). Try using the Any() method to take advantage of the strength of the database. Use something like the following: bool exists = _context.User.Any(u => idList.Contains(u.Id)); This should translate to an SQL IN clause.
Provided you assert that the # of IDs being sent in is kept reasonable, you could do the following:
var idCount = _context.User.Where(x => idList.Contains(x.Id)).Count();
return idCount == idList.Count;
This assumes that you are comparing on a unique constraint like the PK. We get a count of how many rows have a matching ID from the list, then compare that to the count of IDs sent.
If you're passing a large # of IDs, you would need to break the list up into reasonable sets as there are limits to what you can do with an IN clause and potential performance costs as well.
As I understand it, the following code should generate a query containing only the RouteId, RouteNo, and ShipId columns:
var tow = (from t in _context.AllTowData
           where t.RouteId == id
           orderby t.RouteNo descending
           select new TowDefaults {
               Id = t.RouteId,
               TowNo = t.RouteNo,
               ShipId = t.ShipId,
               LastTow = t.RouteNo
           })
          .FirstOrDefault();
However, I get:
SELECT v.route_id, v.route_no, v.tow_id, v.analysis_complete, v.checks_complete, v.cpr_id, v.date_created, v.date_last_modified, v.factor, v.fromportname, v.instrument_data_file, v.instrument_id, v.internal_number, v.mastername, v.message, v.miles_per_division, v.month, v.number_of_samples, v.number_of_samples_analysed_fully, v.prop_setting, v.route_status, v.sampled_mileage, v.serial_no_per_calendar_month, v.ship_speed, v.silk_reading_end, v.silk_reading_start, v.toportname, v.tow_mileage, v.validity, v.year
FROM view_all_tow_data AS v
WHERE v.route_id = '#__id_0'
ORDER BY v.route_no DESC
LIMIT 1
That's every column except the explicitly requested ShipId! What am I doing wrong?
This happens with both a SQL Server and a Postgres database.
The property ShipId is not mapped, either because of a [NotMapped] annotation or because there is no mapping instruction for it. As far as EF is concerned, the property doesn't exist. This has two effects:
EF "notices" that there's an unknown part in the final Select and it switches to client-side evaluation (because it's a final Select). Which means: it translates the query before the Select into SQL which doesn't contain the ShipId column, executes it, and materializes full AllTowData entities.
It evaluates the Select client-side and returns the requested TowDefaults objects in which ShipId has its default value, or any value you initialize in C# code, but nothing from the database.
You can verify this by checking _context.AllTowData.Local after the query: it will contain all AllTowData entities that pass the filter.
From your question it's impossible to tell what you should do. Maybe you can map the property to a column in the view. If not, you should remove it from the LINQ query. Using it in LINQ anywhere but in a final Select will cause a runtime exception.
How do I insert an item at the top of a table in PostgreSQL? Is that possible? The table has only two text columns; the first is the primary key.
CREATE TABLE news_table (
    title text not null primary key,
    url text not null
);
I need a simple query for a Java program.
OK, this is my code:
get("/getnews", (request, response) -> {
List<News> getNews = newsService.getNews();
List<News> getAllNews = newsService.getAllNews();
try (Connection connection = DB.sql2o.open()) {
String sql = "INSERT INTO news_table(title, url) VALUES (:title, :url)";
for (News news : getNews) {
if (!getAllNews.contains(news)) {
connection.createQuery(sql, true)
.addParameter("title", news.getTitle())
.addParameter("url", news.getUrl())
.executeUpdate()
.getKey();
}
}
}
return newsService.getNews();
}, json());
The problem is that when the getnews method is called a second time, the new news items are added at the end of the table, so the news is no longer in chronological order. How can I resolve this? I use Sql2o + Spark Java.
Maybe I already know the answer: do I need to reverse the getNews list before doing the contains check between the getNews and getAllNews objects?
There is no start or end in a table. If you want to sort your data, just use an ORDER BY in your SELECT statements. Without ORDER BY, there is no order.
Relational theory, the mathematical foundation of relational databases, lays down certain conditions that relations (represented in real databases as tables) must obey. One of them is that they have no ordering (i.e., the rows will neither be stored nor retrieved in any particular order, since they are treated as a mathematical set). It's therefore completely under the control of the RDBMS where a new row is entered into a table.
Hence there is no way to ensure a particular ordering of the data without using an ORDER BY clause when you retrieve the data.
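For example, one way to get newest-first results with Sql2o would be to sort on a timestamp column when reading. This is only a minimal sketch under assumptions not in the original post: the created_at column does not exist in the posted schema and would have to be added, and the getNewsNewestFirst method name is made up here.

// Assumed schema change (not in the original table):
//   ALTER TABLE news_table ADD COLUMN created_at timestamptz NOT NULL DEFAULT now();
public List<News> getNewsNewestFirst() {
    String sql = "SELECT title, url FROM news_table ORDER BY created_at DESC";
    try (Connection connection = DB.sql2o.open()) {
        // Sql2o maps the selected columns onto the News bean by name
        return connection.createQuery(sql).executeAndFetch(News.class);
    }
}

Newly inserted rows then come back first regardless of where the RDBMS physically stores them.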
Environment: MongoDB 3.2, Morphia 1.1.0
Let's say I have a collection of Employees, and the Employee entity has several fields. I need to apply multiple (conditional) filters and return a batch of 10 records per request.
Pseudocode is below.
#Entity("Employee")
Employee{
String firstname,
String lastName,
int salary,
int deptCode,
String nationality
}
and my EmployeeFilterRequest carries the request parameters to the DAO:
EmployeeFilterRequest {
    int salaryLessThen;
    int deptCode;
    String nationality;
    // ...
}
Pseudoclass
class EmployeeDao {
    public List<Employee> returnList;

    public List<Employee> getFilteredResponse(EmployeeFilterRequest request) {
        Datastore ds = getTheDatastore();
        Query<Employee> query = ds.createQuery(Employee.class).disableValidation();
        // conditional request #1
        if (request.filterBySalary) {
            query.filter("salary >", request.salary);
        }
        // conditional request #2
        if (request.filterBydeptCode) {
            query.filter("deptCode ==", request.deptCode);
        }
        // conditional request #3
        if (request.filterByNationality) {
            query.filter("nationality ==", request.nationality);
        }
        returnList = query.batchSize(10).asList();
        /******* THIS IS RETURNING ME ALL THE RECORDS IN THE COLLECTION, EXPECTED ONLY 10 *******/
        return returnList;
    }
}
So, as explained in the code above, I want to perform conditional filtering on multiple fields, and even though batchSize is set to 10, I am getting all the records in the collection.
How do I resolve this?
Regards
Punith
Blakes is right. You want to use limit() rather than batchSize(). The batch size only affects how many documents each trip to the server comes back with. This can be useful when pulling over a lot of really large documents but it doesn't affect the total number of documents fetched by the query.
As a side note, you should be careful using asList() as it will create objects out of every document returned by the query and could exhaust your VM's heap. Using fetch() will let you incrementally hydrate documents as you need each one. You might actually need them all as a List and with a size of 10 this is probably fine. It's just something to keep in mind as you work with other queries.
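A minimal sketch of the corrected call in the DAO above, assuming the same query object (limit() is what caps how many documents the query returns; fetch() is shown only as an optional lazy alternative to asList()):

// Cap the result at 10 documents instead of only tuning the transport batch size
returnList = query.limit(10).asList();

// Or hydrate documents lazily as you iterate, instead of materializing them all at once:
for (Employee e : query.limit(10).fetch()) {
    // process each employee as it is hydrated
}

batchSize(10) can still be kept alongside limit(10) to tune the transport, but it is limit() that controls the total number of documents returned.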
Following is the code which blows up if the list being passed to the "IN" clause has many values; in my case it has 1400 values. The customer table also has a large number of records in it (around 100,000). The query executes against a Derby database.
public List<Customer> getCustomersNotIn(String custType, List<Integer> customersIDs) {
    TypedQuery<Customer> query = em.createQuery(
            "from Customer where type = :custType and customerId not in (:customersIDs)", Customer.class);
    query.setParameter("custType", custType);
    query.setParameter("customersIDs", customersIDs);
    List<Customer> customerList = query.getResultList();
    return customerList;
}
The above method executes perfectly if the list has fewer values (probably fewer than 1000). If the customersIDs list has more values, the IN clause built from it throws an error saying "Statement too complex".
Since I am new to JPA, can anyone please tell me how to write the above function in the way described below? * PLEASE READ THE COMMENTS IN THE CODE *
public List<Customer> getCustomersNotIn(String custType, List<Integer> customersIDs) {
    // CREATE AN IN-MEMORY TEMP TABLE HERE...
    // INSERT ALL VALUES FROM THE customersIDs COLLECTION INTO THE TEMP TABLE
    // CHANGE THE FOLLOWING QUERY TO GET ALL CUSTOMERS EXCEPT THOSE IN THE TEMP TABLE
    TypedQuery<Customer> query = em.createQuery(
            "from Customer where type = :custType and customerId not in (:customersIDs)", Customer.class);
    query.setParameter("custType", custType);
    query.setParameter("customersIDs", customersIDs);
    List<Customer> customerList = query.getResultList();
    // REMOVE THE TEMP TABLE FROM MEMORY
    return customerList;
}
The Derby IN clause support does have a limit on the number of values that can be supplied in the IN clause.
The limit is related to an underlying limitation in the size of a single function in the Java bytecode format; Derby currently implements IN clause execution by generating Java bytecode to evaluate the IN clause, and if the generated bytecode would exceed the JVM's basic limitations, Derby throws the "statement too complex" error.
There have been discussions about ways to fix this, for example see:
DERBY-6784
DERBY-6301, or
DERBY-216
But for now, your best approach is probably to find a way to express your query without generating such a large and complex IN clause.
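One simple way to avoid the large IN clause altogether (plainly a different technique from the temp-table solution below, and only a sketch) is to query by type alone and filter the excluded IDs in Java. This assumes a getCustomerId() accessor on Customer, which is not shown in the original code, and it loads every customer of that type into memory, so it is only reasonable if that result set is manageable:

public List<Customer> getCustomersNotIn(String custType, List<Integer> customersIDs) {
    // No IN clause at all: fetch by type, then drop the excluded IDs in memory
    Set<Integer> excluded = new HashSet<>(customersIDs);
    TypedQuery<Customer> query = em.createQuery(
            "from Customer where type = :custType", Customer.class);
    query.setParameter("custType", custType);
    List<Customer> result = new ArrayList<>();
    for (Customer c : query.getResultList()) {
        if (!excluded.contains(c.getCustomerId())) {  // getCustomerId() is assumed
            result.add(c);
        }
    }
    return result;
}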
OK, here is the solution that worked for me. I could not change the part that generates the customer ID list, since that is not possible for me, so the solution has to be within this method. Bryan, your explanation was the best one; I am still confused about how the "in" clause worked perfectly with a table. Please see the solution below.
public List<Customer> getCustomersNotIn(String custType, List<Integer> customersIDs) {
    // INSERT customersIDs INTO THE TEMP TABLE
    storeCustomerIdsIntoTempTable(customersIDs);
    // I AM NOT SURE HOW, BUT THE "not in" CLAUSE WORKED AGAINST A TABLE WHILE IT DIDN'T WORK WHEN PASSING LIST VALUES.
    TypedQuery<Customer> query = em.createQuery(
            "select c from Customer c where c.customerType = :custType and c.customerId not in (select t.customerId from TempCustomer t)",
            Customer.class);
    query.setParameter("custType", custType);
    List<Customer> customerList = query.getResultList();
    // REMOVE THE DATA FROM THE TEMP TABLE
    deleteCustomerIdsFromTempTable();
    return customerList;
}
private void storeCustomerIdsIntoTempTable(List<Integer> customersIDs) {
    // I ENDED UP CREATING A PHYSICAL TEMP TABLE, INSTEAD OF A PURELY IN-MEMORY TABLE
    TempCustomer tempCustomer = null;
    try {
        tempCustomerDao.deleteAll();
        for (Integer customerId : customersIDs) {
            tempCustomer = new TempCustomer();
            tempCustomer.customerId = customerId;
            tempCustomerDao.save(tempCustomer);
        }
    } catch (Exception e) {
        // Do logging here
    }
}
private void deleteCustomerIdsFromTempTable() {
    try {
        // Delete all data from the TempCustomer table to start fresh
        int deletedCount = tempCustomerDao.deleteAll();
        LOGGER.debug("{} customers deleted from temp table", deletedCount);
    } catch (Exception e) {
        // Do logging here
    }
}
JPA and the underlying Hibernate simply translate it into a normal JDBC-understood query. You wouldn't write a query with 1400 elements in the IN clause manually, would you? There is no magic: whatever doesn't work in plain SQL won't work in JPQL either.
I am not sure how you get that list (most likely from another query) before you call that method. Your best option would be joining those tables on the criteria used to get those IDs. Generally you want to execute correlated filters like that in one query/transaction, which means one method instead of passing long lists around.
I also noticed your customerId is a double, which is a poor choice for a PK. Typically people use a long (autoincremented/sequenced, etc.). And I don't get the "temp table" logic.
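To illustrate the single-query join suggested above, here is a minimal sketch assuming, hypothetically, that the excluded IDs come from an Order entity with a customerId attribute; both names are placeholders for whatever query actually produced the ID list in the original application:

public List<Customer> getCustomersNotIn(String custType) {
    // Let the database compute the exclusion set instead of shipping ~1400 IDs from Java.
    // "Order" and "o.customerId" are hypothetical placeholders.
    TypedQuery<Customer> query = em.createQuery(
            "select c from Customer c "
            + "where c.customerType = :custType "
            + "and c.customerId not in (select o.customerId from Order o)",
            Customer.class);
    query.setParameter("custType", custType);
    return query.getResultList();
}

This keeps the correlated filtering in one query/transaction, as recommended above.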