Dynamic limit in Spring Repository find Request - spring-data-jpa

I have a repository where I select and process 100 customers every day.
This is done with:
List<Customer> customer = customerRepository.findFirst100();
That is clear so far.
But the limit of 100 needs to be configurable.
So I added a variable, which is read from the config file at every startup:
@Value("${customers.daily-limit:100}")
private Integer dailyLimit;
List<Customer> customer = customerRepository.findFirst100();
customer.stream().limit(dailyLimit).forEach(c -> {...})
The problem is that this way I am limited to the maximum of 100 from findFirst100 and can't exceed it. I could use findAll(), but if I have over 100,000 customers in my repository, the performance would be very bad if I load them all just to take 200 instead of 100.
So, is there any way I can limit the number of customers it loads with the dynamic parameter?
Thank you for your answer.
Edit:
So you mean something like this?
Pageable pageable = PageRequest.of(0, dailyLimit, Direction.ASC, "id");
Page<Customer> pagedResult = customerRepository.findAll(pageable);
List<Customer> customers = pagedResult.getContent();
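For reference, a minimal end-to-end sketch of this approach (assuming CustomerRepository extends a standard Spring Data JpaRepository; the service class and method names here are illustrative):
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.stereotype.Service;

@Service
public class DailyCustomerJob {

    // Configurable limit, read from the config file at startup (defaults to 100)
    @Value("${customers.daily-limit:100}")
    private Integer dailyLimit;

    @Autowired
    private CustomerRepository customerRepository;

    public void processDailyCustomers() {
        // The page size is applied in the generated SQL (LIMIT/TOP),
        // so only dailyLimit rows are ever loaded from the database
        Pageable pageable = PageRequest.of(0, dailyLimit, Sort.Direction.ASC, "id");
        List<Customer> customers = customerRepository.findAll(pageable).getContent();
        customers.forEach(c -> { /* process each customer */ });
    }
}
Because the limit is applied in the query itself rather than on a streamed result, raising customers.daily-limit to 200 loads exactly 200 rows.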

Related

Dynamics CRM + Plugin Code to Calculate Sum of fields across records and update in another record

I have the below requirement to be implemented in plugin code on an entity, say 'Entity A'.
Below is the data in 'Entity A':
Record 1 with field values
Price = 100
Quantity = 4
Record 2 with field values
Price = 200
Quantity = 2
I need to add the values of the fields and store the result in a new record. Example shown below:
Record 3
Price = 100 + 200 = 300
Quantity = 4 + 2 = 6
Entity A has a button named "Perform Addition" and once clicked this will trigger the plugin code.
I need some ideas/pseudocode on how I can implement this. The example explained above is just a simpler version of my entity. In reality the entity has more than 60 fields, and for each of those fields I need to perform the sum and update it in the third record. Also, the number of records the addition is performed over can be more than 2.
Hence, the more records there are, the more I will have to loop through each record and perform the sum. I would like to know if there are simpler and better ways of writing this logic.
Just need guidance on how the logic should be written.
Any help would be appreciated.
Solution:
I tried the code below, as suggested by the answer, and it worked.
AttributeList is the list of fields I need to sum. All fields are decimal.
Entity entityA = new EntityA(); // EntityA is the early-bound class for 'Entity A'
entityA.Id = new Guid("Guid String");
// Retrieve all source records in one call
var sourceEntityDataList = service.RetrieveMultiple(new FetchExpression(fetchXml)).Entities;
foreach (var attribute in AttributeList)
{
    // Sum this attribute across all retrieved records, treating missing values as 0
    entityA[attribute] = sourceEntityDataList.Sum(e => e.Contains(attribute) ? e.GetAttributeValue<decimal>(attribute) : 0);
}
service.Update(entityA);
I recently did something similar for a client. We looked into using rollup fields, but they wanted the results more quickly than 12 hours.
The records containing the values that we're summing are the child records and the record containing the results is the parent.
The way it works is:
From the plugin's target entity, get the parent id.
Retrieve the parent.
Retrieve its children (via LINQ, FetchXML, or a QueryExpression).
Sum the values from the children. Since you have a lot of fields to sum, you could create a list of the field names, then use a method like this (assuming they're all Money fields):
private decimal sum(string field) => Records.Sum(b => b.GetAttributeValue<Money>(field)?.Value ?? 0);
If the fields are different types, you could also create different sum methods like sumMoney, sumInt, etc. You'd then need to separate the fields by type and pass them to their appropriate method.
Populate the results on the parent Entity.
Update the parent record in Dynamics.
If the information is not that time-sensitive, maybe look into the out-of-the-box rollup fields.

Specifying what data to return from a REST query

What is the preferred way to specify which columns to return for a resource?
A resource is a noun, so when I say GET employees, I can specify query parameters to get a limited set of employees. But what about the info on each employee? If the employees table has 12 columns but I want only three, what is the best way to specify them? Or do I treat them as different resources?
GET employees(all columns)
GET employees(name, age)
GET employees(id, salary)
I have seen suggestions such as (note to overzealous editors: fictitious example below, please don't obfuscate it with markdown syntax)
http://path/to/server/employees/?q=queryparams&cols=col1,col5,co7
but that seems to be mixing the data to return with the query string. It should work, but it seems inelegant.
Usually, REST results should contain all columns except big or complex properties.
GET /employees returns a list of employees (possibly paged);
GET /employees/100 returns the employee with all columns of primitive types;
GET /employees/100/photo returns the big binary photo property.
In general, remote services should return coarse-grained objects, because network latency makes many small requests expensive.
According to the JSON API standard, you can include related objects in the result with the include parameter:
GET /employees/100?include=manager,salary
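The same standard also addresses the original column-selection question through sparse fieldsets, e.g.:
GET /employees/100?fields[employees]=name,age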
You want to use @QueryMap in the Retrofit REST API.
For example:
In the API service:
#GET("/employees")
Call<List<Employees>> getEmployees(
#QueryMap Map<String, String> options
);
In the Activity:
private void getEmployees() {
    Map<String, String> data = new HashMap<>();
    data.put("q", "queryparams");
    data.put("cols", "col1,col5,co7");
    // simplified call to request the employees with an already initialized service
    Call<List<Employees>> call = Service.getEmployees(data);
    call.enqueue(…);
}
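With those map entries, Retrofit appends each key/value pair to the URL as a query parameter, so the call requests /employees?q=queryparams&cols=col1,col5,co7.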
For more details, please see the Retrofit docs: Retrofit-Rest

Entity framework get customers by zip code

So I have my Entity Framework 5 set up. I have a Customers table in the database. What would be the most efficient way to get the customers of a given zip code, for example 94023? I have this:
var customersOfLosAltos =
(myDbContext.CreateObjectSet<Customer>()).Where(c=>c.Zip == "94023");
But, intuitively, that seems pretty inefficient because, as I understand it, it basically retrieves all customers from the data source and then filters them by the given zip. It might be OK if I only have a few hundred customers, but what if I have a million customers?
Any thoughts? Thanks.
as I understand it, it basically retrieves all customers from the data source and then filters them by the given zip
Your understanding is wrong. Entity Framework turns your code into a SQL query, so what the server actually returns is the result of the query
select * from Customer where Zip = '94023'
If you changed your code to
var customers = myDbContext.CreateObjectSet<Customer>().ToList();
var customersOfLosAltos= customers.Where(c=>c.Zip == "94023");
then because of that .ToList() it now does an unfiltered query against the database and then filters in memory on the client down to just the customers you want. This is why you want to keep your query as an IQueryable for as long as possible before you get the results, because any tweaks or changes you make to the query propagate back to the query performed on the server.
To make your query even more efficient, you could add a Select clause:
var lastNamesOfCustomersOfLosAltos = (myDbContext.CreateObjectSet<Customer>())
.Where(c=>c.Zip == "94023")
.Select(c=>c.LastName);
SQL Server now performs the query (when you retrieve the results via a ToList(), in a foreach, etc.):
select LastName from Customer where Zip = '94023'
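One further point for the million-customer case: the generated query is only as fast as the database lets it be, so make sure the Zip column is indexed; otherwise SQL Server will scan the whole table even though Entity Framework sends the correct WHERE clause.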

Best Way to Sequentially Parse Through a Table in T-SQL

I'm writing a stored procedure in SQL Server and hoping someone can suggest a more computationally efficient way to handle this problem:
I have a table of Customer Orders (i.e., "product demand") data that contains 3000 line items. Each record expresses the Order Qty for a specific product.
I also have another table of Production Orders (i.e., "product supply") data that contains about 200 line items. Each record expresses the Qty Available for each specific product.
The problem is that there is typically less supply than demand and, therefore, the Customer Order table contains an Allocation Priority value that shows each Customer Order's position in line to receive product.
What's the best way to allocate Qty Available in Production Orders to the Order Qty in Customer Orders? Note that you can't allocate more to each Customer Order than has been ordered.
I can do this by creating a WHILE loop and doing the allocation product-by-product, line-by-line, but it is very slow.
Is there a faster set-based way to approach this problem?
I don't have data to test against. This does not try to fill partial quantities.
select orders.custID, orders.priority, orders.prodID, orders.qty, SUM(cumu.qty) as cumu
from orders
join orders as cumu
  on cumu.prodID = orders.prodID
  and cumu.priority <= orders.priority -- running total of demand at equal or higher priority
join available
  on available.prodID = orders.prodID
group by orders.custID, orders.priority, orders.prodID, orders.qty, available.qty
having SUM(cumu.qty) <= available.qty -- keep only orders whose cumulative demand fits within the supply
order by orders.custID, orders.priority, orders.prodID
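As a side note, on SQL Server 2012 and later the cumulative quantity can also be computed with a window function, SUM(orders.qty) OVER (PARTITION BY orders.prodID ORDER BY orders.priority), which avoids the self-join.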

How to update a long list of documents (rows) in MongoDB based on the expiring date?

I need a cron job that finds all the rows where the travelling path has expired (travellingPath.endDate < now) and sets travellingPath.isActive=false. The travelling path has a toCity property. I then want to update the quantity of the toCity based on the quantity of the travellingPath and another settings collection.
For example:
a travelling path expired
cron job catches it
get the toCity from the travelling path
get the conversionRate from another collection
based on the toCity.quantity, travellingPath.quantity, conversionRate and random I update the toCity.quantity to a new value, and I also might change the toCity.owner
update the travelling path to isActive=false
My idea would be to query every travelling path whose endDate has passed, but this could return 100,000 results, so it's not great. I might limit it to 250 results per run to keep it manageable. Then for each travellingPath I get its toCity, make the calculations, and update the toCity and the travellingPath.
But this seems so inefficient...
Do you have better ideas? Thanks (:
Yes, that's the way to go. MongoDB updates don't support expressions that depend on other fields. So you're stuck with this:
Retrieve documents one by one or in small batches;
Calculate new values for the fields;
Send updates to the database (one by one or in batches);
Get next portion of documents, repeat until done.
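As a rough sketch, that loop could look like this with the MongoDB Java sync driver (the database and collection names here are placeholders, the batch size of 250 comes from the question, and the conversion-rate calculation is left as a stub):
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import org.bson.Document;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;

public class ExpireTravellingPathsJob {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost")) {
            MongoDatabase db = client.getDatabase("mydb"); // placeholder name
            MongoCollection<Document> paths = db.getCollection("travellingPaths"); // placeholder name
            MongoCollection<Document> cities = db.getCollection("cities"); // placeholder name

            while (true) {
                // Next batch of expired paths that are still active
                List<Document> batch = paths.find(Filters.and(
                                Filters.lt("endDate", new Date()),
                                Filters.eq("isActive", true)))
                        .limit(250)
                        .into(new ArrayList<>());
                if (batch.isEmpty()) {
                    break; // nothing left to process
                }
                for (Document path : batch) {
                    Document city = cities.find(Filters.eq("_id", path.get("toCity"))).first();
                    if (city == null) {
                        continue;
                    }
                    // ... look up conversionRate in the settings collection and compute
                    // the new quantity from city, path, conversionRate and random ...
                    double newQuantity = 0; // stub for the real calculation
                    cities.updateOne(Filters.eq("_id", city.get("_id")),
                            Updates.set("quantity", newQuantity));
                    // Deactivate the path so the next batch query skips it
                    paths.updateOne(Filters.eq("_id", path.get("_id")),
                            Updates.set("isActive", false));
                }
            }
        }
    }
}
Filtering on isActive in the batch query is what makes the loop terminate: each processed path drops out of the next find.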