Trigger on Opportunity scenario - Apex

I want to write a trigger on Opportunity.
There is a field on Account, Opportunity_Amount__c.
Suppose Opportunity_Amount__c has a value of 200 and the sum of the account's related Opportunities is 160: if you add an Opportunity with an amount of 60, it should throw an error, while adding 40 or 30 should save without error.
In short, the sum of all related Opportunities should not be greater than the value in Opportunity_Amount__c.
Thanks
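A minimal sketch of such a trigger, assuming the standard Amount field holds each Opportunity's value (the trigger name and error message are illustrative, not from the question):

trigger OpportunityAmountLimit on Opportunity (before insert) {
    // Collect the parent Account Ids of the incoming Opportunities.
    Set<Id> accountIds = new Set<Id>();
    for (Opportunity opp : Trigger.new) {
        if (opp.AccountId != null) {
            accountIds.add(opp.AccountId);
        }
    }
    if (accountIds.isEmpty()) {
        return;
    }

    // The per-account limit, from the custom field named in the question.
    Map<Id, Account> accounts = new Map<Id, Account>(
        [SELECT Id, Opportunity_Amount__c FROM Account WHERE Id IN :accountIds]);

    // Current sum of each account's existing Opportunities.
    Map<Id, Decimal> totals = new Map<Id, Decimal>();
    for (AggregateResult ar : [SELECT AccountId accId, SUM(Amount) total
                               FROM Opportunity
                               WHERE AccountId IN :accountIds
                               GROUP BY AccountId]) {
        totals.put((Id) ar.get('accId'), (Decimal) ar.get('total'));
    }

    // Add each new Opportunity to its account's running total and reject it
    // if the total would exceed Opportunity_Amount__c.
    for (Opportunity opp : Trigger.new) {
        Account acc = accounts.get(opp.AccountId);
        if (acc == null || acc.Opportunity_Amount__c == null) {
            continue;
        }
        Decimal existing = totals.get(opp.AccountId);
        Decimal total = (existing == null ? 0 : existing)
                      + (opp.Amount == null ? 0 : opp.Amount);
        totals.put(opp.AccountId, total); // handles several new opps per account
        if (total > acc.Opportunity_Amount__c) {
            opp.addError('Sum of opportunity amounts (' + total +
                ') would exceed the account limit of ' + acc.Opportunity_Amount__c);
        }
    }
}

With this in place, inserting an Opportunity of 60 against an account whose existing Opportunities sum to 160 and whose Opportunity_Amount__c is 200 fails with the error, while 40 or 30 saves fine.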

Related

How do I make the trigger run after all the data is inserted by the batch class?

I want to use an Apex batch class to insert 10,000 records into an object called A, and then use an after insert trigger to set the Weight field of all 10,000 records to the largest Weight among them (say, 100).
But right now, if the batch size is 500, the largest Weight value among each chunk of 500 records is applied only to those 500 records, and the next chunk of 500 gets its own maximum.
For example, if the largest Weight among the first 500 records is 50, records 1-500 get a Weight of 50; if the largest Weight among the next 500 records is 100, records 501-1,000 get a Weight of 100.
What I want is for the largest Weight across all 10,000 records to be applied to every record. How shall I do it?
Here's the code for the trigger I wrote.
Here's the code for the trigger I wrote.
trigger myObjectTrigger on myObject_status__c (after insert) {
    // Records inserted in this chunk of the transaction.
    List<myObject_status__c> objectStatusList = [SELECT Id, Weight FROM myObject_status__c
                                                 WHERE Id IN :Trigger.newMap.keySet()];
    // Largest Weight visible so far: only this chunk plus chunks already committed.
    Decimal maxWeight = [SELECT Weight FROM myObject_status__c
                         ORDER BY Weight DESC LIMIT 1].Weight;
    for (Integer i = 0; i < objectStatusList.size(); i++) {
        objectStatusList[i].Weight = maxWeight;
    }
    update objectStatusList;
}
A trigger will not know whether the batch is still going on. A trigger works on a scope of at most 200 records at a time and normally sees only those. There are ways around it (create some static variable?), but even then it would be limited to the batch's chunk size, whatever came into a single execute(). So if you're running in chunks of 500, not even a static variable in a trigger would help you.
A couple of ideas:
How exactly do you know it'll be 10K? Are you inserting them based on another record? Are you using the "Iterator" variant of batch? Could you "pre-scan" the records you're about to insert, figure out the max weight, then apply it as you insert, eliminating the need for an update?
If it's never going to be bigger than 10K (and there are no side effects, no DMLs running on update), you could combine Database.Stateful and the finish() method: keep updating the max value as you go through the execute() calls, then in finish() update the records one last time (see the sketch after these answers). Cutting it real close, though.
Can you "daisy chain"? Submit another batch from this batch's finish(), passing the same records and the max you figured out.
Can you stamp the records inserted in the same batch run with the same value, for example by putting the batch job's Id into a hidden field? Then have another batch (daisy-chained?) look for them, find the max in the given range, and apply it to any records that share the batch job Id but don't have the value applied yet.
Set the weight in the finish() method of the batch class; it runs once all the chunks have finished. Track the max weight in an instance variable on a Database.Stateful batch class (a plain static variable would not survive between execute() calls).
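A minimal sketch of the Database.Stateful idea, written as a standalone batch over the question's myObject_status__c object (the class name is illustrative). finish() runs once, after the last chunk, so the running maximum it applies is the true maximum across all records; note it updates every row in one transaction, which is why the 10K case is "cutting it real close" to the DML row limit:

global class MaxWeightBatch implements Database.Batchable<sObject>, Database.Stateful {
    // Survives between execute() calls because of Database.Stateful.
    private Decimal maxWeight = 0;

    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id, Weight FROM myObject_status__c');
    }

    global void execute(Database.BatchableContext bc, List<myObject_status__c> scope) {
        // Track the running maximum across all chunks.
        for (myObject_status__c rec : scope) {
            if (rec.Weight != null && rec.Weight > maxWeight) {
                maxWeight = rec.Weight;
            }
        }
    }

    global void finish(Database.BatchableContext bc) {
        // Runs once, after the last chunk: stamp the global max on every record.
        List<myObject_status__c> toUpdate = [SELECT Id FROM myObject_status__c];
        for (myObject_status__c rec : toUpdate) {
            rec.Weight = maxWeight;
        }
        update toUpdate;
    }
}

Started with Database.executeBatch(new MaxWeightBatch(), 500); the chunk size then no longer matters for correctness, only for governor limits.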

Perform calculation on a different number of rows for each ID

Hopefully someone smart here can help me with a description of how to solve this issue. I am relatively new to SPSS and want to select cases with a certain requirement.
I have a group of identities (IDs) who have each taken a math test multiple times: 1,000 IDs, where each person (ID) has done the test 10 times. I wanted to find how many of these persons have scored at least 40/50 at least once on this test, and I have managed to do so.
Here is the problem. I now want to calculate, for each individual, the average score of all the tests they took from the first time they scored at least 40 points.
Example: ID 1 has the scores 34, 35, 27, 37, 32, 45, 41, 32, 34, 47.
These are all in 10 different rows, so ID 1 appears in 10 different rows, ID 2 in 10 other rows, and so on. Like this:
ID 1 Score 34
ID 1 Score 35
ID 1 Score 27
As you can see, this person first scored at least 40 points on the 6th attempt, and I want to take the average from this point onwards, so in this case (45+41+32+34+47)/5.
I also want to know whether to consider a person "smart" or not. A smart person is someone with at least 2 math scores of 40+ (they don't have to be consecutive; 2 separate occasions is fine).
How do I do that?
In the following code I assume you have a variable that identifies the order of the tests for each ID:
sort cases by ID TestNumber.
compute ScoreOver40 = (score >= 40). /* flags all scores >= 40.
compute seq = ScoreOver40.
* once seq is 1, all following seq values for the same ID also become 1.
if ID = lag(ID) and lag(seq) = 1 seq = 1.
if seq = 1 scoresAfterFirst40 = score.
* now aggregate by ID.
dataset declare agg.
aggregate /outfile=agg /break=ID
    /meanAfterFirst40 = mean(scoresAfterFirst40)
    /NumScores40P = sum(ScoreOver40).
In the new dataset called agg you should find, for every ID, the mean of scores from the first score of 40 or more onwards, and the count of scores of 40 or more across all 10 tests.
EDIT:
Now you can use the aggregated data for further analysis. For example, you can determine which IDs had two or more high (40+) scores:
dataset activate agg.
compute GoodAtMaths = (NumScores40P >= 2).
exe.
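If you need those per-ID results back on the original case-level rows (for example, to filter the raw scores by GoodAtMaths), a table-lookup merge is one option; a minimal sketch, assuming the original case-level dataset is the active one and is still sorted by ID (as done by the sort cases command above):
match files /file=* /table=agg /by ID.
exe.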

TimescaleDB query to select rows where column value changed from previous row

Just recently started using TimescaleDB with Postgres to handle most requests for data.
However, I'm running into an issue where I have a horribly inefficient query for a time series of data.
It's a data series that can cover any length of time, with specific integer values.
Most of the time the value will be the same unless there's an anomaly, so rather than fetching 10,000+ rows of data, I would like to aggregate this into "time blocks".
Let's say there are 97 items in a row where the value is 100 (a new item every 5 minutes), at item #98 the value is 48 for 5 items in a row, and then it goes back up to 100 for another 2,900 rows.
I don't want to fetch 3,002 items to display this data. I should only need to fetch 3 items:
1 item that says the value is 100 from a startDate
1 item that says the value is 48 from a startDate after #1
1 item that says the value is 100 again from a startDate after #2
But I'm having some trouble figuring out how I can do this with TimescaleDB.
Basically: if the value is the same as the last value, aggregate it. That's all I need it to do.
Does anyone know how to construct a VIEW for this kind of situation in TimescaleDB using continuous aggregation (or a faster way, if there is one)?
You can achieve the desired result with window functions and a subselect:
SELECT time, value FROM (
    SELECT
        time,
        value,
        value - LAG(value) OVER (ORDER BY time) AS diff
    FROM hypertable) ht
WHERE diff IS NULL OR diff != 0;
You use a window function to calculate the diff to the previous row and then filter all the rows where the diff is 0 in the outer query.
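If you want this available as a named object, a plain view over the same query is one option; a continuous aggregate won't work here, since continuous aggregates don't support window functions such as LAG. A minimal sketch, assuming the hypertable and column names used above (the view name is illustrative):
CREATE VIEW value_changes AS
SELECT time, value FROM (
    SELECT
        time,
        value,
        value - LAG(value) OVER (ORDER BY time) AS diff
    FROM hypertable) ht
WHERE diff IS NULL OR diff != 0;
A SELECT * FROM value_changes then returns only the first row of each run of equal values, i.e. the 3 "time block" rows from the example rather than all 3,002.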

Crystal Report Grouping

I'm having some problems with grouping and running totals in Crystal Reports. I've found loads of info whilst googling, but I'm looking for a bit more.
Currently I have 3 tables: jobs, times and costs. I have a running total to calculate the sum of time.hours and another for costs.amount.
The issue I'm having is that the costs are being multiplied by the number of time.hours records that exist. So if a job has 3 time.hours records and there is only one costs.amount record, that cost is being multiplied by the number of time records.
So what I end up with is this:
Job  Hours  Costs
123    5      0
123    4    100
123    3      0
=========================
      12    100
which is exactly right. Now, if I add another running total without resetting, my total ends up as:
========================
      12    300
I think this is happening because of the way I have them grouped, but I'm not sure how to make them group only on job.ID.
This was solved by using ( ) to break the formula down:
({timesheet.Time_Date} <={?timDateUJC} and {jobsheet1.Job_JobCompleted} = 0)
or
({materialcost.Inv_Date} <= {?supDateUJC} and {jobsheet1.Job_JobCompleted} = 0)

How do I subtotal on groups based on a condition

My report is grouped on clinic, then staffname, with subtotals by clinic. I need to count patients by staff where they had more than 1 admit date. I can get the correct grand total, but on the detail and subtotal levels it is a progressive (running) number.
Here's what I want
clinic1
staffname1 10
staffname2 95
subtotal 105
clinic2
staffname3 6
subtotal 6
grand total 111
Here is what I get:
clinic1
staffname1 10
staffname2 105
subtotal 105
clinic2
staffname3 111
subtotal 111
grand total 111
A lot of this may depend on the structure of your data, e.g., what is in your "detail" level. I am also assuming that you want to count how many patients had more than one admit date, not the total number of admits for patients with multiple admits. Given that, and assuming a patient will only appear once per admit date, this should work:
Group by patient also, so it's clinic -> staff -> patient, but suppress that group.
Create a formula that checks whether the number of records in each patient group is more than 1, something like this: if count({patient},{patient}) > 1 then 1 else 0
Take the formula you just created and use it to make a summary field wherever you want a total; e.g., in the staff header it will give you a count for that staff member, in the clinic header it will do so for the clinic, etc.
Something else to consider: I'm guessing this could be intended to gauge staff quality by seeing how many of their patients had to seek additional treatment. Even if that's not the full intent, whatever this is being used for could be skewed by staff encountering more or fewer patients. For example, staff with 10 readmits out of 100 visits would look worse than someone with only 5 readmits who had also seen only 20 patients.
So, along with the metric you are asking about, I would also add a ratio metric. In the staff header this would be straightforward: count({patients}) / distinctcount({patients}), which gives you the ratio of total to distinct visits. Keep in mind also that this could skew high for a staff member who had, say, 50 patients, but one of whom came back a dozen times.
To get a count of the fields that have a value greater than 1:
Assuming the count field is a database field whose value comes directly from the database, create a formula #Count with the code below and place the formula in the details section:
if {databasefield.count} > 1
then 1
else 0
Now take a summary of the #Count formula in the required group footers.
Let me know if you are looking for something different.
Edit:
If I understood your comment correctly: even though you take a distinct count, you can still use it in your calculations by storing the value in a shared variable, something like the code below, and then retrieve the value from that variable elsewhere.
Shared NumberVar count;
count := DistinctCount({patient.patientid});
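To read the value back, declare the same shared variable in a later formula (one that evaluates after the formula above, e.g. in a group or report footer); a minimal sketch, reusing the placeholder field naming above:
// placed in a formula that evaluates after the one above
Shared NumberVar count;
// the last expression is the formula's result: the stored distinct count
count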