Postgres, Supabase and Math - postgresql

I am new to Supabase and I know it uses Postgres. I have a database containing 2 tables (for now):
meals
days of diet
meals contains: name, photo, calories, protein, carbs, etc.
days of diet contains: an array of meal ids, total calories, total protein, total carbs, etc.
Can I set Postgres to do the math for me? Something like:
day of diet calories: sum all the calories of the meals in the array
Also, since I can't have a foreign key on an array, it would have to be something like:
day of diet calories: for each meal id, get the meal and add its calories to the sum
Thank you for answering - I know doing it the first way is possible, but I don't think it's the best way. Hope you have a great day.
I can use my "backend" to add a meal to a day of diet, take its "total calories" and add the new meal's calories to it. That works great, BUT what if I edit a meal and change its calories? Then the day of diet is wrong. I could code my backend to "refresh" the data of every day of diet containing the edited meal, but that seems like a slow approach.
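One way to let Postgres do the math is to compute the totals on read rather than store them. This is only a minimal sketch, and it assumes a join table day_meals(day_id, meal_id) in place of the meal-id array (all names here are hypothetical):
-- Hypothetical sketch: a view that derives day totals from the meals themselves,
-- so editing a meal's calories can never leave a stale total behind.
CREATE VIEW day_totals AS
SELECT dm.day_id,
       SUM(m.calories) AS total_calories,
       SUM(m.protein)  AS total_protein,
       SUM(m.carbs)    AS total_carbs
FROM day_meals dm
JOIN meals m ON m.id = dm.meal_id
GROUP BY dm.day_id;
Because nothing is cached there is nothing to refresh; triggers (as in the snippet below) are the alternative when the totals have to be stored in a column.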

I almost finished this with Postgres Functions and Triggers :P Going well, will post an answer once I finish.
-- Trigger function from my work in progress (the function name here is illustrative);
-- it recalculates the cached macros on ingredient rows when the per-100g values change.
CREATE OR REPLACE FUNCTION refresh_ingredient_macros() RETURNS trigger AS $$
BEGIN
    UPDATE ingredient
    SET iname    = NEW.iname,
        calories = NEW.calories_per_100 * ingredient.grams / 100,
        protein  = NEW.protein_per_100  * ingredient.grams / 100,
        carbs    = NEW.carbs_per_100    * ingredient.grams / 100,
        typ      = NEW.typ
    WHERE ingredient = NEW.id;  -- rows that reference the edited record
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
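Attaching it would then look something like this (the trigger and table names are assumptions; the table holding the per-100g values isn't named above):
-- Hypothetical wiring: fire the function after the base record is updated.
CREATE TRIGGER refresh_ingredient_macros_trg
AFTER UPDATE ON base_ingredient  -- assumed name of the table holding the *_per_100 values
FOR EACH ROW EXECUTE FUNCTION refresh_ingredient_macros();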

Related

How to query a reference table for a value between dates for a specified category in Apps Script?

I have a background in data analytics and have done a similar workflow in SQL but am brand new to Apps Script. I am a bit at a loss on where to even start in Apps Script. Any advice or pointing me in the direction of useful examples would be truly appreciated!
Currently, I have a reference table on one sheet with categories and values and the start and end date each value applies to. Then I have a data table on another sheet where I add an entry date and a category, and I would like to have Apps Script write the corresponding value for that category on that date.
Reference table data (a blank end date means that is the current rate):
Category  Value  Start date  End date
A         25     01/01/2022  03/31/2022
B         40     01/01/2022
C         30     01/01/2022
A         15     04/01/2022
The data table below is where the entry date and the category are added manually over time. I want to use the reference table to write the value for that category for that entry date.
Entry Date  Category  Value
02/20/2022  B         40
02/27/2022  A         25
03/20/2022  A         25
04/16/2022  C         15
05/12/2022  A         30
06/02/2022  B         40
How do you query the reference data for that entry date and category to find the row with the corresponding value?
Description
As I said, I'm not good at QUERY, but I finally got something to work. I'm sure others can improve on it.
First I created a named range, TestQuery, for the table of data. I could just as easily have used the range "A1:D6".
Next I filled in the blank End Date with =TODAY() so it has a date value. Then I built my query.
=QUERY(TestQuery,"select B where ( ( A = '"&B11&"' ) and ( date '"&TEXT(A11,"yyyy-mm-dd")&"' > C ) and ( date '"&TEXT(A11,"yyyy-mm-dd")&"' < D ) )")
Reference
Query Language
Compare Dates in Query
Getting data from a table on a sheet
function getData() {
  const ss = SpreadsheetApp.getActive();
  const sh = ss.getSheetByName("Sheet0");
  const values = sh.getRange("A2:D" + sh.getLastRow()).getValues();
  Logger.log(JSON.stringify(values)); // 2d array
}
A2 is assumed to be the upper left corner of the data
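Building on that, a rough sketch of doing the lookup entirely in Apps Script instead of QUERY (the sheet name "Reference" and the A:D layout are assumptions):
// Sketch: return the reference value for a category whose date window contains
// the entry date. A blank End date is treated as "still current".
function lookupValue(entryDate, category) {
  const ss = SpreadsheetApp.getActive();
  const ref = ss.getSheetByName("Reference"); // assumed sheet name
  const rows = ref.getRange(2, 1, ref.getLastRow() - 1, 4).getValues(); // A2:D
  for (const [cat, value, start, end] of rows) {
    const started = entryDate >= start;
    const current = end === "" || entryDate <= end; // blank end date = current rate
    if (cat === category && started && current) return value;
  }
  return null; // no matching reference row
}
Called once per data-table row, it returns the value to write into the Value column.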

Optimize KDB query time to get rolling average price from each contributor

Each time a contributor gives an updated price, I want to use this quote along with the latest prices from the other contributors to calculate the overall average at that moment.
t:`time xasc flip (`userID`time`price)!(`quote1`quote2`quote3`quote3`quote3`quote3`quote4`quote2`quote4`quote3`quote2`quote3`quote1`quote3`quote4`quote1`quote4`quote2`quote2`quote4;(21:11:37 03:13:29 15:35:39 09:59:13 04:34:15 13:09:01 21:21:55 16:54:39 04:03:04 18:22:39 17:05:44 05:08:40 07:35:50 15:46:15 17:32:29 19:42:47 03:28:48 04:20:03 14:16:55 09:02:12);86.4 84.4 54.26 7.76 63.75 97.61 53.97 71.63 38.86 52.23 87.25 65.69 96.25 37.15 17.45 58.97 95.51 61.59 70.25 35.5)
The query below produces the desired output:
delete userIDPriceList,userIDComps from t,'raze {[idx;tab] select avgPrice:avg price, userIDPriceList:price,userIDComps:userID from select last price by userID from t where i <= idx}[;t] each til count t
The userIDPriceList and userIDComps columns are not required in the final output.
Performance is slow and I am looking for a better way to do the calculation.
q) \t do[200000;delete userIDPriceList,userIDComps from t,'raze {[idx;tab] select avgPrice:avg price, userIDPriceList:price,userIDComps:userID from select last price by userID from t where i <= idx}[;t] each til count t]
10152j
Thanks in advance
Based on your clarified requirements, another approach is to accumulate using scan:
update avgPrice:avg each{x,(1#y)!1#z}\[();userID;price] from t
Igor's solution is faster if the data is static (i.e. you can prep the table with the attribute once).
The code below gives the average of all previous prices for a given userID, including the current row:
ungroup 0!select time, price, avgPrice: avgs price by userID from t
Just ensure that t is appropriately sorted by time before getting averages.
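For reference, avgs is the running (cumulative) average, which is what produces the per-row figure here:
q)avgs 10 20 30 40
10 15 20 25f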
According to your comment to one of the answers, you're "trying to take the average prices of each userID as of the time of the record while ignoring any future records."
This query will do exactly that:
select userID,time,price,avgPrice:(avgs;price)fby userID from t
Your query (delete userIDPriceList ...) results in something different, as @Anton Dovzhenko pointed out in his comment on your original question.
Update
After reading your comment I think I understand your requirement. You could probably do this:
prices:exec `s#time!price by userID from t;
update avgPrice:avg each flip prices[;time] from t

Indicate more than one record with matching fields

How can I indicate multiple records with the same Invoice number but a different Sales Person ID? Our commissions can be split between multiple salespeople, so there can be two different salespeople per invoice.
For example:
Grouped by: Sales Person ID (no changing this option)
These records are in the Group Footer.
Sales Person ID  Invoice  Invoice Amt  Commissions  (Indicator)
4433             R100     20,000       3,025        * More than one record on the same invoice with a different sales person
4450             R096     1,987        320
4599             R100     20,000       3,025        * More than one record on the same invoice with a different sales person
4615             R148     560          75
4777             R122     2,574        356
If your report has fewer than 1000 invoices, you may try something like this.
This will return true when a second occurrence of the invoice shows up. Then you can do something like set the row background to red.
// Returns true the second (or later) time an invoice number shows up.
// If {Result.InvoiceNumber} is a string such as "R100", declare the array
// as a Global StringVar Array instead.
Global NumberVar Array invoices;
NumberVar nextIndex := Count(invoices) + 1;
If {Result.InvoiceNumber} In invoices Then
    True
Else If nextIndex <= 1000 Then (
    Redim Preserve invoices[nextIndex]; // Preserve keeps the invoices already collected
    invoices[nextIndex] := {Result.InvoiceNumber};
    False
)
Else
    False;
If you want to detect the first occurrence, you will need something more sophisticated.
I think a SQL Expression Field would be a good way to achieve the result you want. You already have an InvoiceNo in each row of data. You just need a SQL Expression Field that uses that InvoiceNo to execute a query that counts the number of salespersons who get a commission on it.
Something along the lines of:
(
    -- Alias the inner table so the correlation is against the report's main table.
    SELECT COUNT(T2.Sales_Person_Id)
    FROM [Table] T2
    WHERE T2.InvoiceNo = [Table].InvoiceNo
)
This will return an integer value representing the number of salespersons associated with the invoice. You can either drop the SQL Expression Field into your Indicator column, or write some other formula to do something special with it.
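For instance, an indicator formula built on such a SQL Expression field might look like the following ({%SalesPersonCount} is a hypothetical name for the field above):
// Flag invoices that are split across more than one salesperson.
If {%SalesPersonCount} > 1 Then
    "* More than one record on the same invoice with a different sales person"
Else
    ""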

Retrieve field based on record field

I have 3 tables
items,
item_units,
order_items
The first table, items, has the list of items that can be ordered.
The second table, item_units, has the units for each item as well as the amount of the item in that unit.
The third table, order_items, has the items that were ordered, i.e. item_code, unit, qty.
Here are the columns for items:
[item_code]
[desc]
Here are the columns for item_units:
[item_code]
[unit]
[amount]
[price]
[default_sprd_sht]
Here are the columns for order_items:
[order_id]
[item_code]
[item_desc]
[unit]
[qty]
[price]
Note the [default_sprd_sht] field: it is a boolean. If it's set to true, this unit is never put into the order_items table; it is the unit used for the calculation.
For example:
If one customer orders 2 six-packs of bread and another orders 3 dozens of bread, the baker needs to know how many pans of bread to make.
Now, a six-pack unit has an amount of 6 breads, meaning 2 * 6 = 12, and a dozen unit has 12 breads, so 12 * 3 = 36. A pan unit has 20 breads. So I need to add up all the ordered bread amounts and divide them by the pan amount, like so:
((2 * 6) + (12 * 3)) / 20 = 2.4
So the first thing I did to create a report for the baker was:
Create a group for order_items.item_code and then order_items.unit.
This needs to be done since the same item and unit combination will be repeated in different orders, and the baker needs to see how many bagels or breads he needs to bake in total.
In the order_items.unit group header I created a formula field that multiplies the group's summed order_items.qty by items_units.amount:
Sum ({order_items.qty}, {order_items.unit}) * {items_units.amount}
That was easy.
But I also need to group all order items if there exists a record in item_units with the same item_code and with [default_sprd_sht] set to true.
This would look like so:
(Sum ({order_items.qty}, {order_items.unit}) * {items_units.amount}) / (get the amount for the unit with the same item_code and [default_sprd_sht] = 1)
I have two problems accomplishing this:
1. How to check if this order item has a unit with the same item_code and [default_sprd_sht] = 1?
2. How to further group order items only if there is a unit with the same item_code and [default_sprd_sht] = 1?
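For reference, the target figure can be written as a plain SQL sketch (the dialect and exact join columns are assumptions, and it presumes exactly one default_sprd_sht unit per item), which could then back a Crystal Command or SQL Expression:
-- Hypothetical sketch: total ordered amount per item divided by the amount of
-- that item's default_sprd_sht unit (e.g. the pan for bread): 48 / 20 = 2.4.
SELECT oi.item_code,
       SUM(oi.qty * iu.amount) * 1.0 / MAX(pan.amount) AS units_to_bake
FROM order_items oi
JOIN item_units iu  ON iu.item_code = oi.item_code AND iu.unit = oi.unit
JOIN item_units pan ON pan.item_code = oi.item_code AND pan.default_sprd_sht = 1
GROUP BY oi.item_code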

how to use multiple arguments in kdb where query?

I want to select the max elements from a table within the next 5, 10, 30 minutes etc.
I suspect this is not possible with multiple elements in the where clause.
Using both the normal < and </: is failing. My code/query is below:
select max price from dat where time</: (09:05:00; 09:10:00; 09:30:00)
Any ideas what I am doing wrong here?
The idea is to get the max price for each row within the next 5, 10, 30... minutes of the time in that row, and not just 3 max prices over the entire table.
select max price from dat where time</: time+\:(5 10 30)
This won't work but should give the general idea.
To further clarify, I want to calculate the max price in 5, 10, 30 minute intervals from time[i] of each row of the table. So for each table row, the max price within x+5, x+10, x+30 minutes, where x is the time entry in that row.
You could try something like this:
select c1:max price[where time <09:05:00],c2:max price[where time <09:10:00],c3:max price from dat where time< 09:30:00
You can parameterize this query however you like. So if you have a list of times, l:09:05:00 09:10:00 09:15:00 09:20:00 ..., you can create a function using a functional form of the query above to work for different lengths of l, something like:
q)f:{[t]?[dat;enlist (<;`time;max t);0b;(`$"c",/:string til count t)!flip (max;flip (`price;flip (where;((<),/:`time,/:t))))]}
q)f l
You can extend f to take different functions instead of max, work for different tables etc.
This works but takes a lot of time: for 20k records, ~20 seconds, which is too much! Any way to make it faster?
dat: update tmlst: time+\:mtf*60 from dat;
dat[`pxs]: {[x;y] {[x; ts] raze flip raze {[x;y] select min price from x where time<y}[x] each ts }[x; y`tmlst]} [dat] each dat;
This constructs a step dictionary to map the times to your buckets:
q)-1_select max price by(`s#{((neg w),x)!x,w:(type x)$0W}09:05:00 09:10:00 09:30:00)time from dat
You may also be able to abuse wj:
q)wj[{(prev x;x)}09:05:00 09:10:00 09:30:00;`time;([]time:09:05:00 09:10:00 09:30:00);(delete sym from dat;(max;`price))]
If all your buckets are the same size, it's much easier:
q)select max price by 300 xbar time from dat where time<09:30:00 / 300-second (5-min) buckets
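For reference, xbar snaps each time down to the start of its bucket, which is what produces the 5-minute groups:
q)300 xbar 09:07:13 09:11:45 09:29:59
09:05:00 09:10:00 09:25:00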