MS Access datasheet with expression at each row - forms

I am creating forms and reports in MS Access and need to display records from the transactions table with an additional field that dynamically computes a conditional running total of the previous rows.
TRANSACTIONS
DATE credit debit
22-01-2015 100 0
23-01-2015 0 50
25-01-2015 0 100
26-01-2015 200 0
I want to display the above table in reports and forms as below; note that the Balance field does not exist in the database table.
TRANSACTIONS
DATE credit debit balance
22-01-2015 100 0 100
23-01-2015 0 50 50
25-01-2015 0 100 -150
26-01-2015 200 0 50
Actually, it is a query that fetches from two tables combined with a UNION clause, and it has no unique id, because a unique_id from the first table can collide with a unique_id from the second table.

I have solved this using the following MySQL-style query:
SELECT
    tbl.*,
    (
        (SELECT SUM(debit)  FROM q_balance WHERE q_balance.entry_date <= tbl.entry_date)
      - (SELECT SUM(credit) FROM q_balance WHERE q_balance.entry_date <= tbl.entry_date)
    ) AS balance
FROM q_balance AS tbl
ORDER BY tbl.entry_date;
Now I have the query and can create forms and reports based on it.
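The correlated-subquery technique above can be sketched locally. This is a minimal Python/SQLite version; the table and column names (q_balance, entry_date) follow the question, the sample data is invented, and the sketch sums credit minus debit so that the first (credit) row comes out positive — the posted query subtracts in the other order, so flip the subtraction to match your ledger convention.

```python
import sqlite3

# Minimal sketch of the running-balance correlated subquery, using SQLite.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE q_balance (entry_date TEXT, credit REAL, debit REAL)")
con.executemany(
    "INSERT INTO q_balance VALUES (?, ?, ?)",
    [("2015-01-22", 100, 0),
     ("2015-01-23", 0, 50),
     ("2015-01-25", 0, 100),
     ("2015-01-26", 200, 0)],
)

# Each correlated subquery sums everything up to and including the row's date.
rows = con.execute("""
    SELECT tbl.*,
           (SELECT SUM(credit) FROM q_balance WHERE entry_date <= tbl.entry_date)
         - (SELECT SUM(debit)  FROM q_balance WHERE entry_date <= tbl.entry_date)
           AS balance
    FROM q_balance AS tbl
    ORDER BY tbl.entry_date
""").fetchall()
for r in rows:
    print(r)
```

Note that each subquery rescans the table per row, which is fine for small tables but slow at scale.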

Related

CloudWatch Insights Query - How to count distinct messages ending with phrase

I'd like to get the count of distinct messages in my log groups.
The format of the message is: Total of [n] rows have been loaded to [table_name]
example message:
#message
Total of 1234 rows have been loaded to table1
Total of 14 rows have been loaded to table3
Total of 345 rows have been loaded to table2
Total of 864 rows have been loaded to table3
and my expected output would be 3 instead of 4. How can I express this in my query?
fields #message
| parse "Total of * rows have been loaded to *" as rows, table_index
| stats count_distinct(table_index)
You might have to filter/parse little differently, but this is the general idea.
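The same parse-then-count-distinct logic can be checked locally before running it in CloudWatch. This is a hypothetical Python replication using a regex in place of the Insights parse glob; the messages are the four examples from the question.

```python
import re

# Local replication of: parse "Total of * rows have been loaded to *"
# followed by stats count_distinct(table_index).
messages = [
    "Total of 1234 rows have been loaded to table1",
    "Total of 14 rows have been loaded to table3",
    "Total of 345 rows have been loaded to table2",
    "Total of 864 rows have been loaded to table3",
]
pattern = re.compile(r"Total of (\d+) rows have been loaded to (\S+)")

# Collect the distinct table names (second capture group).
tables = {m.group(2) for msg in messages if (m := pattern.match(msg))}
print(len(tables))  # 3
```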

Running calculation over two another column

I want to create a running balance column (partition by ID, add minus subtract, order by Date) as below:
ID Date Add Subtract Balance
a 2019/01/01 500 0 500
a 2019/01/02 0 300 200
b 2019/03/01 800 0 800
b 2019/03/10 300 0 1100
I saw the solution once but cannot find it again. As I remember, it involved coalesce, lag, and lead.
Please help, or give me a link to the related question.
With the data you have, you need only the sum() window function:
select *,
       sum(add - subtract) over (partition by id
                                 order by date) as balance
from your_table
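The window-function answer can be sketched against SQLite (3.25+ supports window functions). The table name and sample data come from the question; "add" is quoted because it is a keyword in several dialects.

```python
import sqlite3

# Sketch of the sum() window function: a cumulative sum per id, ordered by date.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE your_table (id TEXT, date TEXT, "add" REAL, subtract REAL)')
con.executemany(
    "INSERT INTO your_table VALUES (?, ?, ?, ?)",
    [("a", "2019-01-01", 500, 0),
     ("a", "2019-01-02", 0, 300),
     ("b", "2019-03-01", 800, 0),
     ("b", "2019-03-10", 300, 0)],
)
rows = con.execute('''
    SELECT *,
           SUM("add" - subtract) OVER (PARTITION BY id ORDER BY date) AS balance
    FROM your_table
    ORDER BY id, date
''').fetchall()
for r in rows:
    print(r)
```

The balances come out 500, 200, 800, 1100 — matching the table in the question.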

Filter a value relevant to the maximum field

Here is my detail field with Order number and Amount.
Order Number Amount
2 3450
4 2300
8 4500
3 5100
The latest order is the one with the maximum order number, and I need to show only that row in the report, not all the other records. So I need to pick the maximum order number and the amount that goes with it. Help please.
Order Number Amount
8 4500
There are many ways to solve this; one of them is to use SQL Expression Fields.
Create a new SQL expression field and write the formula below.
DB2 syntax (the column is written order_number here; a name containing a space would need quoting):
SELECT order_number, amount
FROM orders
ORDER BY order_number DESC
FETCH FIRST ROW ONLY

Oracle syntax:
SELECT order_number, amount
FROM (SELECT order_number, amount,
             ROW_NUMBER() OVER (ORDER BY order_number DESC) AS RowNo
      FROM orders)
WHERE RowNo < 2

Now drag this to the detail section.
Note: the first syntax is for DB2; if you are using Oracle, use the second form. Let me know if you are using a database other than DB2.
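The "fetch the top row" idea can be sketched outside Crystal Reports. This is a Python/SQLite version (SQLite's LIMIT 1 plays the role of DB2's FETCH FIRST ROW ONLY); the orders data is taken from the question.

```python
import sqlite3

# Pick the row with the maximum order number: sort descending, take one row.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_number INTEGER, amount INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(2, 3450), (4, 2300), (8, 4500), (3, 5100)])
row = con.execute(
    "SELECT order_number, amount FROM orders ORDER BY order_number DESC LIMIT 1"
).fetchone()
print(row)  # (8, 4500)
```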

Postgresql create fixed size groups of rows

We have a table of items where each item has an invoice id. We process this data in chunks based on invoice id (100 invoices at a time). Can you assist in creating a query that assigns a group id to each set of 100 invoices (chunk)? Here's a logical example of what we wish to attain:
In this scenario, we know in advance that we have 9 rows and 5 invoices. We want to create groups such that each group contains 2 invoices, except the last group.
SELECT n1.*,
       n2.r
FROM dbtable n1,
     -- Group distinct inv_ids per group of 2
     -- (or any other number by changing the /2 to e.g. /4)
     (SELECT inv_id,
             ((row_number() OVER ()) - 1) / 2 AS r
      FROM
           -- Get distinct inv_ids
           (SELECT DISTINCT inv_id
            FROM dbtable
            ORDER BY inv_id) n2a) n2
WHERE n1.inv_id = n2.inv_id;
This query has the advantage that it selects correct groups of inv_ids even when the inv_ids are not consecutive.
SQL fiddle here
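The grouping query can be sketched against SQLite with a smaller chunk size. This sketch uses invented data (9 rows, 5 invoices, groups of 2 invoices, as in the example) and puts the ORDER BY inside the OVER() clause rather than in the subquery, which makes the numbering explicit.

```python
import sqlite3

# Number the distinct inv_ids with row_number(), then integer-divide by the
# chunk size (2 here) to get a group id shared by every row of those invoices.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dbtable (item TEXT, inv_id INTEGER)")
con.executemany("INSERT INTO dbtable VALUES (?, ?)",
                [("a", 10), ("b", 10), ("c", 20), ("d", 20), ("e", 30),
                 ("f", 30), ("g", 40), ("h", 40), ("i", 50)])
rows = con.execute("""
    SELECT n1.*, n2.r
    FROM dbtable n1
    JOIN (SELECT inv_id,
                 (ROW_NUMBER() OVER (ORDER BY inv_id) - 1) / 2 AS r
          FROM (SELECT DISTINCT inv_id FROM dbtable)) n2
      ON n1.inv_id = n2.inv_id
    ORDER BY n1.inv_id
""").fetchall()
for r in rows:
    print(r)
```

Invoices 10 and 20 land in group 0, 30 and 40 in group 1, and 50 alone in group 2.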

How to update 400,000 records in batches

I have the following table named business_extra
business_id address neighbourhood
==========================================
1
2
3
..
400,000 records
The address column contains null values, so I want to update it from another table.
I have written the following query:
update b2g_mns_v2.txn_business_extra a
set mappable_address=b.mappable_address
from b2g_mns_v2.temp_business b
where b.import_reference_id=a.import_reference_id
but got the error message:
out of shared memory
PostgreSQL does not allow LIMIT directly on an UPDATE, so restrict each pass to a batch of ids instead:
update b2g_mns_v2.txn_business_extra a
set mappable_address = b.mappable_address
from b2g_mns_v2.temp_business b
where b.import_reference_id = a.import_reference_id
  and a.mappable_address is null
  and a.import_reference_id in (
      select import_reference_id
      from b2g_mns_v2.txn_business_extra
      where mappable_address is null
      limit 10000
  );
Repeat this a few times (batches of 10,000) until no rows are updated.
As a_horse_with_no_name mentioned, it is better to make sure your query is OK by checking its execution plan.
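The batching loop can be sketched end to end. This is a hypothetical Python/SQLite version (SQLite stands in for PostgreSQL, a correlated subquery stands in for UPDATE ... FROM, and the data is invented: 25 rows, batches of 10); the pattern — update a LIMIT-ed batch of ids, commit, repeat until nothing changes — is the point.

```python
import sqlite3

# Batched update: each pass touches at most BATCH rows whose address is
# still NULL, committing per pass so no single transaction grows too large.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE txn_business_extra (import_reference_id INTEGER, mappable_address TEXT)")
con.execute("CREATE TABLE temp_business (import_reference_id INTEGER, mappable_address TEXT)")
con.executemany("INSERT INTO txn_business_extra VALUES (?, NULL)",
                [(i,) for i in range(1, 26)])
con.executemany("INSERT INTO temp_business VALUES (?, ?)",
                [(i, f"addr-{i}") for i in range(1, 26)])

BATCH = 10
total = 0
while True:
    cur = con.execute("""
        UPDATE txn_business_extra
        SET mappable_address = (SELECT b.mappable_address
                                FROM temp_business b
                                WHERE b.import_reference_id =
                                      txn_business_extra.import_reference_id)
        WHERE mappable_address IS NULL
          AND import_reference_id IN (SELECT import_reference_id
                                      FROM txn_business_extra
                                      WHERE mappable_address IS NULL
                                      LIMIT ?)
    """, (BATCH,))
    con.commit()  # one transaction per batch keeps lock/memory usage bounded
    if cur.rowcount == 0:
        break
    total += cur.rowcount
print(total)  # 25
```

Each updated row is no longer NULL, so it drops out of the next batch automatically.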