Sphinx: get MVA attribute count for each value, grouping by the same MVA attribute

I am trying to get the count of an MVA on Sphinx 2.2.10. The problem is that I have two queries on the same table that give me different counts when I run them. Here are the queries:
Note that lang_skill_lvl is the MVA, and table_name is the same table for both queries.
SELECT count(*) FROM table_name WHERE lang_skill_lvl IN (8001, 8002, 8003, 8004);
+----------+
| count(*) |
+----------+
|   896941 |
+----------+
SELECT groupby(), count(*) AS mycount
FROM table_name
GROUP BY lang_skill_lvl
ORDER BY mycount DESC LIMIT 0, 1000;
+-----------+---------+
| groupby() | mycount |
+-----------+---------+
| 8001      | 112485  |
| 8002      | 90656   |
| 8003      | 694194  |
| 1001      | 146812  |
| 48003     | 139820  |
| 8004      | 71      |
...
If you add up the counts for values 8001, 8002, 8003, and 8004, you will find that the sum is 897406.
I think the issue might be with groupby() itself, since it is an MVA. If there is anything I am missing, please let me know.
Thank you,

I suspect you have documents that contain multiple values in the MVA, e.g. a document with both 8001 AND 8002 (just an example; lots of other combinations are possible).
The first query counts each document only once, even if it has multiple matches in the MVA. It's effectively count(distinct id).
The second query, in contrast, can count such documents multiple times, once for each MVA match: the single document would be counted in both the 8001 count and the 8002 count.
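That also fits the arithmetic: 897406 - 896941 = 465, i.e. there are 465 extra per-value matches coming from documents whose MVA holds more than one of the four values. A minimal illustration, reusing the queries from the question and assuming a single document whose lang_skill_lvl MVA contains both 8001 and 8002:

SELECT count(*) FROM table_name WHERE lang_skill_lvl IN (8001, 8002);
(returns 1: the document matches once, however many of its values match)

SELECT groupby(), count(*) AS mycount FROM table_name GROUP BY lang_skill_lvl;
(returns one row for 8001 and one for 8002, and the same document is counted in each)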

Related

How can I `SUM()` in PostgreSQL based on a certain condition? Summing debits and credits in an accounting journal table

I have a database full of accounting journals. There is a table for the accounting journal itself (the journal's metadata) and a table for the accounting journal lines (one per account, with its debit or credit).
The data looks like this:
+----+--------------+-------+--------+
| ID | JOURNAL_NAME | DEBIT | CREDIT |
+----+--------------+-------+--------+
| 1  | INV/0001     | 100   | 0      |
| 2  | INV/0001     | 0     | 100    |
| 3  | INV/0002     | 200   | 0      |
| 4  | INV/0002     | 0     | 200    |
+----+--------------+-------+--------+
I want all journals with the same name to be summed into one row, with their debits and credits. So from the above table, I want a query that produces something like this:
+--------------+-------+--------+
| JOURNAL_NAME | DEBIT | CREDIT |
+--------------+-------+--------+
| INV/0001     | 100   | 100    |
| INV/0002     | 200   | 200    |
+--------------+-------+--------+
I have tried with:
SELECT DISTINCT ON (accounting_journal.id)
    accounting_journal.name,
    accounting_journal_line.debit,
    accounting_journal_line.credit
FROM accounting_journal_line
JOIN accounting_journal ON accounting_journal.id = accounting_journal_line.move_id
ORDER BY accounting_journal.id ASC
LIMIT 3;
With the above query, I get all the journals and journal lines. I just need the query to sum the debits and credits for every identical accounting_journal.name.
I have tried with SUM(), but it always gets stuck on the GROUP BY clause:
SELECT DISTINCT ON (accounting_journal.id)
    accounting_journal.name,
    accounting_journal.ref,
    accounting_journal_line.name,
    SUM(accounting_journal_line.debit),
    SUM(accounting_journal_line.credit)
FROM accounting_journal_line
JOIN accounting_journal ON accounting_journal.id = accounting_journal_line.move_id
ORDER BY accounting_journal.id ASC
LIMIT 3;
The error:
Error in query (7): ERROR: column "accounting_journal.name" must appear in the GROUP BY clause or be used in an aggregate function
LINE 2: accounting_journal.name,
I hope I can get some assistance, or a pointer to where I need to look. Thanks!
When you use an aggregate function alongside normal columns, you have to mention all the non-aggregated columns in the GROUP BY clause.
So try this:
SELECT
    accounting_journal.name,
    accounting_journal.ref,
    accounting_journal_line.name,
    SUM(accounting_journal_line.debit),
    SUM(accounting_journal_line.credit)
FROM accounting_journal_line
JOIN accounting_journal ON accounting_journal.id = accounting_journal_line.move_id
GROUP BY 1, 2, 3    -- the three non-aggregated columns, by position
ORDER BY 1 ASC
LIMIT 3;
Your query has three non-aggregated columns, so you can refer to them by column number in the GROUP BY clause, as above.
You can use SUM() as a window function, which does not require GROUP BY. So:
select aj.id journal_id,
       aj.name journal_name,
       aj.ref journal_ref,
       ajl.name line_name,
       sum(ajl.debit) over (partition by aj.id) total_debit,
       sum(ajl.credit) over (partition by aj.id) total_credit
from accounting_journal_line ajl
join accounting_journal aj on aj.id = ajl.move_id
order by aj.id;
See fiddle for a working example.
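For the exact output shown in the question (one row per journal name, with summed debit and credit), a plain GROUP BY on the name alone would also do; a minimal sketch, assuming the same tables and join column as above:

select aj.name as journal_name,
       sum(ajl.debit) as debit,
       sum(ajl.credit) as credit
from accounting_journal_line ajl
join accounting_journal aj on aj.id = ajl.move_id
group by aj.name
order by aj.name;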

Selecting the latest value for each distinct tag

I am trying to write an SQL query that will return the latest data value for each distinct tag in my table.
Currently, I SELECT DISTINCT the values of the tag column, and afterwards I iterate through those values, ordering by timestamp and limiting to 1 for each. The tags can be any number and may not always be posted together (one time only tag 1 might be posted, whereas other times tags 1, 2, and 3 can be).
Although it gives the expected outcome, this approach seems inefficient in a lot of ways, and because I don't have enough SQL experience, it was so far the only way I found of performing the task...
+------+-----+-----------+------+
| name | tag | timestamp | data |
+------+-----+-----------+------+
| aa   | 1   | 566       | 4659 |
| ab   | 2   | 567       | 4879 |
| ac   | 3   | 568       | 1346 |
| ad   | 1   | 789       | 3164 |
| ae   | 2   | 789       | 1024 |
| af   | 3   | 790       | 3346 |
+------+-----+-----------+------+
Therefore the expected outcome is {3164, 1024, 3346}
Currently what I'm doing is:
"select distinct tag from table"
Then I store all the distinct tag values and iterate through them programmatically, using
"select data from table where '"+ tags[i] +"' in (tag) order by timestamp desc limit 1"
Thanks,
This comes close, but beware: if two rows with the same tag share the maximum timestamp, you will get duplicates in the result set.
select data
from table
join (select tag, max(timestamp) maxtimestamp
      from table t1
      group by tag) as latesttags
  on table.tag = latesttags.tag
 and table.timestamp = latesttags.maxtimestamp
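If the database supports window functions, a ROW_NUMBER() variant sidesteps the duplicate problem, since exactly one row per tag survives; a sketch, with my_table as a stand-in name (table itself is a reserved word in most dialects):

select data
from (select data,
             row_number() over (partition by tag order by timestamp desc) as rn
      from my_table) ranked
where rn = 1;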

How to aggregate Postgres table so that ID is unique and column values are collected in array?

I'm not sure what to call what I'm trying to do, so trying to look it up didn't work very well. I would like to aggregate my table based on one column, collapsing all the rows from another column into an array per unique ID.
+----+------------------+
| ID | some_other_value |
+----+------------------+
| 1  | A                |
| 1  | B                |
| 2  | C                |
| .. | ...              |
+----+------------------+
To return
+----+--------------+
| ID | values_array |
+----+--------------+
| 1  | {A, B}       |
| 2  | {C}          |
+----+--------------+
Sorry for the bad explanation; I'm really lacking the vocabulary here. Any help writing a query that achieves what's in the example would be much appreciated.
Try the following:
select id, array_agg(some_other_value order by some_other_value) as values_array
from <yourTableName>
group by id;
See Aggregate Functions documentation.
SELECT id, array_agg(some_other_value)
FROM the_table
GROUP BY id;
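A quick way to see the result is to run the aggregate over an inline VALUES list mirroring the sample data; a small sketch:

SELECT id, array_agg(some_other_value ORDER BY some_other_value) AS values_array
FROM (VALUES (1, 'A'), (1, 'B'), (2, 'C')) AS t(id, some_other_value)
GROUP BY id;
-- id | values_array
--  1 | {A,B}
--  2 | {C}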

PostgreSQL: use more than one row as an expression in a subquery

As the title says, I need to create a query where I SELECT all items from one table and use those items as expressions in another query. Suppose I have the main table that looks like this:
main_table
+----+------+----------+----------------+
| id | name | location | //more columns |
+----+------+----------+----------------+
| 1  | me   | pluto    | //             |
| 2  | them | mercury  | //             |
| 3  | we   | jupiter  | //             |
+----+------+----------+----------------+
And the sub query table looks like this:
some_table
+----+-----------+
| id | item      |
+----+-----------+
| 1  | sub-col-1 |
| 2  | sub-col-2 |
| 3  | sub-col-3 |
+----+-----------+
where each item in some_table has a price, stored in amount_table like so:
amount_table
+----+--------+
| id | amount |
+----+--------+
| 1  | 1000   |
| 2  | 2000   |
| 3  | 3000   |
+----+--------+
So that the query returns results like this:
+------+----------+-----------+-----------+-----------+
| name | location | sub-col-1 | sub-col-2 | sub-col-3 |
+------+----------+-----------+-----------+-----------+
| me   | pluto    | 1000      |           |           |
| them | mercury  |           | 2000      |           |
| we   | jupiter  |           |           | 3000      |
+------+----------+-----------+-----------+-----------+
My query currently looks like this:
SELECT name, location, (SELECT item FROM some_table)
FROM main_table
INNER JOIN amount_table WHERE //match the id's
But I'm running into the error: "more than one row returned by a subquery used as an expression".
How can I formulate this query to return the desired results?
You should decide on the expected result.
To get a one-to-many relation:
SELECT name, location, some_table.item
FROM main_table
JOIN some_table on true -- or id if they match
INNER JOIN amount_table --WHERE match the id's
to get one-to-one with all rows:
SELECT name, location, (SELECT array_agg(item) FROM some_table)
FROM main_table
INNER JOIN amount_table --WHERE //match the id's
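For the pivoted layout in the question (one column per item), conditional aggregation is another option; a sketch, assuming the ids line up across the three tables and that amount_table's price column is named amount (the question leaves it unnamed):

SELECT m.name,
       m.location,
       MAX(CASE WHEN s.item = 'sub-col-1' THEN a.amount END) AS "sub-col-1",
       MAX(CASE WHEN s.item = 'sub-col-2' THEN a.amount END) AS "sub-col-2",
       MAX(CASE WHEN s.item = 'sub-col-3' THEN a.amount END) AS "sub-col-3"
FROM main_table m
JOIN some_table s ON s.id = m.id
JOIN amount_table a ON a.id = s.id
GROUP BY m.name, m.location;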

How do I generate a random sample of groups, including all people in the group, where the group_id (but not the person_id) changes across time?

I have data that looks like this:
+----------+-----------+------------+------+
| group_id | person_id | is_primary | year |
+----------+-----------+------------+------+
| aaa1     | 1         | TRUE       | 2000 |
| aaa2     | 1         | TRUE       | 2001 |
| aaa3     | 1         | TRUE       | 2002 |
| aaa4     | 1         | TRUE       | 2003 |
| aaa5     | 1         | TRUE       | 2004 |
| bbb1     | 2         | TRUE       | 2000 |
| bbb2     | 2         | TRUE       | 2001 |
| bbb3     | 2         | TRUE       | 2002 |
| bbb1     | 3         | FALSE      | 2000 |
| bbb2     | 3         | FALSE      | 2001 |
+----------+-----------+------------+------+
The data design is such that
person_id uniquely identifies an individual across time
group_id uniquely identifies a group within each year, but may change from year to year
each group contains primary and non-primary individuals
My goal is three-fold:
Get a random sample, e.g. 10%, of primary individuals
Get the data on those primary individuals for all time periods they appear in the database
Get the data on any non-primary individuals that share a group with any of the primary individuals that were sampled in the first and second steps
I'm unsure where to start with this, since I need to first pull a random sample of primary individuals and get all observations for them. Presumably I can do this by generating a random number that's the same within any person_id, then sample based on that. Then, I need to get the list of group_id that contain any of those primary individuals, and pull all records associated with those group_id.
I don't know where to start with these queries and subqueries, and unfortunately, the interface I'm using to access this database can't link information across separate queries, so I can't pull a list of random person_id for primary individuals, then use that text file to filter group_id in a second query; I have to do it all in one query.
A quick way to get this done is:
select data_result.*
from data as data_groups
join (select person_id
      from data
      where is_primary
      group by person_id
      order by random()
      limit 1) as selected_primary
  on (data_groups.person_id = selected_primary.person_id)
join data as data_result
  on (data_groups.group_id = data_result.group_id
      and data_groups.year = data_result.year)
I even made a fiddle so you can test it.
The query is pretty straightforward: it gets the sample, then it gets their groups, and then it gets all the people in those groups.
Please pay attention to the LIMIT 1 clause; it is only there because the example data set is so small. You can put in a larger value, or a subquery that computes the right percentage.
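To sample an actual percentage rather than a single person, one option is to filter the distinct primary person_ids on random() instead of using ORDER BY random() LIMIT n; a sketch of a drop-in replacement for the selected_primary subquery above (roughly 10%, not an exact count):

(select person_id
 from (select distinct person_id
       from data
       where is_primary) as primaries
 where random() < 0.10) as selected_primary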
If anyone has an answer using windowing functions I'd like to see that.
Note: next time, please provide the schema and the data insertions so it is easier to answer.