How to get grouped query data from the result set? - iPhone

I want to get grouped data from a table in sqlite. For example, the table is like below:
Name Group Price
a 1 10
b 1 9
c 1 10
d 2 11
e 2 10
f 3 12
g 3 10
h 1 11
Now I want to get all the data grouped by the Group column, with each group in one array, namely:
array1 = {{a,1,10},{b,1,9},{c,1,10},{h,1,11}};
array2 = {{d,2,11},{e,2,10}};
array3 = {{f,3,12},{g,3,10}}.
I need these two-dimensional arrays to populate a grouped table view. The SQL statement might be NSString *sql = @"SELECT * FROM table GROUP BY Group"; but I wonder how to get the data from the result set. I am using FMDB.
Any help is appreciated.

Get the data from SQL with a normal SELECT statement, ordered by group and name (Group and table are reserved words, so quote them):
SELECT * FROM "table" ORDER BY "Group", Name;
Then in code, build your arrays, switching to fill the next array when the group id changes.
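A minimal FMDB sketch of that loop (assuming an open FMDatabase called db, the table and column names from the question, and rows shaped like {Name, Group, Price}); adjust names and types to your model:
// Groups rows into one NSMutableArray per Group value; db is an open FMDatabase.
NSMutableArray *groups = [NSMutableArray array];   // one inner array per group
NSMutableArray *currentGroup = nil;
int currentGroupID = -1;
FMResultSet *rs = [db executeQuery:@"SELECT * FROM \"table\" ORDER BY \"Group\", Name"];
while ([rs next]) {
    int groupID = [rs intForColumn:@"Group"];
    if (currentGroup == nil || groupID != currentGroupID) {
        // The group id changed: start filling the next array.
        currentGroup = [NSMutableArray array];
        [groups addObject:currentGroup];
        currentGroupID = groupID;
    }
    [currentGroup addObject:@[[rs stringForColumn:@"Name"],
                              @(groupID),
                              @([rs intForColumn:@"Price"])]];
}
[rs close];
// groups now holds array1, array2, ... in group order.
Each inner array then backs one section of the grouped table view (groups.count sections, [groups[section] count] rows per section).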

Let me clarify GROUP BY. You can group data, but then you need an aggregate function on the other columns.
E.g. a table has a list of students with a Gender column (Male and Female), so we can group this table by Gender, which will return two sets. Then we need to perform some operation on the result columns,
e.g. the maximum marks or average marks of each group.
In your case you want to group, but what kind of operation do you require on the Price column?
E.g. the query below will return each group with its max price:
SELECT "Group", MAX(Price) AS MaxPriceByEachGroup FROM "table" GROUP BY "Group"

Related

Postgres get null if row doesn't exist in where clause

I've a postgres table with data like
Name    Attendance
Jackie  2
Jade    5
Xi      10
Chan    15
In my query I want the attendance for each name, and if a name doesn't exist, return "null" instead of no row for that particular name.
E.g. a query with WHERE Name IN ('Jackie', 'Jade', 'Cha', 'Xi')
Should return
Name    Attendance
Jackie  2
Jade    5
NULL    NULL
Xi      10
To produce the desired rows, you need to join with a table or set of rows which has all those names.
You can do this by inserting the names into a temp table and joining on that, but in Postgres you can turn an array of names into a set of rows using unnest. Then left join with the table to return a row for every value in the array.
select attendances.*
from
unnest(ARRAY['Jackie','Jade','Cha','Xi']) as names(name)
left join attendances on names.name = attendances.name;
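If you also want to see which name the NULL row belongs to (the example output above shows NULL in both columns), a small variant of the same query selects the name from the unnested array instead; this assumes the attendance column is simply called attendance:
select names.name, attendances.attendance
from unnest(ARRAY['Jackie','Jade','Cha','Xi']) as names(name)
left join attendances on names.name = attendances.name;
Here 'Cha' comes back with a NULL attendance, and the other names keep their values.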

Calculating correlation coefficient using PostgreSQL with data from same table?

I would like to expand on an existing question.
I have 3 tables:
Rivers' names and id - Let's name it river
Rivers' id and Hydrols' ids - Let's name it id
Hydrols' ids, volume and date - Let's name it data
I know how to select data from these 3 tables:
SELECT DISTINCT river.name, AVG(data.volume)
FROM data
INNER JOIN id
ON id.id_hydrol = data.id_hydrol
INNER JOIN river
ON river.id_river = id.id_river
AND river.name = 'NAME_1'
GROUP BY river.name
ORDER BY AVG(data.volume) DESC
What do I have to write instead of the ? mark?
How can I compare the volume of 2 rivers with different names, where the date has to be the same?
How do I put 2 different queries into the corr() function?
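A minimal sketch of one way to do it, assuming the date column in data is simply called date: select each river's volumes as a subquery, join the two subqueries on the date, and feed the paired volumes to corr(). If a river has several hydrols reporting on the same date, you may want to average them per date first.
SELECT corr(r1.volume, r2.volume) AS volume_correlation
FROM (SELECT data.date, data.volume
      FROM data
      INNER JOIN id    ON id.id_hydrol   = data.id_hydrol
      INNER JOIN river ON river.id_river = id.id_river
      WHERE river.name = 'NAME_1') AS r1
INNER JOIN
     (SELECT data.date, data.volume
      FROM data
      INNER JOIN id    ON id.id_hydrol   = data.id_hydrol
      INNER JOIN river ON river.id_river = id.id_river
      WHERE river.name = 'NAME_2') AS r2
  ON r1.date = r2.date;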

Min value with GROUP BY in Power BI Desktop

id datetime new_column datetime_rankx
1 12.01.2015 18:10:10 12.01.2015 18:10:10 1
2 03.12.2014 14:44:57 03.12.2014 14:44:57 1
2 21.11.2015 11:11:11 03.12.2014 14:44:57 2
3 01.01.2011 12:12:12 01.01.2011 12:12:12 1
3 02.02.2012 13:13:13 01.01.2011 12:12:12 2
3 03.03.2013 14:14:14 01.01.2011 12:12:12 3
I want to make a new column which will have the minimum datetime value for each row, grouped by id.
How could I do it in Power BI Desktop using a DAX query?
Use this expression:
NewColumn =
CALCULATE(
    MIN( Table[datetime] ),
    FILTER( Table, Table[id] = EARLIER( Table[id] ) )
)
In Power BI using a table with your data it will produce this:
UPDATE: Explanation and EARLIER function usage.
Basically, the EARLIER function gives you access to values of a different row context.
When you use the CALCULATE function it creates a row context for the whole table; theoretically it iterates over every table row. The same happens when you use the FILTER function: it iterates over the whole table and evaluates every row against the filter condition.
So far we have two row contexts, the row context created by CALCULATE and the row context created by FILTER. Note that FILTER uses EARLIER to get access to CALCULATE's row context. Having said that, in our case, for every row in the outer (CALCULATE's) row context, FILTER returns the set of rows that correspond to the current id in the outer context.
If you have a programming background it could give you some sense. It is similar to a nested loop.
Hope this Python code illustrates the main idea behind this:
outer_context = ['row1','row2','row3','row4']
inner_context = ['row1','row2','row3','row4']
for outer_row in outer_context:
    for inner_row in inner_context:
        if inner_row == outer_row:  # this line is what the FILTER and EARLIER do
            # Calculate the min datetime using the filtered rows
            ...
...
UPDATE 2: Adding a ranking column.
To get the desired rank you can use this expression:
RankColumn =
RANKX(
    CALCULATETABLE( Table, ALLEXCEPT( Table, Table[id] ) ),
    Table[datetime],
    Table[datetime],
    1
)
This is the table with the rank column:
Let me know if this helps.

Calculate value based on existence of records matching given criteria - FileMaker Pro 13

How can I write a calculation field in a table that outputs '1' if there are other (related) records in the same table that meet a given set of criteria and '0' otherwise?
Here's my problem explained in more detail:
I have a table containing 'students' and another containing 'exam results'. The 'exam results' table looks like this:
StudentID SubjectID Level Result
3234 1 2 A-
3234 2 4 B+
4739 1 4 C+
A student can only pass a Level 4 exam in subject 2 if they have also passed a Level 2 exam in subject 1 with a B+ or higher. I want to define a field in the 'students' table that contains a '1' if there exists an exam result belonging to the right student that meets these criteria and a '0' otherwise.
What would be the best way to do this?
Let us take an example of a Results table where the results are also calculated as a numeric value, e.g.
StudentID SubjectID Level Result cResultNum
3234 1 2 A- 95
3234 2 4 B+ 85
4739 1 4 C+ 75
and an Exams table with the following fields (among others):
RequiredSubjectID
RequiredLevel
RequiredResultNum
Given these, you can construct a relationship between Exams and (another occurrence of) Results as:
Exams::RequiredSubjectID = Results 2::SubjectID
AND
Exams::RequiredLevel = Results 2::Level
AND
Exams::RequiredResultNum ≤ Results 2::cResultNum
This allows each exam record to calculate a list of students that are eligible to take that exam as =
List ( Results 2::StudentID )
I want to define a field in the 'students' table that contains a '1'
if there exists an exam result belonging to the right student that
meets these criteria and a '0' otherwise.
This request is unclear, because there are many exams a student may want to take, and a field in the Students table can calculate only one result.
You need to do a self-join in the table for the field you want to check, for example:
Exam::Level = Exam2::Level
Exam::Student = Exam2::Student
And for the "was passed" criteria I think you could do an "If" on the calculation like this:
If ( Last(Exam2::Result) = "D" and ...(all the pass values) ; 1 ; 0 )
Edit:
It could be done with just the failing value instead; I missed that. It would be like this:
If ( Last(Exam2::Result) = "F" ; 0 ; 1 )
I hope this helps you.

T-SQL Query to process data in batches without breaking groups

I am using SQL 2008 and trying to process the data I have in a table in batches, however, there is a catch. The data is broken into groups and, as I do my processing, I have to make sure that a group will always be contained within a batch or, in other words, that the group will never be split across different batches. It's assumed that the batch size will always be much larger than the group size. Here is the setup to illustrate what I mean (the code is using Jeff Moden's data generation logic: http://www.sqlservercentral.com/articles/Data+Generation/87901)
DECLARE @NumberOfRows INT = 1000,
        @StartValue   INT = 1,
        @EndValue     INT = 500,
        @Range        INT

SET @Range = @EndValue - @StartValue + 1

IF OBJECT_ID('tempdb..#SomeTestTable','U') IS NOT NULL
    DROP TABLE #SomeTestTable;

SELECT TOP (@NumberOfRows)
       GroupID = ABS(CHECKSUM(NEWID())) % @Range + @StartValue
INTO #SomeTestTable
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
This will create a table with about 435 groups of records containing between 1 and 7 records in each. Now, let's say I want to process these records in batches of 100 records per batch. How can I make sure that my GroupID's don't get split between different batches? I am fine if each batch is not exactly 100 records, it could be a little more or a little less.
I appreciate any suggestions!
This will result in batches slightly smaller than 100 entries; it removes all groups that aren't entirely in the selection:
WITH cte AS (
    SELECT TOP 100 *
    FROM (SELECT GroupID,
                 ROW_NUMBER() OVER (PARTITION BY GroupID ORDER BY GroupID) r
          FROM #SomeTestTable) a
    ORDER BY GroupID, r DESC)
SELECT c1.GroupID
FROM cte c1
JOIN cte c2
  ON c1.GroupID = c2.GroupID
 AND c2.r = 1
It'll select the groups with the lowest GroupID's, limited to 100 entries into a common table expression along with the row number, then it'll use the row number to throw away any groups that aren't entirely in the selection (row number 1 needs to be in the selection for the group to be, since the row number is ordered descending before cutting with TOP).