I have a table T2 with 5 columns and 5 rows. The table columns are 'FirstName','Height','Shoesize','Gender' and 'Profession'.
I have to create a second table containing the 'FirstName','Height' and 'Profession' of the person with the maximum 'Shoesize'.
So far, I found the index of the maximum 'Shoesize'.
[m,i] = max(T2{:,3})
However, I am struggling to index into the table to extract the relevant values. Any help is highly appreciated!
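A minimal sketch of one way to do that: index the table with the row index of the maximum and a list of the wanted variable names (T3 is just a placeholder name for the new table; the i from the max(T2{:,3}) call above works the same way):
[~, i] = max(T2.Shoesize);                          % row index of the largest Shoesize
T3 = T2(i, {'FirstName','Height','Profession'});    % one-row table with just those variables (T3 is a placeholder name)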
Related
I'm new to PySpark and would like to make a table that counts the unique pairs of values from two columns and shows the average of another column over all rows with each pair of values. My code so far is:
df1 = df.withColumn('trip_rate', df.total_amount / df.trip_distance)
df1.groupBy('PULocationID', 'DOLocationID').count().orderBy('count', ascending=False).show()
I want to add the average of the trip rate for each unique pair as a column. Can you help me please?
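A sketch of one way this might look, doing the count and the average in a single agg call (avg_trip_rate is just a placeholder column name):
from pyspark.sql import functions as F

# count of rows and mean trip_rate for each (PULocationID, DOLocationID) pair
df1 = df.withColumn('trip_rate', df.total_amount / df.trip_distance)
result = (df1.groupBy('PULocationID', 'DOLocationID')
              .agg(F.count('*').alias('count'),
                   F.avg('trip_rate').alias('avg_trip_rate'))   # avg_trip_rate is a placeholder name
              .orderBy('count', ascending=False))
result.show()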
I'm trying to delete rows of a table when both values of 2 specific columns are equal to zero. I've tried to use T(ismember(col1 & col2, 0),:) = []; but it deletes the rows when only one of the columns is zero.
Ideally, I would also like to do the opposite: delete every row where the cells of these 2 columns aren't both zero.
I know it would be easier if I wasn't using a table; unfortunately, some of the variables I need aren't numeric.
It would be great if you know a way to do what I need with a table.
Cheers
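A minimal sketch of both deletions, assuming the table is called T and the two columns are named ColA and ColB (adjust to the real variable names):
bothZero = T.ColA == 0 & T.ColB == 0;   % ColA/ColB are hypothetical names; true where BOTH columns are zero
T(bothZero, :) = [];                    % delete rows where both columns are zero
% The opposite, starting again from the original table:
% T(~bothZero, :) = [];                 % delete rows where the two columns are not both zero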
I have the following problem:
I have a very large matrix and several rows share the same identifier in column 1. For these rows I need to do some averaging, reformatting etc.
Currently I identify all unique identifier values in column 1 using the function unique, and then, inside a loop over those identifiers, do the averaging and reformatting of values in the other columns for each set of rows that share the same column 1 value.
ID = unique(data(:,1));          % all unique identifiers in column 1
for i = 1:length(ID)
    rows = data(:,1) == ID(i);   % rows that share the current identifier
    % do stuff with data(rows, :)
end
I guess this is highly inefficient and slow but I cannot think of a better way of handling this.
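A vectorized sketch of the same idea, assuming the data really is a numeric matrix; taking the mean of column 2 per group just stands in for the actual averaging/reformatting:
[G, ID] = findgroups(data(:,1));                 % group number for every row, plus the unique identifiers
groupMeans = splitapply(@mean, data(:,2), G);    % one value per identifier, no explicit loop (column 2 is only an example)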
I have a table with a column containing the areas of soil types and a column with the productivities of the different soil types (ton/ha). I would like to multiply these columns to get the total productivity (in ton), but I cannot figure out how to do this. Can someone help?
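A minimal sketch, assuming the table is T with hypothetical variable names Area (ha) and Productivity (ton/ha):
T.TotalTons = T.Area .* T.Productivity;   % Area/Productivity are assumed names; element-wise product gives tons per soil type
grandTotal = sum(T.TotalTons);            % overall total, if needed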
I have a challenge in Oracle Apex - I would like to sum multiple columns to give 3 extra rows, namely Points, Score and %score. There are more columns, but I'm only choosing a few for now.
Below is an example structure of my data:
Town   | Sector | Outside | Inside | Available | Price
Roy    | Formal |       0 |      0 |         1 |     0
Kobus  | Formal |       0 |      0 |         1 |     0
Wika   | Formal |       0 |      0 |         1 |     0
Mevo   | Formal |       1 |      1 |         1 |     0
Hoch   | Formal |       1 |      1 |         1 |     1
Points |        |       2 |      2 |         5 |     1
Score  |        |      10 |     10 |        10 |    10
%score |        |      20 |     20 |        50 |    10
Each column has a constant weighting (which serves as a factor and can change depending on the area). In this case, the weightings for the columns are in the first row (sector Formal) of the table below:
Sector   | Outside | Inside | Available | Price
Formal   |       1 |      1 |         1 |     1
Informal |       1 |      0 |         2 |     1
I tried using the aggregate sum function in Apex, but it won't work because I need the factor from the other table. This is where my challenge began.
To compute the rows below the report:
Points = sum per column * weighting factor per column
Score = number of shops visited (in this case it's 5) * weighting factor per column
%score = Points / Score * 100
The report should display as described above, with the newly computed rows below.
I kindly ask anyone to assist me with this challenge as I have tried searching for solutions but haven't come across any.
Thanks a lot for your support in advance!!
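For what it's worth, a rough plain-SQL sketch of that computation (which could back a classic report in Apex) might look like the following. The object and column names SHOP_VISITS, SECTOR_WEIGHTS, OUTSIDE_W, etc. are assumptions, not real objects, and the MAX() around each weight only works because every row in the filtered set shares the same sector, so the weight is constant per column:
-- Sketch only: SHOP_VISITS, SECTOR_WEIGHTS and the *_W columns are assumed names.
WITH v AS (
  SELECT s.town, s.outside, s.inside, s.available, s.price,
         w.outside_w, w.inside_w, w.available_w, w.price_w
  FROM   shop_visits s
  JOIN   sector_weights w ON w.sector = s.sector
  WHERE  s.sector = 'Formal'
),
agg AS (
  SELECT SUM(outside)   * MAX(outside_w)   AS pts_outside,   -- Points = column sum * weight
         SUM(inside)    * MAX(inside_w)    AS pts_inside,
         SUM(available) * MAX(available_w) AS pts_available,
         SUM(price)     * MAX(price_w)     AS pts_price,
         COUNT(*)       * MAX(outside_w)   AS scr_outside,   -- Score = number of shops * weight
         COUNT(*)       * MAX(inside_w)    AS scr_inside,
         COUNT(*)       * MAX(available_w) AS scr_available,
         COUNT(*)       * MAX(price_w)     AS scr_price
  FROM   v
)
SELECT town AS label, outside, inside, available, price, 1 AS sort_order FROM v
UNION ALL
SELECT 'Points', pts_outside, pts_inside, pts_available, pts_price, 2 FROM agg
UNION ALL
SELECT 'Score', scr_outside, scr_inside, scr_available, scr_price, 3 FROM agg
UNION ALL
SELECT '%score',
       ROUND(100 * pts_outside   / scr_outside,   1),        -- %score = Points / Score * 100
       ROUND(100 * pts_inside    / scr_inside,    1),
       ROUND(100 * pts_available / scr_available, 1),
       ROUND(100 * pts_price     / scr_price,     1),
       4
FROM   agg
ORDER  BY sort_order, label
The sort_order column only keeps the three computed rows at the bottom of the report and can be hidden in the report attributes.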