Creating a for loop that iterates through all the numbers in a column of a table in MATLAB

I am a new user of MATLAB R2021b and I have a table where the last column (named loadings) spans multiple sub-columns (all sub-columns were added under the same variable/column and are treated as one column). I want to create a for loop that goes through each separate loading column and iterates through them, prior to creating a tbl that I will input into a model. The sub-columns contain numbers, with rows corresponding to the number of participants.
Previously, I had a similar setup where the loop iterated through the names of different regions of interest, whereas now the loop has to iterate through columns that contain numbers: first the numbers in the first sub-column, then in the second, and so on.
I am not sure whether I should split the last column with T1 = splitvars(T1, 'loadings') first, or whether I am not indexing into the table correctly or performing the right transformations. I would appreciate any help.
roi.ic = T.loadings;
roinames = roi.ic(:,1);
roinames = [num2str(roinames)];
for iroi = 1:numel(roinames)
    f_roiname = roinames{iroi};
    tbl = T1;
    tbl.(roinames) = T1.loadings(:,roiname);
    tbl.(roinames) = T1.loadings_rsfa(:,roiname)

which errors with:

Unable to use a value of type cell as an index.
Error in tabular/dotParenReference (line 120)
b = b(rowIndices,colIndices)
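A sketch of one approach that should work (untested; it assumes splitvars gives the sub-columns default names such as loadings_1, loadings_2, ...): split the loadings variable first, then loop over the resulting column names, pulling one numeric column per iteration.

% Split the multi-column 'loadings' variable into separate table variables
T1 = splitvars(T1, 'loadings');          % e.g. loadings_1, loadings_2, ...
% Collect the names of the loading columns produced by the split
names = T1.Properties.VariableNames;
loadnames = names(startsWith(names, 'loadings'));
for iload = 1:numel(loadnames)
    thisname = loadnames{iload};         % e.g. 'loadings_1'
    tbl = T1;
    tbl.loading = T1.(thisname);         % the numeric column for this iteration
    % ... build and fit the model on tbl here ...
end

Alternatively, without splitting, T.loadings(:, iload) indexes the iload-th sub-column of the matrix-valued variable directly; the key point is that numeric sub-columns are indexed by position, not by name as in the region-of-interest loop.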

Related

Is there a way to dynamically expand tables found in multiple columns using Power Query?

I have used List.Accumulate() to merge multiple tables. This is the output I got in this simple example:
Now, I need a solution to expand all of these with a formula, because in the real world I need to merge multiple tables that keep increasing in number (think Eurostat tables, for instance), and modifying the code manually wastes a lot of time in these situations.
I have been trying to solve it, but it seems to me that the complexity of the syntax quickly becomes the major limitation here. For instance, if I add a new step where I nest Table.ExpandTableColumns() inside another List.Accumulate(), I need to pass in a column name of an inner table as text. Fine, but to actually drill down, I first need to pass the current column name in [] on each iteration (for instance, Column 1), and it triggers an error if I store the column names in a list, because those are between "". I also experimented with TransformColumns(), but that didn't work either.
Does anyone know how to solve this problem, whatever the approach?
See https://blog.crossjoin.co.uk/2014/05/21/expanding-all-columns-in-a-table-in-power-query/, which boils down to this function:
let
    Source = (TableToExpand as table, optional ColumnNumber as number) =>
    //https://blog.crossjoin.co.uk/2014/05/21/expanding-all-columns-in-a-table-in-power-query/
    let
        //Start at the first column unless a column number was supplied
        ActualColumnNumber = if (ColumnNumber = null) then 0 else ColumnNumber,
        ColumnName = Table.ColumnNames(TableToExpand){ActualColumnNumber},
        ColumnContents = Table.Column(TableToExpand, ColumnName),
        //Gather the column names of every nested table found in this column
        ColumnsToExpand = List.Distinct(List.Combine(List.Transform(ColumnContents, each if _ is table then Table.ColumnNames(_) else {}))),
        //Prefix the new names with the outer column name to avoid name clashes
        NewColumnNames = List.Transform(ColumnsToExpand, each ColumnName & "." & _),
        CanExpandCurrentColumn = List.Count(ColumnsToExpand) > 0,
        ExpandedTable = if CanExpandCurrentColumn then Table.ExpandTableColumn(TableToExpand, ColumnName, ColumnsToExpand, NewColumnNames) else TableToExpand,
        //Stay on the same column if it was expanded (it may now hold more nested tables), otherwise move on
        NextColumnNumber = if CanExpandCurrentColumn then ActualColumnNumber else ActualColumnNumber + 1,
        //Recurse until every column has been visited
        OutputTable = if NextColumnNumber > (Table.ColumnCount(ExpandedTable) - 1) then ExpandedTable else ExpandAll(ExpandedTable, NextColumnNumber)
    in
        OutputTable
in
    Source
Alternatively, unpivot all the table columns to get one column, then expand that value column:
ColumnsToExpand = List.Distinct(List.Combine(List.Transform(Table.Column(#"PriorStepNameHere", "ValueColumnNameHere"), each if _ is table then Table.ColumnNames(_) else {}))),
#"Expanded ColumnNameHere" = Table.ExpandTableColumn(#"PriorStepNameHere", "ValueColumnNameHere", ColumnsToExpand, ColumnsToExpand),

How to query the first row efficiently?

I have a table with a large number of records:
date        instrument  price
2019.03.07  X           1.1
2019.03.07  X           1.0
2019.03.07  X           1.2
...
When I query for the day's opening price, I use:
1 sublist select from prices where date = 2019.03.07, instrument = `X
It takes a long time to execute because it selects all the prices on that day and then takes the first one.
I also tried:
select from prices where date = 2019.03.07, instrument = `X, i = 0 //It does not return any record (why?)
select from prices where date = 2019.03.07, instrument = `X, i = first i //Seem to work. Does it?
In Oracle the equivalent would be:
select * from prices where date = to_date(...) and instrument = "X" and rownum = 1
and Oracle will stop immediately when it finds the first record.
How can I do this in kdb (i.e. stop immediately after it finds the first record)?
In kdb, the where subclauses in select statements are executed sequentially, i.e. only those records which pass the first "test" get passed to the second test. With that in mind, looking at your two attempts:
select from prices where date = 2019.03.07, instrument = `X, i = 0 //It does not return any record (why?)
This doesn't (necessarily) return anything, because by the time it gets to the i=0 check, you've already filtered out some records (possibly including the first record in the original table, which would have i=0)
select from prices where date = 2019.03.07, instrument = `X, i = first i //Seem to work. Does it?
This one should work. First you filter by date. Then, within the records for that date, you select the records for instrument `X. Then, within those records, you take the record where i equals first i. Since i has already been filtered down, first i is simply the index of the first remaining record (still the index from the original table, not from the filtered version).
The q-SQL equivalent for that is select[n], which also performs better than other approaches in most cases. A positive 'n' gives the first n records and a negative 'n' gives the last n records.
q) select[1] from prices where date = 2019.03.07, instrument = `X
There is no inbuilt functionality to stop after the first match. You could write a custom function for that, but it would probably execute more slowly than the supported version above.
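As a quick sanity check (timings depend heavily on the data and on how the table is partitioned, so treat this only as a sketch), the \t system command times each approach on your own table:

q) \t select[1] from prices where date = 2019.03.07, instrument = `X
q) \t 1 sublist select from prices where date = 2019.03.07, instrument = `X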

Filter on parts of words in Matlab tables

Similar to Excel, I need to find out how to filter out rows of a table that do not contain a certain string.
For example, I need only rows that contain the letters "MX". Within the sheet, there are rows with strings like ZMX01, MX002, and US001. I would want the first two rows.
This seems like a simple question, so I am surprised I couldn't find any help for this!
It is similar to the question Filter on words in Matlab tables (as in Excel)
You may not find a lot of information on tables in MATLAB, as they were only introduced in version R2013a, which is not that long ago. So, about your question: let's first create a sample table:
% Create a sample table
col1 = {'ZMX01'; 'MX002'; 'US001'};
col2 = {5;7;3};
T = table(col1, col2);
T =

      col1     col2
    _______    ____

    'ZMX01'    [5]
    'MX002'    [7]
    'US001'    [3]
Now, MATLAB provides the rowfun function to apply any function to each row in a table. By default, the function you call has to be able to work on all columns of the table.
To only apply rowfun to one column, you can use the 'InputVariables' parameter, which lets you specify either the number of the column (e.g. 2 for the second column) or the name of the column (e.g. 'myColumnName').
Then, you can set 'OutputFormat' to 'cell' to get a cell array and not a new table as output.
In your case, you'll want to use strfind on the column 'col1'. The return value of strfind is either an empty array (if 'MX' wasn't found), or an array of all indices where 'MX' was found.
% Apply rowfun
idx = rowfun(@(x) strfind(x, 'MX'), T, 'InputVariables', 'col1', 'OutputFormat', 'cell');
The output of this will be
idx =

    [2]
    [1]
    []
i.e. a 3-by-1 cell array, which is empty for 'US001' and contains a positive value for both other inputs. To create a subset of the table with this data, we can do the following:
% Create logical array, which is true for all rows to keep.
idx = ~cellfun(@isempty, idx);
% Save these rows and all columns of the table into a new table
R = T(idx,:);
And finally, we have our resulting table R:
R =

      col1     col2
    _______    ____

    'ZMX01'    [5]
    'MX002'    [7]
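As a side note, on R2016b and newer the same filter can be written in one line with contains, which returns a logical array directly (a small sketch using the sample table from above):

% Keep only the rows whose col1 contains the substring 'MX'
R = T(contains(T.col1, 'MX'), :);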

Filtering on a range of values in a db column with SQLAlchemy ORM

I have a PostgreSQL database, and one particular table in it has many rows. One column in this table, called data, is a float array (REAL[]) and gets filled with an array of ~4500 elements. I want to access this table through some query via SQLAlchemy and the ORM.
How do I select all rows in the table where a subset of this column satisfies some condition, e.g. contains a range of values? For example, I want to select all rows where the data contains values >= 10, or values between >= 10 and <= 20.
Can I do this with a straight session query like
rows = session.query(Table).filter(Table.data.(some conditional)).all()
where my conditional is something like "VALUES >= 10 and VALUES <= 20"?
Or do I need to define some special methods, or some setup, when defining my SQLAlchemy table class? For example, I have my table set up as
class Table(Base):
    __tablename__ = 'table'
    __table_args__ = {'autoload': True, 'schema': 'testdb', 'extend_existing': True}
    data = deferred(Column(ARRAY(Float)))

    def __repr__(self):
        return '<Table (pk={0})>'.format(self.pk)
Ideally I'd like to set it up so I can just do simple filtering in my session.query calls. Is this possible? I'm not super familiar with the ORM, so maybe it is?
I've had a look at the ARRAY Comparator sqlalchemy docs but those only seem to work on exact values. My data is precise to 6 sigfigs, and I don't know the exact values ahead of time.
What's the best way to do this? Thanks.
EDIT:
Based on the comment below, here is the code I'm using to try to select all rows (out of 1000) whose data column contains values >= 1.0. There should be 537 rows.
rows = session.query(datadb.Table).filter(datadb.Table.data.any(1.0,operator=operators.le)).all()
This gives the correct subset number: len(rows) = 537. However, I don't understand the logic of this operator: to select data >= 1.0, I use the le operator? Also, along the same lines, there should be 234 rows that have data with values between >= 1.0 and <= 1.2, but this statement fails to give the correct subset:
rows = session.query(datadb.Table).filter(datadb.Table.data.any(1.0,operator=operators.le)).filter(datadb.Table.data.any(1.2,operator=operators.ge)).all()
EDIT 2:
Here's an example of my database table with a few rows. pk is an integer, and data is a real[].

db: datadb
schema: Table

pk   data
0    [0.0,0.0,0.5,0.3,1.3,1.9,0.3,0.0,0.0]
1    [0.1,0.0,1.0,0.7,1.1,1.5,1.2,0.3,1.4]
2    [0.0,0.6,0.4,0.3,1.6,1.7,0.4,1.3,0.0]
3    [0.0,0.1,0.2,0.4,1.0,1.1,1.2,0.9,0.0]
4    [0.0,0.0,0.5,0.3,0.2,0.1,0.7,0.3,0.1]
I have 5 rows; 4 of them have data with values >= 1.0, while just 2 have values in the range >= 1.0 and <= 1.2. For the first case, the query I would do to grab the rows is
rows = session.query(datadb.Table).filter(datadb.Table.data.any(1.0,operator=operators.le)).all()
This should return the 4 rows, at pk=0,1,2,3. This query does what I expect. The second case
rows = session.query(datadb.Table).filter(datadb.Table.data.any(1.0,operator=operators.le)).filter(datadb.Table.data.any(1.2,operator=operators.ge)).all()
should return the 2 rows at pk=1,3. However, this query just returns the 4 rows from the first query. For the second query, I also tried
rows = session.query(datadb.Table).filter(datadb.Table.data.any(1.0,operator=operators.le),datadb.Table.data.any(1.2,operator=operators.ge)).all()
which also didn't work.
Please read the documentation on ARRAY.Comparator, according to which you should be able to do the following:

rows = (session.query(Table)
        .filter(Table.data.any(10, operator=operators.le))
        .filter(Table.data.any(20, operator=operators.ge))
        .all())
EDIT:

# combined filter does not work,
# but applying one or the other is still useful as it reduces the result set
q = (session.query(MyTable)
     .filter(MyTable.data.any(1.0, operator=operators.le))
     # .filter(MyTable.data.any(1.2, operator=operators.ge))
     )

# filter in memory
items = [_row for _row in q.all()
         if any(1.0 <= item <= 1.2 for item in _row.data)]

for item in items:
    print(item)
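For what it's worth, the combined filter fails because each .any() is an independent ANY(data) test in SQL: together they match rows where some element is >= 1.0 and some possibly different element is <= 1.2, which is why all four rows come back. Expressing "one and the same element lies in [1.0, 1.2]" needs a per-element predicate; one way is an EXISTS over unnest(data) in raw SQL (a sketch, untested, using the names from the example above):

from sqlalchemy import text

# EXISTS over unnest(data): true when a single element is within [1.0, 1.2]
rows = (session.query(MyTable)
        .filter(text("EXISTS (SELECT 1 FROM unnest(data) AS elem "
                     "WHERE elem BETWEEN 1.0 AND 1.2)"))
        .all())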

How to access the values of a stored procedure result one by one and calculate their average row-wise

I executed a stored procedure in Java which results in a table of 16 columns and 3600 rows. Now I want to access 3 columns of the retrieved table row by row and calculate the average of those columns for each row. To do this, I need to access the column values one by one. How do I do that? My code for accessing the columns is
while (rs.next())
{
    a1 = rs.getDouble(7);
    a2 = rs.getDouble(8);
    a3 = rs.getDouble(10);
    sum += (a1 + a2 + a3);
}
return sum / 3;
but with this, I think I'm getting one overall total across all the columns. I need the average row by row, i.e. the average of all three columns for the first row, then for the second row, and so on.
You are indeed adding up the values from the 3 columns across every row. If you want a row-by-row result, then for each row you read from the database you need to store the average in a list, as below, and return it where needed:
List<Double> averageList = new ArrayList<Double>();
while (rs.next())
{
    a1 = rs.getDouble(7);
    a2 = rs.getDouble(8);
    a3 = rs.getDouble(10);
    averageList.add((a1 + a2 + a3) / 3.0);
}
return averageList;
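Put together as a self-contained method, it could look like the sketch below (the procedure name "my_procedure" and the connection handling are placeholders for whatever your code already uses):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public static List<Double> rowAverages(Connection conn) throws SQLException {
    List<Double> averages = new ArrayList<>();
    // "my_procedure" is a placeholder for the stored procedure being called
    try (CallableStatement stmt = conn.prepareCall("{call my_procedure()}");
         ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
            double a1 = rs.getDouble(7);    // 7th column of the result set
            double a2 = rs.getDouble(8);    // 8th column
            double a3 = rs.getDouble(10);   // 10th column
            averages.add((a1 + a2 + a3) / 3.0);  // per-row average
        }
    }
    return averages;
}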