Lookup table from .mat file running batch (RSIM) - matlab

At the moment I have a lookup table in a Simulink model which reads the first and second columns from the workspace; in other words, I put the names of my vector variables in those fields. I then generated the code using rsim.tlc, ran the batch, and got the expected results. Nevertheless, when I try to run the batch again with different vectors (their lengths differ from the ones I used when I compiled the executable), I always get a message saying that the checksums mismatch. I have already verified that the rtP structure correctly reads the new values I'm using in my lookup table, so I have no idea how to solve this.
Could someone help me?
As additional information, my goal is to output the value from the second column of the lookup table, based on a clock value that is looked up in the first column of the table. I would not mind using a .mat file in the table, but I don't have any idea how to do so.
I would appreciate any hint to solve this.
Don't hesitate to ask me for more info! Thanks in advance.
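For what it's worth, the checksum an RSIM executable verifies covers the model's structure, including parameter dimensions, so a parameter set whose vectors have a different length than the compiled ones will be rejected. One common workaround is to keep the new vectors the same length as the compiled ones (padding shorter tables if needed) and regenerate the rtP parameter set before each run. A minimal sketch, where the model name `mymodel` and the variable names are made up:

```matlab
% Assign the new breakpoint/table vectors in the base workspace first,
% padded to the same length the executable was compiled with.
xdata = [0 1 2 3 4];          % hypothetical first column (breakpoints)
ydata = [10 20 30 40 50];     % hypothetical second column (table data)

% Regenerate the parameter structure from the model and save it
% for the RSIM executable to load at run time.
rtp = rsimgetrtp('mymodel');  % 'mymodel' is a placeholder model name
save('run2_params.mat', 'rtp');

% Then run the executable against the new parameters from the shell:
%   ./mymodel -p run2_params.mat
```

Because the dimensions are part of the checksum, only the values can change between runs this way, not the table sizes.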

Related

Inputting data row by row from a large data set into a hypertable (PostgreSQL)

I have a CSV file with 5 columns of data and 5000+ rows. My task is to input the data ONE by ONE into a hypertable which I have already created.
My problem is that the COPY command copies the entire file into the hypertable at once. I could sit and write INSERT statements to enter the data one by one, however this is very painful and very time-consuming.
I'm not sure how conditional loops work here, and the available documentation is a little hazy. I have experience in C and Python, if that helps.
Any guidance is much appreciated!
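Since you know Python, one option is to script the inserts rather than typing them: parse the CSV with the standard library and emit one INSERT per row. A minimal sketch, where the table name `readings` and the column headers are made up; in real use you would execute parameterized statements through a driver such as psycopg2 instead of string-building SQL:

```python
import csv
import io

# Inline sample standing in for the real 5-column CSV (hypothetical headers).
sample = (
    "time,device,temp,humidity,pressure\n"
    "2020-01-01 00:00,dev1,21.5,40,1013\n"
    "2020-01-01 00:05,dev2,22.0,41,1012\n"
)

def rows_to_inserts(csv_text, table):
    """Yield one INSERT statement per CSV data row.

    Naive quoting for illustration only; a real script should pass values
    as parameters to the database driver instead.
    """
    reader = csv.reader(io.StringIO(csv_text))
    headers = next(reader)                      # first line is the header
    cols = ", ".join(headers)
    for row in reader:
        values = ", ".join("'%s'" % v for v in row)
        yield "INSERT INTO %s (%s) VALUES (%s);" % (table, cols, values)

for stmt in rows_to_inserts(sample, "readings"):
    print(stmt)
```

In practice you would open the real file with `csv.reader(open('data.csv'))` and hand each row to `cursor.execute()` in a loop, which is exactly the row-by-row behaviour you describe, without any manual typing.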

How to join multiple tSortRow into one tFileOutputExcel

I have to do a job with Talend, but I'm a beginner and the software is very complex; I'm a bit lost.
My current problem is joining multiple tSplitRow components into one output Excel file.
I don't know whether I should use tMap (and, if so, how to configure it), or whether another component exists for this.
Each tSplitRow has the same structure: LastName, FirstName, Course, Grade.
My current structure
Thanks for the help.
Since your components have the same structure, you can use tUnite to do a union of your rows. It takes multiple input links, and has a single output.

How to append separate datasets to make a combined stacked dataset in Stata without losing information

I'm trying to combine two datasets from two time periods, time 1 and time 2, to make a combined repeated-measures dataset. There are some observations in time 1 which do not appear in time 2, because those participants dropped out after time 1.
When I use the append command in Stata, it appears to drop the observations from time 1 that don't have corresponding data at time 2. It does, however, append observations for new participants who joined at time 2.
I would like to keep the time 1 data of those participants who dropped out, so that I can still use that information in the combined dataset.
How can I tell Stata not to automatically drop these participants?
Thanks,
Steve
Perhaps the best way of interesting people in advising you on your problems is to respect those who answer your questions. You have been repeatedly advised, even as recently as yesterday, to review https://stackoverflow.com/help/someone-answers and provide the feedback that is reflected in the reputation scores of those who take the time to help you.
In any event, append does not work as you describe it. If you take the time to work out a small reproducible example, by creating two small datasets, appending them, and then listing the results, you may find the roots of your misunderstanding. Or you will at least be able to provide substantive information from which others can work, should someone be interested in helping you.
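A small reproducible example along those lines might look like the following sketch (variable names and values are made up). Running it should show that append keeps the time-1-only observation, with the time-2 variable simply missing for it:

```stata
* Build a tiny time-1 dataset and save it.
clear
input id score1
1 10
2 20
end
save time1, replace

* Build a tiny time-2 dataset: id 1 dropped out, id 3 is new.
clear
input id score2
2 25
3 30
end

* Append the time-1 data and inspect the result.
append using time1
list
* id 1 (present only at time 1) survives the append; score2 is missing for it.
```

If your real data behave differently, comparing them against a toy example like this is usually the quickest way to find what is actually being dropped and why.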

How to handle large columnar data files in Octave WITH headers?

I have a .dat file that is space/tab delimited with 1 line of headers. There are about 60 columns of data in this file. There are others with other numbers of columns.
How can I read in the headers (as a vector, perhaps?) such that I can index into the appropriate column of the data-matrix without having to count my way manually to the correct column?
I seem to recall Matlab being able to create cell-arrays with headers as indexes. Is anything like that remotely possible in Octave?
So far, all I can get is the actual data according to this:
a = dlmread('Core.dat', ' ', 1, 0);
Any and all help is much appreciated! Thanks!
I've been looking for an easy way to do this using just the standard packages too, but there doesn't seem to be one.
However, it does look like the dataframe package might let you do this sort of thing.
It does seem like something simpler should be built into the language, though.
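In the meantime, with only built-in functions, one workable pattern is to read the header line yourself, split it into a cell array, and use strcmp to find the column index by name. A sketch, where the column name 'Pressure' is made up:

```octave
% Read the first line (headers) and split it on spaces/tabs.
fid = fopen('Core.dat', 'r');
headers = strsplit(strtrim(fgetl(fid)), {" ", "\t"});
fclose(fid);

% Read the numeric data, skipping the header row.
data = dlmread('Core.dat', ' ', 1, 0);

% Look up a column by name instead of counting by hand.
col = find(strcmp(headers, 'Pressure'));   % 'Pressure' is a hypothetical header
pressure = data(:, col);
```

It's not as convenient as true named indexing, but it avoids the manual counting and works the same whether the file has 60 columns or 6.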

How can I limit DataSet.WriteXML output to typed columns?

I'm trying to store a lightly filtered copy of a database for offline reference, using ADO.NET DataSets. There are some columns I need not to take with me. So far, it looks like my options are:
Put up with the columns
Get unmaintainably clever about the way I SELECT rows for the DataSet
Hack at the XML output to delete the columns
I've deleted the columns' entries in the DataSet designer. WriteXml still outputs them, to my dismay. If there's a way to limit WriteXml's output to typed rows, I'd love to hear it.
I tried to filter the columns out with careful SELECT statements, but ended up with a ConstraintException I couldn't solve. Replacing one table's query with SELECT * did the trick. I suspect I could solve the exception given enough time. I also suspect it could come back again as we evolve the schema. I'd prefer not to hand such a maintenance problem to my successors.
All told, I think it'll be easiest to filter the XML output. I need to compress it, store it, and (later) load, decompress, and read it back into a DataSet. Filtering the XML is only one more step and, better yet, will only need to happen once a week or so.
Can I change DataSet's behaviour? Should I filter the XML? Is there some fiendishly simple way I can query pretty much, but not quite, everything without running into ConstraintException? Or is my approach entirely wrong? I'd much appreciate your suggestions.
UPDATE: It turns out I copped ConstraintException for a simple reason: I'd forgotten to delete a strongly typed column from one DataTable. It wasn't allowed to be NULL. When I selected all the columns except that column, the value was NULL, and… and, yes, that's profoundly embarrassing, thank you so much for asking.
It's as easy as Table.Columns.Remove("UnwantedColumnName"). I got the lead from Mehrdad's wonderfully terse answer to another question. I was delighted when Table.Columns turned out to be malleable.
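Putting that together, dropping the columns just before serializing might look like this sketch (the table name "Orders" and file name are made up):

```csharp
using System.Data;

// ds is the typed DataSet already filled from the database.
DataTable table = ds.Tables["Orders"];        // "Orders" is a placeholder name

// Drop the columns we don't want in the offline copy.
table.Columns.Remove("UnwantedColumnName");

// WriteXml now emits only the remaining columns.
ds.WriteXml("offline.xml", XmlWriteMode.WriteSchema);
```

Removing the column from the in-memory DataTable sidesteps both the SELECT gymnastics and any post-hoc XML editing.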