How to upgrade Simulink Lookup blocks to Lookup Tables?

I have a Simulink model developed in an older MATLAB version. I would like to upgrade the Lookup and Lookup2D blocks to 1-D and 2-D Lookup Tables through a MATLAB script. Thanks for the help.

The specifics will depend on the previous version, the new version, and how you are using the blocks. But in general you might find that slupdate or the Upgrade Advisor already does what you need.
Failing that, you'll need to use a collection of functions from the MATLAB/Simulink API, such as find_system and replace_block, possibly along with set_param and get_param.
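In outline, such a script might look like the following. This is a minimal sketch: 'myModel' is a placeholder, and the legacy block type names ('Lookup', 'Lookup2D') and the library paths of the replacement blocks can vary between releases, so verify them first (e.g. with get_param(gcb, 'BlockType') on one of your legacy blocks).

```matlab
% Sketch: swap legacy lookup blocks for the current Lookup Table blocks.
% Verify block type names and library paths against your Simulink release.
model = 'myModel';   % placeholder model name
load_system(model);

% Replace all legacy 1-D Lookup blocks with the 1-D Lookup Table block
replace_block(model, 'BlockType', 'Lookup', ...
    'simulink/Lookup Tables/1-D Lookup Table', 'noprompt');

% Same idea for the 2-D variant
replace_block(model, 'BlockType', 'Lookup2D', ...
    'simulink/Lookup Tables/2-D Lookup Table', 'noprompt');

save_system(model);
```

replace_block carries over parameter values whose names match between the old and new blocks; where the names differ, you would first read them with get_param and re-apply them with set_param after the swap.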

Related

accessing p-values in PySpark UnivariateFeatureSelector module

I'm currently in the process of performing feature selection on a fairly large dataset and decided to try out PySpark's UnivariateFeatureSelector module.
I've been able to get everything sorted out except one thing: how on earth do you access the actual p-values that have been calculated for a given set of features? I've looked through the documentation and searched online, and I'm starting to wonder if you can't... but that seems like a gross oversight for such a package.
Thanks in advance!

Coupling Lua and MATLAB

I am in a situation where part of the codebase is written in MATLAB and another part in Lua (which is used for scripting a 3rd-party program). As of now, the exchange of data between them is makeshift, using file I/O. This has evolved into a substantial part of the code, even though that wasn't really planned.
The program is structured in such a way that some Lua scripts are run, then some MATLAB evaluation is done, based on which some more Lua is run, and so on. It handles simulations and evaluations (scientific code) and creates new simulations based on the results. It handles thousands of files and sims.
To streamline the process I started looking into possibilities to change the data I/O and make easy calls from one to another.
I wanted to hear some opinions on how to solve the problem, the optimal solution would be one where I could call everything from MATLAB or Lua, and organize the large datasets in a more consistent and accessible way.
Solutions:
1. Use the Lua C API to create bindings for the Lua modules, and add these to MATLAB as a C library. This way I should hopefully be able to achieve my goals and reduce the system complexity.
2. Use a smarter data format for the exchange of datasets (HDF?), plus some functions which read the needed workspace variables. This way the parts of the program remain independent, but the data exchange gets solved.
3. Create wrappers for Lua/MATLAB functions so they can be called more easily. Data exchange could be done through the return parameters of the functions.
Suggestions?
I would suggest 1, or, if you aren't averse to spending a lot of money, use MATLAB Coder to generate C functions from the MATLAB side of the analysis, compile the generated code as a shared library, import the library with the LuaJIT FFI, and run everything from Lua. With this solution you would not have to change any of the MATLAB code, and not much of the Lua code, thanks to LuaJIT's semantics regarding array indexing. Solution 1 is free, but it is not as efficient because of the constant marshaling between the two languages' data structures, and it would also be a lot of work to write the interface. Either solution, though, would be more efficient than file I/O.
As an easy performance boost, have you tried keeping the files in memory using a RAM disk or tmpfs?
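For option 2, the MATLAB side of an HDF5-based exchange could be sketched like this (file and dataset names are illustrative; the Lua side would need an HDF5 binding to read and write the same file):

```matlab
% Write a result matrix for the Lua side to pick up
A = rand(100, 3);
h5create('exchange.h5', '/matlab/A', size(A));
h5write('exchange.h5', '/matlab/A', A);

% Later, read back whatever the Lua side produced
% (assumes the Lua script wrote a dataset at /lua/output)
B = h5read('exchange.h5', '/lua/output');
```

Because HDF5 is hierarchical and self-describing, thousands of datasets can be organized under one file with named groups instead of a directory full of ad-hoc files.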

Can we use loop functions in Tableau?

Can we use loop functions (for, while, do-while) in Tableau calculated fields? If we can, how can we use these functions in calculated fields, and how can we initialise the variables declared in them?
No, we can't. There are some hacks to do calculations like that, using PREVIOUS_VALUE and other table calculations, but there are no loop functions in Tableau.
Why? Because Tableau isn't meant to be a data processing tool, but rather a data visualization tool. Don't get me wrong, the Tableau engine is very good at processing data, but only for performing "query-like" operations.
So why don't you post exactly what you are trying to achieve, and we can think about whether it can be accomplished with Tableau, or whether you need some pre-processing of your data.
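As an illustration of the PREVIOUS_VALUE workaround: a running total, which would be a loop in a general-purpose language, becomes a single table calculation. [Sales] is a placeholder field; set "Compute Using" to the dimension you want to accumulate over.

```
// Running total without a loop: each row adds its own SUM([Sales])
// to the value this calculation produced for the previous row.
PREVIOUS_VALUE(0) + SUM([Sales])
```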

DATASTAGE capabilities

I'm a Linux programmer.
I used to write code in order to get things done: Java, Perl, PHP, C.
I need to start working with DataStage.
All I see is that DataStage works on table/CSV-style data, processing it line by line.
I want to know if DataStage can work on files that are not table/CSV-like. Can it load data into data structures and run functions on them, or is it limited to working on one line at a time?
Thank you for any information you can give on the capabilities of DataStage.
IBM (formerly Ascential) DataStage is an ETL platform that, indeed, works on data sets by applying various transformations.
This does not necessarily mean that you are constrained to applying only single-line transformations (you can also aggregate, join, split, etc.). Also, DataStage has its own programming language, BASIC, that allows you to modify the design of your jobs as needed.
Lastly, you are still free to call external scripts from within DataStage (using the DSExecute function, the Before Job property, the After Job property, or the Command stage).
Please check the IBM Information Center for a comprehensive documentation on BASIC Programming.
You could also check the DSXchange forums for DataStage specific topics.
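For example, a server routine might shell out through DSExecute roughly like this (the script path and routine name are placeholders):

```
* Run an external script and capture its output and return code
Call DSExecute("UNIX", "/path/to/myscript.sh", Output, RetCode)

If RetCode <> 0 Then
   * Log a warning with the captured output
   Call DSLogWarn("Script failed: " : Output, "MyRoutine")
End
```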
Yes it can. As Razvan said, you can join, aggregate, and split. It can use loops and external scripts, and it can also handle XML.
My advice: if you have large quantities of data to work on, DataStage is your friend; if the data you are going to load is not very big, it will be easier to use Java, C, or any programming language you know.
You can use all kinds of functions and conversions, and manipulate the data as needed. DataStage is mainly used for its ease of use when handling huge volumes of data from a data mart or data warehouse.
The main process of DataStage is ETL: Extraction, Transformation, Loading.
Where a programmer would use 100 lines of code to connect to some database, here we can do it with one click.
Anything can be done here, even C or C++ coding in a routine activity.
If you are talking about hierarchical files, like XML or JSON, the answer is yes.
If you are talking about complex files, such as those produced by COBOL, the answer is yes.
All using in-built functionality (e.g. Hierarchical Data stage, Complex Flat File stage). Review the DataStage palette to find other examples.

How to combine version control with data analysis

I do a lot of solo data analysis, using a combination of tools such as R, Python, PostgreSQL, and whatever I need to get the job done. I use version control software (currently Subversion, though I'm playing around with git on the side) to manage all of my scripts, but the data is perpetually a challenge. My scripts tend to run for a long period of time (hours, or occasionally days) to generate small or large datasets, which I in turn use as input for more scripts.
The challenge I face is in how to "rollback" what I do if I want to check out my scripts from an earlier point in time. Getting the old scripts is easy. Getting the old data would be easy if I put my data into version control, but conventional wisdom seems to be to keep data out of version control because it's so darned big and cumbersome.
My question: how do you combine and/or manage your processed data with a version control system on your code?
Subversion, and maybe other [d]VCSs as well, supports symbolic links. The idea is to store the raw data, well organized, on a filesystem, while tracking the relation between 'script' and 'generated data' with symbolic links under version control.
data -> data-1.2.3
All your scripts load data through that symbolic link, which is under version control and points at a specific dataset.
Using this approach, code and calculated datasets are tracked within one tool, without bloating your repository with binary data.
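A minimal sketch of the layout (all paths and version names are illustrative):

```shell
# Datasets live outside version control, named by version
mkdir -p datasets
touch datasets/data-1.2.3      # stand-in for a generated dataset

# The repository tracks only this symlink (svn add data)
ln -sfn datasets/data-1.2.3 data

# After regenerating, repoint the link and commit just that one change
touch datasets/data-1.2.4
ln -sfn datasets/data-1.2.4 data

readlink data                  # -> datasets/data-1.2.4
```

Checking out an old revision of the repository restores the old link target, so the scripts automatically see the dataset that matched the code at that point in time.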