Is it possible to use graph-on-parent with [clone]d abstractions in some way? - puredata

I know you can open an abstraction with the vis message, but I haven't found a way to present my abstractions in the patch containing the clone object. Perhaps dynamic patching is the only way to achieve this? I have searched the pd forum, mailing list and Facebook group without success.

Currently (as of Pd 0.48-1) there is no way to make [clone] display the GOP of its contents.
As a workaround you can encapsulate the [clone] object in an abstraction that provides a GUI displaying information about the selected cloned instance.
For example, let's say you have an object called [HarmonicSeries] that, given a fundamental, uses a [clone] object to create 8 instances of [Harmonic], each containing an [osc~] at the desired frequency. Say you want to display the frequency of each harmonic. Instead of using GOP on [Harmonic], you would use GOP on [HarmonicSeries] and provide an interface to select the desired harmonic and read its information.
The [Harmonic] abstraction expects two parameters:
The fundamental frequency
The index of the harmonic
It multiplies the two to get the harmonic's frequency and stores it in a [float]. When it receives a bang, it outputs that frequency to its left outlet.
Let's [clone] it and wrap it into the [HarmonicSeries] abstraction.
When the user clicks the [hradio] to select the desired harmonic, it sends a bang message to the corresponding harmonic, which in turn sends the stored frequency to its outlet. The patch then displays the harmonic's index and frequency in number boxes.
Here's an example of it working (in the [HarmonicSeries-help] patch).
This is a simple example, but the principle is the same in more complex cases: you encapsulate the [clone] into an abstraction that provides an interface for reading data from the cloned instances.
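If it helps to see the pattern outside Pd, here is a rough Python sketch of the same idea; the class and method names are purely illustrative and not part of Pd or the patches above.

    # Rough analogue of [HarmonicSeries]: 8 "cloned" harmonics, each storing
    # fundamental * index, plus an interface that "bangs" one of them to read
    # its frequency. Illustrative only.

    class Harmonic:
        def __init__(self, fundamental, index):
            self.freq = fundamental * index      # stored, like the [float] in the abstraction

        def bang(self):
            return self.freq                     # sent to the left outlet on bang

    class HarmonicSeries:
        def __init__(self, fundamental, n=8):
            self.instances = [Harmonic(fundamental, i) for i in range(1, n + 1)]

        def select(self, index):                 # what the [hradio] click triggers
            return index, self.instances[index - 1].bang()

    series = HarmonicSeries(110)                 # fundamental of 110 Hz
    print(series.select(3))                      # -> (3, 330), shown in the number boxes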

Related

Reducing sampling in simulation

Is there any way in Modelica to reduce the sampling during simulations? I have a DC-DC converter switching at high frequency, which consequently generates a huge dataset. I am wondering if there is any way to reduce the size of the dataset during simulation/export.
Thanks
I am trying to create a smaller dataset from models that generate huge ones (models with high frequencies).
Basically, when you simulate, do not press the simulate arrow directly; instead press the S (Simulation Setup) button on the toolbar. You get a dialog with several tabs, and the ones you care about are General and Output.
In General you can specify the number of intervals to reduce the amount of data stored; results are then stored only at each interval point.
In the Output tab you can, for example, choose not to store events. You can also filter out variables that you are not interested in to reduce the result file size. Note that "Equidistant Time Grid" is activated by default; if it were not, the simulation could generate quite a lot of output, possibly even several points per interval.
See more here about the things you have in General/Output:
https://openmodelica.org/doc/OpenModelicaUsersGuide/1.20/omedit.html#simulating-a-model
By listing only the desired variables in the Variable Filter field of the Output tab, one can reduce the size of the output file without compromising the interval points. The filter is a POSIX EXTENDED regular expression, for example:
for the variables x, y and z it would be ^(x|y|z)$ (the parentheses keep the anchors applying to each alternative). Another good practice is unchecking the "Store Variables at Events" flag. The details of these settings are already covered in Adrian Pop's answer above.
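If you drive the simulation from a script rather than the dialog, the same two settings can be passed to the simulate() command; here is a rough OMPython sketch (the model name, file name and variable names are placeholders):

    from OMPython import OMCSessionZMQ   # OpenModelica's Python interface

    omc = OMCSessionZMQ()
    omc.sendExpression('loadFile("DCDCConverter.mo")')   # placeholder model file
    # Store only 500 interval points, and only the variables x, y and z:
    result = omc.sendExpression(
        'simulate(DCDCConverter, stopTime=1.0, '
        'numberOfIntervals=500, variableFilter="x|y|z")'
    )
    print(result["resultFile"])          # path to the (much smaller) result file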

How to plot all the stream lines in paraview?

I am simulating the lid-driven cavity case and I try to get all the streamlines with ParaView's Stream Tracer, but I only get the ones that intersect the reference line, and because of that there are vortices that are not visible. How can I see all the streamlines in the domain?
Thanks a lot in advance.
To add a little bit to Mathieu's answer, if you really want streamlines everywhere, then you can create a Stream Tracer With Custom Source (as Mathieu suggested) and set your data to both the Input and the Seed Source. That will create a streamline originating from every point in your dataset, which is pretty much what you asked for.
However, while you can do this, you will probably not be happy with the results. First of all, unless your data is trivially small, this will take a long time to compute and create a large amount of data. Even worse, the result will be so dense that you won't be able to see anything. You will get all those interesting streamlines through vortices, but they will be completely hidden by all the boring streamlines around them.
Thus, you are better off with trying to derive a data set that contains seed points that are likely to trace a stream through the vortices that you are interested in. One thing you might want to try is to compute the vorticity of your vector field (Gradient Of Unstructured Data Set when turning on advanced option Compute Vorticity), find the magnitude of that (Calculator), and then use the Threshold filter to pull out the cells with large vorticity. Then use that as your Seed Source.
Another (probably better) option if your data is 2D or you can extract an interesting surface along the flow of your data is to use the Surface LIC plugin. Details can be found at https://www.paraview.org/Wiki/ParaView/Line_Integral_Convolution.
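For reference, roughly the same setup can be scripted with paraview.simple; this is only a sketch, the reader and file name are placeholders, and exact property names (such as SeedSource) can differ between ParaView versions.

    from paraview.simple import *

    cavity = OpenDataFile('cavity.foam')   # placeholder: open your own dataset here

    # Seed a streamline from every point of the data itself, as described above.
    # Expect this to be slow and very dense on anything but a small dataset.
    tracer = StreamTracerWithCustomSource(Input=cavity, SeedSource=cavity)

    Show(tracer)
    Render()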
You have to choose a representative seed source for your streamlines.
You could use a "Sphere Source", which you can set in the StreamTracer properties.
If that fails, you can use a StreamTracerWithCustomSource and provide your own seed source, which you will have to create first.

How to automatically optimize a classifier in Weka in order to have a given class contain 100% sure data?

I have two (or three) classes and each class can only possess one label.
I want to optimize (automatically if possible) the parameters and thresholds of classifiers so that my first class contains only 100% sure data, even if it contains a small number of instances.
I don't mind if the remaining classes contain false alarms or correct rejections.
I don't mind having unclassified data.
I have already been searching on Stack Overflow and on the Weka wiki, but maybe my lack of knowledge concerning Weka made me miss some keywords.
I also tried to perform the task with the well-known "iris" database but I think that in this case, any class can be 100 % sure.
Yet I have only succeeded in testing multiple classifiers and tuning them manually, without reaching 100% correct for my first class. (I checked this result in the confusion matrix given by Weka's report.)
Somehow, I know it is possible for my class to contain 100% sure data because I managed to do it in MATLAB with a simple threshold set manually. But I would like to try a bigger database, obtain a better threshold and use the power of Weka.
Any suggestions would be helpful, thanks!
You probably need the CostSensitiveClassifier among the "meta" classifiers.
If you are working in the Explorer, you get a configuration dialog for it.
Choose your base "classifier" (something beyond ZeroR :) ).
Set your "cost matrix". For a 2-class problem this will be a 2x2 matrix.
By setting one non-diagonal component very large (>>1, let us say 1000) you ensure that misclassifying one class (your "first" class) is 1000 times more expensive than misclassifying another class. This should do the job.
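For example, a 2-class cost matrix might look like the following, with zeros on the diagonal and one large off-diagonal cost; which off-diagonal cell corresponds to "an actual class-1 instance predicted as class 2" depends on Weka's row/column convention, so check the labels in the cost matrix editor before relying on it.

       0   1000
       1      0

Whichever cell you make large, that particular mistake becomes the one the classifier tries hardest to avoid.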

Unsupervised Anomaly Detection with Mixed Numeric and Categorical Data

I am working on a data analysis project over the summer. The main goal is to use some access-log data from a hospital about users accessing patient information, and try to detect abnormal access behaviors. Several attributes have been chosen to characterize a user (e.g. employee role, department, zip code) and a patient (e.g. age, sex, zip code). There are about 13-15 variables under consideration.
I was using R before and now I am using Python. I am able to use either depending on any suitable tools/libraries you guys suggest.
Before I ask any question, I do want to mention that a lot of the data fields have undergone an anonymization process when handed to me, as required in the healthcare industry for the protection of personal information. Specifically, a lot of VARCHAR values are turned into random integer values, only maintaining referential integrity across the dataset.
Questions:
An exact definition of an outlier was not given (it's defined based on the behavior of most of the data, if there's a general behavior) and there's no labeled training set telling me which rows of the dataset are considered abnormal. I believe the project belongs to the area of unsupervised learning so I was looking into clustering.
Since the data is mixed (numeric and categorical), I am not sure how would clustering work with this type of data.
I've read that one could expand the categorical data and let each category in a variable be either 0 or 1 in order to do the clustering, but then how would R/Python handle such high-dimensional data for me? (Simply expanding employee role would bring in ~100 more variables.)
How would the result of clustering be interpreted?
Using a clustering algorithm, wouldn't the potential "outliers" be grouped into clusters as well? And how am I supposed to detect them?
Also, with categorical data involved, I am not sure how "distance between points" is defined any more, or whether the proximity of data points still indicates similar behaviors. Does expanding each category into a dummy column with true/false values help? What is the distance then?
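For instance, the expansion I have in mind looks something like this in pandas (the column names here are just made up for illustration):

    import pandas as pd

    # Hypothetical access-log slice: one categorical and one numeric attribute.
    df = pd.DataFrame({
        "employee_role": ["nurse", "physician", "admin", "nurse"],
        "patient_age":   [34, 71, 50, 34],
    })

    # Expand the categorical column into 0/1 dummy columns.
    expanded = pd.get_dummies(df, columns=["employee_role"])
    print(expanded.columns.tolist())
    # ['patient_age', 'employee_role_admin', 'employee_role_nurse', 'employee_role_physician']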
Faced with the challenges of cluster analysis, I also started to try slicing the data up and just looking at two variables at a time. For example, I would look at the age range of patients accessed by a certain employee role, and use the quartiles and inter-quartile range to define outliers. For categorical variables, for instance employee role and the types of events being triggered, I would just look at the frequency of each event being triggered.
Can someone explain to me the problem of using quartiles with data that's not normally distributed? And what would be the remedy of this?
And in the end, which of the two approaches (or some other approaches) would you suggest? And what's the best way to use such an approach?
Thanks a lot.
You can decide upon a similarity measure for mixed data (e.g. Gower distance).
Then you can use any of the distance-based outlier detection methods.
You can use the k-prototypes algorithm for mixed numeric and categorical attributes.
Here you can find a Python implementation.
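A rough sketch of both suggestions in Python, using the third-party gower and kmodes packages; the column names and parameter choices are only illustrative, so adapt them to your data.

    import numpy as np
    import pandas as pd
    import gower                                 # pip install gower
    from kmodes.kprototypes import KPrototypes   # pip install kmodes

    # Hypothetical mixed-type access log.
    df = pd.DataFrame({
        "employee_role": ["nurse", "admin", "nurse", "physician"],
        "department":    ["ER", "billing", "ER", "oncology"],
        "patient_age":   [34, 71, 36, 50],
    })

    # 1) Distance-based outliers: Gower distance handles mixed types directly.
    dist = gower.gower_matrix(df)                # pairwise distances in [0, 1]
    mean_dist = dist.mean(axis=1)                # rows far from everything are suspects
    print(np.argsort(mean_dist)[::-1][:2])       # indices of the two most isolated rows

    # 2) k-prototypes: clusters numeric + categorical data without dummy expansion.
    kp = KPrototypes(n_clusters=2, init="Cao", random_state=0)
    labels = kp.fit_predict(df.to_numpy(), categorical=[0, 1])
    print(labels)                                # cluster assignment per row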

Is there a way in Simulink to use the same set of blocks on multiple signals (without copying those blocks)?

I am implementing some head tracking and I get 2 matrices of velocities (a vector field decomposed into horizontal and vertical velocities). For each of these matrices I do some math to calculate the actual head tracking.
My question is, is there a way to do that math (which is a set of blocks) on both matrices without copying the math blocks onto each signal?
It's hard to explain, so here's a screenshot of my model:
You can see that the "complex to real-imag" block has 2 outputs (this is the little one in the middle). The mean block and the integrator circuit then calculate the head velocity and position for the real matrix (horizontal position). I want to do exactly the same routine on the imaginary matrix (vertical direction). Obviously I can just copy the blocks, but surely there must be a better way of doing it? In a way I'm looking for an analogue of a loop in "normal programming" like C or something, where a block of code is executed several times on different inputs.
You can create a Library in Simulink that contains code you can reference multiple times.
Go to File -> New -> Library. In the model window that opens, you can create any number of subsystems with whatever code you want. Then, just drag a subsystem from the library into your model. The subsystem will now appear in your model with a little arrow icon in the lower left. This indicates that the subsystem in the model is a link. You can drag as many instances of the library subsystem into your model as you wish, just as you can call a function as many times as you wish in any other programming language.
If you right-click on the subsystem in your model, you can select "Link Options -> Go To Library Block" to get back to the library. You can make changes in your model and propagate them back to the library as well.
One way to easily reuse a set of blocks is to create a subsystem out of them. In your case, you can create a subsystem by grouping existing blocks, then simply copy and paste your subsystem to use it for your imaginary output.
Although potentially more complicated, you could also look into using mux signals to avoid having to copy parts of your model.