Which polarity does it take when computing component ERPs in EEGLAB - matlab

I am reading the EEGLAB tutorials and I would like to know which polarities are used for the computation of component ERPs. The tutorial passage I am referring to is linked.
I want to know because I have realized that, when I plot individual datasets, I get differences in the scalp map polarities even though in my cluster the scalp maps look the same.
Thank you in advance.
Eva

Related

Understanding a code for deep learning NOMA system in MATLAB

I'm trying very hard to understand this code for a Deep Learning-Based NOMA system implemented in MATLAB. I am new to MATLAB coding, but I need to understand this entire code because it will help with my school project, and I am struggling.
I think that, as of right now, I do not need to know how the mathematical formulas work; instead, the focus is on what the code is doing and its flow.
This is part of the code in the trainData.m file that I am struggling with right now
Why are the pilot symbols calculated and then replaced right after?
Why is the idx_sc (20) selected to be replaced? What is its significance? Is it the only subcarrier selected for the training of the DL model? Why only that?
This portion of the code in the picture is labeled "generate training data for each class". From my understanding, it is generating OFDM packets for each label, simulating the transmission and reception, and then getting the features and labels for each of the 16 classes. Is that correct?
The code and all relevant function files can be found in the link below.
Please help me understand the code!!! Please! Much thanks!
https://www.mathworks.com/matlabcentral/fileexchange/75478-deep-learning-for-signal-detection-in-noma-systems
To get you started: in line 91 the code initializes the entire variable to 0. The subsequent lines (92-96) just replace pieces of that variable based on the indexing inside the "(…)".
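To make that pattern concrete, here is a minimal, generic MATLAB sketch (this is not the actual trainData.m code; the sizes, variable names, and pilot values are made up for illustration):

    % Generic illustration only - not the code from the File Exchange submission.
    numSubcarriers = 64;        % assumed OFDM size, for illustration
    numSymbols     = 4;         % assumed number of OFDM symbols per packet
    idx_sc         = 20;        % the subcarrier index the question mentions

    X = zeros(numSubcarriers, numSymbols);   % "line 91" style: whole variable set to 0

    pilotValue    = 1 + 1i;                  % placeholder pilot symbol
    X(1:8:end, 1) = pilotValue;              % "lines 92-96" style: indexed assignment
    X(idx_sc, :)  = 0.5 - 0.5i;              % a later indexed assignment overwrites the
                                             % earlier values at those positions, which
                                             % may be why values look like they are
                                             % "calculated and then replaced"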

Inner workings of Google's Quick Draw

I'm asking this here because I didn't find anything online.
I would be interested in how Google Quick Draw works, specifically:
1) How does it output the answer - does it have a giant output vector with a probability for each type of drawing?
2) How does it read the data - I see they've implemented some sort of order aware input system, but does that mean that they input positions of the interpolated lines that users draw? This is problematic because it's variable length - how did they solve it?
3) And, finally, which training algorithm are they using? Do they keep training as the data grows each time someone draws something new, or did they just feed the data into the algorithm once when it was created?
If you know any papers on this, or by some miracle you work at Google and/or can explain how it works, I would be really grateful. :)

How to plot all the streamlines in ParaView?

I am simulating the lid-driven cavity case and I am trying to get all the streamlines with ParaView's Stream Tracer, but I only get the ones that intersect the seed line, and because of that some vortices are not visible. How can I see all the streamlines in the domain?
Thanks a lot in advance.
To add a little bit to Mathieu's answer, if you really want streamlines everywhere, then you can create a Stream Tracer With Custom Source (as Mathieu suggested) and set your data to both the Input and the Seed Source. That will create a streamline originating from every point in your dataset, which is pretty much what you asked for.
However, while you can do this, you will probably not be happy with the results. First of all, unless your data is trivially small, this will take a long time to compute and create a large amount of data. Even worse, the result will be so dense that you won't be able to see anything. You will get all those interesting streamlines through vortices, but they will be completely hidden by all the boring streamlines around them.
Thus, you are better off with trying to derive a data set that contains seed points that are likely to trace a stream through the vortices that you are interested in. One thing you might want to try is to compute the vorticity of your vector field (Gradient Of Unstructured Data Set when turning on advanced option Compute Vorticity), find the magnitude of that (Calculator), and then use the Threshold filter to pull out the cells with large vorticity. Then use that as your Seed Source.
Another (probably better) option if your data is 2D or you can extract an interesting surface along the flow of your data is to use the Surface LIC plugin. Details can be found at https://www.paraview.org/Wiki/ParaView/Line_Integral_Convolution.
You have to choose a representative source for your streamlines.
You could, for instance, use a "Sphere Source" in the StreamTracer properties.
If that fails, you can use a StreamTracerWithCustomSource and use your own source that you will have to create yourself first.

Matlab BlobAnalysis (for cell counting)

I have been researching how to program image processing for counting objects, and I found the following homepage about cell counting in MATLAB.
I am not familiar with MATLAB, but I found their ideas interesting. Reading this, I see that they are using a BlobAnalysis object to find the centroids of the segmented cells (see the image on that page).
Now, this seems very interesting, but I wonder what this operation is doing exactly. (Please don't give me the definition written in the docs - "it is finding the property in the blobs". How? Is it segmenting? Separating?) As far as I can see, either you separate the cells first (that is the whole point of counting - I am doing this in another program using watershedding) and then finding the centroids is just something added and not so important, OR somehow this blob analysis is doing some interesting segmentation itself that I would like to know about.
Anyone familiar with this can give me some pointers or advice here?
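For context, here is a minimal, illustrative MATLAB sketch of the typical vision.BlobAnalysis call pattern (my own example, not the code from that page; the file name and area threshold are made up). The point is that the object is handed an already-segmented binary mask and then labels and measures the connected regions in it:

    % Requires the Computer Vision Toolbox. BW is assumed to be a logical
    % (black/white) mask produced by some earlier segmentation step.
    BW = imread('cells_mask.png') > 0;                    % hypothetical binary mask
    hBlob = vision.BlobAnalysis('MinimumBlobArea', 30);   % ignore tiny specks
    [areas, centroids, boxes] = step(hBlob, BW);           % one row per detected blob
    numCells = size(centroids, 1)                          % cell count = number of blobs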

Plot data in Google Maps from MATLAB

Is there any way to plot my data, consisting of lat/lon coordinates and some feature values, on Google Maps from MATLAB? I have data points with different properties, and based on those I want to show markers with different colors/sizes on Google Maps. Is that possible?
Google Maps allows you to import data in the form of a KML file. There are various tutorials available online that show how to perform this import step (here's one that I just quickly found). Also, here is some basic info on KML from google.
So then the only challenge becomes exporting your data from MATLAB into KML form. If you have MATLAB's Mapping Toolbox, then this is extremely easy. Just use the kmlwrite command.
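For example, a minimal sketch assuming you have the Mapping Toolbox (the coordinates, names, and values below are made up):

    % Export lat/lon points plus a feature value to KML, then import the file
    % into Google Maps (My Maps) or Google Earth.
    lat  = [51.50; 48.86; 40.71];          % example coordinates
    lon  = [-0.12;  2.35; -74.01];
    vals = [3.2; 7.8; 5.1];                % example feature values

    names = {'Site A', 'Site B', 'Site C'};
    descs = cellstr(num2str(vals, 'value = %.1f'));

    kmlwrite('mypoints.kml', lat, lon, ...
             'Name', names, 'Description', descs, 'IconScale', 1.5);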
If you don't have the Mapping Toolbox already, it's probably a good idea to get it if you are performing any sort of complex mapping operations (things get pretty complicated when you try to flatten a round globe into a map). If this is just a one-off project and that toolbox is overkill, then you may be able to manually create a KML file by writing XML from MATLAB (either using xmlwrite or going the very manual route of writing with fprintf).
Additionally, I would not be too surprised if Google Maps allows you to import certain data in the form of CSV files (though perhaps this has limitations compared to KML). If so, you can simply make use of csvwrite from MATLAB to export your data (no extra toolboxes required).
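If you go that route, a minimal sketch with no extra toolboxes might look like this (the header names are my assumption about what the import step will accept):

    % Write a plain CSV with a header row; lat, lon and vals are column vectors
    % as in the KML sketch above.
    lat  = [51.50; 48.86; 40.71];
    lon  = [-0.12;  2.35; -74.01];
    vals = [3.2; 7.8; 5.1];

    fid = fopen('mydata.csv', 'w');
    fprintf(fid, 'latitude,longitude,value\n');
    fprintf(fid, '%.6f,%.6f,%.3f\n', [lat, lon, vals].');   % one row per point
    fclose(fid);
    % (csvwrite('mydata.csv', [lat lon vals]) also works, but it cannot write the header row.)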
==EDIT==
If you'd like to find out how to convert from CSV to KML, this previous SO post might help.
There is the KML Toolbox, which doesn't require the MATLAB Mapping Toolbox:
http://www.mathworks.com/matlabcentral/fileexchange/34694
It should do the job.