I am using MATLAB. I have a large column vector consisting of complex values, e.g.
data=[
-0.4447 + 0.6263i
0.3114 + 0.8654i
0.7201 + 0.6808i
0.7566 + 0.8177i
-0.7532 - 0.8085i
-0.7851 + 0.6042i
-0.7351 - 0.8725i
-0.4580 + 0.8053i
0.5775 - 0.6369i
0.7073 - 0.5565i
0.4939 - 0.7015i
-0.4981 + 0.8112i
....
]
This represents a constellation diagram which is shown below.
I would like to colour-grade the constellation points according to how frequently points fall in a given region. I presume I need to create a histogram, but I am not sure how to do this with complex vectors, nor how to plot the colour grading.
Any help appreciated.
I think you want to do a heat map:
histdata = [real(data), imag(data)];
nbins_x = 10;
nbins_y = 10;
[N, C] = hist3(histdata, [nbins_x, nbins_y]); % the second argument is optional.
imagesc(N);
Here hist3 creates the histogram matrix and imagesc draws it as a scaled heat map. If you prefer a 3-D visualization, just call hist3(histdata).
If you right-click on N in the workspace window there are plenty of other visualization options. I suggest also trying contourf(N), which gives a filled contour plot.
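If you want the heat map axes in the actual constellation coordinates rather than bin indices, you can pass the bin centres that hist3 returns to imagesc; a small sketch (the axis labels are just illustrative choices):
[N, C] = hist3(histdata, [nbins_x, nbins_y]); % C holds the bin centres
imagesc(C{1}, C{2}, N.')  % transpose: rows of N index the x (real) bins
axis xy                   % origin bottom-left, like a normal plot
colorbar
xlabel('In-phase'), ylabel('Quadrature')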
So, what you want to do is compute a two-dimensional histogram. The easiest way is to separate out the real and imaginary parts and use the hist2d function (a File Exchange function, not built in), like this:
rdata=real(data);
idata=imag(data);
hist2d([rdata, idata]);
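As an aside, if your MATLAB is R2015b or newer, the built-in histogram2 does the same job without any File Exchange download; a minimal sketch:
histogram2(real(data), imag(data), [50 50], ...
    'DisplayStyle', 'tile', 'ShowEmptyBins', 'on');
colorbar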
I have found several tools which convert isosurface-class or meshgrid data in MATLAB to an STL format. Examples include stlwrite and surf2stl. What I can't quite figure out is how to take a delaunayTriangulation object and either use it to create an STL file or convert it into an isosurface object.
The root problem is that I'm starting with an N-by-2 array of boundary points for irregular polygons, so I don't have any simple way to generate an xyz meshgrid. If there's a way to convert the boundary list into an isosurface of the interior region (constant Z-height is all I need), that would also solve my problem.
Otherwise, I need some way to convert the delaunayTriangulation object into something the referenced MATLAB FEX tools can handle.
Edit, to respond to Ander B's suggestion:
I verified that my triangulated set inside MATLAB is a 2-D sector of a circle. But when I feed the data to stlwrite and import the result into Cura, I get a disaster: triangles at right angles to, or rotated by pi from, the desired orientation, or worse. Whether this is the fault of stlwrite, of Cura being sensitive to some unexpected value, or both, I can't tell. Here's what started out as a disc:
As an example, here's a set of points which define a sector of a circle. I can successfully create a delaunayTriangulation object from these data.
>> [fcx1',fcy1']
ans =
100.4563 26.9172
99.9712 28.6663
99.4557 30.4067
98.9099 32.1378
98.3339 33.8591
97.7280 35.5701
97.0924 37.2703
96.4271 38.9591
95.7325 40.6360
95.0087 42.3006
94.2560 43.9523
93.4746 45.5906
92.6647 47.2150
91.8265 48.8250
90.9604 50.4202
90.0666 52.0000
89.1454 53.5640
88.1970 55.1116
87.2217 56.6425
86.2199 58.1561
85.1918 59.6519
84.1378 61.1297
83.0581 62.5888
81.9531 64.0288
80.8232 65.4493
79.6686 66.8499
78.4898 68.2301
77.2871 69.5896
76.0608 70.9278
74.8113 72.2445
73.5391 73.5391
72.2445 74.8113
70.9278 76.0608
69.5896 77.2871
68.2301 78.4898
66.8499 79.6686
65.4493 80.8232
64.0288 81.9531
62.5888 83.0581
61.1297 84.1378
59.6519 85.1918
58.1561 86.2199
56.6425 87.2217
55.1116 88.1970
53.5640 89.1454
52.0000 90.0666
50.4202 90.9604
48.8250 91.8265
47.2150 92.6647
45.5906 93.4746
43.9523 94.2560
42.3006 95.0087
40.6360 95.7325
38.9591 96.4271
37.2703 97.0924
35.5701 97.7280
33.8591 98.3339
32.1378 98.9099
30.4067 99.4557
28.6663 99.9712
26.9172 100.4563
25.1599 100.9108
23.3949 101.3345
21.6228 101.7274
19.8441 102.0892
18.0594 102.4200
16.2692 102.7196
14.4740 102.9879
12.6744 103.2248
10.8710 103.4303
9.0642 103.6042
7.2547 103.7467
5.4429 103.8575
3.6295 103.9366
1.8151 103.9842
0 104.0000
-1.8151 103.9842
-3.6295 103.9366
-5.4429 103.8575
-7.2547 103.7467
-9.0642 103.6042
-10.8710 103.4303
-12.6744 103.2248
-14.4740 102.9879
-16.2692 102.7196
-18.0594 102.4200
-19.8441 102.0892
-21.6228 101.7274
-23.3949 101.3345
-25.1599 100.9108
-26.9172 100.4563
0 0
Building on Ander B's answer, here is the complete sequence. These steps ensure that even concave polygons are properly handled.
Start with two vectors containing all the x and the y coordinates. Then:
% build the constraint list
constr=[ (1:(numel(x)-1))' (2:numel(x))' ; numel(x) 1;];
foodel = delaunayTriangulation(x',y',constr);
% get logical indices of interior triangles
inout = isInterior(foodel);
% if desired, plot the triangles and the original points to verify.
% triplot(foodel.ConnectivityList(inout, :),...
%     foodel.Points(:,1), foodel.Points(:,2), 'r')
% hold on
% plot(foodel.Points(:,1), foodel.Points(:,2), 'g')
% now solidify
% need to create dummy 3rd column of points for a solid
point3 = [foodel.Points,ones(numel(foodel.Points(:,1)),1)];
% pick any negative 'elevation' to make the area into a solid
[solface,solvert] = surf2solid(foodel.ConnectivityList(inout,:),...
point3, 'Elevation', -10);
stlwrite('myfigure.stl',solface,solvert);
I've successfully turned some 'ugly' concave polygons into STLs that Cura is happy to turn into gCode.
STL is just a file format for storing mesh information, so if you have a mesh you already have the data; you just need to write it to disk in the right format.
It appears that you input the vertices and faces to the stlwrite function as
stlwrite(FILE, FACES, VERTICES);
And delaunayTriangulation gives you an object with easy access to this data: for an object DT, DT.Points holds the vertices and DT.ConnectivityList the faces.
You can read more about it in the documentation you linked.
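Putting that together, here is a minimal sketch, assuming the File Exchange stlwrite with the stlwrite(FILE, FACES, VERTICES) signature shown above; STL needs 3-D vertices, so the 2-D points are padded with a dummy z column:
DT = delaunayTriangulation(x(:), y(:)); % x, y: your boundary coordinates
verts3 = [DT.Points, zeros(size(DT.Points, 1), 1)]; % pad with z = 0
stlwrite('flat_mesh.stl', DT.ConnectivityList, verts3);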
There are two related things I would like to ask help with.
1) I'm trying to shift a "semi-log" chart (using semilogy) such that the new line passes through a given point on the chart, but still appears to be parallel to the original.
2) Shift the "line" exactly as in 1), but then also invert the slope.
I think that the desired results are best illustrated with an actual chart.
Given the following code:
x = [50 80];
y = [10 20];
all_x = 1:200;
P = polyfit(x, log10(y),1);
log_line = 10.^(polyval(P,all_x));
semilogy(all_x,log_line)
I obtain the following chart:
For 1), let's say I want to move the line such that it passes through point (20,10). The desired result would look something like the orange line below (please note that I added a blue dot at the (20,10) point only for reference):
For 2), I want to take the line from 1) and take an inverse of the slope, so that the final result looks like the orange line below:
Please let me know if any clarifications are needed.
EDIT: Based on Will's answer (below), the solution is as follows:
%// to shift to point (40, 10^1.5)
%// solution to 1)
log_line_offset = (10^1.5).^(log10(log_line)/log10(10^1.5) + 1-log10(log_line(40))/log10(10^1.5));
%// solution to 2)
log_line_offset_inverted = (10^1.5).^(1 + log10(log_line(40))/log10(10^1.5) - log10(log_line)/log10(10^1.5));
To do transformations described as linear operations on logarithmic axes, perform those linear transformations on the logarithm of the values and then reapply the exponentiation. So for 1):
log_line_offset = 10.^(log10(log_line) + 1-log10(log_line(20)));
And for 2):
log_line_offset_inverted = 10.^(2*log10(log_line_offset(20)) - log10(log_line_offset));
or:
log_line_offset_inverted = 10.^(1 + log10(log_line(20)) - log10(log_line));
These can then be plotted with semilogy in the same way:
semilogy(all_x,log_line,all_x, log_line_offset, all_x,log_line_offset_inverted)
I can't guarantee that this is a sensible approach for the application for which you're creating these plots and their underlying data, though. It seems an odd way to describe the problem, so you might be better off creating these offsets further up the calculation chain.
For example, log_line_offset can just as easily be calculated using your original code but for an x value of [20 50], but whether that is a meaningful way to treat the data may depend on what it's supposed to represent.
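To make that concrete, a sketch: refitting with the same y values over an equally spaced x pair gives a parallel line in log space that now passes through (20,10):
x2 = [20 50]; % same 30-unit spacing as the original [50 80]
P2 = polyfit(x2, log10(y), 1);
log_line_offset = 10.^(polyval(P2, all_x));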
I have a matrix of time-series data for 8 variables with about 2500 points (~10 years of Mon-Fri data) and would like to calculate the mean, variance, skewness and kurtosis on a 'moving average' basis.
Let's say frames = [100 252 504 756]. I would like to calculate the four statistics above over each of the (time-)frames, on a daily basis; so the result for day 300 in the 100-day-frame case would be [mean variance skewness kurtosis] for the period day 201 to day 300 (100 days in total)... and so on.
I know this means I would get an array output, and that the first frame-length's worth of days would be NaNs, but I can't figure out the required indexing to get this done...
This is an interesting question because I think the optimal solution is different for the mean than it is for the other sample statistics.
I've provided a simulation example below that you can work through.
First, choose some arbitrary parameters and simulate some data:
%#Set some arbitrary parameters
T = 100; N = 5;
WindowLength = 10;
%#Simulate some data
X = randn(T, N);
For the mean, use filter to obtain a moving average:
MeanMA = filter(ones(1, WindowLength) / WindowLength, 1, X);
MeanMA(1:WindowLength-1, :) = nan;
I had originally thought to solve this problem using conv as follows:
MeanMA = nan(T, N);
for n = 1:N
MeanMA(WindowLength:T, n) = conv(X(:, n), ones(WindowLength, 1), 'valid');
end
MeanMA = (1/WindowLength) * MeanMA;
But as @PhilGoddard pointed out in the comments, the filter approach avoids the need for the loop.
Also note that I've chosen to make the dates in the output matrix correspond to the dates in X so in later work you can use the same subscripts for both. Thus, the first WindowLength-1 observations in MeanMA will be nan.
For the variance, I can't see how to use either filter or conv or even a running sum to make things more efficient, so instead I perform the calculation manually at each iteration:
VarianceMA = nan(T, N);
for t = WindowLength:T
VarianceMA(t, :) = var(X(t-WindowLength+1:t, :));
end
We could speed things up slightly by exploiting the fact that we have already calculated the moving-average mean. Simply replace the line inside the loop above with:
VarianceMA(t, :) = (1/(WindowLength-1)) * sum((bsxfun(@minus, X(t-WindowLength+1:t, :), MeanMA(t, :))).^2);
However, I doubt this will make much difference.
If anyone else can see a clever way to use filter or conv to get the moving window variance I'd be very interested to see it.
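For what it's worth, one loop-free possibility (a sketch): build the moving variance from two moving sums via the identity var = (sum(x.^2) - sum(x)^2/n) / (n-1). This can be numerically less stable than the explicit loop when the mean is large relative to the spread:
w = ones(1, WindowLength);
S1 = filter(w, 1, X);                       % moving sum of X
S2 = filter(w, 1, X.^2);                    % moving sum of X.^2
VarFilterMA = (S2 - S1.^2 / WindowLength) / (WindowLength - 1);
VarFilterMA(1:WindowLength-1, :) = nan;     % incomplete windows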
I leave the case of skewness and kurtosis to the OP, since they are essentially just the same as the variance example, but with the appropriate function.
A final point: if you were converting the above into a general function, you could pass in an anonymous function as one of the arguments, then you would have a moving average routine that works for arbitrary choice of transformations.
Final, final point: For a sequence of window lengths, simply loop over the entire code block for each window length.
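A sketch of what such a general routine might look like, combining both final points (movingstat is a hypothetical name, not a built-in):
function M = movingstat(X, WindowLength, fun)
% MOVINGSTAT Apply the column-wise statistic FUN over a moving window.
% Rows before the first complete window are returned as nan.
[T, N] = size(X);
M = nan(T, N);
for t = WindowLength:T
    M(t, :) = fun(X(t-WindowLength+1:t, :));
end
end
Then, for example, SkewMA = movingstat(X, 10, @skewness), and a set of window lengths is just a loop over calls to movingstat.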
I have managed to produce a solution which only uses basic MATLAB functions and can also be expanded to include other functions (for finance, e.g. a moving Sharpe ratio or a moving Sortino ratio). The code below shows this and hopefully contains sufficient commentary.
I am using a time series of hedge fund data, with ca. 10 years' worth of daily returns (which were checked to be stationary; not shown in the code). Unfortunately I haven't got the corresponding dates in the example, so the x-axis in the plots would be 'no. of days'.
% start by importing the data you need - here it is a selection out of an
% excel spreadsheet
returnsHF = xlsread('HFRXIndices_Final.xlsx','EquityHedgeMarketNeutral','D1:D2742');
% two years to be used for the moving average. (250 business days in one year)
window = 500;
% create zero-matrices to fill with the MA values at each point in time.
mean_avg = zeros(length(returnsHF)-window+1,1);
st_dev = zeros(length(returnsHF)-window+1,1);
skew = zeros(length(returnsHF)-window+1,1);
kurt = zeros(length(returnsHF)-window+1,1);
% Now work through the time-series with each of the functions (one can add
% any other functions required), assigning the values to the zero-matrices.
for count = window:length(returnsHF)
% This is the most tricky part of the script, the indexing in this section
% The TwoYearReturn is what is shifted along one period at a time with the
% for-loop.
TwoYearReturn = returnsHF(count-window+1:count);
mean_avg(count-window+1) = mean(TwoYearReturn);
st_dev(count-window+1) = std(TwoYearReturn);
skew(count-window+1) = skewness(TwoYearReturn);
kurt(count-window+1) = kurtosis(TwoYearReturn);
end
% Plot the MAs
subplot(4,1,1), plot(mean_avg)
title('2yr mean')
subplot(4,1,2), plot(st_dev)
title('2yr stdv')
subplot(4,1,3), plot(skew)
title('2yr skewness')
subplot(4,1,4), plot(kurt)
title('2yr kurtosis')
I have a tab separated XYZ file which contains 3 columns, e.g.
586231.8 2525785.4 15.11
586215.1 2525785.8 14.6
586164.7 2525941 14.58
586199.4 2525857.8 15.22
586219.8 2525731 14.6
586242.2 2525829.2 14.41
Columns 1 and 2 are the X and Y coordinates (in UTM meters) and column 3 is the associated Z value at the point X,Y; e.g. the elevation (z) at a point is given as z(x,y)
I can read in this file using dlmread() to get 3 variables in the workspace, e.g. X = 41322x1 double, but I would like to create a surface of size (m x n) using these variables. How would I go about this?
Following from the comments below, I tried using TriScatteredInterp (see commands below). I keep getting the result shown below (it appears to be getting some of my surface though):
Any ideas what is going on to cause this result? I think the problem lies with the meshgrid command, though I'm not sure where (or why). I am currently using the following set of commands to produce the above figure (my X and Y columns are in meters, and I know my grid size is 8 m, hence ti/tj going up in steps of 8):
F = TriScatteredInterp(x,y,z,'nearest');
ti = ((min(x)):8:(max(x)));
tj = ((min(y)):8:(max(y)));
[qx,qy] = meshgrid(ti,tj);
qz = F(qx,qy);
imagesc(qz) %produces the above figure^
I think you want the griddata function. See Interpolating Scattered Data in MATLAB help.
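A minimal sketch of that approach with your column vectors x, y and z and the 8 m grid spacing:
[qx, qy] = meshgrid(min(x):8:max(x), min(y):8:max(y));
qz = griddata(x, y, z, qx, qy, 'natural'); % 'linear' or 'nearest' also work
surf(qx, qy, qz), shading interp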
griddata and TriScatteredInterp are extremely slow. Use the utm2deg function from the File Exchange, and from there a combination of vec2mtx to make a regular grid and imbedm to fit the data to the grid.
E.g.:
Lat = zeros(length(X), 1);
Lon = zeros(length(X), 1);
for i = 1:length(X)
    [Lat(i), Lon(i)] = utm2deg(Easting(i), Northing(i), Zone(i,:));
end
[Grid, R] = vec2mtx(Lat, Lon, gridsize);
Grid= imbedm(Lat, Lon,z, Grid, R);
Maybe you are looking for the function ndgrid(x,y) or meshgrid(x,y).
I'm working in MATLAB, processing images for steganography. In my work so far I have been using the block processing command blockproc to break the image up into blocks to work on. I'm now looking to start working with two images, the secret and the cover, but I can't find any way to use blockproc with two input matrices instead of one.
Would anyone know of a way to do this?
blockproc allows you to iterate over a single image only, but doesn't stop you from operating on whatever data you would like. The signature of the user function takes as input a "block struct", which contains not only the data field (which is used in all the blockproc examples) but also several other fields, one of which is "location". You can use this to determine "where you are" in your input image and to determine what other data you need to operate on that block.
For example, here's how you could do element-wise multiplication on two same-size images. This is a pretty clunky example, but it demonstrates how this could look:
im1 = rand(100);
im2 = rand(100);
fun = @(bs) bs.data .* ...
im2(bs.location(1):bs.location(1)+9,bs.location(2):bs.location(2)+9);
im3 = blockproc(im1,[10 10],fun);
im4 = im1 .* im2;
isequal(im3,im4)
Using the "location" field of the block struct you can figure out the appropriate parts of a 2nd, 3rd, 4th, etc. data set you need for that particular block.
hope this helps!
-brendan
I was struggling with the same thing recently and solved it by combining both input matrices into a single 3-D matrix, as follows. The commented-out lines were my original code, prior to introducing block processing. The other problem I had was using variables other than the image matrix in the function: I had to do that part of the calculation first. If someone can simplify it, please let me know!
%%LAB1 - L*a*b nearest neighbour classification
%distance_FG = ((A-FG_A).^2 + (B-FG_B).^2).^0.5;
%distance_BG = ((A-BG_A).^2 + (B-BG_B).^2).^0.5;
distAB = @(bs) ((bs.data(:,:,1)).^2 + (bs.data(:,:,2)).^2).^0.5;
AB = A - FG_A; AB(:,:,2) = B - FG_B;
distance_FG = blockproc(AB, [1000, 1000], distAB);
clear AB
AB = A - BG_A; AB(:,:,2) = B - BG_B;
distance_BG = blockproc(AB, [1000, 1000], distAB);
clear AB
I assume the solution to your problem lies in creating a new matrix that contains both input matrices.
e.g. A(:,:,1) = I1; A(:,:,2) = I2;
Now you can use blockproc on A.
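A minimal sketch of that stacking approach, redoing the element-wise product example from the first answer (I1 and I2 stand for your two same-size images):
I1 = rand(100); I2 = rand(100);
A = cat(3, I1, I2);                            % stack along the 3rd dimension
fun = @(bs) bs.data(:,:,1) .* bs.data(:,:,2);  % each block carries both slices
out = blockproc(A, [10 10], fun);
isequal(out, I1 .* I2)                         % returns true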