PLY polygon number of points

I am currently working on a PLY (Polygon File Format) reader in C++/OpenGL.
Is the number of points the same for every polygon in a PLY file?
Example: 'N 1 2 3 .. M' is a polygon row in a PLY file; it says that the polygon is made of N points.
Is N the same for the whole file?

No, the number of points per face in a PLY file may vary from one face to the next. For example, the face list may contain a mix of triangles and quadrilaterals, like this:
3 0 1 2
3 0 3 1
4 4 0 2 5
4 6 4 5 7
You can see an example of a full PLY file containing faces with different numbers of vertices at Paul Bourke's PLY File Format page.
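This is possible because the face element is declared in the header as a variable-length list, so each face row carries its own vertex count. A minimal header sketch (the element counts here are placeholders, not taken from a real file):
ply
format ascii 1.0
element vertex 8
property float x
property float y
property float z
element face 4
property list uchar int vertex_indices
end_header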

Related

How can I calculate the average between two text files treated as matrices?

I am fairly new to using MATLAB. I have two text files, each containing 500 rows x 100 columns of values.
For example:
1 2 3 4 5
6 7 8 9 2
4 5 6 6 8
My question is: how can I calculate the average of each field at the corresponding row and column?
For instance, if (1,1) in text1.txt is 20 and (1,1) in text2.txt is 40, the average is 30, and I want to write this result to another file at (1,1).
Many thanks.
Assuming the files are named 'a' and 'b':
load a
load b
c=(a+b)/2;
save('c','c','-ascii');
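If the files keep the names from the question, the same idea in function form might look like the following sketch ('average.txt' is just a placeholder output name):
A = load('text1.txt');               % 500x100 matrix from the first file
B = load('text2.txt');               % 500x100 matrix from the second file
C = (A + B) / 2;                     % element-wise average
save('average.txt', 'C', '-ascii');  % write the averaged matrix to a text file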

How to draw a Histogram in Matlab

I have a data set of around 35,000 values. These are the signal strengths received from a single location at different time intervals. I want to plot a histogram of these data: the X-axis will show the signal strengths and the Y-axis will show the probability. The histogram will consist of bars giving information about the signal strengths and their probabilities.
For example, suppose I have the following data
a= [ 1 1 1 1 1 1 2 2 2 3 3 3 3 3 3 3 3 3 4 4 4 5 6 6 6 6 6 6 6 6 6 6 6]
How can I plot the graph with the data on the X-axis and probability on the Y-axis? Any help will be appreciated. Thanks!
This should work just fine without any special toolbox functions:
una = unique(a);                    % the unique signal strength values
normhist = hist(a, una) / numel(a); % counts at each unique value, divided by the total number of samples
figure, stairs(una, normhist)
una contains only the unique values of a; normhist lies between 0 and 1 and gives the probability of each signal value occurring, because the counts are divided by the number of elements in the data.
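As a side note, if a binned histogram is acceptable and a newer MATLAB release (R2014b or later) is available, the histogram function can normalize the bar heights to probabilities directly:
histogram(a, 'Normalization', 'probability')     % bar heights sum to 1
xlabel('Signal strength'), ylabel('Probability')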

Similarity matching between images using multiple features in Matlab

In an image-retrieval MATLAB project, I have extracted 14 features from each image, as shown in the following table.
No.   Feature name   Size   Example values
1 Feature_1 1x64 {96.02, 100.29, 69.04, 91.23,……89.42}
2 Feature_2 1x64 {0.070, 0.0671, 0.0876, …….. 0.065}
3 Feature_3 1x64 {0.837, 0.949, 0.992, 1.015 .…. 1.306}
4 Feature_4 1x64 { 5.00, 5.831, 8.6023, 6.403,…..8.602}
5 Feature_5 1x64 {-18.875, -10.85, -5.12, … 39.2005}
6 Feature_6 1x1 0.6465494
7 Feature_7 1x1 0.89150039
8 Feature_8 1x1 0.888859
9 Feature_9 1x1 0.990652599
10 Feature_10 1x1 157.8198719
11 Feature_11 1x1 0.60112219
12 Feature_12 1x1 0.060502114
13 Feature_13 1x1 0.139164909
14 Feature_14 1x1 5.7084825
The above feature set is for a single image. To compute the similarity between two images, I tried the following approaches.
First, I applied the distance computation to the whole feature set by constructing a feature matrix (size: 14 x n) for each image.
Second, 14 distance values are computed from the individual features of the two images, and these individual distances are then added to obtain the final distance. The problem here is that some features dominate and render the others ineffective (e.g. Feature_1 and Feature_10 give large distance values, so adding the other 12 features' distances to them has almost no effect). So I normalized the 14 individual distance values to the range 0 to 1, but then the problem is that the small distance values become too small after normalization.
In both methods the retrieval results are not satisfactory. Are there any other methods to compute the similarity between images that account for all of the above features with equal importance?
Regards,
P.Arjun
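For reference, a minimal sketch of the per-feature distance plus rescaling approach described above (f1 and f2 are hypothetical 1x14 cell arrays holding the features of two images; cells 1-5 are 1x64 vectors, cells 6-14 scalars):
d = zeros(1, 14);
for k = 1:14
    d(k) = norm(f1{k} - f2{k});            % Euclidean distance for feature k
end
dNorm = (d - min(d)) ./ (max(d) - min(d)); % rescale the 14 distances to [0,1]
finalDistance = sum(dNorm);                % combine with equal weighting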

Import a txt weighted adjacency list into a matrix

I want to create a weighted adjacency matrix. Is there a good way to do this that will work even with a huge data set?
I have this abc.txt file for example:
abc.txt
1 2 50
2 3 70
3 1 42
1 3 36
The result should be:
matrix=
0 50 36
0 0 70
42 0 0
How can I construct a weighted adjacency matrix from an input graph file like the one shown above, so that it contains the weights?
So basically the input file has 3 columns, and the third column holds the weight of each edge.
You could apply spconvert to the output of importdata:
matrix = full(spconvert(importdata('abc.txt')));
Alternatively: what you have is a sparse definition of a matrix, so using sparse is the simplest way to create it. If your matrix is sparse (many zeros), you may also want to keep it as a sparse matrix, because it requires less memory; in that case, delete the last line below (the conversion to full).
S = load('abc.txt');                 % three columns: source node, target node, weight
M = sparse(S(:,1), S(:,2), S(:,3));  % build the sparse weighted adjacency matrix
M = full(M)                          % convert to a full matrix (optional)
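If the number of nodes n is known in advance and might be larger than the largest id appearing in the file (an assumption about your data), sparse also accepts an explicit size:
n = 3;                                    % number of nodes (placeholder value)
M = sparse(S(:,1), S(:,2), S(:,3), n, n); % force an n-by-n adjacency matrix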

Scramble an nx1 matrix in matlab efficiently?

I need to randomly scramble the values of an nx1 matrix in MATLAB. I'm not sure how to do this efficiently; I need to do it many times for n > 40,000.
Example
Matrix before:
1 2 2 2 3 4 5 5 4 3 2 1
Scrambled:
3 5 2 1 2 2 3 4 1 4 5 2
thank you
If your data is stored in a matrix called data, then you can generate "scrambled" data using randperm like so:
scrambled = data(randperm(numel(data)));
This is sampling without replacement, so every value in data will appear once in scrambled.
For sampling with replacement (values in data may appear in scrambled multiple times and some may not appear at all), you could use randi like this:
scrambled = data(randi(numel(data),1,numel(data)));
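A usage sketch for the repeated-scrambling case mentioned in the question (the vector size and number of repetitions are placeholders):
data = randi(5, 40000, 1);                   % example nx1 data (placeholder)
for trial = 1:1000                           % repeat the scrambling many times
    scrambled = data(randperm(numel(data))); % sampling without replacement
    % ... use scrambled here ...
end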