I have some weather data stored in a csv file in the form of "id, date, temperature, rainfall", with id being the weather station and, obviously, date being the date of measurement. The file contains the data of 3 different stations over a period of 10 years.
What I'd like to do is analyze the data of each station and each year. For example: I'd like to calculate the day-to-day differences in temperature [abs(T(n+1) - T(n))] for each station and each year.
I thought while-loops could be a possibility, with the loop calculating something as long as the id value is equal to the one in the next row.
But I’ve no idea how to do it.
Best regards
If you still need assistance, I would consider importing the .csv file data using "readtable". As long as only the first row is text, MATLAB will create a 'table' variable (this shouldn't be an issue for a .csv file). The individual columns can be accessed via "tablename.header" and can be re-established as double data type (e.g., variable_1 = tablename.header). You can then concatenate your dataset as you like. As for sorting by date and station id, I would advocate using "sortrows". For example, if the station id is the first column, sortrows(data,1) will sort "data" by the station id. sortrows(data,[1 2]) will sort "data" by the first column, then by the second column. From there, you can write an if statement to compare the station IDs and perform the required calculations. I hope my brief answer is somewhat helpful.
A basic code structure would be:
folder='copy and paste file path here'; % show MATLAB where to look (avoid calling this "path", which shadows a built-in)
data=readtable(fullfile(folder,'filename.csv'), 'ReadVariableNames',true); % read the file from csv format into a table
variable1=data.header1; % general example of making a double type variable from a table column
variable2=data.header2;
variable3=data.header3;
double_data=[variable1 variable2 variable3]; % concatenates the three columns together
sorted_data=sortrows(double_data, [1 2]); % sorts double_data by column 1 then column 2
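As a vectorized sketch of the comparison step described above (a minimal sketch, assuming the columns of sorted_data are station id, date, and temperature, in that order; a year column could be compared the same way):
station = sorted_data(:,1);
temperature = sorted_data(:,3);
dT = abs(diff(temperature)); % abs(T(n+1) - T(n)) for consecutive rows
sameStation = diff(station) == 0; % false where the next row is a new station
dT = dT(sameStation); % keep only within-station differences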
It always helps to have actual data to work on and specifics as to what kind of output format is expected. Basically, ins and outs :) With the little info provided, I figured I would generate random data for you in the first section, and then calculate some stats in the second. I include the loop as an example since that's what you asked for, but I highly recommend using vectorized calculations whenever available, such as the one done in the summary stats.
%% example for weather stations
% generation of random data to correspond to what your csv file looks like
rng(1); % keeps the random seed for testing purposes
nbDates = 1000; % number of days of data
nbStations = 3; % number of weather stations
measureDates = repmat((now()-(nbDates-1):now())',nbStations,1); % nbDates days of data ending today
stationIds = kron((1:nbStations)',ones(nbDates,1)); % assuming 3 weather stations with IDs [1,2,3]
temp = rand(nbStations*nbDates,1)*70+30; % temperatures are in F and vary between 30 and 100 degrees
rain = max(rand(nbStations*nbDates,1)*40-20,0); % rain fall is 0 approximately half the time, and between 0mm and 20mm the rest of the time
csv = table(measureDates, stationIds, temp, rain);
clear measureDates stationIds temp rain;
% augment the original dataset as needed
years = year(csv.measureDates);
data = [csv,array2table(years)];
sorted = sortrows( data, {'stationIds', 'measureDates'}, {'ascend', 'ascend'} );
% example looping through your data
for i = 1 : size( sorted, 1 )
    fprintf( 'Id=%d, year=%d, temp=%g, rain=%g', sorted.stationIds( i ), sorted.years( i ), sorted.temp( i ), sorted.rain( i ) );
    if( i > 1 && sorted.stationIds( i )==sorted.stationIds( i-1 ) && sorted.years( i )==sorted.years( i-1 ) )
        fprintf( ' => absolute difference with day before: %g', abs( sorted.temp( i ) - sorted.temp( i-1 ) ) );
    end
    fprintf( '\n' ); % new line
end
% depending on the statistics you wish to compute, other more efficient ways of
% obtaining summary stats might be available, for example:
grpstats( data ...
, {'stationIds','years'} ... % group by categories
, {'mean','min','max','meanci'} ... % statistics we want
, 'dataVars', {'temp','rain'} ... % variables on which to calculate stats
) % doesn't require data to be sorted or any looping
This produces one printed line for each row of data (and only calculates the difference in temperature when there is no year or station change). It also produces some summary stats at the end; here's what I get:
stationIds years GroupCount mean_temp min_temp max_temp meanci_temp mean_rain min_rain max_rain meanci_rain
__________ _____ __________ _________ ________ ________ ________________ _________ ________ ________ ________________
1_2016 1 2016 82 63.13 30.008 99.22 58.543 67.717 6.1181 0 19.729 4.6284 7.6078
1_2017 1 2017 365 65.914 30.028 99.813 63.783 68.045 5.0075 0 19.933 4.3441 5.6708
1_2018 1 2018 365 65.322 30.218 99.773 63.275 67.369 4.7039 0 19.884 4.0615 5.3462
1_2019 1 2019 188 63.642 31.16 99.654 60.835 66.449 5.9186 0 19.864 4.9834 6.8538
2_2016 2 2016 82 65.821 31.078 98.144 61.179 70.463 4.7633 0 19.688 3.4369 6.0898
2_2017 2 2017 365 66.002 30.054 99.896 63.902 68.102 4.5902 0 19.902 3.9267 5.2537
2_2018 2 2018 365 66.524 30.072 99.852 64.359 68.69 4.9649 0 19.812 4.2967 5.6331
2_2019 2 2019 188 66.481 30.249 99.889 63.647 69.315 5.2711 0 19.811 4.3234 6.2189
3_2016 3 2016 82 61.996 32.067 98.802 57.831 66.161 4.5445 0 19.898 3.1523 5.9366
3_2017 3 2017 365 63.914 30.176 99.902 61.932 65.896 4.8879 0 19.934 4.246 5.5298
3_2018 3 2018 365 63.653 30.137 99.991 61.595 65.712 5.3728 0 19.909 4.6943 6.0514
3_2019 3 2019 188 64.201 30.078 99.8 61.319 67.082 5.3926 0 19.874 4.4541 6.3312
I was wondering whether the procedure I applied to downsample the data was appropriate, following the documented usage y = downsample(x,n):
downsamp_rate = 40;
downsampled_data = downsample(X,downsamp_rate);
My doubt arises because the first column of both matrices (the original and the downsampled one) is exactly the same, keeping the same data, while the other columns appear already transformed to the lower sample rate.
Thank you so much!
Best!
Edited: sample data. I pasted the data below, but I can upload the .mat files.
Original data.
column 1 column 2 column 3
-0,593600000000000 -0,592699999999996 -0,591899999999995
2,42180000000000 2,41010000000000 2,40360000000000
1,78550000000000 1,79020000000000 1,79530000000000
-1,30590000000000 -1,31520000000000 -1,31530000000000
-0,707800000000003 -0,712699999999999 -0,727700000000003
-0,986500000000001 -0,996000000000002 -1,00460000000000
-0,989699999999999 -0,989699999999999 -0,989699999999999
1,23500000000000 1,22970000000000 1,21880000000000
0,122899999999998 0,127899999999997 0,128899999999998
0,938300000000003 0,937500000000002 0,936200000000004
0,248600000000004 0,248500000000002 0,248700000000002
-0,381499999999996 -0,393199999999999 -0,393699999999997
0,294099999999997 0,279299999999999 0,271299999999997
-0,223200000000001 -0,223699999999999 -0,227299999999997
0,0879999999999992 0,117300000000004 0,122500000000003
-0,167899999999999 -0,170999999999999 -0,174800000000003
-0,687499999999996 -0,697199999999998 -0,701600000000002
-0,681700000000002 -0,682200000000000 -0,683000000000000
1,19659999999999 1,19670000000000 1,19490000000000
-0,565500000000008 -0,565199999999999 -0,557400000000008
Downsampled data
column 1 column 2 column 3
-0,593600000000000 0,821900000000003 0,936300000000001
2,42180000000000 1,14610000000000 -0,255400000000000
1,78550000000000 2,86550000000000 3,66890000000000
-1,30590000000000 7,01950000000000 12,9564000000000
-0,707800000000003 3,05920000000000 0,852999999999998
-0,986500000000001 -0,372200000000000 -0,951000000000002
-0,989699999999999 -0,988000000000000 -1,21730000000000
1,23500000000000 5,79700000000000 3,40880000000000
0,122899999999998 5,32230000000000 5,19260000000000
0,938300000000003 4,88130000000000 7,55900000000000
0,248600000000004 4,79290000000000 2,96620000000000
-0,381499999999996 -0,400000000000000 0,641500000000000
0,294099999999997 -0,131400000000004 -1,20040000000000
-0,223200000000001 1,49610000000000 1,59030000000000
0,0879999999999992 0,418700000000000 -0,0114999999999976
-0,167899999999999 0,0149999999999983 -0,857500000000000
-0,687499999999996 -0,593100000000002 0,119700000000000
-0,681700000000002 -0,170000000000003 0,126799999999999
1,19659999999999 1,17670000000000 1,15780000000000
-0,565500000000008 8,89019999999999 6,58569999999999
A possible explanation for your output is a periodic input signal with a period length of downsamp_rate-1. To give a short demonstration:
>> X=repmat(1:39,1,10);
>> downsampled_data = downsample(X,downsamp_rate);
>> downsampled_data
downsampled_data =
Columns 1 through 9
1 2 3 4 5 6 7 8 9
Column 10
10
Thus, take a look at your rows 40, 41, and 42. I assume the first values there are identical to your rows 1, 2, and 3.
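For reference, downsample(x,n) keeps the first sample and then every nth sample after it (samples 1, n+1, 2n+1, ...), which is why the first value of the output always equals the first value of the input. You can confirm that the output above is just every 40th sample:
>> isequal(downsampled_data, X(1:downsamp_rate:end))
ans = 1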
I'm quite new to Matlab and I've been searching, unsuccessfully, for a solution to the following issue: I have an unstructured txt file with several rows I don't need, but a number of rows inside that file have a structured format. I've been researching how to "load" the file to edit it, but cannot find anything.
Since I don't know if I was clear, let me show you the content of the file:
8782 PROJCS["UTM-39",GEOGC.......
1 676135.67755473056 2673731.9365976951 -15 0
2 663999.99999999302 2717629.9999999981 -14.00231124135486 3
3 709999.99999999162 2707679.2185399458 -10 2
4 679972.20003752434 2674637.5679516452 0.070000000000000007 1
5 676124.87132483651 2674327.3183533219 -18.94794942571912 0
6 682614.20527054626 2671000.0000000549 -1.6383425512446661 0
...........
8780 682247.4593014461 2676571.1515358146 0.1541080392180566 0
8781 695426.98657108378 2698111.6168302582 -8.5039945992245904 0
8782 674723.80100125563 2675133.5486935056 -19.920312922947179 0
16997 3 21
1 2147 658 590
2 1855 2529 5623
.........
I'd appreciate it if someone could tell me whether there is a way to open the file and then load only the rows starting with 1 through the one starting with 8782. The first row and all the others are not important.
I know that manually copying and pasting to a new file would be a solution, but I'd like to know about the possibility of reading the file and editing it, for other ideas I have.
Thanks!
% Now lines{i} is the string of the i'th line.
lines = strsplit(fileread('filename'), '\n');
% Now elements{i}{j} is the j'th field of the i'th line.
elements = arrayfun(@(x){strsplit(x{1}, ' ')}, lines);
% Remove the first row:
elements(1) = [];
% Take the first several rows:
n_rows = 8782;
elements = elements(1:n_rows);
Or if the number of rows you need to take is not fixed, you can replace the last two statements above with:
firsts = arrayfun(@(x)str2num(x{1}{1}), elements);
n_rows = find((firsts(2:end) - firsts(1:end-1)) ~= 1, 1, 'first');
elements = elements(1:n_rows);
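If you then need the kept rows as numbers rather than strings, a possible follow-up (a sketch assuming every kept line has the same number of fields, as in your sample):
% convert each line's fields from strings to numbers, then stack into a matrix
rows = cellfun(@(e) str2double(e), elements, 'UniformOutput', false);
data = vertcat(rows{:}); % one matrix row per kept line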
For several days I have had problems reading my measurement csv files and making some simple calculations. I hope someone can help me.
My Aim
Read a CSV data file, as follows:
Opened with Excel:
date: 20140202 time: 083736 Cycles total: 74127 T_zer: 56 T_op1: 90.000
Actu state: stoppes ! T1: -23 T2: -12 T3: -32 T4: -65
*-*
324203 0 34724 0 0 0 2
431040 0 0 0 0 0 1
230706 0 0 0 0 0 1
340810 0 0 0 0 0 1
..............
....
.
--> Here is my 1st question: if I open the file with an editor, I can only see one delimiter, ";". But there must be two? One for rows, one for columns? How can Excel separate it correctly into rows and columns if there is only ";"?
However... I then tried to csvread this file with Octave. I get it into Octave, but everything ends up in only one column :/. It would be very convenient if Octave could read it into a 7-by-X matrix. In that case I could handle the data easily.
Here my Code:
clc
clear all
[fname,pname] =uigetfile();
fname;
extra="/";
pname;
b=strcat(pname,extra,fname);
m = csvread(b);
Result:
m is a double of size 4003x1. 4003 is correct, but everything is in one column :/
m =
0
0
0
454203
561040
340706
I have been trying to handle this problem for several days now, but with no result.
Not an Octave expert, but it looks like you can use the dlmread function to read CSV files; it has many parameters which can help you read the file correctly. Among other things, you can (see the sketch below):
start reading the data from row X (and not from the start)
only keep Y columns
define the separator between fields
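A minimal sketch with dlmread (assuming your two header lines plus the "*-*" line occupy the first three rows, and ";" separates the fields; note that dlmread's row/column offsets are zero-based). Regarding your first question: the newline at the end of each line is the second, implicit delimiter that separates the rows.
m = dlmread(b, ';', 3, 0); % start at the 4th row, 1st column, fields split on ';'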
I used data points 1~200 as training data and 201~220 as testing data.
The format looks like this: 3 classes (class 1, class 2, class 3) and 20 features:
2 1:100 2:96 3:88 4:94 5:96 6:94 7:72 8:68 9:69 10:70 11:76 12:70 13:73 14:71 15:74 16:76 17:78 18:81 19:76 20:76
2 1:96 2:100 3:88 4:88 5:90 6:98 7:71 8:66 9:63 10:74 11:75 12:66 13:71 14:68 15:74 16:78 17:78 18:85 19:77 20:76
2 1:88 2:88 3:100 4:96 5:91 6:89 7:70 8:70 9:68 10:74 11:76 12:71 13:73 14:74 15:79 16:77 17:73 18:80 19:78 20:78
2 1:94 2:87 3:96 4:100 5:92 6:88 7:76 8:73 9:71 10:70 11:74 12:67 13:71 14:71 15:76 16:77 17:71 18:80 19:73 20:73
2 1:96 2:90 3:91 4:93 5:100 6:92 7:74 8:67 9:67 10:75 11:75 12:67 13:74 14:73 15:77 16:77 17:75 18:82 19:76 20:74
2 1:93 2:98 3:90 4:88 5:92 6:100 7:73 8:66 9:65 10:73 11:78 12:69 13:73 14:72 15:75 16:74 17:75 18:83 19:79 20:77
3 1:73 2:71 3:73 4:76 5:74 6:73 7:100 8:79 9:79 10:71 11:65 12:58 13:67 14:73 15:74 16:72 17:60 18:63 19:64 20:60
3 1:68 2:66 3:70 4:73 5:68 6:67 7:78 8:100 9:85 10:77 11:57 12:57 13:58 14:62 15:68 16:64 17:59 18:57 19:57 20:59
3 1:69 2:64 3:70 4:72 5:69 6:65 7:78 8:85 9:100 10:70 11:56 12:63 13:62 14:61 15:64 16:69 17:56 18:55 19:55 20:51
3 1:71 2:74 3:74 4:70 5:76 6:73 7:71 8:73 9:71 10:100 11:58 12:58 13:59 14:60 15:58 16:65 17:57 18:57 19:63 20:57
1 1:77 2:75 3:76 4:73 5:75 6:79 7:66 8:56 9:56 10:59 11:100 12:77 13:84 14:79 15:82 16:80 17:82 18:82 19:81 20:82
1 1:70 2:66 3:71 4:67 5:67 6:70 7:63 8:57 9:62 10:58 11:77 12:100 13:84 14:75 15:76 16:78 17:73 18:72 19:87 20:80
1 1:73 2:72 3:73 4:71 5:74 6:74 7:68 8:58 9:61 10:59 11:84 12:84 13:100 14:86 15:88 16:91 17:81 18:81 19:84 20:86
1 1:71 2:69 3:75 4:71 5:73 6:73 7:74 8:61 9:61 10:60 11:79 12:75 13:86 14:100 15:90 16:88 17:74 18:79 19:81 20:82
1 1:74 2:74 3:80 4:76 5:78 6:76 7:73 8:66 9:64 10:59 11:81 12:76 13:88 14:90 15:100 16:93 17:74 18:83 19:81 20:85
1 1:76 2:77 3:77 4:76 5:78 6:75 7:73 8:64 9:68 10:65 11:80 12:78 13:91 14:88 15:93 16:100 17:79 18:79 19:82 20:83
1 1:78 2:78 3:73 4:71 5:75 6:75 7:61 8:58 9:57 10:56 11:82 12:73 13:81 14:74 15:74 16:80 17:100 18:85 19:80 20:85
1 1:81 2:85 3:79 4:80 5:82 6:82 7:63 8:56 9:55 10:57 11:82 12:72 13:81 14:79 15:83 16:79 17:85 18:100 19:83 20:79
1 1:76 2:77 3:78 4:75 5:76 6:79 7:65 8:57 9:57 10:63 11:81 12:87 13:84 14:81 15:81 16:82 17:80 18:83 19:100 20:87
1 1:76 2:76 3:78 4:73 5:75 6:78 7:60 8:59 9:51 10:57 11:82 12:80 13:86 14:82 15:85 16:83 17:85 18:79 19:87 20:100
Then I wrote code to classify them:
% read the data set
[image_label, image_features] = libsvmread(fullfile('D:\...'));
[N D] = size(image_features);
% Determine the train and test index
trainIndex = zeros(N,1);
trainIndex(1:200) = 1;
testIndex = zeros(N,1);
testIndex(201:N) = 1;
trainData = image_features(trainIndex==1,:);
trainLabel = image_label(trainIndex==1,:);
testData = image_features(testIndex==1,:);
testLabel = image_label(testIndex==1,:);
% Train the SVM
model = svmtrain(trainLabel, trainData, '-c 1 -g 0.05 -b 1');
% Use the SVM model to classify the data
[predict_label, accuracy, prob_values] = svmpredict(testLabel, testData, model, '-b 1');
But the final results for predict_label are all class 1, so the accuracy is 50%, which means it cannot find the correct predicted labels for classes 2 and 3.
Is there something wrong with the format of the data, or with the code that I implemented?
Please help me, thanks very much.
To elaborate a bit more, there are at least three problems here:
You only check one value each for the parameters C (-c) and gamma (-g). The behaviour of an SVM is heavily dependent on a good choice of these parameters, so it is a common approach to use a grid search with cross-validation for selecting the best ones.
Data scale also plays an important role here. If some of the dimensions are much bigger than the rest, you will bias the whole classifier. To deal with this, there are at least two basic approaches: (1) scale each dimension linearly to some interval (like [0,1] or [-1,1]), or (2) normalize the data by the transformation Sigma^(-1/2), where Sigma is the data covariance matrix (see the whitening sketch after this list).
Label imbalance: an SVM works best when you have exactly the same number of points in each class. When that is not the case, you should use a class weighting scheme in order to get valid results.
After fixing these three issues you should get reasonable results.
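For the second normalization option above, a minimal whitening sketch (assuming the samples are the rows of trainData and that the covariance matrix is positive definite):
X = full(trainData); % dense copy of the data
Xc = X - repmat(mean(X, 1), size(X, 1), 1); % center each dimension
Sigma = cov(Xc); % data covariance matrix
Xw = Xc * Sigma^(-1/2); % whitened data: unit covariance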
My guess is that you'd want to tune your parameters.
Make a loop over your -c and -g values (typically logarithmically, e.g., -c 10^(-3:5)) and pick the combination that works best.
That said, it is advisable to normalize your data, e.g., scale it such that all values are between 0 and 1.
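A minimal sketch of such a grid search with libsvm, using its built-in cross-validation mode ('-v 5' makes svmtrain return the 5-fold cross-validation accuracy instead of a model); the search ranges below are just common starting points, and the [0,1] scaling follows the advice above:
% scale each feature linearly to [0,1] based on the training data's range
mins = full(min(trainData, [], 1));
ranges = full(max(trainData, [], 1)) - mins;
ranges(ranges == 0) = 1; % guard against constant features
scale = @(X) (X - repmat(mins, size(X,1), 1)) ./ repmat(ranges, size(X,1), 1);
trainScaled = scale(full(trainData));
testScaled = scale(full(testData)); % apply the SAME scaling to the test set
% grid search over C and gamma using 5-fold cross-validation
bestAcc = 0;
for log2c = -3:5
    for log2g = -5:3
        opts = sprintf('-c %g -g %g -v 5', 2^log2c, 2^log2g);
        acc = svmtrain(trainLabel, trainScaled, opts); % cross-validation accuracy
        if acc > bestAcc
            bestAcc = acc; bestc = 2^log2c; bestg = 2^log2g;
        end
    end
end
% retrain on all training data with the best parameters
% (class weights, e.g. '-w2 2', could be appended if the classes stay imbalanced)
model = svmtrain(trainLabel, trainScaled, sprintf('-c %g -g %g', bestc, bestg));
[predict_label, accuracy, prob_values] = svmpredict(testLabel, testScaled, model);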