How can I read and separate a txt file containing characters and two columns of numbers?

I have a *.txt file containing characters and two columns of numbers. How can I read the file and pick up just the numbers in each column? I want to plot the numbers in the first column against those in the second column, but I can't separate the numbers from the characters. My txt file appears as below.
Thanks a lot.
Temperature: Not acquired
Spectrometer Type: S2000
ADC Type: ADC1000USB
Number of Pixels in File: 2048
Graph Title:
>>>>>Begin Spectral Data<<<<<
229.20 0.000
229.57 0.000
229.94 0.037
230.31 0.047
230.68 -0.027
231.05 0.027
etc ....
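One way to handle a file like this is to skip everything up to the >>>>>Begin Spectral Data<<<<< marker and then read the two numeric columns with textscan. A minimal sketch, assuming the file is called spectrum.txt (adjust the name to match yours):

```matlab
fid = fopen('spectrum.txt', 'r');
% Skip header lines until the marker that precedes the numeric data
line = fgetl(fid);
while ischar(line) && isempty(strfind(line, 'Begin Spectral Data'))
    line = fgetl(fid);
end
% Read the two numeric columns that follow
cols = textscan(fid, '%f %f');
fclose(fid);
wavelength = cols{1};   % first column
intensity  = cols{2};   % second column
plot(wavelength, intensity)
xlabel('Wavelength'), ylabel('Intensity')
```

This avoids having to know how many header lines there are, since it scans for the marker instead of counting lines.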

Related

How to load a .txt file into matlab ignoring -nan(ind) values or changing them to zeros

I'm struggling to write MATLAB code to load a .txt file which contains -nan(ind) in some cells. I've tried several functions like textscan and dlmread, but none of them have worked so far.
These -nan(ind) cells only occur in the first two and last two rows. How can I:
upload the text and change all -nan(ind) cells to zero, or
only load the numerical portion of the matrix, i.e. the matrix minus the first 2 and last 2 rows?
The matrix will always have 18 columns, but the length will vary. The .txt file is called segmental_force_moments.txt
Any help is greatly appreciated
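One approach, sketched under the assumption that the file sits in the current directory: read the whole file in as text, replace every -nan(ind) token with 0, then parse the result. The column count of 18 is taken from the question.

```matlab
% Read the whole file as one string
txt = fileread('segmental_force_moments.txt');
% Replace the non-numeric tokens with zeros
txt = strrep(txt, '-nan(ind)', '0');
% Parse into an N-by-18 matrix (18 columns per the question)
vals = sscanf(txt, '%f');
data = reshape(vals, 18, []).';
% Alternatively, to drop the affected rows instead of zeroing them:
% data = data(3:end-2, :);
```

Since sscanf fills column-major, the reshape uses 18 rows and is then transposed to get one file row per matrix row.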

Csvwrite with numbers larger than 7 digits

So, I have a script that's designed to parse through a rather large csv file to weed out a handful of data points. Three of the rows (out of 400,000+) within the file are listed below:
Vehicle_ID Frame_ID Tot_Frames Epoch_ms Local_X
2 29 1707 1163033200 8.695
2 30 1707 1163033300 7.957
2 31 1707 1163033400 7.335
What I'm trying to do here is take previously filtered data points like this and write them into another csv file using csvwrite. csvread will only take in the Epoch_ms values in double precision, storing the value as 1.1630e+09, which is sufficient for reading, as it does maintain the original value of the number for use in MATLAB operations.
However, during csvwrite, that precision is lost, and each data point is written as 1.1630e9.
How do I get csvwrite to handle the number with greater precision?
Use dlmwrite with a precision argument, such as %i. The default delimiter is a comma, just like a CSV file.
dlmwrite(filename, data, 'precision', '%i')
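One caveat: '%i' prints integers, and the sample rows also contain decimal columns like Local_X. A general format such as '%.10g' keeps full precision for both kinds of column. A sketch, assuming the filtered rows are in a matrix called data:

```matlab
% '%.10g' preserves up to 10 significant digits, enough for both
% the 10-digit Epoch_ms values and the decimal Local_X values
dlmwrite('filtered.csv', data, 'precision', '%.10g');
```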

reading in text file and organising by lines using MATLAB

I want to read in a text file (using matlab) with data that is not in a convenient matlab matrix form. This is an example:
{926377200,926463600}
[(48, 13), (75, 147), (67, 13)]
{926463600,926550000}
[(67, 48)]
{926550000,926636400}
[]
{926636400,926722800}
[]
{926722800,926809200}
...
All I would like is a vector of all the numbers, separated by commas. They always come in pairs, and the odd lines' numbers are of much greater magnitude each time, so the two kinds can be differentiated by logic later.
I cannot figure out how to use textscan or the other methods. What makes this a bit tricky is that the MATLAB methods require a defined format for the strings separated by delimiters, and here the even lines have an unrestricted number of integer pairs.
You can do this with textscan. You just need to specify the {} etc as whitespace.
For example, if you put your sample data into the file tmp.txt (in the current directory) and run the following:
fid = fopen('tmp.txt','r');
if fid > 0
numbers = textscan(fid,'%f','whitespace','{,}[]() ');
fclose(fid);
numbers = numbers{:}
end
you should see
numbers =
926377200
926463600
48
13
75
147
67
13
926463600
926550000
67
48
926550000
926636400
926636400
926722800
926722800
926809200
Alternatively, just iterate through the file character by character (using fscanf or fread). Accumulate digit characters into the current number (converting with str2double once the number ends); when you hit a non-numeric character, discard it and start accumulating a new number at the next digit you encounter.

100 csv files to be analyse with MATLAB

I have used MATLAB before, but only to analyse data from *.txt files. Can someone help me out with how to program MATLAB to read all 100 csv files? Each csv file has 14 columns and about 10,000 rows. These csv files only contain numbers, no text.
All I want is to read columns F, G and H, calculate the average value for columns F, G and H, and then average those values across the whole 100 csv files.
You can enumerate all the files in a directory by doing
files = dir('folder_with_your_csv_files\*.csv');
And then you traverse that with
num_files = length(files);
for i = 1:num_files
data = csvread(fullfile('folder_with_your_csv_files', files(i).name));
end
Note that dir returns only the file names, not their full paths, hence the fullfile call. csvread will also let you read in only a certain row and column range if you want. Once you have your data, averaging is the trivial part.
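Putting it together, a sketch that averages columns F, G and H (taken here as columns 6 to 8, assuming the spreadsheet lettering maps directly to column indices); the folder name is a placeholder:

```matlab
folder = 'folder_with_your_csv_files';   % placeholder path
files = dir(fullfile(folder, '*.csv'));
num_files = length(files);
per_file_means = zeros(num_files, 3);
for i = 1:num_files
    data = csvread(fullfile(folder, files(i).name));
    per_file_means(i, :) = mean(data(:, 6:8));  % columns F, G, H
end
overall_mean = mean(per_file_means)  % average across all files
```

Averaging the per-file means (rather than pooling all rows) matches the question's two-stage description; if the files have different row counts and a row-weighted average is wanted instead, accumulate sums and row counts in the loop.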
Have you looked at dlmread or csvread?

read and rewrite in matlab

How do I write a text file in the same format that it is read in MATLAB?
I looked, and my question is almost the same as the above question.
I want to read in a file which is 84641 x 175.
I want to make a new .txt file with 84641 x 40, deleting the rest of the columns.
I have to rewrite the dates n times. The date is in the first column in the format 6/26/2010 and the time in the 2nd column in the format '00:00:04'.
When I use the code posted in the above question, I keep getting the error:
??? Error using ==> reshape
Product of known dimensions, 181, not divisible into total number of elements, 14812175.
Error in ==> write at
data = reshape(data{1},N+6,[])';
When I comment out that line, I get errors in the fprintf statements for the date and the data instead.
Any ideas?
Thanks
As the author of the accepted answer in the question you link to, I'll try to explain what I think is going wrong.
The code in my answer is designed to read data from a file which has a date XX/XX/XXXX in the first column, a time XX:XX:XX in the second column, and N additional columns of data.
You list the number of elements in data as 14812175, which is evenly divisible by 175. This implies that your input data file has 2 columns for the date and time, then 169 additional columns of data. This value of 169 is what you have to use for N. When the date and time columns are read from the input file they are broken up into 3 columns each in data (for a total of 6 columns), which when added to the 169 additional columns of data gives you 175.
After reshaping, the size of data should be 84641-by-175. The first 6 columns contain the date and time values. If you want to write the date, the time, and the first 40 columns of the additional data to a new file, you would only have to change one line of the code in my answer. This line:
fprintf(fid,', %.1f',data(i,7:end)); %# Output all columns of data
Should be changed to this:
fprintf(fid,', %.1f',data(i,7:46)); %# Output first 40 columns of data
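For context, a sketch of the kind of write loop the linked answer builds, assuming data has the 6 date/time columns first (month, day, year, hour, minute, second) followed by the numeric columns; names and formats here are illustrative, not the exact code from the linked answer:

```matlab
fid = fopen('output.txt', 'w');
for i = 1:size(data, 1)
    % Date and time come from the first 6 columns of data
    fprintf(fid, '%d/%d/%d, %02d:%02d:%02d', data(i, 1:6));
    fprintf(fid, ', %.1f', data(i, 7:46));  % first 40 data columns
    fprintf(fid, '\n');
end
fclose(fid);
```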