Matlab Importdata Precision

I'm trying to use importdata on several data files containing values with a precision of up to 11 digits after the decimal point, but MATLAB seems to think I am only interested in the first 5 digits when using importdata. Is there an alternative method I could use to load my data, or a way to define the precision to which my data is loaded?

First try:
format long g
Also, can you paste some of the data you are trying to load?
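A minimal sketch of how to check this, assuming a purely numeric file (the name data.txt is made up for the example); importdata keeps the full double-precision values, and it is only the default short display that rounds to roughly 5 significant digits:
S = importdata('data.txt');   % hypothetical file; for purely numeric files this returns a double matrix
format long g                 % change only how values are displayed in the Command Window
disp(S(1))                    % the stored value still has all 11 decimals
fprintf('%.11f\n', S(1))      % or print it with an explicit format specifier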

Related

xlsread function with very small number matlab

I want to use the xlsread function in MATLAB to import very small numbers like 10E-13, but it always shows 0 in the vector 'num'. I want to read the exact number and export it.
So, does anyone know how to increase the accuracy or precision?
Thank you in advance!
You can't change the precision with which xlsread reads the data. However, the output array num might actually contain the correct values; MATLAB is simply displaying them as 0. Run format long g, then display it again.
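A minimal sketch of what that looks like (the file name data.xlsx is an assumption):
num = xlsread('data.xlsx');   % hypothetical file containing values around 1e-13
format long g                 % stop the display from rounding small values to 0
disp(num(1))                  % now shows e.g. 1e-13 instead of 0
fprintf('%g\n', num(1))       % printing with a format specifier also reveals the stored value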

Convert string to high precision number in matlab

I have a file with data values on the order of 10^(-6). When I try to read it in MATLAB, it only gives me an accuracy of 10^(-4). I used the following:
[y]=textread('report.txt','%f')
I tried to change %f to %0.6f, but it still does not work.
Then I tried to read the file as %s and use str2double, with the same result.
0.004586 comes out as just 0.0045.
Please help me.
Use format to change the precision.
The format function affects only how numbers display in the Command Window, not how MATLAB computes or saves them.
View the current format: get(0,'Format')
Set the current format to long for the present session using: format long
The same thing can be done programmatically with: set(0,'Format','long')
The long format shows 15 digits after the decimal point for double values, and 7 digits after the decimal point for single values.
Type help format for more details.
Update the display format like this:
>> format long
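Putting that together with the textread call from the question, a minimal sketch (report.txt is the file named in the question):
y = textread('report.txt', '%f');   % the values are stored as full-precision doubles
format long                         % change only how they are displayed
disp(y(1))                          % all stored digits are now shown
fprintf('%.6f\n', y(1))             % or print with an explicit format, e.g. 0.004586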

Save 4D matrix to a file with high precision (%1.40f) in Matlab

I need to write a 4D matrix M (16x101x101x6) to a file with high precision ('precision','%1.40f') in MATLAB.
I've found save('filename.mat', 'M'); for multidimensional matrices, but its precision cannot be set (only -double). On the other hand, I've found dlmwrite('filename.txt', M, 'delimiter', '\t', 'precision', '%1.40f'); which lets me set the precision but is limited to 2-D arrays.
Can somebody suggest a way to tackle my problem?
What is the point in storing 40 digits of the fractional part when a double-precision number in MATLAB keeps only about 16 significant digits?
Try this code:
t=pi
whos
fprintf('%1.40f\n',t)
The output is
  Name      Size            Bytes  Class     Attributes
  t         1x1                 8  double
3.1415926535897931000000000000000000000000
The command save('filename.mat', 'M'); will store numbers in their binary representation (8 bytes per double-precision number). This is unbeatable in terms of space saving compared with a plain-text representation.
As for the 4D shape, the approach j_kubik suggested seems simple enough.
I always thought that save stores exactly the numbers you already have, with the precision already used to store them in MATLAB, so you are not losing anything. The only problems might be disk space consumption (too-precise numbers?) and the closed format of .mat files (they cannot be read by outside programs). If I just wanted to store the data and read it with MATLAB later on, I would definitely go with save.
save can also write ASCII data, but it is (like dlmwrite) limited to 2D arrays, so using dlmwrite will be better in your case.
Another solution:
tmpM = [size(M), 0, reshape(M, 1, [])];   % prepend the size and a 0 separator to the data flattened into a row
dlmwrite('filename.txt', tmpM, 'delimiter', '\t', 'precision', '%1.40f');
reading will be a bit more difficult, but only a bit ;)
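For completeness, a minimal sketch of the reading side, assuming the layout written above (four size values, a zero separator, then the flattened data):
raw  = dlmread('filename.txt', '\t');   % reads the single row written by dlmwrite
dims = raw(1:4);                        % original size of M
M2   = reshape(raw(6:end), dims);       % skip the 0 separator at position 5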
Then you can just write your own function to write stuff to a file using fopen & fprintf (just as dlmwrite does) - there you can control every aspect of your file format (including precision).
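A minimal sketch of such a hand-rolled writer; the function name writeND and its simple header-then-values layout are just illustrative choices, not an existing API:
function writeND(filename, M, fmt)
% Write an N-D array as plain text: the first line holds size(M),
% the remaining lines hold the values in column-major order using format fmt.
fid = fopen(filename, 'wt');
fprintf(fid, '%d ', size(M));
fprintf(fid, '\n');
fprintf(fid, [fmt '\n'], M(:));
fclose(fid);
end
For example, writeND('M.txt', M, '%1.40f'); would reproduce the 40-digit precision requested above.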
Something I would have done if I really cared about precision, file size, and execution time (this is probably not the way for you) would be to write a mex function that takes a matrix parameter and stores it in a binary file by just copying the raw data buffer from MATLAB. It would also need some indication of the array dimensions, and would probably be the quickest option (I'm not sure whether save doesn't already do something similar).

How can I control the formatting when saving a matrix to a file?

I save a matrix to a file like this:
save(filepath, 'mtrx', '-ascii');
Is there a way to tell MATLAB to write 0 instead of 0.0000000e+000 values? It would be nice because it would be faster and easier to see which values differ from zero.
I suggest using DLMWRITE instead of SAVE since you're dealing with ASCII files. It will give you more control over the formatting. For example, you could create an output file delimited by spaces with a field width of 10 and 6 significant digits (see the MATLAB documentation on format specifiers for more details):
dlmwrite(filepath,mtrx,'delimiter',' ','precision','%10.6g');

How do you save data to a text file in a given format?

I want to save a matrix to a text file, so I can read it by another program. Right now I use:
save('output.txt', 'A','-ascii');
But this saves my file as
6.7206983e+000 2.5896414e-001
6.5710723e+000 4.9800797e-00
6.3466334e+000 6.9721116e-001
5.9975062e+000 1.3346614e+000
6.0224439e+000 1.8127490e+000
6.3466334e+000 2.0517928e+000
6.3965087e+000 1.9721116e+000
But I would like to have them saved without the "e-notation" and not with all the digits. Is there an easy way to do this?
Edit: Thank you! That works just fine. Sorry, but I think I messed up your edit by using the rollback.
I would use the fprintf function, which will allow you to define for yourself what format to output the data in. For example:
fid = fopen('output.txt', 'wt');
fprintf(fid,'%0.6f %0.6f\n', A.');
fclose(fid);
This will output the matrix A with 6 digits of precision after the decimal point. Note that you also have to use the functions fopen and fclose.
Ditto gnovice's solution if you need performance and custom formatting.
dlmwrite gives you some control over formatting (global, not on a per-field basis), but it suffers from lower performance. I ran a test a few years ago and dlmwrite was something like 5-10x slower than the fopen/fprintf/fclose solution. (Edit: I'm referring to large matrices, like a 15x10000 matrix.)