MATLAB fwrite saturation

I have an image (matrix) with 16-bit values, i.e. between 0 and 65535, and I want to write it to a binary file, so I am using fwrite as described in the documentation. I have tried different precisions to write the data on 2 bytes ('integer*2', 'uint16', etc.), but the data seems to get saturated at 15 bits, i.e. the maximum value written is 0x7FFF. If I use more bytes, say 4, the data arrives complete, with values greater than 0x7FFF and less than 0xFFFF. I read in the documentation that fwrite saturates values so there will be no Inf or NaN. Does that mean that with x bytes I can only write (x*8 - 1) bits?
Is there any other way to write the image to a bin file with the correct values on 2 bytes ?

Can you run this code and verify it works on your system?
%generate and show data
IM = uint16(((2^16)-1) .* rand(512));
imagesc(IM);axis image;colorbar
%write data
fid=fopen('image.dat','w');
fwrite(fid,IM(:),'uint16');
fclose(fid);
%read data
fid=fopen('image.dat','r');
IM2=fread(fid,inf,'*uint16');
fclose(fid);
IM2=reshape(IM2,512,512);
%check if they are equal
all(IM(:)==IM2(:))
>> 1
If this works, can you check where it differs from your code? One likely culprit: 'integer*2' is a signed 16-bit precision, and fwrite saturates out-of-range values, so anything above 32767 (0x7FFF) gets clamped to 32767. 'uint16' preserves the full 0-65535 range.
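For comparison, here is a rough NumPy analogue of the same round trip (assuming NumPy is available; note that MATLAB saturates on out-of-range conversion, which np.clip mimics below, whereas NumPy casts would wrap):

```python
import numpy as np

# Full-range 16-bit test image, analogous to the MATLAB snippet above
rng = np.random.default_rng(0)
im = rng.integers(0, 2**16, size=(512, 512), dtype=np.uint16)

# Write and read back as raw uint16 -- nothing is lost
im.tofile("image.dat")
im2 = np.fromfile("image.dat", dtype=np.uint16).reshape(512, 512)
assert np.array_equal(im, im2)

# With a SIGNED 16-bit precision, MATLAB's fwrite saturates
# out-of-range values: everything above 32767 (0x7FFF) is clamped.
# np.clip mimics that behavior here:
saturated = np.clip(im, 0, np.iinfo(np.int16).max).astype(np.int16)
print(int(saturated.max()))  # at most 32767
```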

Related

How do I encode NaN values in xarray / zarr with integer dtype?

I have a large xarray DataArray containing NaNs and want to save it with zarr. I want to minimize the file size and am OK with losing a few bits of precision - 16 bits ought to be OK.
I tried using numcodecs.FixedScaleOffset(astype='u2') filter but this stores all NaNs as zero. Since the data also contains zeros as valid values, this is not very helpful.
NumPy's u2 (a.k.a. uint16) does not support NaN values (see this SO answer); Zarr is merely reflecting NumPy's behavior.
It doesn't work with numcodecs.Quantize, but the xarray encoding parameters can specify _FillValue:
dataset.to_zarr(store, encoding={'<array-name>': {'dtype': 'uint16', '_FillValue': 65535}})
See https://xarray.pydata.org/en/stable/io.html#writing-encoded-data
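The idea behind the _FillValue encoding can be sketched with plain NumPy (illustrative scale and values, not the zarr API itself): reserve one uint16 code for NaN and scale the remaining range:

```python
import numpy as np

# Hypothetical data: non-negative floats with NaNs; zeros are valid
data = np.array([0.0, 1.5, np.nan, 3.25, np.nan, 100.0])

# Encode: scale into the uint16 range, reserve 65535 as the fill value
scale = 100.0
fill = np.uint16(65535)
encoded = np.where(np.isnan(data), fill,
                   np.round(data * scale)).astype(np.uint16)

# Decode: restore NaNs from the fill value, undo the scaling
decoded = np.where(encoded == fill, np.nan, encoded / scale)

print(encoded)  # values: 0, 150, 65535, 325, 65535, 10000
print(decoded)  # values: 0.0, 1.5, nan, 3.25, nan, 100.0
```

Because zero now encodes only the value 0.0 and NaN has its own sentinel, valid zeros and missing data no longer collide.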

imwrite a double matrix in MATLAB

How can I imwrite this value [12 13.5; 15 107.75] without changing in imread?
I want to save my information. But if I convert this value to uint8, then when I imread the matrix I get [12 13; 15 108].
(Let [12 13.5;15 107.75] be A.)
From imwrite documentation:
imwrite(A,filename) writes image data A to the file specified by filename.
If A is of data type uint16 and the output file format supports 16-bit data (JPEG, PNG, and TIFF), then imwrite outputs 16-bit values.
So you can multiply A by 100 and then convert it to uint16. You will get [1200 1350; 1500 10775]. Write it to a format that stores 16-bit data losslessly, preferably PNG or TIFF (standard JPEG compression is lossy), e.g. B = uint16(A*100); imwrite(B,'image.png').
Now imread('image.png') will return 16-bit integers. Convert them to double and then divide by 100 to get the original data, e.g. out = double(imread('image.png'))/100.
Note: the highest value representable in 16 bits is 65535, so your input after scaling up must stay below that or you will lose information. If your doubles are below 256 with at most two places after the decimal point, the highest value after scaling up is 25599, which fits comfortably in 16 bits. Just take care if your input values have a different range or precision.
Still, I think you should write the data in a file using fprintf as suggested by T. Huang.
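In NumPy terms (the image read/write step omitted), the scale-and-recover round trip looks like this:

```python
import numpy as np

# The matrix from the question
A = np.array([[12.0, 13.5], [15.0, 107.75]])

# Scale up by 100 and store as uint16 (values must stay below 65535)
scaled = np.round(A * 100).astype(np.uint16)
print(scaled.tolist())  # [[1200, 1350], [1500, 10775]]

# After reading the 16-bit image back, divide to recover the data
recovered = scaled.astype(np.float64) / 100
assert np.array_equal(recovered, A)
```

The recovery is exact here because every scaled value is an integer below 65535 and all inputs have at most two decimal places.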
This cannot be done by imwrite. You may try the function fprintf.
http://cn.mathworks.com/help/matlab/ref/fprintf.html
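A quick sketch of the text-file route (Python here, with an assumed filename; MATLAB's fprintf/fscanf do the equivalent): exact values survive because nothing is quantized to 8 or 16 bits.

```python
# Exact round trip through a plain text file (assumed filename
# 'matrix.txt'); repr() keeps enough digits to reproduce each double
A = [[12.0, 13.5], [15.0, 107.75]]

with open("matrix.txt", "w") as f:
    for row in A:
        f.write(" ".join(repr(v) for v in row) + "\n")

with open("matrix.txt") as f:
    B = [[float(v) for v in line.split()] for line in f]

assert B == A
```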

Converting a .binary file into an image

I have a .binary file that contains Depth data from a kinect sensor.
I am trying to go through the .binary file and get back the actual image in MATLAB. So this is the MATLAB program that I came up with:
fid = fopen('E:\KinectData\March7Pics\Depth\Depth_Raw_0.binary');
col = 512; %// Change if the dimensions are not proper
row = 424;
frames = {}; %// Empty cell array - Put frames in here
numFrames = 0; %// Let's record the number of frames too
while (true) %// Until we reach the end of the file:
B = fread(fid, [col row],'ushort=>ushort'); %// Read in one frame at a time
if (isempty(B)) %// If there are no more frames, get out
break;
end
frames{end+1} = B.'; %// Transpose to make row major and place in cell array
numFrames = numFrames + 1; %// Count frame
imwrite(frames{numFrames},sprintf('Depth_%03d.png',numFrames));
end
%// Close the file
fclose(fid);
frm = frames{1};
imagesc(frm)
colormap(gray)
The above program works fine, but it does not give me any image above 99. That is, after processing the .binary file the last image I obtain is Depth_099.png, even though the full video has more frames than that.
Does anyone know why?
Thanks
First, check what the format specifier in your file name string actually does. Specifically, here:
imwrite(frames{numFrames},sprintf('Depth_%03d.png',numFrames));
%03d means the number is printed with a minimum width of 3 digits, zero-padded on the left. It does not truncate larger numbers: frame 100 becomes Depth_100.png and frame 1000 becomes Depth_1000.png, the file name simply grows. So the format string cannot be what stops you at 99. If you want file names that stay the same length (and sort correctly) for longer videos, just increase the field width:
imwrite(frames{numFrames},sprintf('Depth_%05d.png',numFrames));
This guarantees 5 digits with left zero-padding, i.e. up to Depth_99999.png before the names grow again. Adjust the width to however many frames your video contains.
Given that %03d would happily have produced Depth_100.png, the likely explanation is simpler: the file only contains 99 frames of data :) Check numFrames after the loop, and compare col*row*2*numFrames (ushort is 2 bytes) against the file size on disk to confirm.
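The padding claim is easy to verify; sprintf-style formatting behaves the same way in most languages, e.g. in Python:

```python
# %0Nd sets a MINIMUM field width with zero-padding; it never
# truncates, so frame 100 would still have been named correctly
names3 = ["Depth_%03d.png" % n for n in (7, 99, 100, 1234)]
names5 = ["Depth_%05d.png" % n for n in (7, 99, 100, 1234)]
print(names3)  # ['Depth_007.png', 'Depth_099.png', 'Depth_100.png', 'Depth_1234.png']
print(names5)  # ['Depth_00007.png', 'Depth_00099.png', 'Depth_00100.png', 'Depth_01234.png']
```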

Reading time series of NII images getting slower

I am developing a program to read a time series of NIfTI format images into a 4D matrix in MATLAB. There are about 60 images in the stack and the program runs without problems until the 28th image. (All the images are approximately the same size, with the same details.) But after that the reading gets slower and slower.
In fact, the delay is accumulating.
I checked the program again and there are no open files. Everything looks fine.
Can someone give me advice?
Size of current array (double)
Unless you are running on a machine with more than ~20 GB of RAM, your matrix simply becomes too large to handle.
To check the size of the first three dimensions of your matrix:
A = rand(512,512,160);
whos('A')
Output:
Name Size Bytes Class Attributes
A 512x512x160 335544320 double
Now multiply by 60 to obtain the size of your 4D matrix and divide by 1024^3 to obtain GB's:
335544320*60/1024^3 = 18.7500 GB
So yes, your matrix is most likely too large to handle efficiently/effectively.
A matrix exceeding your RAM forces MATLAB to use the swap file (HDD/SSD), which is orders of magnitude slower than random access memory (even if you have an SSD).
Switch to different data types
If you do not require double precision, i.e. about 16 significant digits, you can always switch to fewer digits, i.e. single-precision floating point numbers, and halve the size. You can reduce the size even further if the numbers are, for example, unsigned integers in the range 0-255. See the code below:
% Create doubles
A_double = rand(512,512,160);
S1=whos('A_double');
% Create floats
A_float = single(A_double);
S2=whos('A_float');
% Create unsigned int range 0-255
A_uint=uint8(randi(256,[512,512,160])-1);
S3=whos('A_uint');
fprintf('Size A_double is %4.2f GB\n',(S1.bytes*60)/1024^3)
fprintf('Size A_float is %4.2f GB\n',(S2.bytes*60)/1024^3)
fprintf('Size A_uint is %4.2f GB\n',(S3.bytes*60)/1024^3)
Output:
Size A_double is 18.75 GB
Size A_float is 9.38 GB
Size A_uint is 2.34 GB
Which may just fit inside your RAM. Make sure you pre-allocate the full array first, in the target class, e.g. zeros(512,512,160,60,'uint8'), rather than growing it as you read.
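The size arithmetic above can be checked without allocating anything, e.g. in Python (NumPy dtype names stand in for MATLAB's double/single/uint8):

```python
import numpy as np

shape = (512, 512, 160, 60)  # the full 4D stack
n = int(np.prod(shape, dtype=np.int64))

# Bytes per element times element count, converted to GB
for dtype in ("float64", "float32", "uint8"):
    gb = n * np.dtype(dtype).itemsize / 1024**3
    print(f"{dtype}: {gb:.2f} GB")
# float64: 18.75 GB
# float32: 9.38 GB
# uint8: 2.34 GB
```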

Single-Column Matrix Indexing

So I've got a .tcl file with data representing a large three-dimensional matrix, and all values associated with the matrix appended in a single column, like this:
128 128 512
3.2867
4.0731
5.2104
4.114
2.6472
1.0059
0.68474
...
If I load the file into the command window and whos the variable, I have this:
whos K
Name Size Bytes Class Attributes
K 8388810x3 201331440 double
The two additional columns seem to be filled with NaNs, which don't appear in the original file. Is this a standard way for MATLAB to store a three-dimensional matrix? I'm more familiar with the .mat way of storing a matrix, and I'm curious if there's a quick command I can run to revert it to a more friendly format.
The file's first line (128 128 512) gives the table 3 columns; the remaining rows have only one value, so the other two columns get padded with NaN. I don't know why there are extra rows (128*128*512 = 8388608, yet the table has 8388810), but your 3D matrix can probably be constructed like this:
N = 128*128*512;
mat = reshape(tab(2:N+1,1),[128 128 512]);
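The same reconstruction in NumPy, on a toy 2x2x3 example (note that MATLAB's reshape is column-major, which order='F' reproduces):

```python
import numpy as np

# Toy version of the file: dimensions 2x2x3, then one value per
# line in MATLAB's column-major order
dims = (2, 2, 3)
values = np.arange(np.prod(dims), dtype=float)  # stand-in for the column

# order='F' makes reshape fill column-major, matching MATLAB
mat = values.reshape(dims, order="F")

print(mat.shape)      # (2, 2, 3)
print(mat[1, 0, 0])   # 1.0 -- the second value lands in row 2, col 1
```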
What's on the last hundred lines of the table that gets loaded?