Create a file of a certain size filled with no data on iOS - iPhone

I'm developing an iPhone app. I need to create a file of a certain size on the filesystem, filled with no data at first, then seek to an offset and write data as I receive it from somewhere else.
How can I do it?

The lseek BSD function is explicitly capable of that.
man lseek:
The lseek() function allows the file offset to be set beyond the end of the
existing end-of-file of the file. If data is later written at this point,
subsequent reads of the data in the gap return bytes of zeros (until data is
actually written into the gap).
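For illustration, here is a minimal sketch of the pattern in Python, whose file methods wrap the same POSIX calls; on iOS you would use open/lseek/write (or ftruncate) from C, or NSFileHandle's seek/write methods. The file name and sizes are arbitrary examples:

SIZE = 10 * 1024 * 1024            # desired file size, e.g. 10 MiB

# Create the file and give it its full logical size without writing any data.
with open('payload.bin', 'wb') as f:
    f.truncate(SIZE)               # like ftruncate(2); unwritten ranges read as zeros

# Later, as data arrives from somewhere else, seek to an offset and write it.
with open('payload.bin', 'r+b') as f:
    f.seek(1024 * 1024)            # like lseek(2); may point into the "hole"
    f.write(b'data from somewhere else')

# Reads from any still-unwritten range return zero bytes, as the man page says.
with open('payload.bin', 'rb') as f:
    print(f.read(16))              # b'\x00\x00...\x00'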

NSMutableData or fseek is probably what you want

Related

In the Web Audio API, how to obtain an array (e.g. a Float32Array) from a stream (e.g. a microphone stream) for several seconds

I would like to fill an array from a stream for around ten seconds. (I wish to do some processing on the data.) So far I can:
(a) obtain the microphone stream using MediaRecorder;
(b) use an AnalyserNode and analyser.getFloatTimeDomainData(dataArray) to obtain an array, but its size is limited to only a little over half a second of data. I can also successfully output the data, after processing, back onto a stream and to outDestination;
(c) I have also experimented with obtaining a 'chunks' array from MediaRecorder directly, but the problem then is that I can't find any MIME type that would give me a simple array of values - i.e. an uncompressed, sample-by-sample, single-channel set of values - in other words a longer version of 'dataArray' from (b).
I am wondering if I am missing a simple way around this problem?
Solutions I have seen tend to use step (b) and do regular polls, then reassemble a longer array - however, the timing seems a bit tricky.
I've also seen suggestions to use audio worklets - I might have to do that, but would prefer a simpler solution!
Or again, if someone knows how to drive MediaRecorder to output the chunks array in a simple single-channel Float32 format, that would do the trick.
Or maybe I'm missing something simpler?
I have code showing the steps that have been successful and will upload it if anyone requests.

How to decode large bytes without blocking the UI (Excel file)

I am using the excel package, version 2.0.0-null-safety-3, to read an Excel file. For small files it works well, but when reading a large file the interface freezes until the file is read:
Excel.decodeBytes(_bytes);
The decodeBytes method is synchronous; an asynchronous version is not supported. Is there a way to run the decoding without freezing the interface, so that I can show a download bar or waiting dialog to the user?
Thanks in advance.
Use compute for large amounts of data, so the decoding runs in a separate isolate and the UI stays responsive:
await compute(function, param)
Flutter Compute

Can't open MATLAB file

I have a ".mat" file supposedly containing a [30720000x4 double] matrix (values from accelerometers). When I try to open this file with "Import data" in Matlab I get the following error:
Error using load
Can't read file F:\vibration_exp_2\GR_UB50n\bearing1\GR_UB50n_1_2.mat.
Error using load
Unknown text on line number 1 of ASCII file
F:\vibration_exp_2\GR_UB50n\bearing1\GR_UB50n_1_2.mat
"MATLAB".
Error in uiimport/runImportdata (line 456)
datastruct = load('-ascii', fileAbsolutePath);
Error in uiimport/gatherFilePreviewData (line 424)
[datastruct, textDelimiter, headerLines]= runImportdata(fileAbsolutePath,
type);
Error in uiimport (line 240)
[ctorPreviewText, ctorHeaderLines, ctorDelim] = ...
The file size is 921 MB, which is the same as my other files that do open. I also tried opening the file using Python, but without success. Any suggestions? I use MATLAB R2013b.
More info:
How the file was created:
%% acquisition of vibration data
% input:
% sample rate in Hz (max. 51200 Hz; the maximum should be used, as
% bearing faults are high-frequency)
% time in seconds, stating the duration of the measurement
% (e.g. 600 seconds = 10 minutes)
% filename for the file to be saved
%
% examples:
% data = DAQ(51200, 600, 'NF1_1.mat');
% data = DAQ(51200, 600, 'NF1_2.mat');
function data = DAQ(samplerate,time,filename)
s = daq.createSession('ni'); % Creates the DAQ session
%%% Add the channels as accelerometer channels (meaning IEPE is turned on)
s.addAnalogInputChannel('cDAQ1Mod1','ai0','Accelerometer');
s.addAnalogInputChannel('cDAQ1Mod1','ai1','Accelerometer');
s.addAnalogInputChannel('cDAQ1Mod1','ai2','Accelerometer');
s.addAnalogInputChannel('cDAQ1Mod1','ai3','Accelerometer');
%s.addAnalogInputChannel('cDAQ1Mod2','ai0','Accelerometer');
s.Rate = samplerate;
s.NumberOfScans = samplerate*time;
%%% Defining the Sensitivities in V/g
s.Channels(1).Sensitivity = 0.09478; %31965, top outer
s.Channels(2).Sensitivity = 0.09531; %31966, back outer
s.Channels(3).Sensitivity = 0.09275; %31964, top inner
s.Channels(4).Sensitivity = 0.09363; %31963, back inner
data = s.startForeground(); %Acquiring the data
save(filename, 'data');
More info:
When I open the file using a simple text editor I can see a lot of characters that do not make sense, but also the first line:
MATLAB 5.0 MAT-FILE, Platform: PCWIN64, Created on: Thu Apr 30
16:29:07 2015
More info:
The file itself: https://www.dropbox.com/s/r7mavil79j47xa2/GR_UB50n_1_2.mat?dl=0
It is 921MB.
EDIT:
How can I recover my data?
I've tried this, but got memory errors.
I've also tried this, but it did not work.
I fear I can't add much good news to what you already know, but one point hasn't been mentioned yet.
The reason the .mat file can't be loaded is that the data is corrupted. What makes it 'unrecoverable' is the way it is stored internally. The exact format is specified in the MAT-File Format documentation. So I decided to manually construct a simple reader specifically for your .mat file.
It makes sense that splitmat.m can't recover anything, as it basically splits the data into chunks, one stored variable per chunk; in this case, however, there is only one variable stored and thus only one chunk, which happens to be the corrupted one.
Here, the data is stored as a miCOMPRESSED element, which is a normal MATLAB array compressed using gzip. (Which, as a side note, doesn't seem like a good fit for 'random' vibration data.) This might explain previous comments about the file being smaller than the full data, as the file size matches exactly the internally stored value.
I extracted the compressed archive and tried to uncompress it in a variety of ways. Basically, it is a '.gz' without the header, which can be prepended manually. Unfortunately, there seems to be a corrupted block near the start of the dataset. I am by no means an expert on gzip, but as far as I know the dictionary (or decryption key) is built dynamically, which makes all data useless from the point where the block is corrupted. If you are really eager, there seems to be a way to recover data even beyond the corrupted section, but that method is massively time-consuming, and the only way to validate data from those sections is manual inspection, which in your case might prove very difficult.
Below is the code that I used to extract the .gz file, so if you want to give it a try, this might get you started. If you manage to decompress the data, you can read it as described in the MAT-File Format documentation (p. 13 ff.).
corrupted_file_id = fopen('corrupt.mat','r');
%% some header data
% can be skipped by replacing this block with
% fread(corrupted_file_id,132);
%header of .mat file
header_text = char(fread(corrupted_file_id,116,'char')');
subsystem_data_offset = fread(corrupted_file_id,8,'uint8');
version = fread(corrupted_file_id,1,'int16');
endian_indicator = char(fread(corrupted_file_id,2,'int8')');
data_type = fread(corrupted_file_id,4,'uint8');
% data_type is 15 (miCOMPRESSED), so it is a compressed MATLAB array
%% save the content
data_size = fread(corrupted_file_id,1,'uint32');
gz_file_id = fopen('compressed_array.gz','w');
% first write a valid gzip header
fwrite(gz_file_id,hex2dec('1f8b080000000000'),'uint64',0,'b');
% then copy the payload sequentially, in 1 kB chunks
for idx = 1:floor(data_size/1e3)
    fwrite(gz_file_id,fread(corrupted_file_id,1e3,'uint8'));
end
% and copy the remaining bytes in one go
fwrite(gz_file_id,fread(corrupted_file_id,mod(data_size,1e3),'uint8'));
fclose(gz_file_id);
fclose(corrupted_file_id);
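If the gunzip route fails, you can also try inflating the payload directly: in MAT v5 files, a miCOMPRESSED payload is a zlib stream, and the gzip-header trick above targets the same underlying deflate data. Below is a rough Python sketch of that route, feeding the stream in small chunks so everything decoded before the corrupted block is kept; the file names are placeholders, and a little-endian file (endian indicator 'IM') is assumed.

import zlib

# Offsets mirror the MATLAB reader above: 128-byte MAT header plus the
# 4-byte element type (15 = miCOMPRESSED), then the 4-byte payload size.
with open('corrupt.mat', 'rb') as f:
    f.seek(132)
    size = int.from_bytes(f.read(4), 'little')
    payload = f.read(size)

d = zlib.decompressobj()
recovered = bytearray()
try:
    for i in range(0, len(payload), 4096):              # small chunks, so the
        recovered += d.decompress(payload[i:i + 4096])  # partial output survives
except zlib.error as err:
    print('stream broke after %d decompressed bytes: %s' % (len(recovered), err))

# Whatever inflated cleanly is the start of the inner, uncompressed data
# element, laid out as described in the MAT-File Format document.
with open('recovered.bin', 'wb') as out:
    out.write(recovered)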
To answer the question literally: my suggestion would be to first make sure the file is okay. This tool on File Exchange apparently knows how to diagnose corrupted .MAT files from version V5 (R8) onward:
http://www.mathworks.com/matlabcentral/fileexchange/6893-matcat-mat-file-corruption-analysis-tool
The file's size seems to be part of the problem (indices going out of range). Octave, which should be able to read .mat files, gives the error
memory exhausted or requested size too large for range of Octave's index type
To find out what is wrong you may need to write a test program outside MATLAB, where you have more control over memory management. Examples are here, including instructions on how to build them on your own platform. These stand-alone programs may not have the same memory issues. The program matdgns.c is specifically made to check .mat files for errors.
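Before writing anything in C, a few lines of Python already make a useful first check; this sketch reads the same header fields as the MATLAB snippet in the other answer (the file name is a placeholder, and a little-endian file is assumed):

import os, struct

path = 'GR_UB50n_1_2.mat'                          # file under test
with open(path, 'rb') as f:
    text = f.read(116).decode('ascii', 'replace')  # descriptive text header
    f.read(8)                                      # subsystem data offset
    version, = struct.unpack('<H', f.read(2))
    endian = f.read(2)                             # b'IM' little, b'MI' big endian
    dtype, dsize = struct.unpack('<II', f.read(8)) # tag of the first data element

print('header :', text.rstrip())
print('version: 0x%04x  endian: %s' % (version, endian.decode()))
print('first element: type %d (%s), %d bytes'
      % (dtype, 'miCOMPRESSED' if dtype == 15 else 'other', dsize))
# With a single stored variable, the 136-byte prefix plus the element size
# should match the file size; a mismatch points at truncation or corruption.
print('file size: %d  expected: %d' % (os.path.getsize(path), 136 + dsize))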

Perl Image::Magick get in-memory contents

I'm using Image::Magick to modify my images. I am then using an HTTP::Request to send the image content to an API.
HTTP::Request has a content method which allows you to set the content for the request, but obviously this requires that you have the content in memory.
I know that I can read the content of the image into a variable by opening a file and reading it. However, since Image::Magick already has the content of the image in memory, is there any way that I can get it via my Image::Magick object? Thanks!
Image::Magick keeps the image in a custom format in memory to make it simpler and more efficient to manipulate. It has to compress the data to JPEG, PNG, or whatever format you request before it is written to a file, so you cannot just access the image in memory as it stands.
However, the module's ImageToBlob method will provide an in-memory copy of the data it would have written to disk, to save you writing it out and reading it back again.
Note that it returns a list of blobs to allow for an object that contains more than one frame, so if you have only a single frame you must write
my @blobs = $image->ImageToBlob;
$request->content($blobs[0]);
or
my ($blob) = $image->ImageToBlob;
$request->content($blob);

C#: Take Out Image Portion of JPEG to Backup Metadata?

This will be a little backwards from the typical approach.
I've used ExifTool for metadata manipulation before, but I really want to keep the best metadata backup I can before I make anything permanent.
What I want to do is remove the compressed image portion of a JPEG file and leave everything else intact. That means backing up EXIF, MakerNotes, IPTC, XMP, etc., whether they appear at the beginning or end of the file.
What I've tried so far is to strip all metadata from a copy of the original JPEG and use it as a basis for which bytes to take out of the original. After looking at the raw data, it doesn't seem that the stripped copy is contiguous within the original. There may be some header information remaining in the stripped version; I don't really know. Not a good way to do it, I suppose.
Are there any markers that will tell me definitively where the compressed JPEG image data starts and ends? I understand that JPEG files have 0xFFD8 and 0xFFD9 to mark the start and end of the image, but I have come to find out that metadata actually sits between those markers.
I'm using C#.
Thank you.
To do this properly you need to fully parse the JPEG/JFIF format and discard anything you don't want. Metadata is all kept in APP segments, or in trailers after the JPEG EOI, so presumably you will toss everything else. Full parsing of JPEG/JFIF is not trivial; for that I refer you to the JPEG/JFIF specification.
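For a sense of what that parsing looks like, here is a rough sketch (in Python rather than C#, but the logic ports directly; the file name and the keep-list are illustrative). Every marker segment before SOS carries a 16-bit length, the metadata you want lives in the APPn/COM segments, and the entropy-coded image data runs from the end of the SOS segment to EOI:

import struct

METADATA_MARKERS = set(range(0xE0, 0xF0)) | {0xFE}   # APP0-APP15 and COM

def walk_segments(path):
    """Yield (marker, payload) for each marker segment up to and including SOS."""
    with open(path, 'rb') as f:
        if f.read(2) != b'\xff\xd8':                 # SOI
            raise ValueError('not a JPEG')
        while True:
            if f.read(1) != b'\xff':
                raise ValueError('expected a marker')
            marker = f.read(1)[0]
            while marker == 0xFF:                    # optional fill bytes
                marker = f.read(1)[0]
            length, = struct.unpack('>H', f.read(2))
            yield marker, f.read(length - 2)
            if marker == 0xDA:                       # SOS: entropy-coded image
                return                               # data follows, up to EOI

for marker, payload in walk_segments('photo.jpg'):   # placeholder file name
    kind = 'metadata' if marker in METADATA_MARKERS else 'image structure'
    print('0xFF%02X  %6d bytes  %s' % (marker, len(payload), kind))

Stripping the image portion then amounts to copying everything up to the end of the SOS header, skipping the entropy-coded bytes, and resuming at EOI so that any trailing metadata survives.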
You can use the JpegSegmentReader class from my MetadataExtractor library to retrieve specific segments from a JPEG image.