how to use "FindBarcodesInUIImage"? - iphone

I am developing a barcode scanner application for iPhone.
Library: RedLaser
I want to scan a barcode from an existing image, not from the camera.
I couldn't find any documentation on calling the FindBarcodesInUIImage method manually.
Can I get any sample code?

Does this snippet from the documentation help?
This method analyses a given image and returns information on any barcodes discovered in the image. It is intended to be used in cases where the user already has a picture of a barcode (in their photos library, for example) that they want to decode. This method performs a thorough check for all barcode symbologies we support, and is not intended for real-time use.
When scanning barcodes using this method, you cannot (and need not) specify a scan orientation or active scan region; the entire image is scanned in all orientations. Nor can you restrict the scan to particular symbol types. If such a feature is absolutely necessary, you can implement it post-scan by filtering the result set.
FindBarcodesInUIImage operates synchronously, but can be placed in a thread. Depending on image size and processor speed, it can take several seconds to process an image.

void ScanImageForBarcodes(UIImage *inputImage)
{
    NSSet *resultSet = FindBarcodesInUIImage(inputImage);

    // Print the results
    NSLog(@"%@", resultSet);
}
If the SDK did not find any barcodes in the image, the log message will be (null). Otherwise, it will be something like:
{(
(0x19e0660) Code 39: 73250110 -- (1 finds)
)}
This log message indicates a found set containing one item, a Code 39 barcode with the value "73250110".
Remember that the SDK is not guaranteed to find barcodes in an image. Even if an image contains a barcode, the SDK might not be able to read it, and you will receive no results.
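Since FindBarcodesInUIImage operates synchronously and can take several seconds, you will usually want to keep it off the main thread. A minimal sketch using GCD (didFindBarcodes: is a hypothetical callback you would implement yourself):

- (void)scanImageInBackground:(UIImage *)inputImage
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // The scan can take several seconds, so run it on a background queue.
        NSSet *resultSet = FindBarcodesInUIImage(inputImage);
        dispatch_async(dispatch_get_main_queue(), ^{
            // Hop back to the main thread before touching UI.
            [self didFindBarcodes:resultSet];
        });
    });
}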

Related

In the Web Audio API, how to obtain an array (e.g. a Float32Array) from a stream (e.g. a microphone stream) for several seconds

I would like to fill an array from a stream for around ten seconds. (I wish to do some processing on the data.) So far I can:
(a) obtain the microphone stream using MediaRecorder
(b) use an analyser and analyser.getFloatTimeDomainData(dataArray) to obtain an array, but it is size-limited to only a little over half a second of data. I can also successfully output the data after processing back onto a stream and to outDestination.
(c) I have also experimented with obtaining a 'chunks' array from MediaRecorder directly, but the problem then is that I can't find any MIME type that would give me a simple array of values - i.e. an uncompressed, sample-by-sample, single-channel set of values - i.e. a longer version of 'dataArray' in (b).
I am wondering if I am missing a simple way around this problem?
Solutions I have seen tend to use step (b) and do regular polls, then reassemble a longer array - however, it seems the timing is a bit tricky.
I've also seen suggestions to use audio worklets - I might have to do this but would prefer a simpler solution!
Or again, if someone knows how to drive MediaRecorder to output the chunks array as a simple single-channel Float32 array, that would do the trick.
Or maybe I'm missing something simpler?
I have code showing those steps that have been successful and will upload it if anyone requests.

How to use kAudioUnitSubType_LowShelfFilter of kAudioUnitType_Effect, which controls bass, in Core Audio?

I'm back with one more question related to bass. I had already posted this question, How Can we control bass of music in iPhone, but it did not get as much attention as it should have. Since then I have done some more searching and have read up on Core Audio. I found some sample code which I want to share with you; here is the link to download it: iPhoneMixerEqGraphTest. Have a look at it. What I saw in this code is that the developer used the preset equalizer provided by the iPod unit on Apple devices. Let's look at a code snippet too:
// iPodEQ unit
CAComponentDescription eq_desc(kAudioUnitType_Effect, kAudioUnitSubType_AUiPodEQ, kAudioUnitManufacturer_Apple);
What kAudioUnitSubType_AUiPodEQ does is fetch the preset values from the iPod's equalizer and return them to us in an array, which we can use in a UIPickerView/UITableView to select a preset such as Bass, Rock, Dance, etc. That is of no help to me, as it only returns the names of equalizer presets like Bass, Rock, Dance, etc., whereas I want to implement bass only, controlled by a UISlider.
To implement bass on a slider, I need values so that I can set a minimum and a maximum, so that the bass changes as the slider moves.
After getting through all this, I started reading the Core Audio Audio Unit framework's classes and got this;
after that I started searching for bass control and got this.
So now I need to implement kAudioUnitSubType_LowShelfFilter. But I don't know how to use this enum in my code so that I can control the bass as documented. Even Apple has not written anything about how we can use it. The kAudioUnitSubType_AUiPodEQ subtype returned us an array, but the kAudioUnitSubType_LowShelfFilter subtype does not return any array. While using kAudioUnitSubType_AUiPodEQ we could pick the equalizer types from an array, but how can we use kAudioUnitSubType_LowShelfFilter? Can anybody help me with this in any manner? It would be highly appreciated.
Thanks.
Update
Although it's declared in the iOS headers, the Low Shelf AU is not actually available on iOS.
The parameters of the Low Shelf are different from the iPod EQ.
Parameters are declared and documented in `AudioUnit/AudioUnitParameters.h`:
// Parameters for the AULowShelfFilter unit
enum {
    // Global, Hz, 10->200, 80
    kAULowShelfParam_CutoffFrequency = 0,

    // Global, dB, -40->40, 0
    kAULowShelfParam_Gain = 1
};
So after your low shelf AU is created, configure its parameters using AudioUnitSetParameter.
Some initial parameter values you can try would be 120 Hz (kAULowShelfParam_CutoffFrequency) and +6 dB (kAULowShelfParam_Gain) -- assuming your system reproduces bass well, your low frequency content should be twice as loud.
Can you tell me how I can use kAULowShelfParam_CutoffFrequency to change the frequency?
If everything is configured right, this should be all that is needed:
assert(lowShelfAU);

const float frequencyInHz = 120.0f;
OSStatus result = AudioUnitSetParameter(lowShelfAU,
                                        kAULowShelfParam_CutoffFrequency,
                                        kAudioUnitScope_Global,
                                        0,
                                        frequencyInHz,
                                        0);
if (noErr != result) {
    assert(0 && "error!");
    return ...;
}
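For context, here is a minimal sketch of how the unit could be created and both documented parameters set using the plain Audio Component API. This assumes a platform where the low shelf AU is actually available (per the update above, that excludes iOS); the variable names are illustrative:

AudioComponentDescription desc = {0};
desc.componentType = kAudioUnitType_Effect;
desc.componentSubType = kAudioUnitSubType_LowShelfFilter;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// Find and instantiate the low shelf effect unit.
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
AudioUnit lowShelfAU = NULL;
AudioComponentInstanceNew(comp, &lowShelfAU);
AudioUnitInitialize(lowShelfAU);

// Configure the two documented parameters: cutoff (Hz) and gain (dB).
AudioUnitSetParameter(lowShelfAU, kAULowShelfParam_CutoffFrequency,
                      kAudioUnitScope_Global, 0, 120.0f, 0);
AudioUnitSetParameter(lowShelfAU, kAULowShelfParam_Gain,
                      kAudioUnitScope_Global, 0, 6.0f, 0);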

How to avoid image duplication in the image gallery of an iOS device?

I want to store images to the photo gallery. But if images are already available in the photo gallery, how do I determine whether an image already exists? I didn't find a property like a unique identifier, or should I compare by taking the data in the form of NSData?
Thanks.
You can maintain a dictionary of hashes, one for each image in the photo gallery, and then save only those images whose hashes are not already present in the dictionary.
As a reminder, you can check for an object in a dictionary by doing:
if ([myDictionary objectForKey:variableStoringImageHash] == nil) {
    // No such image present
}
else {
    // Image is present
}
For a bit about hashing an image, this might help:
iPhone: fast hash function for storing web images (url) as files (hashed filenames)
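To make the hashing step concrete, here is a minimal sketch of one possible approach: an MD5 digest over the image's PNG bytes using CommonCrypto. ImageHash is a hypothetical helper, not part of any SDK:

#import <UIKit/UIKit.h>
#import <CommonCrypto/CommonDigest.h>

// Returns a hex-string hash of the image contents (one possible scheme).
static NSString *ImageHash(UIImage *image)
{
    NSData *data = UIImagePNGRepresentation(image);
    unsigned char digest[CC_MD5_DIGEST_LENGTH];
    CC_MD5([data bytes], (CC_LONG)[data length], digest);

    NSMutableString *hash = [NSMutableString stringWithCapacity:CC_MD5_DIGEST_LENGTH * 2];
    for (int i = 0; i < CC_MD5_DIGEST_LENGTH; i++) {
        [hash appendFormat:@"%02x", digest[i]];
    }
    return hash;
}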
I am not sure if what eitan27 says will work, so as an alternative I would say that your best hope is comparing NSData objects. But, as you can see, this will become very tedious, as there will be n images in the library, and comparing against each one for repetition doesn't make sense. Still, if you want to compare data, you can look at this answer, which will tell you what fraction of the data matches.

Issues with iPhone HTTP streaming with concatenated video files

We are seeing this when "tying" two video files together.
For example, we have an ad video that is segmented and a content file that is also segmented.
We create a new file which has both the ad and the content segment information together. However, we are seeing an issue where either the ad content is truncated or the content starts having A/V sync issues.
Both ad and content are segmented the same way, with 5-second segments. However, since ads are of variable length, the resulting file may have leftover (shorter) segments, something like:
#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5,
fileSequence6.ts
#EXTINF:5,
fileSequence7.ts
#EXTINF:4,
fileSequence8.ts
#EXTINF:5,
fileSequence0.ts
#EXTINF:5,
fileSequence1.ts
#EXTINF:5,
fileSequence2.ts
#EXTINF:3,
fileSequence3.ts
Is this the proper way to play 2 files one after the other without rebuffering?
Should generate-variant-plist be used to produce a playlist of the 2 files?
When you have a break in the stream to switch to a commercial, ad, or alternate video source, you want to introduce the discontinuity tag before the start of the next segment, for example:
#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5,
movie0.ts
#EXTINF:2,
movie1.ts
#EXT-X-DISCONTINUITY
#EXTINF:5,
commercial0.ts
#EXTINF:5,
commercial1.ts
#EXTINF:3,
commercial2.ts
This gets a little more complicated if you encrypt the streams, because they use progressive encryption based on the prior segment's encryption state and the sequence number, which together form an "initialization vector". If you break the stream, you have to reset the initialization vector so that the encryption/decryption can continue uninterrupted. This is an involved process, so it's best to just search on "initialization vector" in Apple's docs.
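As an illustration of the idea (a sketch only; the key URIs and IV values below are placeholders, not something Apple's tools emit verbatim), an encrypted playlist can declare an explicit IV on the EXT-X-KEY tag so decryption restarts cleanly after the discontinuity:

#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/movie.key",IV=0x00000000000000000000000000000001
#EXTINF:5,
movie0.ts
#EXT-X-DISCONTINUITY
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/commercial.key",IV=0x00000000000000000000000000000001
#EXTINF:5,
commercial0.ts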

What's the best approach to asynchronous image caching on the iPhone?

I'm creating an iPhone app that will pull data down from a Web API, including email addresses. I'd like to display an image associated with each email address in table cells, so I'm searching the Address Book for images and falling back on a default if the email address isn't in the book. This works great, but I have a few concerns:
Performance: The recipes I've found for looking for an address book record by Email address (or phone number) are reportedly rather slow. The reason for this is that one must iterate over every address book record, and for each one that has an image, iterate over all email addresses to find a match. This can be time-consuming for a large address book, of course.
Table Cells: So I thought I'd gather up all the email addresses for which I need to find images and find them all at once. This way I iterate through the book only once for all addresses. But this doesn't work well for table cells, where each cell corresponds to a single email address. I'd either have to gather all the images before displaying any cells (potentially slow), or have each cell look up each image as it loads (even slower, as I'd need to iterate through the book to find a match for each email address).
Asynchronous Lookup: So then I thought I'd look them up in bulk, but asynchronously, using NSInvocationOperation. For each image found in AddressBook, I'd save a thumbnail in the app sandbox. Then each cell could just reference this file and show the default if it doesn't exist (because it's not in the book or hasn't yet been found). If the image is later found in the asynchronous lookup, the next time the image needs to be displayed it would suddenly appear. This might work well for periodic regeneration of images (for when images have been changed in the address book, for example). But then for any given instance of my app, an image may not actually show up for a while.
Asynchronous Table Cell Lookup: Ideally, I'd use something like markjnet's asynchronous table cell updating to update table cells with an image once it has been downloaded. But for this to work, I'd have to spin off an NSInvocationOperation job for each cell as it's displayed and if the cached icon is missing from the sandbox. But then we're back to inefficiently iterating through the entire address book for each one, and that can be a lot of them if you've just downloaded a whole bunch of new email addresses.
So my question is: How do others do this? I was fiddling with Tweetie2, and it looks like it updates displayed table cells asynchronously. I assume it's sending a separate HTTP request for every image it needs. If so, I imagine that searching the local address book by email address isn't any less efficient, so maybe that's the best approach? Just not worry about the performance issues associated with searching the address book?
If so, is saving a thumbnail image in the sandbox the best approach to caching? And if I wanted to create a new job to update all the thumbnails with any changes in the address book say once a day, what's the best approach to doing so?
How do the rest of you solve this sort of problem? Suggestions would be much appreciated!
Regardless of what strategy you use for the actual caching of images, I would only make one pass through the Address Book data each time you get a batch of email addresses, if possible. (And yes, I would do this asynchronously.)
Create an NSMutableDictionary which will serve as your in-memory cache for search results. Initialize this dictionary with each email address from the download as a key, with a sentinel as that key's value (such as [NSNull null]).
Next, iterate through each ABRecordRef in the Address Book, calling ABRecordCopyValue(record, kABPersonEmailProperty) and looping through the results in each ABMultiValue that is returned. If any of the email addresses are keys in your cache, set [NSNumber numberWithInt:ABRecordGetRecordID(record)] as the value of that key in your dictionary.
Using this dictionary as a lookup index, you can quickly obtain the images from the ABRecordRefs for only the email addresses that you are currently displaying in your table view, given the user's current scroll position, as suggested in hoopjones's answer. You can add an address book change listener to invalidate your cache, trigger another indexing operation, and then update the view, if your application needs that level of up-to-dateness.
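A rough sketch of that indexing pass under manual reference counting (downloadedEmails and emailToRecordID are illustrative names):

NSMutableDictionary *emailToRecordID = [NSMutableDictionary dictionary];
for (NSString *email in downloadedEmails) {
    [emailToRecordID setObject:[NSNull null] forKey:email];   // sentinel
}

ABAddressBookRef book = ABAddressBookCreate();
NSArray *people = (NSArray *)ABAddressBookCopyArrayOfAllPeople(book);
for (id person in people) {
    ABMultiValueRef emails = ABRecordCopyValue((ABRecordRef)person, kABPersonEmailProperty);
    if (emails == NULL) continue;
    for (CFIndex i = 0; i < ABMultiValueGetCount(emails); i++) {
        NSString *email = (NSString *)ABMultiValueCopyValueAtIndex(emails, i);
        if ([emailToRecordID objectForKey:email] != nil) {
            // Replace the sentinel with the record ID of the match.
            ABRecordID recordID = ABRecordGetRecordID((ABRecordRef)person);
            [emailToRecordID setObject:[NSNumber numberWithInt:recordID] forKey:email];
        }
        [email release];
    }
    CFRelease(emails);
}
[people release];
CFRelease(book);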
I'd use the last method you listed (Asynchronous Table Cell Lookup) but only look up images for the records currently being displayed. I overload the UIScrollViewDelegate methods to find out when a user has stopped scrolling, and then only start making requests for the currently visible cells.
Something like this (slightly modified from a tutorial I found on the web which I can't find now; apologies for not citing the author):
- (void)loadContentForVisibleCells
{
    NSArray *cells = [self.table visibleCells];
    [cells retain];
    for (int i = 0; i < [cells count]; i++)
    {
        // Go through each cell in the array and call its loadImage method.
        AddressRecordTableCell *addressTableCell = (AddressRecordTableCell *)[[cells objectAtIndex:i] retain];
        [addressTableCell loadImage];
        [addressTableCell release];
        addressTableCell = nil;
    }
    [cells release];
}

- (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView
{
    // Method is called when the decelerating comes to a stop.
    // Pass visible cells to the cell loading function. If possible, change
    // scrollView to a pointer to your table view to avoid compiler warnings.
    [self loadContentForVisibleCells];
}

- (void)scrollViewDidEndDragging:(UIScrollView *)scrollView willDecelerate:(BOOL)decelerate
{
    if (!decelerate)
    {
        [self loadContentForVisibleCells];
    }
}
Once you know which address records are currently visible, doing a search for just those (probably 5-7 records) will be lightning fast. Once you grab the image, just cache it in a dictionary so that you don't have to redo the request for it later.
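A minimal sketch of that cache under manual reference counting (the names imageCache, CachedImageForEmail, and CacheImageForEmail are illustrative):

// A simple in-memory cache mapping email address -> UIImage.
static NSMutableDictionary *imageCache = nil;

UIImage *CachedImageForEmail(NSString *email)
{
    return [imageCache objectForKey:email];   // nil if not cached yet
}

void CacheImageForEmail(UIImage *image, NSString *email)
{
    if (imageCache == nil) {
        imageCache = [[NSMutableDictionary alloc] init];
    }
    [imageCache setObject:image forKey:email];
}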
You seem to be trying to implement lazy image loading in a UITableView.
There's a good example from Apple; I'm referencing it here:
Lazy load images in UITableView
FYI, I've released a free, powerful, and easy library for doing asynchronous image loading and fast file caching: HJ Managed Objects
http://www.markj.net/asynchronous-loading-caching-images-iphone-hjobjman/