Swift AVFoundation: QR code scanning error in bright ambient light - swift

I'm using the example from here to build a QR code scanning app, and it works perfectly fine for paper or in normal light.
In normal conditions the QR code looks like below:
(the lines are thicker and the dots sit close to each other)
My problem: when the ambient light is bright and the phone displaying the code is bright (especially a high-resolution display like the Samsung Edge 7's), the scanned QR code comes out like below and can no longer be read!
(the lines become thinner and the dots become smaller and further apart)
Any suggestion on where/how I can fix this kind of error? ZXing manages to scan even in my 'error' scenario.
Thanks in advance!

After asking and searching around (useful info: https://www.objc.io/issues/21-camera-and-photos/camera-capture-on-ios/), it turns out this issue is related to the camera's exposure, brightness, contrast and white balance.
This is what I added to solve the issue:
// Zoom + set exposure for the bright scenario.
do {
    try currentDevice.lockForConfiguration()
} catch {
    // Handle the error (e.g. log it); configuration is not possible.
    return
}
currentDevice.videoZoomFactor = 2.0
let exposureBias: Float = -0.5
currentDevice.setExposureTargetBias(exposureBias) { (time: CMTime) in
    // Called once the new bias has taken effect.
}
currentDevice.unlockForConfiguration()
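As a defensive refinement (my addition, not part of the original answer), the zoom and bias can be clamped to what the device actually supports before applying them:

import AVFoundation

// Hypothetical helper, assuming `device` is the active AVCaptureDevice.
func applyBrightSceneWorkaround(to device: AVCaptureDevice,
                                zoom: CGFloat = 2.0,
                                bias: Float = -0.5) {
    do {
        try device.lockForConfiguration()
    } catch {
        print("lockForConfiguration failed: \(error)")
        return
    }
    defer { device.unlockForConfiguration() }

    // Never exceed the active format's maximum zoom.
    device.videoZoomFactor = min(zoom, device.activeFormat.videoMaxZoomFactor)

    // Keep the bias inside the device's supported range.
    let clampedBias = max(device.minExposureTargetBias,
                          min(bias, device.maxExposureTargetBias))
    device.setExposureTargetBias(clampedBias, completionHandler: nil)
}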

Related

Apple Vision – Barcode Detection doesn't work for barcodes with different colours

So, I have to scan barcodes in various colours, for example a yellow barcode on a black background or a yellow barcode on a white background.
I don't have any issues with them being recognized by traditional linear and CCD barcode scanners. I have tried the Apple Vision framework, but it doesn't work on them; it works perfectly fine on black barcodes on a white background.
My barcodes are all Code 128, so I use this code:
var barcodeObservations: [String: VNBarcodeObservation] = [:]
for barcode in barcodes {
    if let detectedBarcode = barcode as? VNBarcodeObservation,
       detectedBarcode.symbology == .code128,
       let payload = detectedBarcode.payloadStringValue {
        // Key by payload; skips barcodes whose payload can't be
        // decoded instead of crashing on a force unwrap.
        barcodeObservations[payload] = detectedBarcode
    }
}
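For context, the question doesn't show where `barcodes` comes from; a typical setup (my assumption, not the asker's code) would be a VNDetectBarcodesRequest restricted to Code 128:

import Vision

// Assumed setup: a Vision request limited to Code 128 symbology.
let request = VNDetectBarcodesRequest { request, _ in
    guard let barcodes = request.results as? [VNBarcodeObservation] else { return }
    // ... filter into barcodeObservations as above ...
}
request.symbologies = [.code128]

// Run it against a camera frame.
func detect(in pixelBuffer: CVPixelBuffer) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}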
And in the 'captureOutput' function of AVCaptureVideoDataOutputSampleBufferDelegate, I use this to filter the live feed to black and white, which helps with recognition of the golden barcode on a silver background (the first image):
let context = CIContext(options: nil)
if let currentFilter = CIFilter(name: "CIPhotoEffectMono") {
    currentFilter.setValue(CIImage(cvImageBuffer: pixelBuffer), forKey: kCIInputImageKey)
    if let output = currentFilter.outputImage {
        // Render the monochrome image back into the same pixel buffer.
        context.render(output, to: pixelBuffer)
    }
}
How can I make the Vision framework detect barcodes with inverted colours?
The 'CIColorInvert' filter doesn't work.
Edit: These are the barcodes:
Theory
By default, the Apple Vision, CoreML, ARKit and RealityKit frameworks are designed to detect barcodes seen through a camera as high-contrast black-and-white images (with predictable weights for the channels: r = 30%, g = 59%, b = 11%). In your case, a yellow barcode on a white background has the lowest contrast, so no real barcode scanner can read it, including the RGB camera feed for Vision.
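To put numbers on that claim (my own back-of-envelope check using those channel weights):

// Approximate perceived luminance with the weights quoted above.
func luma(r: Double, g: Double, b: Double) -> Double {
    0.30 * r + 0.59 * g + 0.11 * b
}

luma(r: 255, g: 255, b: 0)    // yellow: about 227
luma(r: 255, g: 255, b: 255)  // white:  255
luma(r: 0,   g: 0,   b: 0)    // black:  0
// Yellow on white differs by only ~28 grey levels, so after the
// grayscale conversion the bars almost vanish; yellow on black
// differs by ~227 levels and remains readable.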
Let's see what the Best and Worst Colors for Barcode Labels article tells us:
One reason why barcodes can be hard to scan is color, or more specifically, the lack of contrast in colors. If there isn’t enough contrast between the background and bar colors, barcode scanners will have a hard time reading it.
Avoid the following colour combinations, because they fall into a low-contrast range of the greyscale spectrum:
Practical Solution
(only if the barcode was printed on a planar surface with diffuse paint)
If you want to successfully detect a chromatic barcode on a chromatic background using Vision, you definitely need to apply a grayscale filter to the colour CVPixelBuffer stream before Vision starts recognizing barcodes. For this, use the AVFoundation and CoreImage frameworks; see the sketch after the list of posts below.
Please read these three posts to find out how you can do it:
Applying Filters to a Capture Stream
Convert Image to CVPixelBuffer for Machine Learning Swift
Converting Color Images to Grayscale
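Here's a minimal sketch of that pipeline, assuming frames arrive through an AVCaptureVideoDataOutputSampleBufferDelegate; the class name and the choice of CIColorControls for desaturation are mine, not from the posts above:

import AVFoundation
import CoreImage
import Vision

final class GrayscaleScanner: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Reuse one context; creating a CIContext per frame is expensive.
    private let ciContext = CIContext()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Desaturate the frame in place before handing it to Vision.
        let input = CIImage(cvImageBuffer: pixelBuffer)
        let gray = input.applyingFilter("CIColorControls",
                                        parameters: [kCIInputSaturationKey: 0.0])
        ciContext.render(gray, to: pixelBuffer)

        // Vision now sees a grayscale frame.
        let request = VNDetectBarcodesRequest()
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
        // ... inspect request.results ...
    }
}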
P.S.
Metallic Paints
Barcodes printed with metallic paints (gold, silver, copper, etc.) are the worst cases for barcode readers. This is because metallic paint catches reflections and specular highlights from environment lights, so barcodes printed with metallic paints can barely be read.

Image created upside down on Android

I am using the following code to export a SceneCaptureComponent as a PNG image on the device, and I am facing a weird issue.
On Windows and iOS everything works as expected.
On Android the image is upside down, as if it were rotated by 180 degrees.
Any hints why?
bool UsaveJPG::SaveImage(class USceneCaptureComponent2D* Target, const FString ImagePath, const FLinearColor ClearColour)
{
    FRenderTarget* RenderTarget = Target->TextureTarget->GameThread_GetRenderTargetResource();
    if (RenderTarget == nullptr)
    {
        return false;
    }

    TArray<FColor> RawPixels;

    // Format not supported - use PF_B8G8R8A8.
    if (Target->TextureTarget->GetFormat() != PF_B8G8R8A8)
    {
        // TRACEWARN("Format not supported - use PF_B8G8R8A8.");
        return false;
    }

    if (!RenderTarget->ReadPixels(RawPixels))
    {
        return false;
    }

    // Swap red and blue (the target is BGRA, the wrapper is fed RGBA)
    // and derive alpha from the clear colour.
    FColor ClearFColour = ClearColour.ToFColor(false); // FIXME - want sRGB or not?
    for (auto& Pixel : RawPixels)
    {
        const uint8 PR = Pixel.R;
        const uint8 PB = Pixel.B;
        Pixel.R = PB;
        Pixel.B = PR;
        // Pixels matching the clear colour become fully transparent.
        Pixel.A = ((Pixel.R == ClearFColour.R) && (Pixel.G == ClearFColour.G) && (Pixel.B == ClearFColour.B)) ? 0 : 255;
    }

    // ImageWrapperModule is assumed to be a cached member, e.g. obtained via
    // FModuleManager::LoadModuleChecked<IImageWrapperModule>("ImageWrapper").
    TSharedPtr<IImageWrapper> ImageWrapper = ImageWrapperModule.CreateImageWrapper(EImageFormat::PNG);
    const int32 Width = Target->TextureTarget->SizeX;
    const int32 Height = Target->TextureTarget->SizeY;
    if (ImageWrapper.IsValid() && ImageWrapper->SetRaw(&RawPixels[0], RawPixels.Num() * sizeof(FColor), Width, Height, ERGBFormat::RGBA, 8))
    {
        FFileHelper::SaveArrayToFile(ImageWrapper->GetCompressed(), *ImagePath);
        return true;
    }
    return false;
}
It seems you aren't the only one having this issue.
Questions with answers (the key passages are quoted below):
"I believe the editor mobile preview provides "correct" results, while the mobile device produces upside down results. I ran into this issue when creating a post process volume to simulate a camera partially submerged in water with mathematically calculated wave displacement. I can convert the screen position to world position using a custom material node with the code: float2 ViewPos = (ScreenPos.xy - View.ScreenPositionScaleBias.wz) / View.ScreenPositionScaleBias.xy * 10; float4 Pos = mul(float4(ViewPos, 10, 1), [Link Removed]); return Pos.xyz / Pos.w; This works as expected in the editor mobile preview, but the results are weirdly flipped around when launching to a mobile device. Manually inverting the Y-coordinate of the UV by subtracting the Y-component from 1 will correct the ViewportUV output, but the world space calculation still doesn't work (probably because [Link Removed] and [Link Removed]need to also be inverted). Strangely, if I feed the inverted UV coordinates into SceneTexture:PostProcessInput0, the entire scene flips upside down when running on a mobile device. This means that the ScreenPosition's ViewportUV is not the same as the SceneTexture UV. As a result, any post process effects that rely on the ViewportUV do not work correctly when running on mobile devices."
https://unreal-engine-issues.herokuapp.com/issue/UE-63442
I'm having trouble getting the tilt function to work with my Android phone. It seems that the tilt values are upside down, because when I hold the phone flat on the table with the screen up, the tilt values snap from values like x=-3, y=0, z=-3 to x=3, y=0, z=3. When I hold the phone up with the screen facing down, the values are much closer to 0 all round.
Response:
Tilt Y is always zero on Android devices, but works fine on iOS.
I think nobody has reported it as a bug yet :) you should do it.
https://answers.unrealengine.com/questions/467434/tilt-vector-values-on-android-seem-to-be-upside-do.html?sort=oldest
I'm not an Unreal dev, but you might have to put in a condition for Android that inverts the image yourself, as sketched below. I've done that kind of thing before (think cross-browser compatibility). It's always a PITA and bloats the code for a stupid "reason", but sometimes it's what you have to do. Maybe Unreal will release a patch for it, but that's not likely, since any other devs who have already accepted and accounted for this issue would then have to fix their "fixed" code. It may just be something you have to live with, unfortunately.
Reading this, it sounds like the different OSes chose different corners for the origin of the "graph" that is the screen. Windows and iOS seem to have chosen the same corner, while Android chose differently. I've done similar things in programming as well as for CNC machines. You may be able to choose your "world origin" and orientation in settings.
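I'm not sure of the exact Unreal idiom, but if you go the manual-flip route, a sketch along these lines (untested, my own illustration) could run right after ReadPixels on Android:

#if PLATFORM_ANDROID
// Flip the image vertically: swap row Y with row (Height - 1 - Y).
// Assumes RawPixels holds Width * Height FColors in row-major order.
const FIntPoint Size = RenderTarget->GetSizeXY();
for (int32 Row = 0; Row < Size.Y / 2; ++Row)
{
    for (int32 Col = 0; Col < Size.X; ++Col)
    {
        RawPixels.Swap(Row * Size.X + Col, (Size.Y - 1 - Row) * Size.X + Col);
    }
}
#endif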
Open the Find in Blueprints menu item.
Search for Debug Draw World Origin. In the results, double-click the Debug Draw World Origin function.
https://docs.unrealengine.com/en-US/Platforms/AR/HandheldAR/ARHowToShowWorldOrigin/index.html
There's not much out there that I can decipher as relevant, but searching for "android unreal world origin" gets a few results that might help, and adding "orientation" gets some different, possibly relevant, results.

How to duplicate one stereo channel to the other stereo channel using AudioKit

I am using a Focusrite Scarlett 2i2 into a Mac. The signal into the Scarlett is a guitar.
With code along these lines I can get audio into the app, but only on the left stereo channel.
mic = AKMicrophone()
device = AKDevice(name: "Scarlett 2i4 USB", deviceID: 56)
mic.setDevice(device)
let booster = AKBooster(mic, gain: 1.0)
AudioKit.output = booster
AudioKit.start()
mic.start()
Is there a simple way to combine the left and right channels from a mic input into a single mono signal (or to put the same signal on both left and right)?
I tried a variation on this answer about flipping left and right channels: AudioKit - Stereo channel flipping from input to output?
But that didn't work. FWIW, it also didn't work for purely flipping the channels (AKPanner seems to be able to pan from the center to hard left, but not from hard left to the center or right).
Two other things that might be related:
It seems that AKStereoInput is not available for the Mac platform. Is that correct?
What exactly is "deviceID"? I seem to be able to change that and get the same result.
Thank you.
Yes, there is something called AKStereoFieldLimiter that does just that:
https://audiokit.io/docs/Classes/AKStereoFieldLimiter.html
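For what it's worth, usage would look roughly like this (a sketch against AudioKit 4.x; the exact initializer and whether AudioKit.start() throws vary by version):

import AudioKit

let mic = AKMicrophone()
// amount: 1 collapses the stereo field completely, so both output
// channels carry the same mono mix of the input.
let monoMix = AKStereoFieldLimiter(mic, amount: 1)
AudioKit.output = monoMix
AudioKit.start()
mic.start()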

Cocos2d. Diffuse image (60 fps)

The game was built with cocos2d 0.99.5 and Box2D, on iPhone SDK 4.3.
We have a character. When the character moves quickly, it looks blurred (fuzzy/unfocused), both in the simulator and on a device (iPhone 3G).
The character is moved using a mouseJoint (dampingRatio = 0 // frequencyHz = -1).
In the screenshot the image is clear. link
The character is in focus; the screenshot doesn't capture the problem.
The frame rate is 60 fps the whole time.
Params tried:
use kCCDirectorProjection2D // 3D
alias // antialias texture params
CC_COCOSNODE_RENDER_SUBPIXEL 1 and 0
Video sample: link
How can I get a clear image of the character while it moves?
I also had a problem like this and fixed it by changing this line in ccConfig.h:
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 0
to
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 1
This is the comment for this define, maybe it helps someone.
/** #def CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL
If enabled, the texture coordinates will be calculated by using this formula:
- texCoord.left = (rect.origin.x*2+1) / (texture.wide*2);
- texCoord.right = texCoord.left + (rect.size.width*2-2)/(texture.wide*2);
The same for bottom and top.
This formula prevents artifacts by using 99% of the texture.
The "correct" way to prevent artifacts is by using the spritesheet-artifact-fixer.py or a similar tool.
Affected nodes:
- CCSprite / CCSpriteBatchNode and subclasses: CCLabelBMFont, CCTMXTiledMap
- CCLabelAtlas
- CCQuadParticleSystem
- CCTileMap
To enabled set it to 1. Disabled by default.
#since v0.99.5
*/
I am pretty sure that what you are describing is an optical illusion. LCDs, especially lower-quality LCDs, have a finite response time. If this response time is too slow, it can cause ghosting, i.e. the moving object looks smeared. Basically what's happening is the previous frame's (or several frames') pixels take a long time to actually "turn off" and you see fainter versions of your sprite left behind as it moves.
With regards to your comment:
For the experiment, I took a pencil, put it to a sheet of paper and began to move it quickly. My eyes see the pencil in focus, so the problem is not an optical effect but a code problem.
Looking at a moving object in the real world is not the same as looking at a moving object on a screen, with or without a poor display response time. The real-world object moves continuously, but the screen object moves in discrete steps. Your eye can follow the pencil exactly and keep the image sharp on your retina. If you follow a screen image, however, your eye moves smoothly while the screen image "jumps" from place to place: at 60 fps, for example, a sprite crossing a 480-pixel screen in half a second leaps 16 pixels every frame. This can cause a "juddering" effect for sufficiently fast-moving objects, even at high framerates. If 60 fps still judders, there is basically no way around it; it is a limitation of current display technology.

Zoom causes (heavy) problems in Bing Maps with polylines

Hello, I have some problems with the Bing Maps control.
If I zoom in too close to the polylines, they begin to disappear (from bottom to top and from right to left).
The polylines are generated dynamically with an ItemsControl (the one included in the Maps namespace) bound to a collection of my own LocationData from the ViewModel, which an IValueConverter converts to the map-specific LocationPoints.
Some values that are not accessible from the ViewModel are set in the Loaded event.
The map and its container stretch over the whole screen.
So when the lines begin to disappear and I zoom out via a button in my ApplicationBar
private void ZoomOut_Click(object sender, RoutedEventArgs e)
{
    map1.ZoomLevel -= 1.0;
}
the application exits without an exception...
I have tested it on a real device with and without the debugger, and the debugger only reports that it has lost the connection to the device.
Has anyone had this or a similar problem and hopefully solved it?
Thanks for any help.
PS: My LocationData contains approximately 100-200 points split across 3-7 lines; that can't be too much, can it?
Yes, hundreds of points is too much, but that's the least of your problems. The way you have coded this, you are reconverting and replotting your points every time there is a pan or zoom.
Don't use the type converter. Convert your points once, cache the converted points and bind to those.
Research quadtrees and how they apply to culling your point set in proportion to zoom level.
Apply a clipping rectangle. In my experience, half a degree larger on each side of your display region works well.
Study the Bing Maps event model and redesign your code so that you only cull, clip and plot when map manipulation stops (see the sketch below).
Ideally, write your cull, clip and plot logic so that it is async and can be signalled to abort, so that if manipulation restarts before the work is finished, it can be aborted and restarted.
Using the techniques above I am able to get performance comparable to the built-in map.
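I'm going from memory of the Windows Phone Bing Maps control here, so treat the event and property names as assumptions; the "replot only when manipulation stops" idea looks roughly like this, where RebuildVisiblePolylines is a hypothetical method doing the cull/clip/plot:

public MainPage()
{
    InitializeComponent();
    // Re-plot only once the user stops panning/zooming,
    // not on every intermediate view change.
    map1.ViewChangeEnd += OnViewChangeEnd;
}

private void OnViewChangeEnd(object sender, MapEventArgs e)
{
    // Cull to the visible region plus ~0.5 degrees on each side,
    // clip the lines to that rectangle, then plot.
    RebuildVisiblePolylines(map1.BoundingRectangle);
}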