Implement OpenCV for image marker detection - Swift

I have set up an app using the OpenCV framework to detect an image marker with the camera and then react when the image on the screen matches the marker.
So far I have gotten quite a long way by following:
https://medium.com/@dwayneforde/image-recognition-on-ios-with-swift-and-opencv-b5cf0667b79
I have also used his code to implement the image-matching function. Everything is set up and I should be ready to go... but the app crashes when I try to run it.
The problem lies where I check for matches. I use this code, as instructed in the tutorial:
- (void)processImage:(cv::Mat &)img {
    cv::Mat gimg;
    // Convert incoming img to greyscale to match template
    cv::cvtColor(img, gimg, CV_BGR2GRAY);
    // Use check for matches with a certain threshold to help with scaling and angles
    cv::Mat res(img.rows-gtpl.rows+1, gtpl.cols-gtpl.cols+1, CV_32FC1);
    cv::matchTemplate(gimg, gtpl, res, CV_TM_CCOEFF_NORMED);
    cv::threshold(res, res, 0.5, 1., CV_THRESH_TOZERO);
    double minval, maxval, threshold = 0.9;
    cv::Point minloc, maxloc;
    cv::minMaxLoc(res, &minval, &maxval, &minloc, &maxloc);
    // If it's a good enough match
    if (maxval >= threshold)
    {
        // Draw a rectangle for confirmation
        cv::rectangle(img, maxloc, cv::Point(maxloc.x + gtpl.cols, maxloc.y + gtpl.rows), CV_RGB(0,255,0), 2);
        cv::floodFill(res, maxloc, cv::Scalar(0), 0, cv::Scalar(.1), cv::Scalar(1.));
        // Call our delegates callback method
        [delegate matchedItem];
    }
}
The problem seems to be with the line:
cv::matchTemplate(gimg, gtpl, res, CV_TM_CCOEFF_NORMED);
And I get this error message:
OpenCV(3.4.1) Error: Assertion failed (corrsize.height <= img.rows + templ.rows - 1 && corrsize.width <= img.cols + templ.cols - 1) in crossCorr, file /Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/templmatch.cpp, line 589
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(3.4.1) /Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/templmatch.cpp:589: error: (-215) corrsize.height <= img.rows + templ.rows - 1 && corrsize.width <= img.cols + templ.cols - 1 in function crossCorr
Can anyone see what I am doing wrong?
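For context on where the assertion comes from: matchTemplate (via crossCorr) checks the size of res against the sizes of gimg and gtpl, so the usual culprits are a template gtpl that never loaded (an empty Mat) or one that is larger than the camera frame, and the res allocation also looks like a typo (gtpl.cols-gtpl.cols+1 where img.cols-gtpl.cols+1 was presumably intended). A minimal sketch of a guarded version, reusing the names from the question (the early-return check is my addition, not part of the tutorial):
- (void)processImage:(cv::Mat &)img {
    cv::Mat gimg;
    cv::cvtColor(img, gimg, CV_BGR2GRAY);
    // Guard: matchTemplate asserts if the template is empty (e.g. it failed to load)
    // or is larger than the frame it is matched against.
    if (gtpl.empty() || gtpl.rows > gimg.rows || gtpl.cols > gimg.cols) {
        NSLog(@"Template missing or larger than the camera frame");
        return;
    }
    // The result matrix is (rows - tpl.rows + 1) x (cols - tpl.cols + 1).
    cv::Mat res(gimg.rows - gtpl.rows + 1, gimg.cols - gtpl.cols + 1, CV_32FC1);
    cv::matchTemplate(gimg, gtpl, res, CV_TM_CCOEFF_NORMED);
    // ... thresholding, minMaxLoc and the delegate callback as in the question
}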

I have trouble getting depth information from the DEPTH16 format with the Camera2 API using ToF on a P30 Pro

I am currently testing options for depth measurement with the smartphone and wanted to create a depth image initially for testing. I am using the Camera2Basic example as a basis for this (https://github.com/android/camera-samples/tree/main/Camera2Basic). Using DEPTH16 I get a relatively sharp "depth image" back, but the millimetres are not correct: they are in a range of roughly 3600 mm to 5000 mm for an object like a wall that is about 500 mm or 800 mm away from the camera.
But what puzzles me the most is that the image does not transmit any information in the dark. If Android is really targeting the ToF sensor for DEPTH16, it shouldn't be affected in the dark, should it? Or do I have to use ARCore or Huawei's HMS Core to get a real ToF image?
I am using a Huawei P30 Pro, and the code for extracting the depth information looks like this. And yes, performance-wise it is terrible, but it is only for testing purposes :)
private Map<String, PixelData> parseDepth16IntoDistanceMap(Image image) {
    Map<String, PixelData> map = new HashMap<>();
    Image.Plane plane = image.getPlanes()[0];
    // using asShortBuffer() like in the documentation leads to a wrong format (for me) but does not help getting better values
    ByteBuffer depthBuffer = plane.getBuffer().order(ByteOrder.nativeOrder());
    int stride = plane.getRowStride();
    int offset = 0;
    int i = 0;
    for (short y = 0; y < image.getHeight(); y++) {
        for (short x = 0; x < image.getWidth(); x++) {
            short depthSample = depthBuffer.getShort((y / 2) * stride + x);
            short depthSampleShort = (short) depthSample;
            short depthRange = (short) (depthSampleShort & 0x1FFF);
            short depthConfidence = (short) ((depthSampleShort >> 13) & 0x7);
            float depthPercentage = depthConfidence == 0 ? 1.f : (depthConfidence - 1) / 7.f;
            maxz = depthRange;
            sum = sum + depthRange;
            numPoints++;
            listOfRanges.add((float) depthRange);
            if (depthRange < minz && depthRange > 0) {
                minz = depthRange;
            }
            map.put(x + "_" + y, new PixelData(x, y, depthRange, depthPercentage));
            i++;
        }
    }
    return map;
}
In any case, it would help a lot to know whether you can get the data this way at all, so I know if I'm already doing something fundamentally wrong. Otherwise I will change to one of the AR systems. Either way, many thanks for your efforts.
If you want to extract a depth map where you can see the distance to an object, you might use the ARCore Depth API.
https://developers.google.com/ar/develop/java/depth/overview
Or you can follow the codelab, which shows you how to get the data in millimetres.
https://codelabs.developers.google.com/codelabs/arcore-depth#0
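Before switching APIs, it may also be worth decoding DEPTH16 exactly the way the ImageFormat.DEPTH16 documentation describes it: one 16-bit sample per pixel, with the row stride reported in bytes, bits 0-12 holding the range in millimetres and bits 13-15 a confidence value. The buffer indexing in the question ((y / 2) * stride + x) does not match that layout. A rough sketch under those assumptions (plain Java, requires android.media.Image, java.nio.ByteOrder and java.nio.ShortBuffer):
private int getDepthMillimetres(Image image, int x, int y) {
    Image.Plane plane = image.getPlanes()[0];
    // View the plane as 16-bit samples; getRowStride() is in bytes, so divide by 2.
    ShortBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder()).asShortBuffer();
    int rowStrideInShorts = plane.getRowStride() / 2;
    short sample = buffer.get(y * rowStrideInShorts + x);
    // Bits 0-12: range in mm; bits 13-15: confidence (0 = unknown, 1..7 maps to 0..1).
    int rangeMm = sample & 0x1FFF;
    int confidence = (sample >> 13) & 0x7;
    // A confidence of 0 means "no estimate"; callers may want to filter on it.
    return rangeMm;
}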

How to solve error 'Callbacks disabled in gurobi'?

Currently I am investigating an MILP in Pyomo with Gurobi. I would like to be able to add cuts during the B&B and found that callback functions can be used for this purpose. However, when I try to implement a callback function I get the following error: "callbacks disabled for solver gurobi" (or: "callbacks disabled for gurobi_persistent"). This error does not seem very common; does someone have any experience with it?
I am quite new to both Pyomo and Gurobi.
My code is the following (an example that I found through the Pyomo documentation).
from gurobipy import GRB
import pyomo.environ as pe
#from pyomo.core.expr.taylor_series import taylor_series_expansion

m = pe.ConcreteModel()
m.x = pe.Var(bounds=(0, 4))
m.y = pe.Var(within=pe.Integers, bounds=(0, None))
m.obj = pe.Objective(expr=2*m.x + m.y)
m.cons = pe.ConstraintList()  # for the cutting planes

def _add_cut(xval):
    # a function to generate the cut
    m.x.value = xval
    return m.cons.add(m.y >= ((m.x - 2)**2))

_add_cut(0)  # start with 2 cuts at the bounds of x
_add_cut(4)  # this is an arbitrary choice

opt = pe.SolverFactory('gurobi_persistent')
opt.set_instance(m)
#opt.set_gurobi_param('PreCrush', 1)
#opt.set_gurobi_param('LazyConstraints', 1)

def my_callback(cb_m, cb_opt, cb_where):
    if cb_where == GRB.Callback.MIPSOL:
        cb_opt.cbGetSolution(vars=[m.x, m.y])
        if m.y.value < (m.x.value - 2)**2 - 1e-6:
            cb_opt.cbLazy(_add_cut(m.x.value))

opt.set_callback(my_callback)
opt.solve()

assert abs(m.x.value - 1) <= 1e-6
assert abs(m.y.value - 1) <= 1e-6
Thanks.
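One detail that may matter regardless of the error message: the Pyomo documentation example this is based on enables the Gurobi parameters PreCrush and LazyConstraints before solving, and Gurobi itself requires LazyConstraints to be set before cbLazy may be used; in the script above those two lines are commented out. A sketch of the solver setup with them enabled, reusing m and my_callback from the question (this is only the documented prerequisite for lazy constraints, not necessarily the cause of the "callbacks disabled" message):
opt = pe.SolverFactory('gurobi_persistent')
opt.set_instance(m)
opt.set_gurobi_param('PreCrush', 1)         # recommended when cuts reference original-model variables
opt.set_gurobi_param('LazyConstraints', 1)  # required by Gurobi before cbLazy can be called
opt.set_callback(my_callback)
opt.solve()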

The color depth 1 is not supported

I am using iTextSharp version 5.5.2.0, and when trying to load the attached PDF sample 058780-02.pdf I get an exception in the ImageRenderInfo.GetImage() method: the PdfImageObject is not returned, and the message says "The color depth 1 is not supported".
Any suggestions?
Thanks,
Abedellatif
I fixed it. Modify the iTextSharp source code: in the PdfImageObject class, FindColorspace method, at line 222, add:
// Treat CalGray/DeviceGray images as grayscale PNG output:
// stride is the number of bytes per row, pngColorType 0 means grayscale.
if (PdfName.CALGRAY.Equals(tyca) || PdfName.DEVICEGRAY.Equals(tyca)) {
    stride = (width * bpc + 7) / 8;
    pngColorType = 0;
}

MATLAB: Error handling of integer input

How do I, in MATLAB, catch the error that occurs when the user enters letters and other things that aren't numbers in the input:
width = input('Enter a width: ');
I have played around for a while with the try/catch command:
width = 0;
message = '';
% Prompting.
while strcmp(message,'Invalid.') || width < 1 || width ~= int32(width)
    try
        disp(message)
        width = input('Frame width: ');
    catch error
        message = 'Invalid.';
    end
end
But with no luck (the above doesn't work). As shown, I would like a simple prompt like "Frame width: " the first time the user has to enter his choice. But if an error is caught, I want the message to be "Invalid. Try again: ", for example, every time an error occurs.
I have also tried error(), but I don't know how to place it correctly. Since error() doesn't take the input command, where the error happens, as an argument, it must detect the error in another way, which I can't figure out.
Any help would be appreciated.
width = input('Frame width: ');
while (~isInt(width))
    width = input('Invalid. Try again: ');
end
and you'll need the following function somewhere (or another implementation of it):
function retval = isInt(val)
    retval = isscalar(val) && isnumeric(val) && isreal(val) && isfinite(val) && (val == fix(val));
end
answer = input('Frame width: ', 's');
[width, status] = str2num(answer);
while ~status || ~isscalar(width) || width ~= floor(width)
    answer = input('Invalid. Try again: ', 's');
    [width, status] = str2num(answer);
end
disp(width);
(status is 0 if the conversion failed. Without the isscalar test, an input like [1 2; 3 4] would also be accepted. The last test ensures that width must be an integer.)
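If the frame width must also be at least 1, as in the loop condition of the original attempt, the same string-based pattern extends naturally (a small sketch along the same lines, not a tested drop-in):
answer = input('Frame width: ', 's');
[width, status] = str2num(answer);
% Reject non-numeric input, non-scalars, values below 1 and non-integers.
while ~status || ~isscalar(width) || width < 1 || width ~= floor(width)
    answer = input('Invalid. Try again: ', 's');
    [width, status] = str2num(answer);
end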

How to NSLog pixel RGB from a UIImage?

I just want to:
1) Copy the pixel data.
2) Iterate and modify each pixel (just show me how to NSLog the ARGB values as 0-255)
3) Create a UIImage from the new pixel data
I can figure out the gory details if someone can just tell me how to NSLog the RGBA values of a pixel as 0-255. How do I modify the following code to do this? Be specific, please!
-(UIImage*)modifyPixels:(UIImage*)originalImage
{
    NSData* pixelData = (NSData*)CGDataProviderCopyData(CGImageGetDataProvider(originalImage.CGImage));
    uint myLength = [pixelData length];
    for (int i = 0; i < myLength; i += 4) {
        //CHANGE PIXELS HERE
        /*
         Sidenote: Just show me how to NSLog them
         */
        //Example:
        //NSLog(@"Alpha 255-Value is: %u", data[i]);
        //NSLog(@"Red 255-Value is: %u", data[i+1]);
        //NSLog(@"Green 255-Value is: %u", data[i+2]);
        //NSLog(@"Blue 255-Value is: %u", data[i+3]);
    }
    //CREATE NEW UIIMAGE (newImage) HERE
    return newImage;
}
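For the NSLog part specifically, one minimal sketch of the loop (assuming 8 bits per component and 4 bytes per pixel; whether the order is RGBA or ARGB depends on the image's CGImageGetBitmapInfo, so the labels below only hold for an RGBA-style layout):
const uint8_t *data = (const uint8_t *)[pixelData bytes];
NSUInteger length = [pixelData length];
for (NSUInteger i = 0; i < length; i += 4) {
    // One pixel = 4 consecutive bytes; each value is already in the 0-255 range.
    NSLog(@"Pixel %lu  R: %u  G: %u  B: %u  A: %u",
          (unsigned long)(i / 4), data[i], data[i + 1], data[i + 2], data[i + 3]);
}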
Did this direction work for you? I'd get pixel data like this:
UInt32 *pixels = CGBitmapContextGetData( ctx );
#define getRed(p) ((p) & 0x000000FF)
#define getGreen(p) (((p) & 0x0000FF00) >> 8)
#define getBlue(p) (((p) & 0x00FF0000) >> 16)
// display RGB values from the 11th pixel
NSLog(@"Red: %d, Green: %d, Blue: %d", getRed(pixels[10]), getGreen(pixels[10]), getBlue(pixels[10]));
If you'd like to actually see the image, you can use Florent Pillet's NSLogger:
https://github.com/fpillet/NSLogger
The idea is you start the NSLogger client on your desktop, and then in your app you put this up towards the top:
#import "LoggerClient.h"
And in your modifyPixels method you can do something like this:
LogImageData(@"RexOnRoids",                       // Any identifier to go along with the log
             0,                                   // Log level
             newImage.size.width,                 // Image width
             newImage.size.height,                // Image height
             UIImagePNGRepresentation(newImage)); // Image as PNG
Start the client on your desktop, and then run the app on your iPhone, and you'll see real images appear in the client. VERY handy for debugging image problems such as flipping, rotating, colors, alpha, etc.