OpenCV cv::dft() iOS Assertion Error

I need some help in a bad way. I've been looking for an answer to this for a week now without success, so I come crawling for help.
My goal is simple: I am trying to use the OpenCV library in Xcode, and I've run into a roundabout, frustrating problem. I got OpenCV working well with cvCanny and cvAdaptiveThreshold, but I can't get cv::dft() to work. I started by attempting the following:
cv::Mat tempMat = [self.imageView.image CVGrayscaleMat];
cv::dft(tempMat, output2);
This would error because it was not in the proper format (CV_32FC1). So I then tried:
cv::Mat tempMat = [self.imageView.image CVMat];
cv::cvtColor(tempMat, output2, CV_32FC1);
cv::dft(output2, output3);
and I get the same error. Specifically the error reads:
Assertion failed (type == CV_32FC1 || type == CV_32FC2 || type == CV_64FC1 || type == CV_64FC2) in dft
As an update to the original question, I've been trying to determine the type using Mat::type(), and it returns type = 24. Can anyone explain how to decipher what this type means? Is it the wrong type? Latest attempt:
cv::Mat tempMat = [self.imageView.image CVMat];
cv::Mat output2(tempMat.rows, tempMat.cols, CV_32FC1);
cv::cvtColor(tempMat, tempMat, CV_32FC1);
int type = tempMat.type();
int type2 = output2.type();
When I run this I get a type of 24 for tempMat, and a type of 5 for output2. If I try to add this:
cv::cvtColor(output2, output2, CV_32FC1);
I get the error: Assertion failed (scn == 3 || scn == 4) in cvtColor
Any ideas? Even if it's an RTFM suggestion, I'll take anything at this point. Please help.
Thank you.

I think it is a variable type and number of channels problem.
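First, about the type codes: a Mat type packs the bit depth in the low three bits and the channel count above them, so 24 decodes to CV_8UC4 (the 4-channel RGBA data a UIImage typically yields) and 5 decodes to CV_32FC1, which is exactly what dft() accepts. A minimal sketch, using OpenCV's own macros, to decode any such value:
#include <opencv2/core/core.hpp>
#include <cstdio>
int main() {
    int type = 24;                            // the value reported above
    int depth = type & CV_MAT_DEPTH_MASK;     // 0 == CV_8U
    int channels = 1 + (type >> CV_CN_SHIFT); // 4, so 24 == CV_8UC4
    std::printf("depth=%d channels=%d\n", depth, channels);
    return 0;
}
With that in mind, try: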
cv::Mat tempMat = [self.imageView.image CVMat];
cv::Mat output2;
// tempMat is 4-channel RGBA (type 24 == CV_8UC4); CV_RGBA2GRAY matches that
// channel order (CV_BGR2GRAY also accepts 4 channels but swaps the R/B weights).
cv::cvtColor(tempMat, output2, CV_RGBA2GRAY);
// cvtColor changes the number of channels; convertTo changes the depth.
output2.convertTo(output2, CV_32FC1);
cv::dft(output2, output2);
// Convert back to 8-bit into a separate Mat if you want to display the result.
cv::Mat image;
output2.convertTo(image, CV_8UC1);
Please let me know if this works. I have similar code in my project and derived this answer from it, so it may contain some errors; we can work through it interactively from here.

Related

Wrong mode when using PIL.blend with 'F' mode images?

The title says most of it: I'm trying to blend two images with mode F, that is 32-bit float pixel values. However, I get an error from PIL that says:
image has wrong mode
However, I have verified that both images are of mode F and cannot find any evidence that this shouldn't be possible. Is there some way to make this work, preferably without converting to a new image type?
I'm pretty sure that, under the hood, Image.blend calls this implementation. The very first check there is:
/* Check arguments */
if (!imIn1 || !imIn2 || imIn1->type != IMAGING_TYPE_UINT8 || imIn1->palette ||
    strcmp(imIn1->mode, "1") == 0 || imIn2->palette ||
    strcmp(imIn2->mode, "1") == 0) {
    return ImagingError_ModeError();
}
So, although it's not stated in the documentation, I'd guess that Image.blend is only supported for uint8 image modes.
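As a workaround you can do the blend yourself; a minimal sketch, assuming NumPy is available (the helper name blend_float is mine), that keeps everything in mode "F":
import numpy as np
from PIL import Image

def blend_float(im1, im2, alpha):
    # out = im1 * (1 - alpha) + im2 * alpha, for two mode-"F" images
    a1 = np.asarray(im1, dtype=np.float32)
    a2 = np.asarray(im2, dtype=np.float32)
    return Image.fromarray(a1 * (1.0 - alpha) + a2 * alpha, mode="F")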

How to overcome indefinite matrix error (NbClust)?

I'm getting the following error when calling NbClust():
Error in NbClust(data = ds[, sapply(ds, is.numeric)], diss = NULL, distance = "euclidean", : The TSS matrix is indefinite. There must be too many missing values. The index cannot be calculated.
I've called ds <- ds[complete.cases(ds),] just before running NbClust, so there are no missing values.
Any idea what's behind this error?
Thanks
I had the same issue in my research.
So I emailed Nadia Ghazzali, the package maintainer, and got an answer.
I'll attach my email and her reply.
my e-mail:
Dear Nadia Ghazzali. Hello Nadia. I have some questions about the
NbClust function in your R library. I have tried googling but could not
find satisfying answers. First, I'm so grateful to you for making
this awesome R library. It is very helpful for my research. I tested
the NbClust function with my own data like below.
> clust <- NbClust(data, distance = "euclidean",
min.nc = 2, max.nc = 10, method = "kmeans", index = "all")
But soon an error occurred:
Error: division by zero! Error in Indices.WBT(x = jeu, cl = cl1, P = TT, s = ss, vv = vv) : object 'scott' not found
So I went through the NbClust function line by line and found that some
indices, like CCC, Scott, Marriot, TraceCovW, TraceW, Friedman, and Rubin,
were not calculated because the object vv = 0. I'm not very familiar with
algebra, so I don't know the meaning of an eigenvalue, but it seems to me
that the object ss (which is the square root of eigenValues) should not be 0.
So here are my questions.
I assume that my data is so sparse (a lot of zero values) that
sqrt(eigenValues) becomes too small; is that right? I'm sorry I can't
attach my data, but I can attach some of the eigenvalues and their
square roots.
> head(eigenValues)
[1] 0.039769880 0.017179826 0.007011972 0.005698736 0.005164871 0.004567238
> head(sqrt(eigenValues))
[1] 0.19942387 0.13107184 0.08373752 0.07548997 0.07186704 0.06758134
And if my assumption is right, what can I do about this problem? Is the
only way to drop those 7 indices?
Thank you for reading, and I'll be waiting for your reply. Best regards!
and her reply:
Dear Hansol,
Thank you for your interest. Yes, your understanding is good.
Unfortunately, the seven indices could not be applied.
Best regards,
Nadia Ghazzali
#seni The cause of this error is data related. If you look at the source code of this function,
NbClust <- function(data, diss = "NULL", distance = "euclidean", min.nc = 2, max.nc = 15,
                    method = "ward", index = "all", alphaBeale = 0.1)
{
    x <- 0
    min_nc <- min.nc
    max_nc <- max.nc
    jeu1 <- as.matrix(data)
    numberObsBefore <- dim(jeu1)[1]
    jeu <- na.omit(jeu1) # returns the object with incomplete cases removed
    nn <- numberObsAfter <- dim(jeu)[1]
    pp <- dim(jeu)[2]
    TT <- t(jeu) %*% jeu
    sizeEigenTT <- length(eigen(TT)$value)
    eigenValues <- eigen(TT/(nn - 1))$value
    for (i in 1:sizeEigenTT)
    {
        if (eigenValues[i] < 0) {
            print(paste("There are only", numberObsAfter, "nonmissing observations out of a possible", numberObsBefore, "observations."))
            stop("The TSS matrix is indefinite. There must be too many missing values. The index cannot be calculated.")
        }
    }
And I think the root cause of this error is negative eigenvalues that seep in when the number of clusters is very high, i.e. when max.nc is high. So to solve the problem, you must look at your data: check whether it has more columns than rows, remove missing values, and look for issues like collinearity and multicollinearity, variance, covariance, etc.
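To see this directly, you can reproduce the eigenvalue check from the source above on your own data; a minimal sketch, assuming ds is your data frame:
jeu <- na.omit(as.matrix(ds[, sapply(ds, is.numeric)]))
nn  <- nrow(jeu)
dim(jeu)                     # more columns than rows is a red flag
TT  <- t(jeu) %*% jeu
eigenValues <- eigen(TT / (nn - 1))$values
min(eigenValues)             # any negative value triggers the stop() above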
For the other error, invalid clustering method, look at the source code of the method here, at lines 168 and 169 in the given link. You are getting this error message because the clustering method is empty:
if (is.na(method))
    stop("invalid clustering method")

Weird casting needed in Swift

I was watching some of the videos from WWDC 2014 and trying out code I liked, but one of the weird things I noticed is that Swift keeps getting mad at me and wanting me to cast between different number types. This is easy enough, but in the WWDC videos they did NOT need to do this. Here is an example from "What's New With Interface Builder":
-M_PI/2 keeps giving me the error: "Could not find an overload for '/' that accepts the supplied arguments"
Does anyone have a solution to this problem that does NOT simply involve casting, because there is clearly another way of doing this? I have many more examples of similar problems.
if !ringLayer {
    ringLayer = CAShapeLayer()
    let innerRect = CGRectInset(bounds, lineWidth / 2.0, lineWidth / 2.0)
    let innerPath = UIBezierPath(ovalInRect: innerRect)
    ringLayer.path = innerPath.CGPath
    ringLayer.fillColor = nil
    ringLayer.lineWidth = lineWidth
    ringLayer.strokeColor = UIColor.blueColor().CGColor
    ringLayer.anchorPoint = CGPointMake(0.5, 0.5)
    ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI/2, 0, 0, 1)
    layer.addSublayer(ringLayer)
}
ringLayer.frame = layer.bounds
Edit: NB: CGFloat has changed in beta 4, specifically to make handling this 32/64-bit difference easier. Read the release notes and don't take the below as gospel now: it was written for beta 2.
After a clue from this answer I've worked it out: it depends on the selected project architecture. If I leave the Project architecture at the default of (armv7, arm64), then I get the same error as you with this code:
// Error with armv7 target:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI/2, 0, 0, 1)
...and need to cast to a Float (well, CGFloat underneath, I'm sure) to make it work:
// Works with explicit cast on armv7 target
ringLayer.transform = CATransform3DRotate(ringLayer.transform, Float(-M_PI/2), 0, 0, 1)
However, if I change the target architecture to arm64 only, then the code works as written in the Apple example from the video:
// Works fine with arm64 target:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI/2, 0, 0, 1)
So, to answer your question, I believe this is because CGFloat is defined as Double on 64-bit architectures, so it's okay to pass values derived from M_PI (which is also a Double) as a CGFloat parameter. However, when armv7 is the target, CGFloat is a Float, not a Double, so you'd be losing precision when passing expressions derived from M_PI (still a Double) directly as a CGFloat parameter.
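If you want to see the width difference directly, here is a quick playground check, a sketch written for the beta-era Swift this question concerns:
import CoreGraphics
// Prints 8 on a 64-bit target, where CGFloat is Double-sized,
// and 4 on a 32-bit target, where CGFloat is Float-sized.
println(sizeof(CGFloat))
println(sizeof(Double)) // always 8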
Note that Xcode by default will only build for the "active" architecture in Debug builds. I found it was possible to toggle this error by switching between iPhone 4S and iPhone 5S schemes in the standard drop-down in the Xcode toolbar, as they have different architectures. I'd guess that in the demo video a 64-bit target was selected, whereas in your project you've got a 32-bit architecture selected?
Given that a CGFloat is double-precision on 64-bit architectures, the simplest way of dealing with this specific problem would be to always cast to CGFloat.
But as a demonstration of dealing with this type of issue when you need to do different things on different architectures, Swift does support conditional compilation:
#if arch(x86_64) || arch(arm64)
ringLayer.transform = CATransform3DRotate (ringLayer.transform, -M_PI / 2, 0, 0, 1)
#else
ringLayer.transform = CATransform3DRotate (ringLayer.transform, CGFloat(-M_PI / 2), 0, 0, 1)
#endif
However, that's just an example. You really don't want to be doing this sort of thing all over the place, so I'd certainly stick to simply using CGFloat(<whatever POSIX double value you need>) to get either a 32- or 64-bit value depending on the target architecture.
Apple have added much more help for dealing with the different float types in later compiler releases. For example, in early betas you couldn't even take floor() of a single-precision Float easily, whereas now (as of Xcode 6.1) there are overloads of floor(), ceil(), etc. for both Float and Double, so you don't need to fiddle with conditional compilation.
There seem to be issues currently with automatic conversions between Objective-C numeric types and Swift types. I was able to get this to work by declaring lineWidth as a Float. I don't know why they didn't have that issue in the video; I assume they were using a different build. Either there is an Objective-C interop setting I'm missing, or it's just a beta issue.
To verify some of the basic issues (even happening in Playground) I used:
var x:NSNumber = 1
var y:Integer = 2
var z:Int = 3
x += 5 //error
y += 6 //error
z = z + y //error
For Swift 1.2 you have to cast the second parameter to CGFloat.
This code works:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, CGFloat(-M_PI/2), 0, 0, 1)

Why is a function parameter considered an undefined variable?

I've been looking around for a while, but I can't seem to find anything that describes and answers the problem I've been having. Namely, I have a function defined in a ball class that checks whether it and another ball are intersecting (since the balls are kept on the same z-plane and the radius is the same for all of them, I've simplified the problem to one of intersecting circles). The function is as follows (where both obj and other are of the class ball, and the ball class contains a position vector of length 3):
function intersected = ball_intersection(obj, other)
    intersected = (obj.position(1)-other.position(1))^2 + ...
                  (obj.position(2)-other.position(2))^2 <= (2*ball.radius)^2;
end
I get the following error:
Undefined variable other.
Error in ball/ball_intersection (line 29)
intersected = (obj.position(1)-other.position(1))^2+(obj.position(2)-other.position(2))^2 <= (2*ball.radius)^2;
Error in ball/move (line 56)
if ball_intersection(other)
Error in finalproject (line 41)
cueball.move(0.0001, 0, 0, 9.32, 4.65, otherball);
For some reason, MATLAB thinks that a function parameter is undefined, and I don't know how to make it see that it is in fact defined right there.
Any and all help is appreciated - thanks for reading!
ball_intersection has to be called with two input arguments. When line 56 calls ball_intersection(other) with a single argument, other is bound to the first parameter obj and the second parameter other is never assigned, which is why MATLAB reports it as undefined.
Most likely, you want to change line 56 of ball to if ball_intersection(obj, other), or if obj.ball_intersection(other).
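For reference, here is a minimal, simplified sketch of how the two methods fit together (the class layout is my assumption, pieced together from the error trace, and the real move takes more arguments):
classdef ball
    properties
        position = [0 0 0];   % x, y, z
    end
    properties (Constant)
        radius = 1;           % shared by all balls, hence the ball.radius reference
    end
    methods
        function intersected = ball_intersection(obj, other)
            intersected = (obj.position(1)-other.position(1))^2 + ...
                          (obj.position(2)-other.position(2))^2 ...
                          <= (2*ball.radius)^2;
        end
        function move(obj, other)
            if obj.ball_intersection(other)   % both balls supplied
                % handle the collision here
            end
        end
    end
end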

EXC_BAD_ACCESS when calling avcodec_encode_video

I have an Objective-C class (although I don't believe this is anything Obj-C specific) that I am using to write a video out to disk from a series of CGImages. (The code I am using at the top to get the pixel data comes right from Apple: http://developer.apple.com/mac/library/qa/qa2007/qa1509.html). I successfully create the codec and context - everything is going fine until it gets to avcodec_encode_video, when I get EXC_BAD_ACCESS. I think this should be a simple fix, but I just can't figure out where I am going wrong.
I took out some error checking for succinctness. 'c' is an AVCodecContext*, which is created successfully.
-(void)addFrame:(CGImageRef)img
{
    CFDataRef bitmapData = CGDataProviderCopyData(CGImageGetDataProvider(img));
    long dataLength = CFDataGetLength(bitmapData);
    uint8_t* picture_buff = (uint8_t*)malloc(dataLength);
    CFDataGetBytes(bitmapData, CFRangeMake(0, dataLength), picture_buff);
    AVFrame *picture = avcodec_alloc_frame();
    avpicture_fill((AVPicture*)picture, picture_buff, c->pix_fmt, c->width, c->height);
    int outbuf_size = avpicture_get_size(c->pix_fmt, c->width, c->height);
    uint8_t *outbuf = (uint8_t*)av_malloc(outbuf_size);
    out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture); // ERROR occurs here
    printf("encoding frame %3d (size=%5d)\n", i, out_size);
    fwrite(outbuf, 1, out_size, f);
    CFRelease(bitmapData);
    free(picture_buff);
    free(outbuf);
    av_free(picture);
    i++;
}
I have stepped through it dozens of times. Here are some numbers...
dataLength = 408960
picture_buff = 0x5c85000
picture->data[0] = 0x5c85000 -- which I take to mean that avpicture_fill worked...
outbuf_size = 408960
and then I get EXC_BAD_ACCESS at avcodec_encode_video. Not sure if it's relevant, but most of this code comes from api-example.c. I am using Xcode, compiling for armv6/armv7 on Snow Leopard.
Thanks so much in advance for help!
I don't have enough information here to point to the exact error, but I think the problem is that the input picture contains less data than avcodec_encode_video() expects:
avpicture_fill() only sets some pointers and numeric values in the AVFrame structure. It does not copy anything, and does not check whether the buffer is large enough (and it cannot, since the buffer size is not passed to it). It does something like this (copied from ffmpeg source):
size = picture->linesize[0] * height;
picture->data[0] = ptr;
picture->data[1] = picture->data[0] + size;
picture->data[2] = picture->data[1] + size2;
picture->data[3] = picture->data[1] + size2 + size2;
Note that the width and height are passed from the variable "c" (the AVCodecContext, I assume), so they may be larger than the actual size of the input frame.
It is also possible that the width/height is fine, but the pixel format of the input frame differs from what is passed to avpicture_fill(). (Note that the pixel format also comes from the AVCodecContext, which may differ from the input.) For example, if c->pix_fmt is RGBA and the input buffer is in YUV420 format (or, more likely for iPhone, a biplanar YCbCr), then the size of the input buffer is width*height*1.5, but avpicture_fill() expects a size of width*height*4.
So checking the input/output geometry and pixel formats should lead you to the cause of the error. If that does not help, I suggest trying to compile for i386 first. It is tricky to compile FFmpeg for the iPhone properly.
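As a concrete first step, you could add a sanity check, using only calls already present in your addFrame: method, right after copying the pixel data:
// The buffer CoreGraphics produced must hold at least as many bytes as a
// c->pix_fmt frame of c->width x c->height occupies; otherwise
// avcodec_encode_video() will read past the end of picture_buff.
int expected = avpicture_get_size(c->pix_fmt, c->width, c->height);
if (dataLength < expected) {
    NSLog(@"frame buffer too small: have %ld bytes, codec expects %d", dataLength, expected);
    return;
}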
Does the codec you are encoding support the RGB color space? You may need to use libswscale to convert to I420 before encoding. What codec are you using? Can you post the code where you initialize your codec context?
The function RGBtoYUV420P may help you.
http://www.mail-archive.com/libav-user@mplayerhq.hu/msg03956.html
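If the pixel formats do turn out to mismatch, here is a minimal sketch of that conversion using libswscale (the PIX_FMT_* constants are from the FFmpeg versions of that era; it reuses the question's c and picture_buff and assumes the bitmap is RGBA):
#include <libswscale/swscale.h>
// Convert the RGBA bitmap to YUV420P (I420) before encoding.
struct SwsContext *sws = sws_getContext(c->width, c->height, PIX_FMT_RGBA,
                                        c->width, c->height, PIX_FMT_YUV420P,
                                        SWS_BICUBIC, NULL, NULL, NULL);
AVFrame *yuv = avcodec_alloc_frame();
uint8_t *yuv_buff = (uint8_t*)av_malloc(avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height));
avpicture_fill((AVPicture*)yuv, yuv_buff, PIX_FMT_YUV420P, c->width, c->height);
const uint8_t *src_slices[1] = { picture_buff };
int src_stride[1] = { 4 * c->width };   // RGBA is 4 bytes per pixel
sws_scale(sws, src_slices, src_stride, 0, c->height, yuv->data, yuv->linesize);
// ...then pass yuv (not picture) to avcodec_encode_video().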