iPhone, OpenCV and CvBlobDetector

I found Yoshimasa Niwa's article about blob detection here:
http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en
And something on realtime face detection here:
http://www.morethantechnical.com/2009/08/09/near-realtime-face-detection-on-the-iphone-w-opencv-port-wcodevideo/
But what I really want to do is realtime blob detection (like http://www.youtube.com/watch?v=LIgsVoCXTXM) using the iPhone 4 camera.
I can find the headers for CvBlobDetector in cvvidsurv.hpp, but using it unmodified doesn't seem like the right approach.
How do I get CvBlobDetector to work? Or is there an alternate solution?

Make sure you've followed the instructions to use it properly:
http://opencv.willowgarage.com/wiki/cvBlobsLib
One alternative solution I have used, and which works well, is:
http://code.google.com/p/cvblob/
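For reference, this is roughly what the cvblob route looks like once the library is built. A minimal sketch, assuming a single already-captured BGR frame on disk and arbitrary threshold/area values that you would tune for live camera frames:

    #include <cv.h>
    #include <highgui.h>
    #include <cvblob.h>

    using namespace cvb;

    int main()
    {
        // Assumed input: one BGR test frame; on the iPhone you would feed camera frames instead.
        IplImage *frame = cvLoadImage("frame.png");
        IplImage *grey  = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
        cvCvtColor(frame, grey, CV_BGR2GRAY);
        cvThreshold(grey, grey, 100, 255, CV_THRESH_BINARY);   // threshold value is a placeholder

        // Label connected components, then keep only reasonably sized blobs.
        IplImage *labelImg = cvCreateImage(cvGetSize(grey), IPL_DEPTH_LABEL, 1);
        CvBlobs blobs;
        cvLabel(grey, labelImg, blobs);
        cvFilterByArea(blobs, 500, 1000000);

        // Draw bounding boxes of the surviving blobs back onto the frame.
        cvRenderBlobs(labelImg, blobs, frame, frame, CV_BLOB_RENDER_BOUNDING_BOX);
        cvSaveImage("blobs.png", frame);

        cvReleaseBlobs(blobs);
        cvReleaseImage(&labelImg);
        cvReleaseImage(&grey);
        cvReleaseImage(&frame);
        return 0;
    }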

Related

Save images for my web server in SPIFFS on esp32-S2

Hi, it's my first time using the ESP32-S2, because it's now not recommended to use the ESP32. I'm looking to save images in SPIFFS for my web server. On the ESP32 I used the esp32fs plugin (https://github.com/me-no-dev/arduino-esp32fs-plugin), but it doesn't work for the ESP32-S2. I would like to know whether there is any plugin like esp32fs and, if not, how I can save my images (I'm using Arduino IDE 1.8.19). I've been searching but haven't found anything. Any guidance is welcome. Thank you for your time and assistance.
You can try my ESP32_FSWebServer_DRD or the ESP_FSWebServer example of the ESP_WiFiManager library.
Follow the instructions in the ESP_FSWebServer example.
You can use either the deprecated SPIFFS or the better LittleFS.
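If the goal is just to get image files into flash and serve them, you can also skip the IDE plugin entirely and upload them over HTTP. A minimal Arduino sketch along these lines (LittleFS here; the SSID, password and file name are placeholders):

    #include <WiFi.h>
    #include <WebServer.h>
    #include <LittleFS.h>   // swap for SPIFFS.h / SPIFFS if you stay on SPIFFS

    WebServer server(80);
    File uploadFile;

    // Receives a multipart upload (e.g. from a simple HTML form or curl -F) and writes it to flash.
    void handleUpload() {
      HTTPUpload &up = server.upload();
      if (up.status == UPLOAD_FILE_START) {
        uploadFile = LittleFS.open("/" + up.filename, "w");
      } else if (up.status == UPLOAD_FILE_WRITE && uploadFile) {
        uploadFile.write(up.buf, up.currentSize);
      } else if (up.status == UPLOAD_FILE_END && uploadFile) {
        uploadFile.close();
      }
    }

    void setup() {
      Serial.begin(115200);
      if (!LittleFS.begin(true)) {            // true = format the partition if mounting fails
        Serial.println("LittleFS mount failed");
        return;
      }
      WiFi.begin("YOUR_SSID", "YOUR_PASSWORD");
      while (WiFi.status() != WL_CONNECTED) delay(500);
      Serial.println(WiFi.localIP());

      // POST /upload stores a file; GET /logo.png serves it straight from flash.
      server.on("/upload", HTTP_POST, []() { server.send(200, "text/plain", "stored"); }, handleUpload);
      server.serveStatic("/logo.png", LittleFS, "/logo.png");
      server.begin();
    }

    void loop() {
      server.handleClient();
    }

Once uploaded, the images persist across reboots just as they would with a plugin-based upload.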

What may be the possible causes of my cast failing?

I am trying to make a vehicle selection system in UE4 with Blueprints.
I have followed this video.
Below are the screenshots of my code.
screenshot1 and
screenshot2
First I tried to figure out what was happening in my code, and I found that the cast was not succeeding; hence the car is being spawned but not possessed.
Please help me by listing some reasons that may be responsible for this.
If you need any more info, I have given the link to the video which I used to make the system.
Thanks in advance.
I found the answer
I just searched it on Google and found it here:
https://answers.unrealengine.com/questions/98622/view.html
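For anyone landing here later: a cast only succeeds when the object really is (or derives from) the class you cast to, so the usual cause is that the spawned vehicle Blueprint is not actually a child of the class used in the Cast node. In C++ terms (class names here are hypothetical, purely to illustrate what the Blueprint graph is doing):

    // Hypothetical classes: AVehiclePawn is an APawn subclass the car Blueprint should derive from.
    void AMyPlayerController::SpawnAndPossessCar(TSubclassOf<AActor> CarClass, const FTransform& SpawnTransform)
    {
        AActor* Spawned = GetWorld()->SpawnActor<AActor>(CarClass, SpawnTransform);

        // Cast<> returns nullptr when Spawned is not an AVehiclePawn,
        // which gives exactly the "car spawns but is never possessed" symptom.
        if (AVehiclePawn* Car = Cast<AVehiclePawn>(Spawned))
        {
            Possess(Car);   // only reached when the cast succeeds
        }
    }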

Is it possible to send HTTP GET Requests from a Simulink Block?

Basically, the title says it all. I'm working on a model that needs (there is no way around it) to load data from a website, parse it, and pass it on to another block. I thought I could use an S-Function written in C++, which didn't work properly; then I tried to use webread(),
which also didn't work in Simulink because I can't use extrinsic functions on the device this will run on.
I thought I could work around it by downloading the file externally and then reading it with fscanf, but it turned out that MATLAB Coder doesn't support that either.
After putting two and a half days into this, I'm asking myself whether it is even possible to do something like an HTTP request from a Simulink block. That's why I came here to ask. Thanks for every answer!
I have since figured out a way to do it with a C++ S-Function.
I also created a GitHub repo for it. If you're stuck with the same problem I was, take a look at it. I'm pretty sure it will help you.
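For completeness, here is a rough sketch of what such a block can look like. This is not the code from the repo: it assumes libcurl is available where the model runs, that the response body is a single number, and it uses a placeholder URL and block name:

    #define S_FUNCTION_NAME  http_get_sfun   // hypothetical block name
    #define S_FUNCTION_LEVEL 2

    #include "simstruc.h"
    #include <curl/curl.h>
    #include <string>
    #include <cstdlib>

    // libcurl callback that appends received bytes to a std::string.
    static size_t writeBody(char *ptr, size_t size, size_t nmemb, void *userdata)
    {
        static_cast<std::string *>(userdata)->append(ptr, size * nmemb);
        return size * nmemb;
    }

    static void mdlInitializeSizes(SimStruct *S)
    {
        ssSetNumSFcnParams(S, 0);
        if (!ssSetNumInputPorts(S, 0)) return;
        if (!ssSetNumOutputPorts(S, 1)) return;
        ssSetOutputPortWidth(S, 0, 1);        // one scalar output
        ssSetNumSampleTimes(S, 1);
    }

    static void mdlInitializeSampleTimes(SimStruct *S)
    {
        ssSetSampleTime(S, 0, INHERITED_SAMPLE_TIME);
        ssSetOffsetTime(S, 0, 0.0);
    }

    static void mdlOutputs(SimStruct *S, int_T tid)
    {
        std::string body;
        CURL *curl = curl_easy_init();
        if (curl) {
            curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/data.txt");  // placeholder URL
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeBody);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
            curl_easy_perform(curl);
            curl_easy_cleanup(curl);
        }
        real_T *y = ssGetOutputPortRealSignal(S, 0);
        y[0] = std::atof(body.c_str());       // parse however your payload actually looks
    }

    static void mdlTerminate(SimStruct *S) {}

    #ifdef MATLAB_MEX_FILE
    #include "simulink.c"    // MEX-file interface mechanism
    #else
    #include "cg_sfun.h"     // code generation interface mechanism
    #endif

For simulation you would build it with something like mex http_get_sfun.cpp -lcurl and point an S-Function block at it; for code generation the source and libcurl also have to be available in the target build.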

ZXing library and ZXing.org differences?

I have implemented ZXing-node and can scan generated QR code images fine; however, images captured via a phone camera don't get recognized, even though I've added some GraphicsWizard manipulation to deblur, resize, etc.
I have tried using the --try_harder option as well, without success.
However, the ZXing.org website handles all of these without any issues. Where can I find out what settings, or what additional image manipulation, are used there?
Cheers
It is also all open source: https://github.com/zxing/zxing/tree/master/zxingorg
It uses TRY_HARDER mode and different binarizers, and will try PURE_BARCODE mode too.
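The asker is on ZXing-node, but for comparison, the C++ port (zxing-cpp) exposes the same knobs explicitly. A rough sketch (the pixel buffer is a placeholder, and API names shift a little between zxing-cpp versions):

    #include <ZXing/ReadBarcode.h>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    int main()
    {
        // Placeholder: 'pixels' should hold your 8-bit grayscale camera image.
        int width = 640, height = 480;
        std::vector<uint8_t> pixels(width * height, 255);

        ZXing::ImageView image(pixels.data(), width, height, ZXing::ImageFormat::Lum);

        ZXing::DecodeHints hints;                            // called ReaderOptions in newer zxing-cpp
        hints.setTryHarder(true);                            // equivalent of TRY_HARDER
        hints.setBinarizer(ZXing::Binarizer::LocalAverage);  // local/hybrid binarizer instead of a global one
        // hints.setIsPure(true);                            // equivalent of PURE_BARCODE; only for clean synthetic images

        auto result = ZXing::ReadBarcode(image, hints);
        std::cout << (result.isValid() ? "decoded" : "no barcode found") << "\n";
        return 0;
    }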

OpenCV and iOS - Getting started

I am new to iOS development, so apologies for a basic question. I am trying to convert an image to grayscale and threshold it using OpenCV on iOS. So far, I have imported and set up the framework in Xcode. What I am trying to do now is to implement the following features:
http://www.youtube.com/watch?feature=player_embedded&v=Ko3K_xdhJ1I
at 0:24 and 0:53
I tried to follow the tutorial, which points to the above YouTube video:
http://docs.opencv.org/doc/tutorials/ios/image_manipulation/image_manipulation.html
but wasn't sure where to paste the code from it, and into which file.
Many thanks.
Kind Regards.
These are helper methods and are best written in a separate file. Quite simply:
http://answers.oreilly.com/topic/631-how-to-get-c-and-objective-c-to-play-nicely-in-xcode/
Put all that image manipulation code in, say, ImageManipulationHelper.mm and create a header file for it.
Create a nice little category on UIImage:
http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/ProgrammingWithObjectiveC/CustomizingExistingClasses/CustomizingExistingClasses.html
It can call these helper methods in turn to perform any image manipulation you might want.
Easy does it. And yeah, read up a bit more on using C++ in Objective-C if you get into trouble, and also on categories; they are some of the niftier features of Objective-C.
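As a concrete starting point, the OpenCV part of such a helper is plain C++ and can sit in the .mm file, with the UIImage/cv::Mat conversion functions from the linked tutorial around it. A minimal sketch of the grayscale-plus-threshold step shown in the video (the threshold value is a placeholder):

    // ImageManipulationHelper.mm (compiled as Objective-C++; this part is plain OpenCV C++)
    #include <opencv2/imgproc/imgproc.hpp>

    // Converts a BGRA frame (what the tutorial's UIImage-to-cv::Mat conversion produces)
    // to grayscale, then to black and white via a fixed threshold.
    cv::Mat grayscaleAndThreshold(const cv::Mat &bgra, double thresholdValue)
    {
        cv::Mat grey, binary;
        cv::cvtColor(bgra, grey, cv::COLOR_BGRA2GRAY);
        cv::threshold(grey, binary, thresholdValue, 255, cv::THRESH_BINARY);
        return binary;
    }

A UIImage category method would then just convert the UIImage to a cv::Mat, call this, and convert the result back to a UIImage for display.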
I achieved the same, using the help of this awesome link.
Let me know if you need any further help.
Cheers!!
Edit:
Check this out: ImageFiltering