Google Analytics custom dimensions and goal conversions - universal-analytics

I have built out custom dimensions in Universal Analytics, hoping to associate goal conversions with those custom dimensions, but it is not working that way.
Does anyone know why that would be the case?

Related

Image/scene recognition

Hi guys, I'm looking for a solution that enables a user to compare an image to a previously stored image. For example, I take a picture of a chair on my iPhone, and the app compares it to a locally stored image; if the similarity is reasonable, it confirms the match and calls another action.
Most solutions I've been able to find require cloud processing on third-party servers (VisionIQ, Moodstocks, Kooaba, etc.). Is this because the iPhone doesn't have sufficient processing power to complete this task?
A small reference library will be stored on the device and referenced when needed.
Users will be able to index their own pictures for recognition.
Has anyone heard of such a solution? My searches have only turned up cloud offerings from the BaaS providers mentioned above.
Any help will be much appreciated.
Regards,
Frank
Try using the OpenCV library on iOS.
You can try using FlannBasedMatcher to match images. Documentation and sample code are available in the OpenCV documentation; see the "Feature Matching with FLANN" tutorial.
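A minimal sketch of the FLANN approach, assuming the OpenCV 4.4+ Java bindings (SIFT is in the core features2d module from that version on); this is not code from the answer, and the file names and the 0.7 ratio threshold are placeholders. On iOS you would call the equivalent C++ API through an Objective-C++ wrapper, but the matching logic is the same:

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.core.Core;
    import org.opencv.core.DMatch;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfDMatch;
    import org.opencv.core.MatOfKeyPoint;
    import org.opencv.features2d.FlannBasedMatcher;
    import org.opencv.features2d.SIFT;
    import org.opencv.imgcodecs.Imgcodecs;

    public class FlannMatchSketch {
        static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

        public static void main(String[] args) {
            // Placeholder file names: the camera shot and the stored reference.
            Mat query = Imgcodecs.imread("query.jpg", Imgcodecs.IMREAD_GRAYSCALE);
            Mat reference = Imgcodecs.imread("reference.jpg", Imgcodecs.IMREAD_GRAYSCALE);

            // Detect keypoints and compute float descriptors (FLANN's default
            // KD-tree index expects float descriptors such as SIFT's).
            SIFT sift = SIFT.create();
            MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
            Mat desc1 = new Mat(), desc2 = new Mat();
            sift.detectAndCompute(query, new Mat(), kp1, desc1);
            sift.detectAndCompute(reference, new Mat(), kp2, desc2);

            // For each query descriptor, find the 2 nearest reference descriptors.
            FlannBasedMatcher matcher = FlannBasedMatcher.create();
            List<MatOfDMatch> knn = new ArrayList<>();
            matcher.knnMatch(desc1, desc2, knn, 2);

            // Lowe's ratio test: keep a match only if it is clearly better than
            // the runner-up, and use the survivor count as a similarity score.
            int good = 0;
            for (MatOfDMatch m : knn) {
                DMatch[] pair = m.toArray();
                if (pair.length == 2 && pair[0].distance < 0.7f * pair[1].distance) {
                    good++;
                }
            }
            System.out.println("Good matches: " + good);
        }
    }

If the number of good matches clears a threshold you tune for your reference library, you can treat the images as the same object and trigger your follow-up action.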

Suitability of Codename One for an image processing application

We need to build an image processing application for smartphones (mainly for iPhone). The operations consist of:
image filtering
composite geometrical transformation
region growing
Also, the user is required to specify (touch) some parts of the image. Those parts would serve as inputs for the app. Example: eyes and lip in a face.
We have built a desktop version of this app. The processing part is quite heavy, and we made extensive use of the BufferedImage class.
Should we use Codename One to build this app? If not, what alternatives do you suggest?
Please consider the following factors:
Performance
Ease of writing the code (for image processing)
I gave an answer for this in our discussion forum, but I think it's a worthwhile question for a duplicate post:
Generally, for most platforms you should be fine in terms of performance, except for iOS and arguably Windows Phone.
Codename One is optimized for common use cases. Since iOS doesn't allow JITs, it can never be as fast as Java on the desktop: some optimizations, e.g. array bounds check elimination, are really hard to do ahead of time. So every array access carries a branch check, which can be pretty expensive for image processing.
Add to that the fact that we don't have any image processing APIs other than basic ARGB, and you get the "picture": it just won't be efficient or easy.
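To make the "basic ARGB" point concrete, here is a minimal sketch of my own (not from the original answer) of image processing over Codename One's ARGB access: a grayscale filter built on Image.getRGB(). Every pixels[i] access in the loop is where the bounds-check cost described above is paid.

    import com.codename1.ui.Image;

    public class GrayscaleFilter {
        // Converts an image to grayscale using Codename One's basic ARGB access.
        public static Image toGrayscale(Image src) {
            int w = src.getWidth(), h = src.getHeight();
            int[] pixels = src.getRGB();           // one 0xAARRGGBB int per pixel
            for (int i = 0; i < pixels.length; i++) {
                int p = pixels[i];                 // bounds check on every access
                int r = (p >> 16) & 0xff;
                int g = (p >> 8) & 0xff;
                int b = p & 0xff;
                int y = (r * 77 + g * 151 + b * 28) >> 8;  // integer luma approximation
                pixels[i] = (p & 0xff000000) | (y << 16) | (y << 8) | y;
            }
            return Image.createImage(pixels, w, h);
        }
    }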
The problem is that this is a very specific field, I highly doubt you will find any solution that will help you with this sort of code. So your only approach AFAIK is to build native code to do the actual image processing heavy lifting.
You can do this with Codename One by using the NativeInterface API, which allows you to invoke performance-critical code natively and wrap it as a library with cn1libs. You would then get native performance for that portion of the code, but this only makes sense for critical sections; if you write a lot of native code, the benefits of Codename One start to dissipate and you might as well go fully native.
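As a rough illustration of that approach (my own naming, not Codename One sample code): you declare an interface extending NativeInterface on the Java side, Codename One generates per-platform stubs (e.g. Objective-C for iOS), and you fill those stubs in with the native image-processing code. ImageFilterNative and applyFilter below are hypothetical names.

    import com.codename1.system.NativeInterface;
    import com.codename1.system.NativeLookup;

    // Declared on the portable (Java) side. NativeInterface methods are
    // restricted to primitives, Strings, and one-dimensional primitive arrays.
    interface ImageFilterNative extends NativeInterface {
        int[] applyFilter(int[] argbPixels, int width, int height);
    }

    class FilterUser {
        static int[] runFilter(int[] pixels, int w, int h) {
            // NativeLookup wires the interface to the generated per-platform stub.
            ImageFilterNative filter = NativeLookup.create(ImageFilterNative.class);
            if (filter != null && filter.isSupported()) {
                return filter.applyFilter(pixels, w, h);  // native-speed path
            }
            return pixels;  // fallback: no native implementation on this platform
        }
    }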

Does a free API for an augmented reality service exist?

Currently I am trying to create an iPhone app capable of recognizing objects in an image, such as a car, bus, building, bridge, human, etc., and labeling them with the object name with the help of the Internet.
Is there any free service that provides a solution to this problem? Object recognition is itself a complex task, requiring digital image processing, neural networks, and so on.
Can this be done via an API?
If you want to recognise planar images the current generation of mobile AR SDKs from Metaio, Qualcomm and Layar will allow you to upload images to match against, and perform the matching.
If you want to match freely against a set of 3D objects, e.g. a Toyota Prius or the Empire State Building, the same techniques might be applied to match against sets of images taken at different rotations. But you might have to match just one object at a time, due to limits on how large an image database the service allows, or contact those companies for a custom solution; and it may not work very reliably, given that the state of the art is reliable matching against planar images.
If you want to recognize general classes (human, car, building), this is a very difficult problem, and I don't know of any solutions anywhere near fast enough to operate online (which I assume is a requirement, given you want an AR solution - is that a fair assumption?). It's been a few years since I studied CV, but at that time the most promising approach to visual classification was the "bag of visual words" family of methods - you might try reading up on those.
Take a look at Cortexica. Very useful for this sort of thing.
http://www.cortexica.com/
I haven't done work with mobile AR in a while, but the last time I was working on this stuff I was using Layar and starting to investigate Junaio. Those are oriented toward 3D graphics, not simply text labels, so for your use case you may be better served with OpenCV.
Note that Layar (and I believe Junaio too) works like a web app, where you put the content on your own server and give Layar the URL to link to.

Can I use my own tiles in MapKit, instead of Google's?

I'm currently trying to decide whether to accept a client's proposal or not. Basically, I'm asked to create a MapView that displays markers at several locations on a map, with the additional requirement that the client's own map tiles be used instead of Google Maps'.
I do not know yet how the client stores their own map tiles, but I was assured that I'd be able to convert them into any format I'd need.
Is it possible to use different map tiles in MapKit's MapView?
Do you have good online literature about this? Links please?
If this is possible, I'd probably have to create a server that sends the tile files to the device.
How hard is it to create such a server? Is it just "set up Apache, done," or is there more to it?
How hard or time-consuming would both of these things be, compared to just setting up a normal MapView?
Thanks for your answers.
You can't use custom tiles with MapKit. You're limited to using the ones provided by Google.
It could be easier to create a "Google Maps-ish" web app that uses the custom tiles and can be viewed on the iPhone through a UIWebView.
Have you looked at alternate map frameworks on the iPhone? I know there is at least one open source map engine, also with tiles (that are not as good as the Google tiles, but hey).
A decent set of them is here:
Creating an iPhone Map application
The "easiest" way to do this within the Google Map framework is simply to map the client's map as a texture on top of the "ground." You can create textures at different resolutions, for different zoom factors. Then you won't need to do any special coding at all --- everything will just work.
The way you do this is with a KML region that maps to ground level.
See: http://earth.google.com/outreach/tutorial_region.html

Are there any good statistics visualization libraries for the iPhone?

I wonder whether every developer has to code statistics visualization themselves, or whether there's already a library that can be used to draw charts, curves, stats, etc. (like in the Stocks app, for example)?
Take a look at this graphing package; it will also compile on the iPhone:
http://www.mpkju.fr/~graphview/page1/page1.html
Note that I haven't used it yet; I ran across it a while back and made a note of its potential usefulness.
You might use the Google Charts API, and just use the images that come back from that:
http://code.google.com/apis/chart/
For instance, you can retrieve the bytes from this URL
http://chart.apis.google.com/chart?cht=p3&chd=t:60,40&chs=250x100&chl=Hello|World
and insert them into a UIImage, or just let a web view do the rendering for you.
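As a hedged sketch of the fetch step (plain Java here, for consistency with the other examples in this collection; on the iPhone you would issue the same HTTP GET with the platform networking API and wrap the bytes in a UIImage):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URL;

    public class ChartFetch {
        // Downloads the chart PNG that the Google Charts URL renders.
        static byte[] fetchChart(String chartUrl) throws IOException {
            try (InputStream in = new URL(chartUrl).openStream();
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                return out.toByteArray();
            }
        }

        public static void main(String[] args) throws IOException {
            byte[] png = fetchChart(
                "http://chart.apis.google.com/chart?cht=p3&chd=t:60,40"
                + "&chs=250x100&chl=Hello|World");
            System.out.println("Fetched " + png.length + " bytes");
        }
    }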
Core Plot is a cross-platform (Mac / iPhone) plotting framework being developed by a group of scientifically-minded Cocoa developers. It is based on Core Animation, and was advancing quite quickly the last time I checked in. You might want to read the mailing list archives to get an idea of the design goals and current state of the framework.