Google Maps showing orange tiles for satellite view - google-maps-android-api-2

I noticed this morning that the GMSMapView and GoogleMap of our iOS and Android apps are starting to show tiles with an abnormal orange tint. I've attached a screenshot of the app that captures the original tiles being replaced by orange tiles.
I'm wondering whether anyone else is seeing this issue, and whether there is some configuration I missed that can get the map back to pulling the original tiles.
Screenshot

Those tiles come from a different (presumably newer) series of satellite images, taken from a different angle, with a different camera, at a different time of day, etc.
Under these circumstances, minor differences in tint are not unexpected.

Related

Unity: Strange Camera Offset for multi-monitors at different resolutions

I am creating a game (it's actually more of an application/useful piece of software than a game) which needs to be run on multiple monitors, where the monitors may all have different resolutions.
In the example below, you can see the primary monitor (far left) rendering a full-screen camera view, just as it should. There are two other cameras set to render to monitors 2 and 3. These are activated with Display.displays[1].Activate(1600, 900, 50); (so as to set their resolution to 1600x900).
The issue is that Unity appears to be offsetting the camera rendering so that it does not originate from the top left of the screen, but rather some way down the window. The grey area shows the area where the image is missing.
When running in the editor, the cameras render each window perfectly with no strange offsetting/cropping.
What do I need to do in order to get the standalone output to render the correct, uncropped/offset image in each of the windows, please?
If I make the resolution of all monitors the same, it renders exactly as it should.
Annoyingly, this seems to be a bug with Unity version 5.3.x. It seems to be fixed in 5.4.x [See Bug Report Here] (I have downloaded the beta and tested it - I can confirm this to be true!). However, 5.4.x is currently in beta and you can only access it if you have Unity Pro.

Build iPhone app that can recognise colour from streaming camera

I am building an iPhone app to recognise a specific colour through the iPhone camera when the phone is placed onto a colour board.
Note that I want it to work through the streaming camera output, not just a still image or photo.
My initial thoughts were to scan a series of pixels (say four, one near each corner of the camera feed) and, if the colours registered at each pixel match, display that colour (as text) to the user; a rough sketch of that idea is below.
Can someone please point me in the right direction regarding example code or APIs, or suggest a better design for this problem?
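A rough sketch of that corner-sampling check, assuming the camera frames arrive as 32-bit BGRA CVPixelBuffers via AVFoundation (the struct, the helper names and the tolerance below are purely illustrative, not an existing API):

```swift
import CoreVideo

// Illustrative sketch: read the four corner pixels of a 32-bit BGRA frame and
// report whether they all agree within a tolerance.
struct RGB { let r: Int; let g: Int; let b: Int }

func cornerColours(of buffer: CVPixelBuffer, inset: Int = 10) -> [RGB] {
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    let width       = CVPixelBufferGetWidth(buffer)
    let height      = CVPixelBufferGetHeight(buffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    guard let base = CVPixelBufferGetBaseAddress(buffer)?
            .assumingMemoryBound(to: UInt8.self) else { return [] }

    // Sample a little inside each corner to avoid vignetting right at the edge.
    let corners = [(inset, inset), (width - 1 - inset, inset),
                   (inset, height - 1 - inset), (width - 1 - inset, height - 1 - inset)]

    return corners.map { (x, y) in
        let offset = y * bytesPerRow + x * 4        // 4 bytes per BGRA pixel
        return RGB(r: Int(base[offset + 2]),        // bytes are B, G, R, A
                   g: Int(base[offset + 1]),
                   b: Int(base[offset]))
    }
}

// True when every sample is within `tolerance` of the first one.
func cornersAgree(_ samples: [RGB], tolerance: Int = 30) -> Bool {
    guard let first = samples.first else { return false }
    return samples.allSatisfy {
        abs($0.r - first.r) <= tolerance &&
        abs($0.g - first.g) <= tolerance &&
        abs($0.b - first.b) <= tolerance
    }
}
```

When cornersAgree returns true, the first sample's colour could be mapped to a name and shown to the user.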

Polling iPhone Camera to Process Image

The scenario is that I want my app to process (in the background if possible) images being seen by the iPhone camera.
E.g. the app is running, the user places the phone down on a piece of red cardboard, and I then want to display an alert view saying "Phone placed on Red Surface" (this is a simplified version of what I want to do, but it keeps the question direct).
Hope this makes sense. I know there are two separate concerns here:
How to process images from the camera in the background of the app (if we can't do this, we can initiate the process with, say, a button tap if needed).
Processing the image to say what solid colour the phone is sitting on.
Any help/guidance would be greatly appreciated.
Thanks
Generic answers to your two questions:
Background processing of the image can be triggered by a timer event. For example, every 30 seconds, capture the image from the camera and do the processing behind the scenes. If the processing is not compute- or time-intensive, this should work.
It is technically possible to read the colour of, say, one pixel programmatically (see the sketch after this answer). If you are sure that the entire image is just one colour, you can try that approach: pick a few random points and read the pixel colour at each. But if the image (in your example, the red board) contains a picture or multiple colours, then that will require more detailed image-processing techniques.
Hope this helps
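A minimal sketch of that per-pixel read, assuming the captured frame is a UIImage whose backing CGImage uses a 32-bit, 8-bits-per-component RGBA-style layout (the function name is illustrative, and real code should check cgImage.bitmapInfo for the actual component order):

```swift
import UIKit

// Illustrative sketch: sample one pixel from a UIImage by reading the backing
// CGImage's raw bytes. `point` is in pixel coordinates of the CGImage.
func pixelRGBA(in image: UIImage, at point: CGPoint) -> (r: UInt8, g: UInt8, b: UInt8, a: UInt8)? {
    guard let cgImage = image.cgImage,
          let data = cgImage.dataProvider?.data,
          let bytes = CFDataGetBytePtr(data) else { return nil }

    let x = Int(point.x), y = Int(point.y)
    guard x >= 0, y >= 0, x < cgImage.width, y < cgImage.height else { return nil }

    let bytesPerPixel = cgImage.bitsPerPixel / 8          // assumed to be 4
    let offset = y * cgImage.bytesPerRow + x * bytesPerPixel

    // Component order is assumed to be RGBA here; verify bitmapInfo in real code.
    return (bytes[offset], bytes[offset + 1], bytes[offset + 2], bytes[offset + 3])
}
```

Calling this at a few scattered points and checking that the red component dominates would cover the "is it sitting on something red" case.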
1) Image Capture
There are two kinds of apps that continually take imagery from the camera: media capture apps (e.g. Camera, iMovie) and augmented reality (AR) apps.
Here's the iPhone SDK tutorial for media capture:
https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW3
Access the camera with iPhone SDK
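For the streaming side, a minimal sketch of an AVFoundation pipeline that hands every camera frame to a delegate (the preset, queue label and class name are illustrative choices, not required values):

```swift
import AVFoundation

// Illustrative sketch: a capture session that streams BGRA frames to a delegate.
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let frameQueue = DispatchQueue(label: "camera.frames")   // label is arbitrary

    func start() throws {
        session.sessionPreset = .medium

        // Default video camera as input.
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        // A video data output delivers every frame as a CMSampleBuffer.
        let output = AVCaptureVideoDataOutput()
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                    kCVPixelFormatType_32BGRA]
        output.alwaysDiscardsLateVideoFrames = true
        output.setSampleBufferDelegate(self, queue: frameQueue)
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()
    }

    // Called for every frame; pull out the pixel buffer and inspect it here.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // e.g. sample a few pixels of `pixelBuffer` and classify the colour.
        _ = pixelBuffer
    }
}
```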
Augmented Reality apps take continual pictures from the camera for processing/overlay. I suggest you look into some of the available AR kits and see how they get a continual stream from the camera and also analyze the pixels.
Starting an augmented reality (AR) app like Panasonic VIERA AR Setup Simulator
http://blog.bordertownlabs.com/post/157320598/customizing-the-iphone-camera-view-with
2) Image Processing
Image processing is a really big topic that's been addressed in multiple other places:
https://photo.stackexchange.com/questions/tagged/image-processing
https://dsp.stackexchange.com/questions/tagged/image-processing
https://mathematica.stackexchange.com/questions/tagged/image-processing
...but for starters, you'll need to use some heuristic analysis to determine what you're looking for. Sampling the captured pixels in a bunch of places (e.g. the corners plus the middle) may help, as would generating a histogram of colour intensities: if there's lots of red but little or no blue and green, it's a red card.
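A rough sketch of that red-vs-the-rest check, given a handful of sampled pixel values (the 1.5x margin is an arbitrary threshold to tune):

```swift
// Illustrative sketch: decide whether sampled pixels are "mostly red" by
// comparing summed channel intensities across all samples.
func looksRed(_ samples: [(r: Int, g: Int, b: Int)], margin: Double = 1.5) -> Bool {
    guard !samples.isEmpty else { return false }
    let red   = Double(samples.reduce(0) { $0 + $1.r })
    let green = Double(samples.reduce(0) { $0 + $1.g })
    let blue  = Double(samples.reduce(0) { $0 + $1.b })
    // Lots of red and comparatively little green/blue -> treat it as a red card.
    return red > margin * green && red > margin * blue
}
```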

How to make iPhone Camera less sensitive to movement

I have made a "two screen app" in which the camera is divided into two sides left and right each of which can be captured independently and merged later.
The problem which iam facing is when ever user touches the capture button the camera moves a bit and captured image shakes so the user is unable to match the two halves.
Is there any way to make camera less sensitive to minor movements?
I am using imagepicker
Thanks
Roll your own image capture. Capture a larger image than necessary and stabilize the displayed image using the gyro, as sketched below.
There is a great example of stabilizing the compass in much the same way here:
http://www.sundh.com/blog/2011/09/stabalize-compass-of-iphone-with-gyroscope/
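A minimal sketch of that idea, assuming the oversized captured image sits in a UIImageView that is larger than the visible window it is shown through (the gain constant is a guess to tune on a real device):

```swift
import UIKit
import CoreMotion

// Illustrative sketch: counter-shift an oversized preview image using the
// device's motion data so small hand movements are absorbed by the extra margin.
final class StabilizedPreview {
    private let motion = CMMotionManager()
    private let imageView: UIImageView          // larger than its visible window
    private let pointsPerRadian: CGFloat = 300  // arbitrary gain, tune by eye

    init(imageView: UIImageView) {
        self.imageView = imageView
    }

    func start() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let attitude = data?.attitude else { return }
            // Shift the image opposite to the device's tilt so the visible
            // crop stays roughly still.
            let dx = CGFloat(attitude.yaw)   * -self.pointsPerRadian
            let dy = CGFloat(attitude.pitch) * -self.pointsPerRadian
            self.imageView.transform = CGAffineTransform(translationX: dx, y: dy)
        }
    }

    func stop() {
        motion.stopDeviceMotionUpdates()
    }
}
```

Capturing a larger image than you display is what gives the counter-shift room to move without exposing empty edges.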

iPhone UIImage overlap render bug

I've come across a strange render bug on iPhone OS 3.0...
I have two images. One is a non-transparent PNG that is predominantly black with a white gradient fading upward.
The second is a transparent PNG with translucent clouds.
When I overlay the two using UIImageView, the intersection of the clouds and the white gradient triggers a render bug that causes a rather odd-looking graphical glitch: it removes all opacity from the image on top (in this case the clouds) and causes the glitched portion of the image to render on top of all layers in the current view (including ones it is technically underneath).
It only occurs at the intersection of the two portions of the images. So typically only a very small block is experiencing the error while the rest of the images render normally.
Has anyone seen this and does anyone have a fix? I want to check before I move on to Core Animation which will hopefully address the problem (since I imagine that CA or even OpenGL is more apt to handle overlapping alpha channels).
Screenshot found here:
http://www.jasconi.us/glitch.jpg
You can see the intersect of the two images at the lower right.
From your description, this seems to be a bug in Apple's code. I would report it to Apple and wait for a fix.
In the meantime, you can try to implement the same functionality in Core Animation or OpenGL in the hope that the bug is in the higher-level UIImageView, but since the UIImageView itself uses Core Animation, it's possible that this bug is simply unavoidable until it's fixed.
I assume you're displaying them using UIImageView? If so, have you set opaque to NO on the transparent view?
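In Swift terms (the original question predates Swift), the configuration being asked about would look roughly like this; the image names are illustrative:

```swift
import UIKit

// Illustrative sketch: layer a translucent cloud image over an opaque gradient
// image, making sure the transparent layer is not flagged as opaque.
let gradientView = UIImageView(image: UIImage(named: "gradient"))  // opaque PNG
let cloudView    = UIImageView(image: UIImage(named: "clouds"))    // PNG with alpha

cloudView.isOpaque = false               // let the alpha channel be respected
cloudView.backgroundColor = .clear
cloudView.frame = gradientView.frame     // overlap the two images

let container = UIView(frame: gradientView.frame)
container.addSubview(gradientView)       // bottom layer
container.addSubview(cloudView)          // translucent top layer
```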