Samsung Smart TV App to use HUE as Ambilight - samsung-smart-tv

I am trying to accomplish this task:
Running an app on a Samsung Smart TV (in background, kind of)
This app should check the screen content at an interval and calculate the main color of the screen content, or the main colors of each border region (let's say 20% of the width and height from each border).
Use the remotely accessible Hue API to control n Philips Hue lights to achieve a room-wide Ambilight.
Now, as I am an Android developer and do not have any experience with Smart TVs, I would like to ask whether this can be accomplished (or whether there is any show stopper) and whether you have some tips for me, prior to digging into this very deeply. The actual "How to get started developing a Smart TV app" part will not be the main problem, and I am looking into that right now.
So my actual questions are:
What is the best pattern (or is it impossible) to have something like a background job on a Samsung Smart TV? Maybe something like a ticker app with no visible overlay, or a very small one, would also be a solution?
Is there a way to access the picture currently shown on the TV, so that I get access to the RGB values of the areas/pixels, or maybe a screenshot or thumbnail of the screen, no matter what the source of the signal is? I have to analyze it to get the color.
It would be great if you could point me to some resources specific to these tasks and give me some advice on whether this will work, or whether there are any limitations or better concepts.
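For illustration only, here is a minimal sketch (in Python with Pillow, not Samsung-specific) of the border-averaging step described above; it assumes you can already obtain a captured frame as an image, which is exactly the open question:

```python
# Averages the colors of the four border strips of a frame.
# Assumption: the frame is already available as a PIL image; how to capture it
# on a Samsung Smart TV is exactly the question being asked here.
from PIL import Image

def border_colors(frame, fraction=0.2):
    """Return the average RGB color of the top/bottom/left/right border strips."""
    w, h = frame.size
    bw, bh = int(w * fraction), int(h * fraction)
    regions = {
        "top":    frame.crop((0, 0, w, bh)),
        "bottom": frame.crop((0, h - bh, w, h)),
        "left":   frame.crop((0, 0, bw, h)),
        "right":  frame.crop((w - bw, 0, w, h)),
    }
    averages = {}
    for name, region in regions.items():
        pixels = list(region.convert("RGB").getdata())
        n = len(pixels)
        averages[name] = tuple(sum(p[i] for p in pixels) // n for i in range(3))
    return averages

# Example: border_colors(Image.open("frame.png"))
```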

It seems the Huey app in the Play Store does what you want but accomplishes it in a different manner, using the camera of a device set in front of the TV to determine the colors.

Steve,
The Hue API is not a good fit as an Ambilight control facility, since the Hue API does not run in real time.
The overhead generated by the client and the bridge means that a Hue API-based Ambilight app can realistically support only 1-3 Hue lights, since hue, sat, and bri are updated by scripts running on the bridge, so updates are slow.
An Ambilight needs to run in real time (5-10 updates per second) with 8-10 or more Hue lights controlled in real time.
That is why I developed a real-time, hardware-based Ambilight demo for my students.
The Hue API alone is not heavy, but Hue API calls are processed on the bridge by API handlers, which forward them via the ZigBee master to Hue lights with ZigBee hardware and the protocol embedded.
A Smart TV is a hardware-based solution, so it runs almost in real time and you can get the video image updated frequently.
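For reference, a minimal sketch (Python, with placeholder bridge address, username, and light IDs) of what such an update loop over the classic Hue REST API looks like; every color change is a separate HTTP round trip to the bridge, which is where the latency described above comes from:

```python
# Sketch of an Ambilight-style update loop over the Hue bridge's REST API.
# <bridge-ip> and <username> are placeholders for your bridge address and the
# whitelisted user created on it; light IDs are assumed to be 1..n.
import time
import requests

BRIDGE = "http://<bridge-ip>/api/<username>"

def set_light(light_id, hue, sat, bri):
    # One HTTP round trip per light per update -- this is the overhead
    # discussed above.
    requests.put(
        f"{BRIDGE}/lights/{light_id}/state",
        json={"hue": hue, "sat": sat, "bri": bri, "transitiontime": 0},
        timeout=1,
    )

def ambilight_loop(colors_for_frame):
    """colors_for_frame() should return one (hue, sat, bri) tuple per light."""
    while True:
        for light_id, (h, s, b) in enumerate(colors_for_frame(), start=1):
            set_light(light_id, h, s, b)  # 8-10 lights x 5-10 Hz adds up quickly
        time.sleep(0.1)                   # aim for roughly 10 updates per second
```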

This may pique your interests: Build your own Ambilight clone with the Raspberry Pi

Related

How to detect a real face using Flutter

I want to make an attendance system where users can check in with the camera.
I am using:
tflite_flutter
google_ml_kit
It works perfectly, but if I take a picture and hold it in front of the camera, that works too. I need to prevent that. How can I distinguish a picture or a video from a real face in Flutter?
To detect whether a face is real or not, a normal camera alone may not be enough as the input for the system, since it may not provide enough data to prevent a picture or a video from being used to trick it. That's why many face recognition systems use some kind of extra sensor to ensure the security of the system.
For your attendance system, it may be better to add an external device/sensor that can feed the required data to the system to rule out fake input, maybe via something like BLE.

Possible to automate Sony cameras from Matlab with API Beta SDK?

I'm doing research that requires a camera that is automated, but it also has to coordinate with the rotation of a filter wheel and take a series of images relatively quickly (4 images in less than 2 seconds). I'd like to do this by writing a Matlab script to control everything and handle incoming data.
I know there are scientific cameras out there that can do this job and have very good SDKs, but they are also very expensive if they have the sensor size that I need (APS-C or larger). Using a simple Sony mirrorless camera would work perfectly for my needs as long as I can control it.
I'd like to use Matlab or LabView to automate the data acquisition, but I'm not sure what is possible with this API Beta SDK. My understanding is that it is designed to allow the user to create a stand-alone app, but not to integrate camera commands into a programming environment like Matlab. I know there are ways to call an external application from within Matlab, but I've also read one person's account of trying this indirect method and it sounds like it takes a long time to trigger the camera this way (five seconds or more for a single image). That would be too slow.
Does the SDK allow camera control directly from a program like Matlab?
My understanding is that it is designed to allow the user to create a stand-alone app, but not to integrate camera commands into a programming environment like Matlab.
Don't trust marketing statements; that's just how they advertise their SDK. If you take a closer look at the documentation, you will realize your camera runs a server which accepts JSON-RPC commands over HTTP. I would take one of the already existing examples for Android (Java) and adapt it to run on your operating system; you can call Java code directly from your MATLAB console.
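To make that concrete, here is a hedged sketch of such a JSON-RPC-over-HTTP call in Python; the endpoint URL is normally discovered via SSDP, and the address below is only a typical value for a direct Wi-Fi connection, so confirm it against your camera's device description:

```python
# Minimal sketch of calling the camera's JSON-RPC endpoint directly over HTTP,
# bypassing the mobile SDK entirely. The endpoint address is an assumption:
# it is usually advertised via SSDP and varies by model.
import requests

CAMERA_ENDPOINT = "http://192.168.122.1:8080/sony/camera"  # verify via discovery

def call_camera(method, params=None):
    payload = {"method": method, "params": params or [], "id": 1, "version": "1.0"}
    response = requests.post(CAMERA_ENDPOINT, json=payload, timeout=5)
    response.raise_for_status()
    return response.json()

# Trigger a still capture; the response includes the URL of the captured image.
print(call_camera("actTakePicture"))
```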
I've had great success communicating between MATLAB and a Sony QX1 (the 'webwrite' function is your friend!).
That said, you will definitely struggle to implement anything like precise triggering. The call-response times vary greatly (~5 seconds, ±2 or so).
You might be able to get away with shooting video and then pulling the relevant frames out of the sequence?
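If you go down that route, here is a rough sketch of the frame-extraction step using OpenCV in Python; the timestamps are assumptions marking the moments of interest, e.g. each filter-wheel position:

```python
# Pull individual frames out of a recorded video at known timestamps.
import cv2

def extract_frames(video_path, timestamps_ms):
    capture = cv2.VideoCapture(video_path)
    frames = []
    for t in timestamps_ms:
        capture.set(cv2.CAP_PROP_POS_MSEC, t)  # seek to the timestamp
        ok, frame = capture.read()
        if ok:
            frames.append(frame)
    capture.release()
    return frames

# Example: four frames, 500 ms apart
frames = extract_frames("capture.mp4", [0, 500, 1000, 1500])
```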

Which RGB scale should be used in iOS development?

I am running into an issue that I am sure many iOS developers have experienced before, so I am coming here to get some help. I work at a company where the design specs for the iPhone app I am working on are made in Adobe PhotoShop. I get these design specs and am told to "make it happen".
In an effort to follow the design specs as closely as possible, I often use the DigitalColor Meter utility that is included in OS X. It is a powerful and useful tool that has been very helpful. The problem is that it is capable of displaying different RGB scales.
So for example, if I am looking at an image exported from PS and I am using the generic RGB scale, I could look at a gray value and get 234/234/234. That's fine. I put that into my iOS app using UIColor and I get a color that looks right, but when I look at it using the DigitalColor Meter the value is 228/228/228!
How can I get a more consistent workflow? How can I make it so that the value I get from the PNG image from PS and the image that shows up in my simulator and even device are the EXACT same? Is that possible?
Thanks!
I am pretty sure that the different iOS devices out there (iPhones, iPads) have different characteristics in terms of their display color profiles: if you used a different device, you would have got a different result from Digital Color Meter.
In principle, to solve this problem, you should color profile the device display; once you have a color profile of the display, you could use that in Photoshop to get the values that you should specify so that the output on the profile display would match the original color.
To create a color profile, you should use a specific tool (colorimeter) and a specific software (there are many on the market).
In practice, since, as I said, each device has its own characteristics, you would need a profile for each one of them and then use a different set of colors for each device. Pretty unmanageable.
What you need is color management in your app.
There is a good color management library available. It's open source.
Have a look at Little CMS: https://github.com/mm2/Little-CMS
Since iOS 9.3, Apple has built-in color management across devices.
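To make the profile mismatch concrete, here is a hedged sketch using Pillow's ImageCms module (which wraps Little CMS) to convert an exported image from its source profile to sRGB before sampling values; the Generic RGB profile path is an assumption and depends on your system:

```python
# Convert an exported PNG from a source profile to sRGB before sampling values,
# using Pillow's ImageCms (a wrapper around Little CMS). The Generic RGB profile
# path below is an assumption -- on macOS it usually lives under
# /System/Library/ColorSync/Profiles/.
from PIL import Image, ImageCms

SOURCE_PROFILE = "/System/Library/ColorSync/Profiles/Generic RGB Profile.icc"

image = Image.open("design_export.png").convert("RGB")
src = ImageCms.getOpenProfile(SOURCE_PROFILE)
dst = ImageCms.createProfile("sRGB")
transform = ImageCms.buildTransform(src, dst, "RGB", "RGB")
converted = ImageCms.applyTransform(image, transform)

# Sampling the same pixel before and after shows how the numbers shift,
# much like the 234 vs. 228 readings from DigitalColor Meter.
print(image.getpixel((10, 10)), "->", converted.getpixel((10, 10)))
```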

Is devicePixelRatio really useful?

I just wonder whether devicePixelRatio, as exposed by WebKit-based browsers and Apple's devices, is really useful, or whether it's just Apple's private asset. You know, the WebKit engine also belongs to Apple Inc. I think this kind of thing is only meaningful for Apple's Retina screens, and I have always thought that the difference between the screen's resolution and the OS's resolution should be handled properly by the OS; it's not our task.
If there are lots of devicePixelRatio values ranging from 0 to 1,000,000, how many pictures should I prepare for those screens?
Web browsing is the most popular activity for mobile device users and webpages themselves are served in a variety of shapes and sizes.
Apple and the various companies that followed them into the mobile hardware arena needed to make the web browsing experience as easy as possible in order to maximize the amount of time spent using and relying on their devices. They needed to avoid having the user pinch, zoom, and pan around a page in order to read content, so they exposed an API to web developers known as the "meta viewport", which allowed them to serve, with little extra effort, small-screen-adapted versions of their websites.
Later they realized that scaling in such a manner made images look like absolute crap when scaled up on a higher-dpi device like Apple's Retina screens AND Android devices like the Galaxy S III and the Nexus line. So they introduced a devicePixelRatio variable and a corresponding CSS media query to let web developers detect that a given device needs higher-resolution images in order for a website to look good after being scaled. No one expects website owners/developers to waste 2x the bandwidth serving bitmaps with subpixel data to EVERYONE just because 0.2% of their users happen to be using a device with 2x the usual number of pixels for a given physical size. In order for a high-dpi device to be successful, they needed to make the web look good on it, and the only way for the web to look good on it is to make it easy and worthwhile enough for a website owner/developer to opt into making their website look good on it.
It's up to the website developer to weigh the cost and benefit of taking the extra time to selectively serve images so that a website will not look bad on high-pixel-density devices. If the web ever comes to a point where most websites are doing this, the consumer will be under the impression that YOUR website is of low quality, not that there is some shortcoming in the hardware they are using.
And just to clarify:
Apple only uses 1 and 2 for their devicePixelRatio.
Google promotes the use of 1, 1.5, and 2 (although they cannot always enforce this).
Microsoft uses 96 dpi (1), 144 dpi (1.5), and 192 dpi (2) in their screen.deviceXDPI value.
Most people just serve one 2x-resolution version of their assets to all devices above some threshold like 1.3, and 1x versions to devices below that. For those web developers who understand what all these device values mean and how to use CSS media queries or their respective JavaScript values, it is extremely easy and not as frustrating as I suspect you imagine it to be.
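A minimal illustration of that threshold strategy (written in Python for consistency with the other sketches here; in practice the same decision is usually made client-side with CSS media queries or the srcset attribute):

```python
# Serve the 2x asset to anything above a chosen threshold, and the 1x asset to
# everything else. The 1.3 threshold and the @2x naming are the conventions
# mentioned above, not requirements.
HIGH_DPI_THRESHOLD = 1.3

def asset_for(base_name, device_pixel_ratio):
    if device_pixel_ratio > HIGH_DPI_THRESHOLD:
        return f"{base_name}@2x.png"
    return f"{base_name}.png"

print(asset_for("logo", 1.0))  # logo.png
print(asset_for("logo", 1.5))  # logo@2x.png
print(asset_for("logo", 2.0))  # logo@2x.png
```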

iPhone indoor location based app

I am researching how to create an app for my work that allows clients to download the app (preferably via the App Store) and, using some sort of Wi-Fi triangulation/fingerprinting, determine their location for what is essentially an interactive tour.
Now, my question specifically is what is the best route to take for the iPhone? None of the clients will be expected to have jail broken iPhones.
To my understanding this requires the use of Wi-Fi data, which is behind a private API and therefore does not meet the App Store requirements. The biggest question I have is: how does the American Museum of Natural History get away with using the same technology while still being available on the App Store?
If you're unfamiliar with the American Museum of Natural History's interactive tour app, see here:
http://itunes.apple.com/us/app/amnh-explorer/id381227123?mt=8
Thank you for any clarification you can provide.
I'm one of the developers of the AMNH Explorer app you're referencing.
Explorer uses the Cisco "Mobility Services Engine" (MSE) behind the scenes to determine its location. This is part of their Cisco wifi installation. The network itself listens for devices in the museum and estimates their position via Wifi triangulation. We do a bit of work in the app to "ask" the MSE for our current location.
Doing this work on the network side was (and still is) the only available option for iOS since, as you've found, the wifi scanning functions are considered to be private APIs.
If you'd like to build your own system and mobile app for doing something similar, you might start with the MSE.
Alternatively, we've built the same tech from Explorer into a new platform called Meridian which provides location-based services on both iOS and Android. Definitely get in touch with us via the website if you're interested in building on that.
Update 6/1/2017
Thought I would update this old answer - AMNH is no longer using the Wifi-based system I describe above, as of a few years ago. They now use an installation of a few hundred battery-powered Bluetooth Beacons (also provided by Meridian). The device (iOS or Android) scans for nearby beacons and, based on their known locations and RSSI values, triangulates a position. You can read more about it in this article.
Navizon offers an indoor positioning solution that works for iOS as well as any other platform. You can check it out here:
http://www.navizon.com/product-navizon-indoor-triangulation-system
It works by triangulating the WiFi signals transmitted by the device. Since it doesn't require an app to run on the phone, it bypasses the iOS limitations and can locate any other WiFi device for that matter.
Google recently launched an API called Maps Geolocation API. You can use it for indoor tracking of devices, which essentially can be used to achieve something similar to what AMNH's app does.
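For reference, a hedged sketch of what such a Geolocation API request looks like (the key and MAC addresses are placeholders); the service returns an estimated latitude/longitude plus an accuracy radius based on the access points the device reports:

```python
# Rough sketch of a Maps Geolocation API request. Key and MAC addresses are
# placeholders; the response contains a location estimate and accuracy radius.
import requests

API_KEY = "YOUR_API_KEY"

payload = {
    "considerIp": False,
    "wifiAccessPoints": [
        {"macAddress": "01:23:45:67:89:AB", "signalStrength": -65},
        {"macAddress": "01:23:45:67:89:AC", "signalStrength": -71},
    ],
}
response = requests.post(
    f"https://www.googleapis.com/geolocation/v1/geolocate?key={API_KEY}",
    json=payload,
    timeout=5,
)
print(response.json())  # {"location": {"lat": ..., "lng": ...}, "accuracy": ...}
```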
I would do this using Augmented Reality. There is a system of sorts in place for this: the idea is that you place physical markers that have virtual information associated with them. I believe the system I saw used a type of bar code. When a user holds up the phone with the app, the app uses the camera to read the code and then displays information. This could easily be used to make a virtual-tour-type app that is distributable through the App Store and doesn't even require a Wi-Fi or 3G/4G connection. This assumes that you simply load your information and store it locally with your app; to update it, you just push an update through the App Store. Another solution is to use a SOAP/REST service and provide the information that way; this does not use private APIs, though it does require some form of internet connection. For this you can see a question I asked about the topic a little while ago:
SOAP/XML Tutorials Question
In addition, you could load a map of your tour location, and based on what code is scanned you can locate the user on the map and give suggested routes based on interests etc.
I found this tutorial on augmented reality recently. I haven't gone through it, but if it's anything like the rest of Ray's tutorials, it will be extremely helpful.
http://www.raywenderlich.com/3997/introduction-to-augmented-reality-on-the-iphone
I'll stick around to clarify any questions or other concerns you may have with your app.
To augment the original answer for devs who were using Cisco MSE for indoor location: they now have an iOS and Android SDK which enables you to do indoor location using the MSE. A simulator can be used as well, so you can develop the app without implementing the infrastructure to start with: https://developer.cisco.com/site/cmx-mobility-services/downloads/
For indoor location you can use Bluetooth LE beacons, since it's a very accessible technology nowadays. There are several methods:
Trilateration: it uses 3 beacons, but with the noise and attenuation of Bluetooth signals it gets quite difficult to determine the exact position, and it's also not easy to use more than 3 beacons to increase accuracy (a minimal sketch follows after this list).
Levenberg-Marquardt method: used to solve non-linear least-squares problems; it has shown good results for indoor positioning.
Dead reckoning: using the motion co-processor of the device and an initial position, you can calculate the path of the device as it moves. Not that easy to implement, though.
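The sketch below shows the basic trilateration idea under simple assumptions: distances are estimated from RSSI with a log-distance path-loss model (the tx_power and path-loss exponent are values you would calibrate per environment), and the position is solved with linear least squares:

```python
# Minimal trilateration sketch. RSSI -> distance uses a log-distance path-loss
# model; tx_power (RSSI at 1 m) and the exponent n are calibration assumptions.
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, distances):
    """beacons: list of (x, y); distances: estimated distance to each beacon."""
    (x1, y1), d1 = beacons[0], distances[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        # Subtracting the first circle equation from the others linearizes the system
        rows.append([2 * (xi - x1), 2 * (yi - y1)])
        rhs.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    solution, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return tuple(solution)

beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
distances = [rssi_to_distance(r) for r in (-62.0, -70.0, -68.0)]
print(trilaterate(beacons, distances))  # approximate (x, y) position
```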
I wrote a post on the topic, you can find more info here: http://bits.citrusbyte.com/indoor-positioning-with-beacons/
And you can use this iOS app for your own indoor positioning experiments: https://github.com/citrusbyte/beacons-positioning
I doubt the American Museum is actually using private APIs; you'll probably find that the routers that have been set up serve different responses from one another, so the app can detect its position in the museum.
If you are looking for a cheaper way to do the same task, you could have signs with QR codes and use an open source library to let users scan these barcodes as they move through the museum, updating the on-screen content accordingly. On an even more low-tech level, you could just tag each area with a unique number and distinguish them that way.