We are trying to implement a temperature monitoring system with our devices. The goal is to raise an alert when the temperature goes above or below the set points.
I can see that we can set the temperature limits with the ThermostatTemperatureSetRange command of the TemperatureSetting trait.
But there is no way to send a proactive notification with the trait.
Also, this automatically works with Google's Nest thermostats, as mentioned here. I'm just wondering how one would achieve it with custom devices.
The SensorState trait also doesn't mention anything about temperature notifications.
Is there any way to get proactive notifications for thermostats, or any custom proactive notifications in general?
Proactive notifications for thermostats are not available right now. We will send your feedback to the smart home team as a feature request for temperature notifications. In the meantime, I would suggest opening a feature request on the public issue tracker.
I want my smart device to be controlled by Google Assistant. I have checked the smart home Action guide, but the required device type is not available in smart home Actions.
I need to add customization to my voice commands as well, so a smart home Action is not an option for my requirements.
Is a conversational Action a good choice for implementing smart home? The Google Actions documentation mentions the use cases for which conversational Actions can be used.
Smart home use cases don't seem to fit the use cases mentioned there, as a conversational Action will take time to process the request.
Is there any other option to implement this?
When selecting a device type to integrate with Google, the device types and traits offer many possible combinations, so you can pick the device type closest to yours and attach all the required traits.
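As a rough sketch of what that looks like in practice, here is a minimal SYNC handler using the actions-on-google Node.js smarthome library; the device id, names, and attribute values below are placeholders, and the thermostat type/trait pairing is just one documented combination:

```ts
// Minimal SYNC sketch (actions-on-google smarthome library).
// Ids, names, and attribute values are placeholders.
import { smarthome } from 'actions-on-google';

const app = smarthome();

app.onSync((body) => ({
  requestId: body.requestId,
  payload: {
    agentUserId: 'user-123', // placeholder
    devices: [{
      id: 'device-1', // placeholder
      // Pick the closest documented type...
      type: 'action.devices.types.THERMOSTAT',
      // ...and attach the traits your device actually supports.
      traits: ['action.devices.traits.TemperatureSetting'],
      name: { name: 'Workshop thermostat' },
      willReportState: true,
      attributes: {
        availableThermostatModes: ['off', 'heat', 'cool'],
        thermostatTemperatureUnit: 'C',
      },
    }],
  },
}));
```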
NOTE - Conversational actions are currently deprecated and will be fully shut down on June 13, 2023. For more information, visit https://developers.google.com/assistant/ca-sunset.
I recently made a basic Google Smart Home app for one of my devices. I was wondering if there was a way to query my device for information such as energy consumption. For example, if I asked "How much energy has my heater used today?", would it be possible to get a real-time value for this (from my fulfillment)?
The answer to your question specifically is no. Traits that are not supported in the Smart Home documentation will not be available to query in the Home Graph.
However, you can use the Actions on Google platform to build a separate fulfillment which can provide some of these custom user queries. The two can be developed as separate projects but using the same backend. For example:
You: "OK Google, turn up the heater to 80 degrees"
Google Assistant: "Okay"
You: "OK Google, ask My Heater how much energy you've used today"
Google Assistant: "Getting your heater."
Heater: "Today you have used 10 energy units. That's 5% more than yesterday."
On Amazon Alexa, cards are displayed in the Amazon Alexa app or on the screen of an Echo Show. If I call my Google Action on my smartphone, I am also able to view the cards. But what happens if I use a different non-screen surface, like the Google Home? Do the cards appear in the Google Home app anywhere, or do they just get lost?
Cards (and other visual elements you can add) aren't shown if the surface you're currently interacting with doesn't support them. This is intentional since the user may not expect them there and might open the app later and be surprised.
You can always check what surfaces are being supported in your current conversation by using app.getAvailableSurfaces() or the equivalent JSON properties. If you need to show the user something, you can prompt them to change to a surface that supports display by using app.askForNewSurface(). See the documentation about Surface Capabilities for detailed information.
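As a sketch of that flow with the newer actions-on-google client library (where the v1 app.getAvailableSurfaces()/app.askForNewSurface() calls became conv.available.surfaces and the NewSurface helper), assuming a hypothetical 'Show Card' intent:

```ts
// Check the current and available surfaces before showing visual content.
// The 'Show Card' intent and its responses are placeholders.
import { dialogflow, NewSurface } from 'actions-on-google';

const app = dialogflow();
const SCREEN = 'actions.capability.SCREEN_OUTPUT';

app.intent('Show Card', (conv) => {
  if (conv.surface.capabilities.has(SCREEN)) {
    // The current surface has a screen: safe to attach a card here.
    conv.ask('Here is your card.');
  } else if (conv.available.surfaces.capabilities.has(SCREEN)) {
    // No screen here, but the user has one available: offer to move.
    conv.ask(new NewSurface({
      capabilities: [SCREEN],
      context: 'To show you the card',
      notification: 'Your card',
    }));
  } else {
    // No screen anywhere: fall back to voice only.
    conv.ask('I will read the details aloud instead.');
  }
});
```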
In general, it is a good design to expect the user to only interact with their voice and to require visual information only minimally. Visual information should be used to supplement and enhance the voice as much as possible.
I am researching how to create an app for my work that allows clients to download the app (preferably via the App Store) and, using some sort of WiFi triangulation/fingerprinting, determine their location for what is essentially an interactive tour.
Now, my question specifically is: what is the best route to take for the iPhone? None of the clients will be expected to have jailbroken iPhones.
To my understanding, this requires the use of WiFi data, which is a private API, and therefore does not meet the App Store requirements. The biggest question I have is: how does the American Museum of Natural History get away with using the same technology while still being available on the App Store?
If you're unfamiliar with the American Museum of Natural History's interactive tour app, see here:
http://itunes.apple.com/us/app/amnh-explorer/id381227123?mt=8
Thank you for any clarification you can provide.
I'm one of the developers of the AMNH Explorer app you're referencing.
Explorer uses the Cisco "Mobility Services Engine" (MSE) behind the scenes to determine its location. This is part of their Cisco wifi installation. The network itself listens for devices in the museum and estimates their position via Wifi triangulation. We do a bit of work in the app to "ask" the MSE for our current location.
Doing this work on the network side was (and still is) the only available option for iOS since, as you've found, the wifi scanning functions are considered to be private APIs.
If you'd like to build your own system and mobile app for doing something similar, you might start with the MSE.
Alternatively, we've built the same tech from Explorer into a new platform called Meridian which provides location-based services on both iOS and Android. Definitely get in touch with us via the website if you're interested in building on that.
Update 6/1/2017
Thought I would update this old answer - AMNH is no longer using the Wifi-based system I describe above, as of a few years ago. They now use an installation of a few hundred battery-powered Bluetooth Beacons (also provided by Meridian). The device (iOS or Android) scans for nearby beacons and, based on their known locations and RSSI values, triangulates a position. You can read more about it in this article.
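For reference, the usual way a device turns a beacon's RSSI into a rough distance (before triangulating) is a log-distance path-loss model; in this small sketch, the txPower and exponent defaults are assumptions you would calibrate on site:

```ts
// Log-distance path-loss model: estimate distance (meters) from RSSI.
// txPower is the calibrated RSSI at 1 m; n is the environment exponent
// (about 2 in free space, 2.5-4 indoors). Both must be calibrated on site.
function rssiToDistance(rssi: number, txPower = -59, n = 2.5): number {
  return Math.pow(10, (txPower - rssi) / (10 * n));
}

console.log(rssiToDistance(-75)); // ~4.4 m with the defaults above
```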
Navizon offers an indoor positioning solution that works for iOS as well as any other platform. You can check it out here:
http://www.navizon.com/product-navizon-indoor-triangulation-system
It works by triangulating the WiFi signals transmitted by the device. Since it doesn't require an app to run on the phone, it bypasses the iOS limitations and can locate any other WiFi device for that matter.
Google recently launched the Maps Geolocation API. You can use it for indoor tracking of devices, which essentially lets you achieve something similar to what AMNH's app does.
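For reference, a geolocation request looks roughly like this (the endpoint is from Google's documentation; the API key and access-point MAC addresses are placeholders):

```ts
// Google Maps Geolocation API sketch: send nearby access points, get back
// an estimated location. Key and MAC addresses are placeholders.
const body = {
  considerIp: false,
  wifiAccessPoints: [
    { macAddress: '00:25:9c:cf:1c:ac', signalStrength: -65 },
    { macAddress: '00:25:9c:cf:1c:ad', signalStrength: -71 },
  ],
};

fetch('https://www.googleapis.com/geolocation/v1/geolocate?key=YOUR_KEY', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body),
})
  .then((res) => res.json())
  .then((data) => console.log(data.location, data.accuracy));
```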
I would do this using augmented reality. There are existing systems for this; the idea is that you place physical markers that have virtual information associated with them. I believe the system I saw used a type of barcode. When a user holds up the phone with the app, the app uses the camera to read the code and then displays the information. This could easily be used to make a virtual-tour app that is distributable through the App Store and doesn't even require a WiFi or 3G/4G connection. This assumes that you simply load your information and store it locally with your app; to update it, you push an update through the App Store. Another solution is to use a SOAP/REST service to provide the information. That approach does not use private APIs, though it does require some form of internet connection. For this you can see a question I asked about this topic a little while ago:
SOAP/XML Tutorials Question
In addition, you could load a map of your tour location and, based on which code is scanned, locate the user on the map and suggest routes based on interests, etc.
I found this tutorial on augmented reality recently. I haven't gone through it, but if it's anything like the rest of Ray's tutorials, it will be extremely helpful.
http://www.raywenderlich.com/3997/introduction-to-augmented-reality-on-the-iphone
I'll stick around to clarify any questions or other concerns you may have with your app.
To augment the original answer for devs who were using Cisco MSE for indoor location: they now have an iOS and Android SDK which enables you to do indoor location using the MSE. A simulator can be used as well to develop the app without implementing the infrastructure to start with: https://developer.cisco.com/site/cmx-mobility-services/downloads/
For indoor location you can use Bluetooth LE beacons, since they're a very accessible technology nowadays. There are several methods:
Trilateration: it uses 3 beacons, but with the noise and attenuation of Bluetooth signals, it gets quite difficult to determine the exact position; it's also not easy to use more than 3 beacons to increase accuracy (see the sketch after this list).
Levenberg-Marquardt method: used to solve non-linear least-squares problems; it has shown good results for indoor positioning.
Dead reckoning method: given an initial position, you can calculate the device's moving path using its motion co-processor. Not that easy to implement, though.
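To make the trilateration item concrete, here is a minimal 2-D sketch: subtracting the circle equations pairwise turns the problem into a 2x2 linear system. The beacon coordinates and distances are made up:

```ts
// Minimal 2-D trilateration: three beacons at known positions with
// estimated distances. From (x - xi)^2 + (y - yi)^2 = di^2, subtracting
// the equations pairwise yields a linear system A * [x, y]^T = c.
type Beacon = { x: number; y: number; d: number };

function trilaterate([b1, b2, b3]: Beacon[]): { x: number; y: number } {
  const a11 = 2 * (b2.x - b1.x), a12 = 2 * (b2.y - b1.y);
  const a21 = 2 * (b3.x - b1.x), a22 = 2 * (b3.y - b1.y);
  const c1 = b1.d ** 2 - b2.d ** 2 - b1.x ** 2 + b2.x ** 2 - b1.y ** 2 + b2.y ** 2;
  const c2 = b1.d ** 2 - b3.d ** 2 - b1.x ** 2 + b3.x ** 2 - b1.y ** 2 + b3.y ** 2;
  const det = a11 * a22 - a12 * a21; // zero if the beacons are collinear
  return { x: (c1 * a22 - c2 * a12) / det, y: (a11 * c2 - a21 * c1) / det };
}

// Beacons at (0,0), (10,0), (0,10); distances measured from a device
// actually at (3, 4). Made-up numbers for illustration.
console.log(trilaterate([
  { x: 0, y: 0, d: 5 },
  { x: 10, y: 0, d: Math.sqrt(65) },
  { x: 0, y: 10, d: Math.sqrt(45) },
])); // ~{ x: 3, y: 4 }
```

In practice the measured distances won't intersect at a single point because of RSSI noise, which is why the least-squares approaches in the list above tend to work better than this exact solution.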
I wrote a post on the topic, you can find more info here: http://bits.citrusbyte.com/indoor-positioning-with-beacons/
And you can use this iOS app for your own indoor positioning experiments: https://github.com/citrusbyte/beacons-positioning
I doubt the American Museum is actually using private APIs; you'll probably find that the routers that have been set up serve different responses to each other, so the app can detect its position in the museum.
If you are looking for a cheaper way to do the same task, you could place signs with QR codes and use an open-source library to let users scan these barcodes as they move through the museum, updating the on-screen content accordingly. On an even more low-tech level, you could just tag each area with unique numbers and distinguish them that way.