Is there an on/off sensor for smart home actions? - actions-on-google

I've been looking for a way to implement a simple sensor that only has an on or off state.
As the simple SensorState trait does not support a plain on/off state, I tried using a Switch with an OnOff trait and set the queryOnlyOnOff attribute to true.
The attribute's description reads: "Indicates if the device can only be queried for state information, and cannot be controlled through commands." Since the app still shows on/off buttons, does "commands" refer only to voice commands? If so, Google needs to add a simple sensor: we already have the trait for it, we just need a simple detected/not-detected sensor added to SensorState, if there is not one already. Any ideas on how to work around this?
Edit 1:
My current SYNC response https://gist.githubusercontent.com/DRSchlaubi/7099dc289858ae6dba72e54311fa728f/raw/4f3f9506cfe27701be00db438477c3bed8a6b0ec/sync.json
Edit 2:
I see Google specifically lists an example for sensors here, but the example is very vague, and it looks like you have to add it to the top-level payload, which does not work, and neither does adding it to the attributes object.
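For concreteness, the OnOff reference defines queryOnlyOnOff as a trait attribute, so the placement I am currently using is inside the device entry's attributes object in the SYNC response (the ID and names below are shortened placeholders; my full response is in the gist above):

```json
{
  "id": "sensor-1",
  "type": "action.devices.types.SWITCH",
  "traits": ["action.devices.traits.OnOff"],
  "name": { "name": "Door sensor" },
  "willReportState": true,
  "attributes": {
    "queryOnlyOnOff": true
  }
}
```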
Edit 3: I tried switching from the Switch device type to the Sensor device type, but now the app no longer displays the state at all. Perhaps Sensor does not support the OnOff state?
Edit 4: The Sensor page says: "These traits are recommended, if applicable to your device. However, you are free to mix and match from all available traits to best match your existing product functionality." Even so, the OnOff state does not seem to display anything.


Changing control schemes between camera views?

I am a beginner game developer trying to figure out how to implement my game's inventory management system. I am developing a third-person RPG with a heavy emphasis on inventory and crafting, so this needs to be done right.
I essentially have a tablet device the player can use to access statistics about themselves; it should also be their means of looking up crafting recipes. An example scenario: the player obtains three recipe items: Apple, Leaf, and Ash. They want to craft a healing poultice in the field. They remove their backpack, which serves as both inventory and a less powerful crafting station, and the camera moves to focus on the backpack as if it were an inventory screen, allowing the player to arrange the ingredients as necessary and then craft.
My problem is that I haven't figured out how to manage the control switch. Ideally the player shouldn't be able to walk around and do normal traversal while crafting; in fact, having WASD double as menu-manipulation controls would be preferable. The game is the kind where real-time use of the inventory is part of the experience, so I don't want a static pause screen. What avenue should I pursue in my research for this?
I've looked into Cinemachine and state switching/state machines, but I'm still fairly puzzled by that whole affair. So far I can switch into a state, but I only sometimes get back out of it. I'm away from my computer at the moment, so I don't have an example from my project, but this concept has been nagging me for hours.
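To show what I mean by the control switch, this is roughly the shape of the pattern: a single explicit control state that every input event is routed through, with an explicit exit from each state. This is an illustrative Swift sketch, not my Unity/C# code, and all names are made up:

```swift
// Route ALL input through one explicit state, so the same keys can
// mean "move" in one state and "navigate the menu" in another.
enum ControlState {
    case traversal   // normal third-person movement
    case crafting    // camera on the backpack, WASD drives the menu
}

struct InputRouter {
    var state: ControlState = .traversal

    mutating func handle(key: Character) {
        switch state {
        case .traversal:
            if key == "i" {            // open the backpack
                state = .crafting      // enter the crafting state
            } else {
                moveCharacter(key)     // WASD moves the player
            }
        case .crafting:
            if key == "\u{1B}" {       // Esc: every state needs a way out
                state = .traversal
            } else {
                navigateMenu(key)      // WASD moves the menu cursor
            }
        }
    }

    func moveCharacter(_ key: Character) { /* traversal controls */ }
    func navigateMenu(_ key: Character) { /* menu controls */ }
}
```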

Continuous data streaming from NFC to iPhone in Swift?

I have an NFC tag with integrated environmental sensors (the MLX90129, to be exact). I would like to make an iPhone app that reads the real-time data from the tag multiple times per second and graphs it. I'm not looking for background tag reading; you can assume the app will be open and the phone near the tag at all times.
From what I can see in Apple's documentation and other sources, Swift support for NFC tags is mostly built for single-session interrogation. Has anyone succeeded in getting continuous, repeated NFC tag reading for this kind of purpose?
As you pointed out, "continuous and repeated NFC readings" are not the intended functionality.
While I think you can sort that part out, there's another thing that could be a headache: making multiple readings per second runs directly up against the current implementation of NFC tag reading in iOS.
Every time you start a reading, iOS shows the native sheet informing the user that an NFC reading is in progress. Part of this process is user interaction, and it is exactly that part that imposes a time constraint. Even when no user interaction is needed, there is an animation, and that animation has its own lifecycle events (start reading, reading, OK, KO, close...).
As far as I know, you can't bypass that animation, which can easily cost a couple of seconds even in the best case.
With that said, here are a few things to keep in mind if you still want to try:
NFCTagReaderSession can only have one active reading at a time, and when that reading ends (OK/KO), it should be invalidated. So if you want to make another reading, you'll need to create and configure a new instance.
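A minimal sketch of that restart pattern, assuming an ISO 15693 tag like the MLX90129 (the session, delegate, and connect calls are CoreNFC; the actual sensor-read commands are left as a placeholder):

```swift
import CoreNFC

// Sketch of re-creating an NFCTagReaderSession for every read. Each
// session still shows the system NFC sheet, so this cannot reach
// multiple reads per second; it only illustrates the mechanics.
final class TagPoller: NSObject, NFCTagReaderSessionDelegate {
    private var session: NFCTagReaderSession?

    func startReading() {
        // A session cannot be reused after invalidation: make a new one.
        session = NFCTagReaderSession(pollingOption: .iso15693, delegate: self)
        session?.begin()
    }

    func tagReaderSessionDidBecomeActive(_ session: NFCTagReaderSession) {}

    func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
        guard let first = tags.first, case .iso15693(let tag) = first else {
            session.invalidate(errorMessage: "Unexpected tag type")
            return
        }
        session.connect(to: first) { error in
            if let error = error {
                session.invalidate(errorMessage: error.localizedDescription)
                return
            }
            print("Connected to tag:", tag.identifier as NSData)
            // ... issue the ISO 15693 read commands to `tag` here ...
            session.invalidate()   // done: this instance is now dead
        }
    }

    func tagReaderSession(_ session: NFCTagReaderSession, didInvalidateWithError error: Error) {
        self.session = nil
        // To read again, call startReading() to build a fresh session.
    }
}
```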

Is there any way to make some mode trait modes queryOnlyModes and others not?

After reviewing the Google Actions API docs, I see that the Modes trait supports queryOnlyModes, meaning the modes cannot be edited and can only be queried.
Setting this on the Modes trait makes all of the modes query-only.
Our device has multiple modes, each fairly distinct, and we want to make some of them query-only while leaving others adjustable by the user.
Does anyone know whether this is supported or how this could be implemented?
Any help or info is appreciated. Thanks.
You are not able to set queryOnly at the level of individual modes. However, if you do get a command that cannot be carried out, you can return an error as the response, with a code like functionNotSupported, to let the user know the command was acknowledged but not performed.
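For example, if the user tries to change one of the read-only modes, the EXECUTE response could report an error for that command, roughly like this (the requestId and device ID are placeholders):

```json
{
  "requestId": "ff36a3cc-ec34-11e6-b1a0-64510650abcf",
  "payload": {
    "commands": [
      {
        "ids": ["device-1"],
        "status": "ERROR",
        "errorCode": "functionNotSupported"
      }
    ]
  }
}
```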

Should functional testing simulate UI events or check for preconditions?

I am struggling with this question, since I noticed that many functional testing frameworks (like Selenium for the web or UISpec for iOS) actually simulate UI events while testing. My question: couldn't it be sufficient to check preconditions (e.g., that a button's target and selector are set correctly) and then fire the selector manually? Why do I need to simulate touches? The downside is that you have to know more about the UI elements you're testing (you have to know what makes them behave correctly), but since I am the one writing the tests, maybe this doesn't matter?
Could anyone shed some light on this?
Simulating touches can be useful for catching crashes caused by obscure or unplanned user behaviour; a particularly common one is two items being pressed simultaneously. It also allows you to create potentially quite esoteric tests: for example, feeding random user input for a sustained period of time to try to crash or break your application in ways you wouldn't expect. How far you take this depends on your app and how important such robustness is to you.
Your alternative approach also has disadvantages when it comes to multi-touch. While it would be fairly straightforward to fire a button's selector from an automated test rather than simulating user input, what happens if your app deals with swiping, pinching, or other multi-finger gestures? In those cases the desired result may not be as black and white as a button's on/off: you may have many shades of grey and differing output that requires validation.
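To illustrate, here is a sketch using Apple's XCUITest (a modern successor to tools like UISpec; the element identifiers are hypothetical). Simulated events travel through hit-testing and gesture recognition, which firing a selector manually skips entirely:

```swift
import XCTest

final class GestureTests: XCTestCase {
    func testGesturesUseTheRealEventPath() {
        let app = XCUIApplication()
        app.launch()

        // Equivalent to the "fire the selector" case, but via a real tap,
        // so hit-testing and UIControl event dispatch are exercised too:
        app.buttons["Save"].tap()

        // None of these map to a single target/selector you could invoke:
        let canvas = app.otherElements["Canvas"]
        canvas.swipeLeft()
        canvas.pinch(withScale: 0.5, velocity: -1.0)  // pinch in
        canvas.twoFingerTap()                         // two simultaneous touches
    }
}
```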
Simulated UI testing actually has quite a long history. There's an interesting story (well, interesting to me) about the original MacPaint and how a random UI input test helped reproduce obscure and difficult crashes: http://www.folklore.org/StoryView.py?story=Monkey_Lives.txt

OpenFeint achievements performance

I've decided to integrate OpenFeint into my new game to have achievements and leaderboards.
The game is dynamic and I would like the user to be rewarded immediately for certain successful results, but it seems to me that OpenFeint's achievements are a bit sluggish: the visual notification is shown only once confirmation has been received from the server.
Is it possible to change a setting, or hack it a little, so that the notification is shown immediately, after checking only the local database to see whether the achievement has already been unlocked?
Not sure if this applies to the Android version of the SDK (which seems even slower), but we couldn't figure out how to make it faster. It was so unacceptably slow that we started developing our own framework, which fixes most of OpenFeint's shortcomings and then some. Check out Swarm; it might fit your needs better.
There are several things you can do to more tightly control the timing of these notifications. I'll explain one approach, and you can use it as a starting point to explore further on your own. These suggestions apply specifically to iOS apps. One caveat: they refer to internal APIs in OFSDK 2.8 for iOS, which are not ordinarily recommended for high-level use and are subject to change in future versions.
The first thing I recommend is that you build the sample app with your own product key. Use the standard sample app to experiment before applying the result to your own code.
You will get the snappiest response by separating the notification pop-up UI from the process of submitting the achievement. That way you don't get wrapped up in the logic that decides whether the submission goes only to the local db or performs the full confirmation over an async network transaction.
See the declaration of showAchievementNotice in OFNotification.h. Searching the sample app, you will see that this is the internal API used to display the achievement pop-up when an achievement is earned; it does not actually submit the achievement. You can call this method directly, as it is called from OFAchievementService.mm, to control exactly when the message appears. You can then use the following article to stop the pop-up from being triggered when the actual submission occurs:
http://support.openfeint.com/dev/notification-pop-ups-in-ios/
This gives you complete freedom to perform the submission at a later time, provided you keep track of the need to do so. For example, you could locally serialize a flag and take care of the actual submission either after the level is done or the next time the app starts up. Don't forget that the user could quit the game without cleanly finishing a level.
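A sketch of that bookkeeping (the showNotice and submitAchievement functions below are stand-ins for the OFNotification and OFAchievementService calls described above, not real OpenFeint APIs):

```swift
import Foundation

// Show the pop-up immediately, remember the unlock, and submit later.
struct PendingAchievements {
    private static let key = "pendingAchievementIDs"

    static func unlock(_ id: String) {
        showNotice(id)                       // pop the UI right away
        var pending = UserDefaults.standard.stringArray(forKey: key) ?? []
        pending.append(id)                   // remember to submit later
        UserDefaults.standard.set(pending, forKey: key)
    }

    // Call after the level ends AND at app launch, since the player
    // may quit without cleanly finishing a level.
    static func flush() {
        let pending = UserDefaults.standard.stringArray(forKey: key) ?? []
        for id in pending { submitAchievement(id) }
        UserDefaults.standard.removeObject(forKey: key)
    }

    static func showNotice(_ id: String) { /* OFNotification pop-up */ }
    static func submitAchievement(_ id: String) { /* server submission */ }
}
```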