I'm building my own device for Google Home, and I'm wondering whether it is possible to display the device's traits in a specific order on the Nest Hub.
For example, I would like to display the toggles first, then the volume.
In the screenshot of my Nest Hub screen, the volume is at the top, followed by the controls; I would like the controls to come first.
Any help would be appreciated.
There is no mechanism for you, as the developer, to set the order of traits.
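For context, the traits a device exposes are declared in the SYNC response of your smart home fulfillment, but declaring them in a different order there does not change how the Hub lays out the controls, per the answer above. Below is a minimal, hypothetical sketch in Python just to show where the traits are declared; the device id, type, and names are made up, and the attributes normally required for the Toggles trait are omitted for brevity.

```python
import json

# Hypothetical SYNC response for a device exposing Toggles and Volume.
# The trait identifiers are real Smart Home trait names; everything else
# (ids, names, type) is made up for illustration.
sync_response = {
    "requestId": "6894439706274654512",
    "payload": {
        "agentUserId": "user-123",
        "devices": [
            {
                "id": "my-device-1",
                "type": "action.devices.types.SPEAKER",
                "traits": [
                    # Listing Toggles before Volume here does NOT change
                    # the order of controls on the Nest Hub; the Hub
                    # decides the layout itself.
                    "action.devices.traits.Toggles",
                    "action.devices.traits.Volume",
                ],
                "name": {"name": "My custom device"},
                "willReportState": True,
                # "attributes": {...}  # availableToggles etc. omitted here
            }
        ],
    },
}

print(json.dumps(sync_response, indent=2))
```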
I'm trying to develop an app for the HoloLens 1 using Unity. What I want to achieve is providing a pre-designed experience to users for a specific room (like a particular room in a museum).
My idea is that I scan the room with the HoloLens, use the scanned mesh in Unity to place the virtual content at the correct positions in the room, and then build the app and deploy it to the device. The goal is that I can hand a museum visitor the HoloLens, they can go to this room, start the app anywhere in the room, and see the virtual objects in the right places (for example on a specific exhibit, at the door to the next room, in the middle of the room, and so on). I don't want the visitor to place objects themselves, and I don't want the staff to do this in advance (before handing out the headset). I want to design the complete experience in Unity for one specific room.
Every time I search for use cases like this, I can't really find a starting point. Somehow the app has to recognize the position of the headset in the room (or find pre-set anchors or something like that).
I really thought this would be a very basic use case for the HoloLens.
Is there a way to achieve this goal? Later I want to design multiple experiences for all the rooms of the museum (maybe a separate app for every room).
I think I have to find pre-set anchors in the room and then place the content relative to them. But how is it possible to define such an anchor and ensure that every visitor's device finds it, so that the virtual content appears on the corresponding real-world objects?
You should start with the Spatial Anchor technology. A Spatial Anchor lets you lock a GameObject in place relative to a location in the real world, based on the system's understanding of the environment. Please refer to this link for more information: Spatial anchors. Next, you need to persist local Spatial Anchors in the real world; this documentation shows how to persist the location of WorldAnchors across sessions with the WorldAnchorStore class: Persistence in Unity. If you also want to share the experience with multiple visitors so they can collectively view or interact with the same hologram positioned at a fixed point in space, you need to export an anchor from one device and import it on a second HoloLens; please follow this guide: Local anchor transfers in Unity
Besides, in situations where you can use Azure Spatial Anchors, we strongly recommend using it. Azure Spatial Anchors makes it convenient to share experiences across sessions and devices; you can quick-start with this: How to create and locate anchors using Azure Spatial Anchors in Unity
I'm learning about Google Analytics for Unity and also learning about Google Analytics in general. For some games, it would be really useful to have page views:
Imagine your game has 20 levels. You want to track what level people get to before they quit because that correlates to how engaged they were and how fun the game is.
The Audience Overview already has a Pages / Session metric. If you could define each level in a game as a page, then Pages / Session would give you a lot of useful information.
Unfortunately, I don't see a way to set pages in the reference documentation. Does anyone know how I could do this? Is it really easy to make something equivalent with a custom metric/dimension?
To summarize, there are two different answers that would help me and I'd accept either:
A way to use this plugin to define page views
A way to use this plugin to give me something equivalent to Pages / Session (i.e., Levels / Session). But, I'd like an answer for this to include how to view the Levels / Session, not just collect the data.
I figured this out. The mistake I made was creating a GA view of type "Website"; I should have created one of type "App". The difference is explained here: https://support.google.com/analytics/answer/2649553#WebVersusAppViews
The plugin has the ability to send ScreenNames, which are effectively page views. But unless my view is set up as an App view, GA won't really give any reports that show the ScreenNames.
So it was a matter of creating a new view, then sending ScreenNames as described here: https://developers.google.com/analytics/devguides/collection/unity/v4/reference#screen-basic
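In the plugin itself this boils down to a single C# call (LogScreen with the screen name, per the reference linked above). For illustration only, here is a hedged Python sketch of the legacy Measurement Protocol "screenview" hit that such a call ultimately produces; the tracking ID and app name are placeholders for your own property, not values from the question.

```python
import uuid
import requests

def send_screen_view(screen_name: str, client_id: str) -> None:
    """Send one Measurement Protocol 'screenview' hit (legacy Universal Analytics)."""
    payload = {
        "v": "1",                 # protocol version
        "tid": "UA-XXXXXXXX-Y",   # your tracking ID (placeholder)
        "cid": client_id,         # anonymous client/instance ID
        "t": "screenview",        # hit type
        "an": "MyGame",           # application name (placeholder)
        "cd": screen_name,        # screen name, e.g. "Level 3"
    }
    requests.post("https://www.google-analytics.com/collect", data=payload)

# Example: record that the player reached level 3 in this session.
send_screen_view("Level 3", str(uuid.uuid4()))
```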
I am now ready to submit my app, and I'm reading in Apple's App Store Guidelines that I'm required to have a method for filtering objectionable material from being posted to my app. In my case, I believe this means I need a way to filter the chat/posts so that people cannot bully each other or post pornographic pictures in the chat.
Has anyone ever encountered this before? Any recommendation on the best way to proceed? Perhaps there is a way to add a list of objectionable words and phrases to the chat and/or Firebase to prevent certain objectionable things from being said? Are there any pre-existing filters you can import? I'm using Firebase.
I really have no idea how to solve this. Thanks for the comments.
I have had an app rejected for not providing a way to hide content that a user deems unsuitable.
You can add a "do not show me again" action, and you must also add a reporting system for users to flag any abusive content.
In my case I added two buttons: hide and report.
Hidden content is hidden only for the user who hid it.
For reported content, if a piece of content gets three reports, it gets hidden from the whole community.
This was my way of doing it; you can come up with your own approach.
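As a rough illustration of that report threshold, here is a hedged Python sketch using the Firebase Admin SDK with Firestore. The collection and field names ("posts", "reports", "hidden", "hiddenPosts") are just my assumptions mirroring the approach described above; adapt them to your own schema, and on a busy app you would want to wrap the counting in a transaction or Cloud Function trigger.

```python
import firebase_admin
from firebase_admin import firestore

# Uses application default credentials (GOOGLE_APPLICATION_CREDENTIALS).
firebase_admin.initialize_app()
db = firestore.client()

def report_post(post_id: str, reporter_uid: str) -> None:
    """Record one report; hide the post for everyone once it has three reports."""
    post_ref = db.collection("posts").document(post_id)

    # One document per reporter, so a single user can't report the same post twice.
    post_ref.collection("reports").document(reporter_uid).set({"reported": True})

    report_count = len(list(post_ref.collection("reports").stream()))
    if report_count >= 3:
        post_ref.update({"hidden": True})  # hidden from the whole community

def hide_for_user(post_id: str, uid: str) -> None:
    """'Do not show me again': hide the post only for this user."""
    db.collection("users").document(uid) \
      .collection("hiddenPosts").document(post_id).set({"hidden": True})
```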
Apple will also want you to address this issue in the terms of use that users must accept when using your app. You will most likely add a checkbox on the signup screen confirming that the user has read and accepted the terms of use, and you provide the terms of use either through an external URL or a dedicated screen.
I'm creating a setup of a Google Assistant/Home that should IDEALLY respond to the phrase "Okay Google, show pictures of [PARAMETER PHRASE]" by giving me the parameter phrase. It also HAS to be able to function like a regular home ("Hey Google, how far away is the moon", "... tell me a joke", etc.), without having me reimplement all of that functionality (unmatched phrases should fallback to the Google Home).
If I use the Home, I'm afraid I won't be able to avoid "... tell [MY APP NAME] to ...", but it has a great mic and speaker built in.
I am alternatively looking into a Raspberry Pi solution for the added layer of control, but the Home already has a fantastic mic and speaker. And importantly, I absolutely don't want to recreate the core Google Home features (could I perhaps pass off uncaught phrases to the Google Home backend?).
I can mask some non-parameterized commands with Assistant Shortcuts ("Okay Google, cat time!", "Hey Google, show me cats") to simplify the call phrase, but that does not work here because shortcuts are not parameterizable.
TL;DR: I have a setup that needs to 1. work like a normal Google Home, but must 2. have additional functionality that I implement. I would like to 3. avoid having to say "... tell MY TARGET APP to [...]", but I need 4. parameters to be passed to my code, even if completely unparsed.
What are my options?
There are a bunch of possible approaches here, depending on the exact angle you want to take. None of them are perfect at this time, but since everything is evolving, we'll see what develops.
It sounds like you're making an IoT picture frame or something like that? And you want to be able to talk to it? If so, you may want to look into the Assistant SDK, which lets you embed the Assistant into your IoT device. This would let you implement some voice commands yourself, but pass other things off to the Assistant to handle.
But this isn't a perfect solution, since it splits where the voice recognition happens from where it is applied, and it may not get you hotword triggering.
It is also still in an early Developer Preview, so things might change, and it may evolve to be something closer to what you want... but it is difficult to tell right now.
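To make the split concrete, here is a hedged Python sketch based on the Assistant SDK's google-assistant-library samples. The credentials path, device model id, and the "show pictures of" trigger phrase are placeholders, and since the SDK is in preview the exact events available may change; the idea is simply to intercept the phrases you care about and let everything else fall through to the regular Assistant.

```python
import json

import google.oauth2.credentials
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

# Credentials produced by google-oauthlib-tool; path is a placeholder.
with open("/path/to/credentials.json") as f:
    credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))

# "my-device-model-id" is a placeholder for your registered device model.
with Assistant(credentials, "my-device-model-id") as assistant:
    for event in assistant.start():
        # The library handles "Hey Google" hotword detection and normal
        # Assistant queries; we only look at the recognized text.
        if event.type == EventType.ON_RECOGNIZING_SPEECH_FINISHED:
            text = event.args["text"].lower()
            if text.startswith("show pictures of"):
                assistant.stop_conversation()  # don't let the Assistant answer this one
                query = text[len("show pictures of"):].strip()
                print("Handling locally, parameter phrase:", query)
        # Any other phrase is answered by the Assistant as usual.
```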
Depending on the IoT appliance you're working on, you may be able to leverage the built-in commands by building a Smart Home Action. However, at the moment these work with a fairly limited set of appliance types. It also sounds like you're trying to deal with media control, which isn't something Smart Home directly handles and is (hopefully) a future Action API (there were some hints about this at I/O, with Cast compatibility promised... but no details).
If you really want to build for the Home and the Assistant, you'll need to work within the limitations of Actions on Google. And those do include some issues with the triggering name.
However... one good strategy is to pick a name that works well with the prefix phrases that are used. Since "Ask" is a legitimate prefix that Home handles, you could plan for a triggering name such as "awesome photo frame", and make the command "Ask awesome photo frame to show pictures of something".
More risky, since it isn't clearly documented, but it seems that some triggering names work without a prefix at all. So if your application is named "fly to the moon", it seems like you can say "Hey Google, fly to the moon" and the action will be triggered. If you can get a name like this registered, it will feel very natural for the user.
Finally, you can pick a reasonable name but have your users set an alias or shortcut that makes sense to them. I'm not sure how this would fit in with solution (1), but being able to predefine shortcuts would make it pretty powerful.
You can't invoke your app without first connecting to it with "OK Google, talk to my app"; otherwise you would just be talking to the core Assistant, not your app.
Google doesn't allow talking to an app without an explicit app invocation.
I want to add an image carousel on a dashboard in Tableau: around 3-4 images in a slideshow moving from right to left. How does that work? Any insights would be helpful. Thanks!
Adding a picture slideshow in Tableau is not supported natively (purposely, I assume), and I think there are several reasons you should reconsider your idea.
Tableau is a data visualisation tool, not PowerPoint. You should stick to visualising your data and not create a full-on multimedia dashboard that distracts from the important points you want to present.
If you need to display pictures, that's fine (and possible), but having them change independently of the data doesn't seem to add any value to a dashboard and would be better done somewhere else, e.g. the website you embed the dashboard in.
If you really want animations and moving parts in your dashboard and consider them necessary to prove your point, the only way is to do what you already mentioned: create the slider on an external website and embed it in the dashboard. This seems like a weird idea as well, though, since if you embed the dashboard in a website, it would be far easier to just build the slider there. And if the dashboard is intended to be used locally, you cannot guarantee that the user will have an internet connection, which means it could not be consumed the way you intended.