How to "route" text from Zoom chat into another application (like Max/MSP). (self.learnprogramming)

I was trying to do the following:
I needed a script that would "read" the chat of the current Zoom meeting and route that text into another application (specifically, Max/MSP).
I am trying to have live interaction between the participants in the call and the Max/MSP patch. I know what I need to do on the Max side to "interpret" the text, but I am clueless about how I could "route" the text from the Zoom chat.
I have little experience in other programming languages, but I am open to learning from other people's scripts.
Sorry if I'm being a little vague. I searched for similar questions and didn't find anything like what I need.

You would typically access the Zoom API and query a user's chat messages:
https://marketplace.zoom.us/docs/api-reference/zoom-api/chat-messages/getchatmessages
This should be possible to do in Max, using the maxurl object:
https://docs.cycling74.com/max8/refpages/maxurl
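To make that more concrete, here is a rough Python sketch of the same flow outside Max: poll the listChatMessages endpoint from the linked docs and forward each new message to Max as an OSC packet, which [udpreceive] understands. The OAuth token, contact address, and port are placeholders, and it is worth double-checking that this endpoint exposes the chat you actually need (it documents Zoom chat messages in general, not specifically the in-meeting chat); making the request with maxurl inside Max, as suggested above, is just as valid.

import socket
import time
import requests

ZOOM_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"   # placeholder OAuth access token
CONTACT = "participant@example.com"      # placeholder chat contact
MAX_HOST, MAX_PORT = "127.0.0.1", 7400   # Max side: [udpreceive 7400]

def osc_message(address, text):
    # Minimal OSC encoding: null-terminated strings padded to 4-byte boundaries.
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",s") + pad(text.encode("utf-8"))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seen = set()

while True:
    resp = requests.get(
        "https://api.zoom.us/v2/chat/users/me/messages",
        headers={"Authorization": "Bearer " + ZOOM_TOKEN},
        params={"to_contact": CONTACT},
    )
    resp.raise_for_status()
    for msg in resp.json().get("messages", []):
        if msg["id"] not in seen:
            seen.add(msg["id"])
            # forward the chat text to Max as /chat "message text"
            sock.sendto(osc_message("/chat", msg["message"]), (MAX_HOST, MAX_PORT))
    time.sleep(2)   # simple polling interval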

Related

Including a list of manually selected online newspaper articles (Flutter)

I am an absolute beginner in programming and I have set myself the goal of creating an app for our small association.
I would also like to create a kind of news feed page to which I can manually add selected local newspaper articles. I know this is probably very complex for a beginner, but I want to get an idea of how to go about it and what I need for it.
My question is: what hardware or software do I need, and how can this be done?
For now, I don't need any code, just an overview of the tools that will get me there; then I will work my way into it bit by bit.

Create custom Google Smart Home Action

I have a Google Nest Hub Max and I want to extend its capabilities for a custom need:
1. "Hey Google, add xyz to my work planning"
2. Then I want to make an HTTP call to my private server
3. The private server returns a text
4. The text is displayed on the Google Nest Hub Max's screen and spoken aloud.
How can that be achieved?
Originally I thought this would not be difficult. I imagined a Node.js, Java, Python, or similar framework where Google gives me the xyz text, I do my thing, and I return a simple text. And obviously, Google would handle the intent matching and only call my custom code when users say the precise phrase.
I've tried to search online for how to do it, but there is a lot of documentation everywhere. This post sums up the situation quite well, but I've never found a tutorial or hello-world example of such a thing.
Does anyone know how to do it?
For steps 2 and 3, I don't necessarily need to use a private server if I can achieve what the private server does inside the Smart Home Action code, mostly with some basic Python code.
First - you're on the right track! There are a few assumptions and terminology issues in your question that we need to clear up first, but your idea is fundamentally sound:
Google uses the term "Smart Home Actions" to describe controlling IoT/smart home devices such as lights, appliances, outlets, etc. Making something that you control through the Assistant, including Smart Speakers and Smart Hubs, means building a Conversational Action.
Most Conversational Actions need to be invoked by name, so you would start your Action with something like "Talk to Work Planning" or "Ask Work Planning to add XYZ". There are a limited, but growing, number of built-in intents (BIIs) to cover other verticals - but don't count on them right now.
All Actions are public. They all share an invocation name namespace and anyone can access them. You can add Account Linking or other ways to ensure a limited audience, and there are ways to have more private alpha and beta testing, but there are issues with both. (Consider this an opportunity!)
You're correct that Google will help you with parsing the Intent and getting the parameter values (the XYZ in your example) and then handing this over to your server. However, the server must be at a publicly accessible address with an HTTPS endpoint. (Google refers to this as a webhook.)
There are a number of resources available, via Google, StackOverflow, and elsewhere:
On StackOverflow, look for the actions-on-google tag. Conversational Actions are frequently built with either dialogflow-es or, more recently, actions-builder, each of which has its own tag. (And when you post your own questions, don't forget to provide code, errors, screenshots, and as much other information as you can to help us help you overcome the issues.)
Google's documentation about how to design and build conversational actions.
Google also has codelabs and sample code illustrating how to build conversational actions. The codelabs include the "hello world" examples you are probably looking for.
Most sample code uses JavaScript with node.js, since Google provides a library for it. If you want to use Python, you'll need to handle the JSON format that the Assistant sends to your webhook and the format it expects back in response (a minimal sketch follows this list of resources).
There are articles and videos written about it. For example, this series of blog posts discussing designing and developing actions outlines the steps and shows the code. And this YouTube playlist takes you through the process step-by-step (and there are other videos covering other details if you want more).
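As promised above, here is a minimal Python sketch of such a webhook, assuming the Dialogflow ES fulfillment format. The /webhook path, the "task" parameter name, and the use of Flask are illustrative assumptions, not part of Google's requirements; the one hard requirement from the answer above is that it ends up reachable at a public HTTPS endpoint.

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    # Dialogflow ES puts the matched intent's extracted parameters here.
    params = body.get("queryResult", {}).get("parameters", {})
    task = params.get("task", "something")   # hypothetical parameter name

    # "Do my thing": call the private server, write to a database, etc.
    reply = "Okay, I added %s to your work planning." % task

    # fulfillmentText is displayed on the Nest Hub's screen and spoken aloud.
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)   # must sit behind a public HTTPS endpoint in production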

PageViews for Google Analytics Plugin for Unity

I'm learning about Google Analytics for Unity and also learning about Google Analytics in general. For some games, it would be really useful to have page views:
Imagine your game has 20 levels. You want to track what level people get to before they quit because that correlates to how engaged they were and how fun the game is.
The Audience Overview report already has a Pages / Session metric. If you could define each level in a game as a page, then Pages / Session would give you a lot of useful information.
Unfortunately, I don't see a way to set pages in the reference documentation. Does anyone know how I could do this? Is it really easy to make something equivalent with a custom metric/dimension?
To summarize, there are two different answers that would help me and I'd accept either:
A way to use this plugin to define page views
A way to use this plugin to give me something equivalent to Pages / Session (i.e., Levels / Session). But, I'd like an answer for this to include how to view the Levels / Session, not just collect the data.
I figured this out. The mistake I made was creating a GA view of type "Website." I should have created one of type "App." The difference is explained here: https://support.google.com/analytics/answer/2649553#WebVersusAppViews
The plugin can send ScreenNames, which are effectively PageViews. But unless the view is set up as an App view, GA won't show any reports with the ScreenNames.
So, it was a matter of creating a new view, then sending ScreenNames as described here: https://developers.google.com/analytics/devguides/collection/unity/v4/reference#screen-basic
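For what it's worth, the screenview hit the plugin sends is the same kind of hit you could post directly to (legacy) Universal Analytics over the Measurement Protocol; a quick Python sketch, with the tracking ID, app name, and screen name as placeholders:

import uuid
import requests

def send_screenview(screen_name):
    payload = {
        "v": "1",                 # Measurement Protocol version
        "tid": "UA-XXXXXXXX-1",   # placeholder tracking ID for an App property/view
        "cid": str(uuid.uuid4()), # anonymous client ID
        "t": "screenview",        # hit type
        "an": "MyGame",           # application name (required for app hits)
        "cd": screen_name,        # screen name, e.g. the current level
    }
    requests.post("https://www.google-analytics.com/collect", data=payload)

send_screenview("Level 12")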

Showing nearby specific business offices in table view cells using iPhone GPS

There are lots of similar questions, but mine is different in the sense that I already have the coordinates of my current location in my application.
Now I want to show specific business offices around those coordinates in a table view. I know reverse geocoding is the answer to that, but I can't find a suitable tutorial or advice on this. How can I implement it?
Please suggest.
Reverse Geocoding will provide you with address information for the coordinates provided. In order to get businesses around those coordinates, you may want to look into external APIs. I believe Facebook has a Places API and Foursquare has an API as well. These may be able to provide you with local business information.
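For example, with Foursquare's (v2) venues/search endpoint the lookup boils down to one HTTP request around your coordinates. A rough Python sketch follows, with the credentials, query term, and radius as placeholders; on iOS you would make the same request with your own networking code and feed the results into the table view.

import requests

params = {
    "ll": "37.3323,-122.0312",        # the coordinates you already have
    "query": "office",                # placeholder search term
    "radius": 1000,                   # metres
    "client_id": "YOUR_CLIENT_ID",
    "client_secret": "YOUR_CLIENT_SECRET",
    "v": "20120610",                  # API version date
}
resp = requests.get("https://api.foursquare.com/v2/venues/search", params=params)
for venue in resp.json()["response"]["venues"]:
    print(venue["name"], venue["location"].get("address", ""))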

What is the standard/best way of route implementation on iPhone?

I have read a number of posts from developers who want to plot a route on a map on the iPhone, but there is no satisfactory answer as to how best to achieve this. You can use the Route-Me library, add a layer on top of MKMapView, or send coordinates to the phone's Maps application and navigate away from your own application, which in my opinion is a bad user experience.
None of these solves the problem in a good way.
Some posts say there are legal obstacles; others say it is about licensing money. This must be a very common requirement and thus a common feature to implement. So what is the de-facto standard way to do this?
Can someone with good experience share their insights on this question?
BR
//Christoffer
So I decided to use Apple Developer Technical Support to really clear this up. This is the reply:
Hello,
Thank you for your inquiry to Apple Worldwide Developer Technical Support.
I am responding to let you know that I have received your request for technical assistance.
The de-facto standard way of doing this is using the Map application. I realize this is not what you want. You want to stay within your app. The MKMapView API does not give you that level of support when it comes to user directions. You will have to rely on a separate web service to obtain those directions, then plot each lat/long point yourself on the MKMapView. Basically you will need to make an HTTP request to the Google Directions API. The terms require you to display the results on a Google map; since MKMapView shows Google, that should be OK.
http://code.google.com/apis/maps/documentation/directions/#DirectionsRequests
If you succeed in obtaining driving directions from, say, a Yahoo or Google service, MKMapView will allow you to plot a visual course using an MKOverlayPathView and MKShapes to draw polygon-like shapes. Apple has a sample called "KMLViewer" found at
http://developer.apple.com/library/ios/#samplecode/KMLViewer/Introduction/Intro.html
It shows you how to plot points based on KML. The approach is the same since we are dealing with lat/long coordinates.
You may want to consider using the Map application, which would be considerably easier. All you need is this:
// for lat/long directions
NSString *urlString1 = @"http://maps.google.com/maps?daddr=37.324885,-122.032378&saddr=37.332094,-122.03124";
// for address directions
NSString *urlString2 = @"http://maps.google.com/maps?f=d&source=s_d&saddr=1+Infinite+Loop,+Cupertino,+CA+95014&daddr=Mandarin+Gourmet,+Cupertino,+CA&hl=en&geocode=FcajOQIdYvO5-Ckbd16TtrWPgDFAc4Pi50E92A%3BFZ2GOQIdLe65-CHRv0sTH7YegykLqKn9rbWPgDGUnqKbIqi1Bg&mra=ls&sll=37.325567,-122.032989&sspn=0.007243,0.007285&ie=UTF8&ll=37.328195,-122.031466&spn=0.007243,0.007285&z=17";
[[UIApplication sharedApplication] openURL:[NSURL URLWithString:urlString1]];
SECOND E-MAIL:
Just to be clear about my last e-mail regarding the terms for using Google services: Google requires you to display the route results on a Google map, which MKMapView uses. However, I would double-check Apple's own T&Cs regarding the use of external services such as user directions on MKMapView itself. I don't want you to go down a path only to find legal restrictions along the way.
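To make the Directions API step from the first e-mail concrete, here is a rough Python sketch of the web-service side: request a driving route and collect the lat/long points that would then be drawn on the MKMapView. The parsing follows the documented JSON structure; error handling and decoding of the encoded overview polyline are left out, and depending on the API vintage an API key parameter may also be required.

import requests

params = {
    "origin": "37.332094,-122.03124",
    "destination": "37.324885,-122.032378",
    "sensor": "true",   # required parameter in the API version linked above
}
resp = requests.get("https://maps.googleapis.com/maps/api/directions/json", params=params)
route = resp.json()["routes"][0]

points = []
for leg in route["legs"]:
    for step in leg["steps"]:
        points.append((step["start_location"]["lat"], step["start_location"]["lng"]))
    points.append((leg["end_location"]["lat"], leg["end_location"]["lng"]))

# "points" would then be handed to MKPolyline / MKOverlayPathView on the iOS side.
print(len(points), "route points, starting with", points[:3])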