I am using the ThingSpeak platform with my Pi 3 for home automation. I have successfully sent and received data between my board and the channel. However, I am not able to properly understand the MATLAB analysis tutorial on the site:
https://in.mathworks.com/help/thingspeak/analyze-your-data.html
I am unable to understand what readChId is and why it should be given here, and
what the job of MATLAB Analysis is.
If the job of MATLAB Analysis is to write my received data to a channel, and MATLAB Visualizations then displays it using readChId, what purpose does the readChId in the MATLAB Analysis part serve?
ThingSpeak allows you to send data from your IoT device to a ThingSpeak channel, and then to apply various ThingSpeak "apps" to those channels: these can perform actions based on the channel data (like tweeting, or sending a message to some other web service), or they can perform analytics on, or create visualisations of, the channel data. These analytics and visualisation apps are implemented in MATLAB code, and run on ThingSpeak.
The tutorial you're looking at reads in data from one channel (ThingSpeak channel 12397, which receives weather data), does some analysis on it to calculate the dew point from the temperature and humidity, and then writes the result out to another channel and visualizes it.
readChId in the tutorial is the ID of the channel you are reading from (12397), and writeChId is the ID of the channel you are writing to (677 in the example; replace it with your own channel number).
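If it helps to see the two IDs in action, here is a minimal sketch of the same read-then-compute-then-write flow, expressed against ThingSpeak's public REST API in Java rather than as the tutorial's MATLAB Analysis code (which does this with thingSpeakRead and thingSpeakWrite); the write API key and the computed value are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ThingSpeakRoundTrip {
    public static void main(String[] args) throws Exception {
        // Read the latest entry from the public weather channel (readChId = 12397).
        URL readUrl = new URL("https://api.thingspeak.com/channels/12397/feeds/last.json");
        HttpURLConnection readConn = (HttpURLConnection) readUrl.openConnection();
        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(readConn.getInputStream()))) {
            for (String line; (line = in.readLine()) != null; ) {
                json.append(line);
            }
        }
        System.out.println("Latest feed entry: " + json);

        // ...compute something from the reading, e.g. a dew point...
        double dewPoint = 10.5; // placeholder result

        // Write the computed value to YOUR channel (writeChId) using its
        // Write API Key; "YOUR_WRITE_API_KEY" is a placeholder.
        URL writeUrl = new URL("https://api.thingspeak.com/update?api_key=YOUR_WRITE_API_KEY&field1=" + dewPoint);
        HttpURLConnection writeConn = (HttpURLConnection) writeUrl.openConnection();
        System.out.println("Write response code: " + writeConn.getResponseCode());
    }
}

The point is simply that the two channel IDs play different roles: one identifies where the raw data comes from, the other where the derived result goes.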
I am currently working on implementing a Cohda DSRC V2X device that takes a vehicle CAN input. I don't have access to the vehicle and want to simulate the input with a prerecorded CAN log file from it. If possible, we want to playback the CAN log file into the V2X device. I was directed to look into Vector CANoe/CANalyzer products. After looking into their products, documentation, forums, and FAQs, I have not been able to determine if this is possible. So, can this be done, and if so, how?
Perhaps you could use a Replay block in the Vector CANoe environment, provided the log file is in a Vector-supported format and you do not use any secure communication.
For reference, please see the Vector Public Knowledge Area on logging formats, especially the note at the bottom.
My lab recently purchased the BIOPAC system to measure skin conductance as part of our experiments.
My experiments are normally coded using LiveCode.
I need to be able to tell LiveCode to automatically score and label digital event marks in the skin conductance responses in the BIOPAC system.
Does anyone have any experience interfacing the BIOPAC system with LiveCode? Does anyone have sample code they have used?
Thanks!
There is a gadget from "Bönig und Kallenbach":
http://www.bkohg.com/serviceusbplus_e.html
This has analog inputs that you could easily configure to measure skin resistance. It comes with a framework for LiveCode, and connects through the USB port.
I have used these in many applications to connect the real world to my computer. All your processing is done in the usual LC way.
I think there is no direct example using BIOPAC hardware.
To tinker with your own software outside AcqKnowledge, you have to purchase the BHAPI (MPDEV.dll) from BIOPAC. The problem is that BHAPI only supports Windows, not macOS or Linux.
Another way is to stream data through AcqKnowledge 5.x: start an acquisition in AcqKnowledge and you can stream it, then receive the data stream in LiveCode and process it.
I have a board on my Watson IoT dashboard, which I use to monitor temperature in real time with a line graph.
I have a second dashboard under a different organisation, and I want to import the graph from the board in the first dashboard without building the same graph again.
I've already tried that solution and, even if possible, it wouldn't be efficient. I mean, I already have the data on IoT; why send it from IoT to IoT?
So, can I display on, let's call it... dashboard2, the board with the temperature (or any other property) which is in dashboard1, without sending or duplicating data on dashboard2?
If so, how can I do that? I've been searching for almost a week, and I'm starting to doubt that something like this actually exists...
I would like to know if Bluemix, and potentially Watson, could do the following: if multiple people are having a conversation via one or many microphones as a streamed audio source, could it also identify a person's tone / spectrum, i.e. which of them is producing the sound? Thanks, Markku
There's no Watson API to work on an audio stream at the moment.
As you pointed out, you could use Speech to Text to get a transcript of the conversation and potentially Tone Analyzer to get a sentiment analysis, but this won't be enough to determine who the speaker is.
If you want to know more about how those two services work, please check their pages on the Watson Developer Cloud.
Here is an example application and the code for combining STT and Tone Analyzer:
Application
https://realtime-tone.mybluemix.net/
Get the code here to use for your own applications:
https://github.com/IBM-Bluemix/real-time-tone-analysis
Julia
I've created an Android application and I've connected different Watson services, available on Bluemix, to it: Natural Language Classifier, Visual Recognition and Speech to Text.
1) The first and the second work well; I have a little problem with the third one regarding the format of the audio. The app should record 30 seconds of audio, save it to storage, and send it to the service to obtain the corresponding text.
I've used an instance of the MediaRecorder class to record the file. It works, but the available output formats are AAC_ADTS, AMR_WB, AMR_NB, MPEG_4, THREE_GPP, RAW_AMR and WEBM.
The service, on the other hand, accepts these input formats: FLAC, WAV, PCM.
What is the best way to convert the audio file from the first set of formats to the second one? Is there a simple method to do that, for example from THREE_GPP or MPEG_4 to WAV or PCM?
I've googled for information and ideas, but I've only found a few lengthy methods that I did not fully understand.
I'm looking for a fast method, because I want to keep the latency of the conversion and of the processing by the service as short as possible.
Is there an available library that does this? Or a simple code snippet?
2) One last thing:
SpeechResults transcript = service.recognize(audio, HttpMediaType.AUDIO_WAV);
System.out.println(transcript);
"transcript" is a json response. Is there a method to directly extract only the text, or should I parse the json?
Any suggestion will be appreciated!
Thanks!
To convert the audio recordings to different formats/encodings you could:
- find an audio encoder library to include in your app which supports the required formats, but it could be very heavy to run on a mobile device (if you find the right library)
- develop an external web application that receives your recording, encodes it, and returns it as a file or a stream
- develop a simple web application working like a live proxy that gets the recorded file, converts it on the fly, and sends it to Watson
Both the 2nd and the 3rd options expect to use an encoding tool like ffmpeg.
The 3rd one is lighter to develop, though a little more complex, and could save you two HTTP requests from your Android device; a sketch of its core conversion step follows.
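For the third option, the conversion step on the proxy can be as simple as shelling out to ffmpeg. Here is a minimal sketch, assuming ffmpeg is installed on the proxy host; the file names are placeholders, and the 16 kHz mono settings are just a common choice for speech input:

import java.io.File;

public class AudioConverter {
    // Convert a 3GP/MPEG-4 recording to 16 kHz mono WAV.
    // Requires ffmpeg to be available on the PATH.
    public static File toWav(File input, File output) throws Exception {
        Process p = new ProcessBuilder(
                "ffmpeg", "-y",         // overwrite the output if it exists
                "-i", input.getPath(),  // source recording (e.g. .3gp)
                "-ar", "16000",         // resample to 16 kHz
                "-ac", "1",             // downmix to mono
                output.getPath())       // target .wav file
                .redirectErrorStream(true)
                .start();
        if (p.waitFor() != 0) {
            throw new RuntimeException("ffmpeg failed with exit code " + p.exitValue());
        }
        return output;
    }
}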
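On your second question: you should not need to parse the JSON by hand. In the version of the Watson Java SDK your snippet is using (where recognize returns a SpeechResults object directly), the response exposes the transcript through getters; a hedged sketch, reusing your service and audio variables:

// Each result holds one or more alternatives, with the best one first.
SpeechResults results = service.recognize(audio, HttpMediaType.AUDIO_WAV);
StringBuilder text = new StringBuilder();
for (Transcript t : results.getResults()) {
    text.append(t.getAlternatives().get(0).getTranscript());
}
System.out.println(text.toString().trim());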