Hi, I want to make a HID DJ controller with an Arduino Uno as the core.
I've had some success using the UnoJoy USB gamepad project:
http://code.google.com/p/unojoy/
I have read elsewhere that the maximum size for a HID report is 64 bytes, so I intend to have 48 analogue inputs at 8-bit resolution and 128 binary inputs: (48*8) + 128 = 512 bits = 64 bytes.
Are this many inputs possible/plausible?
Also, is there any open-source code for some kind of HID device with this many inputs (or something similar)?
Thank you.
SELF ANSWERED: Yes, it is possible. I am now struggling to understand how I should write the report descriptor.
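For reference, here's a rough sketch of how a report descriptor for such a layout could look; this is an illustration, not UnoJoy's actual descriptor. It declares the 128 switches as ordinary buttons (16 bytes) and the 48 analogue channels as a block of 8-bit values on a vendor-defined usage page (48 bytes), which together fill exactly the 64-byte report:

#include <stdint.h>

const uint8_t hid_report_descriptor[] = {
    0x05, 0x01,         // Usage Page (Generic Desktop)
    0x09, 0x04,         // Usage (Joystick)
    0xA1, 0x01,         // Collection (Application)

    //   128 binary inputs, 1 bit each (16 bytes)
    0x05, 0x09,         //   Usage Page (Button)
    0x19, 0x01,         //   Usage Minimum (Button 1)
    0x29, 0x80,         //   Usage Maximum (Button 128)
    0x15, 0x00,         //   Logical Minimum (0)
    0x25, 0x01,         //   Logical Maximum (1)
    0x75, 0x01,         //   Report Size (1)
    0x95, 0x80,         //   Report Count (128)
    0x81, 0x02,         //   Input (Data, Variable, Absolute)

    //   48 analogue inputs, 8 bits each (48 bytes)
    0x06, 0x00, 0xFF,   //   Usage Page (Vendor Defined 0xFF00)
    0x19, 0x01,         //   Usage Minimum (1)
    0x29, 0x30,         //   Usage Maximum (48)
    0x15, 0x00,         //   Logical Minimum (0)
    0x26, 0xFF, 0x00,   //   Logical Maximum (255)
    0x75, 0x08,         //   Report Size (8)
    0x95, 0x30,         //   Report Count (48)
    0x81, 0x02,         //   Input (Data, Variable, Absolute)

    0xC0                // End Collection
};

Two caveats: if your USB stack prepends a Report ID, that costs one extra byte of the 64; and generic host drivers only map a handful of Generic Desktop usages (X, Y, Z, Rx, Ry, Rz, Slider, Dial) to real axes, so with 48 channels you will most likely read the report from your own application via raw HID rather than through the standard game-controller driver.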
I am trying to drive an EPD (ED060SD1) using an STM32F429ZGT, and I got the datasheet from the display vendor, but it gives no specific explanation of how to drive the EPD or details about the pins.
So I want to know what those pins do, and any hints on how to run this display.
Thank you
ED060SD1 Pin List
The STM32 series of microcontrollers does not support EPD panels directly. I think you would need an EPD controller in between to make it work.
You might be able to generate just the digital inputs, then use an external HV supply from a chip like the HV850 and step it down with inline Zener diodes. I used this approach to make a microflyer based on a piezo speaker!
The HV850 has on/off control via a digital line and needs a minimum of +4.2 V; the maximum output is +/-59 V on alternate outputs, which is ideal for this purpose.
Simply add a high-value resistor on the output side to let it discharge when the panel isn't being driven.
I'd put it in EXTCLK mode and run it at 10 Hz to reduce power usage.
I need some help understanding a specific serial port connection to a sensor. I need to read data from the sensor and do some calculations in MATLAB or C++ (I will decide later).
The manufacturer only gives a chart with the following details:
Sensor Serial Port
Pin Number   Mode      Pin Description
1            I         Trigger Input
2            I         RS-232 Receive
3            O         RS-232 Transmit
4            PWR       Sensor Power (DTR)
5            PWR/GND   Signal Ground
6            -         Not Used (Reserved)
7            -         Not Used (Reserved)
8            I/O       RS-485 B Signal Pin
9            I/O       RS-485 A Signal Pin
So my question is: OK, I know that pin 2 is used to receive data, but how am I going to decode the voltage stream into integers, for example, for my program? Also, I know that pin 4 supplies power to the sensor. How do I know how many volts it needs? In general, how am I going to learn all these details when the manufacturer doesn't provide them?
Do you think serial port analyzer software would help?
Thanks very much in advance.
You might want to search for "DE-9 pinout YourSensorNameHere" on Google, or this page might be of some use to you. With most RS-232 connections you only need pins 2, 3 and 5. Without more specifics about your sensor there isn't much SO can do for you.
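Once the datasheet tells you the baud rate and framing, reading the raw bytes on the PC side is fairly mechanical. Here's a minimal sketch, assuming a POSIX system, a USB-to-serial adapter appearing as /dev/ttyUSB0, and 9600 baud 8N1; those settings and the byte-to-integer decoding rule are placeholders that must come from your sensor's protocol documentation:

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    // Open the serial device (path is an assumption; adjust for your adapter/OS).
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                  // raw mode: no line editing or character translation
    cfsetispeed(&tio, B9600);         // assumed baud rate: check the sensor datasheet
    cfsetospeed(&tio, B9600);
    tio.c_cflag |= CLOCAL | CREAD;    // ignore modem control lines, enable the receiver
    tcsetattr(fd, TCSANOW, &tio);

    uint8_t buf[64];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n <= 0) break;
        // How bytes map to integers depends entirely on the sensor's protocol.
        // Purely as an example, assume two bytes per reading, little-endian:
        for (ssize_t i = 0; i + 1 < n; i += 2) {
            uint16_t value = uint16_t(buf[i] | (buf[i + 1] << 8));
            printf("%u\n", value);
        }
    }
    close(fd);
    return 0;
}

Note that your program never sees "volts" at all: the RS-232 transceiver (or USB-serial adapter) turns the line voltages into bytes for you, so all you have to parse is the byte stream.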
I've implemented a PIC32 as a USB sound card using USB Audio Class 1. I'm sending a sawtooth signal from the microcontroller to the PC (Windows 7, 64-bit) as 16-bit samples:
in decimal:
000
800
1600
2400
... and so on
Then I try recording the received audio using Audacity with the MME driver, as .wav or .raw.
I use MATLAB to open and inspect the data, and there I see values like:
000
799
1599
2400
..
The distortion varies from -1 to +1 bit per sample.
Does anyone have any idea where the problem might be?
The Windows audio drivers?
Since you receive the audio signal on the PC, play it back, and record it in software, the audio signal is converted from digital to analog and back to digital again. This introduces quantization error and noise, which is why you see the small difference between the two signals.
I solved my problem.
The problem was caused by the application I used to record the data and the method I used. I used Audacity, which supports the old Windows MME audio API and the DirectSound API. These are relatively high-level APIs, apparently, and they were the cause of the distortion.
About the Windows Core Audio APIs
Instead I used another program called Reaper, which has an option to record using ASIO or WASAPI. This solved my problem. I've checked every sample of a 2-hour .wav file using MATLAB, and it is completely bit-perfect.
It was probably some kind of quantization error, but it was caused by the API.
ASIO and WASAPI gave me bit-perfect sound; MME and DirectSound gave me a distorted signal.
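For anyone who wants to reproduce the bit-perfect check without MATLAB, here's a rough sketch that scans a raw capture and flags every sample that doesn't step by exactly 800 from the previous one. It assumes a mono, little-endian, 16-bit raw file and a generator that wraps modulo 65536; adjust the step size, channel count and wrap behaviour to match your actual test signal:

#include <cstdint>
#include <cstdio>

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s capture.raw\n", argv[0]); return 1; }
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }

    long samples = 0, errors = 0;
    uint16_t prev = 0;
    bool have_prev = false;
    unsigned char b[2];
    while (std::fread(b, 1, 2, f) == 2) {
        uint16_t s = uint16_t(b[0] | (b[1] << 8));     // little-endian 16-bit sample
        if (have_prev && uint16_t(s - prev) != 800)    // uint16_t arithmetic handles the wrap
            ++errors;
        prev = s;
        have_prev = true;
        ++samples;
    }
    std::fclose(f);
    std::printf("%ld samples checked, %ld step errors\n", samples, errors);
    return errors != 0 ? 1 : 0;
}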
I want to compare 2 audio files programmatically.
For example: I have a sound file in my iPhone app, and then I record another one. I want to check whether the existing sound matches the recorded sound or not (similar to voice recognition).
How can I accomplish this?
Have a server do the audio fingerprinting computation, which isn't really suitable for a mobile device anyway. Your mobile app then uploads the files to the server and gets the analysis result back for display. So I don't think the programming language used to implement it matters much. Below are a few audio fingerprinting implementations.
Java: http://www.redcode.nl/blog/2010/06/creating-shazam-in-java/
VC++: http://code.google.com/p/musicip-libofa/
C#: https://web.archive.org/web/20190128062416/https://www.codeproject.com/Articles/206507/Duplicates-detector-via-audio-fingerprinting
I know the question has been asked a long time ago, but a clear answer could help someone else.
The libraries from Echoprint (website: echoprint.me/start) will help you solve the following problems:
De-duplicate a big collection
Identify (Track, Artist ...) a song on a hard drive or on a server
Run an Echoprint server with your data
Identify a song on an iOS device
PS: For more music-oriented features, you can check the list of APIs here.
If you want to implement fingerprinting yourself, you should read the docs listed as references here, and probably have a look at musicip-libofa on Google Code.
Hope this will help ;)
Apply a band-pass filter to reduce noise
Normalize for amplitude
Calculate the cross-correlation
It can be fairly CPU-intensive.
The DSP details are covered in the well-known text:
Digital Signal Processing by
Alan V. Oppenheim and Ronald W. Schafer
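As a rough illustration of that recipe (not a production implementation): the sketch below assumes both clips are already decoded to mono float buffers and band-pass filtered, and it does the correlation by brute force, so it is O(N*M) and slow for long files. The normalisation makes the score amplitude-independent, which covers the "normalize for amplitude" step.

#include <cmath>
#include <cstdio>
#include <vector>

// Slide a short template over a longer signal and return the lag with the
// highest normalised cross-correlation; the score (roughly -1..1) goes to *best_score.
size_t bestLag(const std::vector<float>& signal,
               const std::vector<float>& templ,
               float* best_score)
{
    size_t best = 0;
    float best_val = -1.0f;
    for (size_t lag = 0; lag + templ.size() <= signal.size(); ++lag) {
        double dot = 0.0, e_sig = 0.0, e_tpl = 0.0;
        for (size_t i = 0; i < templ.size(); ++i) {
            dot   += signal[lag + i] * templ[i];
            e_sig += signal[lag + i] * signal[lag + i];
            e_tpl += templ[i] * templ[i];
        }
        // Dividing by the window energies makes the score independent of amplitude.
        float score = (e_sig > 0 && e_tpl > 0)
                          ? float(dot / std::sqrt(e_sig * e_tpl))
                          : 0.0f;
        if (score > best_val) { best_val = score; best = lag; }
    }
    if (best_score) *best_score = best_val;
    return best;
}

int main() {
    // Toy usage: in practice, load decoded PCM from the two audio files here.
    std::vector<float> signal = {0, 0, 0.5f, 1.0f, 0.5f, 0, -0.5f, -1.0f, -0.5f, 0, 0, 0};
    std::vector<float> templ  = {0.5f, 1.0f, 0.5f, 0, -0.5f, -1.0f, -0.5f};
    float score = 0.0f;
    size_t lag = bestLag(signal, templ, &score);
    std::printf("best lag = %zu, score = %.3f\n", lag, score);   // expect lag 2, score 1.000
    return 0;
}

Choosing a decision threshold on the returned score (and usually switching to an FFT-based correlation for speed) is then up to you; the next answer goes into the thresholding side.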
I think you may also try selecting a few-second sample from both audio tracks, normalising them in amplitude, reducing noise with a band-pass filter, and then using a correlator.
For instance, you may take a 5-second sample of one of the two and slide it over the second one, computing a cross-correlation for every time shift. (Be careful: if you take too small a packet you may get high correlation where none is expected, and you will suffer from side effects due to the cropping of the signal in the cross-correlation.)
Afterwards you can collect an array with all the results of the cross-correlation and take the index of the maximum.
You should then set a threshold experimentally to decide when to assume the packets are the same. This will change depending on the quality of the audio tracks you are comparing.
I implemented a correlator to receive and distinguish preambles in wireless communication. My script is actually written in MATLAB. If you are interested, I can try to find the common part and send it to you.
It would be too long to paste here in the forum. If you want it, just let me know and I will send it to you as soon as possible.
Cheers
I'm looking for some C/C++ code for VAD (Voice Activity Detection).
Basically, my application is reading PCM frames from the device. I would like to know when the user is talking. I'm not looking for any speech recognition algorithm but only for voice detection.
I would like to know when the user is talking and when he finishes:
bool isVAD(short* pcm,size_t count);
Google's open-source WebRTC code has a VAD module written in C. It uses a Gaussian Mixture Model (GMM), which is typically much more effective than a simple energy-threshold detector, especially in a situation with dynamic levels and types of background noise. In my experience it's also much more effective than the Moattar-Homayounpour VAD that Gilad mentions in their comment.
The VAD code is part of the much, much larger WebRTC repository, but it's very easy to pull it out and compile it on its own. E.g. the webrtcvad Python wrapper includes just the VAD C source.
The WebRTC VAD API is very easy to use. First, the audio must be mono 16-bit PCM with an 8 kHz, 16 kHz or 32 kHz sample rate. Each frame of audio that you send to the VAD must be 10, 20 or 30 milliseconds long.
Here's an outline of an example that assumes audio_frame is 10 ms (320 bytes) of audio at 16000 Hz:
#include "webrtc/common_audio/vad/include/webrtc_vad.h"
// ...
VadInst *vad = NULL;
WebRtcVad_Create(&vad);   // allocate a VAD instance
WebRtcVad_Init(vad);      // initialise it (WebRtcVad_set_mode() adjusts aggressiveness 0-3)
// 160 = number of 16-bit samples in 10 ms at 16 kHz (not bytes)
int is_voiced = WebRtcVad_Process(vad, 16000, audio_frame, 160);
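WebRtcVad_Process returns 1 for speech, 0 for non-speech and -1 on error, so wrapping it into the isVAD() shape from the question is straightforward. A rough sketch, assuming the instance has been created and initialised as above and the input is 16 kHz mono; note that count must correspond to a 10, 20 or 30 ms frame (160, 320 or 480 samples at 16 kHz):

// Hypothetical wrapper matching the signature from the question.
bool isVAD(short* pcm, size_t count) {
    // 1 = speech, 0 = no speech, -1 = invalid frame length or sample rate.
    return WebRtcVad_Process(vad, 16000, pcm, count) == 1;
}

When you're done, release the instance with WebRtcVad_Free(vad).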
There are open-source implementations in the Sphinx and FreeSWITCH projects. I think they are all energy-based detectors, so they won't need any kind of model; a minimal sketch of that idea follows the list below.
Sphinx 4 (Java but it should be easy to port to C/C++)
PocketSphinx
Freeswitch
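For reference, an energy-based detector along those lines can be as simple as the sketch below: compute the frame's RMS level and compare it with a threshold. This isn't the Sphinx or FreeSWITCH code, just an illustration of the idea; the threshold is a made-up value that has to be tuned for your microphone and gain, and real detectors usually track the noise floor adaptively and add some hangover time so speech isn't chopped up.

#include <cmath>
#include <cstddef>

// Minimal energy-based voice activity detection (illustrative only).
bool isVAD(short* pcm, size_t count) {
    if (count == 0) return false;
    double energy = 0.0;
    for (size_t i = 0; i < count; ++i)
        energy += double(pcm[i]) * double(pcm[i]);
    double rms = std::sqrt(energy / double(count));   // 0 .. 32767 for 16-bit PCM
    const double kThreshold = 500.0;                  // assumed value: tune for your setup
    return rms > kThreshold;
}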
How about LibVAD? www.libvad.com
Seems like that does exactly what you're describing.
Disclosure: I'm the developer behind LibVAD