Raspberry Pi iBeacon Scan Parsing Response

I have used the Raspberry Pi to detect iBeacons, working from the tutorial provided by Radius Networks here. I made a small script that first turns on lescan, redirecting its output to /dev/null, and then starts hcidump, piping its output to the parsing script.
The output shown by the script lags. The advertisement packets are transmitted on the order of milliseconds, but the results appear on the terminal much more slowly; consequently, the command keeps showing new output even after you turn off the transmitter. My understanding is that parsing takes its time while the hcidump data waits in the sed pipeline.
To trigger actions based on proximity, parsing time has to be minimal so that all packets are parsed as they are received.
Have I missed something, or is parsing faster if one uses the Bluetooth development kit provided by Radius Networks? If so, what makes it faster?
Thanks,

You are correct, the output of the script does lag behind when detecting a large number of iBeacon advertisements. The parsing script was written in bash for simplicity, and its speed suffers as a result: piping to sed to store each identifier is slow and inefficient. The script was rewritten in Ruby for the Beacon Development Kit (now called the PiBeacon) and is much faster and more responsive. Ruby and other high-level programming languages are better suited to parsing and converting the raw iBeacon packet data. A disk image of the development kit with this new script is available to download here.
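For illustration, this is the kind of single-process rewrite that removes the sed bottleneck. The sketch below is untested and is not the PiBeacon script itself; it assumes hcidump --raw style output on stdin and the standard iBeacon advertisement layout (Apple manufacturer prefix 4C 00 02 15, then 16 UUID bytes, 2 major bytes, 2 minor bytes, and a TX power byte, with the RSSI as the last byte of the dump). It's written here in C++, though the same structure works in Ruby or Python:

    // Reassemble hcidump packets (a '>' starts a packet; indented lines
    // continue it) and pull out the iBeacon fields in a single process,
    // instead of round-tripping every line through sed.
    #include <iostream>
    #include <sstream>
    #include <string>

    static void handlePacket(const std::string &hex) {
        // iBeacon payloads carry the Apple prefix 4C 00 02 15, followed by
        // 16 UUID bytes, 2 major bytes, 2 minor bytes, and a TX power byte.
        std::size_t p = hex.find("4C 00 02 15");
        if (p == std::string::npos) return;

        std::istringstream in(hex.substr(p + 11));
        std::string b, uuid, major, minor, txpower;
        for (int i = 0; i < 16 && in >> b; ++i) uuid += b;
        for (int i = 0; i < 2 && in >> b; ++i) major += b;
        for (int i = 0; i < 2 && in >> b; ++i) minor += b;
        in >> txpower;
        std::string rssi = hex.substr(hex.size() - 2);  // last byte of the dump

        std::cout << "UUID=" << uuid << " major=0x" << major
                  << " minor=0x" << minor << " tx=0x" << txpower
                  << " rssi=0x" << rssi << std::endl;
    }

    int main() {
        std::string line, packet;
        while (std::getline(std::cin, line)) {
            if (!line.empty() && line[0] == '>') {  // a new packet begins
                if (!packet.empty()) handlePacket(packet);
                packet = line.substr(1);
            } else {
                packet += line;                     // continuation line
            }
        }
        if (!packet.empty()) handlePacket(packet);
        return 0;
    }

Compiled to, say, ibeacon_parse (a hypothetical name), it would be run in place of the sed pipeline: sudo hcidump --raw | ./ibeacon_parse, after starting hcitool lescan as in the original script.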
You can also try implementing another iBeacon Raspberry Pi scanning script, written in Python, that can be found here. I have yet to try this out myself, but it appears to be another good solution.

Related

GPSD not seeing complete data

I'm trying to get a Raspberry Pi with a 4" touch screen set up to wardrive and capture some WiFi signals. The piece that is giving me issues is getting the GPS working.
I'm trying to use a GPS module that came with Microsoft Streets & Trips, which I got a long time ago.
lsusb shows the device as "Bus 001 Device 007: ID 1546:01a5 U-Blox AG [u-blox 5]"
dmesg | grep tty shows:
[ 8.276615] cdc_acm 1-1.1:1.0: ttyACM0: USB ACM device
[ 344.931792] cdc_acm 1-1.1:1.0: ttyACM0: USB ACM device
I do see a data stream if I issue cat /dev/ttyACM0
Running sudo gpsd /dev/ttyACM0 -F /var/run/gpsd.sock and then gpsmon /dev/ttyACM0 gives the output shown in the first screenshot (not reproduced here).
But then when I try cgps -s I get the output shown in the second screenshot.
I seem to be getting some data but no lat/long/time data.
Should I conclude that this GPS module is not supported?
Are you sure your receiver has a satellite fix? Also, do you have a log of the GPS data stream? Do you know which specific model of receiver you are using?
From your first screenshot it looks like you are definitely receiving data from the GPS, as it's recognizing several different NMEA sentences. On top of that, the first screenshot shows what seems to be a valid GPGLL sentence (I haven't confirmed the checksum):
$GPGLL,,,,,224538.00,V,N*40
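As an aside, the checksum is easy to confirm mechanically: an NMEA checksum is the XOR of every byte between the $ and the *, compared against the two hex digits after the *. A quick C++ check (for the sentence above it computes 0x40, so the checksum does in fact match):

    #include <cstdio>
    #include <string>

    // Returns true if the sentence's XOR checksum matches its *XX suffix.
    static bool nmeaValid(const std::string &s) {
        std::size_t star = s.find('*');
        if (s.empty() || s[0] != '$' || star == std::string::npos ||
            star + 1 >= s.size()) return false;
        unsigned sum = 0;
        for (std::size_t i = 1; i < star; ++i) sum ^= (unsigned char)s[i];
        return sum == std::stoul(s.substr(star + 1), nullptr, 16);
    }

    int main() {
        const std::string s = "$GPGLL,,,,,224538.00,V,N*40";
        std::printf("%s -> %s\n", s.c_str(),
                    nmeaValid(s) ? "checksum valid" : "bad checksum");
    }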
My initial hunch would be that the GPS receiver does not have a satellite fix. This is based on:
1. The empty GPGLL sentence. The only populated fields in the sentence above are the UTC time and Status fields, and the Status value is V, which means the data is invalid.
2. The Status of NO FIX in the second screenshot.
The various boxes in the first screenshot give me the impression that GPSD is correctly parsing the various NMEA packets. For example, the GSV box lists various satellites in view.
The lsusb output reports the device as a u-blox receiver. U-blox receivers generally have solid support from GPSD, so I'd be surprised if this receiver isn't supported (but anything is possible). Without knowing the specific model it's hard to say anything definitive.
I've only worked with a couple of different receivers, but my general experience is that when they don't have a fix on startup they'll send empty or partial packets. The date/time data in those packets is probably due to a real-time clock (RTC) on the receiver. RTCs are common on GPS receivers, as they often drastically improve the Time To First Fix (TTFF). So it makes sense that you have a time, but that it's marked as invalid.
Recommendations
The fact that your receiver is reporting satellites in view (though none are being used) and that the Status is NO FIX is especially strong evidence to me that your receiver probably just doesn't have a fix. Try moving it somewhere with a better view of the sky. Also ensure that if it requires any kind of external antenna hardware, you connect that. Lastly, you might have to wait a while to get a fix: if it's been a while since you last used the device, you could be looking at a cold start with a TTFF of upwards of 10-20 minutes.
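If you'd rather script this check than watch cgps, gpsd speaks a documented JSON protocol on TCP port 2947: send ?WATCH={"enable":true,"json":true} and read the TPV reports, where a "mode" of 2 or 3 means a 2D or 3D fix. A rough C++ sketch (minimal error handling, and crude string matching where a real program would use a JSON parser):

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>
    #include <string>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(2947);                 // gpsd's default port
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        if (connect(fd, (sockaddr *)&addr, sizeof addr) != 0) {
            std::perror("connect");
            return 1;
        }
        const char *watch = "?WATCH={\"enable\":true,\"json\":true}\n";
        write(fd, watch, std::strlen(watch));

        std::string buf;
        char chunk[4096];
        ssize_t n;
        while ((n = read(fd, chunk, sizeof chunk)) > 0) {
            buf.append(chunk, n);
            std::size_t nl;
            while ((nl = buf.find('\n')) != std::string::npos) {
                std::string line = buf.substr(0, nl);
                buf.erase(0, nl + 1);
                // One JSON object per line; only TPV reports carry the fix mode.
                if (line.find("\"class\":\"TPV\"") != std::string::npos) {
                    bool fix = line.find("\"mode\":2") != std::string::npos ||
                               line.find("\"mode\":3") != std::string::npos;
                    std::printf("%s\n", fix ? "FIX" : "no fix yet");
                }
            }
        }
        close(fd);
        return 0;
    }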

STM32F103 blue pill ADC example

After searching for a veeery long time (more than 3 months) in all the main places to get info, and after reading the chip's datasheet, I would like to ask the STM32 specialists in here if there is an example of using the ADC, ideally with DMA, from the Arduino IDE. I did see some incomplete examples here and for other compiler/IDE environments, but maybe I haven't yet struck the good luck of finding the right info (that even I can understand) for what I need.
Your help is much appreciated.
I want to sample audio data: one channel, 30 kHz or more, 12 bits, and an interrupt each time 16 samples have been collected in an array, so the data can be handled.
I have seen the pigOscope code (it uses analogRead) and the info about analogRead, which states that this call is not meant for higher sampling speeds. So that got me sort of into conflict with myself... Who can break me out of my endless brain loop?
Greetings ... Eric.
I have seen the pigOscope code (it uses analogRead)
I wrote the pigOscope code, with a lot of input from others at stm32duino.com, and if you take the time to read the code, which I will grant you is somewhat simplistic, you will discover that analogRead is only used for triggering. The code uses DMA to do the high-speed transfer.
I completely agree with the comment that you don't need the Arduino IDE; you could "borrow" the DMA code and tailor it to your needs. However, if you want a quick-and-dirty coding and prototyping environment, then there is nothing wrong with using the Arduino IDE. Take a trip to the stm32duino.com site and you will see that I, along with a lot of the other developers, use the Arduino IDE, and Eclipse, and Atollic, and roll our own batch files, use vi, etc. etc.
It all depends on what you are trying to do, and in many cases using the Arduino IDE gets you to a working result a lot faster than learning an entirely new IDE just for one task.
But then again, I'm firmly on the side of vi in the vi/emacs wars, so what the heck do I know. Just don't use nano. ;¬)
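To give a concrete starting point for the DMA route, below is a rough, untested sketch for the Arduino IDE with the libmaple-based Arduino_STM32 ("Roger's") core -- the same family of calls pigOscope uses. Names like adc_set_sample_rate and dma_setup_transfer come from libmaple's adc.h/dma.h; check them against your core version. Note that the ADC free-runs here at its maximum rate; pacing it at exactly 30 kHz is normally done by triggering conversions from a timer instead. The circular buffer holds 32 samples, so the half-transfer and transfer-complete interrupts each hand you 16 fresh samples:

    #include <libmaple/adc.h>
    #include <libmaple/dma.h>

    #define BUF_LEN 32                      // 2 x 16 samples (double buffer)
    volatile uint16 adc_buf[BUF_LEN];
    volatile uint16 *ready_block = NULL;    // set in the ISR, consumed in loop()

    // Fires at half-transfer (first 16 samples ready) and at
    // transfer-complete (second 16 samples ready).
    void dma_isr(void) {
      if (dma_get_irq_cause(DMA1, DMA_CH1) == DMA_TRANSFER_HALF_COMPLETE)
        ready_block = &adc_buf[0];
      else
        ready_block = &adc_buf[BUF_LEN / 2];
    }

    void setup() {
      pinMode(PA0, INPUT_ANALOG);

      adc_calibrate(ADC1);
      adc_set_sample_rate(ADC1, ADC_SMPR_13_5);   // sample time vs. accuracy
      adc_set_reg_seqlen(ADC1, 1);                // one channel in the sequence
      ADC1->regs->SQR3 = PIN_MAP[PA0].adc_channel;
      ADC1->regs->CR2 |= ADC_CR2_CONT | ADC_CR2_DMA;  // continuous + DMA requests

      dma_init(DMA1);                             // ADC1 is served by DMA1 channel 1
      dma_setup_transfer(DMA1, DMA_CH1,
                         &ADC1->regs->DR, DMA_SIZE_16BITS,
                         (void *)adc_buf,  DMA_SIZE_16BITS,
                         DMA_MINC_MODE | DMA_CIRC_MODE |
                         DMA_HALF_TRNS | DMA_TRNS_CMPLT);
      dma_set_num_transfers(DMA1, DMA_CH1, BUF_LEN);
      dma_attach_interrupt(DMA1, DMA_CH1, dma_isr);
      dma_enable(DMA1, DMA_CH1);

      // Select SWSTART as the external event (EXTSEL = 0b111) and kick it off.
      ADC1->regs->CR2 |= ADC_CR2_EXTSEL | ADC_CR2_EXTTRIG;
      ADC1->regs->CR2 |= ADC_CR2_SWSTART;
    }

    void loop() {
      if (ready_block) {
        volatile uint16 *block = ready_block;
        ready_block = NULL;
        // ...process the 16 samples in block[0..15] here...
      }
    }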

Recording multi-channel audio input in real-time

I am trying to perform Time Difference of Arrival in real-time using the PS3 Eye. Since it has a built-in 4-microphone array, I've successfully rearranged the array into a square array and cross-correlated the signals using MATLAB to obtain a relatively accurate TDOA algorithm. However, so far I've been recording the signal, saving the files (4 individual files, one for each microphone in the array), and then feeding those files into MATLAB to read after the fact.
My problem is: MATLAB doesn't recognize the PS3 Eye's microphones separately; it only recognizes the device as a whole. So far, Audacity is one of the few programs that actually handles it well, but I am inexperienced with the program and don't know its real-time capabilities. Does anyone have suggestions as to how I can perform real-time signal analysis in this manner? If using something other than the PS3 Eye would work better, I am open to suggestions. Thanks.
I know very little about MATLAB or the PS3 Eye, but various hardware microphones allow you to capture a single audio stream containing multiple (typically 2) channels. The audio data will come to you in frames, each frame containing a single sample for each channel.
I'm not really sure what you mean by "recognizes it as a whole", but I assume you mean MATLAB is mixing the channels so that the device only produces one usable channel. If you can capture the channels to file, and they all originate from the same device (i.e. the same hardware clock), you should be fine, except that this solution is not "real-time".
There is a similar discussion on Sound Exchange which ends up suggesting the Microcone. There are a variety of other products, from microphone arrays to digital mixers for analog mic sources, also, but your question seems to be mainly about how to get the data with software.
In short, make sure you are seeing a single device with multiple channels. This will ensure each channel uses the same hardware clock and will prevent drift issues.
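To make the single-device point concrete, here is a minimal, untested C++ sketch using PortAudio (which another answer below also touches on). The default-device choice, paInt16 format, and 16 kHz rate are assumptions; in real code, enumerate the devices, pick the Eye explicitly, and check that it reports four input channels:

    #include <portaudio.h>
    #include <cstdio>

    // Interleaved frames arrive as ch0,ch1,ch2,ch3 per frame.
    static int onAudio(const void *input, void *, unsigned long frames,
                       const PaStreamCallbackTimeInfo *, PaStreamCallbackFlags,
                       void *) {
        const short *samples = static_cast<const short *>(input);
        // De-interleave and feed the TDOA cross-correlation here.
        (void)samples; (void)frames;
        return paContinue;
    }

    int main() {
        Pa_Initialize();

        PaStreamParameters in{};
        in.device = Pa_GetDefaultInputDevice();  // pick the Eye explicitly in practice
        if (in.device == paNoDevice) { Pa_Terminate(); return 1; }
        in.channelCount = 4;                     // all four mics in ONE stream
        in.sampleFormat = paInt16;
        in.suggestedLatency =
            Pa_GetDeviceInfo(in.device)->defaultLowInputLatency;

        PaStream *stream = nullptr;
        if (Pa_OpenStream(&stream, &in, nullptr, 16000.0 /* assumed rate */,
                          256, paClipOff, onAudio, nullptr) == paNoError) {
            Pa_StartStream(stream);
            Pa_Sleep(5000);                      // capture for five seconds
            Pa_StopStream(stream);
            Pa_CloseStream(stream);
        }
        Pa_Terminate();
        return 0;
    }

Because all four channels come through one stream, every frame is sampled against the same hardware clock, which is exactly what TDOA needs.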
This is just a wild guess, as I don't know about MATLAB's real-time input options.
Maybe try REAPER (http://www.reaper.fm/). It has great multi-track capabilities and you can extend it (I think the scripting language is Python). Nice documentation and third-party contributions, plus OSC and ReWire support. So maybe you could think of routing the audio to REAPER, doing some data normalization there in Python, and then routing the data to MATLAB.
Or you could use Pure Data, which is open source and very open, with lots of patches (basic processing units) that you could probably put together.
HTH
BTW, I am in no way affiliated with REAPER or PD.
EDIT: You might also want to consider SuperCollider (http://supercollider.github.io/) or ChucK (http://chuck.cs.princeton.edu/).
Here's a lead, but I haven't been able to test it, yet.
On Windows, you can record a single 4-track Ogg audio file from the Eye with Audacity (using the WASAPI driver selection).
As of 23 Jul 2014, pa-wavplay for 32-bit and 64-bit MEX supports WASAPI. You will have to rebuild the PortAudio library to select the WASAPI interface, as described here, to get all four tracks in MATLAB (on Windows).
Sadly, if you're not on Windows, I don't have any suggestions. Adjusting the PortAudio build might help, but I only know that WASAPI works with the Eye.

Can an NXT brick really be programmed with Enchanting, an adaptation of Scratch?

I want my students to use Enchanting, a derivative of Scratch, to program Mindstorms NXT robots to drive a pre-programmed course, follow a line, and avoid obstacles (two-state, five-state, and proportional line following). Is Enchanting developed enough for middle school students to program these behaviors?
I'm the lead developer on Enchanting, and the answer is: Yes, definitely.
The video demoing Enchanting 0.0.4 shows how to make a proportional line follower (and you could extend it to use a PID controller, if you wish). If you download the latest version, 0.2.2, it includes a sample showing a two-state line follower (and you can see a video and download code here). You, or, with some instruction and playing around, a middle-schooler, can readily create a program with n states, and, especially if you follow a behaviour-oriented approach, you can avoid obstacles at the same time.
As far as I know, yes and no.
What Scratch does with its sensor board, LEGO WeDo, and the S4A (Scratch for Arduino) version, as well as, I believe, with the NXT, is basically use its remote sensor protocol: it exchanges messages over TCP port 42001.
A client written to interface that port with an external system allows communication of messages and sensor data. Scratch can pick up sensor state and pass info to actuators every 75 ms, according to the S4A discussion.
But that isn't the same as programming the controller: we control the system remotely, which is already very nice, but we're not downloading a program into the controller (the NXT brick) that your robot can use to act independently when it is disconnected.
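For reference, the remote sensor protocol itself is tiny: every message is a four-byte big-endian length header followed by the message text, such as sensor-update "distance" 42 or broadcast "obstacle". A minimal, untested C++ client sketch (POSIX sockets; Scratch must be running with remote sensor connections enabled):

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstring>
    #include <string>

    static void sendMsg(int fd, const std::string &msg) {
        std::uint32_t len = htonl(static_cast<std::uint32_t>(msg.size()));
        write(fd, &len, 4);                 // 4-byte big-endian length header
        write(fd, msg.data(), msg.size());  // then the message itself
    }

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(42001);       // Scratch's remote sensor port
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        if (connect(fd, (sockaddr *)&addr, sizeof addr) != 0) return 1;

        // Sensor names and values here are made up for the example.
        sendMsg(fd, "sensor-update \"distance\" 42");
        sendMsg(fd, "broadcast \"obstacle\"");
        close(fd);
        return 0;
    }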
Have you looked into 12blocks (http://12blocks.com/)? I have been using it for the Propeller and it's great, and it has an NXT option (which I have not tested).
It's an old post, but I'll answer anyway.
Enchanting looks interesting, and seems to still be an active project.
I would actually take the original Scratch (1.4), as it is more familiar and reliable.
It's easy to interface hardware with Scratch using the remote sensor protocol. I use a simple serial interface (over a USB adapter) which provides 3 digital inputs and 3 digital outputs. With that, it's possible to implement projects such as traffic lights and light/water/heat sensors using only LEDs, resistors, reed contacts, photo-transistors, switches, and PTSs.
The cost is under $5.
For some motor-based projects, like factory belts or an elevator, not much more is required: a battery and a couple of transistors, relays, or a motor driver.

iPhone Remote IO Issues

I've been playing around with the SDK recently, and I had an idea to just build a personal autotuner (because I am just as awesome as T-Pain).
Intro aside, I wanted to attach a high-quality microphone to the headphone jack, have my audio processed in a callback, and then copy it to the output buffer. This has several implications:
When my audio-in is being routed through the built-in microphone, I need to be able to process this input, and send it once my input has stopped (this works).
When my audio-in is being routed through the microphone-in input from the headset jack, I want the output to be sent immediately.
Routing, however, doesn't seem to work properly when using AudioSession modes and overrides, which should technically allow you to reroute output to the iPhone speakers no matter where the input is coming from. This is documented to work, but in practice it doesn't really.
Remote IO, however, is not documented at all. For anyone with experience using Remote IO audio units: can you give me a reasonable high-level overview of how to do this properly? I have been using the aurioTouch example code, but I am running into error codes like -50 and -10863, none of which are documented.
Thanks in advance.
The aurioTouch example implements RemoteIO play-through: it simply calls AudioUnitRender in the output render callback, and you can modify the samples before passing them on.
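In outline, that callback looks something like the following. This is a sketch of the aurioTouch pattern rather than a drop-in replacement: it assumes the callback was installed on RemoteIO's output element with the RemoteIO unit itself passed as inRefCon, and that the stream formats on both elements match. Bus 1 is the microphone (input) element, bus 0 the speaker (output) element:

    #include <AudioUnit/AudioUnit.h>

    static OSStatus playThroughCallback(void *inRefCon,
                                        AudioUnitRenderActionFlags *ioActionFlags,
                                        const AudioTimeStamp *inTimeStamp,
                                        UInt32 inBusNumber,
                                        UInt32 inNumberFrames,
                                        AudioBufferList *ioData) {
        // inRefCon is assumed to carry the RemoteIO unit itself, wired up
        // when this callback was installed on the output element.
        AudioUnit rio = (AudioUnit)inRefCon;

        // Pull the freshly captured mic samples from input element 1
        // straight into the output buffers Core Audio handed us.
        OSStatus err = AudioUnitRender(rio, ioActionFlags, inTimeStamp,
                                       1, inNumberFrames, ioData);
        if (err != noErr) return err;  // e.g. -50 if formats/buffers mismatch

        // Modify the samples in ioData here (this is where the autotune
        // processing would go) before they play out of element 0.
        return noErr;
    }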
NB: this trick does not seem to work if you port the code to OS X-style Core Audio. There, 99% of the time, you need to create two AUHALs (RemoteIO-a-likes) and pass the samples between them.