How do I convert a waveform from Arduino into a CSV file that a user can download?

I am using the following code to plot the waveform on the serial monitor:
void setup() {
    Serial.begin(9600);
}

void loop() {
    int val = analogRead(A0);
    Serial.println(val);
}
From there, I want to somehow convert the signal I receive into a CSV file for further analysis. What steps should I take? Thanks
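One common approach is to have the sketch itself print one CSV row per sample and then capture the serial stream to a file on the PC. A minimal sketch along those lines (the millis() timestamp column and the header row are my additions, not something your code already produces):

void setup() {
    Serial.begin(9600);
    Serial.println("time_ms,value"); // CSV header row
}

void loop() {
    int val = analogRead(A0);
    Serial.print(millis()); // timestamp in milliseconds since reset
    Serial.print(',');
    Serial.println(val);    // one "time,value" row per sample
}

On the PC side you can log the port straight to a file, for example with PuTTY's session logging, or on Linux with something like cat /dev/ttyACM0 > capture.csv (the device name depends on your board). The resulting file opens directly in any spreadsheet tool.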

Related

ESP32 ModBus master half duplex read coil input registers

I'm trying to read a MODBUS sensor via an ESP32.
I'm using the following library: https://github.com/emelianov/modbus-esp8266
I have the following code:
#include <ModbusRTU.h>
#include <SoftwareSerial.h>

SoftwareSerial modBusSerial;
ModbusRTU modbus;

#define startReg 100
#define endReg 123

uint16_t res[endReg - startReg + 1];

// Callback to monitor errors in the modbus
bool cb(Modbus::ResultCode event, uint16_t transactionId, void* data) {
    if (event != Modbus::EX_SUCCESS) {
        Serial.print("Request result: 0x");
        Serial.print(event, HEX);
    }
    return true;
}

void setup() {
    Serial.begin(115200); // Default serial port (hardware serial)
    modBusSerial.begin(9600, SWSERIAL_8E1, MB_RX, MB_TX); // Modbus configuration: SWSERIAL_8E1 = 8 data bits, even parity, 1 stop bit
    modbus.begin(&modBusSerial);
    modbus.master();
    Serial.println("starting modbus...");
    while (true) {
        Serial.println(modBusSerial.read());
        res[endReg - startReg] = 0; // string terminator to allow using res as char*
        if (!modbus.slave()) {
            modbus.readIreg(16, startReg, res, endReg - startReg, cb);
        }
        modbus.task();
        Serial.print("result: ");
        Serial.println((char*) res);
        delay(1000); // every second
    }
}
This is the response I get (screenshot in the original post).
When I do the exact same thing in QModMaster, I do get the correct output. Does anyone have any idea what I'm doing wrong here?
These are the settings I use (screenshot in the original post).
I am aware of the "wrong" address in my code: I have two identical sensors, but one is connected to my computer while the other is connected to my ESP32.
Thanks in advance!
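Not from the thread, but for comparison: the master examples that ship with emelianov/modbus-esp8266 start a request only when no transaction is pending, then pump task() until the reply (or a timeout) arrives, and treat the result as 16-bit register values rather than a C string. A sketch of that pattern, reusing the slave ID 16 and the register range from the question:

void loop() {
    if (!modbus.slave()) { // true while a transaction is in progress
        modbus.readIreg(16, startReg, res, endReg - startReg + 1, cb);
        while (modbus.slave()) {
            modbus.task(); // drive the Modbus state machine until the transaction completes
            delay(10);
        }
        for (int i = 0; i <= endReg - startReg; i++) {
            Serial.print(res[i]); // raw 16-bit register values
            Serial.print(' ');
        }
        Serial.println();
    }
    delay(1000); // poll every second
}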

STM32 HAL UART receive by interrupt cleaning buffer

I'm working on an application where I process fixed-length commands received via UART.
I'm also using FreeRTOS, and the task that handles the incoming commands is suspended until the UART interrupt handler runs, so my code looks like this:
void USART1_IRQHandler()
{
    HAL_UART_IRQHandler(&huart1);
}

void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart){
    HAL_UART_Receive_IT(&huart1, uart_rx_buf, CMD_LEN); // re-arm reception after an error
}

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart){
    BaseType_t higherTaskReady = pdFALSE;
    HAL_UART_Receive_IT(&huart1, uart_rx_buf, CMD_LEN);  // restart interrupt reception
    xSemaphoreGiveFromISR(uart_mutex, &higherTaskReady); // release the semaphore so the command task can run
    portYIELD_FROM_ISR(higherTaskReady);
}
I am using the ErrorCallback in case an overflow occurs. Currently I successfully catch every correct command, even when the characters arrive one by one.
However, I'm trying to make the system more error-proof by handling the case where more characters are received than expected.
The command length is 4, but if I receive, for example, 5 chars, the first 4 are processed normally; when another command is received, reception starts from the last unprocessed char, so another 3 chars are needed before I can correctly process commands again.
Luckily, the ErrorCallback is called whenever I receive more than 4 chars, so I know when it happens, but I need a robust way of cleaning the UART buffer so the stale chars are gone.
One solution I can think of is receiving one char at a time until no more arrive, but is there a better way to simply flush the buffer?
Yes, the problem is the lack of a delimiter: every byte can carry a value from 0 to 255, so how do you detect the inconsistency?
My solution is a checksum byte in the protocol. If the checksum fails, a blocking-mode UART receive is called to move the rest of the data from the driver's buffer into a disposable buffer. In my example the fixed frame size of the protocol is 6 bytes, I use UART6, and I have a global variable RxBuffer. Here is the code:
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *UartHandle)
{
    if (UartHandle->Instance == USART6) {
        if (your_checksum_is_ok) {
            // You can process the incoming data
        } else {
            char TempBuffer;
            HAL_StatusTypeDef hal_status;
            do { // drain any stale bytes into a throwaway buffer
                hal_status = HAL_UART_Receive(&huart6, (uint8_t*)&TempBuffer, 1, 10);
            } while (hal_status != HAL_TIMEOUT);
        }
        HAL_UART_Receive_IT(&huart6, (uint8_t*)RxBuffer, 6);
    }
}

void HAL_UART_ErrorCallback(UART_HandleTypeDef *UartHandle) {
    if (UartHandle->Instance == USART6) {
        HAL_UART_Receive_IT(&huart6, (uint8_t*)RxBuffer, 6);
    }
}
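The answer leaves your_checksum_is_ok open. One simple convention (an assumption on my part, not something the answer specifies) is to make the last byte of the 6-byte frame the XOR of the five payload bytes:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Hypothetical frame layout: 5 payload bytes followed by 1 XOR checksum byte.
static bool checksum_ok(const uint8_t *buf, size_t len)
{
    uint8_t x = 0;
    for (size_t i = 0; i + 1 < len; i++) // every byte except the trailing checksum
        x ^= buf[i];
    return x == buf[len - 1];
}

In the callback above that would be if (checksum_ok((uint8_t*)RxBuffer, 6)). The sender has to compute the same XOR when it builds the frame.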

Play sounds synchronously using snd_pcm_writei

I need to play sounds upon certain events and want to minimize processor load, because some image processing is being done too and processor performance is limited.
For the present, I play only one sound at a time, and I do it as follows:
- at program startup, sounds are read from .wav files and the raw PCM data are loaded into memory;
- a sound device is opened (snd_pcm_open() in mode SND_PCM_NONBLOCK);
- a worker thread is started which continuously calls snd_pcm_writei() as long as it is fed with data (data->remaining > 0).
Somewhat condensed, the worker thread function is:
static void *Thread_Func (void *arg)
{
    thrdata_t *data = (thrdata_t *)arg;
    snd_pcm_sframes_t res;
    while (1)
    {
        pthread_mutex_lock (&lock);
        if (data->shall_stop)
        {
            data->shall_stop = false;
            snd_pcm_drop (data->pcm_device);
            snd_pcm_prepare (data->pcm_device);
            data->remaining = 0;
        }
        if (data->remaining > 0)
        {
            res = snd_pcm_writei (data->pcm_device, data->bufptr, data->remaining);
            if (res == -EAGAIN)
            {
                pthread_mutex_unlock (&lock); // don't hold the lock while retrying
                continue;
            }
            if (res < 0) // error
            {
                fprintf (stderr, "snd_pcm_writei() error: %s\n", snd_strerror (res));
                snd_pcm_recover (data->pcm_device, res, 0);
            }
            else // another chunk has been handed over to the sound hardware
            {
                data->bufptr += res * bytes_per_frame;
                data->remaining -= res;
            }
            if (data->remaining == 0) snd_pcm_prepare (data->pcm_device);
        }
        pthread_mutex_unlock (&lock);
        usleep (sleep_us); // processor relief
    }
} // Thread_Func
OK, so this works well for one sound at a time. How do I play several at once?
I found dmix, but it seems to be a user-level tool for mixing streams coming from separate programs.
Furthermore, I found the Simple Mixer Interface in the ALSA Project C Library Interface, without any hint, example, or tutorial on how to use all these functions, each described by a single line of text.
As a last resort I could compute the mean of all the buffers to be played simultaneously. So far I've been avoiding that, hoping that an ALSA solution might use sound hardware resources and thus relieve the main processor.
I'd be thankful for any hint about how to continue.
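For the software fallback mentioned above (mixing in the application and feeding a single snd_pcm_writei() stream), here is a sketch for signed 16-bit frames. Summing with saturation is the usual choice rather than a plain mean, since averaging halves the volume whenever only one sound is active:

#include <stddef.h>
#include <stdint.h>

// Mix two signed 16-bit sample buffers into dst, clamping to avoid wrap-around.
static void mix_s16 (int16_t *dst, const int16_t *a, const int16_t *b, size_t nsamples)
{
    for (size_t i = 0; i < nsamples; i++)
    {
        int32_t sum = (int32_t)a[i] + (int32_t)b[i]; // widen to avoid overflow
        if (sum >  32767) sum =  32767; // clamp to the int16_t range
        if (sum < -32768) sum = -32768;
        dst[i] = (int16_t)sum;
    }
}

This runs entirely on the CPU, so it does not give you the hardware offload you were hoping for, but for a handful of simultaneous sounds the cost is only a few additions per sample.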

I want to make a stream of small data by calling it again and again

I have a question. I've got a small CSV data set that I'm able to run on Flink with the help of Kafka. My question is: can I feed the same data again and again using a window and trigger, or will it consume my data only once?
1,35
2,45
3,55
4,65
5,555
This is the data that I want to feed repeatedly. Though I myself don't think it will work, it's better to get a second opinion, as I'm a beginner. Thanks for the help.
I'm not sure what you mean by calling the data again and again, but you can create a stream of that data in Flink using a SourceFunction. For example, the following source creates a stream from that CSV data and emits it every second:
import java.util.concurrent.TimeUnit;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<String> csvStream = env.addSource(new SourceFunction<String>() {
    @Override
    public void run(SourceContext<String> sourceContext) throws Exception {
        String data = "1,35\n" +
                      "2,45\n" +
                      "3,55\n" +
                      "4,65\n" +
                      "5,555";
        while (true) { // re-emit the same records every second, forever
            sourceContext.collect(data);
            TimeUnit.SECONDS.sleep(1);
        }
    }

    @Override
    public void cancel() {
    }
});

Is it possible to get audio data from the user's audio device using NAudio to pass to Unity3D for visualization

What I am trying to do is grab the raw audio from the user's audio output, derive an audio spectrum and output array similar to Unity3D's, and pass that data to my visualizer.
So there are a couple of things I need to know:
Can I grab the raw audio from the user's device? What I have found so far suggests yes, using WaveIn, which by default gets audio data from all devices.
Can I get the audio spectrum and output from that data, similar to Unity3D's GetOutputData() and GetSpectrumData()? The NAudio demo provides similar functionality, but not exactly what I want.
I am a newbie at coding with NAudio. Unity's API and extremely thorough documentation make things easier, whereas NAudio seems to have only a couple of tutorials and examples, nothing of what I need, and no API reference. I will eventually figure it out, but what I need to know is whether what I'm attempting is possible; beyond that, any help is appreciated.
Below is my attempt at problem 1, which produces seemingly random data that I've tried to make sense of using a WaveFileReader, but that crashes.
using UnityEngine;
using System.Collections;
using System;
using NAudio.Wave;
using NAudio.CoreAudioApi;
using NAudio.Utils;

public class myNAudio : MonoBehaviour {
    private WaveIn waveInStream;
    private WaveFileReader reader;
    private WaveStream readerStream;

    void waveInStream_DataAvailable(object sender, WaveInEventArgs e){
        //reader.Read(e.Buffer, 0, e.BytesRecorded);
        //Debug.Log(e.Buffer);
        float tempDB = 0;
        for(int i = 0; i < e.Buffer.Length; i++){
            //Debug.Log(i + " = " + e.Buffer[i]);
            tempDB += (float)e.Buffer[i] / 255;
        }
        Debug.Log(e.Buffer.Length + ", " + tempDB);
    }

    void OnApplicationQuit(){
        waveInStream.StopRecording();
        waveInStream.Dispose();
        waveInStream = null;
    }

    void OnDisable(){
        waveInStream.StopRecording();
        waveInStream.Dispose();
        waveInStream = null;
    }

    // Use this for initialization
    void Start () {
        waveInStream = new WaveIn();
        waveInStream.DeviceNumber = 0;
        waveInStream.DataAvailable += new EventHandler<WaveInEventArgs>(waveInStream_DataAvailable);
        waveInStream.StartRecording();
    }

    // Update is called once per frame
    void Update () {
    }
}