Visualizing Sine Wave with Processing

I have 1000+ rows of Sine Wave data which change with time, and I want to visualize them with the Processing language. My aim is to create an animation which draws a Sine Wave over time, starting from the vertical middle of the window [height/2]. I also want to show only a 1-second window of the wave: after 1 second, the first coordinate should disappear, and so forth.
How can I achieve that?
Thanks
Sample data:
TIME X Y
0.1333 0 0
0.2666 0.1 0.0999983333
0.3999 0.2 0.1999866669
0.5332 0.3 0.299955002
0.6665 0.4 0.3998933419
0.7998 0.5 0.4997916927
0.9331 0.6 0.5996400648
1.0664 0.7 0.6994284734

The way you'd achieve that is to split this project into two tasks:
1. load & parse the data
2. update time and render the data
To make sure part 1 goes smoothly, it's probably best to make sure your data is easy to parse first. The sample data looks like a table/spreadsheet, but it's not formatted with a standard separator (e.g. comma or tab). You can fiddle with things when you parse, but I recommend starting with clean data. For example, if you plan on using space as a separator:
TIME X Y
0.1333 0.0 0
0.2666 0.1 0.0999983333
0.3999 0.2 0.1999866669
0.5332 0.3 0.299955002
0.6665 0.4 0.3998933419
0.7998 0.5 0.4997916927
0.9331 0.6 0.5996400648
1.0664 0.7 0.6994284734
Once that's done, you can use loadStrings() to load the data and split() to break each row into 3 elements, which can be converted from string to float.
Once you've got values to use, you can store them. You can either create three arrays, each holding a field from the loaded data (one for all the X values, one for all the Y values and one for all the time values), or you can cheat and use a single array of PVector objects. Although PVector is meant for 3D math/linear algebra, you have 2D coordinates, so you can store time as the 3rd 'dimension'/component.
Part two revolves mostly around updating based on time, and this is where millis() comes in handy. You can check the amount of time passed between updates and if it's greater than a certain (delay) value, it's time for another update (of the frame/data row index).
The last part you need to worry about is rendering the data on screen. Luckily, in your sample data the coordinates are normalized (between 0.0 and 1.0), which makes them easy to map to the sketch dimensions (by using simple multiplication). Otherwise, the map() function comes in handy.
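For example (an illustrative fragment; the -1.0 to 1.0 range on the second value is hypothetical, your normalized data wouldn't need map() at all):
float x = 0.3;                               // normalized, so simple multiplication is enough
float xPixel = x * width;
float y = -0.7;                              // a hypothetical value in the range -1.0 to 1.0
float yPixel = map(y, -1.0, 1.0, 0, height); // map() rescales any input range to any output range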
Here's a sketch to illustrate the above; data.csv is a text file containing the formatted sample data from above:
PVector[] frames; //keep track of the frame data (position (x,y) and time (stored in PVector's z property))
int currentFrame = 0, totalFrames; //keep track of the current frame and total frames from the csv
int now, delay = 1000; //keep track of time and a delay to update frames

void setup() {
  //handle data
  String[] rows = loadStrings("data.csv"); //load data
  totalFrames = rows.length - 1; //get total number of lines (-1 = sans the header)
  frames = new PVector[totalFrames]; //initialize/allocate frame data array
  for (int i = 1; i <= totalFrames; i++) { //start parsing data (from 1, skipping the header)
    String[] frame = rows[i].split(" "); //chop each row into 3 strings (time,x,y)
    frames[i - 1] = new PVector(float(frame[1]), float(frame[2]), float(frame[0])); //note the i-1 to get back to a 0-based index, and how the PVector is initialized 1,2,0 (x,y,time)
  }
  now = millis(); //initialize this to keep track of time
  //render setup, up to you
  size(400, 400);
  smooth();
  fill(0);
  strokeWeight(15);
}

void draw() {
  //update
  if (millis() - now >= delay) { //if the time since the last update is greater than the delay (i.e. every 'delay' ms)
    currentFrame++; //update the frame index
    if (currentFrame >= totalFrames) currentFrame = 0; //reset to 0 if we reached the end
    now = millis(); //finally update our timer/stop-watch variable
  }
  PVector frame = frames[currentFrame]; //get the data for the current frame
  //render
  background(255);
  point(frame.x * width, frame.y * height); //draw
  text("frame index: " + currentFrame + " data: " + frame, mouseX, mouseY);
}
There are a couple of extra notes needed:
You mentioned moving to the next coordinate after 1 second. From what I can see in your sample data there are about 8 updates per second, so a delay of 1000/8 = 125 ms would probably work better. It's up to you how you handle the timing though.
I assume your full set includes data for a sine wave movement. I've mapped to the full sketch dimensions, but in the render part of the draw() loop you can map however you like (e.g. including a height/2 offset, etc.); one way of doing the height/2 offset and the 1-second trail is sketched below. Also, if you're not familiar with sine waves, have a look at these Processing resources: Daniel Shiffman's SineWave sample, Ira Greenberg's trig tutorial.
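For the "show only the last 1 second" part of the question, here's a rough, untested sketch of a render helper you could swap in (drawTrail is a hypothetical name; it assumes the frames array above and the 1.0-second window and height/2 centring from the question):
void drawTrail(float currentTime) {
  for (int i = 0; i < totalFrames; i++) {
    PVector f = frames[i];
    if (f.z > currentTime - 1.0 && f.z <= currentTime) { //f.z stores the time stamp
      //centre the wave on height/2; with y in the -1.0 to 1.0 range this maps to the full sketch height
      point(f.x * width, height/2 - f.y * (height/2));
    }
  }
}
Calling drawTrail(frames[currentFrame].z) from draw() instead of the single point() call draws the last second of the wave, and older points simply stop being drawn.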

How to get some objects to move randomly in a space, let's say a grid of [-5,5]

I need to move some objects, let's say 50, in a space (i.e. a grid of [-5,5]), making sure that if the grid is divided into 100 portions, most of the portions (90% or more) are visited at least once by some object.
Constraints:
objects should move in random directions in the grid, changing their velocities frequently (change speed and direction in each iteration)
I was thinking of bouncing balls (BUT moving in random directions even when not hit by anything in space, not the way a real ball moves): if we could release them into the space at different positions with different forces, and each time they hit each other (or come closer than a specific distance) they move off in different directions with different speeds, that could get us near a 90% hit rate of portions in the grid.
I also need to make sure the objects don't leave the grid (I could set lb and ub limits and push them back in whenever they try to leave).
My code is different from the idea I have written above ...
ux = 1;
uy = 15;
g = 9.81;
t = 0;
x(1) = 0;
y(1) = 0;
tf = 2.0 * uy / g; % time of flight back to the ground
dt = tf / 20;      % time increment - taking 20 steps
while t < tf
    t = t + dt;
    if ((uy - 0.5 * g * t) * t >= 0)
        x(end + 1) = ux * t;
        y(end + 1) = (uy - 0.5 * g * t) * t;
    end
end
plot(x, y)
This code makes the ball move according to Newton's laws, which is not what I want.
Bottom line: I just need to be able to visit many portions of the grid in a short time, which is why I want the objects to move chaotically and randomly through the space (each run of the code needs to produce a different result, so the paths must be random). To get a better result I could make the objects bounce off in different directions whenever they hit each other or visit the same portion; that would probably improve the coverage. A rough sketch of the random-walk part follows.
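For illustration only, a minimal Processing/Java sketch of the random-walk-with-clamping idea described above (the 0.5 step size, the 10x10 portion grid, and the drawing details are assumptions; the bounce-on-contact behaviour is left out):
int N = 50; //number of objects
float[] px = new float[N], py = new float[N]; //positions in the [-5,5] grid
boolean[][] visited = new boolean[10][10];    //the 100 portions

void setup() {
  size(400, 400);
  for (int i = 0; i < N; i++) {
    px[i] = random(-5, 5);
    py[i] = random(-5, 5);
  }
}

void draw() {
  background(255);
  for (int i = 0; i < N; i++) {
    //pick a fresh random step each iteration (chaotic, not Newtonian)
    px[i] += random(-0.5, 0.5);
    py[i] += random(-0.5, 0.5);
    //lb/ub limits: push the object back in if it tries to leave the grid
    px[i] = constrain(px[i], -5, 5);
    py[i] = constrain(py[i], -5, 5);
    //mark the portion this object currently occupies
    int gx = constrain((int)(px[i] + 5), 0, 9);
    int gy = constrain((int)(py[i] + 5), 0, 9);
    visited[gx][gy] = true;
    ellipse(map(px[i], -5, 5, 0, width), map(py[i], -5, 5, 0, height), 4, 4);
  }
}
Counting the true entries of visited after a run tells you how close you are to the 90% coverage target; the bounce-on-contact rule would be layered on top of this.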

In caffe, py-faster-rcnn, "scores" return a large matrix, why?

I use the py-faster-rcnn demo as the base to build my project further, with 20 classes.
However, I am trying to obtain the softmax (last-layer) probabilities for my classes.
For example:
# Load the demo image
im_file = os.path.join(cfg.DATA_DIR, 'demo', image_name)
im = cv2.imread(im_file)

# Detect all object classes and regress object bounds
timer = Timer()
timer.tic()
scores, boxes = im_detect(net, im)
timer.toc()
print ('Detection took {:.3f}s for '
       '{:d} object proposals').format(timer.total_time, boxes.shape[0])

# Visualize detections for each class
CONF_THRESH = 0.8
NMS_THRESH = 0.3
for cls_ind, cls in enumerate(CLASSES[1:]):
    cls_ind += 1  # because we skipped background
    cls_boxes = boxes[:, 4*cls_ind:4*(cls_ind + 1)]
    cls_scores = scores[:, cls_ind]
    dets = np.hstack((cls_boxes,
                      cls_scores[:, np.newaxis])).astype(np.float32)
    keep = nms(dets, NMS_THRESH)
    dets = dets[keep, :]
    vis_detections(im, cls, dets, thresh=CONF_THRESH)

print scores
When I print scores, it gives me a very large matrix as output instead of 1 x 20. I am not sure why, and how can I get the final probability matrix?
Thanks
The raw scores the detector outputs include overlapping detections and very low-scoring detections as well.
Note that only after applying non-maximum suppression (aka "nms") with NMS_THRESH=0.3 does the function vis_detections display the detections with confidence larger than CONF_THRESH=0.8.
So, if you want to look at the "true" objects, you need to look inside vis_detections and check only the detections it actually renders on the image.

Blob position comparison across several video frames

The goal is to detect whether an object (there can be multiple) is stationary in an ROI for a period of time (application: detecting vehicles blocking the zebra crossing). So it means observing each blob with respect to time t.
Input = Video file
So, let's say the pedestrian crossing lane is the ROI. Background subtraction happens inside the ROI only; then each blob (vehicle) will be observed separately for time t to see whether it has been motionless there.
What I'm thinking is getting the position of each blob at frame 1 and frame n (the time threshold) and checking whether the positions are the same. This must be applied to every blob, assuming there are multiple, so a loop is involved to process them one by one: get a blob's position at frame 1 and frame n, compare them (if they're equal, the blob has been motionless for time t and is therefore "blocking"), then move on to the next blob.
My logic written on java code:
//assuming "blobs" is an ArrayList containing all the blobs in the image
int initialPosition = 0, finalPosition = 0;
static int violatorCount = 0;
for (int i = 0; i < blobs.size(); i++) { //iterate over each blob to process them separately
    initialPosition = blobs.get(i).getPosition();
    for (int j = 0; j <= timeThreshold; j++) {
        if (blobs.get(i) == null) { //if blob is no longer present on frame j
            break;
        }
        finalPosition = blobs.get(i).getPosition();
    }
    if (initialPosition == finalPosition) {
        violatorCount++;
    }
    //output count on top-right part of window
}
Can you guys share the logic on how to implement the goal/idea in either Matlab or OpenCV?
Optical flow is an option (thanks to PSchn). Any other options I can consider?
Sounds like optical flow. You could use the OpenCV implementation: pass your points to cv::calcOpticalFlowPyrLK along with the next image (see here). Then you could check the distance between the two points and decide what to do.
I don't know if it works, just an idea.
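For illustration, a rough sketch of that idea using OpenCV's Java bindings (the checkStationary wrapper, the blobCenters input, and the 5-pixel threshold are hypothetical assumptions about your pipeline, not tested code):
import org.opencv.core.*;
import org.opencv.video.Video;

// Track blob centres from one frame to the next and flag the ones that barely move.
// prevGray/nextGray are consecutive grayscale frames; blobCenters are the blob
// centroids you extracted from prevGray (both assumed to come from your pipeline).
void checkStationary(Mat prevGray, Mat nextGray, Point[] blobCenters) {
    MatOfPoint2f prevPts = new MatOfPoint2f(blobCenters);
    MatOfPoint2f nextPts = new MatOfPoint2f();
    MatOfByte status = new MatOfByte(); // 1 where the flow for a point was found
    MatOfFloat err = new MatOfFloat();

    Video.calcOpticalFlowPyrLK(prevGray, nextGray, prevPts, nextPts, status, err);

    Point[] before = prevPts.toArray();
    Point[] after = nextPts.toArray();
    for (int i = 0; i < before.length; i++) {
        double d = Math.hypot(after[i].x - before[i].x, after[i].y - before[i].y);
        if (d < 5.0) { // 5 px is an illustrative "hasn't moved" threshold
            // blob i is a candidate for the motionless/violation counter
        }
    }
}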

How can I display a simple animated spectrogram to visualize audio from a MixerHostAudio object?

I'm working off of some of Apple's sample code for mixing audio (http://developer.apple.com/library/ios/#samplecode/MixerHost/Introduction/Intro.html) and I'd like to display an animated spectrogram (think of the iTunes spectrogram in the top center that replaces the song title with moving bars). It would need to get its data live from the audio stream, since the user will be mixing several loops together. I can't seem to find any tutorials online about anything to do with this.
I know I am really late to this question, but I just found a great resource to solve it.
Solution:
Instantiate an audio unit to record samples from the microphone of the iOS device.
Perform FFT computations with the vDSP functions in Apple's Accelerate framework.
Draw your results to the screen using a UIImage.
Computing the FFT or spectrogram of an audio signal is a fundamental audio signal processing task. As an iOS developer, whether you want to simply find the pitch of a sound, create a nice visualisation, or do some front-end processing, it's something you've likely thought about if you are at all interested in audio. Apple does provide a sample application illustrating this task (aurioTouch). The audio processing part of that app is obscured, however, imho, by extensive use of OpenGL.
The goal of this project was to abstract as much of the audio DSP out of the sample app as possible and to render a visualisation using only UIKit. The resulting application running in the iOS 6.1 simulator is shown to the left. There are three components: a simple 'exit' button at the top, a gain slider at the bottom (allowing the user to adjust the scaling between the magnitude spectrum and the brightness of the colour displayed), and a UIImageView displaying power spectrum data in the middle that refreshes every three seconds. Note that the frequencies run from low to high beginning at the top of the image, so the top of the display is actually DC while the bottom is the Nyquist rate. The image shows the results of processing some speech.
This particular Spectrogram App records audio samples at a rate of 11,025 Hz in frames that are 256 points long. That’s about 0.0232 seconds per frame. The frames are windowed using a 256 point Hanning window and overlap by 1/2 of a frame.
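As a quick check of those numbers: 256 / 11025 Hz ≈ 0.0232 s per frame, and a half-frame overlap means a hop of 256 / 2 = 128 samples (≈ 0.0116 s), which matches the framea[b + (128 * mm)] indexing in the code further down.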
Let's examine some of the relevant parts of the code that may cause confusion. If you want to try to build this project yourself, you can find the source files in the archive below.
First of all, look at the content of the PerformThru method, a callback for an audio unit. This is where we read the audio samples from a buffer into one of the arrays we have declared.
SInt8 *data_ptr = (SInt8 *)(ioData->mBuffers[0].mData);
for (i = 0; i < inNumberFrames; i++) {
    framea[readlas1] = data_ptr[2];
    readlas1 += 1;
    if (readlas1 >= 33075) {
        readlas1 = 0;
        dispatch_async(dispatch_get_main_queue(), ^{
            [THIS printframemethod];
        });
    }
    data_ptr += 4;
}
Note that framea is a static array of length 33075. The variable readlas1 keeps track of how many samples have been read. When the counter hits 33075 (3 seconds at this sampling frequency) a call to another method printframemethod is triggered and the process restarts.
The spectrogram is calculated in printframemethod.
for (int b = 0; b < 33075; b++) {
    originalReal[b] = ((float)framea[b]) * (1.0); //+ (1.0 * hanningwindow[b]));
}
for (int mm = 0; mm < 250; mm++) {
    for (int b = 0; b < 256; b++) {
        tempReal[b] = ((float)framea[b + (128 * mm)]) * (0.0 + 1.0 * hanningwindow[b]);
    }
    vDSP_ctoz((COMPLEX *) tempReal, 2, &A, 1, nOver2);
    vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_FORWARD);
    scale = (float) 1. / 128.;
    vDSP_vsmul(A.realp, 1, &scale, A.realp, 1, nOver2);
    vDSP_vsmul(A.imagp, 1, &scale, A.imagp, 1, nOver2);
    for (int b = 0; b < nOver2; b++) {
        B.realp[b] = (32.0 * sqrtf((A.realp[b] * A.realp[b]) + (A.imagp[b] * A.imagp[b])));
    }
    for (int k = 0; k < 127; k++) {
        Bspecgram[mm][k] = gainSlider.value * logf(B.realp[k]);
        if (Bspecgram[mm][k] < 0) {
            Bspecgram[mm][k] = 0.0;
        }
    }
}
Note that in this method we first cast the signed integer samples to floats and store them in the array originalReal. Then the FFT of each frame is computed by calling the vDSP functions. The two-dimensional array Bspecgram contains the actual magnitude values of the Short-Time Fourier Transform. Look at the code to see how these magnitude values are converted to RGB pixel data.
Things to note:
To get this to build, just start a new single-view project, replace the delegate and view controller, and add the aurio_helper files. You need to link the Accelerate, AudioToolbox, UIKit, Foundation, and CoreGraphics frameworks to build this. Also, you need PublicUtility. On my system, it is located at /Developer/Extras/CoreAudio/PublicUtility. Wherever you find it, add that directory to your header search paths.
Get the code:
The delegate, view controller, and helper files are included in this zip archive.
A Spectrogram App for iOS in purple
Apple's aurioTouch example app (on developer.apple.com) has source code for drawing an animated frequency spectrum plot from recorded audio input. You could probably group FFT bins into frequency ranges for a coarser bar graph plot.
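For the bin-grouping part, a minimal language-agnostic sketch (shown in Java; groupIntoBars, the input array, and the bar count are illustrative, not part of aurioTouch):
// Average consecutive FFT magnitude bins into a small number of bars.
// 'magnitudes' would come from your FFT (e.g. 128 bins); names and sizes are assumptions.
float[] groupIntoBars(float[] magnitudes, int barCount) {
    float[] bars = new float[barCount];
    int binsPerBar = magnitudes.length / barCount;
    for (int bar = 0; bar < barCount; bar++) {
        float sum = 0;
        for (int bin = 0; bin < binsPerBar; bin++) {
            sum += magnitudes[bar * binsPerBar + bin];
        }
        bars[bar] = sum / binsPerBar; //average magnitude for this frequency range
    }
    return bars;
}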

Auto inferring scale for a time series plot

Problem:
I am plotting a time series. I don't know a priori the minimum & maximum values. I want to plot the last 5 seconds of data, and I want the plot to automatically rescale itself to best fit the data for the past five seconds. However, I don't want the rescaling to be jerky (as one would get by constantly resetting the min & max); when it does rescale, I want the rescaling to be smooth.
Are there any existing algorithms for handling this?
Formally:
I have a function
float sample();
that you can call multiple times. I want you to constantly, in real time, plot the last 5 * 60 values to me, with the chart nicely scaled. I want the chart to automatically rescale, but not in a "jerky" way.
Thanks!
You could try something like
float currentScale = 0;
float adjustSpeed = .3f;

void iterate() {
    float targetScale = sample();
    currentScale += adjustSpeed * (targetScale - currentScale);
}
And lower the adjustSpeed if it's too jerky.
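To connect that to the plot scale itself, here is a hedged Java sketch that applies the same easing to the min/max of the last 5 * 60 samples (the SmoothedRange class and its field names are made up for illustration; the window size comes from the question):
import java.util.ArrayDeque;
import java.util.Deque;

// Smoothly track the displayed min/max bounds of the last 5 * 60 samples.
class SmoothedRange {
    static final int WINDOW = 5 * 60;
    final Deque<Float> window = new ArrayDeque<>();
    float displayMin = 0, displayMax = 1; // the smoothed plot bounds
    final float adjustSpeed = 0.3f;       // same easing factor as above

    void add(float sample) {
        window.addLast(sample);
        if (window.size() > WINDOW) window.removeFirst();

        // target bounds: the true min/max of the current window
        float targetMin = Float.POSITIVE_INFINITY, targetMax = Float.NEGATIVE_INFINITY;
        for (float v : window) {
            targetMin = Math.min(targetMin, v);
            targetMax = Math.max(targetMax, v);
        }

        // ease the displayed bounds toward the targets instead of snapping
        displayMin += adjustSpeed * (targetMin - displayMin);
        displayMax += adjustSpeed * (targetMax - displayMax);
    }
}
Each redraw then scales the chart to [displayMin, displayMax], so the bounds drift smoothly toward the data instead of jumping.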