Auto-inferring scale for a time series plot - charts

Problem:
I am plotting a time series. I don't know a priori the minimum and maximum values. I want to plot the last 5 seconds of data, and I want the plot to automatically rescale itself to best fit that data. However, I don't want the rescaling to be jerky (as one would get by constantly resetting the min and max): when it does rescale, I want the rescaling to be smooth.
Are there any existing algorithms for handling this?
Formally:
I have a function
float sample();
that you can call multiple times. I want to constantly, in real time, plot the last 5 * 60 values, with the chart nicely scaled. I want the chart to rescale automatically, but not in a "jerky" way.
Thanks!

You could try something like
float currentScale = 0;
float adjustSpeed = .3f;
void iterate() {
    float targetScale = sample();
    currentScale += adjustSpeed * (targetScale - currentScale);
}
And lower adjustSpeed if it's still too jerky.
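Building on that idea, here is a minimal sketch in C# (all names are my own assumptions, not from the question) that keeps the last 5 * 60 samples and eases the displayed min/max bounds toward the window's raw min/max, so the scale follows the data without jumping:
using System;
using System.Collections.Generic;

class SmoothedPlotScale
{
    const int WindowSize = 5 * 60;                     // last 5 seconds at 60 samples/second
    const float AdjustSpeed = 0.3f;                    // lower = smoother but slower to catch up

    readonly Queue<float> window = new Queue<float>(); // most recent samples
    float displayMin = 0f, displayMax = 1f;            // bounds actually used for drawing

    public void AddSample(float value)
    {
        window.Enqueue(value);
        while (window.Count > WindowSize) window.Dequeue();

        // Raw (jerky) targets: the true min/max of the current window.
        float targetMin = float.MaxValue, targetMax = float.MinValue;
        foreach (float v in window)
        {
            if (v < targetMin) targetMin = v;
            if (v > targetMax) targetMax = v;
        }

        // Ease the displayed bounds toward the targets, as in the snippet above.
        displayMin += AdjustSpeed * (targetMin - displayMin);
        displayMax += AdjustSpeed * (targetMax - displayMax);
    }

    // Map a sample into [0, 1] using the smoothed bounds, ready for plotting.
    public float Normalize(float value) =>
        (value - displayMin) / Math.Max(displayMax - displayMin, 1e-6f);
}
Lowering AdjustSpeed makes the rescaling smoother at the cost of lagging further behind sudden spikes.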

Related

Scaling separate triangles (in geometry shader?)

For a masking object, I am trying to scale each triangle individually. If I scale the object as a whole, the points further away from the center will get moved too far and I just want the object to have 'more body'. Since I use it as a mask, it doesn't matter if the triangles end up overlapping.
Although looking at this might hurt someone deep inside, this is actually what I'm trying to achieve.
I thought this was best done in a shader, specifically the geometry shader, since I need to know the center of the triangle. I came up with the code below, but things keep acting... strange.
float3 center = (IN[0].vertex.xyz + IN[1].vertex.xyz + IN[2].vertex.xyz) / 3;
for (int i = 0; i < 3; i++)
{
    float3 distance = IN[i].vertex.xyz - center.xyz;
    float3 normal = normalize(distance);
    distance = abs(distance);
    float scale = 1;
    float3 pos = IN[i].vertex.xyz + (distance * normal.xyz * (scale - 1));
    o.pos.xyz = pos.xyz;
    o.pos.w = IN[i].vertex.w;
    tristream.Append(o);
}
My plan was to calculate the center of the triangle and then calculate the distance between the center and each point. I would then take the normal of this distance to know in which direction I would have to move the vertex, and change the position by adding distance * normal(direction) * scale to the original position of the vertex. Yet the triangles change when you rotate the camera, so I doubt this is right. Does anyone know what could be wrong?
(Just some notes:
the mesh is basically 2D, only changing across the x- and z-axis (if this matters).
I did abs(distance) since I thought that, if both components were negative, they would otherwise cancel out the normal. I'm not sure if this is necessary.
I did scale - 1 since a scale of 1 would result in the mesh staying the same. A scale of 2 should result in all triangles being twice as big.
I have no clue what to do with the w value, but keeping the old value at least doesn't screw things up that much. Perhaps the problem lies here? I thought this value should always be 1 for matrix multiplications.
)
Okay, so besides using a way too 'complex' formula to calculate the new position of each point (a better way is at https://math.stackexchange.com/questions/1563249/how-do-i-scale-a-triangle-given-its-cartesian-cooordinates), I found out that it indeed had to do with the w value. As I always thought this was mainly a helper variable, it would be awesome if someone could explain how that value screwed things over.
Anyway, including that value in the equation, it works fine:
float4 center = (IN[0].vertex.xyzw + IN[1].vertex.xyzw + IN[2].vertex.xyzw) / 3;
for (int i = 0; i < 3; i++)
{
    float scale = 2;
    float4 pos = (IN[i].vertex.xyzw * scale) - center.xyzw;
    o.pos.xyzw = pos.xyzw;
    tristream.Append(o);
}
This works just fine :)
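One side note (my own observation, not part of the answer above): (vertex * scale) - center only coincides with scaling about the center when scale is exactly 2; the general form is center + (vertex - center) * scale. A small sketch using Unity's Vector3, purely for illustration:
using UnityEngine;

static class TriangleScaling
{
    // Scale a vertex about a given center by an arbitrary factor.
    public static Vector3 ScaleAboutCenter(Vector3 vertex, Vector3 center, float scale)
    {
        // For scale == 2 this simplifies to (vertex * 2) - center,
        // which is the special case used in the snippet above.
        return center + (vertex - center) * scale;
    }
}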

Find collision time given distance and speed

Suppose I have an object A at position x = 0 and object B at position x = 16.
Suppose A has this code:
public class Move : MonoBehaviour
{
    float speed = 0.04f;

    void Update()
    {
        transform.Translate(speed, 0, 0);
    }
}
My question is: how do I evaluate precisely how many seconds it will take for A to collide with B?
If I apply the formula S = S0 + vt, it won't work correctly, because I don't know how many frames will pass in a second, so I can't tell exactly what the speed is.
First of all, you shouldn't do that: your code is currently framerate-dependent, so the object moves faster if you have a higher framerate!
Rather use Time.deltaTime
This property provides the time between the current and previous frame.
to convert your speed from Unity Units / frame into Unity Units / second
transform.Translate(speed * Time.deltaTime, 0, 0);
This means the object now moves at 0.04 Unity units / second (framerate-independent).
Then I would say the required time in seconds is simply
var distance = Mathf.Abs(transform.position.x - objectB.transform.position.x);
var timeInSeconds = distance / speed;
Though... this obviously still assumes that by "collide" you mean being at the same position (at least on the X axis). You could also take their widths into account, since their surfaces will collide earlier than that ;)
var distance = Mathf.Abs(transform.position.x - objectB.transform.position.x) - (objectAWidth + objectBWidth);
var timeInSeconds = distance / speed;
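Putting the pieces together, here is a minimal sketch of the above (objectB, objectAWidth and objectBWidth are assumed fields you would assign yourself; the widths are taken as center-to-surface distances along X):
using UnityEngine;

public class CollisionTimeExample : MonoBehaviour
{
    public Transform objectB;       // assumed reference to the other object
    public float speed = 0.04f;     // Unity units per second
    public float objectAWidth = 0f; // assumed: distance from A's center to its surface
    public float objectBWidth = 0f; // assumed: distance from B's center to its surface

    void Start()
    {
        // Same formula as above: the surfaces meet before the centers do.
        float distance = Mathf.Abs(transform.position.x - objectB.position.x)
                         - (objectAWidth + objectBWidth);
        float timeInSeconds = distance / speed;
        Debug.Log("Expected collision in " + timeInSeconds + " seconds");
    }

    void Update()
    {
        // Framerate-independent movement, as recommended above.
        transform.Translate(speed * Time.deltaTime, 0, 0);
    }
}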

Move object to nearest empty space on a plane

Check the following gif: https://i.gyazo.com/72998b8e2e3174193a6a2956de2ed008.gif
I want the cylinder to instantly change location to the nearest empty space on the plane as soon as I put a cube on the cylinder. The cubes and the cylinder have box colliders attached.
At the moment the cylinder just gets stuck when I put a cube on it, and I have to click in some direction to make it start "swimming" through the cubes.
Is there any easy solution or do I have to create some sort of grid with empty gameobjects that have a tag which tells me if there's an object on them or not?
This is a common problem in RTS-like video games, and one I am solving myself. It calls for a breadth-first search, which means you check the closest neighbors first. You're fortunate to only have to solve this problem in a gridded environment.
Usually what programmers will do is create a queue and keep adding nodes (spaces) to it until an empty space is found. The search starts with, e.g., the spaces above, below, and adjacent to the starting space, then recursively moves outward, calling the same function inside itself and using the queue to keep track of which spaces still need to be checked. It also needs a way to know whether a space has already been checked, so it can skip those spaces.
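For illustration, here is a minimal sketch of that queue-based search in C#, assuming a simple bool occupied[x, z] grid (the grid and all names here are hypothetical):
using System.Collections.Generic;

static class GridSearch
{
    static readonly (int dx, int dz)[] Neighbours = { (1, 0), (-1, 0), (0, 1), (0, -1) };

    // Returns the nearest unoccupied cell to (startX, startZ), or null if none exists.
    public static (int x, int z)? FindNearestFree(bool[,] occupied, int startX, int startZ)
    {
        int w = occupied.GetLength(0), h = occupied.GetLength(1);
        var visited = new bool[w, h];
        var queue = new Queue<(int x, int z)>();
        queue.Enqueue((startX, startZ));
        visited[startX, startZ] = true;

        while (queue.Count > 0)
        {
            var (x, z) = queue.Dequeue();
            if (!occupied[x, z]) return (x, z); // first free cell dequeued is the closest

            foreach (var (dx, dz) in Neighbours)
            {
                int nx = x + dx, nz = z + dz;
                if (nx < 0 || nz < 0 || nx >= w || nz >= h || visited[nx, nz]) continue;
                visited[nx, nz] = true;
                queue.Enqueue((nx, nz));
            }
        }
        return null; // every cell is occupied
    }
}
Because the queue expands one ring of neighbours at a time, the first unoccupied cell found is the nearest one in grid steps.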
Another solution I'm conceiving of would be to generate a (conceptual) Archimedean spiral from the starting point and somehow check each space along that spiral. The tricky part would be generating the right spiral and checking it at just the right points in order to hit each space once.
Here's my quick-and-dirty solution for the Archimedean spiral approach in C++:
//Requires <vector>, <utility>, <algorithm> and <cmath>.
float x, z, max = 150.0f;
vector<pair<float, float>> spiral;

//Generate the spiral vector (run this code once and store the spiral).
for (float n = 0.0f; n < max; n += (max + 1.0f - n) * 0.0001f)
{
    x = cos(n) * n * 0.05f;
    z = sin(n) * n * 0.05f;
    //Change 1.0f to 0.5f for half-sized spaces.
    //fmod is float modulus (remainder).
    x = x - fmod(x, 1.0f);
    z = z - fmod(z, 1.0f);
    pair<float, float> currentPoint = make_pair(x, z);
    //Make sure this pair isn't at (0.0f, 0.0f) and that it's not already in the spiral.
    if ((x != 0.0f || z != 0.0f) && find(spiral.begin(), spiral.end(), currentPoint) == spiral.end())
    {
        spiral.push_back(currentPoint);
    }
}

//Loop through the results (run this code per usage of the spiral).
for (unsigned int n = 0U; n < spiral.size(); ++n)
{
    //Draw or test the spiral.
}
It generates a vector of unique points (float pairs) that can be iterated through in order, which will allow you to draw or test every space around the starting space in a nice, outward (breadth-first), gridded spiral. With 1.0f-sized spaces, it generates a circle of 174 test points, and with 0.5f-sized spaces, it generates a circle of 676 test points. You only have to generate this spiral once and then store it for usage numerous times throughout the rest of the program.
Note:
This spiral samples differently as it grows further and further out from the center (in the for loop: n += (max + 1.0f - n) * 0.0001f).
If you use the wrong numbers, you could very easily break this code or cause an infinite loop! Use at your own risk.
Though more memory intensive, it is probably much more time-efficient than the traditional queue-based solutions due to iterating through each space exactly once.
It is not a 100% accurate solution to the problem, however, because it is a gridded spiral; in some cases it may favor the diagonal over the lateral. This is probably negligible in most cases though.
I used this solution for a game I'm working on. The original answer included two pictures: in the first, the orange lines (drawn in Paint) are just for illustration, and the second demonstrates what the spiral looks like when expanded.

Visualizing Sine Wave with Processing

I have 1000+ rows of sine wave data which change with time, and I want to visualize them with the Processing language. My aim is to create an animation that draws a sine wave over time, starting from the middle of the rectangle [height/2]. I also want to show only 1 second of the wave at a time; I mean that after 1 second the first coordinate should disappear, and so forth.
How can I achieve that?
Thanks
Sample Data :
TIME X Y
0.1333 0 0
0.2666 0.1 0.0999983333
0.3999 0.2 0.1999866669
0.5332 0.3 0.299955002
0.6665 0.4 0.3998933419
0.7998 0.5 0.4997916927
0.9331 0.6 0.5996400648
1.0664 0.7 0.6994284734
The way you'd achieve that is to split this project into tasks:
load & parse data
update time and render data
To make sure part 1 goes smoothly, it's probably best to make sure your data is easy to parse first. The sample data looks like a table/spreadsheet, but it's not formatted with a standard separator (e.g. comma or tab). You can fiddle with things when you parse, but I recommend using clean data first; for example, if you plan on using space as a separator:
TIME X Y
0.1333 0.0 0
0.2666 0.1 0.0999983333
0.3999 0.2 0.1999866669
0.5332 0.3 0.299955002
0.6665 0.4 0.3998933419
0.7998 0.5 0.4997916927
0.9331 0.6 0.5996400648
1.0664 0.7 0.6994284734
Once that's done, you can use loadStrings() to load the data and split() to break a row into 3 elements which can be converted from string to float.
Once you've got values to use, you can store them. You can either create three arrays, each holding a field from the loaded data (one for all the X values, one for all the Y values and one for all the time values), or you can cheat and use a single array of PVector objects. Although PVector is meant for 3D math/linear algebra, you have 2D coordinates, so you can store time as the 3rd 'dimension'/component.
Part two revolves mostly around updating based on time, and this is where millis() comes in handy. You can check the amount of time passed between updates and if it's greater than a certain (delay) value, it's time for another update (of the frame/data row index).
The last part you need to worry about is rendering the data on screen. Luckily, in your sample data the coordinates are normalized (between 0.0 and 1.0), which makes it easy to map them to the sketch dimensions (by using simple multiplication). Otherwise the map() function comes in handy.
Here's a sketch to illustrate the above; data.csv is a text file containing the formatted sample data from above:
PVector[] frames; //keep track of the frame data (position (x, y) and time (stored in the PVector's z property))
int currentFrame = 0, totalFrames; //keep track of the current frame and total frames from the csv
int now, delay = 1000; //keep track of time and a delay to update frames

void setup(){
  //handle data
  String[] rows = loadStrings("data.csv"); //load data
  totalFrames = rows.length - 1; //get total number of lines (-1 = sans the header)
  frames = new PVector[totalFrames]; //initialize/allocate frame data array
  for(int i = 1; i <= totalFrames; i++){ //start parsing data (from 1, skip header)
    String[] frame = rows[i].split(" "); //chop each row into 3 strings (time, x, y)
    frames[i-1] = new PVector(float(frame[1]), float(frame[2]), float(frame[0])); //parse each row (note i-1 to get back to a 0 index, and note how the PVector is initialized: 1, 2, 0 = x, y, time)
  }
  now = millis(); //initialize this to keep track of time
  //render setup, up to you
  size(400, 400);
  smooth();
  fill(0);
  strokeWeight(15);
}

void draw(){
  //update
  if(millis() - now >= delay){ //if the time since the last update is greater than the delay (i.e. every 'delay' ms)
    currentFrame++; //update the frame index
    if(currentFrame >= totalFrames) currentFrame = 0; //reset to 0 if we reached the end
    now = millis(); //finally update our timer/stop-watch variable
  }
  PVector frame = frames[currentFrame]; //get the data for the current frame
  //render
  background(255);
  point(frame.x * width, frame.y * height); //draw
  text("frame index: " + currentFrame + " data: " + frame, mouseX, mouseY);
}
There are a couple of extra notes needed:
You mentioned moving to the next coordinate after 1 second. From what I can see in your sample data there are 8 updates per second, so 1000/8 would probably work better. It's up to you how you handle timing though.
I assume your full set includes data for a sine wave movement. I've mapped to the full coordinates, but in the render part of the draw() loop you can map however you like(e.g. including a height/2 offset, etc.). Also if you're not familiar with sine waves, have a look at these Processing resources: Daniel Shiffman's SineWave sample, Ira Greenberg's trig tutorial.

Understanding fractals, and especially the Mandelbrot set

I'm really scratching my head here in an effort to understand a quote I read somewhere that says "the more we zoom inside the fractal, the more iterations we will most likely need to perform".
So far, I haven't been able to find any mathematical/academic paper that proves that claim.
I've also managed to find some small code that calculates the Mandelbrot set, taken from here:
http://warp.povusers.org/Mandelbrot/
But I still wasn't able to understand how zooming affects iterations.
double MinRe = -2.0;
double MaxRe = 1.0;
double MinIm = -1.2;
double MaxIm = MinIm+(MaxRe-MinRe)*ImageHeight/ImageWidth;
double Re_factor = (MaxRe-MinRe)/(ImageWidth-1);
double Im_factor = (MaxIm-MinIm)/(ImageHeight-1);
unsigned MaxIterations = 30;

for(unsigned y=0; y<ImageHeight; ++y)
{
    double c_im = MaxIm - y*Im_factor;
    for(unsigned x=0; x<ImageWidth; ++x)
    {
        double c_re = MinRe + x*Re_factor;
        double Z_re = c_re, Z_im = c_im;
        bool isInside = true;
        for(unsigned n=0; n<MaxIterations; ++n)
        {
            double Z_re2 = Z_re*Z_re, Z_im2 = Z_im*Z_im;
            if(Z_re2 + Z_im2 > 4)
            {
                isInside = false;
                break;
            }
            Z_im = 2*Z_re*Z_im + c_im;
            Z_re = Z_re2 - Z_im2 + c_re;
        }
        if(isInside) { putpixel(x, y); }
    }
}
Thanks!
This is not a scientific answer, but one based on common sense. In theory, to decide whether a point belongs to the Mandelbrot set or not, you should iterate infinitely and check whether the value ever escapes to infinity. This is practically useless, so we make assumptions:
We iterate only 50 times
We check whether the iterated value ever gets larger than 2 (in magnitude)
When you zoom into the Mandelbrot set, the second assumption remains valid. However, zooming means increasing the number of significant fractional digits of the point coordinates.
Say you start with (0.4, -0.2i).
Iterating this value over and over increases the digits used, but won't lose significant digits. Now, when your point coordinate looks like (0.00000000045233452235, -0.00000000000943452634626i), you need many more iterations to see whether the iterated value ever reaches 2, not to mention that if you use some kind of float type, you will lose significant digits at some zoom level and will have to switch to an arbitrary-precision library.
Trying is your best friend :-) Calculate the set with a low iteration count and with a high iteration count and subtract the second image from the first. You will always see change at the edges (where black pixels meet colored pixels), but if your zoom level is high (meaning: the point coordinates have a lot of fractional digits) you will get a quite different image.
You asked how zooming affects iterations, and my typical zoom-to-iterations ratio is this: if I zoom in to a 9th of the area, I increase the iteration count by a factor of about 1.7. A 9th of the area of course means that both width and height are divided by 3.
Making this more generic, I actually use this in my code:
Complex middle = << calculate from click in image >>
int zoomfactor = 3;
width = width / zoomfactor;
maxiter = (int)(maxiter * Math.Sqrt(zoomfactor));
minimum = new Complex(middle.Real - width, middle.Imaginary - width);
maximum = new Complex(middle.Real + width, middle.Imaginary + width);
I find that this relation between zoom and iterations works out pretty well; the details in the fractal still come out well on deep zooms without the iteration count growing too crazy too fast.
How fast you want to zoom is your own preference; I like a zoomfactor of 3, but anything goes. The important thing is that you keep the relation between the zoomfactor and the increase in iterations.
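To make that relation concrete, here is a small sketch (starting values assumed) that applies maxiter = (int)(maxiter * Math.Sqrt(zoomfactor)) over ten successive zoom steps; with zoomfactor = 3 the iteration count grows by a factor of about 1.7 per step, from 30 to roughly 7000 after ten zooms:
using System;

class IterationGrowth
{
    static void Main()
    {
        int maxiter = 30;    // assumed starting iteration count
        int zoomfactor = 3;  // each zoom divides width and height by 3

        for (int step = 1; step <= 10; step++)
        {
            maxiter = (int)(maxiter * Math.Sqrt(zoomfactor));
            Console.WriteLine("after zoom " + step + ": maxiter = " + maxiter);
        }
    }
}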