fixed game update cycles [closed] - game-loop

Closed 11 years ago.
I've implemented delta time in my game loop so that any fluctuations in frame rate don't matter any more, however won't the game still run faster on faster machines and slower on slow ones?
I was under the impression that most games update the logic at a fixed rate (60 times a second) and then perform as many renders as they can. How do I make the updates loop as many times as I want per second?

Just use an accumulator: every time your game loop runs, add the frame's delta time to it. Once the accumulator reaches 16.6666667 milliseconds (1/60 of a second), run your update logic and subtract 16.6666667 ms from the accumulator. Keep running the update method until the accumulator is below 16.6666667 ms again :-)
pseudo-code!
const float updateFrequency = 16.6666667f; // ms per update (1000 / 60)
float accumulator = updateFrequency;       // run one update on the very first frame

public void GameLoop(float delta)
{
    // This gets called as fast as the CPU will allow.
    // delta is the time since the last call, in milliseconds.
    accumulator += delta;
    while (accumulator >= updateFrequency)
    {
        Update();
        accumulator -= updateFrequency;
    }
    // Draw runs once per loop iteration, as often as the hardware allows.
    Draw();
}
This technique means that if the draw method takes longer than the update frequency, it will catch up. Of course, you have to be careful because if your draw method routinely takes more than 1/60 of a second, then you'll be in trouble because you will always have to be calling update several times in between draw passes, which could cause hiccups in the rendering.
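To see the decoupling of update rate from render rate in action, here is a minimal Python translation of the accumulator loop above, using a fake frame clock so the behaviour is deterministic (the function name and the fixed frame deltas are illustrative, not from the original answer):

```python
# A small sketch of the accumulator technique: fixed-rate updates driven by
# variable-length frames. Frame deltas are in milliseconds.

UPDATE_FREQUENCY_MS = 1000.0 / 60.0  # ~16.67 ms per fixed update

def run_frames(frame_deltas_ms):
    """Feed a list of frame deltas through the loop; return (updates, draws)."""
    accumulator = 0.0
    updates = 0
    draws = 0
    for delta in frame_deltas_ms:
        accumulator += delta
        # Run as many fixed updates as the accumulated time allows.
        while accumulator >= UPDATE_FREQUENCY_MS:
            updates += 1
            accumulator -= UPDATE_FREQUENCY_MS
        # Draw exactly once per frame, however long the frame took.
        draws += 1
    return updates, draws

# 120 frames of ~8.33 ms each (rendering at 240 FPS for one simulated second):
# the game still updates exactly 60 times.
updates, draws = run_frames([1000.0 / 120.0] * 120)
print(updates, draws)  # 60 120
```

Running the same function with longer frame deltas (a slow machine) still yields 60 updates per simulated second; only the draw count changes.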

Related

Difference between O(1) & O(N) Big-O Notation completely? [closed]

I need a clear-cut understanding. We know a constant-time method is O(1) and a linear-time method is O(N). Please briefly explain Type-1 and Type-2 below, what the difference between them is, and why Type-1 is O(1) while Type-2 is O(N). Thanks in advance.
Type-1:
func printLoop() {
    for p in 0..<100 {
        print(p)
    }
}
printLoop()
Type-2:
func printLoop(n: Int) {
    for p in 0..<n {
        print(p)
    }
}
printLoop(n: 100)
The difference here has to do with semantics. In fact, if you pass n = 100 to the second function, both versions will take the same time to run. The first printLoop() executes a fixed number of prints, and so is said to have a constant O(1) running time. The second version loops n times, and therefore has an O(n) running time that depends on the input.
Type-1 is O(1) because there is no input parameter: you can't change the time it takes to complete. If running the Type-1 function takes 1 second, it always takes 1 second, no matter where or how you call it.
Type-2 is O(n) because if you pass it 100 it may take 1 second to complete (as assumed above), but if you pass 200 it will take twice as long. It is called "linear time" because the running time grows as a linear function of the input: Y = alpha * X, where X is the number of loop iterations and Y is the time the Type-2 function takes to complete.
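The distinction can be made concrete by counting loop iterations instead of measuring wall-clock time. Here is a Python mirror of the two Swift functions (the counting variables are illustrative additions):

```python
# Counting iterations instead of timing makes the O(1) vs O(n) distinction
# concrete: Type-1 always does the same amount of work, Type-2's work grows
# with the input.

def print_loop_fixed():
    """Type-1: always 100 iterations, regardless of anything -- O(1)."""
    count = 0
    for _ in range(100):
        count += 1  # stand-in for print(p)
    return count

def print_loop_n(n):
    """Type-2: the iteration count grows with the input -- O(n)."""
    count = 0
    for _ in range(n):
        count += 1
    return count

print(print_loop_fixed())  # 100, no matter what
print(print_loop_n(100))   # 100 -- same work as Type-1 for n = 100
print(print_loop_n(200))   # 200 -- doubling n doubles the work
```

Note that for n = 100 the two do identical work, exactly as the answer says; the asymptotic difference only shows when n varies.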

How to get the distribution of all the times in a queue? [closed]

Edit: The solution was that the serviceRate was too high for the amount of arriving trucks.
In my AnyLogic model, I have a population of terminals (5) and a population of trucks (100). The trucks visit the terminals, each of which acts as a queueing system for them. Each terminal has a number of gates (e.g., 7) that can each service 1 truck at a time (service time is drawn from a uniform distribution). If all the gates are busy, the other trucks have to wait in a (FIFO) queue in front of the terminal.
I want to measure the time trucks spend waiting in the queues in front of the terminals (excluding the service time). What is the best way to model these terminal processes in my AnyLogic model?
I tried using a Service block (the first process in the picture), but I think that measures the total time, not only the time spent in the queue. I also tried a Queue plus a Delay block (below) so I could measure the queue time. However, the time measurement gives no distribution, just 1 (very small) number, as can be seen in the lowest picture. The same happens if I measure the time within the Service or Delay block... Does anybody know how to make this work? Thanks!
Your delay capacity is numberOfGates. That means if the value is 5, up to 5 trucks can be in the Delay block at the same time; newly arrived trucks wait in the queue while delay.size() == 5. There is nothing wrong with this, but you should check that your model really works as intended.
Note that agents would move into the Delay block immediately if you selected the maximum-capacity option in the Delay block.
Also, instead of timeMeasureStart/End, use your own assignments. That is, inside the delay, on enter type agent.waitStart = time(); and upon leaving type yourHistogramData.add(time() - agent.waitStart);
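The timestamp idea (record the time on entry, subtract it on exit) is tool-agnostic. As a sanity check outside AnyLogic, here is a small deterministic Python sketch of a terminal with parallel gates, collecting each truck's pure waiting time; all names and the fixed service time are assumptions for illustration:

```python
# A toy multi-gate FIFO terminal: each truck's wait is the gap between its
# arrival and the moment a gate frees up -- service time is excluded, which
# is exactly the quantity the question wants to histogram.

import heapq

def waiting_times(arrivals, gates, service_time):
    # gate_free[i] = time at which some gate next becomes available
    gate_free = [0.0] * gates
    heapq.heapify(gate_free)
    waits = []
    for arrival in sorted(arrivals):
        free_at = heapq.heappop(gate_free)  # earliest-free gate
        start = max(arrival, free_at)       # truck waits only if all gates busy
        waits.append(start - arrival)       # queue time only, no service time
        heapq.heappush(gate_free, start + service_time)
    return waits

# 4 trucks at t = 0, 1, 2, 3; 2 gates; service takes 5 time units.
# The first two trucks enter immediately; the next two have to queue.
print(waiting_times([0.0, 1.0, 2.0, 3.0], gates=2, service_time=5))
```

If the service rate is high relative to arrivals, every wait comes out near zero, which matches the resolution the asker posted: the "distribution of just 1 small number" simply meant nobody was queueing.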
It turns out that the service rate was too high for the amount of trucks I send to the terminal.

Wago codesys PLC word to bool conversion [closed]

I am programming a solar-cell tracker using a Wago PFC100 and the e!COCKPIT software.
I have a basic problem: converting counter values held in a 16-bit WORD into a pulse train in the form of a BOOL.
The 16-bit word register counts up/down from 0 to 12621. I need to convert this to BOOL pulses:
When the word counter goes from 0 to 1, I need a BOOL pulse 0->1->0, and on the next count from 1 to 2 I need a new BOOL pulse 0->1->0.
I also need pulses when the word register counts down:
2 to 1 should likewise generate a BOOL pulse 0->1->0.
I am programming this in structured text (ST), and I don't know how to get this part running.
There are a couple of ways to accomplish this.
If you are not expecting the counter to increment more than once per program scan, you can simply look at bit 0 of the counter. Every time it changes, pulse the output.
If it might count more than one per program scan, then on each program scan you need to look at the current counter value and compare it to the counter value on the last scan. The difference between the current value and the last value is how many times you need to pulse the output.
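The second approach (compare this scan's counter value with the previous scan's) can be sketched in any language; here is a language-neutral Python version of the idea, with illustrative names, before translating it into ST:

```python
# Per-scan edge counting: the absolute difference between the current counter
# word and its value on the previous scan is the number of 0->1->0 pulses the
# output owes for this scan. This handles counting both up and down.

def pulses_for_scan(current, last):
    """Number of pulses to emit on this PLC scan."""
    return abs(current - last)

def count_pulses(counter_samples):
    """Total pulses over a series of per-scan counter readings."""
    total = 0
    last = counter_samples[0]
    for current in counter_samples[1:]:
        total += pulses_for_scan(current, last)
        last = current
    return total

# Counter goes 0 -> 3 (3 pulses), 3 -> 5 (2 pulses), then down 5 -> 4 (1 pulse).
print(count_pulses([0, 3, 5, 4]))  # 6
```

In ST the same logic is a variable holding last scan's value, updated at the end of each cycle; emitting more than one physical pulse per scan then needs its own little state machine, since a pulse must span at least one output update.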

Need help in modelling a delay element in Simulink [closed]

I need to generate a pulse which steps from 0 to 1 after an initial predetermined time has elapsed. When the next predetermined time is available, the pulse should step back from 1 to 0, and then step from 0 to 1 again after that time has elapsed. This model has to be implemented in Simulink.
Thanks.
I'm assuming that the times at which the on/off behaviours are to be performed are available before the model simulation begins. Let's say that it's 2 seconds of value 0 and then 3 seconds of value 1.
Use the Pulse Generator block in the Sources library of Simulink. The trick is starting with a zero. To do this, set the Amplitude to 1, the Period to 5 s, the Pulse Width to 60% and the Phase Delay to 2 s.
The output is then 0 for the first 2 s, 1 for the next 3 s, and so on, repeating with a 5 s period.
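As a quick numeric sanity check of those parameter choices (the function below is just a reimplementation of the Pulse Generator's time-based formula, not Simulink itself):

```python
# Reproducing the Pulse Generator settings above: Amplitude 1, Period 5 s,
# Pulse Width 60 %, Phase Delay 2 s. Before the phase delay the output is 0;
# afterwards it is high for 60 % of each 5 s period.

def pulse(t, amplitude=1, period=5.0, width_pct=60.0, phase_delay=2.0):
    """Value of the pulse at time t (seconds)."""
    if t < phase_delay:
        return 0
    t_in_period = (t - phase_delay) % period
    return amplitude if t_in_period < period * width_pct / 100.0 else 0

samples = [pulse(t) for t in range(10)]  # t = 0..9 s, 1 Hz sampling
print(samples)  # [0, 0, 1, 1, 1, 0, 0, 1, 1, 1]
```

The first cycle gives 2 s of 0 followed by 3 s of 1, exactly the behaviour the question asks for.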

Best way to code a real-time multiplayer game

I'm not sure if the term real-time is being misused here, but the idea is that many players on a server have a city producing n resources per second. There might be a thousand such cities. What's the best way to reward all of the player cities?
Is the best way a loop like such placed in an infinite loop running whenever the game is "live"? (please ignore the obvious faults with such simplistic logic)
foreach (City c in AllCities) {
    if (c.lastTouched < DateTime.Now.AddSeconds(-10)) {
        c.resources += (DateTime.Now - c.lastTouched).Seconds * c.resourcesPerSecond;
        c.lastTouched = DateTime.Now;
        c.saveChanges();
    }
}
I don't think you want an infinite loop, as that would waste a lot of CPU cycles. This is basically a common simulation problem (see Wikipedia's article on simulation software), and there are a few approaches I can think of:
A discrete-time approach, where you increment the clock by a fixed amount and recalculate the state of your system. This is similar to your approach above, except you do the calculation periodically and remove the 10-second if clause.
A discrete-event approach, where you have a central event queue, each event carrying a timestamp, sorted by time. You sleep until the next event is due and then dispatch it; e.g., an event could mean adding a single resource (see Wikipedia's article on discrete-event simulation).
A lazy approach: whenever someone asks for the number of resources, calculate it from the rate, the initial time, and the current time. This can be very efficient when the number of queries is expected to be small relative to the number of cities and the elapsed time.
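The lazy approach deserves a sketch, since it removes the ticking loop entirely; the class and method names below are illustrative, not from the question:

```python
# Lazy resource accrual: never tick cities at all. Store a banked balance and
# the timestamp it was last realised at, and compute the current total on
# demand from the elapsed time -- O(1) per query, zero cost for idle cities.

class City:
    def __init__(self, resources_per_second, now):
        self.rate = resources_per_second
        self.banked = 0.0        # resources realised as of last_touched
        self.last_touched = now  # timestamp of the last realisation

    def resources(self, now):
        """Current total, computed on demand."""
        return self.banked + (now - self.last_touched) * self.rate

    def spend(self, amount, now):
        """Realise the accrued balance before mutating it."""
        self.banked = self.resources(now) - amount
        self.last_touched = now

city = City(resources_per_second=2.0, now=0.0)
print(city.resources(now=10.0))  # 20.0 produced in 10 s, no loop ever ran
city.spend(5.0, now=10.0)
print(city.resources(now=15.0))  # 15.0 banked + 10.0 accrued = 25.0
```

The only care needed is to realise the balance (as `spend` does) before any mutation, so the rate change or purchase is applied at the right point in time.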
While you can store the last-ticked time per object, as in your example, it's often easier to just use a global timestep:
while (1) {
    currentTime = now();
    dt = currentTime - lastUpdateTime;
    foreach (whatever)
        whatever.update(dt);
    lastUpdateTime = currentTime;
}
if you have different systems that don't need as frequent updates:
while (1) {
    currentTime = now();
    dt = currentTime - lastUpdateTime;
    subsystem.timer += dt;
    // Careful: subsystem.update() must run faster than subsystem.updatePeriod,
    // or this inner loop falls further behind every frame.
    while (subsystem.timer > subsystem.updatePeriod) {
        subsystem.timer -= subsystem.updatePeriod;
        subsystem.update(subsystem.updatePeriod);
    }
    // ...
    lastUpdateTime = currentTime;
}
(which you'll notice is pretty much what you were doing on a per city basis)
Another gotcha is that with different subsystem clock rates you can get overlaps (i.e., ticking many subsystems in the same frame), leading to inconsistent frame times, which can sometimes be an issue.
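That overlap is easy to demonstrate with a toy Python model (function name and periods are illustrative): with a fixed frame dt and two subsystems on different periods, some frames tick neither subsystem and some tick both.

```python
# Per-subsystem accumulators with different update periods. The returned list
# is the number of subsystem ticks that fired on each frame -- note how the
# per-frame work is uneven, which is the frame-time inconsistency mentioned.

def ticks_per_frame(frame_dt, periods, frames):
    timers = [0.0] * len(periods)
    result = []
    for _ in range(frames):
        ticked = 0
        for i, period in enumerate(periods):
            timers[i] += frame_dt
            while timers[i] >= period:
                timers[i] -= period
                ticked += 1
        result.append(ticked)
    return result

# Frame dt of 1; subsystems with periods 2 and 3. On frame 6 (a common
# multiple) both subsystems tick at once, while frames 1 and 5 tick nothing.
print(ticks_per_frame(1.0, [2.0, 3.0], 6))  # [0, 1, 1, 1, 0, 2]
```

A common mitigation is to stagger the subsystems' initial timer values so their ticks interleave instead of coinciding.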