How in Flink CEP can we detect a pattern that lasts for a period of time? - scala

I want to detect a pattern with Flink CEP. Here is my use case:
I should raise an event when the speed of my vehicle stays above a speed limit for a given period of time.
Example 1 (speedLimit = 100, period = 60 seconds):
event1: speed = 50, eventtime = 0
event2: speed = 100, eventtime = 10
event3: speed = 120, eventtime = 30
event4: speed = 150, eventtime = 40
event5: speed = 120, eventtime = 70
event6: speed = 50, eventtime = 90
=> raise 1 event
Example 2 (speedLimit = 100, period = 60 seconds):
event1: speed = 50, eventtime = 0
event2: speed = 100, eventtime = 10
event3: speed = 120, eventtime = 30
event4: speed = 150, eventtime = 40
event5: speed = 60, eventtime = 70
=> raise 0 events
Please, I need your help.

I would approach this by looking for a sequence of 2 or more events where the speed is greater than or equal to 100 for all of them, and where the timestamp of the last one minus the timestamp of the first one is greater than or equal to 60.
By the way, you may find MATCH_RECOGNIZE is easier to work with, but either it or CEP should be fine for this use case.
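Roughly, a sketch of that approach with the Scala CEP API might look like the following. The SpeedEvent case class, its field names, and the stream wiring are assumptions for illustration; event-time/watermark setup and the after-match skip strategy are left out:
import org.apache.flink.cep.scala.CEP
import org.apache.flink.cep.scala.pattern.Pattern
import org.apache.flink.streaming.api.scala._

case class SpeedEvent(speed: Double, eventTime: Long) // eventTime in seconds (assumed)

def speedingAlerts(speedEvents: DataStream[SpeedEvent],
                   speedLimit: Double = 100.0,
                   period: Long = 60L): DataStream[String] = {

  val pattern = Pattern
    .begin[SpeedEvent]("over")
    .where(_.speed >= speedLimit)   // every event in the match is at or above the limit
    .oneOrMore
    .consecutive()                  // no slower readings allowed in between
    .next("end")
    .where((event, ctx) => {
      // iterative condition: the closing event must still be speeding and lie
      // at least `period` after the first speeding event of this match
      val first = ctx.getEventsForPattern("over").head
      event.speed >= speedLimit && event.eventTime - first.eventTime >= period
    })

  CEP.pattern(speedEvents, pattern).select { matched =>
    s"speeding from ${matched("over").head.eventTime} to ${matched("end").head.eventTime}"
  }
}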

Int96Value to Date string

When reading a parquet file (using Scala) I read the timestamp field back as:
Int96Value{Binary{12 constant bytes, [0, 44, 84, 119, 54, 49, 0, 0, -62, -127, 37, 0]}}
How can I convert it to a date string?
I did some research for you. The Int96 format is quite specific and seems to be deprecated.
Here is a discussion about converting Int96 to Date.
Based on this, I created the following piece of code:
def main(args: Array[String]): Unit = {
  import java.util.Date
  import org.apache.parquet.example.data.simple.{Int96Value, NanoTime}
  import org.apache.parquet.io.api.Binary

  val int96Value = new Int96Value(Binary.fromConstantByteArray(
    Array[Byte](0, 44, 84, 119, 54, 49, 0, 0, -62, -127, 37, 0)))
  val nanoTime = NanoTime.fromInt96(int96Value)
  // note: both factors below are Ints, so the day-to-nanosecond multiplication overflows;
  // using a Long constant (e.g. 86400L * 1000 * 1000 * 1000) would avoid that
  val nanosecondsSinceUnixEpoch = (nanoTime.getJulianDay - 2440588) * (86400 * 1000 * 1000 * 1000) + nanoTime.getTimeOfDayNanos
  val date = new Date(nanosecondsSinceUnixEpoch / (1000 * 1000))
  println(date)
}
However, it prints Sun Sep 27 17:05:55 CEST 2093. I am not sure if this is the date you expected.
Edit: using Instant as suggested:
import java.time.Instant

val nanosInSecond = 1000 * 1000 * 1000
val instant = Instant.ofEpochSecond(nanosecondsSinceUnixEpoch / nanosInSecond, nanosecondsSinceUnixEpoch % nanosInSecond)
println(instant) // prints 2093-09-27T15:05:55.933865216Z
java.time supports Julian days.
Credits to ygor for doing the research and finding out how to interpret the 12 bytes of your array.
byte[] int96Bytes = { 0, 44, 84, 119, 54, 49, 0, 0, -62, -127, 37, 0 };

// Find Julian day
int julianDay = 0;
int index = int96Bytes.length;
while (index > 8) {
    index--;
    julianDay <<= 8;
    julianDay += int96Bytes[index] & 0xFF;
}

// Find nanos since midday (since Julian days start at midday)
long nanos = 0;
// Continue from the index we got to
while (index > 0) {
    index--;
    nanos <<= 8;
    nanos += int96Bytes[index] & 0xFF;
}

LocalDateTime timestamp = LocalDate.MIN
        .with(JulianFields.JULIAN_DAY, julianDay)
        .atTime(LocalTime.NOON)
        .plusNanos(nanos);
System.out.println("Timestamp: " + timestamp);
This prints:
Timestamp: 2017-10-24T03:01:50
I’m not happy about converting your byte array to an int and a long by hand, but I don’t know Parquet well enough to use the conversions that are probably available there. Use them if you can.
It doesn’t matter which LocalDate we use as starting point since we are changing it to the right Julian day anyway, so I picked LocalDate.MIN just to pick one.
The way I read the documentation, Julian days are always in the local time zone, that is, no time zone is understood, and they always start at midday (not midnight).
Link: Documentation of JulianFields in java.time
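If you want to stay in Scala, here is a minimal sketch of the same java.time-based conversion, assuming the 12-byte layout described above (8 bytes of nanos since midday followed by 4 bytes of Julian day, both little-endian):
import java.time.{LocalDate, LocalDateTime, LocalTime}
import java.time.temporal.JulianFields

def int96ToTimestamp(bytes: Array[Byte]): LocalDateTime = {
  require(bytes.length == 12, "Int96 timestamps are exactly 12 bytes")
  // last 4 bytes: Julian day number, little-endian
  val julianDay = (8 until 12).foldLeft(0L) { (acc, i) =>
    acc | ((bytes(i) & 0xFFL) << ((i - 8) * 8))
  }
  // first 8 bytes: nanoseconds since midday, little-endian
  val nanosSinceNoon = (0 until 8).foldLeft(0L) { (acc, i) =>
    acc | ((bytes(i) & 0xFFL) << (i * 8))
  }
  LocalDate.MIN
    .`with`(JulianFields.JULIAN_DAY, julianDay)
    .atTime(LocalTime.NOON)
    .plusNanos(nanosSinceNoon)
}

// the bytes from the question; should match the Java result above
println(int96ToTimestamp(Array[Byte](0, 44, 84, 119, 54, 49, 0, 0, -62, -127, 37, 0)))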

What is the purpose of the add_loss function in Keras?

I recently stumbled across variational autoencoders and tried to make them work on MNIST using Keras. I found a tutorial on GitHub.
My question concerns the following lines of code:
# Build model
vae = Model(x, x_decoded_mean)
# Calculate custom loss
xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae_loss = K.mean(xent_loss + kl_loss)
# Compile
vae.add_loss(vae_loss)
vae.compile(optimizer='rmsprop')
Why is add_loss used instead of specifying it as compile option? Something like vae.compile(optimizer='rmsprop', loss=vae_loss) does not seem to work and throws the following error:
ValueError: The model cannot be compiled because it has no loss to optimize.
What is the difference between this function and a custom loss function that I can add as an argument for Model.fit()?
Thanks in advance!
P.S.: I know there are several issues concerning this on github, but most of them were open and uncommented. If this has been resolved already, please share the link!
Edit 1
I removed the line which adds the loss to the model and used the loss argument of the compile function. It looks like this now:
# Build model
vae = Model(x, x_decoded_mean)
# Calculate custom loss
xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae_loss = K.mean(xent_loss + kl_loss)
# Compile
vae.compile(optimizer='rmsprop', loss=vae_loss)
This throws a TypeError:
TypeError: Using a 'tf.Tensor' as a Python 'bool' is not allowed. Use 'if t is not None:' instead of 'if t:' to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
Edit 2
Thanks to @MarioZ's efforts, I was able to figure out a workaround for this.
# Build model
vae = Model(x, x_decoded_mean)

# Calculate custom loss in separate function
def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean)
    kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    vae_loss = K.mean(xent_loss + kl_loss)
    return vae_loss

# Compile
vae.compile(optimizer='rmsprop', loss=vae_loss)
...
vae.fit(x_train,
        x_train,  # <-- did not need this previously
        shuffle=True,
        epochs=epochs,
        batch_size=batch_size,
        validation_data=(x_test, x_test))  # <-- worked with (x_test, None) before
For some strange reason, I had to explicitly specify y and y_test while fitting the model. Originally, I didn't need to do this. The produced samples seem reasonable to me.
Although I could resolve this, I still don't know what the differences and disadvantages of these two methods are (other than needing a different syntax). Can someone give me more insight?
I'll try to answer the original question of why model.add_loss() is being used instead of specifying a custom loss function to model.compile(loss=...).
All loss functions in Keras always take two parameters y_true and y_pred. Have a look at the definition of the various standard loss functions available in Keras, they all have these two parameters. They are the 'targets' (the Y variable in many textbooks) and the actual output of the model. Most standard loss functions can be written as an expression of these two tensors. But some more complex losses cannot be written in that way. For your VAE example this is the case because the loss function also depends on additional tensors, namely z_log_var and z_mean, which are not available to the loss functions. Using model.add_loss() has no such restriction and allows you to write much more complex losses that depend on many other tensors, but it has the inconvenience of being more dependent on the model, whereas the standard loss functions work with just any model.
(Note: The code proposed in other answers here is somewhat cheating, inasmuch as it just uses global variables to sneak in the additional required dependencies. This makes the loss function not a true function in the mathematical sense. I consider this to be much less clean code and I expect it to be more error-prone.)
JIH's answer is right, of course, but maybe it is useful to add:
model.add_loss() has no restrictions, but it also removes the comfort of using, for example, targets in model.fit().
If you have a loss that depends on additional parameters of the model, of other models, or on external variables, you can still use a Keras-style encapsulated loss function by writing a wrapper function to which you pass all the additional parameters:
def loss_carrier(extra_param1, extra_param2):
    def loss(y_true, y_pred):
        # x = complicated math involving extra_param1, extra_param2, y_true, y_pred
        # remember to use tensor ops, e.g. keras.backend.sum, keras.backend.square, keras.backend.mean
        # also remember that if extra_param1, extra_param2 are variable tensors instead of simple floats,
        # you need to have them defined as inputs=(main, extra_param1, extra_param2) in your keras.Model instantiation,
        # and have them defined as keras.Input or tf.placeholder with the right shape.
        return x
    return loss

model.compile(optimizer='adam', loss=loss_carrier(extra_param1, extra_param2))
The trick is the last row, where you return a function, since Keras expects a loss with just the two parameters y_true and y_pred.
Possibly looks more complicated than the model.add_loss version, but the loss stays modular.
I was also wondering about the same question, and about related things such as how to add a loss function within intermediate layers. Here I'm sharing what I have observed; I hope it helps others. It's true that standard Keras loss functions only take two arguments, y_true and y_pred. But in practice there can be cases where we need some external parameter or coefficient while computing the loss from these two values. This can be needed at the last layer, as usual, or somewhere in the middle of the model.
model.add_loss()
The accepted answer correctly describes model.add_loss(). It potentially depends on the layer inputs (tensors). According to the official doc, when writing the call method of a custom layer or a subclassed model, we may want to compute scalar quantities that we want to minimize during training (e.g. regularization losses). We can use the add_loss() layer method to keep track of such loss terms. For instance, activity regularization losses depend on the inputs passed when calling a layer. Here's an example of a layer that adds a sparsity regularization loss based on the L2 norm of the inputs:
import tensorflow as tf
from tensorflow.keras.layers import Layer

class MyActivityRegularizer(Layer):
    """Layer that creates an activity sparsity regularization loss."""

    def __init__(self, rate=1e-2):
        super(MyActivityRegularizer, self).__init__()
        self.rate = rate

    def call(self, inputs):
        # We use `add_loss` to create a regularization loss
        # that depends on the inputs.
        self.add_loss(self.rate * tf.reduce_sum(tf.square(inputs)))
        return inputs
Loss values added via add_loss can be retrieved in the .losses list property of any Layer or Model (they are recursively retrieved from every underlying layer):
from tensorflow.keras import layers

class SparseMLP(Layer):
    """Stack of Linear layers with a sparsity regularization loss."""

    def __init__(self, output_dim):
        super(SparseMLP, self).__init__()
        self.dense_1 = layers.Dense(32, activation=tf.nn.relu)
        self.regularization = MyActivityRegularizer(1e-2)
        self.dense_2 = layers.Dense(output_dim)

    def call(self, inputs):
        x = self.dense_1(inputs)
        x = self.regularization(x)
        return self.dense_2(x)

mlp = SparseMLP(1)
y = mlp(tf.ones((10, 10)))
print(mlp.losses)  # List containing one float32 scalar
Also note, when using model.fit(), such loss terms are handled automatically. When writing a custom training loop, we should retrieve these terms by hand from model.losses, like this:
loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

# Iterate over the batches of a dataset.
for x, y in dataset:
    with tf.GradientTape() as tape:
        # Forward pass.
        logits = model(x)
        # Loss value for this batch.
        loss_value = loss_fn(y, logits)
        # Add extra loss terms to the loss value.
        loss_value += sum(model.losses)  # < ------------- HERE ---------
    # Update the weights of the model to minimize the loss value.
    gradients = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(gradients, model.trainable_weights))
Custom losses
With model.add_loss() we can, AFAIK, attach a loss somewhere in the middle of the network, and there we are no longer bound to only two parameters, i.e. y_true and y_pred. But what if we also want to pass an external parameter or coefficient to the loss function of the network's last layer? Nric's answer is correct, but it can also be implemented by subclassing the tf.keras.losses.Loss class and implementing the following two methods:
__init__(self): accept parameters to pass during the call of your loss function
call(self, y_true, y_pred): use the targets (y_true) and the model predictions (y_pred) to compute the model's loss
Here is an example of a custom MSE created by subclassing the tf.keras.losses.Loss class. Here, too, we are no longer bound to only two parameters, i.e. y_true and y_pred.
class CustomMSE(keras.losses.Loss):
    def __init__(self, regularization_factor=0.1, name="custom_mse"):
        super().__init__(name=name)
        self.regularization_factor = regularization_factor

    def call(self, y_true, y_pred):
        mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
        reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
        return mse + reg * self.regularization_factor

model.compile(optimizer=..., loss=CustomMSE())
Try this:
import pandas as pd
import numpy as np
import pickle
import matplotlib.pyplot as plt
from scipy import stats
import tensorflow as tf
import seaborn as sns
from pylab import rcParams
from sklearn.model_selection import train_test_split
from keras.models import Model, load_model, Sequential
from keras.layers import Input, Lambda, Dense, Dropout, Layer, Bidirectional, Embedding, LSTM, RepeatVector, TimeDistributed, BatchNormalization, Activation, Merge
from keras.callbacks import ModelCheckpoint, TensorBoard
from keras import regularizers
from keras import backend as K
from keras import metrics
from scipy.stats import norm
from keras.utils import to_categorical
from keras import initializers
bias = bias_initializer='zeros'
from keras import objectives
np.random.seed(22)
data1 = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0], dtype='int32')
data2 = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0], dtype='int32')
data3 = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0], dtype='int32')
#train = np.zeros(shape=(992,54))
#test = np.zeros(shape=(921,54))
train = np.zeros(shape=(300,54))
test = np.zeros(shape=(300,54))
for n, i in enumerate(train):
    if (n <= 100):
        train[n] = data1
    elif (n > 100 and n <= 200):
        train[n] = data2
    elif (n > 200):
        train[n] = data3

for n, i in enumerate(test):
    if (n <= 100):
        test[n] = data1
    elif (n > 100 and n <= 200):
        test[n] = data2
    elif (n > 200):
        test[n] = data3
batch_size = 5
original_dim = train.shape[1]
intermediate_dim45 = 45
intermediate_dim35 = 35
intermediate_dim25 = 25
intermediate_dim15 = 15
intermediate_dim10 = 10
intermediate_dim5 = 5
latent_dim = 3
epochs = 50
epsilon_std = 1.0
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim), mean=0.,
                              stddev=epsilon_std)
    return z_mean + K.exp(z_log_var / 2) * epsilon
x = Input(shape=(original_dim,), name = 'first_input_mario')
h1 = Dense(intermediate_dim45, activation='relu', name='h1')(x)
hD = Dropout(0.5)(h1)
h2 = Dense(intermediate_dim25, activation='relu', name='h2')(hD)
h3 = Dense(intermediate_dim10, activation='relu', name='h3')(h2)
h = Dense(intermediate_dim5, activation='relu', name='h')(h3) # was relu
h = Dropout(0.1)(h)
z_mean = Dense(latent_dim, activation='relu')(h)
z_log_var = Dense(latent_dim, activation='relu')(h)
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
decoder_h = Dense(latent_dim, activation='relu')
decoder_h1 = Dense(intermediate_dim5, activation='relu')
decoder_h2 = Dense(intermediate_dim10, activation='relu')
decoder_h3 = Dense(intermediate_dim25, activation='relu')
decoder_h4 = Dense(intermediate_dim45, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
h_decoded1 = decoder_h1(h_decoded)
h_decoded2 = decoder_h2(h_decoded1)
h_decoded3 = decoder_h3(h_decoded2)
h_decoded4 = decoder_h4(h_decoded3)
x_decoded_mean = decoder_mean(h_decoded4)
vae = Model(x, x_decoded_mean)
def vae_loss(x, x_decoded_mean):
    xent_loss = objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = -0.5 * K.mean(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var))
    loss = xent_loss + kl_loss
    return loss

vae.compile(optimizer='rmsprop', loss=vae_loss)
vae.fit(train, train, batch_size=batch_size, epochs=epochs, shuffle=True,
        validation_data=(test, test))
vae = Model(x, x_decoded_mean)
encoder = Model(x, z_mean)
decoder_input = Input(shape=(latent_dim,))
_h_decoded = decoder_h (decoder_input)
_h_decoded1 = decoder_h1 (_h_decoded)
_h_decoded2 = decoder_h2 (_h_decoded1)
_h_decoded3 = decoder_h3 (_h_decoded2)
_h_decoded4 = decoder_h4 (_h_decoded3)
_x_decoded_mean = decoder_mean(_h_decoded4)
generator = Model(decoder_input, _x_decoded_mean)
generator.summary()
You need to change the compile line to
vae.compile(optimizer='rmsprop', loss=vae_loss)

MyHDL: Can't translate Signal.intbv.max to VHDL

I'm new to python and MyHDL so I started by converting old VHDL projects to MyHDL. This project is a vga timer that can accept any width, height, and frequency (given that they actually work with monitors). It doesn't successfully convert to either VHDL or Verilog because of the statements:
h_count.val.max # line 30
v_count.val.max # line 33
I can print their values just fine so they definitely evaluate to integers, but if I replace them with their literal values then it properly converts. I couldn't find anything about this in the myhdl issue tracker, but I don't want to add a false issue because of a newbie's mistake. Is there a proper way to use Signal.val.max or do I just avoid it? Here's the full code:
from myhdl import Signal, intbv, always_comb, always, toVHDL

def vga_timer(clk, x, y, h_sync, v_sync, vidon, width=800, height=600, frequency=72,
              left_buffer=0, right_buffer=0, top_buffer=0, bottom_buffer=0):
    # load vga constants by resolution
    resolution = (width, height, frequency)
    supported_resolutions = {(640, 480, 60): (16, 96, 48, 10, 2, 33, 0),
                             (800, 600, 60): (40, 128, 88, 1, 4, 23, 1),
                             (800, 600, 72): (56, 120, 64, 37, 6, 23, 1),
                             (1024, 768, 60): (24, 136, 160, 3, 6, 29, 0),
                             (1280, 720, 60): (72, 80, 216, 3, 5, 22, 1),
                             (1920, 1080, 60): (88, 44, 148, 4, 5, 36, 1)}
    assert resolution in supported_resolutions, "%ix%i @ %ifps not a supported resolution" % (width, height, frequency)
    screen_constants = supported_resolutions.get(resolution)

    # h for horizontal variables and signals, v for vertical constants and signals
    h_front_porch, h_sync_width, h_back_porch, v_front_porch, v_sync_width, v_back_porch, polarity = screen_constants
    h_count = Signal(intbv(0, 0, width + h_front_porch + h_sync_width + h_back_porch))
    v_count = Signal(intbv(0, 0, height + v_front_porch + v_sync_width + v_back_porch))
    print(h_count.val.max)
    print(v_count.val.max)

    @always(clk.posedge)
    def counters():
        h_count.next = h_count + 1
        v_count.next = v_count
        if h_count == 1040 - 1:  # h_count.val.max - 1:
            h_count.next = 0
            v_count.next = v_count + 1
            if v_count == 666 - 1:  # v_count.val.max - 1:
                v_count.next = 0

    # determines h_sync and v_sync
    @always_comb
    def sync_pulses():
        h_sync_left = width - left_buffer + h_front_porch
        h_sync_right = h_sync_left + h_sync_width
        h_sync.next = polarity
        if h_sync_left <= h_count and h_count < h_sync_right:
            h_sync.next = not polarity
        v_sync_left = height - top_buffer + v_front_porch
        v_sync_right = v_sync_left + v_sync_width
        v_sync.next = polarity
        if v_sync_left <= v_count and v_count < v_sync_right:
            v_sync.next = not polarity

    @always_comb
    def blanking():
        vidon.next = 0
        if h_count < width - left_buffer - right_buffer and v_count < height - top_buffer - bottom_buffer:
            vidon.next = 1

    @always_comb
    def x_y_adjust():
        # x and y are only used when vidon = 1. during this time x = h_count and y = v_count
        x.next = h_count[len(x.val):]
        y.next = v_count[len(y.val):]

    return counters, sync_pulses, blanking, x_y_adjust

width = 800
height = 600
frequency = 72
clk = Signal(bool(0))
x = Signal(intbv(0)[(width-1).bit_length():])
y = Signal(intbv(0)[(height-1).bit_length():])
h_sync = Signal(bool(0))
v_sync = Signal(bool(0))
vidon = Signal(bool(0))
vga_timer_inst = toVHDL(vga_timer, clk, x, y, h_sync, v_sync, vidon, width, height, frequency)
Any miscellaneous advice on my code is also welcome.
You may have found this out by now, but if you want convertible code, you can't use the signal qualities (min, max, number of bits, etc.) in the combinational or sequential blocks. You can use them in constant assignments outside these blocks, though. So if you put these instead of your print statements:
h_counter_max = h_count.val.max - 1
v_counter_max = v_count.val.max - 1
you can use h_counter_max and v_counter_max in your tests on lines 30 and 33.
The min, max attributes can be used in the latest version.

How to get an SKAction.sequence to wait for a random duration

I'm trying to get a door to open up, wait for a random amount of time (within a range) and then close. If I use SKAction.waitForDuration, I can set the exact time to wait, and that works. However, if I use SKAction.waitForDuration(withRange:), it always opens exactly at the shortest time in the range. How do I get it to open at other times within the range? Any help would be greatly appreciated! Thanks!
Here's my code:
var doorAction = SKAction.moveTo(CGPoint(x: size.width * 0.5, y: size.height + size.height * 1.95), duration: NSTimeInterval(1))
// randomWait should give me a value between 10 & 30, but door always opens at 10
var randomWait = SKAction.waitForDuration(20.0, withRange: 20.0)
// waitAction works fine, but I want a value between 10 and 30
// var waitAction = SKAction.waitForDuration(10)
var doorReturnAction = SKAction.moveTo(CGPoint(x: size.width * 0.5, y: size.height * 2.18), duration: NSTimeInterval(1))
var actionSequence = SKAction.sequence([doorAction, randomWait, doorReturnAction])
self.runAction(actionSequence)
Since you are not repeating the action sequence, you don't really need SKAction.waitForDuration:withRange:. You can calculate a waitDuration using arc4random():
let upperlimit : UInt32 = 30
let lowerlimit : UInt32 = 10
let waitDuration = NSTimeInterval(CGFloat(arc4random() % ((upperlimit - lowerlimit) * 10) + lowerlimit * 10)/10.0)
var randomWait = SKAction.waitForDuration(waitDuration)
From the docs:
sec - The average amount of time to wait.
durationRange - The range of possible values for the duration.
If you set 20 to be the average and the range only goes up to 20, then the only way it can meet that is to always open at 20. You should try having the first parameter (the average) be somewhere close to the middle of the range if you want the behavior to be less uniform.

What is the meaning of group.nodeLocation in ONE simulator syntax?

I was reading a tutorial on the ONE simulator and came across this syntax:
group.nodeLocation = 100,100
As far as I know, a group can have multiple nodes, so I am not clear what group.nodeLocation means. Which node's location are we fixing with this setting?
Thanks,
It depends on which kind of movement model you use.
Setting group.nodeLocation is required for StationaryMovement, but for dynamic movement models (e.g., RandomWaypoint) it is meaningless.
If you want to place N nodes at different locations, you should separate them into N groups.
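For example, a minimal settings sketch for three separately placed stationary nodes might look like this (the group numbers and coordinates are made up for illustration):
Group1.movementModel = StationaryMovement
Group1.nrofHosts = 1
Group1.nodeLocation = 100,100
Group2.movementModel = StationaryMovement
Group2.nrofHosts = 1
Group2.nodeLocation = 200,150
Group3.movementModel = StationaryMovement
Group3.nrofHosts = 1
Group3.nodeLocation = 300,200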
Speaking of adding static nodes in bulk, you can use MapRouteMovement to simulate them: a static node can be regarded as a route whose initial coordinate equals its destination coordinate. For instance, 5 static nodes are defined as:
LINESTRING (100 100, 100.0 100.0)
LINESTRING (200 200, 200.0 200.0)
LINESTRING (300 300, 300.0 300.0)
LINESTRING (400 400, 400.0 400.0)
LINESTRING (500 500, 500.0 500.0)
And the settings file looks like this:
Group4.groupID = b
Group4.movementModel = MapRouteMovement #MapRouteMovement
Group4.routeFile = path/routFile.wkt #routeFile
Group4.routeType = 2
Group4.nrofHosts = 5
Group4.waitTime = 0, 0
Group4.speed = 0, 0
BTW, don't forget to collect the above coordinates into a map file, as shown below:
#settings.txt
MapBasedMovement.nrofMapFiles = 1
MapBasedMovement.mapFile1 = path/mapFile.wkt
#mapFile.wkt
LINESTRING (100 100, 200.0 200.0, 300 300, 400 400, 500.0 500.0)