I am using EvalVid 2.7, ns-2.35, and Ubuntu 14.04 to evaluate video traffic.
But when I use this command:
~/myevalvid2$ ./etmp4 -f -0 sd_a01 rd_a01 st_a01 a01.mp4 a01out
I receive the following output, with an error at the end:
loss_a01out.txt: percentage of lost frames|packets
column 1: I (including H)
column 2: P
column 3: B
column 4: overall
delay_a01out.txt: jitter/delay statistics
column 1: frame|packet id
column 2: loss flag
column 3: end-to-end delay s
column 4: sender inter frame|packet lag s
column 5: receiver inter [frame|packet] lag s
column 6: cumulative jitter s Hartanto et. al.
rate_s_a01out.txt: sender rate
column 1: time s
column 2: momentary rate bytes/s
column 3: cumulative rate bytes/s
rate_r_a01out.txt: receiver rate
column 1: time s
column 2: momentary rate bytes/s
column 3: cumulative rate bytes/s
Error in etmp4: double free or corruption (fasttop): 0x085ec028
Aborted (core dumped)
Do you have any idea how I could solve this problem?
export MALLOC_CHECK_=0
Run the above command on the terminal and then run the etmp4 command again.
I created a report that compares two amounts and shows the percentage increase or decrease.
The logic is: compare amount1 to amount2, then show its % increase/decrease.
I have this field that computes the increase/decrease of the number.
The formula is:
(tonumber({tblReclass.Amount})/tonumber({tblReclass.AverageAmt}))*100-100
However, some data rows contain zero values, and division by zero throws an error, so I added an if statement. The code is now:
if {tblReclass.Amount} > 0 and {tblReclass.AverageAmt} > 0 then
(tonumber({tblReclass.Amount})/tonumber({tblReclass.AverageAmt}))*100-100
else
0
It now throws an error after the then statement that says
a string is required here
What must be revised in the code?
The computation works fine if I remove the zero values,
so what I did temporarily was remove the rows with zero values, but the report now shows incomplete data. I want to show the zero values.
Try this version:
if tonumber({tblReclass.AverageAmt}) > 0 then
(tonumber({tblReclass.Amount})/tonumber({tblReclass.AverageAmt}))*100-100
else 0
While using Spring Batch, I have a requirement to read an Excel file, compute a statistic over one of the columns, and then divide each value in that column by the statistical result. An example follows:
input:
name price
a 10
b 20
c 30
output:
name price proportion
a 10 1/6
b 20 1/3
c 30 1/2
I can suggest going with two steps:
The first one (a tasklet) calculates the sum of all the prices and registers the result in the jobExecutionContext.
The second one (chunk-oriented) re-reads the file and writes the proportion column using the sum calculated in the first step.
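Here is a minimal sketch of that approach. The class names Price, PriceWithProportion and SumPricesTasklet, and the context key "priceSum", are my own illustrative assumptions, not Spring Batch API; the actual Excel reading is assumed to be handled by an ItemReader configured elsewhere.

import java.util.List;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

record Price(String name, double price) {}
record PriceWithProportion(String name, double price, double proportion) {}

// Step 1 (tasklet): sum the price column and store the result in the
// jobExecutionContext so the second step can read it.
class SumPricesTasklet implements Tasklet {

    private final List<Price> rows;  // stand-in for the rows read from the Excel file

    SumPricesTasklet(List<Price> rows) {
        this.rows = rows;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
        double sum = rows.stream().mapToDouble(Price::price).sum();
        chunkContext.getStepContext().getStepExecution()
                .getJobExecution().getExecutionContext()
                .putDouble("priceSum", sum);
        return RepeatStatus.FINISHED;
    }
}

// Step 2 (chunk): re-read the file and compute price / sum for each row.
// @StepScope allows the sum to be injected from the jobExecutionContext.
@Component
@StepScope
class ProportionProcessor implements ItemProcessor<Price, PriceWithProportion> {

    @Value("#{jobExecutionContext['priceSum']}")
    private Double priceSum;

    @Override
    public PriceWithProportion process(Price item) {
        return new PriceWithProportion(item.name(), item.price(), item.price() / priceSum);
    }
}

The tasklet writes the sum into the job's ExecutionContext, and @StepScope together with the #{jobExecutionContext['priceSum']} expression makes that value visible to the processor of the second step.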
I am trying to understand the stochastic uniform selection algorithm as described in the docs: https://se.mathworks.com/help/gads/genetic-algorithm-options.html
The ga default selection function, Stochastic uniform, lays out a line in which each parent corresponds to a section of the line of length proportional to its scaled value. The algorithm moves along the line in steps of equal size. At each step, the algorithm allocates a parent from the section it lands on. The first step is a uniform random number less than the step size.
To me, the above docs can be interpreted in two ways:
Either a random number x is picked initially and all subsequent "steps" are simply multiples of it:
Step size: 1
Random x, e.g. 0.5
Locations on line: 0.5, 1, 1.5, 2, 2.5
Or the algorithm moves along the line in fixed steps, and additionally a random x less than the fixed step size is added every time:
Fixed step size: 1
Random x varies every time but is < 1
Locations on line: 1.1, 2.3, 3.2, 4.5, 5.1
Interpretation 1 faces the issue that if the random value chosen is too small, only the most fit individual will be selected, as we barely move along the line at all. So is the second interpretation correct?
As far as I understood, the scaled fitness values sum up to the count of parents that will be generated; therefore isn't the step size always 1, since we can fit exactly as many steps as parents needed on the line?
Here's a third interpretation for your consideration. The algorithm moves along the line in fixed steps; however, the starting point is a random value less than the step size.
Fixed step size: 1
Randomly chosen start: 0.32
Locations on line: 0.32, 1.32, 2.32, 3.32, 4.32, 5.32
By using a fixed step size, the algorithm knows exactly how many parents will be selected. For example, if the line is 100 units long, and the step size is 1, then exactly 100 parents will be selected. But which parents are selected is determined by the random starting point.
This assumes that there are multiple parents to choose from in each interval of length 1, and that the most fit individual has a scaled length less than 1.
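Here is a minimal sketch of this third interpretation (essentially stochastic universal sampling). It is not MATLAB's actual implementation; it assumes the scaled fitness values sum to the number of parents to select, so the step size is 1.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class StochasticUniformSelection {

    // Returns the indices of the selected parents. Each individual owns a
    // section of the line whose length is its scaled fitness value.
    static List<Integer> select(double[] scaledFitness, int numParents, Random rng) {
        double stepSize = 1.0;                         // fixed step size
        double pointer = rng.nextDouble() * stepSize;  // random start, less than the step size
        List<Integer> parents = new ArrayList<>();

        int i = 0;
        double sectionEnd = scaledFitness[0];          // right edge of individual i's section
        for (int p = 0; p < numParents; p++) {
            // advance to the section the pointer currently lands in
            while (i < scaledFitness.length - 1 && pointer > sectionEnd) {
                i++;
                sectionEnd += scaledFitness[i];
            }
            parents.add(i);
            pointer += stepSize;                       // equal-sized steps along the line
        }
        return parents;
    }

    public static void main(String[] args) {
        // scaled fitness sums to 6, so exactly 6 parents are selected
        double[] scaled = {2.5, 1.5, 1.0, 0.6, 0.4};
        System.out.println(select(scaled, 6, new Random()));
    }
}

With a random start of 0.32 as in the example above, the pointers fall at 0.32, 1.32, ..., 5.32, and each pointer selects the individual whose section of the line covers it.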
I am having difficulty filtering a time dimension.
I have a Time Dimension with a date level that is derived from the SQL function DATE(TimeStampBegin).
I want to filter where the date is greater than 2014-03-03:
select NON EMPTY [Users].[Trial].Members ON COLUMNS,
NON EMPTY [Problem Areas].[Fridges] ON ROWS from [Search]
WHERE (Filter([Time].[Date].Members, [Time].[Date].CurrentMember > [2014-03-03]))
The result set I get is:
Axis #0:
{[Time].[2014-02-28]}
{[Time].[2014-03-04]}
{[Time].[2014-03-10]}
{[Time].[2014-03-13]}
Axis #1:
{[Users].[ILearnTrial2014]}
Axis #2:
{[Problem Areas].[Fridges]}
Row #0: 161
As you can see, the Time member returned includes 2014-02-28 which is not greater than 2014-03-03.
If I change to 2014-03-03 I get:
Axis #0:
{[Time].[2014-02-28]}
{[Time].[2014-03-10]}
Axis #1:
{[Users].[ILearnTrial2014]}
Axis #2:
{[Problem Areas].[Fridges]}
Row #0: 93
As you can see, there is no relationship between the greater-than date and which [Time].[Date] members are returned.
I also tried [Time].[Hour], and the results show the same inconsistencies.
Can anyone help?
EDIT
I'd like to pass on the solution to my problem in the spirit in which these forums are supposed to work.
Thanks to Luc for his pointless and unhelpful comment yesterday.
For anyone interested, filter on the current member's name as follows: Filter([Time].[Date].Members, [Time].[Date].CurrentMember.Name > "2014-03-03").
I made a neural network that also has backpropagation. It has 5 nodes in the input layer, 6 nodes in the hidden layer, and 1 node in the output layer, with random initial weights, and I use sigmoid as the activation function.
I have two sets of data for input.
For example:
13.5 22.27 0 0 0 desired value=0.02
7 19 4 7 2 desired value=0.03
Now I train the network with 5000 iterations, or training stops if the error
value (desired minus calculated output value) is less than or equal to 0.001.
The output value of the first iteration for each input set is about 60, and it decreases in each iteration.
Now the problem is that the second set of inputs (the one with the desired value of 0.03) causes iteration to stop because of a calculated output value of 3.001, but the first set of inputs has not reached its desired value (0.02); its output is about 0.03.
EDIT:
I used the LMS algorithm and changed the error threshold to 0.00001 to find the correct error value, but now the output value of the last iteration for both the 0.03 and 0.02 desired values is between 0.023 and 0.027, which is still incorrect.
For your error-value stopping threshold, you should take the error over one epoch (the sum of the errors across your whole dataset), and not only the error on one member of your dataset. With this you will have to increase the value of your error threshold, but it will force your neural network to do a good classification on all of your examples and not only on some of them.
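As a minimal sketch of such an epoch-based stopping rule: the predict and train callbacks below are stand-ins for your forward pass and backpropagation update, which are not shown in the question; only the stopping logic is the point.

import java.util.function.BiConsumer;
import java.util.function.Function;

public class EpochStopping {

    // Sketch only: `predict` and `train` are placeholders for the questioner's
    // forward pass and backpropagation step.
    static void trainUntilEpochError(Function<double[], Double> predict,
                                     BiConsumer<double[], Double> train,
                                     double[][] inputs, double[] desired,
                                     double threshold, int maxIterations) {
        for (int iter = 0; iter < maxIterations; iter++) {
            double epochError = 0.0;
            for (int k = 0; k < inputs.length; k++) {
                train.accept(inputs[k], desired[k]);                       // one backprop update
                epochError += Math.abs(desired[k] - predict.apply(inputs[k]));
            }
            // Stop only when the summed error over ALL examples is small enough,
            // not when a single example happens to be close to its target.
            if (epochError <= threshold) {
                break;
            }
        }
    }
}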