Cisco NetFlow with nfdump

I've been using nfdump to read NetFlow data from my router, but my problem is that the flow duration field is measured in milliseconds. I'd like this to be measured in micro- or nanoseconds if possible. Does anyone know enough about nfdump or NetFlow to help me do this? I've already checked the Ubuntu manpage and can't find anything about my problem there.
Thanks in advance,
Shane

NetFlow's precision is limited to milliseconds by spec. There are software NetFlow exporters that report nanosecond precision, but they do so using a custom NetFlow v9 field.

Millisecond precision is the highest precision available in NetFlow v9 export. Moreover, considering the typical use cases of NetFlow, I believe precision finer than milliseconds is not needed.
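To illustrate why the resolution stops at milliseconds: in a classic NetFlow v5 record, the flow's First and Last fields carry the router's SysUptime, in milliseconds, at the first and last packet of the flow, so the duration nfdump can report is a difference of millisecond values. A minimal sketch of that arithmetic in Python (the field names follow the v5 record layout; the sample values are made up):

    # NetFlow v5 record fields relevant to flow timing (sample values).
    sys_uptime_first = 3_600_120   # router SysUptime (ms) at the flow's first packet
    sys_uptime_last = 3_612_345    # router SysUptime (ms) at the flow's last packet

    # The exporter never sends anything finer than these millisecond counters,
    # so the smallest duration step available to nfdump is 1 ms.
    duration_ms = sys_uptime_last - sys_uptime_first
    print(f"flow duration: {duration_ms} ms ({duration_ms / 1000:.3f} s)")

Anything finer would have to come from a custom or enterprise-specific v9/IPFIX field, as noted above.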

Related

Variable Length MIDI Duration Algorithm

I'm trying to compile MIDI files, and I reached an issue with the duration values for track events. I know these values (according to this http://www.ccarh.org/courses/253/handout/vlv/) are variable-length quantities, where each byte is made up of a continuation bit (0 if no duration byte follows, 1 if one does) and 7 bits carrying part of the number.
For example, 128 would be represented as such:
1_0000001 0_0000000
The problem is that I'm having trouble wrapping my head around this concept, and am struggling to come up with an algorithm that can convert a decimal number to this format. I would appreciate it if someone could help me with this. Thanks in advance.
There is no need to re-invent the wheel. The official MIDI specification has example code for dealing with variable length values. You can freely download the specs from the official MIDI.org website.
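If you want something to check your own attempt against, here is a minimal sketch of a VLQ encoder in Python (a direct translation of the scheme described in the question, not the spec's reference code):

    def encode_vlq(value):
        """Encode a non-negative integer as a MIDI variable-length quantity."""
        if value < 0:
            raise ValueError("VLQ values must be non-negative")
        # Split the number into 7-bit groups, least significant group first.
        groups = [value & 0x7F]
        value >>= 7
        while value:
            groups.append(value & 0x7F)
            value >>= 7
        # Emit the groups most significant first, setting the continuation
        # bit (0x80) on every byte except the last one.
        out = bytearray()
        for i, group in enumerate(reversed(groups)):
            out.append((group | 0x80) if i < len(groups) - 1 else group)
        return bytes(out)

    print(encode_vlq(128).hex())  # 8100

For 128 this prints 8100, i.e. the two bytes 1_0000001 0_0000000 from the example in the question.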

AudioKit parameters range

I am new to AudioKit, newish to Swift to be honest.
I cannot find any reference in the documentation to the ranges of many (or any) of the dot-notation parameters in the synths.
For instance, frequencyCutOff is obviously in Hz, so the range is roughly 0-30k or whatever. I'm assuming most parameters have a standard 0-1 range, but there are many others that I would have thought have more clearly defined min and max values.
Envelopes, for instance (ADSR): are they supposed to be in seconds? If so, can I have a release time of 1000 s?! The docs are a little vague to me, or am I missing something. Thanks
I'm not on the AudioKit team, but as I use AudioKit daily, I can answer you.
Yes, you are right: the ranges of many parameters are not clearly specified in the docs.
But you can find them by searching the AudioKit Playgrounds examples.
Most of the AudioKit objects are covered there, and you will find a lot of information about parameter ranges.
Alternatively, you can also have a look at the examples in the AudioKit Xcode project.
About ADSR: yes, the values are in seconds for Attack, Decay, and Release.
Hope it helps!

cpufreq - Is it possible to change "scaling_available_frequencies"?

I am trying to change scaling_available_frequencies because I am working on a low power system.
I did this:
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
600000 1200000
but I want to reduce the frequency to less than 600000.
So I tried the echo command to modify scaling_available_frequencies, but I'm stuck.
Is it possible to change scaling_available_frequencies?
Thank you for your help.
General answer: scaling_available_frequencies is a read-only property in userspace. It lists all the frequencies that the cpufreq kernel driver handles. So if you want more available frequencies, you first need to make sure your hardware is capable of them. If other frequencies are possible in hardware but not yet supported, you need to extend your driver or device tree (it depends on your particular hardware).
BTW, the cpuidle driver might also be helpful if you are trying to reduce CPU power consumption. But again, this heavily depends on your use case and hardware as well.
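If it helps to see what is read-only versus writable, here is a minimal Python sketch that dumps the relevant cpufreq sysfs files (assuming the standard Linux sysfs layout under /sys/devices/system/cpu; paths can differ on embedded or Android platforms):

    from pathlib import Path

    # Assumed standard sysfs location for the first CPU; adjust for your platform.
    CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

    def read(name):
        """Return the stripped contents of a cpufreq sysfs file, or None if absent."""
        path = CPUFREQ / name
        return path.read_text().strip() if path.exists() else None

    # Read-only: what the driver/hardware exposes.
    print("available frequencies:", read("scaling_available_frequencies"))
    print("hardware min/max     :", read("cpuinfo_min_freq"), read("cpuinfo_max_freq"))

    # Writable (as root), but only within the frequencies exposed above.
    print("policy min/max       :", read("scaling_min_freq"), read("scaling_max_freq"))

To go below the lowest listed frequency you really do need driver or device-tree support, as described above.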

Grafana aggregation issue when changing time range (%CPU and more)

I have a % CPU usage Grafana graph.
The problem is that the source data is collected by collectd as jiffies.
I am using the following formula:
collectd|<ServerName>|cpu-*|cpu-idle|value|nonNegativeDerivative()|asPercent(-6000)|offset(100)
The problem is that when I increase the time range (to 30 days, for example), Grafana aggregates the data, and since these are cumulative counter values (not percentages or anything it can simply average), the data in the graph becomes invalid.
Any idea how to create a better formula?
Have you looked at the aggregation plugin (read type) to compute averages?
https://collectd.org/wiki/index.php/Plugin:Aggregation/Config
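For reference, a minimal sketch of that configuration, adapted from the collectd wiki page linked above (option names are taken from that page; check it against your collectd version):

    LoadPlugin aggregation

    <Plugin "aggregation">
      <Aggregation>
        # Average the per-core "cpu" jiffies into one metric per host and state.
        Plugin "cpu"
        Type "cpu"
        GroupBy "Host"
        GroupBy "TypeInstance"
        CalculateAverage true
      </Aggregation>
    </Plugin>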
It is very strange that you have to use the nonNegativeDerivative function for a CPU metric. nonNegativeDerivative should only be used for ever-increasing counters, not a gauge-like metric such as CPU.

Converting from bandwidth to traffic gives different results depending on the operators' position?

This must be a stupid question, but nevertheless I find it curious:
Say I have a steady download of 128 Kbps.
How much disk space is going to be consumed after an hour, in megabytes?
128 x 60 x 60 / 8 / 1024 = 56.25 MB
But
128 x 60 x 60 / 1000 / 8 = 57.6 MB
So what is the correct way to calculate this?
Thanks!
In one calculation you're dividing by 1000, but in another you're dividing by 1024. There shouldn't be any surprise you get different numbers.
Officially, the International Electrotechnical Commission standards body has tried to push "kibibyte" as an alternative to "kilobyte" when you're talking about the 1024-based version. But if you use it, people will laugh at you.
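To make the difference concrete, here is the hour-long 128 Kbps example worked both ways in Python (the only assumption is which prefix convention you pick):

    KBPS = 128          # steady download rate, in kilobits per second
    SECONDS = 60 * 60   # one hour

    total_kilobits = KBPS * SECONDS   # 460,800 kilobits either way

    # Binary convention: 1 MB = 1024 KB.
    mb_binary = total_kilobits / 8 / 1024
    print(f"binary megabytes : {mb_binary:.2f} MB")   # 56.25 MB

    # Decimal convention: 1 megabit = 1000 kilobits.
    mb_decimal = total_kilobits / 1000 / 8
    print(f"decimal megabytes: {mb_decimal:.2f} MB")  # 57.60 MB

Both results are "correct"; they just answer slightly different questions, which is exactly the 1000-versus-1024 point made above.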
Please remember that there is overhead in any transmission. There can be "dropped" packets, etc. Also, there is generally some upstream traffic as your PC acknowledges receipt of packets. Finally, since packets can be received out of order, the packets themselves contain "extra" data to allow the receiver to reconstruct the data in the proper order.
Ok, I found out an official explanation from Symantec on the matter:
http://seer.entsupport.symantec.com/docs/274171.htm
It seems the idea is to convert from bits to bytes as early as possible in the calculation, and then the usual division by 1024 comes into play.
I just hope it's a standard procedure, and not a Symantec-imposed one :).