NetFlow v9 field ID range

I am confused about the range of field IDs that are supported in NetFlow v9. I am getting conflicting numbers from online sources: 79, 87, 127 and 128.
I got the above information from:
(79) - NetFlow v9 has a set of 79 field types defined, whereas IPFIX has the same 79, for backwards compatibility, but then goes all the way from there up to 238. (https://www.ittsystems.com/netflow-vs-ipfix/)
(87) - https://www.plixer.com/support/netflow-v9/
(127) - Fields 1 to 127 are listed here: https://www.ibm.com/support/knowledgecenter/en/SSCVHB_1.1.0/collector/cnpi_collector_v9_fiels_types.html.
(128) - Values 0-127: NFv9-compatible
https://www.iana.org/assignments/ipfix/ipfix.xhtml
A customer using a Cisco ASA said NetFlow v9 supports field 233 (FW_EVENT) and wanted to check whether our flow format supports that.
https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/sec_data_zbf/configuration/15-mt/sec-data-zbf-15-mt-book/sec-data-zbf-log.pdf
My questions:
As a developer, what range of field IDs (numbers) can I use in NetFlow v9?
Can I use anything above 128? How is Cisco doing this?

Cisco NetFlow v9 and IPFIX are largely the same and differ only in minor details. Both represent the field ID ('Field Type' in NetFlow v9, 'Field Specifier' in IPFIX) with a 16-bit field, so all 65536 16-bit values may be considered valid.
The original NetFlow v9 RFC gave specifications for the first 79 values and states that the Cisco website will provide more details. Quote:
When extensibility is required, the new field types will be added
to the list. The new field types have to be updated on the
Exporter and Collector but the NetFlow export format would remain
unchanged. Refer to the latest documentation at
http://www.cisco.com for the newly updated list.
The Cisco website provides specifications for field IDs up to 128 and then states that field IDs 128 to 32768 match those in the IANA IPFIX field registry.
The IANA IPFIX registry currently lists specifications of approximately 500 fields.
The definition of the IPFIX Field Specifier provides that values with the top 'Enterprise' bit set (values 32768 and greater) are 'enterprise-specific'; the authority for those specifications lies with the organization identified by the Enterprise Number that follows in the Field Specifier.
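To make the ranges concrete, here is a small sketch (Python, purely illustrative) that classifies a field ID along the lines described above, using the IPFIX Field Specifier convention for the enterprise bit:
def classify_field_id(field_id):
    # Classify a 16-bit NetFlow v9 / IPFIX field ID per the ranges described above.
    if field_id & 0x8000:               # top 'Enterprise' bit set (IPFIX convention)
        return "enterprise-specific, field %d of some enterprise" % (field_id & 0x7FFF)
    if field_id < 128:
        return "specified by the original NFv9 / Cisco documentation"
    return "specified in the IANA IPFIX registry"

for fid in (8, 233, 40000):             # 233 is FW_EVENT from the question
    print(fid, "->", classify_field_id(fid))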
From a pragmatic point of view, in the case of Netflow V9, you are unlikely to see values greater than 500 in flow records.
If none of the approximately 500 fields already defined in the IANA IPFIX registry meet your use case, you can submit new field specifications for consideration.

Does Orion-LD support temporal query language from the NGSI-LD specification?

In the "ETSI GS CIM 009 V1.2.1 (2019-10). Context Information Management (CIM); NGSI-LD API" standard there is a chapter "4.11 NGSI-LD Temporal Query language".
The NGSI-LD Temporal Query language shall be supported by implementations. It is intended to define predicates which allow testing whether Temporal Properties of NGSI-LD Entities, Properties and Relationships, are within certain temporal constraints. In particular it can be used to request historic Property values and Relationships that were valid within the specified timeframe.
The following grammar defines the syntax that shall be supported:
timerel = beforeRel / afterRel / betweenRel
beforeRel = "before"
afterRel = "after"
betweenRel = "between"
The points in time for comparison are defined as follows:
• A time element, which shall represent the comparison point for the before and after relation and the starting point for the between relation. It shall be represented as DateTime (mandated by clause 4.6.3).
• An endtime element, which is only used for the between relation and shall represent the end point for comparison. It shall be represented as DateTime (mandated by clause 4.6.3).
And in the "C.5.5 Temporal Query" there is query example
GET /ngsild/v1/*temporal*/entities/?type=Vehicle&q=brandName!=Mercedes&attrs=speed,brandName**&timerel=between
&time=2018-08-01:12:00:00Z&endTime=2018-08-01:13:00:00Z**
I'm trying to run a similar GET request (against Orion-LD with Mintaka) on some data and get a 400 back with "time is unknown" (or "endTime is unknown" if I remove the "time" condition from the string).
If I remove the part of the request that has timerel, time and endTime, then this GET request returns data fine. Therefore I was wondering whether the part of the specification described in "4.11 NGSI-LD Temporal Query language" has been implemented in Orion-LD?
Thank you
Orion-LD does support the Temporal Interfaces of NGSI-LD; it just requires the component "Mintaka" for that: https://github.com/FIWARE/mintaka
Orion-LD needs to store the data in TimescaleDB, from where it can then be retrieved via Mintaka. To enable storage in TimescaleDB, see https://github.com/FIWARE/context.Orion-LD/blob/develop/doc/manuals-ld/troe.md
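As a sketch only, a temporal query could then be issued roughly like this. Two assumptions are baked in: the host/port point at Mintaka's default HTTP listener, and the parameters are named timeAt/endTimeAt, since later revisions of the NGSI-LD spec (which Mintaka follows) renamed time/endTime - which may well be the reason for the "time is unknown" error. Please verify both against your deployment and spec version.
import urllib.parse, urllib.request

# Assumption: later NGSI-LD spec revisions rename time/endTime to timeAt/endTimeAt;
# adjust the parameter names to the spec version your Orion-LD/Mintaka implements.
params = urllib.parse.urlencode({
    "type": "Vehicle",
    "q": "brandName!=Mercedes",
    "attrs": "speed,brandName",
    "timerel": "between",
    "timeAt": "2018-08-01T12:00:00Z",
    "endTimeAt": "2018-08-01T13:00:00Z",
})
# Assumption: Mintaka listening on localhost:8080 - change to match your deployment.
url = "http://localhost:8080/ngsi-ld/v1/temporal/entities/?" + params
with urllib.request.urlopen(url) as resp:
    print(resp.status, resp.read().decode())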

How to increase the length of a property of an attribute of a class in the Eco model designer

I am using MDriven build 7.0.0.11347 for a DDD project and have the model designed in an .ecomdl file.
In this file I have a class Job with WorkDone as one of its properties. The backing SQL table has a WorkDone varchar(255) field. I wanted to increase the length of this field, and when I changed the WorkDone property length from 255 to 2000 it modified the code file, but when the application runs EvolveSchema the evolve process doesn't recognize this change, which leads to no scripts being generated. In the end the database doesn't get updated.
Can you please help me with how to get this change persisted to the database? I thought about increasing it manually in the SQL table, but then for every new database (e.g. a new environment, QA, production) it would have to be done again, which I don't want to do.
In MDriven we don't evolve attribute changes - we only write a warning (255->2000: this change will not be evolved).
You should take steps to alter the column in the database yourself.
We should fix this in the future, but currently it is a limitation.
To expand on my comment: historically VARCHAR could only hold 0-255 characters (this is the limit in older MySQL versions).
Using TEXT will allow for non-binary (character) strings and BLOBs will allow for binary (byte) strings.
Your mileage may vary with this as to what you can do with them, as I am working from MySQL knowledge and knowledge bases (since you don't specify your SQL type).
See below for explanations of the types:
char / varchar
blobs / text

Ethernet/IP - is it possible to read/write specific data from Assembly Object using Get/Set_Attribute_Single and Extended Logical Format?

We have a Windows application that we are using to test against a CLICK PLC and a Schneider PLC.
My question is around reading and writing data to the Assembly Object, and more specifically if there is a way to read/write from/to a particular spot in the byte array defined by the Assembly Object instance?
Reading the entire array is not so much of an issue, but we really don't want to write the entire array if all we need to do is write a single value. If we have to write all the values back, don't we risk overwriting some values that could have changed underneath us? Yes, we could do a read, change the single value, then write, but there is no guarantee that the other values won't have changed between the read and the write.
The PLCs do not support Get/Set_Member, so we cannot use that, which means we are left with Get/Set_Attribute_Single. From looking at the ODVA documentation, Vol 1, Appendix C, section C-1.4.2 it seems to me that I can create an EPATH that should let me do just that by using the Extended Logical Format?
Ex: If I want to read the first element in the byte array, I should be able to construct my padded EPATH using the Extended Logical Format as follows:
EPATH = 20 04 | 24 65 | 30 03 | 3D 01 | 00 00
(Class ID 4, Instance ID 101, Attribute 3, Extended Logical with 16 bit Logical value and Array Index Type, Array Index 0)
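Spelled out byte by byte (just to illustrate how I read Appendix C; the comments reflect my interpretation), that EPATH would be assembled like this:
# Padded EPATH from the example above, per my reading of Vol 1, Appendix C:
epath = bytes([
    0x20, 0x04,  # logical Class segment, 8-bit: Class ID 4 (Assembly Object)
    0x24, 0x65,  # logical Instance segment, 8-bit: Instance ID 0x65 = 101
    0x30, 0x03,  # logical Attribute segment, 8-bit: Attribute 3
    0x3D, 0x01,  # Extended Logical segment, 16-bit value; extended type 1 = Array Index
    0x00, 0x00,  # array index 0 (16-bit, little-endian)
])
print(epath.hex(" "))                # 20 04 24 65 30 03 3d 01 00 00
print("path size in words:", len(epath) // 2)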
If I call Get_Attribute_Single with this EPATH I get a "Path Segment Error" for either PLC.
Is my thinking correct, that I should be able to get an array element?
If so, is my EPATH correct?
If so, could the error be due to either PLC not supporting this?

Will I get clash issues if I compare mongodb Ids case-insensitively?

I have an email token collection in my mongodb database for a meteor app and I stick these email tokens in the reply address of my email (eg. #example.com) so that when I parse it I know what it's relating to.
The problem I have is that the email token uses the default _id algorithm to generate a unique id and that algorithm generates a string that is a mixture of upper case and lower case characters.
However, I've discovered that some email clients lowercase the entire reply address, which means that I can only identify the addresses case-insensitively.
I guess now I have two options.
1) The easiest option would be to match the email tokens with the reply address case insensitively. What would be the chance of clashes in that respect?
2) Make the email token some sort of guid and generate this guid independent of the mongodb ID creation.
Yes, you would get issues. Meteor uses both upper and lower case characters in its 17-character id values. You can have a look at the code in the Random package: https://github.com/meteor/meteor/tree/devel/packages/random.
So it would be possible to get two distinct values of which the differences could only be casing. This could cause mixups if your client's email applications convert the address to lowercase characters.
In your case it is best not to use Random.id(), but rather to make up your own random character generator. Something like this might work:
var lowerCaseId = function() {
  var digits = [];
  // 17 characters drawn from the lower-case 'unmistakable' alphabet only
  for (var i = 0; i < 17; i++) {
    digits[i] = Random.choice("23456789abcdefghijkmnopqrstuvwxyz");
  }
  return digits.join("");
};
Also of note: the Meteor _id value is built up of 'unmistakable characters' - there are no characters that can cause confusion, such as 0 vs O or 1 vs I.
If you don't use it in your _id field, you would have to generate a value with this and check that it does not exist in your database before inserting it, or use a unique index for it.
Also be aware that there will be a significant decrease in entropy, since the number of possible combinations drops with the loss of the upper-case characters. If this is of significance to you, you could increase the number of digits from 17 in the code above.
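To put rough numbers on that loss (a back-of-the-envelope sketch; the 55-character size assumed for Meteor's full mixed-case alphabet is approximate):
import math

lower_only = len("23456789abcdefghijkmnopqrstuvwxyz")  # 33 characters, as in the snippet above
full_alphabet = 55                                     # assumed size of the mixed-case alphabet

bits_lower = 17 * math.log2(lower_only)    # roughly 86 bits
bits_full = 17 * math.log2(full_alphabet)  # roughly 98 bits

# characters needed to recover the original entropy with the smaller alphabet
needed = math.ceil(bits_full / math.log2(lower_only))
print(round(bits_lower), round(bits_full), needed)     # 86 98 20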
Meteor generates its own IDs, which are different from MongoDB ObjectIds. As noted, these would be subject to clashes when converting case or checking case-insensitively. This is kind of interesting, and I'm not sure of the project's reasons for it.
Under the hood, however, Meteor uses the MongoDB native Node.js driver, so the ObjectId creation functions should be available if you want to use them.
https://github.com/mongodb/js-bson/blob/master/lib/bson/objectid.js#L68-L74
The important part is in these calls:
value.toString(16)
So the radix here is set to 16 for hex, i.e. all the characters 0-9a-f.
You can also note in drivers that they will Regex check like this:
^[0-9a-fA-F]{24}$
So it would seem that case sensitivity is not an issue.
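As a quick illustration that a hex ObjectId string survives case folding without any loss (illustrative only; the id below is just an example value):
import re

oid = "507F1F77BCF86CD799439011"                 # example ObjectId, upper-cased
assert re.match(r"^[0-9a-fA-F]{24}$", oid)       # passes the driver's regex above
assert int(oid, 16) == int(oid.lower(), 16)      # lower-casing never changes the value
print(oid.lower())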
Still if you want to use something alternate there is a section in the documentation that might serve as a useful guide.
http://docs.mongodb.org/manual/core/document/#the-id-field

database design for new system but legacy dependency

We are planning a new project (complete relaunch) of a web application in PHP (Symfony 2) and PostgreSQL. Currently we use PHP and MySQL (MyISAM). -> webapp
The current and new webapp depend on another system (.NET) including a database (MS SQL 8 / 2000), which will not be modified (changed, or the databases merged together) anytime soon, because there is a complex workflow with the whole megillah. -> legacy system
BTW: the biggest table has 27 million rows in total.
Most of the data/tables will be transferred multiple times per day from the legacy database to the webapp database. For the new webapp we have already redesigned most of the database schema, so we now have an almost normalised schema (the schema of the legacy database is massively redundant and really messy).
Currently the transfer job tries to insert the data; when the insert fails with a specific error code, we know the row is already there and do an update instead. This is for performance reasons (no select before update).
For the new webapp schema we still want to use the same primary IDs as in the legacy database. But there are some problems, one of them: some tables have primary keys that look like integers, but they aren't. Most of the rows have integers like 123456, but then there are some rows with a character in them, like 123456P32.
Now there are two options for the new schema:
Use string type for PK and risk performance issues
Use integer type for PK and make a conversion
The conversion could look like this (character based):
legacy   new
--------------------------
0        10
1        11
2        12
.        ..
9        19
a        20
b        21
.        ..
y        44
z        45
A        50   (not 46, so that the second digit of the upper-case range starts 'clean' at 50)
B        51
.        ..
Z        75
The legacy PK 123 would be converted into 111213, so the length is double the original. Another example: 123A9 -> 1112135019. Because every character maps to two digits, the conversion can also be reversed.
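For what it's worth, the mapping is easy to express and to invert in code; a small sketch (assuming keys contain only digits and ASCII letters):
def encode_char(c):
    # digits 0-9 -> 10-19, a-z -> 20-45, A-Z -> 50-75 (upper case starts at a 'clean' 50)
    if c.isdigit():
        return 10 + int(c)
    if c.islower():
        return 20 + (ord(c) - ord('a'))
    return 50 + (ord(c) - ord('A'))

def decode_value(n):
    if 10 <= n <= 19:
        return str(n - 10)
    if 20 <= n <= 45:
        return chr(ord('a') + n - 20)
    return chr(ord('A') + n - 50)

def encode_key(key):
    return int("".join("%02d" % encode_char(c) for c in key))

def decode_key(number):
    s = str(number)
    return "".join(decode_value(int(s[i:i+2])) for i in range(0, len(s), 2))

print(encode_key("123"))       # 111213
print(encode_key("123A9"))     # 1112135019
print(decode_key(1112135019))  # 123A9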
My first doubt was that these sparse PKs would bring some performance issues, but since PostgreSQL's default index type is a (self-balancing) B-tree, it should be fine.
What do you think? Have you some experience with similar systems with legacy dependencies?
PostgreSQL performance with text PK isn't that bad — I'd go with it for simplicity.
You didn't tell us how long these keys can be. Using your conversion, an ordinary integer would be enough only for a 4-character key, and a bigint only for 9.
Use CREATE DOMAIN to isolate the proposed data types. Then build and test a prototype. You're lucky; you have no shortage of valid test data.
create domain legacy_key as varchar(15) not null;
create table your_first_table (
  new_key_name legacy_key primary key
  -- other columns go here.
);
To test a second database using integer keys, dump the schema, change that one line (and the name of the database if you want to have them both at the same time), and reload.
create domain legacy_key as bigint not null;
You should think hard about storing the legacy system's primary keys exactly as they are. Nothing to debug - great peace of mind. If you must convert, be careful with values like '1234P45': if that letter happens to be an E or a D, some applications will interpret it as indicating an exponent.
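For instance, a naive numeric parse will silently turn such a key into a number (illustrative):
print(float("1234E45"))   # 1.234e+48 - the key has quietly become a number
# float("1234P45") would instead raise ValueError, i.e. at least fail loudly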
You shouldn't have performance problems due to key length if you're using varchar() keys of 10 or 15 characters, especially with version 9.2. Read the documentation about indexes before you start. PostgreSQL supports more kinds of indexes than most people realize.