SNMP server: how to decode the request packet? - sockets

I am writing an SNMP server on port 161 that just exposes five stars: I can get the star count with the snmpget command-line tool and also set it with snmpset. When I receive the request packet and convert it to a string, I cannot decode it into anything readable. The SNMP packet format starts with an integer for the version, then an octet string, and so on...
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class MyServer {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(161);
        byte[] buffer = new byte[64]; // note: real SNMP requests can be larger than 64 bytes
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        while (true) {
            socket.receive(packet);
            byte[] data = packet.getData();
            // the packet is BER-encoded binary, so rendering it as UTF-8 text produces garbage
            String s = new String(data, "UTF-8");
            System.out.println(s + "\t" + packet.getAddress() + "\t" + packet.getPort());
        }
    }
}
The output is something like this, but stranger:
0'public?<\

SNMP is not as simple as other protocols. In fact, it is one of the more complex protocols to implement and use.
Constructing a SNMP message requires some knowledge of the data types specified by ASN.1. ASN.1 primitive data types include Integer, Octet (byte, character) String, Null, Boolean and Object Identifier. The Object Identifier type is central to the SNMP message, because a field of the Object Identifier type holds the OID used to address a parameter in the SNMP agent. To expand the programmer's ability to organize data, ASN.1 allows primitive data types to be grouped together into complex data types.
ASN.1 offers several complex data types necessary for building SNMP messages. One complex data type is the Sequence. A Sequence is simply a list of data fields. Each field in a Sequence can have a different data type. ASN.1 also defines the SNMP PDU (Protocol Data Unit) data types, which are complex data types specific to SNMP. The PDU field contains the body of an SNMP message. Two PDU data types available are GetRequest and SetRequest, which hold all the necessary data to get and set parameters, respectively. Ultimately the SNMP message is a structure built entirely from fields of ASN.1 data types. However, specifying the correct data type is not enough. If the SNMP message is a Sequence of fields with varying data types, how can a recipient know where one field ends and another begins, or the data type of each field? Avoid these problems by conforming to the Basic Encoding Rules (BER).
The most fundamental rule states that each field is encoded in three parts: Type, Length, and Value (TLV). Type specifies the data type of the field using a single byte identifier. Length specifies the length in bytes of the following Value section, and Value is the actual value communicated (the number, string, OID, etc).
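To make the TLV rule concrete, here is a minimal decoding sketch in Java (the class and method names are made up for illustration, it only handles single-byte "short form" lengths, and it is no substitute for a real BER decoder). It also explains your garbled output: the leading '0' is the 0x30 SEQUENCE tag, and the readable "public" is the community Octet String that follows the version Integer.

import java.util.Arrays;

public class TlvSketch {
    // Decode one Type-Length-Value field starting at offset, print it,
    // and return the offset of the next field.
    static int dumpField(byte[] data, int offset) {
        int type = data[offset] & 0xFF;        // Type: one identifier byte (0x30 SEQUENCE,
                                               // 0x02 INTEGER, 0x04 OCTET STRING, 0x06 OID, ...)
        int length = data[offset + 1] & 0xFF;  // Length: size of the Value section in bytes
        byte[] value = Arrays.copyOfRange(data, offset + 2, offset + 2 + length);
        System.out.printf("type=0x%02X length=%d value=%s%n", type, length, Arrays.toString(value));
        return offset + 2 + length;
    }

    public static void main(String[] args) {
        // Hand-built fragment of an SNMPv1 message, for illustration only:
        // SEQUENCE { INTEGER version = 0, OCTET STRING community = "public", ... PDU follows }
        byte[] msg = {
            0x30, 0x0B,                               // SEQUENCE, 11 bytes of content in this fragment
            0x02, 0x01, 0x00,                         // INTEGER 0 (SNMP version 1)
            0x04, 0x06, 'p', 'u', 'b', 'l', 'i', 'c', // OCTET STRING "public" (community)
        };
        dumpField(msg, 0);             // the outer SEQUENCE wrapper
        int next = dumpField(msg, 2);  // the version INTEGER
        dumpField(msg, next);          // the community OCTET STRING
    }
}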
So please do not reinvent the wheel: there are several very well written libraries that implement SNMPv1, SNMPv2c and SNMPv3 in full accordance with the standards, such as NET-SNMP, SNMP4J and others. Use them.

Related

GRPC test client GUI that supports representing a bytes type as a hex string?

MongoDB's ObjectID type is a 12 byte array. When you view the database, you can see it displayed as: ObjectId("6000e9720C683f1b8e638a49").
We also want to share this value with SQL server and pass it into a GRPC request.
When the same value is stored in MS SQL Server as a binary(12) column, it is displayed as: 0x6000E9720C683F1B8E638A49. It's simple enough to convert this representation to the Mongo representation.
However, when trying to pass it via GRPC as a bytes type, BloomRPC requires that we represent it in the format: "id": [96,0,233,114,12,104,63,27,142,99,138,73]
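(For reference, those decimal values are just the same 12 bytes printed in base 10; a throwaway Java sketch of that conversion, with a made-up class name, would be something like the following.)

import java.util.StringJoiner;

public class HexToByteList {
    public static void main(String[] args) {
        // The ObjectId from above, without the 0x prefix
        String hex = "6000E9720C683F1B8E638A49";
        StringJoiner json = new StringJoiner(",", "[", "]");
        for (int i = 0; i < hex.length(); i += 2) {
            int b = Integer.parseInt(hex.substring(i, i + 2), 16); // two hex digits = one byte
            json.add(Integer.toString(b));                         // unsigned decimal, as BloomRPC wants it
        }
        System.out.println(json); // prints [96,0,233,114,12,104,63,27,142,99,138,73]
    }
}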
So I'm looking for a GRPC test client GUI application to replace BloomRPC that will support a hex string format similar to MongoDB or SQL server to represent the underlying byte array. Anyone have a line on something like this that could work?
We could just represent it as a string in the proto, but my personal opinion is that this should be unnecessary: it would require our connected services to convert bytes->string->bytes on every gRPC call. The other two tools seem happy to keep a byte array in the background and represent it as a string in the front end, so if we could just get our last tool to behave the same, that would be great.

Specifying data structure in a REST query

I'm trying to write a basic REST-style query for a database in Go. I've never worked with REST before, so I'm a bit unclear as to how deeply the principle about being able to handle any kind of data is supposed to go. Does it violate the principle to make assumptions about the data/data types I'm getting from the database in the code for the client? Should I write the former or the latter struct to unpack the JSON data into?
type datum struct {
    ID      int16   `json:"src_id"`
    Time    int64   `json:"timestamp"`
    Lat     float64 `json:"latitude"`
    Long    float64 `json:"longitude"`
    Thermo  float64 `json:"ir_thermo_temperature_filtered"`
    Humid   float64 `json:"relative_humidity"`
    AirTemp float64 `json:"air_temp"`
    Wind    float64 `json:"wind_speed_world_filtered"`
}
type datum struct {
    ID      interface{} `json:"src_id"`
    Time    interface{} `json:"timestamp"`
    Lat     interface{} `json:"latitude"`
    Long    interface{} `json:"longitude"`
    Thermo  interface{} `json:"ir_thermo_temperature_filtered"`
    Humid   interface{} `json:"relative_humidity"`
    AirTemp interface{} `json:"air_temp"`
    Wind    interface{} `json:"wind_speed_world_filtered"`
}
My thinking is that maybe the former struct violates REST principles because it makes assumptions about the data types that you'd be receiving from the server, so it isn't properly uniform. I'm hoping I'm not correct, but I can see how that conclusion could come from a strict reading of the REST principles.
I believe you are reading REST principles incorrectly.
REST is an architecture, not a protocol with strict rules. The key points are that it is stateless and that it represents the underlying resources similar to how the web operates.
REST describes resources (datum, in your case), how you can access and modify those resources, and how those resources should describe what other resources can be accessed relative to that resource. This is similar to how web browsing works: each page has a unique URI, and the page may contain data (fields of datum), and links to other pages you can reach from that page.
So in your case, the fields of datum are analogous to the contents of a web page. If you have other resources reachable from the datum, then you have to provide those as URIs to the caller, and the caller can follow them. That doesn't mean that you have to give up type safety and deal with all kinds of data. Again, this is not a protocol. If the information submitted is not in the expected format, you should return an error. Since your communication format is JSON, your data types are limited to JSON's types: strings, numbers, booleans, arrays, objects, and null. You are expecting an integer for ID, and if the client sends you a string (even if it is something like "123"), it is an error. Use the first struct, and if unmarshaling the input returns an error, return that to the caller.

utf-8 gets stored differently on server side (JAVA)

I'm trying to figure out the answer to one of my other questions, but maybe this will help me.
When I persist an entity to the server, the byte[] property holds different information than what I persisted. I'm persisting in UTF-8 to the server.
An example.
{"name":"asd","image":[91,111,98,106,101,99,116,32,65,114,114,97,121,66,117,102,102,101,114,93],"description":"asd"}
This is the payload I send to the server.
This is what the server has
{"id":2,"name":"asd","description":"asd","image":"W29iamVjdCBBcnJheUJ1ZmZlcl0="}
As you can see, the image byte array is different.
What I'm trying to do is get the image bytes saved on the server and display them on the front end, but I don't know how to get the original bytes.
No, you are wrong: both methods stored the ASCII string [object ArrayBuffer].
You are confusing the data with its representation. The data is the same, but the two examples represent the binary data in two different ways:
the first as an array of bytes (decimal representation), the second in a classic representation for binary data: Base64 (you can recognize it by the trailing = character).
So you just have two representations of the same data; the data itself is stored in the same manner.
You may need to specify how the binary data should be rendered in string form (as in your example), i.e. which representation to use.
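A quick way to convince yourself of this, using nothing but the JDK, is the minimal sketch below (both values come straight from the payloads above). It also suggests the original image bytes were never sent in the first place: "[object ArrayBuffer]" looks like the string form of a JavaScript ArrayBuffer rather than its contents.

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;

public class SameBytes {
    public static void main(String[] args) {
        // The "image" array sent in the request: ASCII codes of "[object ArrayBuffer]"
        byte[] fromJsonArray = {91, 111, 98, 106, 101, 99, 116, 32, 65, 114, 114, 97,
                                121, 66, 117, 102, 102, 101, 114, 93};
        // The "image" value the server returned: the same bytes, Base64-encoded
        byte[] fromBase64 = Base64.getDecoder().decode("W29iamVjdCBBcnJheUJ1ZmZlcl0=");

        System.out.println(Arrays.equals(fromJsonArray, fromBase64));           // true
        System.out.println(new String(fromBase64, StandardCharsets.US_ASCII));  // [object ArrayBuffer]
    }
}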

GRPC design of a current REST API

I'm looking at the possibility of porting a lot of our REST API services to a gRPC schema, but here is the thing.
We currently make heavy use of one method of the API that calls multiple PostgreSQL functions, based on the name of the function received as a parameter and the input given as the body of the request, i.e. api.com/functions/exec/{name}. The functions defined in the DB receive and return JSON.
So, if I understood correctly, a gRPC method can only have a static data structure for both its request and response types. How can I make it flexible, given that the data structure to be returned and sent as input depends on the DB function being called?
The structure returned by the API is something like
{
    "code": 200,
    "data": {
        "some": "value"
    },
    "status": "Correct..blabla"
}
The structure of the data sent to the API depends on the mode to be used: if it's encrypted, the request body will be a binary string
a37163b25ed49d6f04249681b9145f63f64d84a486e234fa397f134f8b25fd62f1e755e40c09da09f9900beea4b51fc638e7db8730945bd319600943e01d10f2512fa6ab335fb65de32fc2ee0a2150f7987ae0999ea5d8d09e1125c533e7803ba9118ee5aff124282149792a33dce992385969b9df2417613cd2b039bf6056998469dfb015eade9585cb160275ec83de06d027180818652c60c09ada1c98d6e9ed11df9da737408e14161ae00aaf9d4568a015dc6b6d267f1ee04638dd60e4007dc543524b83ca6b2752c5f21e9dfff3b15e6e55db8b9be9e8c07d64ccd1b3d44ce48cc3e49daee5ae1da5186d9ef6994ccf41dc86a4e289fdbab8ca4a397f929445c42f40268ebb2c3d8bcb80f9e844dba55020da665bce887bd237ae2699876e12cc0043b91578f9df2d76391d50fbf8d19b6969
If it's not encrypted, then it's just plain JSON:
{
    "one": "parameter"
}
One possible solution I can think of is to always use a bytes type, both for the request and the response; then the only thing I have to do is convert JSON to a binary string and vice versa, right?
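The conversion itself is only a couple of lines with the plain JDK. A rough sketch (the class name is made up, and HexFormat needs Java 17+):

import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

public class PayloadBytes {
    public static void main(String[] args) {
        // Unencrypted mode: the JSON body travels as its UTF-8 bytes
        String json = "{\"one\": \"parameter\"}";
        byte[] wire = json.getBytes(StandardCharsets.UTF_8);     // goes into the proto bytes field
        String back = new String(wire, StandardCharsets.UTF_8);  // back to JSON on the other side

        // Encrypted mode: the hex string is already just bytes written out in hex
        byte[] encrypted = HexFormat.of().parseHex("a37163b25ed49d6f"); // truncated here for brevity
        System.out.println(back + " / " + encrypted.length + " encrypted bytes");
    }
}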
I'm open to suggestions.
Depending on your needs and performance requirements, going the raw bytes route might be sensible, if you really don't have any other uses for protobuf fields. If you do, you might want to define a message type that supports encrypted and unencrypted message fields like: https://github.com/grpc/grpc/blob/master/src/proto/grpc/reflection/v1alpha/reflection.proto#L77

What is the best way to publish and consume different type of messages?

Kafka 0.8
I want to publish/consume byte[] objects, Java bean objects, serializable objects and much more.
What is the best way to define a publisher and a consumer for this type of scenario?
When I consume a message from the consumer iterator, I do not know what type of the message it is.
Can anybody point me to a guide on how to design such scenarios?
I enforce a single schema or object type per Kafka Topic. That way when you receive messages you know exactly what you are getting.
At a minimum, you should decide whether a given topic is going to hold binary or string data, and depending on that, how it will be further encoded.
For example, you could have a topic named Schema that contains JSON-encoded objects stored as strings.
If you use JSON and a loosely-typed language like JavaScript, it could be tempting to store different objects with different schemas in the same topic. With JavaScript, you can just call JSON.parse(...), take a peek at the resulting object, and figure out what you want to do with it.
But you can't do that in a strictly-typed language like Scala. The Scala JSON parsers generally want you to parse the JSON into an already defined Scala type, usually a case class. They do not work with this model.
One solution is to keep the one schema / one topic rule, but cheat a little: wrap an object in an object. A typical example would be an Action object where you have a header that describes the action, and a payload object with a schema dependent on the action type listed in the header. Imagine this pseudo-schema:
{name: "Action", fields: [
{name: "actionType", type: "string"},
{name: "actionObject", type: "string"}
]}
This way, even in a strongly-typed language, you can do something like the following (again, this is pseudo-code):
action = JSONParser[Action].parse(msg)
switch(action.actionType) {
    case "foo" => var foo = JSONParser[Foo].parse(action.actionObject)
    case "bar" => var bar = JSONParser[Bar].parse(action.actionObject)
}
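In Java, for instance, the same envelope idea might look roughly like the sketch below (using Jackson; Foo and Bar stand in for your own payload classes, and the field names come from the pseudo-schema above):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ActionConsumer {
    private static final ObjectMapper mapper = new ObjectMapper();

    // Stand-ins for your own payload types
    static class Foo { public String someField; }
    static class Bar { public int otherField; }

    static void handle(String msg) throws Exception {
        // Decode only the envelope; the payload stays an unparsed string until we know its type
        JsonNode action = mapper.readTree(msg);
        String actionType = action.get("actionType").asText();
        String actionObject = action.get("actionObject").asText();

        switch (actionType) {
            case "foo":
                Foo foo = mapper.readValue(actionObject, Foo.class);
                // ... handle foo
                break;
            case "bar":
                Bar bar = mapper.readValue(actionObject, Bar.class);
                // ... handle bar
                break;
            default:
                break; // not an action this consumer cares about; the payload is never parsed
        }
    }
}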
One of the neat things about this approach is that if you have a consumer that's waiting for only a specific action.actionType, and is just going to ignore all the others, it's pretty lightweight for it to decode just the header and put off decoding action.actionObject until when and if it is needed.
So far this has all been about string-encoded data. If you want to work with binary data, of course you can wrap it in JSON as well, or any of a number of string-based encodings like XML. But there are a number of binary-encoding systems out there, too, like Thrift and Avro. In fact, the pseudo-schema above is based on Avro. You can even do cool things in Avro like schema evolution, which amongst other things provides a very slick way to handle the above Action use case -- instead of wrapping an object in an object, you can define a schema that is a subset of other schemas and decode just the fields you want, in this case just the action.actionType field. Here is a really excellent description of schema evolution.
In a nutshell, what I recommend is:
Settle on a schema-based encoding system (be it JSON, XML, Avro, whatever)
Enforce a one-schema-per-topic rule