IBM Watson IoT Platform - Hex data - ibm-cloud

My question is about connecting the LORIOT network server to the IBM Watson IoT Platform.
I have managed to connect the LORIOT backend with the Watson IoT Platform and can see some data coming through. However, the data is in hexadecimal format. Any idea how I can convert this hex data into something human readable?

If the data coming through to the Watson IoT Platform is in JSON format but contains properties whose values are in hex, you can use the Data Management capabilities to convert the data in these events to Device State. The expression language used in the property mapping expressions includes an $unpack function that converts strings and hex octets to numeric values. Used in conjunction with the $substring function, it lets you extract specific substrings from a larger hex value and convert them to numbers.
As an example, say you had the following inbound event:
{
  "propertyA": "valueA",
  "propertyB": "valueB",
  "data": "3b45940201000000010e4601"
}
... you could map values to properties on the device state using mapping expressions similar to the following:
$unpack($substring($event.data, 0, 8), "l32f")
$unpack($substring($event.data, 8, 2), "l8")
$unpack($substring($event.data, 10, 8), "l32")
The corresponding output of the three expressions above is:
2.1786381830505485E-37
1
16777216
The Data Management capabilities are documented here:
https://console.bluemix.net/docs/services/IoT/GA_information_management/ga_im_device_twin.html#device_twins
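For reference, the same conversions can be reproduced outside the platform. The following is a minimal Java sketch (class and helper names are my own) that decodes the three fields of the example payload as little-endian values, mirroring the $unpack expressions above:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class HexUnpackDemo {
    // Convert a hex string such as "3b459402" into its raw bytes.
    static byte[] hexToBytes(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    public static void main(String[] args) {
        String data = "3b45940201000000010e4601";

        // First 8 hex characters as a little-endian 32-bit float ("l32f").
        float f = ByteBuffer.wrap(hexToBytes(data.substring(0, 8)))
                .order(ByteOrder.LITTLE_ENDIAN).getFloat();

        // Next 2 hex characters as an unsigned 8-bit integer ("l8").
        int b = hexToBytes(data.substring(8, 10))[0] & 0xFF;

        // Following 8 hex characters as a little-endian 32-bit integer ("l32").
        int i = ByteBuffer.wrap(hexToBytes(data.substring(10, 18)))
                .order(ByteOrder.LITTLE_ENDIAN).getInt();

        System.out.println(f); // approximately 2.1786E-37
        System.out.println(b); // 1
        System.out.println(i); // 16777216
    }
}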

Related

gRPC test client GUI that supports representing a bytes type as a hex string?

MongoDB's ObjectID type is a 12-byte array. When you view the database, you can see it displayed as: ObjectId("6000e9720C683f1b8e638a49").
We also want to share this value with SQL Server and pass it into a gRPC request.
When the same value is stored in MS SQL Server as a binary(12) column, it is displayed as: 0x6000E9720C683F1B8E638A49. It's simple enough to convert this representation to the Mongo representation.
However, when trying to pass it via gRPC as a bytes type, BloomRPC requires that we represent it in the format: "id": [96,0,233,114,12,104,63,27,142,99,138,73]
So I'm looking for a gRPC test client GUI application to replace BloomRPC that will support a hex string format similar to MongoDB or SQL Server to represent the underlying byte array. Anyone have a line on something like this that could work?
We could just represent it as a string in the proto, but my personal opinion is that this should be unnecessary. It would require our connected services to convert bytes->string->bytes on every gRPC call. The other 2 tools seem happy having a byte array in the background and representing it as a string in the front end, so if we could just get our last tool to behave the same, that would be great.
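Not an answer to the tool question, but for anyone bridging the representations in code, here is a small Java sketch (hypothetical class and helper names) that turns the SQL Server hex literal into the unsigned byte list BloomRPC expects:
public class HexIdDemo {
    // Parse the SQL Server style hex literal (optionally prefixed with "0x")
    // into the raw 12-byte array that the proto bytes field carries.
    static byte[] hexToBytes(String hex) {
        String h = hex.startsWith("0x") ? hex.substring(2) : hex;
        byte[] out = new byte[h.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(h.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] id = hexToBytes("0x6000E9720C683F1B8E638A49");
        // Print the bytes as unsigned values, matching the BloomRPC format.
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < id.length; i++) {
            sb.append(id[i] & 0xFF).append(i < id.length - 1 ? "," : "]");
        }
        System.out.println(sb); // [96,0,233,114,12,104,63,27,142,99,138,73]
    }
}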

UTF-8 gets stored differently on server side (Java)

I'm trying to figure out the answer to one of my other questions, but maybe this will help me.
When I persist an entity to the server, the byte[] property holds different information than what I persisted. I'm persisting in UTF-8 to
the server.
An example.
{"name":"asd","image":[91,111,98,106,101,99,116,32,65,114,114,97,121,66,117,102,102,101,114,93],"description":"asd"}
This is the payload I send to the server.
This is what the server has
{"id":2,"name":"asd","description":"asd","image":"W29iamVjdCBBcnJheUJ1ZmZlcl0="}
As you can see, the image byte array is different.
What I'm trying to do is get the image bytes saved on the server and display them on the front end. But I don't know how to get the original bytes.
No, you are wrong. Both methods store the ASCII string [object ArrayBuffer].
You are confusing the data with its representation. The data is the same, but the two examples represent the binary data in two different ways:
The first is an array of bytes (decimal representation); the second is a classic representation for binary data: Base64 (you can recognize it by the trailing = character).
So you just have two different representations of the same data; the data itself is stored in the same manner.
You may need to specify how you want the binary data returned in string form (as in your example), and therefore which representation to use.
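To confirm that both representations hold the same bytes, a quick Java check (sketch only) decodes each and compares them:
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;

public class RepresentationDemo {
    public static void main(String[] args) {
        // The decimal byte array from the request payload.
        byte[] fromArray = {91, 111, 98, 106, 101, 99, 116, 32, 65, 114, 114, 97,
                121, 66, 117, 102, 102, 101, 114, 93};

        // The Base64 string the server stored.
        byte[] fromBase64 = Base64.getDecoder().decode("W29iamVjdCBBcnJheUJ1ZmZlcl0=");

        // Both decode to the same ASCII text: "[object ArrayBuffer]"
        System.out.println(new String(fromArray, StandardCharsets.US_ASCII));
        System.out.println(new String(fromBase64, StandardCharsets.US_ASCII));
        System.out.println(Arrays.equals(fromArray, fromBase64)); // true
    }
}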

Format sensor data as it comes into IBM Watson

I am still new to IBM Watson. Is there any way that I can format the sensor data that comes into IBM Watson? The issue that I am facing right now is that the timestamp bunches the date and the time together, and that poses problems when I try to create certain data visualizations in data analytics and visualization software. It would make things easier for me to split the date and the time out of the timestamp. I am aware that the data is in JSON format.
In addition, I am using Node-RED; do let me know if the formatting of the data should be done in Node-RED.
Here is my sample sensor data:
{
  "_id": "04691370-387e-11e8-8cd5-8b3f61628d0d",
  "_rev": "1-a4328ecd41d03b8e4ac86de06baf03d2",
  "deviceType": "RaspberryPi",
  "deviceId": "9074bd",
  "eventType": "event",
  "format": "json",
  "timestamp": "2018-04-05T11:04:12.583+08:00",
  "data": {
    "d": {
      "temperature": 19.5,
      "humidity": 44,
      "heatIndex": 18.65
    }
  }
}
Things that I am using:
Raspberry Pi 3 Model B
Raspbian for Robots (Dexter Industries)
GrovePi+
GrovePi DHT 11, light sensor, sound sensor, UV sensor
Node-RED with all the GrovePi+ nodes, including nodes for IBM Watson
IBM Watson, IBM Watson IoT
Cloudant NoSQL DB
CData ODBC Driver for Cloudant
Microsoft Power BI (subject to change, depends on which software is easier to adopt)
This is just JSON data; there is nothing to stop you adding 2 new fields to the object (e.g. date and time).
It's probably simplest to do this in Node-RED with a function node containing something like the following:
var timestamp = msg.payload.timestamp;
// Everything before the 'T' separator is the date, everything after it is the time.
msg.payload.date = timestamp.substring(0, timestamp.indexOf('T'));
msg.payload.time = timestamp.substring(timestamp.indexOf('T') + 1);
return msg;

SNMP server: how to decode the request packet?

I am writing an SNMP server on port 161 that just exposes five stars; I can read the star count with the snmpget command line tool and also set it with snmpset. When I receive a request packet and turn it into a string, I cannot decode it into anything readable. The SNMP packet format starts with an integer for the version, then an octet string, and so on...
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class MyServer {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(161);
        byte[] buffer = new byte[64];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        while (true) {
            socket.receive(packet);
            byte[] data = packet.getData();
            // The BER-encoded request is binary, not UTF-8 text, hence the garbled output below.
            String s = new String(data, "UTF-8");
            System.out.println(s + "\t" + packet.getAddress() + "\t" + packet.getPort());
        }
    }
}
The output is something like this, but stranger:
0'public?<\
SNMP is not as simple as other protocols. In fact, it is one of the most complex protocols to implement and use.
Constructing a SNMP message requires some knowledge of the data types specified by ASN.1. ASN.1 primitive data types include Integer, Octet (byte, character) String, Null, Boolean and Object Identifier. The Object Identifier type is central to the SNMP message, because a field of the Object Identifier type holds the OID used to address a parameter in the SNMP agent. To expand the programmer's ability to organize data, ASN.1 allows primitive data types to be grouped together into complex data types.
ASN.1 offers several complex data types necessary for building SNMP messages. One complex data type is the Sequence. A Sequence is simply a list of data fields. Each field in a Sequence can have a different data type. ASN.1 also defines the SNMP PDU (Protocol Data Unit) data types, which are complex data types specific to SNMP. The PDU field contains the body of an SNMP message. Two PDU data types available are GetRequest and SetRequest, which hold all the necessary data to get and set parameters, respectively. Ultimately the SNMP message is a structure built entirely from fields of ASN.1 data types. However, specifying the correct data type is not enough. If the SNMP message is a Sequence of fields with varying data types, how can a recipient know where one field ends and another begins, or the data type of each field? You avoid these problems by conforming to the Basic Encoding Rules (BER).
The most fundamental rule states that each field is encoded in three parts: Type, Length, and Value (TLV). Type specifies the data type of the field using a single byte identifier. Length specifies the length in bytes of the following Value section, and Value is the actual value communicated (the number, string, OID, etc).
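As a rough illustration of that TLV structure (a sketch only: it assumes short-form lengths under 128 bytes and is not a real BER decoder), you could walk the received bytes instead of converting them to a string, e.g. dumpTlv(packet.getData(), 0, packet.getLength()):
public class TlvDump {
    // Walk the Type-Length-Value fields of a received SNMP datagram.
    // Assumes short-form lengths (< 128 bytes), which is enough to see the
    // structure of a typical snmpget request.
    static void dumpTlv(byte[] data, int offset, int end) {
        int pos = offset;
        while (pos + 2 <= end) {
            int type = data[pos] & 0xFF;       // 0x30 = SEQUENCE, 0x02 = INTEGER, 0x04 = OCTET STRING, 0xA0 = GetRequest PDU
            int length = data[pos + 1] & 0xFF; // short-form length only
            System.out.printf("type=0x%02X length=%d%n", type, length);
            if ((type & 0x20) != 0) {
                pos += 2;              // constructed type: descend into its contents
            } else {
                pos += 2 + length;     // primitive type: skip over its value bytes
            }
        }
    }
}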
So please do not reinvent the wheel: there are several very well written libraries that implement SNMPv1, SNMPv2c and SNMPv3 in full accordance with the standards, such as NET-SNMP, SNMP4J and others. Use them.

What is the proper way to communicate data with units in a REST payload?

I am working on a REST API that reports numbers for different items, each in a unit; for example, storage capacity. I have a media type which defines this payload, but I feel that the REST API should not be in the business of formatting the numbers, and they should all be in a baseUnit, say bits. It's up to the UI to format that as it wishes.
My question is: should the data include a "hint" as to the units, say:
{ "name": "capacity", "number": 33333, "units": "bits" }
on every piece of data to make it explicit, or is that something that has to be inferred through documentation?
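Either way, keeping the payload in a single base unit and doing the formatting client-side is straightforward. A minimal Java sketch (hypothetical names, assuming the explicit-units shape above) of the client-side conversion:
public class CapacityFormatter {
    // Hypothetical model of one payload entry: { "name": ..., "number": ..., "units": "bits" }.
    record Measurement(String name, long number, String units) {}

    // Client-side formatting: convert the base unit (bits) into something human readable.
    static String format(Measurement m) {
        if (!"bits".equals(m.units())) {
            throw new IllegalArgumentException("Unexpected base unit: " + m.units());
        }
        double gigabytes = m.number() / 8.0 / 1_000_000_000.0;
        return String.format("%s: %.2f GB", m.name(), gigabytes);
    }

    public static void main(String[] args) {
        System.out.println(format(new Measurement("capacity", 33333, "bits")));
        // capacity: 0.00 GB  (the example value is tiny; real capacities would be larger)
    }
}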