I am a little bit confused about the LoRaWAN security mechanism.
The NwkSKey is used by both the network server and the end-device to
calculate and verify the MIC (message integrity code) of all data
messages to ensure data integrity.
The AppSKey is an application session key specific for the end-device.
It is used by both the application server and the end-device to
encrypt and decrypt the payload field of application-specific data
messages.
So technically, the network server does not know the AppSKey.
What I don't understand is this: in the OTAA procedure, the network server responds with a Join-accept that is encrypted using the AppKey. So if the network server already has the AppKey, it can generate the AppSKey using this formula:
AppSKey = aes128_encrypt(AppKey, 0x02 | AppNonce | NetID | DevNonce | pad16)
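For concreteness, the 16-byte block fed to AES-128 in that formula can be sketched as follows (a sketch based on the LoRaWAN 1.0.x field widths: 3 bytes each for AppNonce and NetID, 2 bytes for DevNonce; the AES step itself is left as a comment):

```python
def appskey_block(app_nonce: bytes, net_id: bytes, dev_nonce: bytes) -> bytes:
    # Field widths per LoRaWAN 1.0.x: AppNonce (3), NetID (3), DevNonce (2)
    assert len(app_nonce) == 3 and len(net_id) == 3 and len(dev_nonce) == 2
    block = bytes([0x02]) + app_nonce + net_id + dev_nonce  # 0x02 selects AppSKey
    block += bytes(16 - len(block))                         # pad16: zero-pad to 16 bytes
    # AppSKey = AES-128 encryption of this single block under AppKey
    return block
```

The NwkSKey derivation is identical except that the first byte is 0x01 instead of 0x02, which is exactly why a server holding the AppKey can compute both keys.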
In that case, if the network server wanted to, it could decrypt the messages destined for the application.
Is my analysis correct, or am I missing something?
Thanks and best regards!
Got the answer on the TTN forum:
https://www.thethingsnetwork.org/forum/t/lorawan-security-can-the-network-server-generate-the-appskey/8672
Related
Can I randomly generate the LoRa AppKey and AppEUI, or do they have to come from somewhere else?
Example:
AppEUI (8 bytes, example): 424152414E490000 (random value)
AppKey (16 bytes, example): 424152414E492044455349474E000000 (random value)
You can use random AppKeys for your devices. (It is recommended that every device has a different AppKey.) Please note that the AppKey you provision on the LoRaWAN network server must be the same as the one your LoRaWAN end device is personalized with.
The purpose of the AppEUI is to identify the Join Server of your home network. When your device sends a Join Request to the network server, the network server will forward it to the Join Server of your device. Therefore, it is essential that you personalize your end device with a proper AppEUI.
However, in case your network server is not connected to any external Join Server and processes all Join Requests by itself, the AppEUI is not used for anything and you can use a random EUI. It is still important that the AppEUI provisioned on the network server is the same as the one the device is personalized with.
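As a sketch, random values of the right sizes can be generated with any cryptographically secure RNG (Python's `secrets` module here; the variable names are my own):

```python
import secrets

# 16 random bytes -> 32 hex characters, like the example AppKey above
app_key = secrets.token_hex(16).upper()
# 8 random bytes -> 16 hex characters, like the example AppEUI above
app_eui = secrets.token_hex(8).upper()
```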
As the title suggests, I'm trying to send data over to the client (via RDMA) instead of the client sending to the server. All of the examples I can find on the topic are client-to-server. Are there any resources/references I can refer to?
When using the libibverbs library for RDMA verbs, in most cases there is no notion of a client and a server. One establishes a connection between two queue pairs using an out-of-band channel (TCP or RDMA CM) and then configures the QPs with the connection information that was exchanged. Once the connection has been established, both sides can use RDMA one-sided operations if the QP configuration allows it.
An example of such an application is perftest, which exchanges connection information over TCP (or RDMA CM) and then measures the performance of RDMA one-sided and two-sided operations.
With librdmacm and rsockets, there is a concept of a server (listening ID or socket) and a client. Still, after a connection is established, both sides are allowed to use RDMA. You can find examples for both in the rdma-core repository. For example, rping uses RDMA CM and invokes RDMA reads and writes from the server to the client, and riostream uses rsockets and invokes RDMA writes from both the client and the server.
I am trying to implement a non-blocking SSL connection using NIO sockets and SSLEngine. Unfortunately, the message must contain enough data before it can be decrypted by the SSLEngine, and I am wondering: how does a normal SSLSocket in blocking mode know that the HTTPS message has fully arrived?
Is there any flag announcing the end of an HTTPS message/packet?
Thanks
SSL packs the data into records, and each record contains its size at the beginning, so the SSLEngine itself knows how much data it needs. According to http://www.onjava.com/pub/a/onjava/2004/11/03/ssl-nio.html, a call to unwrap returns BUFFER_UNDERFLOW if the record has not been fully read and thus cannot be decrypted; in that case you need to read more data from the connection.
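To make the record framing concrete, here is a minimal sketch (the helper is my own, not part of SSLEngine): a TLS record starts with a 5-byte header whose last two bytes give the payload length, so you can always tell whether the full record has arrived.

```python
import struct

def tls_record_length(buf: bytes):
    """Return the total record size (header + payload) once the 5-byte
    TLS record header is available, else None (BUFFER_UNDERFLOW analogue)."""
    if len(buf) < 5:
        return None
    # content type (1 byte), protocol version (2 bytes), payload length (2 bytes)
    content_type, version, payload_len = struct.unpack(">BHH", buf[:5])
    return 5 + payload_len
```

A blocking SSLSocket simply keeps reading until this many bytes are in hand, which is what you have to do by hand with NIO.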
In a Modbus server implementation, what response should the server send if it receives a request from the client that contains too few (or no) data bytes to interpret correctly?
For example, a Modbus RTU server (with address 0x01) receives the ADU datagram 0x01, 0x01, 0xE0, 0xC1. In this case no physical transport layer errors are detected, the address is correct, the CRC is correct, and the function (Read Coils) is correct and implemented on the server, but the PDU does not contain the Starting Address or Quantity of Inputs fields required to process the request.
Should the server assume that a (very rare) bit error has occurred and not respond at all?
Should the server interpret this as 'a value in the query data field' being not allowed for the server and respond with an ILLEGAL DATA VALUE exception?
Should the server do something completely different?
In my experience, at least with Modbus TCP, devices tend to just ignore malformed requests.
According to the specification (MODBUS APPLICATION PROTOCOL SPECIFICATION V1.1b3), the exception (code 3) is correct. Figure 9, the MODBUS transaction state diagram, clearly indicates the exception response to an incorrectly formed message.
I suspect the common response of silently dropping the message is indistinguishable from a transmission error; an exception response, by contrast, induces the implementor of the faulty client to correct their implementation.
Your suggestion that a communication error caused this is possible, but only if the underlying link does not detect missing bytes. Any byte other than 0xFF will introduce a start bit on a serial channel, and a missing byte in the TCP/UDP implementations is even less likely.
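A sketch of that behavior for Read Coils (the function name and the all-coils-off stub are my own; the exception framing follows the spec: function code with the high bit set, then exception code 0x03):

```python
def read_coils_response(pdu: bytes) -> bytes:
    FUNC = 0x01  # Read Coils
    # A well-formed Read Coils PDU is exactly 5 bytes:
    # function (1) + starting address (2) + quantity (2)
    if len(pdu) != 5 or pdu[0] != FUNC:
        return bytes([FUNC | 0x80, 0x03])  # exception: ILLEGAL DATA VALUE
    quantity = int.from_bytes(pdu[3:5], "big")
    if not 1 <= quantity <= 2000:          # quantity range from the spec
        return bytes([FUNC | 0x80, 0x03])
    nbytes = (quantity + 7) // 8
    return bytes([FUNC, nbytes]) + bytes(nbytes)  # stub: all coils read as 0
```

Fed the PDU from the question (just 0x01 once the address and CRC are stripped from the ADU), this returns 0x81, 0x03.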
I am new to asynchronous socket connections. Can you please explain how this technology works?
There's an existing application (the server) which requires socket connections to transmit data back and forth. I already created my application (.NET), but the server application doesn't seem to understand the XML data that I am sending. My documentation gives me two ports: one to send and another one to receive.
I need to be sure that I understand how this works.
I got the IP addresses and also the two Ports to be used.
A socket is the most "raw" way to send byte-level TCP and UDP traffic across a network.
For example, your browser uses a TCP socket connection to connect to the StackOverflow web server on port 80. Your browser and the server exchange commands and data according to an agreed-on structure/protocol (in this case, HTTP). An asynchronous socket is no different from a synchronous socket except that it does not block the thread that's using it.
This is really not the most ideal way to work (check and see if your server/vendor application supports SOAP/Web Services, etc), but if this is really the only way, there could be a number of reasons why it's failing. To name a few...
Not actually getting connected or sending data. Run a test using WinsockTool (http://www.isatools.org/tools/winsocktool.msi) and simulate your client first to make sure the server is working as expected.
Encoding incorrect - You're sending raw bytes across the network... Make sure you're using the correct encoding to convert your XML into bytes (ASCII, UTF8, etc).
Buffer length - your sending buffer (the amount of data you can transmit in one shot) may be too small, or the server may expect content of a certain length, and your XML could be getting truncated.
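Your app is .NET, but the encoding point above is language-agnostic. A minimal sketch in Python (the host, port, and helper name are placeholders):

```python
import socket

def send_xml(host: str, port: int, xml: str) -> None:
    # Encode explicitly: the server must agree on the encoding (point 2 above)
    data = xml.encode("utf-8")
    with socket.create_connection((host, port)) as s:
        s.sendall(data)  # sendall loops until every byte is written (point 3)
```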
Let's break a misconception: sockets are FULL-DUPLEX. You connect to a server using one port, and then you can send AND receive data through the same socket; there is no need for two port numbers. (Actually, there is a local port assigned for receiving data, but it is (1) assigned automatically when the socket is created, unless you say otherwise, and (2) of no use in the calls that receive data.)
So you tell us that your documentation gives you two port numbers. I assume the "server" is an already existing in-house application and you are trying to talk to it. If the doc lists two ports, then you will need two sockets: one for sending and another one for receiving. I would also suggest you first use a synchronous socket before trying the async way: a synchronous socket is less error-prone for a first test.
(By the way, let's break another misconception: if well coded, once a server listens on a port, it can accept any number of connections through that same port number; there is no need to open two listening ports to accept two connections. Sorry for the digression, but I've seen those two errors committed often enough that it gives me an urge to kill.)
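Both points can be demonstrated in a few lines (Python here for brevity, though the same holds for .NET sockets; the loopback address and echo behavior are just for the demo):

```python
import socket
import threading

# One listening socket on ONE port can accept any number of connections,
# and each accepted connection is full-duplex (send AND recv on one socket).
srv = socket.create_server(("127.0.0.1", 0))
port = srv.getsockname()[1]

def serve():
    for _ in range(2):                 # accept two clients on the same port
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))  # receive, then send, on the same socket
        conn.close()

t = threading.Thread(target=serve)
t.start()

replies = []
for msg in (b"first", b"second"):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(msg)                 # send ...
        replies.append(c.recv(1024))   # ... and receive through the same socket
t.join()
srv.close()
```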