Problems with sending UDP packets (milight, limitlessled) - swift

I'm trying to write a menu bar app to control my lights from my Mac.
I'm using the MiLight system (LimitlessLED, EasyBulb, ...).
They have an open protocol where you can send commands via UDP.
I'm able to control my lights with the python-limitless library in Python, so I know the networking details such as IP and port are right.
So I think I'm doing something wrong with this UDP stuff, which I've never worked with before.
I'm trying to use the SwiftSocket library to send my commands, but nothing happens; I've been trying for two days.
Here is what I'm trying:
let host = "192.168.2.102"
let port = 5987
var client: UDPClient!

@IBAction func lightOn(_ sender: NSButton) {
    let bridgeon: [UInt8] = [0x31, 0x00, 0x00, 0x00, 0x03, 0x03, 0x00, 0x00, 0x00, 0x01]
    let rgbwon: [UInt8] = [0x31, 0x00, 0x00, 0x07, 0x03, 0x01, 0x00, 0x00, 0x00, 0x00]
    print("Licht an")
    print(client.send(data: bridgeon))
    sleep(1)
    print(client.send(data: rgbwon))
    sleep(1)
}

@IBAction func lightOff(_ sender: NSButton) {
    print("Licht aus")
}

override func viewDidLoad() {
    super.viewDidLoad()
    client = UDPClient(address: host, port: Int32(port))
}
When I compare this with the complexity of the Python library, I'm sure I'm forgetting something important. I haven't worked with networking before, so please be lenient with me.
Thanks and greetings.

I'm a bit late, but I hope this helps:
Before sending your light-on request, you first have to send a request to get the WiFi bridge session. You also need to compute what MiLight calls the "checksum" based on your request.
Also make sure you know what kind of bulb you have: is it a WW bulb or a CW one? I was stuck for several days because I was sending the wrong request.
I made an implementation. It's in PHP, but you can do the same thing in Objective-C.
Check it out: https://github.com/winosaure/MilightAPI
UPDATE:
According to the LimitlessLED "documentation" (http://www.limitlessled.com/dev/), this is how a request is composed:
UDP Hex Send Format: 80 00 00 00 11 {WifiBridgeSessionID1}
{WifiBridgeSessionID2} 00 {SequenceNumber} 00 {COMMAND} {ZONE NUMBER}
00 {Checksum}
This is why you must get the WiFi bridge session first, and then calculate the checksum.
Let me take one example: how to turn on the light.
The documentation says:
31 00 00 08 04 01 00 00 00 = Light ON
31 00 00 08 04 01 00 00 00 refers to {COMMAND} above.
So far, the full request must be:
80 00 00 00 11 {WifiBridgeSessionID1} {WifiBridgeSessionID2} 00 {SequenceNumber} 00 31 00 00 08 04 01 00 00 00 {ZONE NUMBER} 00 {Checksum}
Now let's get the WiFi bridge session. The documentation says:
to get the WifiBridgeSessionID1 and WifiBridgeSessionID2 send this
command UDP.
SEND hex bytes: 20 00 00 00 16 02 62 3A D5 ED A3 01 AE 08
2D 46 61 41 A7 F6 DC AF (D3 E6) 00 00 1E <-- Send this to the ip
address of the wifi bridge v6
That's why I'm doing this:
private function getWifiBridgeSession()
{
    $command = array(
        0x20, 0x00, 0x00,
        0x00, 0x16, 0x02,
        0x62, 0x3A, 0xD5,
        0xED, 0xA3, 0x01,
        0xAE, 0x08, 0x2D,
        0x46, 0x61, 0x41,
        0xA7, 0xF6, 0xDC,
        0xAF, 0xD3, 0xE6,
        0x00, 0x00, 0x1E);

    return $this->sendCommand($command);
}
Once you send a UDP request with this command, you will get a response.
WifiBridgeSessionID1 is the 20th byte of the response and WifiBridgeSessionID2 is the 21st (don't forget that we count from 0, so you read something like response[19] and response[20]).
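For the Swift code in the question, this first step might look like the following minimal sketch. It assumes SwiftSocket's UDPClient as used in the question, and that its recv(_:) returns the received bytes together with the sender's address and port; check this against the version of the library you have installed.

// Request a WiFi bridge session (sketch, using SwiftSocket's UDPClient as in the question).
let getSession: [UInt8] = [0x20, 0x00, 0x00, 0x00, 0x16, 0x02, 0x62, 0x3A,
                           0xD5, 0xED, 0xA3, 0x01, 0xAE, 0x08, 0x2D, 0x46,
                           0x61, 0x41, 0xA7, 0xF6, 0xDC, 0xAF, 0xD3, 0xE6,
                           0x00, 0x00, 0x1E]

switch client.send(data: getSession) {
case .success:
    // Assumption: recv(_:) -> ([UInt8]?, String, Int) = (data, sender address, sender port).
    let (reply, _, _) = client.recv(64)
    if let reply = reply, reply.count >= 21 {
        let sessionId1 = reply[19]   // 20th byte of the response (counting from 0)
        let sessionId2 = reply[20]   // 21st byte of the response
        print("Session IDs:", sessionId1, sessionId2)
    }
case .failure(let error):
    print("UDP send failed:", error)
}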
Let's say that after sending this request I get this response:
28 00 00 00 11 00 02 AC CF 23 F5 7A D4 69 F0 3C 23 00 01 05 00
So my WifiBridgeSessionID1 is 0x05 and my WifiBridgeSessionID2 is 0x00.
So now our request to turn on the light is:
80 00 00 00 11 0x05 0x00 00 {SequenceNumber} 00 31 00 00 08 04 01 00 00 00 {ZONE NUMBER} 00 {Checksum}
Now we need to fill in {SequenceNumber}, {ZONE NUMBER} and {Checksum}.
What is a "Sequence Number"?
The doc says:
Sequential byte just helps with keeping commands in the correct order,
and it helps to ignore duplicate packets already received. increment
this byte for each new command by 1.
So put whatever you want and increase this value by 1 for each request. (Personally, I always send 0x01.)
"Zone number" refers to the zone your light is synchronized to.
Valid list for {ZONE NUMBER}: 0x00 = All, 0x01 = Zone1, 0x02 = Zone2, 0x03 = Zone3, 0x04 = Zone4
Let's say our zone is 0x01.
Almost done. Now we just need to calculate the "checksum".
The doc says:
take the 9 bytes of the command, and 1 byte of the zone, and add the 0
= the checksum = (checksum & 0xFF) e.g. SUM((31 00 00 08 04 01 00 00 00)(command) 01(zone) 00) = 3F(chksum)
So the checksum for our command is:
31+00+00+08+04+01+00+00+00+01+00 = 0x3F
I add all the bytes of the command (turn on), plus 0x01 for the zone, plus 0x00, and keep the low byte.
So now we have everything, and the full request to turn on the light is:
80 00 00 00 11 05 00 00 01 00 31 00 00 08 04 01 00 00 00 01 00 3F
That's it.
Note: do not just copy and paste this request. I calculated these values for the example; the actual request to turn on the light will change each time, based on the session IDs and the checksum you compute.
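To connect this back to the Swift code in the question, here is a minimal sketch that assembles the full packet from the parts described above. The checksum is just the low byte of the sum of the command bytes, the zone and the trailing 0x00; the session IDs below are the example values from this answer and must come from your own bridge-session request.

// Build a full v6 bridge packet: header, session IDs, sequence number,
// command, zone, 0x00 and checksum (low byte of command + zone + 0x00).
func buildPacket(command: [UInt8], zone: UInt8,
                 sessionId1: UInt8, sessionId2: UInt8,
                 sequence: UInt8) -> [UInt8] {
    let checksum = UInt8((command.reduce(0) { $0 + Int($1) } + Int(zone)) & 0xFF)
    return [0x80, 0x00, 0x00, 0x00, 0x11, sessionId1, sessionId2, 0x00, sequence, 0x00]
         + command + [zone, 0x00, checksum]
}

let lightOnCW: [UInt8] = [0x31, 0x00, 0x00, 0x08, 0x04, 0x01, 0x00, 0x00, 0x00]
let packet = buildPacket(command: lightOnCW, zone: 0x01,
                         sessionId1: 0x05, sessionId2: 0x00, sequence: 0x01)
// packet == 80 00 00 00 11 05 00 00 01 00 31 00 00 08 04 01 00 00 00 01 00 3F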
You may have noticed that I wrote "00 31 00 00 08 04 01 00 00 00" for the "turn on" command; this will only work for a CW bulb. The doc does not specify that...
The same command for a WW bulb is 00 31 00 00 07 03 01 00 00 00.
So the full request for a WW bulb would be:
80 00 00 00 11 05 00 00 01 00 31 00 00 07 03 01 00 00 00 01 00 3D
(Note that the checksum changes too: 31+07+03+01+01 = 0x3D.)
What is the difference between CW and WW bulbs?
I can tell you that CW refers to "Cold White" and WW to "Warm White". But as I am not an expert in LED bulbs I cannot explain more, and I don't know why the two need different requests either.
Anyway, I hope that was clear enough.
Let me know how things work out.


How to retrieve details of the console port used by BIOS using efivars?

As part of installing Linux, I would like to set the console device properties (for example, console=ttyS0,115200n1) via the kernel cmdline on an Intel-based platform.
There is no VGA console, only serial consoles via a COM interface.
On these systems the BIOS already has the required settings to interact over the appropriate serial port.
I see that EFI has the variables ConIn, ConOut and ConErr, which I can see under /sys/firmware/efi, but I am unable to decode their contents.
Is it possible to identify which COM port is being used by the BIOS by examining the EFI variables?
Here is an example of the EFI variable on my box:
root@linux:~# efivar -p -n 8be4df61-93ca-11d2-aa0d-00e098032b8c-ConOut
GUID: 8be4df61-93ca-11d2-aa0d-00e098032b8c
Name: "ConOut"
Attributes:
Non-Volatile
Boot Service Access
Runtime Service Access
Value:
00000000 02 01 0c 00 d0 41 03 0a 00 00 00 00 01 01 06 00 |.....A..........|
00000010 00 1a 03 0e 13 00 00 00 00 00 00 c2 01 00 00 00 |................|
00000020 00 00 08 01 01 03 0a 18 00 9d 9a 49 37 2f 54 89 |...........I7/T.|
00000030 4c a0 26 35 da 14 20 94 e4 01 00 00 00 03 0a 14 |L.&5.. .........|
00000040 00 53 47 c1 e0 be f9 d2 11 9a 0c 00 90 27 3f c1 |.SG..........'?.|
00000050 4d 7f 01 04 00 02 01 0c 00 d0 41 03 0a 00 00 00 |M.........A.....|
00000060 00 01 01 06 00 00 1f 02 01 0c 00 d0 41 01 05 00 |............A...|
00000070 00 00 00 03 0e 13 00 00 00 00 00 00 c2 01 00 00 |................|
00000080 00 00 00 08 01 01 03 0a 18 00 9d 9a 49 37 2f 54 |............I7/T|
00000090 89 4c a0 26 35 da 14 20 94 e4 01 00 00 00 03 0a |.L.&5.. ........|
000000a0 14 00 53 47 c1 e0 be f9 d2 11 9a 0c 00 90 27 3f |..SG..........'?|
000000b0 c1 4d 7f ff 04 00 |.M.... |
root@linux:~#
The contents of the ConOut variable are described in the UEFI specification - current version (2.8B):
3.3 - globally defined variables:
| Name | Attribute | Description |
|---------|------------|------------------------------------------------|
| ConOut | NV, BS, RT | The device path of the default output console. |
For information about device paths, we have:
10 - Protocols — Device Path Protocol:
Apart from the initial description of device paths, table 44 shows you the Generic Device Path Node structure, from which we can start decoding the contents of the variable.
The type of the first node is 0x02, telling us this node describes an ACPI device path, 0x000c bytes long. Now jump down to 10.3.3 - ACPI Device Path and table 52, which tells us 1) that this is the right table (SubType 0x01) and 2) that the default ConOut has a _HID of 0x0a0341d0 and a _UID of 0.
The next node has a type of 0x01 - a Hardware Device Path, described further in 10.3.2, in this case table 46 (SubType is 0x01) for a PCI device path.
The next node describes a Messaging Device Path of type UART and so on...
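As an illustration, here is a small sketch that walks the generic node headers (Type, SubType and a 16-bit little-endian Length) of a device path blob such as the ConOut value above. It only prints the node boundaries; decoding each node still follows the spec sections mentioned.

// Walk the generic device path node headers of a ConOut-style byte blob.
// Each node starts with Type (1 byte), SubType (1 byte), Length (2 bytes, little-endian).
// Type 0x7F / SubType 0xFF marks the end of the entire device path.
func walkDevicePath(_ bytes: [UInt8]) {
    var offset = 0
    while offset + 4 <= bytes.count {
        let type = bytes[offset]
        let subType = bytes[offset + 1]
        let length = Int(bytes[offset + 2]) | (Int(bytes[offset + 3]) << 8)
        guard length >= 4, offset + length <= bytes.count else { break }
        print("node at \(offset): type 0x\(String(type, radix: 16)), " +
              "subtype 0x\(String(subType, radix: 16)), length \(length)")
        if type == 0x7F && subType == 0xFF { break }
        offset += length
    }
}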
Still, this only tells you what UEFI considers to be its default console; SPCR is what an operating system is supposed to be looking at for serial consoles. Unfortunately, on x86 the Linux kernel ignores SPCR except for earlycon. I guess this is what you're trying to work around. It might be good to start a discussion on the kernel development lists about whether to fix that and have x86 work like ARM64.
In my case, since I know that the console port is a serial I/O port, I could get the details as follows:
a. Get hold of the /sys/firmware/acpi/tables/SPCR table.
b. Read the Base Address at offsets 44-52. Actually, the last two bytes of the address suffice.
Reference:
a. https://learn.microsoft.com/en-us/windows-hardware/drivers/serports/serial-port-console-redirection-table states:
Base Address (byte length 12, byte offset 40):
The base address of the Serial Port register set, described using the ACPI Generic Address Structure.
0 = console redirection disabled
Note:
COM1 (0x3F8) would be:
Integer Form: 0x 01 08 00 00 00000000000003F8
Viewed in Memory: 0x01080000F803000000000000
COM2 (0x2F8) would be:
Integer Form: 0x 01 08 00 00 00000000000002F8
Viewed in Memory: 0x01080000F802000000000000
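As a sketch of step (b) above, this reads the 8-byte base address at table offset 44 (the Generic Address Structure starts at offset 40, and its Address field begins 4 bytes in). The path is the standard sysfs location; reading it normally requires root.

import Foundation

// Read the SPCR Base Address: 8 little-endian bytes at table offset 44.
func spcrBaseAddress(path: String = "/sys/firmware/acpi/tables/SPCR") -> UInt64? {
    guard let data = FileManager.default.contents(atPath: path),
          data.count >= 52 else { return nil }
    var address: UInt64 = 0
    for i in 0..<8 {
        address |= UInt64(data[44 + i]) << (8 * i)
    }
    return address   // e.g. 0x3F8 for COM1, 0x2F8 for COM2
}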

How to read data from a SIM card

I use a card reader to communicate with a SIM card.
For example, I need to get the IMSI from the SIM card.
To do this I send some commands (SELECT 3F00/7F20/6F07):
A0 A4 00 00 02 3F 00
A0 A4 00 00 02 7F 20
A0 A4 00 00 02 6F 07
and then I send a READ BINARY command:
A0 B0 00 00 09
and after that I receive 90 00 --> OK, normal ending of the command.
But where is my IMSI stored? How can I get at the data that was read by the "A0 B0 00 00 09" command?
If I try the "A0 C0 00 00 00" command (GET RESPONSE), I get an error.
You don't need to send a GET RESPONSE command ("A0 C0 00 00 00") after READ BINARY.
The 9 bytes of data are returned directly in the reply to your READ BINARY command "A0 B0 00 00 09", right before the 90 00 status word.
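For completeness, here is a hedged sketch of decoding those 9 bytes, assuming the standard EF_IMSI coding: a length byte, then a parity/type nibble followed by the IMSI digits in swapped BCD, with 0xF nibbles as padding.

// Decode the raw 9 bytes of EF_IMSI (6F07) into the IMSI digit string.
// Byte 0 is the length; the low nibble of byte 1 is the parity/type nibble,
// its high nibble is the first digit; the remaining digits are swapped BCD.
func decodeIMSI(_ raw: [UInt8]) -> String {
    guard raw.count >= 2 else { return "" }
    var digits: [UInt8] = [raw[1] >> 4]          // first digit (skip the parity nibble)
    for byte in raw.dropFirst(2) {
        digits.append(byte & 0x0F)               // low nibble first
        digits.append(byte >> 4)                 // then high nibble
    }
    return digits.filter { $0 <= 9 }.map { String($0) }.joined()
}

// Example (made-up value): 08 29 43 51 90 99 99 99 99 -> "234150999999999"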

Akka Stream TLS Server Logging & Troubleshooting

I'm using Akka Streams to create a TCP server using akka.stream.scaladsl.TLS with client certificate authentication. I'm working on creating an echo server as a first proof of concept.
I'm new to Scala/Akka/Akka Streams, so I created a similar server and TCP client in Python to give me tooling for testing my work in Scala. The Python server/client are functional using client certificate authentication. When connecting to the server, the client takes the following steps:
Creates and configures an SSLContext
Creates a socket using socket.create_connection()
Wraps the socket with the SSLContext using SSLContext.wrap_socket(). This creates the peer connection
Once connected, prints the server certificate
Infinite loop asking for input and sending each input to the server
I believe I have the server completed using Akka Streams and akka.stream.scaladsl.TLS, but when I attempt to connect using my Python client, the client never gets past wrapping the socket with context.wrap_socket(sock, server_hostname=host). The server successfully binds the TCP connection and creates the corresponding IncomingConnection object. The client and server also never time out (the client just sits awaiting the handshake?).
My biggest problem is that I see no information from my TLS BidiFlow, akka.stream.scaladsl.TLS. I have no idea what step in the handshake I'm stuck at, which makes troubleshooting very difficult.
Is there any way to output some information throughout the TLS handshake process? It seems as though all of the functionality is encapsulated and I don't know if there's any way to troubleshoot.
Otherwise, I'm attempting to troubleshoot with openssl and get the following:
bash$ openssl s_client -connect myserver.com:443 -state -debug
CONNECTED(00000003)
SSL_connect:before/connect initialization
write to 0x7fd914100080 [0x7fd915001000] (318 bytes => 318 (0x13E))
0000 - 16 03 01 01 39 01 00 01-35 03 03 e3 ff 5d fb 26 ....9...5....].&
0010 - 15 e3 32 89 37 e2 cb 95-f5 00 bd df 13 3d ae a6 ..2.7........=..
0020 - d7 37 db 4e 80 19 63 ad-d6 6c f1 00 00 98 cc 14 .7.N..c..l......
0030 - cc 13 cc 15 c0 30 c0 2c-c0 28 c0 24 c0 14 c0 0a .....0.,.(.$....
0040 - 00 a3 00 9f 00 6b 00 6a-00 39 00 38 ff 85 00 c4 .....k.j.9.8....
0050 - 00 c3 00 88 00 87 00 81-c0 32 c0 2e c0 2a c0 26 .........2...*.&
0060 - c0 0f c0 05 00 9d 00 3d-00 35 00 c0 00 84 c0 2f .......=.5...../
0070 - c0 2b c0 27 c0 23 c0 13-c0 09 00 a2 00 9e 00 67 .+.'.#.........g
0080 - 00 40 00 33 00 32 00 be-00 bd 00 45 00 44 c0 31 .#.3.2.....E.D.1
0090 - c0 2d c0 29 c0 25 c0 0e-c0 04 00 9c 00 3c 00 2f .-.).%.......<./
00a0 - 00 ba 00 41 c0 11 c0 07-c0 0c c0 02 00 05 00 04 ...A............
00b0 - c0 12 c0 08 00 16 00 13-c0 0d c0 03 00 0a 00 15 ................
00c0 - 00 12 00 09 00 ff 01 00-00 74 00 0b 00 04 03 00 .........t......
00d0 - 01 02 00 0a 00 3a 00 38-00 0e 00 0d 00 19 00 1c .....:.8........
00e0 - 00 0b 00 0c 00 1b 00 18-00 09 00 0a 00 1a 00 16 ................
00f0 - 00 17 00 08 00 06 00 07-00 14 00 15 00 04 00 05 ................
0100 - 00 12 00 13 00 01 00 02-00 03 00 0f 00 10 00 11 ................
0110 - 00 23 00 00 00 0d 00 26-00 24 06 01 06 02 06 03 .#.....&.$......
0120 - ef ef 05 01 05 02 05 03-04 01 04 02 04 03 ee ee ................
0130 - ed ed 03 01 03 02 03 03-02 01 02 02 02 03 ..............
SSL_connect:unknown state
At which point openssl just hangs.
The Akka TLS support uses the built-in Java TLS support behind the scenes, so to get debug output for TLS you'll have to enable debugging for that. It can be done by passing a system property to the JVM when starting it, like so: -Djavax.net.debug=all
Ultimately I found that the ssl-config logging is very sparse and wasn't helpful in resolving my issue. It does provide some debugging information, but not much. Much better for debugging the TLS handshake is the -Djavax.net.debug=all flag when running the JVM. However, even this gives mixed results. For example, the error I received was that the server couldn't find a matching cipher suite. Eventually I resolved my issue by realizing that, when creating the input streams for my keystore/truststore, I was specifying the path incorrectly.
Note for anyone coming across this: if you specify your keystore and truststore incorrectly the resulting input streams will be null and SSLContext.init will happily use these and provide an error that is unrelated to the keystore/truststore! This was very difficult to troubleshoot due to the incorrect error handling in SSLContext.

Mifare Desfire Wrapped Mode: How to calculate CMAC?

When using Desfire native wrapped APDUs to communicate with the card, which parts of the command and response must be used to calculate CMAC?
After successful authentication, I have the following session key:
Session Key: 7CCEBF73356F21C9191E87472F9D0EA2
Then when I send a GetKeyVersion command, card returns the following CMAC which I'm trying to verify:
<< 90 64 00 00 01 00 00
>> 00 3376289145DA8C27 9100
I have implemented CMAC algorithm according to "NIST special publication 800-38B" and made sure it is correct. But I don't know which parts of command and response APDUs must be used to calculate CMAC.
I am using TDES, so the MAC is 8 bytes.
I have been looking at the exact same issue for the last few days and I think I can at least give you some pointers. Getting everything 'just so' has taken some time and the documentation from NXP (assuming you have access) is a little difficult to interpret in some cases.
So, as you probably know, you need to calculate the CMAC (and update your init vec) on transmit as well as receive. You need to save the CMAC each time you calculate it as the init vec for the next crypto operation (whether CMAC or encryption etc).
When calculating the CMAC for your example the data to feed into your CMAC algorithm is the INS byte (0x64) and the command data (0x00). Of course this will be padded etc as specified by CMAC. Note, however, that you do not calculate the CMAC across the entire APDU wrapping (i.e. 90 64 00 00 01 00 00) just the INS byte and data payload is used.
On receive you need to take the data (0x00) and the second status byte (also 0x00) and calculate the CMAC over that. It's not important in this example but order is important here. You use the response body (excluding the CMAC) then SW2.
Note that only half of the CMAC is actually sent - CMAC should yield 16 bytes and the card is sending the first 8 bytes.
There were a few other things that held me up including:
I was calculating the session key incorrectly - it is worth double checking this if things are not coming out as you'd expect
I interpreted the documentation to say that the entire APDU structure is used to calculate the CMAC (it is hard to read it any other way, to be honest)
I am still working on calculating the response from a Write Data command correctly. The command succeeds but I can't validate the CMAC. I do know that Write Data is not padded with CMAC padding but just zeros - not yet sure what else I've missed.
Finally, here is a real example from communicating with a card from my logs:
Authentication is complete (AES) and the session key is determined to be F92E48F9A6C34722A90EA29CFA0C3D12; init vec is zeros
I'm going to send the Get Key Version command (as in your example) so I calculate CMAC over 6400 and get 1200551CA7E2F49514A1324B7E3428F1 (which is now my init vec for the next calculation)
Send 90640000010000 to the card and receive 00C929939C467434A8 (status is 9100).
Calculate CMAC over 00 00 and get C929939C467434A8A29AB2C40B977B83 (and update init vec for next calculation)
The first half of our CMAC from step #4 matches the 8 bytes received from the card in step #3.
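As a minimal Swift sketch of just the byte selection described above (the cmac function here is a placeholder for your own SP 800-38B implementation, keyed with the session key and chained through the running init vector):

// Placeholder for your own NIST SP 800-38B CMAC, keyed with the session key
// and chained via the running init vector.
func cmac(_ message: [UInt8]) -> [UInt8] { /* your implementation */ return [] }

// Command APDU 90 64 00 00 01 00 00 (GetKeyVersion): only the INS byte and
// the command data are MACed, not the whole APDU wrapping.
let ins: UInt8 = 0x64
let commandData: [UInt8] = [0x00]
let txCmac = cmac([ins] + commandData)      // also becomes the next init vector

// Response 00 C929939C467434A8 9100: MAC the response data (without the
// 8 CMAC bytes) followed by SW2.
let responseData: [UInt8] = [0x00]
let sw2: UInt8 = 0x00
let rxCmac = cmac(responseData + [sw2])     // also becomes the next init vector

// The card only sends the first 8 of the 16 CMAC bytes:
let expectedFromCard = Array(rxCmac.prefix(8))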
Sorry for my English - it's not my native language (I'm Russian).
Check the MSB (bit 7) of array[0] first, then shift the whole array one bit to the left, and XOR the last byte with 0x87 if that MSB was 1.
(Alternatively: save the MSB of array[0] and, after shifting, put that bit back in at the end, as the LSB of array[15].)
The reference is here:
https://www.nxp.com/docs/en/application-note/AN10922.pdf
Try it this way:
Zeros <- 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SessionKey <- 00 01 02 03 E3 27 64 0C 0C 0D 0E 0F 5C 5D B9 D5
Data <- 6F 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00
First you have to encrypt 16 zero bytes with the SessionKey:
enc_aes_128_ecb(Zeros);
and you get EncryptedData:
EncryptedData <- 3D 08 A2 49 D9 71 58 EA 75 73 18 F2 FA 6A 27 AC
Check whether bit 7 (the MSB) of EncryptedData[0] is 1; if so, set i to true:
bool i = false;
if (EncryptedData[0] & 0x80) {
    i = true;
}
Then shift the whole of EncryptedData one bit to the left:
ShiftLeft(EncryptedData, 16);
Now, if i == true, XOR the last byte [15] with 0x87:
if (i) {
    EncryptedData[15] ^= 0x87;
}
7A 11 44 93 B2 E2 B1 D4 EA E6 31 E5 F4 D4 4F 58
Save this as KEY_1.
Now repeat the same steps on KEY_1. Check whether bit 7 (the MSB) of KEY_1[0] is 1:
i = false;
if (KEY_1[0] & 0x80) {
    i = true;
}
Then shift the whole of KEY_1 one bit to the left:
ShiftLeft(KEY_1, 16);
Again, if i == true, XOR the last byte [15] with 0x87:
if (i) {
    KEY_1[15] ^= 0x87;
}
F4 22 89 27 65 C5 63 A9 D5 CC 63 CB E9 A8 9E B0
Save this as KEY_2.
Now take our Data (6F 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00).
As Michael says, the command is padded with 0x80 0x00 ... to fill the block.
XOR the Data with KEY_2 if the command was padded, or with KEY_1 if it was not.
If you have more than 16 bytes (32, for example), you only XOR the last 16 bytes.
Then encrypt it:
enc_aes_128_ecb(Data);
Now you have the CMAC.
CD C0 52 62 6D F6 60 CA 9B C1 09 FF EF 64 1A E3
Zeros <- 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SessionKey <- 00 01 02 03 E3 27 64 0C 0C 0D 0E 0F 5C 5D B9 D5
Key_1 <- 7A 11 44 93 B2 E2 B1 D4 EA E6 31 E5 F4 D4 4F 58
Key_2 <- F4 22 89 27 65 C5 63 A9 D5 CC 63 CB E9 A8 9E B0
Data <- 6F 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00
CMAC <- CD C0 52 62 6D F6 60 CA 9B C1 09 FF EF 64 1A E3
C/C++ function:
// Shift the dataLen-byte array one bit to the left, carrying each byte's
// MSB into the LSB of the byte before it.
void ShiftLeft(byte *data, byte dataLen){
    for (int n = 0; n < dataLen - 1; n++) {
        data[n] = ((data[n] << 1) | ((data[n+1] >> 7) & 0x01));
    }
    data[dataLen - 1] <<= 1;
}
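For reference, the same shift-and-conditional-XOR subkey step as a small Swift sketch, assuming you already have the 16-byte AES-128-ECB encryption of the zero block:

// Derive the CMAC subkeys from E_K(0^128), the AES-ECB encryption of 16 zero bytes.
func shiftLeftOneBit(_ data: [UInt8]) -> [UInt8] {
    var out = [UInt8](repeating: 0, count: data.count)
    for i in 0..<data.count {
        let carry: UInt8 = (i + 1 < data.count) ? (data[i + 1] >> 7) & 0x01 : 0
        out[i] = (data[i] << 1) | carry
    }
    return out
}

func deriveSubkey(from block: [UInt8]) -> [UInt8] {
    let msbWasSet = (block[0] & 0x80) != 0
    var shifted = shiftLeftOneBit(block)
    if msbWasSet { shifted[shifted.count - 1] ^= 0x87 }   // Rb for a 128-bit block
    return shifted
}

let encryptedZeros: [UInt8] = [0x3D, 0x08, 0xA2, 0x49, 0xD9, 0x71, 0x58, 0xEA,
                               0x75, 0x73, 0x18, 0xF2, 0xFA, 0x6A, 0x27, 0xAC]
let key1 = deriveSubkey(from: encryptedZeros)   // 7A 11 44 93 ...
let key2 = deriveSubkey(from: key1)             // F4 22 89 27 ...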
Have a nice day :)

Webbit websocket ws:// connection works but wss:// handshake fails silently without any error?

I upgraded Webbit to 0.4.6 to use the new SSL support, but immediately realized that all wss:// handshakes fail silently and I don't have any errors to show for it. Chrome only reports a "success" for a response with no HTTP code or any other headers. I checked the server logs and it doesn't even register an "open" event.
The catch here is that any ws:// connection works great. So what could the possible problems be, and how can I get an error out of it? Could it be something wrong with the Java keystore and the SSL handshake?
Edit
I was able to find an OpenSSL command for a test handshake. Here's the output:
SSL_connect:before/connect initialization
SSL_connect:SSLv2/v3 write client hello A
SSL_connect:error in SSLv2/v3 read server hello A
Edit 2
I realized I could debug this further
CONNECTED(0000016C)
SSL_connect:before/connect initialization
write to 0x1f57750 [0x1f6a730] (210 bytes => 210 (0xD2))
0000 - 16 03 01 00 cd 01 00 00-c9 03 01 4f 6b 8d 68 63 ...........Ok.hc
0010 - 99 06 08 30 93 2a 42 88-f8 f1 c4 c5 dc 89 71 0b ...0.*B.......q.
0020 - b6 04 42 4e 11 79 b4 76-6c f7 66 00 00 5c c0 14 ..BN.y.vl.f..\..
0030 - c0 0a 00 39 00 38 00 88-00 87 c0 0f c0 05 00 35 ...9.8.........5
0040 - 00 84 c0 12 c0 08 00 16-00 13 c0 0d c0 03 00 0a ................
0050 - c0 13 c0 09 00 33 00 32-00 9a 00 99 00 45 00 44 .....3.2.....E.D
0060 - c0 0e c0 04 00 2f 00 96-00 41 00 07 c0 11 c0 07 ...../...A......
0070 - c0 0c c0 02 00 05 00 04-00 15 00 12 00 09 00 14 ................
0080 - 00 11 00 08 00 06 00 03-00 ff 01 00 00 44 00 0b .............D..
0090 - 00 04 03 00 01 02 00 0a-00 34 00 32 00 01 00 02 .........4.2....
00a0 - 00 03 00 04 00 05 00 06-00 07 00 08 00 09 00 0a ................
00b0 - 00 0b 00 0c 00 0d 00 0e-00 0f 00 10 00 11 00 12 ................
00c0 - 00 13 00 14 00 15 00 16-00 17 00 18 00 19 00 23 ...............#
00d2 - <SPACES/NULS>
SSL_connect:SSLv2/v3 write client hello A
read from 0x1f57750 [0x1f6fc90] (7 bytes => 0 (0x0))
12488:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:.\ssl\s23_lib.c:177:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 210 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---
Edit 3
OK, I've narrowed the problem down to Webbit initialization, but it doesn't throw any errors, so I could use some input on getting getResourceAsStream to work properly. Here's how the server is initialized:
def startWebSocketServer(webSocketHandler: PartialFunction[WebSocketEvent, Unit]) {
  val webServer = WebServers.createWebServer(port)
  try {
    webServer.setupSsl(getClass.getResourceAsStream("/keystore"), "webbit")
    webServer.add("/", new WebSocketEventAdapter(webSocketHandler))
    webServer.start
  } catch {
    case e => e.printStackTrace()
  }
}
Unfortunately setupSsl doesn't output any information, and I've tried both what I thought the path would be and a fake path. In either case, I can't get an error. How on earth do I properly locate the path? Thanks!
The OMFG Answer
In a hysterical twist of fate, I found the problem. This particular issue took up 48 hours of my time, but the cause was not even code related - just a funny miscommunication.
As it turns out, another developer had copied our websocket code into a new file he was working on. All this time we were trying to debug code in a file that wasn't even executing at runtime. Upon further investigation we scrolled to the bottom of a very long and different file, found the Webbit init code there, and it executed perfectly.
Moral of the story: don't commit an incomplete file to the master branch and point everyone there for debugging ;)