How to open a .saz WebSocket dump? - Fiddler

I know .saz files are just zip archives.
However, when I try to open the WebSocket session file (the .w file inside the .saz),
most of the values are in an unknown binary format. How can I read them? When I view the same .saz file in Fiddler, it works fine (I see normal text data from the WebSocket session), but when I open it in Notepad I get the following:
‚ю —Ѓ]CИСњЎr0ја)6»ЎњСrrж°PIЂо.7тЎ/0жв<1Єо3$©м80жв2.Е‹0­уpЇд37тЎ,§кfqюІisуП< ¤·icЕ‹,¦х8-ј¬ :ёдgc©с-/Ўв<7Ўо3l°¬*4ї¬;,ємp6єн8-«о9&¬ЊW §п)&¦хp­п:7 »}sЕ‹PI
Request-Length: 17
ID: 17
BitFlags: 0
DoneRead: 2015-02-14T09:47:35.1427680+03:00
BeginSend: 2015-02-14T09:47:35.1427680+03:00
DoneSend: 2015-02-14T09:47:35.1427680+03:00
How can I decode this?

The WebSocket file's format is not currently documented and direct manipulation is not supported.
As of Fiddler 2.5.0.1, the format is as follows:
[File Headers]\r\n
[Message 0 Headers]\r\n
[Message 0 raw bytes]\r\n
[Message 1 Headers]\r\n
[Message 1 raw bytes]\r\n
[Message 2 Headers]\r\n
[Message 2 raw bytes]\r\n
<eof>
Obviously, parsing this requires that you have code that can parse the raw bytes of a WebSocket message.
Rather than writing all of that code yourself, you'd probably be better off just using Fiddler's Script or Extension model to interact with the WebSocketMessage objects that Fiddler builds when reloading a SAZ file.

Related

Log HAProxy captured response header in custom format

I have an HAProxy configuration with a custom JSON log-format. I want to capture a specific response header and log it.
However, no matter how I try to capture it, I cannot make it appear in the log.
In my log format I use %[capture.res.hdr(0)] but it only comes up as -. I've also tried %[res.hdr(0)] and %[res.hdr(MyHeader)], but those were not valid configuration and HAProxy failed to start.
I've tried capturing using:
capture response header MyHeader len 50
But it doesn't work. I also tried:
declare capture response len 50
http-response capture res.hdr(MyHeader) id 0
With no success. The %hs format variable works - all the captured headers are logged in a delimited string. But I want to log the headers separately as JSON properties.
What am I doing wrong?
I'm currently using HAProxy 1.8.
It turns out the combination of having capture response header MyHeader len 50 in the frontend section and %[capture.res.hdr(0)] in the log-format does work. I had multiple instances of HAProxy running and had only reloaded some of them, so the change only took effect for some requests.
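For reference, a minimal sketch of the working combination (the frontend and backend names here are assumptions):

```
frontend fe_main
    bind :80
    # slot 0 = the first "capture response header" declared in this frontend
    capture response header MyHeader len 50
    log-format '{"client":"%ci","my_header":"%[capture.res.hdr(0)]"}'
    default_backend be_app
```

And, as the asker found, remember to reload every running HAProxy instance, not just some of them.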

Don't receive results other than those from the first audio chunk

I want some level of real-time speech-to-text conversion. I am using the WebSocket interface with interim_results=true. However, I am receiving results for the first audio chunk only; the second, third, ... audio chunks that I am sending are not getting transcribed. I do know that my receiver is not blocked, since I do receive the inactivity message:
{"error": "Session timed out due to inactivity after 30 seconds."}
Please let me know if I am missing something or if I need to provide more contextual information.
Just for reference this is my init json.
{
"action": "start",
"content-type":"audio/wav",
"interim_results": true,
"continuous": true,
"inactivity_timeout": 10
}
In the result that I get for the first audio chunk, the "final" JSON field is always received as false.
Also, I am using Go, but that should not really matter.
EDIT:
Consider the following pseudo log
localhost-server receives first 4 seconds of binary data # let's say Binary 1
Binary 1 is sent to Watson
{interim_result_1 for first chunk}
{interim_result_2 for first chunk}
localhost-server receives last 4 seconds of binary data # let's say Binary 2
Binary 2 is sent to Watson
Send {"action": "stop"} to Watson
{interim_result_3 for first chunk}
final result for the first chunk
I am not receiving any transcription for the second chunk
Link to code
You are getting the time-out message because the service waits for you to either send more audio or send a message signalling the end of an audio submission. Are you sending that message? It's very easy:
By sending a JSON text message with the action key set to the value stop: {"action": "stop"}
By sending an empty binary message
https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/speech-to-text/websockets.shtml
Please let me know if this does not resolve your problem.
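For illustration, a minimal Go sketch of building that stop frame (the client library mentioned in the usage note, gorilla/websocket, is an assumption; any WebSocket client works):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stopMessage builds the JSON text frame that signals the end of an
// audio submission to the speech-to-text service.
func stopMessage() []byte {
	b, _ := json.Marshal(map[string]string{"action": "stop"})
	return b
}

func main() {
	fmt.Println(string(stopMessage()))
}
```

With gorilla/websocket you would send it as conn.WriteMessage(websocket.TextMessage, stopMessage()), or send the alternative empty binary message as conn.WriteMessage(websocket.BinaryMessage, []byte{}).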
This is a bit late, but I've open-sourced a Go SDK for Watson services here:
https://github.com/liviosoares/go-watson-sdk
There is some documentation about speech-to-text binding here:
https://godoc.org/github.com/liviosoares/go-watson-sdk/watson/speech_to_text
There is also an example of streaming data to the API in the _test.go file:
https://github.com/liviosoares/go-watson-sdk/blob/master/watson/speech_to_text/speech_to_text_test.go
Perhaps this can help you.
The solution to this question was to set the size header of the wav file to 0.

Convert ELMAH raw files to Fiddler SAZ traces

Is there a way to convert Raw/Source data in XML or in JSON generated by ELMAH to a Fiddler SAZ trace file?
https://github.com/mausch/ElmahFiddler adds a Fiddler .saz attachment when an error occurs (when the errors are mailed). It generates the *.saz file from the HttpRequest.

C - Windows socket reading text file content

I am having problems reading a text file's content via Winsock in C. Does anyone have any idea how it should work? When I try to GET the HTTP headers from Google I am able to, but when I try on my XAMPP machine,
it just gives me 400 Bad Request.
HTTP/1.1 400 Bad Request
char *message = "GET / HTTP/1.1\r\n\r\n";
OK, the problem that caused the 400 Bad Request on my localhost via Winsock was my HTTP request: I just changed the 1.1 to 1.0 and it worked! What I want now is to print only the content of the text file and not the whole banner. :)
Read RFC 2616, in particular sections 5.2 and 14.23. An HTTP 1.1 request is required to include a Host header, and an HTTP 1.1 server is required to send a 400 reply if the header is missing and no host is specified in the request line.
char *message = "GET / HTTP/1.1\r\nHost: hostnamehere\r\n\r\n";
As for the text content, you need to read from the socket until you encounter a \r\n\r\n sequence (which terminates the response headers), then process the headers, then read the text content accordingly. The response headers tell you how to read the raw bytes of the text content and when to stop reading (refer to RFC 2616 section 4.4 for details). Once you have the raw bytes, the Content-Type header tells you how to interpret the raw bytes (data type, charset, etc).
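To illustrate that header/body split (sketched in Go rather than C for brevity; only the Content-Length case is handled here, not the chunked coding that RFC 2616 section 4.4 also allows):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitResponse separates a raw HTTP/1.x response into its header
// block and body at the \r\n\r\n terminator, then trims the body to
// Content-Length bytes when that header is present.
func splitResponse(raw string) (headers, body string, err error) {
	i := strings.Index(raw, "\r\n\r\n")
	if i < 0 {
		return "", "", fmt.Errorf("header terminator not found")
	}
	headers, body = raw[:i], raw[i+4:]
	for _, line := range strings.Split(headers, "\r\n") {
		k, v, ok := strings.Cut(line, ":")
		if ok && strings.EqualFold(strings.TrimSpace(k), "Content-Length") {
			n, convErr := strconv.Atoi(strings.TrimSpace(v))
			if convErr != nil {
				return headers, body, convErr
			}
			if n <= len(body) {
				body = body[:n]
			}
			return headers, body, nil
		}
	}
	return headers, body, nil
}

func main() {
	raw := "HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhelloEXTRA"
	_, body, _ := splitResponse(raw)
	fmt.Println(body) // hello
}
```

In the C program the logic is the same: buffer the received bytes, find the \r\n\r\n, parse the headers, then read exactly the advertised number of body bytes.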

XML data text + binary

I built an iPhone app that is getting information from a server (this is also a server that I built).
The data from the server is XML and I use the XML parser to parse the message.
What I want is to add an image to be sent from the server, and I am asking whether I can add the binary data of such an image to the XML message. For example, 10 tags would be text and 1 tag would be binary (the image). Then, when the XML parser gets to the binary tag, it would insert the data into an NSData object, while the rest of the tags would be inserted into NSStrings.
Can Cocoa's XML parser handle this situation?
If not, what do you think would be the easiest way to do this with one connection to the server, so that the data from the server is sent only once?
To transfer binary data wrapped in XML, encode it using e.g. Base64, which turns your binary data into characters that won't mess up your XML.
You can transfer the image data, encoded using Base64. There is an NSData category by Matt Gallagher that adds Base64 decoding support to NSData (dataFromBase64String:). You can find it on his Cocoa with Love website.
Mind you that encoding images in Base64 increases their size by about 33%.