I want to send a large amount of data to a server using NSURLConnection (and NSURLRequest). To do this, I create a bound pair of NSStreams (using CFStreamCreateBoundPair(...)). I then pass the input stream to the NSURLRequest (-setHTTPBodyStream:) and schedule the output stream on the current run loop. Once the run loop resumes, I receive the events telling me to send data, and the input stream delivers that data to the server.
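In Swift terms, that setup looks roughly like this (a hedged sketch only: the buffer size, URL, and payload are made-up placeholders, and Stream.getBoundStreams is Foundation's counterpart of CFStreamCreateBoundPair):

import Foundation

// A delegate that writes the payload whenever the output stream has space;
// the paired input stream hands those bytes to the request body.
final class StreamWriter: NSObject, StreamDelegate {
    private let payload: Data
    private var offset = 0

    init(payload: Data) { self.payload = payload }

    func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
        guard let output = aStream as? OutputStream else { return }
        switch eventCode {
        case .hasSpaceAvailable:
            let written = payload.withUnsafeBytes { raw -> Int in
                let base = raw.bindMemory(to: UInt8.self).baseAddress! + offset
                return output.write(base, maxLength: payload.count - offset)
            }
            if written > 0 { offset += written }
            if offset == payload.count { output.close() }
        case .errorOccurred, .endEncountered:
            output.close()
        default:
            break
        }
    }
}

var inputStream: InputStream?
var outputStream: OutputStream?
Stream.getBoundStreams(withBufferSize: 64 * 1024,
                       inputStream: &inputStream,
                       outputStream: &outputStream)

var request = URLRequest(url: URL(string: "https://example.com/upload")!)
request.httpMethod = "POST"
request.httpBodyStream = inputStream  // the counterpart of -setHTTPBodyStream:

let writer = StreamWriter(payload: Data(repeating: 0x41, count: 1_000_000))
outputStream?.delegate = writer
outputStream?.schedule(in: .current, forMode: .default)
outputStream?.open()
// ... then start the connection with this request.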
My problem is that this only works when the data fits into the buffer between the paired streams. If the data is bigger, the input stream somehow gets an event (I assume "bytes available") before the NSURLConnection has opened it. This results in an error message being printed, and the data is never sent.
I tried to catch this in my -stream:handleEvent: method by simply returning if the input stream is not yet open, but then my output stream gets a stream-closed event (probably because I never sent data when I could have).
So my question is: how do I use a bound pair of streams with NSURLConnection correctly?
(If this matters: I'm developing on the iOS platform)
Any help is appreciated!
Cheers, Markus
OK, I kind of fixed this by delaying the start of the upload, so that it begins after the NSURLConnection has had time to set up its input stream.
It's not what I'd call a clean solution though, since relying on -performSelector:withObject:afterDelay: seems a bit hacky.
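In Swift terms, the workaround amounts to something like this (continuing the sketch above; the 0.1-second delay is an arbitrary guess, which is exactly what makes it feel fragile):

// Delay opening the writer side until the connection has (hopefully) had
// time to open its end of the bound pair. The 0.1 s value is a guess.
DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) {
    outputStream?.delegate = writer
    outputStream?.schedule(in: .current, forMode: .default)
    outputStream?.open()
}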
So if anyone else has a solution to this, I'm still open to any suggestions.
The documentation for akka-http explains that it is important to consume a request stream entirely, since bytes that are not pulled will be interpreted as backpressure (https://doc.akka.io/docs/akka-http/current/implications-of-streaming-http-entity.html). When you know beforehand that the stream can be ignored, you should use discardEntityBytes; otherwise, read it fully. There is also the option of closing the connection by attaching the stream to a Sink.cancelled.
My question is what happens when the stream fails.
Is the stream drained or is the connection closed? Or is it the responsibility of the implementation to recover from errors and either drain or close the connection? If so, what is a good code pattern for this?
Does it matter if a request is completed with a Future or if the response is streaming?
What if, instead of an unexpected failure, you determine halfway through the stream that the rest of it can be ignored? Is throwing an exception a good way of stopping stream processing?
Example completing with a Future:
val route =
  post {
    extractDataBytes { data =>
      complete {
        data
          .via(flow1)
          .via(flow2) // say an error happens here at some point
          .runWith(sink)
      }
    }
  }
If the server connection is having problems, then the connection will be closed automatically.
I am interested in learning Vapor, so I decided to work on a website that displays government-issued weather alerts. Alert distribution is done via a TCP/IP data stream (streaming1.naad-adna.pelmorex.com, port 8080).
What I have in mind is to use IBM's BlueSocket (https://github.com/IBM-Swift/BlueSocket) to create a socket, but beyond that point I gave it some thought and was unable to come to a conclusion about what the next steps would be.
Alerts are streamed over the data stream, so I am aware the socket would need to be opened and listened on, but I wasn't able to get much past that.
A few things about the data stream: the start and end of an alert are detected using the start and end tags of the XML document (<alert> and </alert>). There are no special or proprietary headers added to the data; it's only raw XML. I know some alerts also include an XML declaration, so I assume the encoding should be taken into account when the declaration is available.
I was then thinking of using XMLParser to parse the XML and use the data I am interested in from the alert.
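For that parsing step, a minimal XMLParserDelegate might look something like this (a sketch only; "headline" is a hypothetical element name, not necessarily the real alert schema):

import Foundation

// A sketch of the parsing step: collect the text of one element of interest.
final class AlertParser: NSObject, XMLParserDelegate {
    private var currentElement = ""
    private(set) var headline = ""

    func parser(_ parser: XMLParser, didStartElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?,
                attributes attributeDict: [String: String]) {
        currentElement = elementName
    }

    func parser(_ parser: XMLParser, foundCharacters string: String) {
        if currentElement == "headline" {
            headline += string
        }
    }
}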
So really, the main thing I am struggling with is: when the socket is open, what would be the method to listen on it, determine the start and end of an alert, and then pass that XML alert along for processing?
I would appreciate any input. I am also not restricted to BlueSocket, so if there is a better option for what I am trying to achieve, I am more than open to it.
So really, the main thing I am struggling with is: when the socket is open, what would be the method to listen on it, determine the start and end of an alert, and then pass that XML alert along for processing?
The method you should use is read(into data: inout Data). It stores any available data that the server has sent into data. There are a few reasons for this method to fail, such as the connection disconnecting.
Here's an example of how to use it:
import Foundation
import Socket

// Create an IPv4 TCP stream socket (the default) and connect.
let s = try Socket.create()
try s.connect(to: "streaming1.naad-adna.pelmorex.com", port: 8080)

while true {
    // Block until the server has sent some data.
    if try Socket.wait(for: [s], timeout: 0, waitForever: true) != nil {
        var alert = Data()
        let bytesRead = try s.read(into: &alert)
        if bytesRead == 0 { break }  // the server closed the connection
        if let message = String(data: alert, encoding: .ascii) {
            print(message)
        }
    }
}
s.close()
First, create the socket. The default is what we want: an IPv4 TCP stream socket.
Second, connect() to the server using the hostname and port. Without this step, the socket isn't connected and cannot receive or send any data.
wait() until the server has sent us some data. It returns the list of sockets that have data available to read.
read() the data, decode it, and print it. By default, this call blocks if there is no data available on the socket.
close() the socket. This is good practice.
You might also like to think about:
non-blocking sockets
error handling
streaming (a single call to read() might not give a complete alert; see the sketch below).
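For that last point, one approach (a sketch under assumptions: the tag names come from the question, and the simple string buffering is illustrative) is to accumulate reads and emit each complete <alert>...</alert> span:

var buffer = ""

// Append each chunk read from the socket, then pull out every complete alert.
func appendAndExtractAlerts(_ chunk: String) -> [String] {
    buffer += chunk
    var alerts: [String] = []
    while let start = buffer.range(of: "<alert"),
          let end = buffer.range(of: "</alert>", range: start.lowerBound..<buffer.endIndex) {
        alerts.append(String(buffer[start.lowerBound..<end.upperBound]))
        buffer.removeSubrange(buffer.startIndex..<end.upperBound)
    }
    return alerts
}

// Example: feed a chunk (in real code, the `message` decoded in the read loop).
let chunk = "<alert><headline>Storm warning</headline></alert><ale"
for xml in appendAndExtractAlerts(chunk) {
    let parser = XMLParser(data: Data(xml.utf8))
    // assign a delegate (e.g. the AlertParser sketched earlier) and parse
    parser.parse()
}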
I hope this answers your question.
I am looking into the Swift Vapor framework.
I am trying to create a controller class that maps data obtained over an SSL link to a third-party system (an Asterisk PBX server) into a response body that is sent to the client over some period of time.
So I need to send the received text lines (obtained separately over the SSL connection) as they come in, without waiting for a 'complete response' to be constructed.
Seeing this example:
return Response(status: .ok) { chunker in
    for name in ["joe\n", "pam\n", "cheryl\n"] {
        sleep(1)
        try chunker.send(name)
    }
    try chunker.close()
}
I thought it might be the way to go.
But what I see when connecting to the Vapor server is that the REST call waits for the loop to complete before the three lines are received as the result.
How can I get try chunker.send(name) to send its characters back to the client without waiting for the loop to complete?
In the real code, the controller method can potentially keep an HTTP connection to the client open for a long time, sending Asterisk activity data to the client as soon as it is obtained. So each .send(name) should pass its data to the client immediately, rather than waiting for the final .close() call.
Adding a try chunker.flush() did not produce any better result.
HTTP requests aren't really designed to work like that. Different browsers and clients will function differently depending on their implementations.
For instance, if you connect to the chunker example you pasted with telnet, you will see that the data is sent every second. Safari, on the other hand, will wait for the entire response before displaying anything.
If you want to send chunked data like this reliably, you should use a protocol designed for it, such as WebSockets.
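For illustration, here is a rough sketch of such a WebSocket route (this assumes Vapor 4's API, which differs from the chunker-era API in the question; the route name and hard-coded names are placeholders for the real Asterisk feed):

import Vapor

// Each send() below is delivered to the client as its own WebSocket frame,
// as soon as it is written, instead of when the whole response completes.
func routes(_ app: Application) throws {
    app.webSocket("activity") { req, ws in
        for name in ["joe", "pam", "cheryl"] {
            ws.send(name)
        }
        // In the real controller, forward Asterisk events here as they
        // arrive, and close the socket when the upstream feed ends.
    }
}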
I want some level of real-time speech-to-text conversion. I am using the WebSockets interface with interim_results=true. However, I am receiving results for the first audio chunk only. The second, third, ... audio chunks that I send are not getting transcribed. I do know that my receiver is not blocked, since I do receive the inactivity message:
json {"error": "Session timed out due to inactivity after 30 seconds."}
Please let me know if I am missing something or if I need to provide more contextual information.
Just for reference, this is my init JSON:
{
  "action": "start",
  "content-type": "audio/wav",
  "interim_results": true,
  "continuous": true,
  "inactivity_timeout": 10
}
In the result that I get for the first audio chunk, the final JSON field is always received as false.
Also, I am using golang but that should not really matter.
EDIT:
Consider the following pseudo-log:
localhost-server receives first 4 seconds of binary data # let's say Binary 1
Binary 1 is sent to Watson
{interim_result_1 for first chunk}
{interim_result_2 for first chunk}
localhost-server receives last 4 seconds of binary data # let's say Binary 2
Binary 2 is sent to Watson
Send {"action": "stop"} to Watson
{interim_result_3 for first chunk}
final result for the first chunk
I am not receiving any transcription for the second chunk
Link to code
You are getting the time-out message because the service is waiting for you to either send more audio or send a message signalling the end of the audio submission. Are you sending that message? It's very easy:
By sending a JSON text message with the action key set to the value stop: {"action": "stop"}
By sending an empty binary message
https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/speech-to-text/websockets.shtml
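The question uses Go, but just to make the exchange concrete, here is a sketch with Swift's URLSessionWebSocketTask (the endpoint is a placeholder; a real connection needs the Watson URL and credentials, and the message itself is the part that matters):

import Foundation

// Placeholder endpoint; a real connection needs the Watson URL and credentials.
let url = URL(string: "wss://example.com/v1/recognize")!
let task = URLSession.shared.webSocketTask(with: url)
task.resume()

// ... send the "start" message and the binary audio chunks, then:
task.send(.string(#"{"action": "stop"}"#)) { error in
    if let error = error {
        print("failed to send stop message: \(error)")
    }
}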
Please let me know if this does not resolve your problem.
This is a bit late, but I've open-sourced a Go SDK for Watson services here:
https://github.com/liviosoares/go-watson-sdk
There is some documentation about speech-to-text binding here:
https://godoc.org/github.com/liviosoares/go-watson-sdk/watson/speech_to_text
There is also an example of streaming data to the API in the _test.go file:
https://github.com/liviosoares/go-watson-sdk/blob/master/watson/speech_to_text/speech_to_text_test.go
Perhaps this can help you.
The solution to this question was to set the size field in the WAV file's header to 0.
I'm sending an encoded live audio stream (MP3 or Ogg) over WebSockets, and I want to play it with the Web Audio API.
I have read and tried several things, but nothing has worked so far.
I always tried to do it with the decodeAudioData method, but this method cannot handle a continuous stream.
So my last approach was this:
ws.onmessage = function (evt) {
    context.decodeAudioData(evt.data, function (decodedData) {
        source = context.createBufferSource();
        source.buffer = decodedData;
        source.start(startTime);
        source.connect(context.destination);
        startTime += decodedData.duration;
    },
    function (e) {
        var test = e;
    });
};
This works, at least with MP3s, but not very well: between the received chunks there is a very small break, so there is no fluid playback of the stream. I don't know the reason for it; maybe the decodedData.duration property is not accurate enough, or there is some kind of delay somewhere.
Anyway, it's not working at all with Ogg files. I can hear the first received chunk, but the rest is ignored. Maybe this has something to do with missing headers?
Is there any other method in the Web Audio API for playing an encoded live stream besides decodeAudioData? I could not find anything.
Thanks for your help!
Don't do this over WebSockets if you can help it. Let the browser do its job and play this over HTTP. Otherwise you must reinvent everything.
If you insist on reinventing everything for some reason, you must:
Buffer incoming data
Decode that data
Buffer decoded data
Play back your decoded PCM buffers with a script node
Handle the times when you have buffer underruns/overruns (likely by playing back silence or dropping PCM samples)
How you do each of these items depends on your specific needs and implementation, so I would recommend splitting up the question if you get stuck on any of that.