Rewriting looping blocking code to SwiftNIO style non-blocking code - swift

I'm working on a driver that will read data from the network. It doesn't know how much is in a response, other than that when it tries to read and gets 0 bytes back, it is done. So my blocking Swift code looks naively like this:
func readAllBlocking() -> [Byte] {
    var buffer: [Byte] = []
    var fullBuffer: [Byte] = []
    repeat {
        buffer = read() // synchronous, blocking
        fullBuffer.append(contentsOf: buffer)
    } while buffer.count > 0
    return fullBuffer
}
How can I rewrite this as a promise that will keep on running until the entire result is read? After trying to wrap my brain around it, I'm still stuck here:
func readAllNonBlocking() -> EventLoopFuture<[Byte]> {
    ///...?
}
I should add that I can rewrite read() so that, instead of returning a [Byte], it returns an EventLoopFuture<[Byte]>.
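For context, one way I could probably produce such a future (a rough sketch only; blockingRead, the thread pool wiring, and the Byte typealias are placeholders from my setup) is by offloading the blocking call onto a NIOThreadPool:
import NIO

// Sketch: run the synchronous read off the event loop so nothing blocks it.
// `Byte` and `blockingRead()` are placeholders for my existing types/functions.
func read(on eventLoop: EventLoop, using threadPool: NIOThreadPool) -> EventLoopFuture<[Byte]> {
    return threadPool.runIfActive(eventLoop: eventLoop) {
        blockingRead() // the original synchronous, blocking read()
    }
}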

Generally, when moving from synchronous code to asynchronous code that uses futures, loops are turned into recursion to get the same effect (the same technique is used in functional programming).
So your function could look like this:
func readAllNonBlocking(on eventLoop: EventLoop) -> EventLoopFuture<[Byte]> {
    // The accumulated chunks
    var accumulatedChunks: [Byte] = []
    // The promise that will hold the overall result
    let promise = eventLoop.makePromise(of: [Byte].self)

    // We turn the loop into recursion:
    func loop() {
        // First, we call `read` to read in the next chunk and hop
        // over to `eventLoop` so we can safely write to `accumulatedChunks`
        // without a lock.
        read().hop(to: eventLoop).map { nextChunk in
            // Next, we just append the chunk to the accumulation
            accumulatedChunks.append(contentsOf: nextChunk)
            guard nextChunk.count > 0 else {
                promise.succeed(accumulatedChunks)
                return
            }
            // and if it wasn't empty, we loop again.
            loop()
        }.cascadeFailure(to: promise) // if anything goes wrong, we fail the whole thing.
    }

    loop() // Let's kick everything off.
    return promise.futureResult
}
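For completeness, calling it might look roughly like this (a sketch; eventLoop is whatever EventLoop you already have, for example from a MultiThreadedEventLoopGroup):
// Sketch only: `eventLoop` would come from your existing EventLoopGroup or Channel.
readAllNonBlocking(on: eventLoop).whenComplete { result in
    switch result {
    case .success(let bytes):
        print("read \(bytes.count) bytes in total")
    case .failure(let error):
        print("read failed: \(error)")
    }
}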
I would like to add two things, however:
First, what you're implementing above is simply reading everything until you see EOF. If that piece of software is exposed to the internet, you should definitely add a limit on how many bytes to hold in memory at most (the full example below does this via the maximumBufferSize parameter).
Secondly, SwiftNIO is an event-driven system, so if you were to read these bytes with SwiftNIO, the program would actually look slightly different. If you're interested in what it looks like to simply accumulate all bytes until EOF in SwiftNIO, it's this:
struct AccumulateUntilEOF: ByteToMessageDecoder {
    typealias InboundOut = ByteBuffer

    func decode(context: ChannelHandlerContext, buffer: inout ByteBuffer) throws -> DecodingState {
        // `decode` will be called if new data is coming in.
        // We simply return `.needMoreData` because we always need more data: our message end is EOF.
        // ByteToMessageHandler will automatically accumulate everything for us because we tell it that we need more
        // data to decode a message.
        return .needMoreData
    }

    func decodeLast(context: ChannelHandlerContext, buffer: inout ByteBuffer, seenEOF: Bool) throws -> DecodingState {
        // `decodeLast` will be called if NIO knows that this is the _last_ time a decode function is called. Usually,
        // this is because of EOF or an error.
        if seenEOF {
            // This is what we've been waiting for: `buffer` should contain all bytes, let's fire them through
            // the pipeline.
            context.fireChannelRead(self.wrapInboundOut(buffer))
        } else {
            // Odd, something else happened, probably an error or we were just removed from the pipeline. `buffer`
            // will now contain what we received so far but maybe we should just drop it on the floor.
        }
        buffer.clear()
        return .needMoreData
    }
}
If you wanted to make a whole program out of this with SwiftNIO, here's an example: a server that accepts all data until it sees EOF and then literally just writes back the number of received bytes :). Of course, in the real world you would never hold on to all the received bytes just to count them (you could simply add up the size of each individual piece, as sketched after the demo below), but I guess it serves as an example.
import NIO

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
defer {
    try! group.syncShutdownGracefully()
}

struct AccumulateUntilEOF: ByteToMessageDecoder {
    typealias InboundOut = ByteBuffer

    func decode(context: ChannelHandlerContext, buffer: inout ByteBuffer) throws -> DecodingState {
        // `decode` will be called if new data is coming in.
        // We simply return `.needMoreData` because we always need more data: our message end is EOF.
        // ByteToMessageHandler will automatically accumulate everything for us because we tell it that we need more
        // data to decode a message.
        return .needMoreData
    }

    func decodeLast(context: ChannelHandlerContext, buffer: inout ByteBuffer, seenEOF: Bool) throws -> DecodingState {
        // `decodeLast` will be called if NIO knows that this is the _last_ time a decode function is called. Usually,
        // this is because of EOF or an error.
        if seenEOF {
            // This is what we've been waiting for: `buffer` should contain all bytes, let's fire them through
            // the pipeline.
            context.fireChannelRead(self.wrapInboundOut(buffer))
        } else {
            // Odd, something else happened, probably an error or we were just removed from the pipeline. `buffer`
            // will now contain what we received so far but maybe we should just drop it on the floor.
        }
        buffer.clear()
        return .needMoreData
    }
}

// Just an example "business logic" handler. It will wait for one message
// and just write back the length.
final class SendBackLengthOfFirstInput: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
    typealias OutboundOut = ByteBuffer

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        // Once we receive the message, we allocate a response buffer and just write the length of the received
        // message in there. We then also close the channel.
        let allData = self.unwrapInboundIn(data)
        var response = context.channel.allocator.buffer(capacity: 10)
        response.writeString("\(allData.readableBytes)\n")
        context.writeAndFlush(self.wrapOutboundOut(response)).flatMap {
            context.close(mode: .output)
        }.whenSuccess {
            context.close(promise: nil)
        }
    }

    func errorCaught(context: ChannelHandlerContext, error: Error) {
        print("ERROR: \(error)")
        context.channel.close(promise: nil)
    }
}

let server = try ServerBootstrap(group: group)
    // Allow us to reuse the port after the process quits.
    .serverChannelOption(ChannelOptions.socket(.init(SOL_SOCKET), .init(SO_REUSEADDR)), value: 1)
    // We should allow half-closure because we want to write back after having received an EOF on the input.
    .childChannelOption(ChannelOptions.allowRemoteHalfClosure, value: true)
    // Our program consists of two parts:
    .childChannelInitializer { channel in
        channel.pipeline.addHandlers([
            // 1: The accumulate-everything-until-EOF handler
            ByteToMessageHandler(AccumulateUntilEOF(),
                                 // We want 1 MB of buffering max. If you remove this parameter, it'll also
                                 // buffer indefinitely.
                                 maximumBufferSize: 1024 * 1024),
            // 2: Our "business logic"
            SendBackLengthOfFirstInput()
        ])
    }
    // Let's bind port 9999
    .bind(to: SocketAddress(ipAddress: "127.0.0.1", port: 9999))
    .wait()

// This will never return.
try server.closeFuture.wait()
Demo:
$ echo -n "hello world" | nc localhost 9999
11
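To illustrate the "just add up each individual piece" remark from above: a hypothetical handler (the name CountBytesUntilEOF and its details are mine, not part of the program above) could count bytes as they arrive and would replace both AccumulateUntilEOF and SendBackLengthOfFirstInput in the pipeline:
// Sketch only: counts incoming bytes instead of buffering them. This relies on
// `allowRemoteHalfClosure` being enabled, so EOF arrives as `ChannelEvent.inputClosed`.
final class CountBytesUntilEOF: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
    typealias OutboundOut = ByteBuffer

    private var byteCount = 0

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        // Just add up each chunk's size; the bytes themselves are never kept around.
        self.byteCount += self.unwrapInboundIn(data).readableBytes
    }

    func userInboundEventTriggered(context: ChannelHandlerContext, event: Any) {
        guard let channelEvent = event as? ChannelEvent, case .inputClosed = channelEvent else {
            context.fireUserInboundEventTriggered(event)
            return
        }
        // The remote side sent EOF: write back the total and close.
        var response = context.channel.allocator.buffer(capacity: 16)
        response.writeString("\(self.byteCount)\n")
        context.writeAndFlush(self.wrapOutboundOut(response)).whenComplete { _ in
            context.close(promise: nil)
        }
    }
}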

Related

RxJava2/RxAndroidBle: subscribe to Observable from side effects

I have the following use case of a simple BLE device setup process using RxAndroidBle:
Connect to a BLE device.
Start listening to notification characteristic and set up a parser to parse each incoming notification. Parser will then use a PublishSubject to publish parsed data.
Perform a write to write characteristic (negotiate secure connection).
Wait for parser PublishSubject to deliver the parsed response from device - public key (which arrived through the notification characteristic as a response to our write).
Perform another write to the write characteristic (set connection as secure).
Deliver a Completable saying if the process has completed successfully or not.
Right now my solution (not working) looks like this:
deviceService.connectToDevice(macAddress)
    .andThen(Completable.defer { deviceService.setupCharacteristicNotification() })
    .andThen(Completable.defer { deviceService.postNegotiateSecurity() })
    .andThen(Completable.defer {
        parser.notificationResultSubject
            .flatMapCompletable { result ->
                when (result) {
                    DevicePublicKeyReceived -> Completable.complete()
                    else -> Completable.error(Exception("Unexpected notification parse result: ${result::class}"))
                }
            }
    })
    .andThen(Completable.defer { deviceService.postSetSecurity() })
And the DeviceService class:
class DeviceService {
    /**
     * Observable keeping shared RxBleConnection for reuse by different calls
     */
    private var connectionObservable: Observable<RxBleConnection>? = null

    fun connectToDevice(macAddress: String): Completable {
        return Completable.fromAction {
            connectionObservable =
                rxBleClient.getBleDevice(macAddress)
                    .establishConnection(false)
                    .compose(ReplayingShare.instance())
        }
    }

    fun setupCharacteristicNotification(): Completable =
        connectionObservable?.let {
            it
                .switchMap { connection ->
                    connection.setupNotification(UUID_NOTIFICATION_CHARACTERISTIC)
                        .map { notificationObservable -> notificationObservable.doOnNext { bytes -> parser.parse(bytes) }.ignoreElements() }
                        .map { channel ->
                            Observable.merge(
                                Observable.never<RxBleConnection>().startWith(connection),
                                channel.toObservable()
                            )
                        }
                        .ignoreElements()
                        .toObservable<RxBleConnection>()
                }
                .doOnError { Timber.e(it, "setup characteristic") }
                .take(1).ignoreElements()
        } ?: Completable.error(CONNECTION_NOT_INITIALIZED)

    fun postNegotiateSecurity(): Completable {
        val postLength = negotiateSecurity.postNegotiateSecurityLength()
        val postPGK = negotiateSecurity.postNegotiateSecurityPGKData()
        return connectionObservable?.let {
            it.take(1)
                .flatMapCompletable { connection ->
                    postLength
                        .flatMapSingle { connection.write(it.bytes.toByteArray()) }
                        .doOnError { Timber.e(it, "post length") }
                        .flatMap {
                            postPGK
                                .flatMapSingle { connection.write(it.bytes.toByteArray()) }
                                .doOnError { Timber.e(it, "post PGK") }
                        }
                        .take(1).ignoreElements()
                }
        } ?: Completable.error(CONNECTION_NOT_INITIALIZED)
    }

    fun postSetSecurity(): Completable =
        connectionObservable?.let {
            it.take(1)
                .flatMapCompletable { connection ->
                    negotiateSecurity.postSetSecurity()
                        .flatMapSingle { connection.write(it.bytes.toByteArray()) }
                        .take(1).ignoreElements()
                }
        } ?: Completable.error(CONNECTION_NOT_INITIALIZED)
}

private fun RxBleConnection.write(bytes: ByteArray): Single<ByteArray> =
    writeCharacteristic(UUID_WRITE_CHARACTERISTIC, bytes)
The problem is that it gets stuck in deviceService.postNegotiateSecurity() and never gets past it. I don't get any data in the parser either, so I assume I'm incorrectly subscribing to the notification characteristic.
negotiateSecurity.postNegotiateSecurityLength() and negotiateSecurity.postNegotiateSecurityPGKData() are methods which prepare data to be sent and deliver it as Observable<SendFragment>. Because of data frame size limit, one frame might be encoded as several fragments, which are then emitted by these Observables.
Recap:
postNegotiateSecurity() is never completed
negotiateSecurity.postNegotiateSecurityLength() may emit one or more times
negotiateSecurity.postNegotiateSecurityPGKData() may emit one or more times
Analysis (omitted logs for readability):
it.take(1)
    .flatMapCompletable { connection ->
        postLength
            .flatMapSingle { connection.write(it.bytes.toByteArray()) }
            .flatMap {
                postPGK // may emit more than one value
                    .flatMapSingle { connection.write(it.bytes.toByteArray()) }
            }
            .take(1) // first emission from the above `flatMap` will finish the upstream
            .ignoreElements()
    }
Every emission from postLength will start a characteristic write. Every successful write will start a subscription to postPGK. If postLength emits more than once, more subscriptions to postPGK will be made.
Every subscription to postPGK will most likely result in multiple emissions. Every emission will then be flatMapped to a characteristic write. Every successful write will emit a value.
After the first emission from the above-mentioned characteristic write, the upstream will be disposed (because of the .take(1) operator).
If postNegotiateSecurity() is actually started, it will also finish or error (given that both postLength and postPGK emit at least one value), since there is no additional logic here.
Conclusion
postNegotiateSecurity() will most probably complete (but not in the intended manner), as the first packet from postPGK will finish it. I would assume that the peripheral expects the full data before it notifies anything; it is therefore waiting for the PGK to be fully transmitted, which will not happen, as shown above.
Logs from the application with RxBleLog.setLogLevel(RxBleLog.VERBOSE) enabled could help in understanding what actually happened.

How to combine write characteristics and characteristic notifications using RxBluetoothKit for RxSwift

I am trying to interface with a BLE device using RxBluetoothKit for Swift. All the data commands of the device follow this sequence:
1. Write a command (writeWithResponse)
2. Read the response from a notification (on a different characteristic)
The number of notification packets (20 bytes max per notification packet) depends on the command. It is either a fixed number or essentially indicated using an end-of-data bit in the notification value.
Can this be achieved using a writeValue() and monitorValueUpdate() combination?
// Abstraction of your commands / results
enum Command {
    case Command1(arg: Float)
    case Command2(arg: Int, arg2: Int)
}

struct CommandResult {
    let command: Command
    let data: NSData
}

extension Command {
    func toByteCommand() -> NSData {
        return NSData()
    }
}

// Make sure to setup notifications before subscribing to returned observable!!!
func processCommand(notifyCharacteristic: Characteristic,
                    _ writeCharacteristic: Characteristic,
                    _ command: Command) -> Observable<CommandResult> {
    // This observable will take care of accumulating data received from notifications
    let result = notifyCharacteristic.monitorValueUpdate()
        .takeWhile { characteristic in
            // Your logic which know when to stop reading notifications.
            return true
        }
        .reduce(NSMutableData(), accumulator: { (data, characteristic) -> NSMutableData in
            // Your custom code to append data?
            if let packetData = characteristic.value {
                data.appendData(packetData)
            }
            return data
        })

    // Your code for sending commands, flatmap with more commands if needed or do something similar
    let query = writeCharacteristic.writeValue(command.toByteCommand(), type: .WithResponse)

    return Observable.zip(result, query, resultSelector: { (result: NSMutableData, query: Characteristic) -> CommandResult in
        // This block will be called after query is executed and correct result is collected.
        // You can now return some command specific result.
        return CommandResult(command: command, data: result)
    })
}

// If you would like to serialize multiple commands, you can do for example:
func processMultipleCommands(notifyCharacteristic: Characteristic,
                             writeCharacteristic: Characteristic,
                             commands: [Command]) -> Observable<()> {
    return Observable.from(Observable.just(commands))
        // concatMap would be more appropriate, because in theory we should wait for
        // flatmap result before processing next command. It's not available in RxSwift yet.
        .flatMap { command in
            return processCommand(notifyCharacteristic, writeCharacteristic, command)
        }
        .map { result in
            return ()
        }
}
You can try the above. It's just an idea of how you could handle it. I tried to comment on the most important things. Let me know if it works for you.
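If it helps, using it could look roughly like this (a sketch; notifyCharacteristic, writeCharacteristic, and disposeBag are assumed to already exist in your code):
// Sketch only: the characteristics and the dispose bag come from your own discovery/setup code.
processCommand(notifyCharacteristic, writeCharacteristic, .Command1(arg: 1.0))
    .subscribeNext { result in
        print("Got \(result.data.length) bytes back for \(result.command)")
    }
    .addDisposableTo(disposeBag)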

How to "tee" NSPipe in Swift

I'm trying to tee the standard output NSPipe of one NSTask to get two NSPipes, each of which will go into the standard input of one of two new NSTasks.
I know I can do this in C with the tee function, but I couldn't find it in either the Foundation or Darwin frameworks. How can I achieve this?
I wrote a solution to this and it is working well in my project. I have made a Swift package available on GitHub. You can also find examples of its use there. Here is the code:
/**
 Duplicates the data from `input` into each of the `outputs`.

 Following the precedent of `standardInput`/`standardOutput`/`standardError` in `Process` from `Foundation`,
 we accept the type `Any`, but throw a precondition failure if the arguments are not of type `Pipe` or `FileHandle`.
 https://github.com/apple/swift-corelibs-foundation/blob/eec4b26deee34edb7664ddd9c1222492a399d122/Sources/Foundation/Process.swift

 When `input` sends an EOF (write of length 0), the `outputs` file handles are closed, so only output to handles you own.

 This function sets the `readabilityHandler` of inputs and the `writeabilityHandler` of outputs,
 so you should not set these yourself after calling `tee`.
 The one exception to this guidance is that you can set the `readabilityHandler` of `input` to `nil` to stop `tee`ing.
 After doing so, the `writeabilityHandler`s of the `output`s will be set to `nil` automatically after all in-progress writes complete,
 but if desired, you could set them to `nil` manually to cancel these writes. However, this may result in some outputs receiving less of the data than others.

 This implementation waits for all outputs to consume a piece of input before more input is read.
 This means that the speed at which your processes read data may be bottlenecked by the speed at which the slowest process reads data,
 but this method also comes with very little memory overhead and is easy to cancel.
 If this is unacceptable for your use case, you may wish to rewrite this with a data deque for each output.
 */
public func tee(from input: Any, into outputs: Any...) {
    tee(from: input, into: outputs)
}

public func tee(from input: Any, into outputs: [Any]) {
    /// Get reading and writing handles from the input and outputs respectively.
    guard let input = fileHandleForReading(input) else {
        preconditionFailure(incorrectTypeMessage)
    }
    let outputs: [FileHandle] = outputs.map({
        guard let output = fileHandleForWriting($0) else {
            preconditionFailure(incorrectTypeMessage)
        }
        return output
    })

    let writeGroup = DispatchGroup()

    input.readabilityHandler = { input in
        let data = input.availableData

        /// If the data is empty, EOF reached
        guard !data.isEmpty else {
            /// Close all the outputs
            for output in outputs {
                output.closeFile()
            }
            /// Stop reading and return
            input.readabilityHandler = nil
            return
        }

        for output in outputs {
            /// Tell `writeGroup` to wait on this output.
            writeGroup.enter()
            output.writeabilityHandler = { output in
                /// Synchronously write the data
                output.write(data)
                /// Signal that we do not need to write anymore
                output.writeabilityHandler = nil
                /// Inform `writeGroup` that we are done.
                writeGroup.leave()
            }
        }

        /// Wait until all outputs have received the data
        writeGroup.wait()
    }
}

/// The message that is passed to `preconditionFailure` when an incorrect type is passed to `tee`.
let incorrectTypeMessage = "Arguments of tee must be either Pipe or FileHandle."

/// Get a file handle for reading from a `Pipe` or the handle itself from a `FileHandle`, or `nil` otherwise.
func fileHandleForReading(_ handle: Any) -> FileHandle? {
    switch handle {
    case let pipe as Pipe:
        return pipe.fileHandleForReading
    case let file as FileHandle:
        return file
    default:
        return nil
    }
}

/// Get a file handle for writing from a `Pipe` or the handle itself from a `FileHandle`, or `nil` otherwise.
func fileHandleForWriting(_ handle: Any) -> FileHandle? {
    switch handle {
    case let pipe as Pipe:
        return pipe.fileHandleForWriting
    case let file as FileHandle:
        return file
    default:
        return nil
    }
}
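As a usage sketch (the executables, paths, and arguments are only illustrative), teeing the output of one process into two others might look like this:
import Foundation

// Sketch only: duplicate the output of one process into two downstream processes.
let source = Process()
source.executableURL = URL(fileURLWithPath: "/bin/cat")
source.arguments = ["/etc/hosts"]

let lineCounter = Process()
lineCounter.executableURL = URL(fileURLWithPath: "/usr/bin/wc")
lineCounter.arguments = ["-l"]

let byteCounter = Process()
byteCounter.executableURL = URL(fileURLWithPath: "/usr/bin/wc")
byteCounter.arguments = ["-c"]

let sourceOutput = Pipe()
let toLineCounter = Pipe()
let toByteCounter = Pipe()

source.standardOutput = sourceOutput
lineCounter.standardInput = toLineCounter
byteCounter.standardInput = toByteCounter

// Everything `source` writes is duplicated into both downstream pipes.
tee(from: sourceOutput, into: toLineCounter, toByteCounter)

do {
    try source.run()
    try lineCounter.run()
    try byteCounter.run()
    source.waitUntilExit()
    lineCounter.waitUntilExit()
    byteCounter.waitUntilExit()
} catch {
    print("Failed to launch: \(error)")
}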

Recursive/looping NSURLSession async completion handlers

The API I use requires multiple requests to get search results. It's designed this way because searches can take a long time (> 5min). The initial response comes back immediately with metadata about the search, and that metadata is used in follow up requests until the search is complete. I do not control the API.
1st request is a POST to https://api.com/sessions/search/
The response to this request contains a cookie and metadata about the search. The important fields in this response are the search_cookie (a String) and search_completed_pct (an Int)
2nd request is a POST to https://api.com/sessions/results/ with the search_cookie appended to the URL. eg https://api.com/sessions/results/c601eeb7872b7+0
The response to the 2nd request will contain either:
The search results if the query has completed (aka search_completed_pct == 100)
Metadata about the progress of search, search_completed_pct is the progress of the search and will be between 0 and 100.
If the search is not complete, I want to make a request every 5 seconds until it's complete (aka search_completed_pct == 100)
I've found numerous posts here that are similar, many use Dispatch Groups and for loops, but that approach did not work for me. I've tried a while loop and had issues with variable scoping. Dispatch groups also didn't work for me. This smelled like the wrong way to go, but I'm not sure.
I'm looking for the proper design to make these recursive calls. Should I use delegates or are closures + loop the way to go? I've hit a wall and need some help.
The code below is the general idea of what I've tried (edited for clarity. No dispatch_groups(), error handling, json parsing, etc.)
Viewcontroller.swift
apiObj.sessionSearch(domain) { result in
    Log.info!.message("result: \(result)")
}
ApiObj.swift
func sessionSearch(domain: String, sessionCompletion: (result: SearchResult) -> ()) {
    // Make request to /search/ url
    let task = session.dataTaskWithRequest(request) { data, response, error in
        let searchCookie = parseCookieFromResponse(data!)
        // ********* pseudo code **************
        var progress: Int = 0
        var results = SearchResults()
        while (progress != 100) {
            // Make requests to /results/ until search is complete
            self.getResults(searchCookie) { searchResults in
                progress = searchResults.search_pct_complete
                if (progress == 100) {
                    sessionCompletion(searchResults)
                } else {
                    sleep(5 seconds)
                } //if
            } //self.getResults()
        } //while
        // ********* pseudo code ************
    } //session.dataTaskWithRequest(
    task.resume()
}

func getResults(cookie: String, completion: (searchResults: NSDictionary) -> ()) {
    let request = buildRequest((domain), url: NSURL(string: ResultsUrl)!)
    let session = NSURLSession.sharedSession()
    let task = session.dataTaskWithRequest(request) { data, response, error in
        let theResults = getJSONFromData(data!)
        completion(theResults)
    }
    task.resume()
}
Well first off, it seems weird that there is no API with a GET request which simply returns the result - even if this may take minutes. But, as you mentioned, you cannot change the API.
So, according to your description, we need to issue a request which effectively "polls" the server. We do this until we have retrieved a Search object which is completed.
So, a viable approach would be to purposely define the following functions and classes:
A protocol for the "Search" object returned from the server:
public protocol SearchType {
    var searchID: String { get }
    var isCompleted: Bool { get }
    var progress: Double { get }
    var result: AnyObject? { get }
}
A concrete struct or class is used on the client side.
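For example, such a concrete type could be a simple struct (how the fields map onto the server's JSON is of course an assumption):
// A sketch of a concrete client-side type conforming to SearchType.
struct Search: SearchType {
    let searchID: String      // the "search_cookie"
    let progress: Double      // "search_completed_pct", 0...100
    let result: AnyObject?    // the search results, once available
    var isCompleted: Bool { return progress >= 100 }
}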
An asynchronous function which issues a request to the server in order to create the search object (your #1 POST request):
func createSearch(completion: (SearchType?, ErrorType?) -> () )
Then another asynchronous function which fetches a "Search" object and potentially the result if it is complete:
func fetchSearch(searchID: String, completion: (SearchType?, ErrorType?) -> () )
Now, an asynchronous function which fetches the result for a certain "searchID" (your "search_cookie") - and internally implements the polling:
func fetchResult(searchID: String, completion: (AnyObject?, ErrorType?) -> () )
The implementation of fetchResult may now look as follows:
func fetchResult(searchID: String,
                 completion: (AnyObject?, ErrorType?) -> () ) {
    func poll() {
        fetchSearch(searchID) { (search, error) in
            if let search = search {
                if search.isCompleted {
                    completion(search.result!, nil)
                } else {
                    delay(1.0, f: poll)
                }
            } else {
                completion(nil, error)
            }
        }
    }
    poll()
}
This approach uses a local function, poll, to implement the polling feature. poll calls fetchSearch, and when that finishes it checks whether the search is complete. If not, it delays for a certain duration and then calls poll again. This looks like a recursive call, but actually it isn't, since poll has already finished when it is called again. A local function seems appropriate for this kind of approach.
The function delay simply waits for the specified number of seconds and then calls the provided closure. delay can be easily implemented in terms of dispatch_after or with a cancelable dispatch timer (we later need to implement cancellation).
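For example, a minimal, non-cancelable sketch of such a delay helper in terms of dispatch_after could look like this:
// Sketch only: waits `delay` seconds on the main queue, then calls `f`.
func delay(delay: Double, f: () -> ()) {
    let when = dispatch_time(DISPATCH_TIME_NOW, Int64(delay * Double(NSEC_PER_SEC)))
    dispatch_after(when, dispatch_get_main_queue(), f)
}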
I'm not showing how to implement createSearch and fetchSearch. These may be easily implemented using a third party network library or can be easily implemented based on NSURLSession.
Conclusion:
What might become a bit cumbersome is implementing error handling and cancellation, and also dealing with all the completion handlers. In order to solve this problem in a concise and elegant manner, I would suggest utilising a helper library which implements "Promises" or "Futures" - or trying to solve it with Rx.
For example a viable implementation utilising "Scala-like" futures:
func fetchResult(searchID: String) -> Future<AnyObject> {
    let promise = Promise<AnyObject>()
    func poll() {
        fetchSearch(searchID).map { search in
            if search.isCompleted {
                promise.fulfill(search.result!)
            } else {
                delay(1.0, f: poll)
            }
        }
    }
    poll()
    return promise.future!
}
You would start to obtain a result as shown below:
createSearch().flatMap { search in
    fetchResult(search.searchID).map { result in
        print(result)
    }
}.onFailure { error in
    print("Error: \(error)")
}
The above contains complete error handling. It does not yet contain cancellation. You really need to implement a way to cancel the request, otherwise the polling may not be stopped.
A solution implementing cancellation utilising a "CancellationToken" may look as follows:
func fetchResult(searchID: String,
                 cancellationToken ct: CancellationToken) -> Future<AnyObject> {
    let promise = Promise<AnyObject>()
    func poll() {
        fetchSearch(searchID, cancellationToken: ct).map { search in
            if search.isCompleted {
                promise.fulfill(search.result!)
            } else {
                delay(1.0, cancellationToken: ct) { ct in
                    if ct.isCancelled {
                        promise.reject(CancellationError.Cancelled)
                    } else {
                        poll()
                    }
                }
            }
        }
    }
    poll()
    return promise.future!
}
And it may be called:
let cr = CancellationRequest()
let ct = cr.token

createSearch(cancellationToken: ct).flatMap { search in
    fetchResult(search.searchID, cancellationToken: ct).map { result in
        // if we reach here, we got a result
        print(result)
    }
}.onFailure { error in
    print("Error: \(error)")
}
Later you can cancel the request as shown below:
cr.cancel()

How to read all lines in socket (swift)

I have a problem reading lines. When the server (the server code is in Java) sends more than one line, my socket cannot read all the lines.
I used this library https://github.com/swiftsocket/SwiftSocket
and socket read method code:
private func sendRequest(data: String, client: TCPClient?) -> (String?) {
    // It is used after connection
    if client != nil {
        // Send data (WE MUST ADD TO SENDING MESSAGE '\n' )
        let (isSuccess, errorMessage) = client!.send(str: "\(data)\n")
        if isSuccess {
            // Read response data
            let data = client!.read(1024*10)
            if let d = data {
                // Create String from response data
                if let str = NSString(bytes: d, length: d.count, encoding: NSUTF8StringEncoding) as? String {
                    return (data: str)
                } else {
                    return (data: nil)
                }
            } else {
                return (data: nil)
            }
        } else {
            print(errorMessage)
            return (data: nil)
        }
    } else {
        return (data: nil)
    }
}
I have changed it like this: let data = client!.read(1024*40), but it still cannot read everything (the received data is large). It reads some of the received data, like this:
338C6EAAD0740ED101860EFBA286A653F42793CF1FC7FB55D244C010941BDE3DCA540223B291639D1CD7285B4240B330EBC7C002003F957790749256D54EAC4DB7FC0AD5E2970FE951DFA0A1635A93DFCB031DBA642BA928B8327B661F9F4F22CA657AE803B25A208C23F72D6F934B95558108C1187F90E6D8DE13F9367534E7EC28DBA5AC6C8597800154033B63A3B2185DF68ABF67BB76AC6B593C52E5D7B0A5D7674BEC7C6AA9B4C57343C32C944FEB5D1C2D6C2A400D454196A0029C23DB30F1B6049423DC5BEE728E03C275F207639C25E226A38A23EAE04AD673132336D9E113FC6CA32DAD1D75191BCE40A281D40549C1D6FDD23BC5A38B472ED1EEB6BA1D80D00EF2A08F4729FD05329993A6AE58C34253660A77C20139C73FE0E4A68D9857E02C3F8589A61B22C4E26B3DA00098158FB6CE0C43F271ABC823BF9AD7DBEB32A4E9BE4E90C284E1FD633956F82EB5387DE2E8D7626C88900D183C7E1F683C27B8A654EE75017E897D11F2431A9AB4C0662CC91B3897D52E630A788A1A8D552ABB04916B5E52A1382DB2803A796A96ACC50C00913C650393AA919100E6477B7D88066FEB6C78D4F5853122AB6D7309540053B9DB98BEC0D518CD8BF41620506E1224FB0F8B240B7E9FD60649E703FA9E6A21E785BB0F646DB028F5C5E64697E41A857F6A2A459B2C1B070C244F7B6FD252FFF016CC0A4F0457711213C025E6165706DF6C35C6C38F53610373C3DE99DFE426102860E2D4CD1BF5F6B97346655F0E26103CD6BF37AFCE705D4D2F78F7F40C5316143725D0D7FB6647A92F98A42570F423D646DC2CA726ABB16ED6C62C
but not all of it. How can I fix this? Please advise me. Thanks
It is your responsibility to check what you received and how to process all the data received from your socket. Try this simple example of how to read line-delimited text from the network. You can easily adapt this code for use with your library, or use it as-is if you have a working socket handle ...
import Darwin

private var line = [UInt8]()
private var excess = [UInt8]()

private func readLine(inout lineBufer: [UInt8], inout excessBufer: [UInt8], var sockHandle: Int32, wait: Bool = true) -> Int {
    // (1) received bytes in one cycle and number of cycles to receive
    //     a whole line (line-delimited text)
    var received = 0
    var i = 0
    // (2) clear the buffer for the extra bytes read after the \n
    excessBufer.removeAll()
    var buffer = [UInt8](count: 64, repeatedValue: 0)
    // (3) read from network until at least one \n is found
    repeat {
        received = read(sockHandle, &buffer, buffer.count)
        if received == 0 {
            close(sockHandle)
            sockHandle = -1
            print("connection lost")
            return -1
        }
        if received < 0 {
            print("\treceived failed:", String.fromCString(strerror(errno))!)
            return -1
        }
        i = 0
        while i < received && buffer[i] != UInt8(ascii: "\n") { i++ }
        lineBufer.appendContentsOf(buffer[0..<i])
        // (4) now consume '\n'
        i++
        // (5) we have extra bytes for the next line if received - i > 0
        if i < received {
            excessBufer.appendContentsOf(buffer[i..<received])
            break // from loop
        }
        // (6) if no extra bytes and no whole line, then received - i < 0 (-1)
        //     if whole line and no extra bytes, then received - i = 0
    } while i != received
    return received
}

// (1) prepare buffers
line.removeAll()
line.appendContentsOf(excess)
// (2) read line-delimited text ( socket is your underlying socket handle )
readLine(&line, excessBufer: &excess, sockHandle: socket)

// now your line buffer consists of one line of text
// do something with the line and repeat the process as you need .....

// (1) prepare buffers
line.removeAll()
line.appendContentsOf(excess)
// (2) read line-delimited text
readLine(&line, excessBufer: &excess, sockHandle: socket)
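For example (assuming the text is UTF-8), turning the collected line into a String could look like this:
// Sketch: decode the collected bytes as UTF-8 (Swift 2 syntax, matching the code above).
if let text = String(bytes: line, encoding: NSUTF8StringEncoding) {
    print("received line:", text)
}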
Looking at the GitHub repository you mentioned, I'm under the impression that this is a relatively new and not well-supported project. Yes, it may provide a somewhat simplified interface to sockets, but it is unknown how buggy it is and what its future is. Instead of using it, one might be better off using well-supported Apple Objective-C frameworks. Please see
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/NetworkingTopics/Articles/UsingSocketsandSocketStreams.html
https://developer.apple.com/library/ios/documentation/NetworkingInternetWeb/Conceptual/NetworkingOverview/CommonPitfalls/CommonPitfalls.html
It is fairly easy to use the Foundation framework with Swift. Yes, you can definitely call the POSIX C networking API from Swift, but that is more work and is not recommended on iOS, as it doesn't activate the cellular radio in the device, as explained in the aforementioned Apple articles.
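For reference, opening a plain TCP connection with Foundation's stream API looks roughly like this (a sketch in the same Swift 2 era syntax as the code above; the host and port are placeholders):
import Foundation

// Sketch: open an input/output stream pair to a host (host/port are placeholders).
var inputStream: NSInputStream?
var outputStream: NSOutputStream?
NSStream.getStreamsToHostWithName("127.0.0.1", port: 9999,
                                  inputStream: &inputStream, outputStream: &outputStream)
if let input = inputStream, output = outputStream {
    input.scheduleInRunLoop(NSRunLoop.currentRunLoop(), forMode: NSDefaultRunLoopMode)
    output.scheduleInRunLoop(NSRunLoop.currentRunLoop(), forMode: NSDefaultRunLoopMode)
    input.open()
    output.open()
    // Assign an NSStreamDelegate and read in stream(_:handleEvent:) when HasBytesAvailable fires.
}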