Remote video stream is not being rendered to the UIView even though the IBOutlet is connected - swift

I can't seem to get the remote video stream to render properly to my UIView. I can hear both participants, but the video never renders, even though the IBOutlet is connected.
Any ideas why? Here's my code:
func rtcEngine(_ engine: AgoraRtcEngineKit, firstRemoteVideoDecodedOfUid uid: UInt, size: CGSize, elapsed: Int) {
    DispatchQueue.main.async {
        if (self.remoteVideo.isHidden) {
            self.remoteVideo.isHidden = false
        }
        self.agoraKit.muteLocalAudioStream(false)
        let videoCanvas = AgoraRtcVideoCanvas()
        videoCanvas.uid = 0
        videoCanvas.view = self.remoteVideo
        videoCanvas.renderMode = .adaptive
        self.agoraKit.setupRemoteVideo(videoCanvas)
    }
}

From your code, I see you are assigning the UID to be 0, which tells the engine to auto-assign one. Setting the UID to 0 is fine for the local video stream if you'd like, but for the remote stream you need to use the UID that was assigned to the remote user, which is passed into the callback method as its uid parameter.
Also, make sure you are implementing the delegate methods in an extension that adopts the AgoraRtcEngineDelegate protocol.
extension VideoChatViewController: AgoraRtcEngineDelegate {
    // Tutorial Step 5
    func rtcEngine(_ engine: AgoraRtcEngineKit, firstRemoteVideoDecodedOfUid uid: UInt, size: CGSize, elapsed: Int) {
        DispatchQueue.main.async {
            if (self.remoteVideo.isHidden) {
                self.remoteVideo.isHidden = false
            }
            self.agoraKit.muteLocalAudioStream(false)
            let videoCanvas = AgoraRtcVideoCanvas()
            videoCanvas.uid = uid
            videoCanvas.view = self.remoteVideo
            videoCanvas.renderMode = .adaptive
            self.agoraKit.setupRemoteVideo(videoCanvas)
        }
    }
}
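For completeness, the delegate also has to be registered on the engine, otherwise none of these callbacks fire. A minimal sketch (the App ID string is a placeholder):
func initializeAgoraEngine() {
    // Pass self as the delegate so the AgoraRtcEngineDelegate
    // callbacks above are actually invoked.
    agoraKit = AgoraRtcEngineKit.sharedEngine(withAppId: "YourAppID", delegate: self)
    agoraKit.enableVideo()
}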

Related

Streaming audio data Using Audio Queue Services drops audio after first few buffers

I've been trying to play audio data as it's received, but the implementation I started from had flaws. I've modified it somewhat, but there are almost no examples of streaming live audio data this way. The issue occurs when playing the beginning of the streamed data: it plays a few buffers, then when I monitor the queue calls, the AudioOutputCallback method goes rogue. It fires properly the first few times, then fires more times than the number of buffers I have allocated (about 6 or more). I've also tried calling the callback method manually; it runs, but it doesn't cycle through the 3 allocated buffers.
Some specifics: the data always arrives in 1000-byte chunks, never anything else, so the size is constant. The Audio Queue is only used for playback of streamed data from the network. 3 buffers are allocated, as Apple suggests, and they are reused; the implementation is supposed to act as a sort of circular buffer.
Here is my current implementation:
Initialization:
public func setupAudioQueuePlayback() {
    incomingData = NSMutableData()
    audioQueueBuffers = []
    isBuffersUsed = []
    var outputStreamDescription = createLPCMDescription()
    AudioQueueNewOutputWithDispatchQueue(&audioPlaybackQueue, &outputStreamDescription, 0, playbackQueue, AudioQueueOutputCallback)
    createAudioBuffers()
    AudioQueueStart(audioPlaybackQueue!, nil)
}
CreateBuffers method:
fileprivate func createAudioBuffers() {
    for _ in 0..<allocatedBuffersUponStart {
        var audioBuffer: AudioQueueBufferRef? = nil
        let osStatus = AudioQueueAllocateBuffer(audioPlaybackQueue!, UInt32(1000), &audioBuffer)
        if osStatus != noErr {
            print("Error allocating buffers!"); return
        } else {
            self.audioQueueBuffers.append(audioBuffer)
            self.isBuffersUsed.append(false)
            AudioQueueAllocateBuffer(audioPlaybackQueue!, UInt32(1000), &audioBuffer)
        }
    }
}
The method where the incoming data enters:
private func playFromData(_ data: Data) {
    playbackGroup.enter()
    playbackQueue.sync {
        incomingData.append(data)
        var bufferIndex = 0
        while true {
            if !isBuffersUsed[bufferIndex] {
                isBuffersUsed[bufferIndex] = true
                break
            } else {
                bufferIndex += 1
                if bufferIndex >= allocatedBuffersUponStart {
                    bufferIndex = 0
                }
            }
        }
        currentIndexGlobal = bufferIndex
        let bufferReference = audioQueueBuffers[bufferIndex]
        bufferReference?.pointee.mAudioDataByteSize = UInt32(incomingData.length)
        bufferReference?.pointee.mAudioData.advanced(by: 0).copyMemory(from: incomingData.bytes, byteCount: incomingData.length)
        AudioQueueEnqueueBuffer(audioPlaybackQueue!, bufferReference!, 0, nil)
        incomingData = NSMutableData()
        playbackGroup.leave()
    }
}
Callback Method (Not Global):
private func AudioQueueOutputCallback(aq: AudioQueueRef, buffer: AudioQueueBufferRef) {
    for index in 0..<allocatedBuffersUponStart {
        if isBuffersUsed[index] == true {
            isBuffersUsed[index] = false
        }
    }
}
If you're wondering how it's all being used:
func inSomeMethodOutsideThisFile() {
    audioService = AudioService.shared
    audioService.setupAudioQueuePlayback()
    dataManager.subscribe(toTopic: "\(deviceId)/LineOutAudio", qoS: .messageDeliveryAttemptedAtLeastOnce) { (audioData) in
        self.audioService.playFromData(audioData)
    }
}
I've tried other approaches, but this is the main one everything started from.
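For reference, the usual Audio Queue playback pattern is to refill and re-enqueue the exact buffer the callback hands back, rather than tracking "used" flags across all buffers. A minimal sketch of that pattern (an assumption, not the code above; pendingData is a hypothetical Data property appended to by playFromData(_:)):
private func handleFinishedBuffer(_ aq: AudioQueueRef, _ buffer: AudioQueueBufferRef) {
    // This runs on the dispatch queue passed to AudioQueueNewOutputWithDispatchQueue.
    let capacity = Int(buffer.pointee.mAudioDataBytesCapacity)
    let chunk = pendingData.prefix(capacity)      // at most one buffer's worth
    pendingData.removeFirst(chunk.count)
    if chunk.isEmpty {
        // No data available yet: enqueue silence so the queue keeps running.
        memset(buffer.pointee.mAudioData, 0, capacity)
        buffer.pointee.mAudioDataByteSize = UInt32(capacity)
    } else {
        chunk.withUnsafeBytes { src in
            buffer.pointee.mAudioData.copyMemory(from: src.baseAddress!, byteCount: chunk.count)
        }
        buffer.pointee.mAudioDataByteSize = UInt32(chunk.count)
    }
    AudioQueueEnqueueBuffer(aq, buffer, 0, nil)
}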

Screen Sharing using Twilio in iOS

I am using the Twilio iOS framework to connect to a group room.
Below is the code that runs when the connect-room button is tapped:
let recorder = RPScreenRecorder.shared()
recorder.isMicrophoneEnabled = false
recorder.isCameraEnabled = false

// The source produces either downscaled buffers with smoother motion, or an HD screen recording.
videoSource = ReplayKitVideoSource(isScreencast: true, telecineOptions: ReplayKitVideoSource.TelecineOptions.disabled)
screenTrack = LocalVideoTrack(source: videoSource!,
                              enabled: true,
                              name: "Screen")

recorder.startCapture(handler: { (sampleBuffer, type, error) in
    if error != nil {
        print("Capture error: ", error as Any)
        return
    }
    switch type {
    case RPSampleBufferType.video:
        self.videoSource?.processFrame(sampleBuffer: sampleBuffer)
        break
    case RPSampleBufferType.audioApp:
        break
    case RPSampleBufferType.audioMic:
        // We use `TVIDefaultAudioDevice` to capture and playback audio for conferencing.
        break
    }
}) { (error) in
    if error != nil {
        print("Screen capture error: ", error as Any)
    } else {
        print("Screen capture started.")
    }
}

if (accessToken == "TWILIO_ACCESS_TOKEN") {
    do {
        accessToken = try TokenUtils.fetchToken(url: tokenUrl)
    } catch {
        let message = "Failed to fetch access token"
        logMessage(messageText: message)
        return
    }
}

// Prepare local media which we will share with Room Participants.
self.prepareLocalMedia()

// Preparing the connect options with the access token that we fetched (or hardcoded).
let connectOptions = ConnectOptions(token: accessToken) { (builder) in
    // Use the local media that we prepared earlier.
    builder.audioTracks = self.localAudioTrack != nil ? [self.localAudioTrack!] : [LocalAudioTrack]()
    builder.videoTracks = self.localVideoTrack != nil ? [self.localVideoTrack!, self.screenTrack!] : [LocalVideoTrack]()
    // Use the preferred audio codec
    if let preferredAudioCodec = Settings.shared.audioCodec {
        builder.preferredAudioCodecs = [preferredAudioCodec]
    }
    // Use the preferred video codec
    if let preferredVideoCodec = Settings.shared.videoCodec {
        builder.preferredVideoCodecs = [preferredVideoCodec]
    }
    // Use the preferred encoding parameters
    if let encodingParameters = Settings.shared.getEncodingParameters() {
        builder.encodingParameters = encodingParameters
    }
    // Use the preferred signaling region
    if let signalingRegion = Settings.shared.signalingRegion {
        builder.region = signalingRegion
    }
    builder.roomName = self.roomTextField.text
}

// Connect to the Room using the options we provided.
room = TwilioVideoSDK.connect(options: connectOptions, delegate: self)
logMessage(messageText: "Attempting to connect to room \(String(describing: self.roomTextField.text))")
Once I am connected to the group room with a remote participant, I want to share my screen with them. To implement this feature I referred to the "ReplayKitExample" using the in-app capture method, but I haven't been able to get it working: the remote participant never sees the screen-share content, and nothing related to screen sharing happens with this code. I'm looking for input on how to implement sharing the screen with the remote participant.
It's happening because you are trying to send both the "cameraSource" and the "videoSource" data at the same time; you have to unpublish the "cameraSource" track before sending the "videoSource" track. Here's my code you can refer to:
// MARK: - Screen Sharing via ReplayKit
extension CallRoomViewController: RPScreenRecorderDelegate {

    func broadCastButtonTapped() {
        guard screenRecorder.isAvailable else {
            print("Not able to Broadcast")
            return
        }
        print("Can Broadcast")
        if self.videoSource != nil {
            self.stopConference()
        } else {
            self.startConference()
        }
    }

    func publishVideoTrack() {
        if let participant = self.room?.localParticipant,
           let videoTrack = self.localVideoTrack {
            participant.publishVideoTrack(videoTrack)
        }
    }

    func unpublishVideoTrack() {
        if let participant = self.room?.localParticipant,
           let videoTrack = self.localVideoTrack {
            participant.unpublishVideoTrack(videoTrack)
        }
    }

    func stopConference() {
        self.unpublishVideoTrack()
        self.localVideoTrack = nil
        self.videoSource = nil
        self.localVideoTrack = LocalVideoTrack(source: cameraSource!, enabled: true, name: "Camera")
        screenRecorder.stopCapture { (captureError) in
            if let error = captureError {
                print("Screen capture stop error: ", error as Any)
            } else {
                print("Screen capture stopped.")
                self.publishVideoTrack()
            }
        }
    }

    func startConference() {
        self.unpublishVideoTrack()
        self.localVideoTrack = nil
        // We are only using ReplayKit to capture the screen.
        // Use a LocalAudioTrack to capture the microphone for sharing audio in the room.
        screenRecorder.isMicrophoneEnabled = false
        // Use a LocalVideoTrack with a CameraSource to capture the camera for sharing camera video in the room.
        screenRecorder.isCameraEnabled = false
        // The source produces either downscaled buffers with smoother motion, or an HD screen recording.
        self.videoSource = ReplayKitVideoSource(isScreencast: true,
                                                telecineOptions: ReplayKitVideoSource.TelecineOptions.p60to24or25or30)
        self.localVideoTrack = LocalVideoTrack(source: videoSource!,
                                               enabled: true,
                                               name: "Screen")
        let videoCodec = Settings.shared.videoCodec ?? Vp8Codec()!
        let (_, outputFormat) = ReplayKitVideoSource.getParametersForUseCase(codec: videoCodec,
                                                                             isScreencast: true,
                                                                             telecineOptions: ReplayKitVideoSource.TelecineOptions.p60to24or25or30)
        self.videoSource?.requestOutputFormat(outputFormat)
        screenRecorder.startCapture(handler: { (sampleBuffer, type, error) in
            if error != nil {
                print("Capture error: ", error as Any)
                return
            }
            switch type {
            case RPSampleBufferType.video:
                self.videoSource?.processFrame(sampleBuffer: sampleBuffer)
                break
            case RPSampleBufferType.audioApp:
                break
            case RPSampleBufferType.audioMic:
                // We use `TVIDefaultAudioDevice` to capture and playback audio for conferencing.
                break
            default:
                print(error ?? "screenRecorder error")
            }
        }) { (error) in
            if error != nil {
                print("Screen capture error: ", error as Any)
            } else {
                print("Screen capture started.")
                self.publishVideoTrack()
            }
        }
    }
}
And you can connect to the room from viewDidLoad():
func connectToChatRoom() {
    // Configure access token either from server or manually.
    // If the default wasn't changed, try fetching from server.
    accessToken = self.callRoomDetail.charRoomAccessToken
    guard accessToken != "TWILIO_ACCESS_TOKEN" else {
        let message = "Failed to fetch access token"
        print(message)
        return
    }
    // Prepare local media which we will share with Room Participants.
    self.prepareLocalMedia()
    // Preparing the connect options with the access token that we fetched (or hardcoded).
    let connectOptions = ConnectOptions(token: accessToken) { (builder) in
        // The name of the Room where the Client will attempt to connect to. Please note that if you pass an empty
        // Room `name`, the Client will create one for you. You can get the name or sid from any connected Room.
        builder.roomName = self.callRoomDetail.chatRoomName
        // Use the local media that we prepared earlier.
        if let audioTrack = self.localAudioTrack {
            builder.audioTracks = [audioTrack]
        }
        if let videoTrack = self.localVideoTrack {
            builder.videoTracks = [videoTrack]
        }
        // Use the preferred audio codec
        if let preferredAudioCodec = Settings.shared.audioCodec {
            builder.preferredAudioCodecs = [preferredAudioCodec]
        }
        // Use the preferred video codec
        if let preferredVideoCodec = Settings.shared.videoCodec {
            builder.preferredVideoCodecs = [preferredVideoCodec]
        }
        // Use the preferred encoding parameters
        let videoCodec = Settings.shared.videoCodec ?? Vp8Codec()!
        let (encodingParams, _) = ReplayKitVideoSource.getParametersForUseCase(codec: videoCodec,
                                                                               isScreencast: true,
                                                                               telecineOptions: ReplayKitVideoSource.TelecineOptions.p60to24or25or30)
        builder.encodingParameters = encodingParams
        // Use the preferred signaling region
        if let signalingRegion = Settings.shared.signalingRegion {
            builder.region = signalingRegion
        }
        builder.isAutomaticSubscriptionEnabled = true
        builder.isNetworkQualityEnabled = true
        builder.networkQualityConfiguration = NetworkQualityConfiguration(localVerbosity: .minimal,
                                                                          remoteVerbosity: .minimal)
    }
    // Connect to the Room using the options we provided.
    room = TwilioVideoSDK.connect(options: connectOptions, delegate: self)
print( "Attempting to connect to room \(self.callRoomDetail.chatRoomName ?? ""))")
self.showRoomUI(inRoom: true)
}
You can get the ReplayKitVideoSource file and the other supporting files from the Twilio repository: https://github.com/twilio/video-quickstart-ios/tree/master/ReplayKitExample
I work at Twilio and I can confirm that you should be able to publish video tracks for both camera and screen at the same time without issue.
It is difficult to identify why this is not working for you without a completely functional example app.
However I have tested this using one of our reference apps and confirmed it is working. More details are here: https://github.com/twilio/video-quickstart-ios/issues/650#issuecomment-1178232542
Hopefully this is a useful example for how to publish both camera and screen video at the same time.
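For reference, a minimal sketch of offering both tracks when connecting (assuming cameraSource and videoSource are set up as in the code above; this is an illustration, not Twilio's exact sample code):
// Build one track per source and offer both to the Room.
let cameraTrack = LocalVideoTrack(source: cameraSource!, enabled: true, name: "Camera")
let screenTrack = LocalVideoTrack(source: videoSource!, enabled: true, name: "Screen")
let connectOptions = ConnectOptions(token: accessToken) { builder in
    builder.roomName = self.roomTextField.text
    builder.videoTracks = [cameraTrack!, screenTrack!]
}
room = TwilioVideoSDK.connect(options: connectOptions, delegate: self)
// Or, if the Room is already connected, publish the screen track later:
// room?.localParticipant?.publishVideoTrack(screenTrack!)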

Apparently random execution time with GKGraph.findpath() method in Swift

I have a pathfinder class in a SpriteKit game that I want to use to process every path request in the game. The class is stored in my SKScene and accessed from different parts of the game, always from the main thread. The pathfinder uses a GKGridGraph of a fairly large size (288 x 224). The class holds an array of requests that are processed one after another on each update() call from the main scene. Here is the code:
class PathFinder {
    var isLookingForPath = false
    var groundGraph: GKGridGraph<MyNode>?
    var queued: [PathFinderRequest] = []
    var thread: DispatchQoS.QoSClass = .userInitiated

    func generate(minPoint: CGPoint) {
        // generate the groundGraph grid
    }

    func update() {
        // called every frame
        if !self.isLookingForPath {
            findPath()
        }
    }

    func findPath(from start: TuplePosition, to end: TuplePosition, on layer: PathFinderLayer, callBack: PathFinderCallback) {
        // Generating request
        let id = String.randomString(length: 5)
        let request = PathFinderRequest(id: id, start: start, end: end, layer: layer, callback: callBack)
        // Append the request object at the end of the array
        queued.append(request)
    }

    func findPath() {
        self.isLookingForPath = true
        guard let request = queued.first else {
            isLookingForPath = false
            return
        }
        let layer = request.layer
        let callback = request.callback
        let start = request.start
        let end = request.end
        let id = request.id
        var graph = self.groundGraph
        queued.removeFirst()
        let findItem = DispatchWorkItem {
            if let g = graph, let sn = g.node(atGridPosition: start.toVec()), let en = g.node(atGridPosition: end.toVec()) {
                if let path = g.findPath(from: sn, to: en) as? [GKGridGraphNode], path.count > 0 {
                    // Here we have the path found
                    // it worked !
                }
            }
            // Once the findPath() method execution is over,
            // we reset the "flag" so we can call it once again from
            // the update() method
            self.isLookingForPath = false
        }
        // Execute the findPath() method in the chosen thread
        // asynchronously
        DispatchQueue.global(qos: thread).async(execute: findItem)
    }

    func drawPath(_ path: [GKGridGraphNode]) {
        // draw the path on the scene
    }
}
The code works quite well as it is. If I send random path requests within (x±10, y±10), the results come back to each object holding the callback pretty quickly, but occasionally one request randomly takes a huge amount of time (roughly 20 s compared to 0.001 s), and despite everything I tried I haven't been able to find out what happens. It's never the same path, never the same caller, never after a fixed amount of time. Here is a video of the issue: https://www.youtube.com/watch?v=-IYlLOQgJrQ
It happens sooner when many entities are requesting paths at once, but I can't figure out why. I suspect it has to do with the DispatchQueue async calls that I use to prevent the game from freezing.
With a delay on every call, the problem appears later, but it is still there:
DispatchQueue.global(qos: thread).asyncAfter(deadline: .now() + 0.1, execute: findItem)
When I profile what is taking so long to process, it is a submethod of the GKGridGraph class.
So I really don't know how to figure this out. I tried everything I could think of, but it always happens regardless of the delay, the number of entities, the threads used, and so on.
Thank you for your help!
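One variation worth trying (a sketch only, not a confirmed fix): funnel every path computation through a single serial queue so the GKGridGraph is never used from two threads at once, and flip isLookingForPath back only on the main thread:
private let pathWorkQueue = DispatchQueue(label: "pathfinder.work", qos: .userInitiated)

func findPath() {
    guard !isLookingForPath, let request = queued.first else { return }
    isLookingForPath = true
    queued.removeFirst()
    pathWorkQueue.async { [weak self] in
        guard let self = self, let graph = self.groundGraph else { return }
        if let sn = graph.node(atGridPosition: request.start.toVec()),
           let en = graph.node(atGridPosition: request.end.toVec()),
           let path = graph.findPath(from: sn, to: en) as? [GKGridGraphNode],
           !path.isEmpty {
            // Placeholder: hand `path` back through request.callback here.
            _ = path
        }
        DispatchQueue.main.async {
            self.isLookingForPath = false   // only touched from the main thread
        }
    }
}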

Unexpectedly unwrapping an optional to find a nil after an API call to Spotify

So I know this may be a bit specific, but I've been staring at my code and am unable to resolve this issue. Basically, I'm making a network call to Spotify to obtain a certain playlist and passing a number that ultimately determines how many songs I get back. The code is basically as follows:
// A network call is made just above to return somePlaylist
let playlist = somePlaylist as! SPTPartialPlaylist
var songs: [SPTPartialTrack] = []

// load in playlist to receive back songs
SPTPlaylistSnapshot.playlistWithURI(playlist.uri, session: someSession) { (error: NSError!, data: AnyObject!) in
    // cast the data into a correct format
    let playlistViewer = data as! SPTPlaylistSnapshot
    let playlist = playlistViewer.firstTrackPage
    // get the songs
    for _ in 1...numberOfSongs {
        let random = Int(arc4random_uniform(UInt32(playlist.items.count)))
        songs.append(playlist.items[random] as! SPTPartialTrack)
    }
}
The problem comes in the portion of code that initializes random. In maybe 1 in 20 calls to this function I unwrap a nil value for playlist.items.count, and I can't seem to figure out why. Maybe it's something I don't understand about API calls, or something else I'm failing to see, but I can't make sense of it.
Does anyone have any recommendations for addressing this issue, or for how to go about debugging it?
OK, after sleeping on it and working on it some more, I seem to have resolved the issue. Here's the error handling I implemented in my code:
if let actualPlaylist = playlist, actualItems = actualPlaylist.items {
    if actualItems.count == 0 {
        SongScraper.playlistHasSongs = false
        print("Empty playlist, loading another playlist")
        return
    }
    for _ in 1...numberOfSongs {
        let random = Int(arc4random_uniform(UInt32(actualItems.count)))
        songs.append(actualPlaylist.items[random] as! SPTPartialTrack)
    }
    completionHandler(songs: songs)
} else {
    print("Returned a nil playlist, loading another playlist")
    SongScraper.playlistHasSongs = false
    return
}
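The same checks written with guard read a little flatter (a sketch using the question's SDK types and the same completionHandler as above):
guard let actualPlaylist = playlist, let actualItems = actualPlaylist.items, actualItems.count > 0 else {
    SongScraper.playlistHasSongs = false
    print("Nil or empty playlist, loading another playlist")
    return
}
for _ in 1...numberOfSongs {
    let random = Int(arc4random_uniform(UInt32(actualItems.count)))
    songs.append(actualItems[random] as! SPTPartialTrack)
}
completionHandler(songs: songs)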

kAudioUnitType_MusicEffect as AVAudioUnit

I'd like to use my kAudioUnitType_MusicEffect AU in an AVAudioEngine graph. So I try to call:
[AVAudioUnitMIDIInstrument instantiateWithComponentDescription:desc options:kAudioComponentInstantiation_LoadInProcess completionHandler:
but that just yields a normal AVAudioUnit, so the MIDI selectors (like -[AVAudioUnit sendMIDIEvent:data1:data2:]) are unrecognized. It seems AVAudioUnitMIDIInstrument instantiateWithComponentDescription only works with kAudioUnitType_MusicDevice.
Any way to do this? (Note: OS X 10.11)
Make a subclass and call instantiateWithComponentDescription from its init.
Gory details and a GitHub project are in this blog post:
http://www.rockhoppertech.com/blog/multi-timbral-avaudiounitmidiinstrument/#avfoundation
This uses Swift and kAudioUnitSubType_MIDISynth but you can see how to do it.
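Applied to the music-effect case in the question, the same idea might look like the sketch below; the subtype and manufacturer codes are placeholders for your AU's actual values:
class MyMusicEffectUnit: AVAudioUnitMIDIInstrument {
    override init() {
        var desc = AudioComponentDescription()
        desc.componentType = kAudioUnitType_MusicEffect
        desc.componentSubType = 0x64656D6F        // placeholder four-char code
        desc.componentManufacturer = 0x64656D6F   // placeholder four-char code
        desc.componentFlags = 0
        desc.componentFlagsMask = 0
        // Subclassing AVAudioUnitMIDIInstrument keeps the MIDI selectors
        // (e.g. sendMIDIEvent:data1:data2:) available even though the
        // component type is kAudioUnitType_MusicEffect.
        super.init(audioComponentDescription: desc)
    }
}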
The following works as well. It's a subclass: you add it to the engine and route the signal through it.
class MyAVAudioUnitDistortionEffect: AVAudioUnitEffect {
    override init() {
        var description = AudioComponentDescription()
        description.componentType = kAudioUnitType_Effect
        description.componentSubType = kAudioUnitSubType_Distortion
        description.componentManufacturer = kAudioUnitManufacturer_Apple
        description.componentFlags = 0
        description.componentFlagsMask = 0
        super.init(audioComponentDescription: description)
    }

    func setFinalMix(finalMix: Float) {
        let status = AudioUnitSetParameter(
            self.audioUnit,
            AudioUnitPropertyID(kDistortionParam_FinalMix),
            AudioUnitScope(kAudioUnitScope_Global),
            0,
            finalMix,
            0)
        if status != noErr {
            print("error \(status)")
        }
    }