I'm working on an iOS app written in Swift and I want to be able to detect when USB flash drives are connected to or disconnected from the device. How can I do this in my app?
I've looked into using the External Accessory framework, but I'm not sure how to get started. Can anyone provide some guidance or point me to some resources that can help me implement this feature in my app?
Any help would be greatly appreciated!
let keys: [URLResourceKey] = [
.volumeNameKey, .volumeIsRemovableKey,
.volumeIsEjectableKey, .volumeAvailableCapacityKey,
.volumeTotalCapacityKey, .volumeUUIDStringKey,
.volumeIsBrowsableKey, .volumeIsLocalKey, .isVolumeKey
]
if let urls = FileManager.default.mountedVolumeURLs(includingResourceValuesForKeys: keys) {
    print(urls)
}
This code works on macOS, for example; I get a list of volumes.
However, on iOS it just returns nil.
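For reference, a minimal sketch of the External Accessory route mentioned above, observing accessory connect/disconnect notifications. Note that this only reports MFi accessories whose protocol strings are declared under UISupportedExternalAccessoryProtocols in Info.plist, so generic USB mass-storage drives will not show up here:

import ExternalAccessory

final class AccessoryMonitor {
    private var observers: [NSObjectProtocol] = []

    func start() {
        // Ask the system to post connect/disconnect notifications for MFi accessories.
        EAAccessoryManager.shared().registerForLocalNotifications()
        let center = NotificationCenter.default
        observers.append(center.addObserver(forName: .EAAccessoryDidConnect, object: nil, queue: .main) { note in
            if let accessory = note.userInfo?[EAAccessoryKey] as? EAAccessory {
                print("Connected: \(accessory.name)")
            }
        })
        observers.append(center.addObserver(forName: .EAAccessoryDidDisconnect, object: nil, queue: .main) { note in
            if let accessory = note.userInfo?[EAAccessoryKey] as? EAAccessory {
                print("Disconnected: \(accessory.name)")
            }
        })
        // Accessories already attached at launch are listed here.
        print(EAAccessoryManager.shared().connectedAccessories)
    }

    func stop() {
        EAAccessoryManager.shared().unregisterForLocalNotifications()
        observers.forEach { NotificationCenter.default.removeObserver($0) }
        observers.removeAll()
    }
}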
I am developing a Cocoa app that uses TwilioVideo. Since the TwilioVideo SDK is built only for iOS and does not work on macOS, I wanted to take an approach that uses Twilio through a shared web link and load it in a WKWebView in Cocoa. I start with
let pRoom = URL(string: "https://meetingroomfortwilio.com/room/roomid")!
let request = URLRequest(url: pRoom)
videoWebView.load(request)
in the viewDidLoad method.
It opened the communication web page, but with errors like those below.
I searched for the errors and learned about these problems from Twilio's official API errors page:
The Client may not be using a supported WebRTC implementation.
The Client may not have the necessary resources to create or apply a new media description.
So, good or bad, I have to implement a video call system in Cocoa for the app.
How can I implement a supported WebRTC implementation for my app in Cocoa?
What would you do to build an app with WebRTC (video & voice calls)?
Is opening Safari from the app a good approach for this, and can I track the browser I opened for the video/voice call, for things like success and errors?
I would like to learn how to solve this, perhaps in different ways.
Thanks in advance!
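One possible direction, if you stay with the WKWebView route: newer macOS releases let WKWebView run WebRTC pages, and the page's camera/microphone request can be granted through WKUIDelegate. A minimal sketch, assuming macOS 12+, the room URL from above, and the usual NSCameraUsageDescription/NSMicrophoneUsageDescription entries plus camera/microphone sandbox entitlements:

import Cocoa
import WebKit

@available(macOS 12.0, *)
class VideoCallViewController: NSViewController, WKUIDelegate {
    var videoWebView: WKWebView!

    override func loadView() {
        let configuration = WKWebViewConfiguration()
        videoWebView = WKWebView(frame: .zero, configuration: configuration)
        videoWebView.uiDelegate = self
        view = videoWebView
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Room URL from the question; substitute your own Twilio room link.
        let roomURL = URL(string: "https://meetingroomfortwilio.com/room/roomid")!
        videoWebView.load(URLRequest(url: roomURL))
    }

    // Grant the page's getUserMedia request so its WebRTC implementation
    // can open the camera and microphone (macOS 12+).
    func webView(_ webView: WKWebView,
                 requestMediaCapturePermissionFor origin: WKSecurityOrigin,
                 initiatedByFrame frame: WKFrameInfo,
                 type: WKMediaCaptureType,
                 decisionHandler: @escaping (WKPermissionDecision) -> Void) {
        decisionHandler(.grant)
    }
}

Opening Safari instead would hand the call off entirely, but you lose visibility into success or error states; keeping it in a WKWebView lets you observe navigation and permission callbacks yourself.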
I want to get data from UserDefaults to use in an Apple Watch app. I'm currently using WatchConnectivity, but the session is only set up in viewDidLoad, so the phone only sends the data to the watch when the iPhone app is loaded (viewDidLoad). To solve this, I want to use an App Group's UserDefaults, but it doesn't work when I try to load the data in override func awake(withContext context: Any?) like this.
let loadedButtonList = appGroupUserDefaults.object(forKey: "Buttontitles")
if let buttonTitles = loadedButtonList as? [String] {
    // do something
}
It seems like loadedButtonList is nil. Can't I use an App Group's UserDefaults in this watchOS version? And does anyone know a way to share data like this without using WatchConnectivity?
You can't use App Groups to share data between the phone and the watch since watchOS 3.
You can use updateApplicationContext(_:) to guarantee the data will be passed to the Apple Watch.
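A minimal sketch of that approach, reusing the "Buttontitles" key from the question; the sending class belongs in the iOS app target, and the didReceiveApplicationContext callback is what the watch extension's WCSessionDelegate would implement:

import WatchConnectivity

// iPhone side: push the latest button titles with updateApplicationContext,
// which delivers the most recent dictionary even if the watch app is not
// running at the moment the phone sends it.
final class PhoneSessionManager: NSObject, WCSessionDelegate {
    static let shared = PhoneSessionManager()

    func activate() {
        guard WCSession.isSupported() else { return }
        WCSession.default.delegate = self
        WCSession.default.activate()
    }

    func sendButtonTitles(_ titles: [String]) {
        do {
            try WCSession.default.updateApplicationContext(["Buttontitles": titles])
        } catch {
            print("Failed to update application context: \(error)")
        }
    }

    // Minimal delegate conformance (the inactive/deactivate methods are iOS-only).
    func session(_ session: WCSession, activationDidCompleteWith activationState: WCSessionActivationState, error: Error?) {}
    func sessionDidBecomeInactive(_ session: WCSession) {}
    func sessionDidDeactivate(_ session: WCSession) {}

    // Watch side: the extension's WCSessionDelegate receives the same dictionary
    // here and can cache it locally before awake(withContext:) reads it.
    func session(_ session: WCSession, didReceiveApplicationContext applicationContext: [String: Any]) {
        if let titles = applicationContext["Buttontitles"] as? [String] {
            UserDefaults.standard.set(titles, forKey: "Buttontitles")
        }
    }
}

On the watch, awake(withContext:) can then read the cached value from UserDefaults.standard instead of waiting for the phone's viewDidLoad.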
let serverConfig = UserDefaults.standard.dictionary(forKey: "com.apple.configuration.managed")
print("serverConfig count: \(String(describing: serverConfig?.count))")
The above code always returns nil on iOS 11.
I want to transfer some data to the managed application from an MDM portal via managed configuration, as per the Apple developer documentation, but nothing gets updated in it.
I don't know what I'm doing wrong. Please help me out with this.
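For reference, a common pattern is to read the "com.apple.configuration.managed" dictionary and then re-read it whenever UserDefaults changes, since the MDM can deliver or update the configuration after launch; a minimal sketch, with only the key name taken from the code above and everything else illustrative:

import Foundation

// The managed configuration pushed by MDM shows up under this key in the
// app's standard UserDefaults; it stays nil until the MDM has actually
// delivered a configuration dictionary for this bundle identifier.
final class ManagedConfigReader {
    static let managedConfigKey = "com.apple.configuration.managed"

    private var observer: NSObjectProtocol?

    func current() -> [String: Any]? {
        return UserDefaults.standard.dictionary(forKey: ManagedConfigReader.managedConfigKey)
    }

    // The configuration can arrive (or change) after launch, so re-read it
    // whenever UserDefaults changes instead of only once at startup.
    func startObserving(onChange: @escaping ([String: Any]) -> Void) {
        observer = NotificationCenter.default.addObserver(
            forName: UserDefaults.didChangeNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            if let config = self?.current() {
                onChange(config)
            }
        }
    }

    deinit {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }
}

If this still returns nil on a device, it usually means the MDM has not actually pushed a configuration for this bundle identifier.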
Why doesn't the Watson Text-To-Speech service on Bluemix work on mobile devices? Is this a common issue for output-stream data coming from the server? Thanks!
Edit: Sorry, somebody has changed my question entirely. I am talking about Text-to-Speech.
Text to Speech works on Android, and there is an SDK you can use:
http://watson-developer-cloud.github.io/java-wrapper/
For example, to get all the voices you can do:
import com.ibm.watson.developer_cloud.text_to_speech.v1.TextToSpeech;
import com.ibm.watson.developer_cloud.text_to_speech.v1.model.VoiceSet;
TextToSpeech service = new TextToSpeech();
service.setUsernameAndPassword("<username>", "<password>");
VoiceSet voices = service.getVoices();
System.out.println(voices);
where username and password are the credentials you get in Bluemix when you bind the service. You can learn more about the Text to Speech methods by looking at the javadocs here.
It was released today and I made it, so let me know if you find any issues.
The Watson Speech-To-Text service is a REST API. You will need to call the REST API from your mobile app. For more info about the REST API, check out the API docs.
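Since the question (per the edit) is about Text to Speech, a minimal sketch of calling the Bluemix-era synthesize REST endpoint directly from Swift with URLSession might look like this; the username/password placeholders are the credentials from the bound service, and the stream.watsonplatform.net URL is the classic endpoint of that era:

import Foundation
import AVFoundation

// Placeholder credentials from the bound Bluemix service.
let ttsUsername = "<service User name>"
let ttsPassword = "<password>"

var ttsPlayer: AVAudioPlayer?

func synthesize(_ text: String) {
    let credentials = Data("\(ttsUsername):\(ttsPassword)".utf8).base64EncodedString()

    // Ask for WAV so the returned bytes can be handed to AVAudioPlayer.
    var components = URLComponents(string: "https://stream.watsonplatform.net/text-to-speech/api/v1/synthesize")!
    components.queryItems = [URLQueryItem(name: "text", value: text)]

    var request = URLRequest(url: components.url!)
    request.setValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")
    request.setValue("audio/wav", forHTTPHeaderField: "Accept")

    URLSession.shared.dataTask(with: request) { data, _, error in
        guard let data = data, error == nil else {
            print("Synthesis failed: \(String(describing: error))")
            return
        }
        ttsPlayer = try? AVAudioPlayer(data: data)
        ttsPlayer?.prepareToPlay()
        ttsPlayer?.play()
    }.resume()
}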
If you want to use Watson Text-To-Speech on iOS devices, it might be handy to use the Watson Developer Cloud SDK for iOS. You might check out the example on my blumareks.blogspot post; just build an app in Xcode 7.3+:
Step 1. Use Carthage to get all the dependencies:
(create a file named Cartfile in the project root directory and run carthage update --platform iOS)
$ cat > Cartfile
# Cartfile contents
github "watson-developer-cloud/ios-sdk"
and then you need to add the frameworks to the Xcode project; check "Step 3: Adding the SDK to the Xcode project" on my blumareks.blogspot post.
Step 2. Add the code to call the Watson TTS service and play the result with AVFoundation.
- Do not forget to add the Watson TTS service in Bluemix.net and get the credentials from it:
{
"credentials": {
"url": "https://stream.watsonplatform.net/text-to-speech/api",
"username": "<service User name>",
"password": "<password>"
}
}
And the code is straightforward:
import UIKit
// adding Watson Text to Speech
import WatsonDeveloperCloud
// adding AVFoundation
import AVFoundation

class ViewController: UIViewController {

    @IBOutlet weak var speakText: UITextField!

    override func viewDidLoad() {...}
    override func didReceiveMemoryWarning() {...}

    @IBAction func speakButtonPressed(sender: AnyObject) {
        NSLog("speak button pressed, text to say: " + speakText.text!)
        // adding Watson service
        let service = TextToSpeech(username: "<service User name>", password: "<password>")
        service.synthesize(speakText.text!) { (data, error) in
            do {
                let audioPlayer = try AVAudioPlayer(data: data!)
                audioPlayer.prepareToPlay()
                audioPlayer.play()
                sleep(10) // the thread needs to live long enough to say your text
            } catch {
                NSLog("something went terribly wrong")
            }
        }
    }
}
It is unclear whether you are asking about Speech to Text or the reverse. Speech to Text is covered in most of the answers above and can be referenced on the Watson site:
http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/speech-to-text.html
The Speech to Text service converts the human voice into the written word. This easy-to-use service uses machine intelligence to combine information about grammar and language structure with knowledge of the composition of the audio signal to generate a more accurate transcription. The transcription is continuously sent back to the client and retroactively updated as more speech is heard. Recognition models can be trained for different languages, as well as for specific domains.
If you look at this GitHub project, https://github.com/FarooqMulla/BluemixExample/tree/master, which uses the old SDK, there is an example that uses the real-time Speech to Text API, sending audio packets to Bluemix and receiving the transcribed string back in real time.
Beware: as of 1/22/16, the new Swift-based SDK is broken for this particular functionality.
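For completeness, a minimal sketch of calling the plain HTTP recognize endpoint from Swift (the real-time WebSocket interface used in the project above is separate); the credentials and the WAV file path are placeholders:

import Foundation

let sttUsername = "<service User name>"
let sttPassword = "<password>"
let recordingURL = URL(fileURLWithPath: "/path/to/recording.wav")

func transcribe() {
    let credentials = Data("\(sttUsername):\(sttPassword)".utf8).base64EncodedString()

    // Classic Bluemix-era HTTP recognize endpoint; the request body is the raw audio.
    var request = URLRequest(url: URL(string: "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize")!)
    request.httpMethod = "POST"
    request.setValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")
    request.setValue("audio/wav", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? Data(contentsOf: recordingURL)

    URLSession.shared.dataTask(with: request) { data, _, error in
        guard let data = data, error == nil else {
            print("Recognition failed: \(String(describing: error))")
            return
        }
        // The transcript lives under results[].alternatives[].transcript in the JSON response.
        if let json = try? JSONSerialization.jsonObject(with: data) {
            print(json)
        }
    }.resume()
}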