How to calculate voice and data usage in a BlackBerry 10 application - blackberry-10

Hi, I am developing an application on the BlackBerry 10 platform that will calculate the data usage and voice usage of the user's device over a particular duration.
There are some feasibility checks that need to be done for this, which are as follows:
Does the BB10 API support data usage calculation? If yes, can I differentiate 3G/cellular data from WiFi data? If so, how can I achieve this?
How can I calculate voice usage in a BB10 application? Voice usage is simply the total duration of all calls that happened within a particular timespan.
Is there any API BB10 provides through which I can check whether the device is currently roaming?
Please let me know if this can be done in a BB10 application.

Does the BB10 API support data usage calculation?
Yes, there are a few APIs for this.
Can I differentiate 3G/cellular data from WiFi data?
Yep, you can.
1) Add the following line to your .pro file:
LIBS += -lbbdevice
2) Make sure you include:
#include <bb/device/NetworkDataUsage>
3) Getting the data usage for the cellular network only:
bb::device::NetworkDataUsage *nduCell = new bb::device::NetworkDataUsage("cellular0");
nduCell->update();
quint64 bytesSent = nduCell->bytesSent();
quint64 bytesReceived = nduCell->bytesReceived();
4) Getting the data usage for WiFi only:
bb::device::NetworkDataUsage *nduWifi = new bb::device::NetworkDataUsage("tiw_sta0");
nduWifi->update();
quint64 bytesSent = nduWifi->bytesSent();
quint64 bytesReceived = nduWifi->bytesReceived();
That will give you the data usage since the device was started.
You will need to call ndu->update() regularly to get the most recent data usage statistics.
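For example, a minimal polling sketch with a QTimer (the one-second interval, the class name MyClass, and the slot name pollUsage() are my own choices, not from the original answer):
// In your class setup: poll the counters once per second
m_ndu = new bb::device::NetworkDataUsage("cellular0", this);
m_timer = new QTimer(this);
connect(m_timer, SIGNAL(timeout()), this, SLOT(pollUsage()));
m_timer->start(1000);

// The slot refreshes the statistics and reads the counters
void MyClass::pollUsage()
{
    m_ndu->update();
    qDebug() << "sent:" << m_ndu->bytesSent()
             << "received:" << m_ndu->bytesReceived();
}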
Extra Info:
Changing the parameter of the NetworkDataUsage constructor changes the interface it monitors:
cellular0 == Cellular
tiw_sta0 == Wifi
ecm0 == USB
To find out which interfaces are available on your device:
1) Add the following line to your .pro file:
QT += network
2) Make sure you include:
#include <QNetworkInterface>
3) Displaying the available interfaces:
QList<QNetworkInterface> interfaces = QNetworkInterface::allInterfaces();
for (int i = 0; i < interfaces.size(); i++) {
    qDebug() << QString::number(i) + ": " + interfaces.value(i).humanReadableName();
}
How can I calculate voice usage in a BB10 application? Voice usage is nothing but the duration of all calls that happened within a particular timespan.
This can be done using the Phone class.
There is a signal, void callUpdated(const bb::system::phone::Call &call), from which you can tell when an incoming call is received or an outgoing call is initiated.
Combining this with a timer class, we can calculate the voice usage of the device. (This approach is not tested; see the sketch below.)
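As a rough, untested sketch of that idea (the slot, the member variables, and the use of QElapsedTimer are my own; the CallState values are from the BB10 docs as I remember them, so verify against the SDK):
// In the constructor: watch for call state changes
m_phone = new bb::system::phone::Phone(this);
connect(m_phone, SIGNAL(callUpdated(const bb::system::phone::Call&)),
        this, SLOT(onCallUpdated(const bb::system::phone::Call&)));

// Accumulate the duration of each call into a running total
void MyClass::onCallUpdated(const bb::system::phone::Call &call)
{
    if (call.callState() == bb::system::phone::CallState::Connected) {
        m_callTimer.start();                        // call answered: start timing
    } else if (call.callState() == bb::system::phone::CallState::Disconnected) {
        if (m_callTimer.isValid())
            m_totalCallMs += m_callTimer.elapsed(); // add this call's duration
        m_callTimer.invalidate();                   // m_callTimer is a QElapsedTimer
    }
}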

Related

Using multiple audio devices simultaneously on OSX

My aim is to write an audio app for low-latency realtime audio analysis on OSX. This will involve connecting to one or more USB interfaces and taking specific channels from these devices.
I started with the Learning Core Audio book, writing this in C. As I went down this path, it came to light that a lot of the old frameworks have been deprecated. It appears that the majority of what I would like to achieve can be written using AVAudioEngine and connected AVAudioUnits, digging down to the Core Audio level only for lower-level things like configuring the hardware devices.
I am confused here as to how to access two devices simultaneously. I do not want to create an aggregate device, as I would like to treat the devices individually.
Using Core Audio, I can list the audio device IDs for all devices and change the default system output device, as here (and can do the input device using similar methods). However, this only gives me one physical device, and it will always track the device selected in System Preferences.
static func setOutputDevice(newDeviceID: AudioDeviceID) {
    let propertySize = UInt32(MemoryLayout<UInt32>.size)
    var deviceID = newDeviceID
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: AudioObjectPropertySelector(kAudioHardwarePropertyDefaultOutputDevice),
        mScope: AudioObjectPropertyScope(kAudioObjectPropertyScopeGlobal),
        mElement: AudioObjectPropertyElement(kAudioObjectPropertyElementMaster))
    AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject), &propertyAddress, 0, nil, propertySize, &deviceID)
}
I then found that kAudioUnitSubType_HALOutput is the way to go for targeting a specific, static device, which is only accessible through this property. I can create a component of this type using:
var outputHAL = AudioComponentDescription(componentType: kAudioUnitType_Output, componentSubType: kAudioUnitSubType_HALOutput, componentManufacturer: kAudioUnitManufacturer_Apple, componentFlags: 0, componentFlagsMask: 0)
let component = AudioComponentFindNext(nil, &outputHAL)
guard component != nil else {
    print("Can't find output unit")
    exit(-1)
}
However, I am confused about how you create a description of this component and then find the next device that matches the description. Is there a property where I can select the audio device ID and link the AUHAL to it?
I also cannot figure out how to assign an AUHAL to an AVAudioEngine. I can create a node for the HAL but cannot attach it to the engine. Finally, is it possible to create multiple kAudioUnitSubType_HALOutput components and feed these into the mixer?
I have been trying to research this for the last week, but am nowhere closer to the answer. I have read up on channel mapping and everything I need to know further down the line, but at this level, working with the audio at a lower level seems pretty undocumented, especially when using Swift.
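For what it's worth, the property that ties an AUHAL instance to one physical device is kAudioOutputUnitProperty_CurrentDevice, set in the global scope on element 0. A minimal sketch under that assumption, reusing the component found above; myDeviceID is assumed to come from your device enumeration. Creating one AUHAL instance per device is the usual way to drive several devices without an aggregate:
// Instantiate the AUHAL from the component found earlier
var audioUnit: AudioUnit?
AudioComponentInstanceNew(component!, &audioUnit)

// Bind this instance to a specific AudioDeviceID
var deviceID: AudioDeviceID = myDeviceID // assumed obtained elsewhere
AudioUnitSetProperty(audioUnit!,
                     kAudioOutputUnitProperty_CurrentDevice,
                     kAudioUnitScope_Global,
                     0,
                     &deviceID,
                     UInt32(MemoryLayout<AudioDeviceID>.size))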

Adafruit I2S MEMS microphone not working with continuous speech recognition (Google Cloud Speech API)

I am using this library on a Raspberry Pi 3 with Raspbian and an Adafruit I2S MEMS microphone. I can get the I2S mic working with the Raspberry Pi, and it works perfectly for normal recording, but it was not working when using speech_recognition with the Google Cloud Speech API. So I did some trickery, and it now works perfectly with microphone_recognition.py.
The trick I used is in
microphone_recognition.py
r = sr.Recognizer()
with sr.Microphone(device_index = 2, sample_rate = 48000) as source:
    print("Say something!")
    audio = r.record(source, duration = 5)
speech_recognition/__init__.py
self.device_index = device_index
self.format = self.pyaudio_module.paInt32
and it worked perfectly. Output:
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM bluealsa
connect(2) call to /tmp/jack-1000/default/jack_0 failed (err=No such
file or directory) attempt to connect to server failed Say something!
Google Cloud Speech thinks you said hello
But when trying background_listening.py:
import time
import speech_recognition as sr

# this is called from the background thread
def callback(recognizer, audio):
    GOOGLE_KEY = r"""{ MY KEY }"""
    # received audio data, now we'll recognize it using Google Speech Recognition
    try:
        print("Google Speech Recognition thinks you said " + recognizer.recognize_google_cloud(audio, credentials_json=GOOGLE_KEY))
    except sr.UnknownValueError:
        print("Google Speech Recognition could not understand audio")
    except sr.RequestError as e:
        print("Could not request results from Google Speech Recognition service; {0}".format(e))

r = sr.Recognizer()
with sr.Microphone(device_index=2, sample_rate=48000) as source:
    r.adjust_for_ambient_noise(source)  # we only need to calibrate once, before we start listening

# start listening in the background (note that we don't have to do this inside a `with` statement)
stop_listening = r.listen_in_background(source, callback)
# `stop_listening` is now a function that, when called, stops background listening

# do some unrelated computations for 5 seconds
for _ in range(50): time.sleep(0.1)  # we're still listening even though the main thread is doing other things

# calling this function requests that the background listener stop listening
stop_listening(wait_for_stop=False)

# do some more unrelated things
while True: time.sleep(0.1)
ALSA lib pcm.c:2495:(snd_pcm_open_noupdate) Unknown PCM bluealsa
connect(2) call to /tmp/jack-1000/default/jack_0 failed (err=No such
file or directory) attempt to connect to server failed
I am using the same settings in speech_recognition/__init__.py, but it is not giving the output.
Addition to the question, after debugging:
I stay in the loop given below, never move forward, and never get to the callback(recognizer, audio). I am using self.format = self.pyaudio_module.paInt32 in __init__.py, and the code below is also from the same file.
Some debugging:
print(format(energy))
print(format(self.energy_threshold))
if energy > self.energy_threshold:
    print("inside energy")
    print(format(energy))
    print(format(self.energy_threshold))
    break

# dynamically adjust the energy threshold using asymmetric weighted average
if self.dynamic_energy_threshold:
    print(" inside dynamic_energy_threshold")
    damping = self.dynamic_energy_adjustment_damping ** seconds_per_buffer
    target_energy = energy * self.dynamic_energy_ratio
    self.energy_threshold = self.energy_threshold * damping + target_energy * (1 - damping)
    print(format(self.energy_threshold))
    print("end dynamic_energy_threshold")
102982050
117976102.818 - the energy threshold at the beginning; it is always higher than the energy, so the loop never breaks, even after speaking
inside dynamic_energy_threshold
119548935.608
end dynamic_energy_threshold
inside listen -6.1.1 97861662
119548935.608
inside dynamic_energy_threshold
120722993.56
end dynamic_energy_threshold
inside listen -6.1.1 84062664
120722993.56
Here I tried setting the energy threshold equal to the energy to break out of the loop:
print(format(energy))
print(format(self.energy_threshold))
self.energy_threshold = float(energy)
if energy >= self.energy_threshold:
    print("inside energy")
    print(format(energy))
    print(format(self.energy_threshold))
    break
Output:
105836705
116487614.952 - this is the energy threshold, which is always high
inside energy
105836705
105836705.0
So I did a workaround.
Solution:
In speech_recognition/__init__.py (the library):
In the Microphone.__init__() method:
self.format = self.pyaudio_module.paInt32
In the Recognizer.adjust_for_ambient_noise() method:
energy = audioop.rms(buffer, source.SAMPLE_WIDTH) / 1000000
And it is working really well, but now I have to find a way to stop sending requests to the Google Speech API when there is no speech.
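One possible approach (my own sketch, not from the original post): measure the RMS energy of each captured chunk in the background callback and skip the API call when it falls below a silence threshold. SILENCE_RMS is a made-up constant you would have to tune (with 32-bit samples the raw values are very large), and GOOGLE_KEY is assumed to be defined as above:
import audioop

SILENCE_RMS = 300  # hypothetical threshold; tune for your mic and sample width

def callback(recognizer, audio):
    # measure the loudness of the raw PCM before paying for an API call
    rms = audioop.rms(audio.get_raw_data(), audio.sample_width)
    if rms < SILENCE_RMS:
        return  # treat as silence: no request is sent to the Google Speech API
    try:
        print("You said " + recognizer.recognize_google_cloud(audio, credentials_json=GOOGLE_KEY))
    except sr.UnknownValueError:
        print("Could not understand audio")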

Add Heart rate measurement service to iPhone as peripheral

I'm using Apple's BTLE Transfer sample to emulate the iPhone as a peripheral.
My goal is to simulate a heart rate monitor that uses the heart rate measurement profile.
(I know how to generate the data, but I need to define the service on the peripheral side.)
I already have code on the other side to collect data from BLE heart rate monitors.
I need some guidance on how to define the heart rate service and its characteristics (on the peripheral side).
I've also seen the use of a specific service UUID (180D) and some characteristic UUIDs (such as 2A37 for heart rate measurement, 2A29 for manufacturer name, etc.). Where do I get those numbers, and where are they defined?
If any other information is needed, please advise.
The heart rate service is detailed on the Bluetooth developer portal.
Say you have a CBPeripheralManager named peripheralManager initialized, and you have already received the peripheralManagerDidUpdateState: callback with the CBPeripheralManagerStatePoweredOn state. Here is how you can set up the service itself after this.
// Define the heart rate service
CBMutableService *heartRateService = [[CBMutableService alloc]
    initWithType:[CBUUID UUIDWithString:@"180D"] primary:YES];

// Define the sensor location characteristic
// (a static, readable value, so it can be cached at creation time)
char sensorLocation = 5;
CBMutableCharacteristic *heartRateSensorLocationCharacteristic = [[CBMutableCharacteristic alloc]
    initWithType:[CBUUID UUIDWithString:@"2A38"]
    properties:CBCharacteristicPropertyRead
    value:[NSData dataWithBytes:&sensorLocation length:1]
    permissions:CBAttributePermissionsReadable];

// Define the heart rate measurement characteristic. Its value changes
// over time, so it must be created with value:nil (CoreBluetooth only
// allows a cached value on read-only characteristics) and updated later
// via updateValue:forCharacteristic:onSubscribedCentrals:.
CBMutableCharacteristic *heartRateSensorHeartRateCharacteristic = [[CBMutableCharacteristic alloc]
    initWithType:[CBUUID UUIDWithString:@"2A37"]
    properties:CBCharacteristicPropertyNotify
    value:nil
    permissions:CBAttributePermissionsReadable];

// Add the characteristics to the service
heartRateService.characteristics =
    @[heartRateSensorLocationCharacteristic, heartRateSensorHeartRateCharacteristic];

// Add the service to the peripheral manager
[peripheralManager addService:heartRateService];
After this you should receive the peripheralManager:didAddService:error: callback indicating the successful addition. You should add the device information service (0x180A) similarly. Finally, you should start advertising with:
NSDictionary *data = @{
    CBAdvertisementDataLocalNameKey: @"iDeviceName",
    CBAdvertisementDataServiceUUIDsKey: @[[CBUUID UUIDWithString:@"180D"]]};
[peripheralManager startAdvertising:data];
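To actually send readings (my own sketch, not part of the original answer): once a central subscribes, push new values for the 2A37 characteristic with updateValue:forCharacteristic:onSubscribedCentrals:, here with the same two bytes (flags = 0, heart rate = 60 bpm) that the original code tried to cache:
// Called periodically (e.g. from an NSTimer) to publish a new reading
- (void)sendHeartRate {
    // [0] = flags (0 means the heart rate value format is uint8), [1] = bpm
    char heartRateData[2] = {0, 60};
    NSData *value = [NSData dataWithBytes:heartRateData length:2];
    // Returns NO if the transmit queue is full; CoreBluetooth then calls
    // peripheralManagerIsReadyToUpdateSubscribers: when you can retry.
    [peripheralManager updateValue:value
                 forCharacteristic:heartRateSensorHeartRateCharacteristic
              onSubscribedCentrals:nil];
}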
Note: The heart rate service was the first I implemented too. Good choice. ;)
Everything regarding the GATT specifications can be found on the Bluetooth Developer Site. What you need to do is basically this:
1.) Set up your CBPeripheralManager.
2.) After it is powered on, create the CBMutableService and CBMutableCharacteristics that match the heart rate service. Advertise them and you'll be good to go.

Linux and reading and writing a general-purpose 32-bit register

I am using embedded Linux on the NIOS II processor with a device tree. The GPIO functionality provides the ability to read and/or write a single bit at a time. I have some firmware and PIOs that I want to read or write atomically, setting or reading all 32 bits at one time. It seems like there should be a generic device driver such that, if the device tree were given the proper compatible string, a driver would exist that would allow opening the device and then reading and writing it. I have searched for this functionality and have not found such a driver. One existed in a branch but was removed by Linus.
My question is: what is the Linux device tree way to read and write a device that is a general-purpose 32-bit register/PIO?
Your answer is SCULL, the example character device driver from the Linux Device Drivers book.
Character Device Drivers
You will have to write a character device driver with file operations to open and close the device, and to read, write, and ioctl it.
static struct file_operations query_fops =
{
    .owner   = THIS_MODULE,
    .open    = my_open,
    .release = my_close,
    .ioctl   = my_ioctl   /* .unlocked_ioctl on modern kernels */
};
Map the register's physical address with ioremap() and access it directly with readl() and writel(), which perform single 32-bit accesses. Create and register a device as follows, and then it can be accessed from userspace:
register_chrdev(0, DEVICE_NAME, &query_fops);
device_create(dev_class, NULL, MKDEV(dev_major, 0), NULL, DEVICE_NAME);
and then access it from userspace as follows:
fd = open("/dev/mydevice", O_RDWR);
and then you can play with GPIO from userspace using ioctl's:
ioctl(fd, SET_STATE);
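As a minimal sketch of the kernel side (the physical address, the ioctl command numbers, and all names here are hypothetical; ioremap()/readl()/writel() and the ioctl plumbing are the standard kernel APIs):
#include <linux/fs.h>
#include <linux/io.h>
#include <linux/uaccess.h>

#define PIO_PHYS_ADDR 0xFF200000UL  /* hypothetical PIO register address */
#define PIO_GET _IOR('p', 1, u32)   /* made-up command numbers */
#define PIO_SET _IOW('p', 2, u32)

static void __iomem *pio_base;      /* set at init: pio_base = ioremap(PIO_PHYS_ADDR, 4); */

static long my_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    u32 val;

    switch (cmd) {
    case PIO_GET:
        val = readl(pio_base);      /* one atomic 32-bit read of the whole register */
        return put_user(val, (u32 __user *)arg);
    case PIO_SET:
        if (get_user(val, (u32 __user *)arg))
            return -EFAULT;
        writel(val, pio_base);      /* one atomic 32-bit write */
        return 0;
    default:
        return -ENOTTY;
    }
}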

Need an API that detects when an iPhone is plugged in

I'm making an application for Mac, and I need an API that detects when an iPhone is plugged in. Thanks.
EDIT : To clarify, specifically, I need an API that detects when an iPhone is plugged into a USB port on a Mac.
I don't have a complete answer, but a program that achieves what you want is USB Prober, supplied with Xcode and located at /Developer/Applications/Utilities/USB Prober.app. That program is open source, with the browser-viewable repository being here and the whole project being included in this download. I actually found an older version more helpful, as available here, especially BusProbeClass.
They all rest on IOKit, for which Apple's documentation seems to be very lacking in both quantity and quality.
That's heavy reading, but if you check out +USBProbe you'll see that it gets a list of current USB devices, gets an IOUSBDeviceInterface for each in the variable deviceIntf, and then pushes them somewhere useful for the rest of the program. If you check out +outputDevice:locationID:deviceNumber: lower down in the same source file, you'll see that GetDescriptor can be used on an IOUSBDeviceDescriptor to get properties including the vendor and product ID, the combination of which is guaranteed to be unique by the USB Implementers Forum.
Using the vendor and product ID, you can search for any specific USB device. From my Mac's System Information I can tell you that an iPhone 4 has a product ID of 0x1297 and Apple's vendor ID is 0x05ac.
Extra: from dissecting that code, if you remove a whole bunch of checks that things are succeeding and compress it all down to the demonstrative stuff, then the following is at least a test of whether an iPhone 4 is plugged in right now (you'll need to link against the Foundation and IOKit frameworks):
#include <stdio.h>
#import <Foundation/Foundation.h>
#import <IOKit/usb/IOUSBLib.h>
#import <IOKit/IOCFPlugIn.h>
#import <mach/mach_port.h>

int main (int argc, const char * argv[])
{
    // get the port through which to talk to the kernel
    mach_port_t masterDevicePort;
    IOMasterPort(MACH_PORT_NULL, &masterDevicePort);

    // create a dictionary that describes a search
    // for services provided by USB
    CFDictionaryRef matchingDictionary = IOServiceMatching(kIOUSBDeviceClassName);

    // get an iterator for all devices that match
    // the dictionary
    io_iterator_t deviceIterator;
    IOServiceGetMatchingServices(
        masterDevicePort,
        matchingDictionary,
        &deviceIterator);

    // iterate through the iterator...
    io_service_t ioDevice;
    while((ioDevice = IOIteratorNext(deviceIterator)))
    {
        IOUSBDeviceInterface **deviceInterface = NULL;
        IOCFPlugInInterface **ioPlugin = NULL;
        SInt32 score;

        // get a pointer to the device, stored to ioPlugin
        IOCreatePlugInInterfaceForService(
            ioDevice,
            kIOUSBDeviceUserClientTypeID,
            kIOCFPlugInInterfaceID,
            &ioPlugin,
            &score);

        // ask the device for its interface
        (*ioPlugin)->QueryInterface(
            ioPlugin,
            CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
            (void *)&deviceInterface);

        // make and issue a request to get the device descriptor
        IOUSBDeviceDescriptor deviceDescriptor;
        IOUSBDevRequest request;
        request.bmRequestType = USBmakebmRequestType(kUSBIn, kUSBStandard, kUSBDevice);
        request.bRequest = kUSBRqGetDescriptor;
        request.wValue = kUSBDeviceDesc << 8;
        request.wIndex = 0;
        request.wLength = sizeof(deviceDescriptor);
        request.pData = &deviceDescriptor;
        (*deviceInterface)->DeviceRequest(deviceInterface, &request);

        // now we have the device descriptor, do a little cleaning up -
        // release the interface, the plugin and the device
        (*deviceInterface)->Release(deviceInterface);
        IODestroyPlugInInterface(ioPlugin);
        IOObjectRelease(ioDevice);

        // ensure that the values returned are in the appropriate byte
        // order for this platform (note: the swapped values must be
        // assigned back, or the swap does nothing)
        deviceDescriptor.idVendor = CFSwapInt16LittleToHost(deviceDescriptor.idVendor);
        deviceDescriptor.idProduct = CFSwapInt16LittleToHost(deviceDescriptor.idProduct);

        // check whether we have an iPhone 4 attached
        if(deviceDescriptor.idVendor == 0x05ac && deviceDescriptor.idProduct == 0x1297)
            printf("iPhone 4 is connected!");
    }

    // clean up by releasing the device iterator
    // and returning the communications port
    IOObjectRelease(deviceIterator);
    mach_port_deallocate(mach_task_self(), masterDevicePort);

    return 0;
}
I haven't yet figured out how to observe for changes in plugged-in devices.
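For the observing part, IOKit's matching notifications are the usual route. A rough, untested sketch (the function names startObserving and deviceAdded are mine) using IOServiceAddMatchingNotification to be told whenever a USB device arrives:
#import <IOKit/IOKitLib.h>
#import <IOKit/usb/IOUSBLib.h>

static void deviceAdded(void *refCon, io_iterator_t iterator)
{
    io_service_t device;
    // drain the iterator; each entry is a newly attached USB device,
    // which you could probe for its vendor/product ID as above
    while ((device = IOIteratorNext(iterator)))
        IOObjectRelease(device);
}

void startObserving(void)
{
    IONotificationPortRef notifyPort = IONotificationPortCreate(kIOMasterPortDefault);
    CFRunLoopAddSource(CFRunLoopGetCurrent(),
                       IONotificationPortGetRunLoopSource(notifyPort),
                       kCFRunLoopDefaultMode);

    io_iterator_t addedIterator;
    IOServiceAddMatchingNotification(notifyPort,
                                     kIOFirstMatchNotification,
                                     IOServiceMatching(kIOUSBDeviceClassName),
                                     deviceAdded,
                                     NULL,
                                     &addedIterator);
    // the notification only arms once the existing matches are consumed
    deviceAdded(NULL, addedIterator);
}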