I'm trying to recreate the information displayed for the current Wi-Fi network when option-clicking on the Wi-Fi status bar item. One of the parameters shown is the MCS Index, but I can't find any way to query this value using the CWInterface class, which is where I am getting most of the other data:
if let interface = CWWiFiClient.shared().interface() {
    rssi = interface.rssiValue()
    noise = interface.noiseMeasurement()
    // etc.
}
Since both the Wi-Fi status bar item and the airport command line tool display the MCS Index, it seems like there should be some way to query it:
MacBook:~ mark$ /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I
agrCtlRSSI: -46
agrExtRSSI: 0
agrCtlNoise: -90
agrExtNoise: 0
state: running
op mode: station
lastTxRate: 878
maxRate: 1300
lastAssocStatus: 0
802.11 auth: open
link auth: wpa2-psk
BSSID: xx:xx:xx:xx:xx:xx
SSID: MyWiFi
MCS: 7
channel: 149,80
I've also seen some Python sample code that seems to indicate that the MCS Index should be available, but I don't see it in the docs or in code completion.
Is there some way to get this value through Core WLAN or some other framework, or is this something I need to calculate based on other values?
I found another Python script, wifi_status.py, which reports the Wi-Fi status. From the lines
def wifi_status(properties=('bssid', 'channel', 'txRate', 'mcsIndex', 'rssi', 'noise')):
    xface = CWWiFiClient.sharedWiFiClient().interface()
    while True:
        yield({name: getattr(xface, name)() for name in properties})
one can conclude that these attributes can be retrieved with
Key-Value Coding.
And that really works:
if let iface = CWWiFiClient.shared().interface() {
    if let mcsIndex = iface.value(forKey: "mcsIndex") as? Int {
        print(mcsIndex)
    }
}
But I have no idea whether that approach is officially supported or will keep working in the future, so use it at your own risk.
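If you do go that route, a little defensive wrapping costs nothing. Here is a minimal Swift sketch, assuming the private mcsIndex accessor continues to exist; the helper name is just for illustration, and the responds(to:) check only guards against the key disappearing (and the resulting Objective-C exception), not against Apple changing its meaning:

import CoreWLAN

func currentMCSIndex() -> Int? {
    guard let iface = CWWiFiClient.shared().interface() else { return nil }
    let key = "mcsIndex"
    // Only go through Key-Value Coding if the underlying Objective-C object
    // actually implements the accessor, so we don't raise NSUnknownKeyException.
    guard iface.responds(to: NSSelectorFromString(key)) else { return nil }
    return iface.value(forKey: key) as? Int
}

print(currentMCSIndex() ?? -1)   // -1 here just means "not available"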
I'm trying to make a SIP video call using PJSIP/PJSUA on my Raspberry Pi 3.
Before coding, I'm using the main sample app to test different options. Everything seems to work (registering, audio calling, ...), but when I try to start a video call, the program stops with the following message:
pjsua-armv7l-unknown-linux-gnueabihf: ../src/pjmedia-videodev/v4l2_dev.c:737: vid4lin_stream_get_frame_mmap: Assertion `!"frame buffer is too small for v4l2"' failed.
I've searched a lot, including in the source code:
/* get frame from mmap */
static pj_status_t vid4lin_stream_get_frame_mmap(vid4lin_stream *stream, pjmedia_frame *frame)
{
    struct v4l2_buffer buf;
    pj_time_val time;
    pj_status_t status = PJ_SUCCESS;

    pj_bzero(&buf, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    status = xioctl(stream->fd, VIDIOC_DQBUF, &buf);
    if (status != PJ_SUCCESS)
        return status;

    if (frame->size < buf.bytesused) {
        /* supplied buffer is too small */
        pj_assert(!"frame buffer is too small for v4l2");
        status = PJ_ETOOSMALL;
        goto on_return;
    }
So I understand that the pjmedia_frame has a size smaller than the number of bytes the v4l2 driver delivered, which triggers the assertion.
My question is simple: how can I change this setting?
I tried everything in the sample app: changing resolution, bitrate, fps, ...
I found some resources saying to change the H264 profile level. OK, but where do I set it? Is it within the v4l2 manager, or directly in the app? How can I do it?
I played with different options in v4l2 to reduce the bitrate/resolution in order to have a small buffer, but I still get the same error.
At this point I'm completely clueless.
For info, I compiled PJSIP using openh264 (no libx264) as suggested by the PJSIP project.
Thanks for your help/ideas ;)
Regarding your question about the profile level, you can try this:
const pj_str_t codec_id = {"H264", 4};
pjmedia_vid_codec_param param;
pj_status_t status;

status = pjsua_vid_codec_get_param(&codec_id, &param);
param.dec_fmtp.param[0].name = pj_str("profile-level-id");
param.dec_fmtp.param[0].val = pj_str("42e01f");
status = pjsua_vid_codec_set_param(&codec_id, &param);
do this anywhere after pjsua_start(). Last two characters in val property are profile level. Description of levels can be found here (link). More information about h264 profile here (link).
I'm not an expert on v4l2, but I have a little experience with encoding video on the RPi 3, and I suggest using FFmpeg instead of pure openh264 because of its hardware acceleration support (link).
Good luck!
I am writing a simple app in CoffeeScript to control a Philips Hue light. I have included this module in my project. The code below seems to work fine up until I try to set the color of the lights using setLightState; I get an error saying it isn't a function. I don't quite understand why it doesn't recognize the function.
# Connect with Philips Hue bridge
jsHue = require 'jshue'
hue = jsHue()

# Get the IP address of any bridges on the network
hueBridgeIPAddress = hue.discover().then((bridges) =>
  if bridges.length is 0
    console.log 'No bridges found. :('
  else
    (b.internalipaddress for b in bridges)
).catch((e) => console.log 'Error finding bridges', e)

if hueBridgeIPAddress.length isnt 0 # if there is at least 1 bridge
  bridge = hue.bridge(hueBridgeIPAddress[0]) # get the first bridge in the list
  # Create a user on that bridge (may need to press the reset button on the bridge before starting the tech buck)
  user = bridge.createUser('myApp#testdevice').then((data) =>
    username = data[0].success.username
    console.log 'New username:', username
    bridge.user(username)
  )

if user?
  # assuming a user was successfully created set the lighting to white
  user.setLightState(1, {on:true, sat:0, bri:100, hue:65280}).then((data) =>
    # process response data, do other things
  )
As you can see on the github page of the jshue lib, bridge.createUser does not directly return a user object.
Instead the example code sets the user variable inside the then function of the returned promise:
bridge.createUser('myApp#testdevice').then(data => {
  // extract bridge-generated username from returned data
  var username = data[0].success.username;
  // instantiate user object with username
  var user = bridge.user(username);
  user.setLightState( .... )
});
Using this approach, the user variable will be set correctly and user.setLightState will be defined.
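Translated back into CoffeeScript, a minimal sketch of the whole flow with everything kept inside the promise chain could look like this (the app name and light id are taken from your question; error handling is rudimentary):

jsHue = require 'jshue'
hue = jsHue()

hue.discover()
  .then (bridges) ->
    throw new Error('No bridges found. :(') if bridges.length is 0
    # use the first bridge found
    hue.bridge(bridges[0].internalipaddress)
  .then (bridge) ->
    # createUser resolves later, so build the user object inside its then callback
    bridge.createUser('myApp#testdevice').then (data) ->
      username = data[0].success.username
      console.log 'New username:', username
      bridge.user(username)
  .then (user) ->
    # the user object exists here, so setLightState is defined
    user.setLightState(1, {'on': true, sat: 0, bri: 100, hue: 65280})
  .catch (e) ->
    console.log 'Error talking to the bridge', e

Each callback returns the value (or promise) needed by the next step, so nothing is used before it exists.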
A self-contained example, taken from this Codepen:
url = "https://api.ipify.org?format=json"
outside = axios.get(url).then (response) =>
inside = response.data.ip
console.log "inside: #{inside}"
inside
console.log "outside: #{outside}"
The console output is
outside: [object Promise]
inside: 178.19.212.102
You can see that:

- the outside log comes first and is a Promise object
- the inside log comes last and contains the actual value from the Ajax call (in this case your IP)
- the then function implicitly returning inside does not change anything
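In other words, to actually use the value you either do the work inside the then callback, or keep a reference to the promise and attach another then to it whenever you need the result. A minimal sketch in the same style (ipPromise is just an illustrative name):

ipPromise = axios.get(url).then (response) -> response.data.ip

# later, wherever the value is needed:
ipPromise.then (ip) -> console.log "ip: #{ip}"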
I have found ways to make pocketsphinx activate on multiple keywords, but I want to run different commands depending on which keyword was said. I have already made it connect to Amazon's Alexa service when "Alexa" is said, and now I want to add commands for "TV on" and "TV off".
The best approach is to use Python, something like this:
import sys, os
from pocketsphinx.pocketsphinx import *
from sphinxbase.sphinxbase import *
import pyaudio
modeldir = "../../../model"
# Create a decoder with certain model
config = Decoder.default_config()
config.set_string('-hmm', os.path.join(modeldir, 'en-us/en-us'))
config.set_string('-dict', os.path.join(modeldir, 'en-us/cmudict-en-us.dict'))
config.set_string('-kws', 'keyword.list')
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=16000, input=True, frames_per_buffer=1024)
stream.start_stream()
# Process audio chunk by chunk. On keyword detected perform action and restart search
decoder = Decoder(config)
decoder.start_utt()
while True:
buf = stream.read(1024)
if buf:
decoder.process_raw(buf, False, False)
else:
break
if decoder.hyp() == "tv on":
print ("Detected keyword tv on, turning on tv")
os.system('beep')
decoder.end_utt()
decoder.start_utt()
if decoder.hyp() == "tv off":
print ("Detected keyword tv off, turning off tv")
os.system('beep beep')
decoder.end_utt()
decoder.start_utt()
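For this to work, the keyword.list file referenced by the -kws option has to contain one key phrase per line, each with a detection threshold. The thresholds below are just plausible starting values and need tuning for your microphone and environment:

alexa /1e-40/
tv on /1e-20/
tv off /1e-20/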
It's the first time I'm using OMNeT++.
I want to start from the very basic stuff to understand how it works.
I successfully created my first simulation with two hosts continuously exchanging messages (from the tictoc example).
What I would like to do now is simulate a simple client-server wireless communication between one AP and one wireless node. I'm trying to do that using elements from the INET framework, but I'm stuck and it is not working.
import inet.networklayer.configurator.base.NetworkConfiguratorBase;
import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.networklayer.configurator.ipv4.IPv4NodeConfigurator;
import inet.node.inet.WirelessHost;
import inet.node.wireless.AccessPoint;
import inet.physicallayer.common.packetlevel.RadioMedium;
import inet.physicallayer.contract.packetlevel.IRadioMedium;
import inet.physicallayer.ieee80211.packetlevel.Ieee80211RadioMedium;
//
// TODO documentation
//
network net
{
    string mediumType = default("IdealRadioMedium");
    @display("bgb=620,426");
    submodules:
        wirelessHost1: WirelessHost {
            @display("p=423,164");
        }
        accessPoint1: AccessPoint {
            @display("p=147,197");
        }
        iRadioMedium: <mediumType> like IRadioMedium {
            @display("p=523,302");
        }
        iPv4NetworkConfigurator: IPv4NetworkConfigurator {
            @display("p=270,324");
            assignDisjunctSubnetAddresses = false;
        }
}
and then I created a wirelessHost.cc source file using the tictoc behaviour to make the two nodes communicate.
But it is not working; I get this error:
<!> Error in module (inet::IPv4NodeConfigurator) infrastructure.wirelessHost1.networkLayer.configurator (id=13) during network initialization: Configurator module 'configurator' not found (check the 'networkConfiguratorModule' parameter).
Before that, there was another error about the access point (could not find the wlan[0] module).
Can someone help me to understand how to configure this model?
EDIT
Here's the configuration .ini file
[General]
network = infrastructure
#cmdenv-output-file = omnetpp.log
#debug-on-errors = true
tkenv-plugin-path = ../../../etc/plugins
#record-eventlog = true
**.constraintAreaMinX = 0m
**.constraintAreaMinY = 0m
**.constraintAreaMinZ = 0m
**.constraintAreaMaxX = 600m
**.constraintAreaMaxY = 500m
**.constraintAreaMaxZ = 0m
**.mobility.typename = "StationaryMobility"
**.mobilityType = "StationaryMobility"
# access point
*.accessPoint.wlan[0].mac.address = "004444444444"
*.accessPoint.wlan[0].radio.channelNumber = 11
# host1 is associated with AP1 on channel 0
**.wirelessHost1.wlan[0].mgmt.accessPointAddress = "004444444444"
*.wirelessHost1.**.mgmtType = "Ieee80211MgmtSTASimplified"
# global data rates
**.wlan*.bitrate = 11Mbps
# application level: host1 pings host2
**.numPingApps = 1
*.wirelessHost1.pingApp[0].destAddr = "accessPoint"
*.wirelessHost1.pingApp[0].sendInterval = 10ms
but running the simulation I get
<!> Error in module (inet::ICMP) infrastructure.wirelessHost1.networkLayer.icmp (id=17) at event #4, t=0.008442657441: check_and_cast(): cannot cast (inet::GenericNetworkProtocolControlInfo*) to type 'inet::IPv4ControlInfo *'.
The instance of IPv4NetworkConfigurator in the network has to be named configurator. After changing its name, your second problem should be resolved too.
Moreover, the RadioMedium instance module has to be named radioMedium (instead of iRadioMedium).
EDIT
You have made two mistakes.
An AccessPoint does not have a network layer, because it only relays MAC frames using MAC addresses, just like in a real network. As a consequence, it does not have an IP address and it is impossible to send an ICMP ping to it.
OMNeT++ allows using a module's name instead of an IP address in the ini file, for example **.destAddr = "wirelessHost1". In your ini you are trying to use the non-existent accessPoint instead of accessPoint1 (which would not work anyway because of the first error).
I suggest adding a new WirelessHost (for example wirelessHost2) and sending the ping towards it, i.e.
*.wirelessHost1.pingApp[0].destAddr = "wirelessHost2"
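Putting the renaming and the second host together, a rough sketch of the changes (keeping everything else from your files as is). In the network's submodules section:

configurator: IPv4NetworkConfigurator;
radioMedium: <mediumType> like IRadioMedium;
wirelessHost2: WirelessHost;

and in omnetpp.ini, associate both hosts with the AP and ping host2 from host1:

**.wirelessHost*.wlan[0].mgmt.accessPointAddress = "004444444444"
*.wirelessHost*.**.mgmtType = "Ieee80211MgmtSTASimplified"
*.wirelessHost1.pingApp[0].destAddr = "wirelessHost2"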
I'm trying to read weight from Sartorius Weighing Scale model No BS2202S using the following code in C#.net 2.0 on a Windows XP machine:
public string readWeight()
{
    string lastError = "";
    string weightData = "";

    SerialPort port = new SerialPort();
    port.PortName = "COM1";
    port.BaudRate = 9600;
    port.Parity = Parity.Even;
    port.DataBits = 7;
    port.StopBits = StopBits.One;
    port.Handshake = Handshake.RequestToSend;

    try {
        port.Open();
        weightData = port.ReadExisting();
        if(weightData == null || weightData.Length == 0) {
            lastError = "Unable to read weight. The data returned from the weighing machine is empty or null.";
            return lastError;
        }
    }
    catch(TimeoutException) {
        lastError = "Operation timed out while reading weight";
        return lastError;
    }
    catch(Exception ex) {
        lastError = "The following exception occurred while reading data." + Environment.NewLine + ex.Message;
        return lastError;
    }
    finally {
        if(port.IsOpen == true) {
            port.Close();
            port.Dispose();
        }
    }
    return weightData;
}
I'm able to read the weight using the HyperTerminal application (supplied with Windows XP) with the same serial port parameters given above. But with the code snippet above, I can open the port, yet it returns empty data every time.
I also tried opening the port using the code given in this Stack Overflow thread; it still returns empty data.
Kindly assist me.
I know this is probably old now ... but for future reference ...
Look at the handshaking. There is both hardware handshaking and software handshaking. Your problem could be either - so you need to try both.
For hardware handshaking you can try:

mySerialPort.DtrEnable = true;
mySerialPort.RtsEnable = true;

Note that

mySerialPort.Handshake = Handshake.RequestToSend;

does not, as far as I know, assert the DTR line, which some serial devices may require.

Software handshaking is also known as XON/XOFF and can be set with

mySerialPort.Handshake = Handshake.XOnXOff;

or

mySerialPort.Handshake = Handshake.RequestToSendXOnXOff;

You may still need to enable DTR.

When all else fails, don't forget to try all of these handshaking combinations.
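A minimal sketch combining these suggestions with the parameters from the question (the timeout value is arbitrary, and the balance may still need to be configured to send data, for example by pressing its print key, before anything appears on the port):

using System;
using System.IO.Ports;

class WeightReader
{
    static void Main()
    {
        using (SerialPort port = new SerialPort("COM1", 9600, Parity.Even, 7, StopBits.One))
        {
            port.Handshake = Handshake.RequestToSend;  // or Handshake.XOnXOff for software handshaking
            port.DtrEnable = true;                     // assert DTR as well; RequestToSend alone does not
            port.RtsEnable = true;
            port.ReadTimeout = 2000;                   // ms; ReadLine throws TimeoutException if nothing arrives

            port.Open();
            try
            {
                string weightData = port.ReadLine();   // wait for a full line instead of reading immediately
                Console.WriteLine(weightData);
            }
            catch (TimeoutException)
            {
                Console.WriteLine("No data received within the timeout.");
            }
        }
    }
}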
Since someone else will probably have trouble with this in the future: handshaking is a selectable option on the balance itself.
On most of the balances you will see the options Software, Hardware 2 char, and Hardware 1 char. The default setting for Sartorius balances is Hardware 2 char; I usually recommend changing it to Software.
Also, if it stops working altogether, it can often be fixed by resetting the unit to defaults using the 9 1 1 parameter and then setting the communication settings again.
An example of how to change the settings can be found in the manual on this page:
http://www.dataweigh.com/products/sartorius/cpa-analytical-balances/