How to configure a specific PCIe device's link speed

I've been experimenting with some UEFI/kernel code and am working on the various PCI Express elements. I have obtained the MCFG ACPI table, enumerated all PCI devices into my own structures, and have access to all the devices' MMIO regions and the full 4 KB configuration space.
For the specific PCIe device I have identified, I walk the configuration space as follows: test the capability-list bit in the Status register and, assuming it is set, read the capabilities pointer at offset 0x34 and follow the chain of pointers until I find the PCI Express capability (ID = 0x10).
From there, the register at offset 0x0C (Link Capabilities) specifies the maximum link width as x16 and the maximum link speed as 3 (an index into the Supported Link Speeds vector, equating to 8.0 GT/s, i.e. the Gen 3 speed the device is capable of).
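That capability walk, as a minimal C sketch (cfg_read8/cfg_read16 are hypothetical accessors over the ECAM region described by the MCFG table):

#include <stdint.h>

/* Hypothetical config-space accessors over the ECAM region from MCFG. */
uint8_t  cfg_read8(uint8_t bus, uint8_t dev, uint8_t fn, uint16_t off);
uint16_t cfg_read16(uint8_t bus, uint8_t dev, uint8_t fn, uint16_t off);

#define PCI_STATUS          0x06
#define PCI_STATUS_CAP_LIST (1u << 4)
#define PCI_CAP_PTR         0x34
#define PCI_CAP_ID_EXP      0x10  /* PCI Express capability ID */

/* Returns the offset of the PCI Express capability block, or 0 if absent. */
static uint16_t find_pcie_cap(uint8_t bus, uint8_t dev, uint8_t fn)
{
    if (!(cfg_read16(bus, dev, fn, PCI_STATUS) & PCI_STATUS_CAP_LIST))
        return 0;
    uint8_t off = cfg_read8(bus, dev, fn, PCI_CAP_PTR) & ~0x3u;
    while (off) {
        if (cfg_read8(bus, dev, fn, off) == PCI_CAP_ID_EXP)
            return off;
        off = cfg_read8(bus, dev, fn, off + 1) & ~0x3u;  /* next capability */
    }
    return 0;
}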
The Link Status register shows that the negotiated link width is x16, but the link speed is 1 (2.5 GT/s).
What I've tried: using the Link Control 2 register to set the Target Link Speed to 3, then setting bit 5 (Retrain Link) in the Link Control register to trigger retraining. I also disable autonomous link speed changes to ensure the device remains at the selected speed.
I then wait a short duration (1 second for testing) and poll the Link Status register, checking for the Link Training bit to clear. It seems to clear immediately, regardless of the delay, and when I check the Link Status register again the link speed is still 1.
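For reference, the attempted sequence as a C sketch, using the same hypothetical accessors plus a cfg_write16 counterpart; offsets are relative to the capability block found above:

/* Hypothetical write counterpart to cfg_read16 above. */
void cfg_write16(uint8_t bus, uint8_t dev, uint8_t fn, uint16_t off, uint16_t val);

#define PCI_EXP_LNKCTL    0x10        /* Link Control */
#define PCI_EXP_LNKSTA    0x12        /* Link Status */
#define PCI_EXP_LNKCTL2   0x30        /* Link Control 2 */
#define LNKCTL_RETRAIN    (1u << 5)   /* Retrain Link */
#define LNKSTA_TRAINING   (1u << 11)  /* Link Training in progress */
#define SPEED_MASK        0x000Fu     /* current/target link speed field */

static uint16_t set_link_speed(uint8_t bus, uint8_t dev, uint8_t fn,
                               uint16_t cap, uint16_t target)
{
    /* Program Target Link Speed in Link Control 2. */
    uint16_t lc2 = cfg_read16(bus, dev, fn, cap + PCI_EXP_LNKCTL2);
    lc2 = (lc2 & ~SPEED_MASK) | (target & SPEED_MASK);
    cfg_write16(bus, dev, fn, cap + PCI_EXP_LNKCTL2, lc2);

    /* Set Retrain Link. Note: the PCIe spec defines this bit for
       Downstream Ports (e.g. Root Ports) and reserves it on Endpoints,
       so it may need to be written on the upstream bridge instead. */
    uint16_t lc = cfg_read16(bus, dev, fn, cap + PCI_EXP_LNKCTL);
    cfg_write16(bus, dev, fn, cap + PCI_EXP_LNKCTL, lc | LNKCTL_RETRAIN);

    /* Poll until training completes (a real implementation needs a timeout). */
    while (cfg_read16(bus, dev, fn, cap + PCI_EXP_LNKSTA) & LNKSTA_TRAINING)
        ;
    return cfg_read16(bus, dev, fn, cap + PCI_EXP_LNKSTA) & SPEED_MASK;
}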
I have checked several of the registers for error notifications and haven't spotted anything yet.
Clearly I need to find the correct procedure for establishing a new link speed on the device; possibly I need to configure de-emphasis values, or apply the same link-speed settings to the upstream bridge as well.
Any advice would be hugely appreciated.

Setting trainer resistance using Swifty Sensors and Wahoo's cycling power service extension

I'm using the SwiftySensors CocoaPod to connect to a Wahoo Smart Trainer. It's advertising CyclingPowerService and DeviceInformationService. I've been able to get speed and power values without issue. Wahoo apparently extended the CyclingPowerService standard to allow setting resistance via that service instead of the Fitness Machine Control service.
https://github.com/codeinversion/sensors-swift links out to another Github page dealing with that extension, but that link is broken.
My question is: how should I go about setting the trainer's resistance? Wahoo's app can do it, so the machine is equipped for it. This is the only time I need to change the trainer's settings. Otherwise, I'm just reading sent information and SwiftySensors works great.
I've referenced the following post: Writing BLE to Cycling Control Point - Adding Resistance. Someone there said using CyclingPowerService to set resistance was possible without offering any guidance. I'm not very experienced with Bluetooth, so any information would be great!
Thank you Jordan. That was the answer. The broken link I referenced must have been pointing to the following repo: https://github.com/WahooFitness/sensors-swift-trainers
The following instructions assume that you're already able to connect to the trainer and receive data from it (like speed and power) using the SwiftySensors CocoaPod and the CyclingPowerService. Using the repo linked above, I was able to set the resistance on the Wahoo Snap trainer. Note that after you install that new repo, and before you start scanning for sensors to connect to, you need to call
CyclingPowerService.WahooTrainer.activate()
From there, you set the resistance with
if let wahooTrainer = cyclingPowerService.wahooTrainer {
    wahooTrainer.setResistanceMode(resistance: 0.5)
}
The resistance is set using percentages. The value for resistance will be a Float, somewhere between 0 and 1.

MS Teams | Accessibility Insights | Dual Monitor

Objective: to test the accessibility behavior of MS Teams on a dual-monitor setup, with the monitors set to different scales, for example 100% and 125%, both at 1920x1080 resolution. The tool I use is Accessibility Insights.
Problem: Accessibility Insights is unable to locate MS Teams' elements correctly when I launch the Teams app on the monitor with 100% scale, which is also my primary monitor, and then move it to the monitor with 125% scale. The position of the identified element is off by about 280 from the top, and the Left coordinate seems to be off by a factor of about 1.25, which I presume is due to scaling.
If I work on a single monitor at 125% (or any other scale), Accessibility Insights works fine on MS Teams.
What I read/understand: I understand MS Teams is a per-monitor DPI-aware app, and so is Accessibility Insights. If I enable GDI scaling (following "Improve High DPI Experience"), Accessibility Insights does locate the element as it should. Further, Accessibility Insights works well on "Display Settings" itself (the SystemSettings.exe process), which is also per-monitor DPI aware. This makes me presume that per-monitor DPI awareness is not correctly implemented in MS Teams.
Questions:
Is my presumption correct that MS Teams doesn't work as expected on dual/multi monitors, that is, that it doesn't scale up or down correctly across monitors with different scale factors?
Is there any way to get Accessibility Insights to work correctly on MS Teams without changing the GDI scaling/overriding the high-DPI scaling of MS Teams?
Is there a known challenge with Accessibility Insights running against Electron applications? I observe a similar issue with Slack.
[Edit] Added results of using the Windows Automation API.
The monitor where Teams runs is at 125% and 1920x1080, while my demo app is marked as per-monitor DPI aware and runs on the 100%, 1920x1080 monitor. Both monitors are 14 inches in size. The result shows the Left and Top of the root element (Teams' main window), as well as the Left and Top of the "Search" box at the top of the Teams title bar, as retrieved by the Automation API. As per Microsoft's documentation, the Automation API retrieves physical coordinates. Observations:
The physical location of the mouse is X:2455, Y:10.
The Left and Top of the Search box element from the Automation API come out as 2935 and 280 respectively.
The value 2935, scaled down by 1.25, is 2348, which matches the physical mouse location on the Search box when I run my app in system-DPI-aware or DPI-unaware mode. So the Left coordinate in per-monitor mode is a scaled-up version of the Left coordinate in system-aware or unaware mode (see the sketch below).
I cannot correlate the Top value of 280 with anything.
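For what it's worth, the Left observation reduces to a one-line conversion; a tiny C sketch with the question's values hard-coded (the 1.25 scale factor is the second monitor's setting):

#include <stdio.h>

int main(void)
{
    const double scale = 1.25;       /* scale factor of the 125% monitor */
    const int left_physical = 2935;  /* Left reported by UI Automation */

    /* Physical pixels divided by the scale factor give the coordinate a
       system-DPI-aware or DPI-unaware process observes. */
    printf("%.0f\n", left_physical / scale);  /* prints 2348 */
    return 0;
}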
We investigated this on the Accessibility Insights end of things, and it looks to be an issue with Teams. We were able to verify this with Magnifier: we configured it to track keyboard focus and found that it too was inconsistent in identifying the location of elements (indicating a Teams problem). That is, some controls were tracked correctly while others were not.
Note: this was even without a dual-monitor setup.

How does my operating system get information about disk size, RAM size, CPU frequency, etc

I can see information about my hard disk, RAM and CPU in my OS. But I've never told my OS this information.
How does my OS know it?
Is there some place in the hard disk or CPU or RAM that stores this kind of information?
Is there some standard about the format of this kind of information?
SMBIOS (formerly known as DMI) contains much of this information. SMBIOS is a data structure/API that is part of the BIOS/UEFI firmware and contains information like the brand and model of the computer, etc.
The rest is gathered by the OS querying hardware directly.
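As a concrete illustration (assuming a Linux system, which parses SMBIOS and exposes the strings under /sys/class/dmi/id/), a short C sketch can read some of it back:

#include <stdio.h>

/* Print a couple of SMBIOS-derived strings that the Linux kernel exposes
   under /sys/class/dmi/id/ (paths assume a Linux system with DMI support). */
static void print_dmi(const char *name)
{
    char path[128], buf[128];
    snprintf(path, sizeof path, "/sys/class/dmi/id/%s", name);

    FILE *f = fopen(path, "r");
    if (f && fgets(buf, sizeof buf, f))
        printf("%s: %s", name, buf);
    if (f)
        fclose(f);
}

int main(void)
{
    print_dmi("sys_vendor");    /* e.g. the machine's manufacturer */
    print_dmi("product_name");  /* the machine's model */
    return 0;
}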
Answer grabbed from superuser by Mokubai.
You don't need to tell it because each device already knows (or has a way) to identify itself.
If you get the idea that every device is accessed via address and data lines, and in some cases only data lines, then you come to the realisation that on those data lines you need some kind of "protocol" that determines just how you talk to those devices.
In amongst that protocol you have commands that say "read this" and "send that" or "put this over there". It is also relatively easy to have a command that says "identify yourself" which, rather than reading a block of disk or memory or painting a pixel a particular colour, returns a premade string or set of strings that tell the driver or operating system what that device is. Using a series of identity commands you can discover a device's type, its capabilities and which driver might be able to work with it.
You don't need to tell a device what it is, because it already knows. And you don't need to tell the operating system what it is because it can ask the device itself.
You don't tell people what they're called and how they talk, you ask them.
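A concrete instance of such an "identify yourself" command is the x86 CPUID instruction. A minimal sketch using GCC/Clang's <cpuid.h> helper:

#include <stdio.h>
#include <string.h>
#include <cpuid.h>  /* GCC/Clang wrapper for the x86 CPUID instruction */

int main(void)
{
    unsigned eax, ebx, ecx, edx;

    /* Leaf 0: the CPU returns its vendor string in EBX, EDX, ECX. */
    if (__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
        char vendor[13];
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';
        printf("CPU vendor: %s\n", vendor);  /* e.g. "GenuineIntel" */
    }
    return 0;
}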
Each device has its own protocol for these messages, and devices don't store the details of other devices, because doing so would be insane and near useless given that you can remove any device at any time. Your hard drive doesn't need to store information about your memory or graphics card, except for the driver that the operating system uses to talk to it.
The PC UEFI specification would define a core set of system features that every computer has, allowing the processor to be powered up and a program stored in an EEPROM to begin the absolute basic system probing necessary to determine the processor, set up the RAM, find a disk and display, and thus continue to boot the computer.
From there, the UEFI system would hand over to the operating system, which would have more detailed probing and identification procedures, but it all starts from the most basic "I have a processor, what is around me?" situation.

Does x86_64 have an equivalent to the AArch64 AT instruction?

ARM's aarch64 has an AT (Address Translate) instruction that runs a virtual address through a stage of address translation returning a physical address in PAR_EL1, along with status to indicate whether the translation exists. See ARMv8 ARM, Section C5.5.
The question is: does x86_64 have the equivalent? Intel's System Programming Guide (Volume 3, Chapter 5) talks about pointer validation, but these methods seem to apply to segment-level protection, and there do not appear to be any page-level protection pointer validation instructions.
Is anybody aware of an ARMv8-AT-like instruction for x86_64?
No, the x86-64 instruction set does not have an instruction to perform virtual-to-physical address translation. It only has basic instructions like setting the page-directory register, invalidating addresses, and enabling paging.
If you want this functionality on x86-64, I'm afraid you need to be in supervisor mode. You would read the CR3 register, possibly change a few page-table mappings to access the physical addresses you need, and perform the address translation by manually walking the page directory and page tables.
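As a sketch of what that manual walk looks like for 4-level paging with 4 KiB pages: phys_to_virt() below is a hypothetical helper that maps a physical address into the kernel's address space, and large pages are deliberately not handled.

#include <stdint.h>

/* Hypothetical: map a physical address into the kernel's virtual address
   space, e.g. via an identity or linear mapping. Must run in supervisor mode. */
uint64_t *phys_to_virt(uint64_t phys);

#define PTE_PRESENT  (1ull << 0)
#define PTE_PS       (1ull << 7)            /* large page; not handled here */
#define ADDR_MASK    0x000FFFFFFFFFF000ull  /* bits 51:12 of an entry */

static int translate(uint64_t cr3, uint64_t vaddr, uint64_t *paddr)
{
    uint64_t table = cr3 & ADDR_MASK;

    /* PML4 -> PDPT -> PD -> PT: 9 index bits per level, starting at bit 39. */
    for (int shift = 39; shift >= 12; shift -= 9) {
        uint64_t entry = phys_to_virt(table)[(vaddr >> shift) & 0x1FF];
        if (!(entry & PTE_PRESENT))
            return -1;                      /* translation does not exist */
        if (shift > 12 && (entry & PTE_PS))
            return -1;                      /* 2 MiB/1 GiB page: not handled */
        table = entry & ADDR_MASK;
    }

    *paddr = table | (vaddr & 0xFFF);       /* add the page offset */
    return 0;
}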
Your question raises a question in response: For what purpose do you need to know about virtual-to-physical address translations? Paging is supposed to be transparent to application programs, and it is rare to have a good reason to know the physical memory address corresponding to a particular virtual memory address.

Kinect SDK 2.0 has significantly less functionality than 1.8?

I miss several pieces of functionality that I believe were present in the previous SDKs.
For example:
//Getting a reference to the sensor(s)
//Old (1.8):
sensor = KinectSensor.KinectSensors[0];
//New (2.0):
sensor = KinectSensor.GetDefault();
//Does the latter not support multiple sensors?
I also miss the option to use multiple sensors to track skeletons:
https://msdn.microsoft.com/en-us/library/dn188677.aspx
Is this missing too?
With the new sensor there are increased hardware requirements, making multiple sensors more difficult, as Carmine Sirignano reported:
Only one Kinect for Windows v2 sensor is supported. It is both a runtime and a hardware issue: the available USB3 bandwidth can only support one sensor. You would need a system with multiple USB3 host controllers, with those host controllers on separate PCI Express 2.0 buses at a minimum.
And Wyck continues at the same link:
The Kinect uses a lot (more than half) of the available bus bandwidth to operate normally. So even though you could physically connect two or more sensors, there is no feasible way to have them both sustain enough of a data rate for them to operate normally using the existing hardware.
As a result of the hardware limitations, it appears that the runtime will only work with a single device, so the API was changed to reflect that.