MS Teams | Accessibility Insights | Dual Monitor - ui-automation

Objective: To test the accessibility behavior of MS Teams in a dual-monitor setup where the monitors are set to different scales, for example 100% and 125%, both at 1920x1080 resolution. The tool I use is Accessibility Insights.
Problem: Accessibility Insights is unable to locate MS Teams' elements correctly when I launch the Teams app on the monitor with 100% scale (which is also my primary monitor) and then move it to the monitor with 125% scale. The position of the identified element is off by about 280 from the top, and the Left value seems to be off by a factor of about 1.25, which I presume is due to scaling.
If I work on a single monitor at 125% (or any other scale), Accessibility Insights works fine on MS Teams.
What I read/understand: MS Teams is a per-monitor DPI aware app, and so is Accessibility Insights. If I enable GDI scaling (following "Improve High DPI Experience"), Accessibility Insights does locate the element as it should. Further, Accessibility Insights works well on "Display Settings" itself (the SystemSettings.exe process), which is also per-monitor DPI aware. This makes me presume that per-monitor DPI awareness is not correctly implemented in MS Teams.
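For reference, the DPI awareness a running process declares can be checked with GetProcessDpiAwareness; a rough ctypes sketch (the pid below is a placeholder, and note this API does not distinguish Per-Monitor v1 from v2):

```python
import ctypes

PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

kernel32 = ctypes.windll.kernel32
shcore = ctypes.windll.shcore
kernel32.OpenProcess.restype = ctypes.c_void_p
shcore.GetProcessDpiAwareness.argtypes = [ctypes.c_void_p, ctypes.POINTER(ctypes.c_int)]
kernel32.CloseHandle.argtypes = [ctypes.c_void_p]

def dpi_awareness_of(pid):
    """Return 0 (DPI unaware), 1 (system aware) or 2 (per-monitor aware)."""
    hproc = kernel32.OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, False, pid)
    value = ctypes.c_int()
    shcore.GetProcessDpiAwareness(hproc, ctypes.byref(value))
    kernel32.CloseHandle(hproc)
    return value.value

print(dpi_awareness_of(1234))  # 1234 is a placeholder: use the Teams.exe process id
```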
Questions:
Is my presumption correct that MS Teams doesn't work as expected on dual/multi-monitor setups, that is, that it does not scale up or down correctly across monitors with different scale factors?
Is there any way to get Accessibility Insights to work correctly on MS Teams without enabling GDI scaling/overriding the high-DPI scaling of MS Teams?
Is there an inherent problem with Accessibility Insights running against Electron applications? I observe a similar issue with Slack.
[Edit] Added results of using the Windows Automation API
The monitor where Teams runs is at 125% and 1920x1080, while my demo app is marked as per-monitor DPI aware and runs on the 100%, 1920x1080 monitor. Both monitors are 14 inches in size. The result shows the Left and Top of the root element (Teams' main window), as well as the Left and Top of the "Search" box at the top of Teams' title bar, as retrieved by the Automation API. Per Microsoft's documentation, the Automation API returns physical coordinates. Observations:
The physical mouse location reads X: 2455, Y: 10.
The Left and Top of the Search box element from the Automation API come out as 2935 and 280 respectively.
The value 2935, scaled down by 1.25, is 2348, which matches the physical mouse location on the Search box when I run my app in System DPI Aware or DPI Unaware mode. So the Left coordinate in per-monitor mode is a scaled-up version of the Left coordinate in System Aware or Unaware mode.
I cannot correlate the Top value of 280 with anything.
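For illustration, the Left correlation above can be reproduced with a small ctypes sketch; this is not part of Accessibility Insights or the Automation API, and GetDpiForMonitor is only used here to fetch the 1.25 scale factor programmatically:

```python
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
shcore = ctypes.windll.shcore
user32.MonitorFromPoint.restype = ctypes.c_void_p
user32.MonitorFromPoint.argtypes = [wintypes.POINT, wintypes.DWORD]
shcore.GetDpiForMonitor.argtypes = [ctypes.c_void_p, ctypes.c_int,
                                    ctypes.POINTER(ctypes.c_uint),
                                    ctypes.POINTER(ctypes.c_uint)]

MDT_EFFECTIVE_DPI = 0
MONITOR_DEFAULTTONEAREST = 2

def scale_at(x, y):
    """Display scale factor (1.0, 1.25, ...) of the monitor containing (x, y)."""
    hmon = user32.MonitorFromPoint(wintypes.POINT(x, y), MONITOR_DEFAULTTONEAREST)
    dpi_x, dpi_y = ctypes.c_uint(), ctypes.c_uint()
    shcore.GetDpiForMonitor(hmon, MDT_EFFECTIVE_DPI,
                            ctypes.byref(dpi_x), ctypes.byref(dpi_y))
    return dpi_x.value / 96.0

left_from_uia = 2935          # Left of the Search box from the per-monitor-aware run
scale = scale_at(2935, 280)   # 1.25 on the 125% monitor in this setup
print(left_from_uia / scale)  # 2348.0, the value seen in System Aware / Unaware mode
```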

We investigated this on the Accessibility Insights side and it looks to be an issue with Teams. We were able to verify this with Magnifier: we configured it to track keyboard focus and found that it too was inconsistent in identifying element locations (indicating a Teams problem). That is, some controls were tracked correctly while others were not.
Note: this happened even without a dual-monitor setup.

PageSpeed Insights went through an upgrade around 27th May 2020

https://developers.google.com/speed/pagespeed/insights/
On 27th May 2020 I used PageSpeed Insights and got a pretty good score for desktop (90+) and around 85+ for mobile, but as of 28th May 2020 the metrics seem to have changed drastically. I can see that PageSpeed has a new version (v6), but no proper release notes are provided at https://developers.google.com/speed/docs/insights/release_notes.
If anyone has faced a similar issue and found that Google PageSpeed did undergo an upgrade, please point me to some references if possible.
After some digging I managed to find the draft of the proposed scoring weights.
https://web.dev/performance-scoring/?utm_source=lighthouse&utm_medium=wpt
The large shift in scores is down to how the weightings have changed:
Lighthouse v6
First Contentful Paint 15%
Speed Index 15%
Largest Contentful Paint 25%
Time to Interactive 15%
Total Blocking Time 25%
Cumulative Layout Shift 5%
Lighthouse v5
First Contentful Paint 20%
Speed Index 27%
First Meaningful Paint 7%
Time to Interactive 33%
As you can see, there is a massive shift of emphasis towards Total Blocking Time (TBT), which mainly reflects JavaScript execution time, and towards when the Largest Contentful Paint (LCP) occurs, presumably because it indicates large shifts in the visible content that may be distracting and is a good indicator of when the above-the-fold content is fully loaded (as opposed to showing a 'spinner').
They have also added a third new metric, Cumulative Layout Shift (CLS), which works out how much the page layout 'moves around'. It has a low weighting at the moment, but I imagine it is part of a larger plan to capture late-loading assets that affect the above-the-fold content and may cause frustration (trying to click a link only to find an advert has loaded in and moved it, for example).
These large changes to the weightings and the introduction of new metrics are the cause of the big score decreases you may be experiencing.
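To see how much the re-weighting alone can move a score, here is a rough sketch of the final aggregation step. The real Lighthouse pipeline first maps each raw metric value to a 0-1 score via log-normal curves; the per-metric scores below are invented example values, and the keys are just labels for the v6 metrics listed above:

```python
# Weighted average of per-metric scores using the Lighthouse v6 weights above.
V6_WEIGHTS = {
    "first-contentful-paint": 0.15,
    "speed-index": 0.15,
    "largest-contentful-paint": 0.25,
    "interactive": 0.15,
    "total-blocking-time": 0.25,
    "cumulative-layout-shift": 0.05,
}

def overall_score(metric_scores, weights=V6_WEIGHTS):
    """Combine per-metric scores (each already in the range 0-1) into one score."""
    return sum(weights[m] * metric_scores[m] for m in weights)

# Invented example: good paint scores, poor Total Blocking Time.
example = {
    "first-contentful-paint": 0.95,
    "speed-index": 0.90,
    "largest-contentful-paint": 0.85,
    "interactive": 0.80,
    "total-blocking-time": 0.40,
    "cumulative-layout-shift": 1.00,
}
print(round(overall_score(example) * 100))  # 76 - the heavily weighted TBT drags it down
```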
I can confirm that my site, which used to score 99 or 100, now only scores 87, so it is indeed a large shift in how they score. They also now seem to take SVG rendering into account: my site is 100% SVG driven yet scores low on LCP, which is something they did not initially account for in the LCP stats.
For now, focus on the two articles I linked on TBT and LCP, as those are the new metrics they are choosing to emphasise, together making up 50% of your score.
Update
As the OP pointed out in the comments, the main changes for the new PSI v6 are located here.

Different values between PageSpeed Insights and Google Search Console - Speed (experimental)

PageSpeed Insights is a nice tool and does a good job, but when I analyze my website in it I get 96 for mobile and 98 for desktop, whereas when I look in Google Search Console (GSC) it rates my mobile site as moderate (presumably between 50 and 89) and the desktop site as "not enough data".
Why is there such a big difference between PageSpeed Insights and GSC? Is Google ranking my site poorly because GSC appears to give it a poor score? Does the location of my server make any difference to the score? Should it be near Search Console's servers to receive a better score/rank?
The issue you are experiencing comes down to how PSI processes data to calculate your score versus how Search Console does it.
If real-world data is available, Search Console will prioritise it over the simulated data when calculating your scores (which makes sense), whereas PSI will always use the speeds it calculates under 'lab data'.
The real-world data is more accurate, but you need to read it correctly to know how to improve.
The three bars (green, orange and red) show data as follows for First Contentful Paint (FCP) and First Input Delay (FID):
FCP green: less than 1 second
FCP orange: 1 to 3 seconds
FCP red: over 3 seconds
and
FID green: less than 100ms
FID orange: 100ms to 300ms
FID red: over 300ms.
These are calculated at the 75th percentile for FCP and the 95th percentile for FID (although not technically correct, think of it as: 3 in 4 visitors will have this experience or better for FCP, and 19 in 20 will have a better experience than shown for FID).
This is where your 'moderate' rating in Search Console comes from.
The average breakdown for FCP is around 23%, 58% and 19% respectively. You get 36%, 45% and 19%, so you are pretty close to the average.
It is a similar story for FID.
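Expressed as a quick sketch, the bucketing above looks like this (the input values are examples only; the 230ms FID is the one discussed further down):

```python
# Thresholds as listed above; a URL is rated by the bucket its percentile value falls into.
FCP_THRESHOLDS_MS = (1000, 3000)   # green < 1 s, orange 1-3 s, red > 3 s
FID_THRESHOLDS_MS = (100, 300)     # green < 100 ms, orange 100-300 ms, red > 300 ms

def bucket(value_ms, thresholds):
    fast, slow = thresholds
    if value_ms < fast:
        return "green (fast)"
    if value_ms <= slow:
        return "orange (moderate)"
    return "red (slow)"

print(bucket(1800, FCP_THRESHOLDS_MS))  # orange (moderate) -> 'moderate' in Search Console
print(bucket(230, FID_THRESHOLDS_MS))   # orange (moderate)
```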
What to look at
You have quite a lot of variance in FCP. There are many possible causes, but the most likely ones are:
You have a lot of visitors from other countries and aren't using a CDN (or at least not to its full potential).
The site is receiving spikes in traffic and your server is hitting capacity; check your resource and fault logs on the server.
Your JavaScript is CPU-heavy (the 230ms FID suggests it might be) and part of your page render depends on the JS loading. The simulated runs apply a 4x CPU slowdown; in the real world, some mobile devices can be 6-8 times slower than desktop PCs, so JS differences start to add up quickly.
Test it in the real world
Simulated tests are great but they are artificial at the end of the day.
Go and buy a £50 android device and test your site on 4G and 3G and see how the site responds.
Another thing to try is to open up Dev Tools and use the Performance tab. Set 'Network' to '3G' and 'CPU' to '6x slowdown', press the record button, refresh the page, and observe how the site loads.
If you have never used this tab before you may need to search for a couple of tutorials on how to interpret the data, but it will show JS bottlenecks and rendering issues.
Put some load-time monitoring JS into the page and use your server logs / server monitoring software. You will soon start to see patterns (is it certain screen sizes that have an issue? Is your caching mechanism failing under certain circumstances? Is your JS misbehaving on certain devices?).
All of the above have one thing in common: more data, to pinpoint issues that a synthetic test cannot find.
Summary / TL;DR
Search Console uses real-world data when you have enough of it; PSI always uses the lab data from the run you just completed.
PSI is a useful tool, but it is only there for guidance. If Search Console says your site is average, you need to examine your speed for bottlenecks using other real-world methods.

What is the expected Google home query request frequency for a device?

I have released a Google smart home app that supports various light bulb brands. One user has 5 Philips Hue light bulbs, and there are approximately 1360 query state requests per day for those 5 bulbs. Is this frequency of query requests common and expected for all devices?
That works out to roughly one query request per bulb every ~5 minutes.
https://developers.google.com/actions/smarthome/develop/process-intents#QUERY
It is normal for Google to periodically send QUERY intents to your service to ensure that the data in Home Graph is up to date. You can mitigate this process by making sure that you have implemented Report State to publish all relevant state changes to Google in real time, as this also directly updates the state in Home Graph.
The actual frequency is harder to pin down, as it relates not only to how often you report state for devices but also to user activity on those devices. Generally speaking, the more often you report state to Google, the less QUERY polling you should see.
We are also actively working on ways to reduce the need for QUERY polling, so in the future you should see this frequency drop, so long as you have Report State implemented for all your users' devices.
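For anyone looking for a starting point, a minimal Report State call looks roughly like this. This is only a sketch using google-auth and requests; the service account file, agentUserId, device id and state fields are all placeholders:

```python
# Report a device's current state to the Home Graph so Google does not need to QUERY for it.
from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

SCOPES = ["https://www.googleapis.com/auth/homegraph"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES)
session = AuthorizedSession(creds)

body = {
    "requestId": "some-unique-request-id",
    "agentUserId": "user-123",
    "payload": {
        "devices": {
            "states": {
                "hue-bulb-1": {"online": True, "on": True, "brightness": 65},
            }
        }
    },
}
resp = session.post(
    "https://homegraph.googleapis.com/v1/devices:reportStateAndNotification",
    json=body)
resp.raise_for_status()
```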

How to configure a specific PCIe device's link speed

I've been experimenting with some UEFI/kernel code and am working on the various PCI Express elements. I have obtained the MCFG ACPI table, enumerated all PCI devices into my own structures, and have access to all devices' MMIO regions and the full 4 KB configuration space.
For the specific PCIe device I have identified, I have walked the configuration space as follows:
Test the capability list bit and, assuming it is set,
start at offset 0x34 and follow the pointers until I find a PCI Express capability structure (ID = 0x10).
From there, register 0x0C (Link Capabilities) specifies the max link width as x16 and the max link speed as 3 (an index into the supported link speeds vector, equating to 8.0 GT/s, i.e. Gen3, which the device is capable of).
The Link Status register shows that the negotiated link width is x16; however, the link speed is 1 (2.5 GT/s).
What I've tried is using the Link Control 2 register to set the target link speed to 3, then setting bit 5 of the Link Control register to trigger retraining. I also disable autonomous link speed changes to ensure the device remains at the selected speed.
I then wait a short while (1 second, for testing), then poll the Link Status register waiting for the Link Training bit to clear. It seems to clear immediately regardless of the delay, and when I check the Link Status register again the link speed is still 1.
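To make the sequence concrete, here it is restated as a small sketch over a raw 4 KiB config-space image (Python purely for brevity; my actual code is not Python, and the sysfs path at the end is just a placeholder showing how such an image can be grabbed on a Linux box, as root, for comparison):

```python
import struct

def read16(cfg, off):
    return struct.unpack_from("<H", cfg, off)[0]

def read32(cfg, off):
    return struct.unpack_from("<I", cfg, off)[0]

def find_pcie_cap(cfg):
    """Walk the capability list at offset 0x34 looking for cap ID 0x10 (PCI Express)."""
    if not (read16(cfg, 0x06) & 0x10):       # Status register, Capabilities List bit
        return None
    ptr = cfg[0x34] & 0xFC
    while ptr:
        cap_id, nxt = cfg[ptr], cfg[ptr + 1]
        if cap_id == 0x10:
            return ptr
        ptr = nxt & 0xFC
    return None

def dump_link_regs(cfg):
    cap = find_pcie_cap(cfg)
    link_caps   = read32(cfg, cap + 0x0C)    # Link Capabilities
    link_ctl    = read16(cfg, cap + 0x10)    # Link Control (Retrain Link = bit 5)
    link_status = read16(cfg, cap + 0x12)    # Link Status
    link_ctl2   = read16(cfg, cap + 0x30)    # Link Control 2 (Target Link Speed = bits 3:0)
    print("max speed index :", link_caps & 0xF)
    print("max width       : x%d" % ((link_caps >> 4) & 0x3F))
    print("current speed   :", link_status & 0xF)
    print("current width   : x%d" % ((link_status >> 4) & 0x3F))
    print("target speed    :", link_ctl2 & 0xF)
    print("link training   :", bool(link_status & (1 << 11)))
    print("retrain bit set :", bool(link_ctl & (1 << 5)))

# Placeholder BDF; run as root, otherwise only the first 64 bytes are readable.
with open("/sys/bus/pci/devices/0000:01:00.0/config", "rb") as f:
    dump_link_regs(f.read())
```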
I have checked several of the registers for error notifications and haven't spotted anything yet.
Clearly I need to find the correct procedure for establishing a new link speed on the device; possibly I need to configure de-emphasis values or apply the same link speed settings to the upstream devices/bridges.
Any advice would be hugely appreciated.

How can I programmatically visualize a graph with manually positioned vertices on a headless machine?

I need to generate frames of animation to visualize a new graph layout algorithm. The frames will be generated on a server (50+ cores, 256GB RAM), but it is completely headless.
Every library that I have found wants to do the layout for me rather than allowing me to specify the position of each vertex manually. Perhaps I have overlooked something, but my Google-fu has failed me. My graphs are large; there will be millions of nodes and edges, and the graph structure will change over time, which is why I need to script the project.
I really don't want to waste a lot of time on the visualization, because that is not where the research is happening.
Does anyone know of a visualization tool that can work in a headless environment, that will allow me to specify the location of each vertex manually, and that can handle millions of nodes and edges quickly enough so that I can generate thousands of images?
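For context, what I mean by specifying positions manually is something along these lines: a toy sketch using matplotlib's Agg backend with made-up data. My concern is whether anything like this can scale to millions of edges and thousands of frames.

```python
import matplotlib
matplotlib.use("Agg")                        # render without a display
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.collections import LineCollection

def render_frame(positions, edges, path, size_px=1920, dpi=100):
    """positions: (N, 2) float array; edges: (M, 2) array of vertex indices."""
    fig, ax = plt.subplots(figsize=(size_px / dpi, size_px / dpi), dpi=dpi)
    ax.add_collection(LineCollection(positions[edges], linewidths=0.1, colors="gray"))
    ax.scatter(positions[:, 0], positions[:, 1], s=0.2, c="black")
    ax.set_axis_off()
    ax.autoscale()
    fig.savefig(path)
    plt.close(fig)

# Toy data: the positions would come from my own layout algorithm, not the library.
pos = np.random.rand(1000, 2)
edg = np.random.randint(0, 1000, size=(5000, 2))
render_frame(pos, edg, "frame_0000.png")
```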