Number of DOM nodes is significantly different between Chrome Lighthouse and the Performance Monitor

I measured the number of DOM nodes on this site using the following two tools:
Lighthouse
Performance Monitor in Chrome DevTools
As shown in the picture below, Lighthouse reports 1132 elements while the Performance Monitor reports 3631.
Why are they so different?
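For what it's worth, a rough way to sanity-check the two kinds of counts from the DevTools console is shown below. Neither snippet reproduces either tool's exact methodology (the Performance Monitor's live "DOM Nodes" counter is generally understood to also include nodes in iframes, shadow roots and detached subtrees that have not yet been garbage-collected), so treat this only as a back-of-the-envelope comparison:

// Count element nodes only in the main document (roughly what a "DOM elements" figure refers to).
console.log('elements:', document.getElementsByTagName('*').length);

// Count every node type (elements, text nodes, comments) in the main document.
const walker = document.createTreeWalker(document, NodeFilter.SHOW_ALL);
let nodeCount = 0;
while (walker.nextNode()) nodeCount++;
console.log('all nodes:', nodeCount);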

Related

MS Teams | Accessibility Insights | Dual Monitor

Objective: To test the accessibility behavior of MS Teams on a dual-monitor setup, with the monitors set to different scales, for example 100% and 125%, at 1920x1080 resolution. The tool I use is Accessibility Insights.
Problem: Accessibility Insights is unable to locate MS Teams' elements correctly when I launch the Teams app on the monitor with 100% scale, which is also my primary monitor, and then move it to the monitor with 125% scale. The position of the identified element is off by about 280 from the top, and the Left value seems to be off by about a factor of 1.25, which I presume is due to scaling.
If I work on a single monitor at 125% (or any other scale), Accessibility Insights works fine with MS Teams.
What I read/understand: I understand MS Teams is a per-monitor DPI-aware app, and so is Accessibility Insights. If I enable GDI scaling, following "Improve High DPI Experience", I do see that Accessibility Insights is able to locate the element as it should. Further, Accessibility Insights works well on "Display Settings" itself (the SystemSettings.exe process), which is also per-monitor DPI aware. This makes me presume that per-monitor awareness in MS Teams is not correctly implemented.
Questions:
Is my presumption correct that MS Teams doesn't work as expected on dual/multi monitors, that is, it doesn't scale up or down correctly across monitors with different scale factors?
Is there any way to get Accessibility Insights to work correctly on MS Teams without changing the GDI scaling/overriding the high-DPI scaling of MS Teams?
Is there an inherent problem with Accessibility Insights running against Electron applications? I observe a similar issue with Slack.
[Edit] Added results of using the Windows Automation API
The monitor where Teams runs is at 125% and 1920x1080, while my demo app is marked as per-monitor DPI aware and runs on the 100%, 1920x1080 monitor. Both monitors are 14 inches in size. The result shows the Left and Top location of the root [Teams' main window] element, as well as the Left and Top of the "Search" box at the top of the Teams title bar, as retrieved by the Automation API. As per Microsoft's documentation, the Automation API retrieves physical coordinates. Observations:
The physical location of the mouse is X: 2455 and Y: 10.
The Left and Top location of the Search box element from the Automation API come out as 2935 and 280 respectively.
The value of 2935, when scaled down by 1.25, is 2348, which matches the physical location of the mouse on the Search box when I run my app in system DPI-aware or DPI-unaware mode. So the Left coordinate in per-monitor mode is a scaled-up version of the Left coordinate in system-aware or unaware mode.
I cannot correlate the Top value of 280 with anything.
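To make the scaling relationship in the observations concrete, here is the arithmetic as a small sketch (the variable names and the assumption that the per-monitor value is simply the unaware value multiplied by the target monitor's scale factor are mine, not from any documentation):

const scaleFactor = 1.25;      // 125% display scaling on the Teams monitor
const perMonitorLeft = 2935;   // Left reported by the Automation API in per-monitor mode
const unawareLeft = perMonitorLeft / scaleFactor;
console.log(unawareLeft);      // 2348, matching the system-aware/unaware Left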
We investigated this on the Accessibility Insights end of things and it looks to be an issue with Teams. We were able to verify this with Magnifier: we configured it to track keyboard focus and found that it is also inconsistent in identifying the location of elements (indicating a Teams problem); some controls were tracked correctly while others were not.
Note: this was even without dual monitor setup.

Different values between PageSpeed Insights and Google Search Console - Speed (experimental)

I like your website and it does a good job, but when I analyze my website in PageSpeed Insights, I get a 96 for mobile and a 98 for desktop, whereas Google Search Console (GSC) rates my mobile site as moderate, presumably between 50 and 89, and the desktop as "not enough data".
Why is there that much of a difference between PageSpeed Insights and GSC? And is Google ranking my site poorly because GSC seems to be giving it a poor score? Does the location of my server make any difference to the score? Should it be near the Search Console's server to get a better score/rank?
The issue you are experiencing comes down to how PSI processes data to calculate your score versus how Search Console does.
If real-world data is available, Search Console will prioritise it over the simulated data when calculating your scores (which makes sense); PSI will always use the speeds it calculates under 'lab data'.
The real-world data is more accurate, but you need to read it correctly to know how to improve.
The three bars (green, orange and red) show data as follows for First Contentful Paint (FCP) and First Input Delay (FID):
FCP green: less than 1 second
FCP orange: 1 to 3 seconds
FCP red: over 3 seconds
and
FID green: less than 100ms
FID orange: 100ms to 300ms
FID red: over 300ms.
These are calculated at the 75th percentile for FCP and the 95th percentile for FID (although not technically correct, think of it as: 3 in 4 visitors will have this experience or better for FCP, and 19 out of 20 will have a better experience than shown for FID).
This is where your 'moderate' score in Search Console comes from.
The average breakdown for FCP is around 23%, 58% and 19% respectively. You get 36%, 45% and 19%, so you are pretty close to the average.
Similar story for FID.
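To make the percentile idea concrete, here is a tiny sketch of how a single 75th-percentile figure is pulled out of a spread of real-user FCP samples (the sample values below are invented); a fast lab run can easily sit well below the field's 75th percentile:

// Illustrative only: derive a 75th-percentile field value from real-user samples.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

const fcpSamplesMs = [800, 900, 950, 1200, 1400, 1700, 2100, 3600];
console.log(percentile(fcpSamplesMs, 75)); // 1700ms -> falls in the 1-3s "orange" bucket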
What to look at
You have quite a lot of variance on FCP. There are lots of possible causes for this, but the most likely ones are:
You have a lot of visitors from other countries and aren't using a CDN (or at least not to its full potential).
The site is receiving spikes in traffic and your server is hitting capacity; check your resource logs / fault logs on the server.
Your JavaScript is CPU-heavy (a 230ms FID suggests it might be) and part of your page render depends on the JS loading. The simulated runs apply a 4x CPU slowdown; in the real world, some mobile devices can be 6-8 times slower than desktop PCs, so JS differences start to add up quickly.
Test it in the real world
Simulated tests are great but they are artificial at the end of the day.
Go and buy a £50 Android device and test your site on 4G and 3G to see how it responds.
Another thing to try is to open DevTools and use the Performance tab. Set 'Network' to '3G' and 'CPU' to '6x slowdown', press the record button, refresh the page, and observe how the site loads.
If you have never used this tab before, you may need to search for a couple of tutorials on how to interpret the data, but it will show JS bottlenecks and rendering issues.
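If you want the same kind of throttled run scripted (for repeat measurements or CI), a rough sketch with Puppeteer and the Chrome DevTools Protocol might look like the following; the URL, latency and throughput numbers are placeholders rather than official '3G' figures:

// Approximate "slow network + 6x CPU slowdown" programmatically via CDP.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const client = await page.target().createCDPSession();

  await client.send('Network.enable');
  await client.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 400,                          // added round-trip latency in ms
    downloadThroughput: (400 * 1024) / 8,  // ~400 kbit/s down
    uploadThroughput: (400 * 1024) / 8,    // ~400 kbit/s up
  });
  await client.send('Emulation.setCPUThrottlingRate', { rate: 6 }); // 6x slowdown

  await page.goto('https://example.com', { waitUntil: 'load' });
  const nav = JSON.parse(
    await page.evaluate(() => JSON.stringify(performance.getEntriesByType('navigation')[0]))
  );
  console.log('domContentLoaded:', nav.domContentLoadedEventEnd, 'ms');
  console.log('load:', nav.loadEventEnd, 'ms');

  await browser.close();
})();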
Put some load-time monitoring JS into the page and utilise your server logs / server monitoring software. You will soon start to see patterns (do certain screen sizes have an issue? Is your caching mechanism failing under certain circumstances? Is your JS misbehaving on certain devices?).
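As a minimal sketch of the kind of monitoring JS meant here (the /rum endpoint and payload shape are invented, and a library such as web-vitals handles the edge cases more robustly):

// Report field FCP values as they are observed.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      navigator.sendBeacon('/rum', JSON.stringify({ metric: 'FCP', value: entry.startTime }));
    }
  }
}).observe({ type: 'paint', buffered: true });

// Report FID: the delay between the user's first input and its handler starting.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const fid = entry.processingStart - entry.startTime;
    navigator.sendBeacon('/rum', JSON.stringify({ metric: 'FID', value: fid }));
  }
}).observe({ type: 'first-input', buffered: true });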
All of the above have one thing in common: more data to pinpoint issues that a synthetic test cannot find.
Summary / TL;DR
Search Console uses real-world data when you have enough of it; PSI always uses lab data from the run you just completed.
PSI is a useful tool, but it is only there for guidance. If Search Console says your site is average, you need to examine your speed for bottlenecks using other real-world methods.

How to get the timestamp of performance profiling events in Chrome Dev Tools?

In Chrome DevTools, how can I get the timestamps of performance profiling events so that I can match them with network events in the Network tab?
You can use the Performance tab in DevTools.
Use the snapshot to profile the application.
Start using the application.
You can then analyse and save it.
To be honest, I don't know a way to get exact timestamps, but you can make approximations by sliding the cursors as you examine timings between sequences of events, approximate timings for paint on load, etc.
EDIT
Once you have profiled the application, you can get specific timings for network events by selecting the Network dropdown. It should show each event's execution time when you hover over it.
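As a complement, the Resource Timing API exposes the same underlying timestamps programmatically, which can be easier to line up with the Network tab than eyeballing the cursors (a minimal console sketch; cross-origin resources only report full detail when the server sends Timing-Allow-Origin):

// startTime and duration are milliseconds relative to performance.timeOrigin.
for (const entry of performance.getEntriesByType('resource')) {
  console.log(entry.name, {
    start: entry.startTime,
    ttfb: entry.responseStart - entry.requestStart,
    duration: entry.duration,
    absoluteStartMs: performance.timeOrigin + entry.startTime, // epoch time
  });
}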

Kubernetes dashboard: How to access more than 15 minutes of CPU and memory usage in the Web UI dashboard

The web UI dashboard (accessed by running 'kubectl proxy') is very useful and gives a great high-level overview of the cluster. However, the CPU and memory usage graphs seem to be hardcoded to display only the last 15 minutes. I am not able to find any setting that allows me to increase this, nor could I find any documentation on how to do so. Our team is exploring setting up Grafana/InfluxDB and other services to get more detailed metrics, but it would be nice if there were an option to increase the timeline in the web UI dashboard.

How is a Google PageSpeed round trip represented in the Chrome DevTools Network tab

In Google PageSpeed Insights, under the mobile render-blocking JavaScript and CSS section, the comments may refer to the number of server round trips before content is displayed.
What does a server round trip look like in the Chrome DevTools Network tab? When I look at the network overview, it looks like the TCP requests are grouped into blocks of 10. Is a round trip one of these blocks of 10? If so, is the round-trip measure only applicable to the start of a page load, since the blocks of 10 start to merge as the various elements take different times to load?
According to the official documentation,
Time spent waiting for the initial response, also known as the Time To First Byte. This time captures the latency of a round trip to the server in addition to the time spent waiting for the server to deliver the response.
This is displayed as a green bar (TTFB) in the resource timing views.
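For reference, the same figure can be read programmatically, which helps relate the green bar to the documentation's wording (a minimal console sketch):

// responseStart - requestStart is the "waiting (TTFB)" span: one network
// round trip plus the time the server spent preparing the response.
const [nav] = performance.getEntriesByType('navigation');
console.log('TTFB:', (nav.responseStart - nav.requestStart).toFixed(1), 'ms');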