Can I use PageSpeed Insights for my localhost website or offline? - web-testing

Can I use PageSpeed Insights for my localhost website or offline?

Yes.
Use the "Lighthouse" tab from your google chrome dev tools.
This is a great starter tutorial on how to do that:
https://www.youtube.com/watch?v=5fLW5Q5ODiE
Edit: user izogfif pointed out that the "Audit" tab has been replaced by "Lighthouse".

An alternative way to run Lighthouse
Although this is an old question, there is an alternative way to run Lighthouse (the engine behind PageSpeed Insights) locally that may be useful to people in some circumstances.
You can install the Lighthouse Command Line Interface (CLI) locally on your machine quite easily.
This gives you some significant advantages over using the "Lighthouse" tab in Developer tools.
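For example, assuming you already have Node.js and npm available, installing and running it against a local site is just a couple of commands (the port is a placeholder for wherever your dev server runs):

npm install -g lighthouse
lighthouse http://localhost:8080 --view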
Automation
Firstly, you can automate it. You could have it run on every significant change / commit to check you haven't broken anything.
Or, if you want to check every page on your website, you can automate that too; very useful if you have hundreds of pages.
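As a rough sketch of what that automation can look like, here is a minimal Node.js script using the lighthouse and chrome-launcher npm packages (the URL list is hypothetical; adjust it to your own pages):

// npm install lighthouse chrome-launcher
// run as an ES module (e.g. save as check.mjs)
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const urls = [
  'http://localhost:8080/',
  'http://localhost:8080/about', // hypothetical pages
];

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
for (const url of urls) {
  const result = await lighthouse(url, { port: chrome.port, output: 'json' });
  console.log(url, result.lhr.categories.performance.score);
}
await chrome.kill();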
Storing results
Secondly, you get the full JSON response (or a CSV or HTML report, your choice), so you can store some (or all) of the audit results in a database for each page and see whether any pages are performing poorly, and whether you are improving or ruining your page performance over time.
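To give an idea, here is a sketch of pulling a few numbers out of that JSON for storage; the audit IDs are standard Lighthouse ones, but field names can vary slightly between Lighthouse versions, the output file is just an example, and `result` is the object returned by lighthouse() in the script above:

import { appendFileSync } from 'node:fs';

const lhr = result.lhr; // the full Lighthouse result object
const row = {
  url: lhr.finalUrl,
  fetchedAt: lhr.fetchTime,
  performanceScore: lhr.categories.performance.score,
  firstContentfulPaintMs: lhr.audits['first-contentful-paint'].numericValue,
  totalBlockingTimeMs: lhr.audits['total-blocking-time'].numericValue,
};
// append one line of JSON per run, ready to load into a database later
appendFileSync('lighthouse-history.jsonl', JSON.stringify(row) + '\n');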
Customisation
You can also set your own parameters when running tests.
For example, I like to set my "cpuSlowdownMultiplier" very high (8 or 10), as I have a decent CPU and I want to catch any bottlenecks / long tasks that I might miss on slower devices. This is great for making you realise how sloppy your (my!) JavaScript is!
You can also pass headers, set cookies (slightly difficult at the moment but something they are working on) etc. before a run.
You can even use --disable-storage-reset to see how the site responds on a subsequent page visit, where the user has already cached images etc. (you can do this in the Lighthouse tab in Developer tools too, so maybe not that strong a reason).
Because you get the raw timings data you can also set your own criteria if you want.
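Putting those together, a customised run from the command line might look something like this (the header value is a placeholder, and exact flag syntax can vary slightly between Lighthouse versions):

lighthouse http://localhost:8080 --throttling.cpuSlowdownMultiplier=8 --extra-headers='{"Authorization":"Bearer TEST"}' --disable-storage-reset --output=json --output-path=./report.json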
Puppeteer
The icing on the cake is that you can use puppeteer (or similar) to automate complex tasks.
Let's say you want to check a page that is only accessible when you have logged in: use Puppeteer to log in, and then run Lighthouse.
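Here is a minimal sketch of that pattern, combining the puppeteer and lighthouse npm packages; the login URL and form selectors are hypothetical:

// npm install puppeteer lighthouse
import puppeteer from 'puppeteer';
import lighthouse from 'lighthouse';

const browser = await puppeteer.launch();
const page = await browser.newPage();

// log in first so the session cookie exists in this browser
await page.goto('http://localhost:8080/login');
await page.type('#email', 'test@example.com');
await page.type('#password', 'secret');
await Promise.all([page.waitForNavigation(), page.click('#submit')]);

// point Lighthouse at the same Chrome instance so it reuses the session
const port = Number(new URL(browser.wsEndpoint()).port);
const result = await lighthouse('http://localhost:8080/account', {
  port,
  output: 'html',
  disableStorageReset: true, // keep the logged-in state
});
console.log(result.lhr.categories.performance.score);
await browser.close();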
So which should I use?
I would advocate for the CLI if you are going to test regularly / want to automate testing, and the Developer tools version for quick and dirty checks / infrequent testing.
Personally it took me about an hour to install and get used to Lighthouse, but I also had to install and learn how to use the Node.js (npm) command line to install Lighthouse into my project (yes, I am a slow learner!).
If I didn't have to learn that, probably 5 minutes to install and run your first test.
It is actually really simple to use the CLI once you have it installed.
The only downside is that you need to update it every few months, whereas the browser version updates automatically. But even that can be a positive: if you are comparing results over time, sticking with an older version might be useful.
Oh and you can run it on remote sites as well, so you can test the production site automatically from your own machine (useful if you are located a long way from the PSI server and want an idea of how your site performs in your local community).
This is also really useful if you have a staging server that only allows whitelisted IP addresses and you want to test there (yet again, this can be done with the Developer tools Lighthouse, but the CLI is useful for bulk testing etc.).

Related

How to implement continuous migration for large website?

I am working on a website of 3,000+ pages that is updated on a daily basis. It's already built on an open source CMS. However, we cannot simply continue to apply hot fixes on a regular basis; we need to replace the entire system, and I anticipate needing to do so again every 1-2 years. We don't have the staff to work on a replacement system while the current one is being maintained, as that results in duplicate effort. We also cannot have a "code freeze" while we work on the new site.
So, this amounts to changing the tire while driving. Or fixing the wings while flying. Or all sorts of analogies.
This brings me to a concept called "continuous migration." I read this article here: https://www.acquia.com/blog/dont-wait-migrate-drupal-continuous-migration
The writer's suggestion is to use a CDN like Fastly. The idea is that a CDN allows you to switch between a legacy system and a new system on a URL basis. In theory, this sounds like an approach that would work. The article claims that you can do this with Varnish, but that Fastly makes the job easier. I don't work much with Varnish, so I can't really verify its claims.
I also don't know if this is a good idea or if there are better alternatives. I looked at Fastly's pricing scheme, and I simply cannot translate it into a specific price point. I don't understand these cryptic cloud-service pricing plans; they don't make sense to me. I don't know what kind of bandwidth the website uses, as another agency manages the website's servers.
Can someone help me understand whether using an online CDN would be better than using something like Varnish? Are there free or cheaper solutions? Can someone tell me what this amounts to, approximately, on a monthly or annual basis? Are there other, better ways to roll out a new website on a phased basis for a large site?
Thanks!
I think I do not have the exact answers to your question, but maybe my answer helps a little bit.
I don't think the CDN as such gives you the advantage; the point is that you have more than one system.
Changes to the code
In professional environments I'm used to having three different CMS installations. The first is the development system, usually on my PC. That system is used to develop extensions, fix bugs and so on, supported by unit tests. The code is committed to a revision control system (like SVN, CVS or Git). A continuous integration system checks the commits to the RCS. When a feature is implemented (or some bugs are fixed), a named tag is created. Then this tagged version is installed on a test system where developers, customers and users can test the implementation. After a successful test, exactly this tagged version is installed on the production system.
At first sight this looks time consuming. But it isn't, because most of the steps can be automated. And the biggest advantage is that the customer can test the change on a test system, so it is very unlikely that an error occurs only on your production system. (A precondition is that your systems are built on a similar/equal environment.)
Changes to the content
If your code changes the way your content is processed, it is an advantage when your CMS has strong workflow support. Then you can easily add a step to your workflow which decides whether the content is old and has to be migrated for the current document. This way you have a continuous migration of the content.
HTH
Varnish is a cache rather than a CDN. It intercepts page requests and delivers a cached version if one exists.
A CDN will serve up contents (images, JS, other resources etc) from an off-server location, typically in the cloud.
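Whatever tool ends up doing it (Varnish, Fastly, or something else), the "continuous migration" trick the question describes is just URL-based routing between two backends. As an illustration of the idea only, here is a tiny Node.js reverse-proxy sketch using the http-proxy package; the hostnames and the /blog path are placeholders:

// npm install http-proxy
import http from 'node:http';
import httpProxy from 'http-proxy';

const proxy = httpProxy.createProxyServer();
http.createServer((req, res) => {
  // send already-migrated sections to the new CMS, everything else to legacy
  const target = req.url.startsWith('/blog')
    ? 'http://newcms.internal'
    : 'http://legacy.internal';
  proxy.web(req, res, { target });
}).listen(8080);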
Pricing for cloud-based solutions is often very cryptic, as it's quite complicated technology.
I would be careful with continuous migration. I've done both methods in the past (continuous and full migrations) and I have to say, continuous is a pain. It means double the admin time for everything, and assumes your requirements are the same at all points in time.
Unfortunately, I would say you're better off with a proper rebuild on a 1-2 year basis than with a continuous migration, but obviously you know best about that.
I would suggest you maybe also consider a hybrid approach? Build yourself an export tool to keep all of your content in a transferable state like CSV/XML/JSON, so you can just import it into a new system when ready. This means you can incorporate new build requests when you need them in a new system (what's the point in a new system if it does exactly the same as the old one?) and you get to keep all your content. Plus you don't need to build and maintain two CMSs all the time.

How to A/B test an entire website design

We're building a new website design and instead of cutting over to it 100%, we'd like to ease into it so we can test as we go. The goal would be for users that visit http://oursite.com to get either the "old" website or the new one, and we could control the percentage of who gets the new site: 10%, 50%, etc.
I'm familiar with A/B tests for pages, but not an entire website domain. We're on a LAMP stack so maybe this can be done with Apache VHosts? We have 2 cloud servers running behind a cloud load balancer in production. The new site is entirely contained in an svn branch and the current production site runs out of the svn trunk.
Any recommendations on how I can pull this off?
Thank you!
You absolutely can do this, and it's a great way to quickly identify things that make a big difference in improving conversion rates. It's dependent on a couple of things:
Your site has a common header. You'll be A/B testing CSS files, so this will only work if there's a single CSS call for the entire site (or section of the site).
You're only testing differences in site design. In this scenario all content, forms, calls to action, etc. would be identical between the versions. It is technically possible to run separate tests on other page elements concurrently. I don't recommend this as interpretation of results gets confusing.
The A/B testing platform that you choose supports showing the same version to a visitor throughout their visit. It would be pretty frustrating for visitors to see the site's theme change every time they hit another page. Most A/B testing platforms that I've used have an option for this.
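If you end up rolling the split yourself rather than using a platform, sticky assignment is just a cookie check. A minimal sketch in Node.js/Express (the cookie name and the 10% share are arbitrary choices; your LAMP stack could do the same thing in PHP or at the load balancer):

// npm install express cookie-parser
import express from 'express';
import cookieParser from 'cookie-parser';

const app = express();
app.use(cookieParser());

const NEW_SITE_SHARE = 0.1; // 10% of first-time visitors get the new design

app.use((req, res, next) => {
  let variant = req.cookies.siteVariant;
  if (!variant) {
    variant = Math.random() < NEW_SITE_SHARE ? 'new' : 'old';
    // persist the assignment so the visitor keeps the same version
    res.cookie('siteVariant', variant, { maxAge: 30 * 24 * 3600 * 1000 });
  }
  res.locals.variant = variant; // downstream handlers render old or new accordingly
  next();
});

app.listen(3000);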
Since you're testing substantial differences between versions, I also recommend that you calculate sample sizes before you begin. This will keep you from showing the losing version to too many people, and it will also give you confidence in the results (it's a mistake to let tests run until they reach statistical significance). There are a couple of online calculators that you can use (VisualWebsiteOptimizer, Evan Miller).
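For a back-of-the-envelope check of those sample sizes, the standard two-proportion formula that calculators like Evan Miller's are based on looks roughly like this (the z-values below assume 95% confidence and 80% power; treat it as a sketch, not a replacement for a proper calculator):

// approximate visitors needed per variant to detect a lift from p1 to p2
function sampleSizePerVariant(p1, p2, zAlpha = 1.96, zBeta = 0.84) {
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p1 - p2) ** 2);
}

// e.g. baseline 5% conversion, hoping to detect a lift to 6%
console.log(sampleSizePerVariant(0.05, 0.06)); // ~8,100 visitors per variant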

Using a script to install a Windows Store app

Requirement: install a Windows Store app without requiring the user to nav to the Store and click Install. Perhaps a batch file. Perhaps a PowerShell script. Perhaps something else.
This is not a side-loading question; this is a public Windows Store question.
Scenario 1: Maybe my company has a new app in the Store that I want to push out to every single employee without requiring them to nav to the Store and click Install.
Scenario 2: Maybe my company has just subscribed to an online CRM (or something) and I want to push out the CRM client to every single employee without requiring them to nav to the Store and click Install.
Scenario 3: Maybe my company is hiring new employees and preparing new computers. In their first-time login script (or something) I want to ensure they have the apps important to my business, without requiring that they nav to the Store and click Install (perhaps several times).
Scenario 4: Maybe, my company is very virtualized, and we provision new VMs all the time. The VM performs fine, but bandwidth is our problem. To streamline the user experience, users logon and watch as the VM prepares itself for them by downloading and installing Windows Store Apps for them.
Please don't pick on the scenarios, I am just trying to give a possible use case.
Complication: I have been told (by people who know this sort of thing) that there is no built-in API to accomplish this. But we are developers. Nobody dares tell us something is impossible. But, if there isn't a built-in API, how could a network administrator or developer on a team solve this problem? I realize this question is somewhat brainstorming. But it gets asked over and over and over and over. I would like to provide a resource for others who might be considering the same scenario.
Hey, perhaps this is easy. Please share.
We have SCCM in our environment, and some PowerShell scripts are deployed in C:\Windows\CCM\SignedScripts that may be worth investigating. They are not SCCM-specific. The most relevant of the three is "C:\Windows\CCM\SignedScripts\installwindows8app.ps1". The script just passes parameters to Add-AppxPackage, though I am not sure how it would get the path to the .appx in the MS Store.
You can get the location of installed apps on a model machine with (Get-AppxPackage -Name "*").InstallLocation, but then you would need to repackage, store, deploy, and maintain them, which is not really the solution you were looking for.
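For reference, the repackage-and-sideload route that script hints at boils down to something like this (the path is hypothetical, and it assumes a signed package plus a machine configured to allow sideloading):

Add-AppxPackage -Path "C:\Packages\ContosoApp.appx"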
Between investigating how SCCM would do it with these scripts and digging in the installed apps, maybe someone runs across something.
Almost 8 years into the future and we are getting closer to an answer!
Recent versions of Windows 10 now come with "winget" and they've added some Microsoft Store support.
It seems hit and miss as to which apps I can install. Using Spotify as an example:
winget search "Spotify"
It's probably best to install via ID instead, though:
winget install 9NCBCSZSJRSB
Since it stops and asks if you want to agree to the terms and conditions, you can answer automatically via:
echo Y | winget install 9NCBCSZSJRSB
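As an aside, newer winget builds can also skip the prompt with agreement flags rather than the echo trick (check winget install --help on your version, since the available flags have changed over time):

winget install 9NCBCSZSJRSB --accept-package-agreements --accept-source-agreements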
As one of my references states: "What’s interesting is that if you have the Microsoft Store open at the same time as running winget install, you’ll see the install progress updating in real time in both the command line window and the Store GUI."
There is plenty left to be desired with this answer, since most automated installs will probably just run into the error "Verifying/Requesting package acquisition failed: no store account found". But if you're able to run it as the user, you might have more luck. I'd love to see this tool get more polished so it can actually accomplish all the scenarios you listed. As Microsoft updates the tool, we can update this answer accordingly.
References and notes:
There's also the unlisted option --scope user or --scope machine, found via: https://aka.ms/winget-settings
This page was helpful in my discovery of the tool: https://petri.com/how-to-programmatically-install-microsoft-store-apps-using-windows-package-manager
If you want to change and compile winget on your own, the source is here: https://github.com/microsoft/winget-cli/
Isn't this what Intune is for? http://www.microsoft.com/en-us/server-cloud/products/windows-intune/default.aspx#fbid=CFXRSOlwIM2

Crossrider extensions - are they safe?

I have started writing an extension using Crossrider and really like it. But I have read some negative things about it being a browser hijacker, in particular search.crossrider.com.
I am unsure if search.crossrider.com is a malicious extension built using Crossrider, or Crossrider itself. Among other places, this is a link which recommends you delete it:
http://forums.anvisoft.com/viewtopic-45-1190-0.html
Before I continue developing in this, I thought I would ask the experts.
Any comments, gratefully received.
Thanks
Crossrider is very safe to use!
We had some incidents in the past where developers tried to write malicious extensions using our framework, but through our security cooperation with Google and Facebook we managed to mitigate them (and the fact that we are a cloud-based solution allows us to remotely disable any malicious use that is against our T&C).
Besides being very safe, Crossrider is also a free, must-have tool for any extension developer. (There are more than 20,000 developers in the Crossrider community.)
Crossrider not only provides the technical solution of building the API to support all major browsers, including Chrome, Firefox, Internet Explorer and Safari, but also gives developers further tools and features to solve and simplify all the heavy-lifting tasks when it comes to developing and publishing browser extensions:
Full statistical dashboards with information on the number of installations, active users and uninstalls, which can be broken down to per-country and per-browser usage.
Crossrider provides an online IDE, so developers can start coding extensions online in a matter of seconds. The developer doesn't need to download any development packages to their computer (unless they really want to), and as you develop, you see your code changes take effect in real time in your browser.
Another feature is Crossrider's auto code update mechanism, whereby any code change (including new releases or bug fixes) is published to all existing users (and new ones, obviously) in a matter of hours, regardless of whether you have several users or millions.
Crossrider also provides advanced publishing tools such as an embeddable installation widget for your website, direct download links and more.
One of those publishing tools is the Advanced Windows Installer that can install your extension on all browsers automatically. This installer can be easily configured to the developer's needs, and we even supply an automated Code Signing Service where developers can sign their installers in real time.
24/7 Support - we are really keen on our support. We always strive to keep our response time to a minimum, and we treat the smallest developer as if they were our most important client. We even try to help developers with questions that are not 100% Crossrider-related, as we also believe in good karma :)
Hope this helps with your decision about working with Crossrider.
P.S. I'm not 100% sure about the search.crossrider.com thread you mentioned, but as we do not have any affiliation with this subdomain (in fact, it does not even exist in our DNS records), you can rest assured it has nothing to do with Crossrider as a framework.
(Disclosure: I work for Crossrider)

Ideas on setting up a version control system

I've been tasked with setting up a version control for our web developers. The software, which was chosen for me because we already have other non-web developers using it, is Serena PVCS.
I'm having a hard time deciding how to set it up, so I'm going to describe how development happens in our system, and hopefully it will generate some discussion on how best to do it.
We have 3 servers, Development, UAT/Staging, and Production. The web developers only have access to write and test their code on the Development server. Once they write the code, they must go through a certification process to get the code moved to UAT/Staging, then after the code is tested thoroughly there, it gets moved to Production.
It seems like making the developers use version control for their code on Development, which they are constantly changing and testing, would be an annoyance. Normally only one developer works on a module at a time, so there isn't much, if any, risk of overwriting other people's work.
My thought was to have them only use version control when they are ready to go to UAT/Staging. This allows them to develop and test without constantly checking in their code.
The certification group could then use the version control system to see what changes had been made to a module and to make sure they were always getting the latest revision from the developer to put up on UAT/Staging (currently we rely on the developer zipping up their changed files and uploading them via a web request system).
This would take care of the file side of development, but leaves the whole database side out of version control. That's something else that I need to consider...
Any thoughts or ideas would be greatly appreciated. Thanks.
I would not treat source control as an annoyance. See Nick's answer for the reasons.
If I were you, I would not decide this on my own, because it is not a matter of setting up version control software on some server but a matter of changing and improving development procedures.
In your case, it might be worth explaining and discussing release branches with your developers and with quality assurance. This means that your developers decide which features to include in a release, and while the staging crew is busy testing the "staging" branch of the source, your developers can already work on the next release without interfering with the staging team.
You can also think about feature branches, which means that there is a new branch for every specific new feature of the web site. Those branches are merged back once the feature is implemented.
But again: make sure that your teams have agreed to the new development process. Otherwise, you waste your time setting up a version control system.
The process should at least include:
When to commit.
When to branch/merge.
What/When to tag.
The overall workflow.
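Serena PVCS has its own commands and terminology for all of this, but just to illustrate the release-branch flow generically (shown here in Git purely as an illustration, with made-up version numbers):

# cut a release branch for the certification/staging team to test
git checkout -b release/2.1

# developers keep working on the next release on the main line
git checkout main

# after certification, tag exactly what was tested and deploy that tag
git tag -a v2.1.0 release/2.1 -m "Certified for production"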
I have used Serena, and it is indeed an annoyance. In addition to the unpleasantness of the workflow overhead Serena puts on top of the check-in/check-out process, it is a real pain with regard to doing anything besides the simplest of tasks.
In Serena ChangeMan, all code on local machines is managed through a central server. This is a really bad design. It means a lot of day-to-day branch maintenance work that would ordinarily be done by developers has to go through whoever has administrator privileges, making that person 1) a bottleneck and 2) embittered because they have a soul-sucking job.
The centralized management also strictly limits what developers are able to do with the code on their own machine. For example, if you want to create a second copy of the code locally on your box, just to do a quick test or whatever, you have to get the administrator to set up a second repository on your box. When you limit developers like this, you limit the productivity and creativity of your team.
Also, the tools are bad and the user interface is horrendous. And you will never be able to find developers who are already trained to use it, because it's too obscure.
So, if another team says you have to use Serena, push back. That product is terrible.
Using source control isn't an annoyance; it's a tool. Having the benefits of branching and tagging is invaluable when working with new APIs and libraries.
And just a side note: a couple of months back, one of the devs' machines failed and he lost all his newest source. We asked when he had last committed code to source control, and it was two months earlier. Sometimes just having it to back things up when you reach milestones is nice.
I usually commit to source control a couple of times a week, depending if I've hit a good stopping point and I'm about to move on to something different or bigger.
Following on from the last two good points, I would also ask your other non-web developers what development process they are using, so you won't have to create a new one. They will also have encountered many of the problems that occur in your environment, both technical (using the same OS and setup) and managerial.