A/B testing a controller method? - ab-testing

I recently worked on refactoring a class for my company's web site which in essence assembles a user profile. Say the old class is called User::Profile::OldImpl and my refactored version is User::Profile::NewImpl. The two classes have exactly the same public interface and are expected to behave identically. The new class is just much easier to read and more efficient--or such was my intention.
I changed the place where the profile is instantiated from this:
profile = User::Profile::OldImpl.new(...)
...to this:
profile = User::Profile::NewImpl.new(...)
During code review, my reviewer flagged this line and told me to add an A/B test here. By this I suppose he means to sometimes use the old class and sometimes use the new class.
I'm mightily confused. I understand "A/B testing" to mean showing different versions of a web page to end users and tracking which version generates the best response, but my code is meant to be invisible to end users--it doesn't touch the view at all. I've searched around and haven't been able to find a discussion of A/B testing that relates to backend code like mine, and I can't imagine why my change merits an A/B test when the great majority of checkins don't.
The review was submitted near end-of-day Friday, and I'm obviously going to chat with the reviewer first thing tomorrow, but I was hoping to first learn from the SO community whether A/B testing can legitimately be applied to a change like mine, and if so, how.

Your reviewer is worried that, despite there being no visible changes, your implementation may affect some important business metrics. What he wants is for you to prove empirically that there's in fact no change from the user's perspective, i.e. that your A/B test is actually an A/A test. I'd ask him what KPIs he has in mind.
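As a rough sketch of what such a test could look like: route each user deterministically to one implementation, record the assignment, and segment your KPIs by variant. The factory module, bucketing scheme, and percentage below are all assumptions for illustration; the stub classes stand in for the real ones from the question.

```ruby
require 'digest'

module User
  module Profile
    OldImpl = Struct.new(:user_id)  # stand-in for the real class
    NewImpl = Struct.new(:user_id)  # stand-in for the real class
  end
end

module ProfileFactory
  NEW_IMPL_PERCENT = 50  # share of users routed to the refactored class

  # Hash the user id so a given user always lands in the same bucket
  # across requests, instead of flipping between implementations.
  def self.bucket_for(user_id)
    Digest::MD5.hexdigest("profile-ab:#{user_id}").to_i(16) % 100
  end

  def self.build(user_id)
    # In a real experiment you would also log the assignment here,
    # so downstream KPIs can be compared between the two variants.
    if bucket_for(user_id) < NEW_IMPL_PERCENT
      User::Profile::NewImpl.new(user_id)
    else
      User::Profile::OldImpl.new(user_id)
    end
  end
end
```

If the metrics for the two buckets are statistically indistinguishable, the "A/B" test has behaved as an A/A test, which is exactly the evidence the reviewer seems to be asking for.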


How broad does an action have to be to be allowed to be published?

Imagine I own a small dog grooming business in a small town. Imagine my company is called "Happy Dog" (I'm making this up). Can I write an action that could then be published as "Happy Dog" such that when a user says "Talk to Happy Dog" it will direct users to my action to find out store hours, make bookings and learn prices?
I am not asking about the technical characteristics of such an action. I'm imagining that if I build the action and submit it to Google, they will have to enter it into their database to cause it to be triggered. I'm assuming that this will then trigger its presence "globally".
When I look at https://assistant.google.com/explore/ ... I see nothing that would seem to show small businesses and other such actions being visible. This implies to me that Google rejects such submissions. Is it easy or difficult to get a new assistant action registered with Google? Is there any reading material I should be studying to learn about publishing new actions?
That is correct: we can think of Actions as being published globally (there are caveats to that, but they're ignorable for your question). That said, Actions should be broad enough to be useful, but they can certainly be of interest to only a narrow audience.
As an analogy - this is similar to how web sites are published globally. The invocation name is a rough parallel to the domain name for a web site (with the exclusion of the ".com" or whatever TLD you use). Google acts as the sole domain registrar, if we wish to extend this analogy. And while it does enforce certain rules about naming and the directory entry there is no restriction specifically about local or small businesses.
However... you do start running into naming conflicts and trademark issues. For example, you probably can't get "MacDonald's" because it is too close to a trademarked name. One-word invocation names aren't allowed unless you can verify that you also own the corresponding domain name.
To continue the analogy using your example, if you started your "Happy Dog" grooming company, you may try to create a web site and discover that "happydog.com" was already taken. So perhaps you would go with "HappyDogGrooming.com" or "HappyDogSpringfield.com" or something else. In the same way, if "talk to Happy Dog" was already taken, you may need to register "talk to Happy Dog Grooming" or something similar.
It is not difficult to get new Actions published, although you do need to make sure you follow the rules. The review process mostly makes sure you have created a good conversational experience that actually works and does not confuse users. Sometimes there is a bit of back-and-forth with the review team to resolve issues.

Getting up to speed on current web service design practices

I'm admittedly unsure whether this post falls within the scope of acceptable SO questions. If not, please advise whether I might be able to adjust it to fit or if perhaps there might be a more appropriate site for it.
I'm a WinForms guy, but I've got a new project where I'm going to be making web service calls for a Point of Sale system. I've read about how CRUD operations are handled in RESTful environments, where GET/PUT/POST/etc. represent their respective CRUD counterparts. However, I've just started working on a project where I need to submit my requirements to a developer who'll be building a web API for me to use, and he tells me that this isn't how the big boys do it.
Instead of making web requests to create a transaction followed by requests to add items to the transaction in the object-based approach I'm accustomed to, I will instead use a service-based approach and just make a 'prepare' checkout call in order to see the subtotal, tax, total, etc. for the transaction with the items I currently have on it. Then when I'm ready to actually process the transaction I'll make a call to 'complete' checkout.
I quoted a couple words above because I'm curious whether these are common terms that everyone uses or just ones that he happened to choose to explain the process to me. And my question is, where might I go to get up to speed on the way the 'big boys' like Google and Amazon design their APIs? I'm not the one implementing the API, but there seems to be a little bit of an impedance mismatch in regard to how I'm trying to communicate what I need and the way the developer is expecting to hear my requirements.
I'm not sure about the specifics of your application, though your general understanding seems OK. There are always corner cases that test the norm, though.
I would advise that you listen to your dev team on how things should be implemented and just provide the "whats" (requirements). They should be trusted to know best practice and your company's own interpretation and standards (right or wrong). If they don't meet your requirements (for example, the result isn't easy to use or can't easily be reused as requirements expand), then you can review why with an architect or dev manager.
However, if you are interested and want to debate and perhaps understand, check out Atlassian's best practice here: https://developer.atlassian.com/plugins/servlet/mobile#content/view/4915226.
FYI: Atlassian makes leading dev tools in use at very large companies. Note also that these best practices came out of refactoring, meaning they've been through the mill and know what has worked and what hasn't.
FYI2 (edit): Reading between the lines of your question, I think your dev is basically telling you how transactions are typically managed in REST. That is, you don't typically begin, add, and end a transaction over several requests. Instead, everything that is transactional is rolled into a transaction wrapper and POSTed to the server as a single request.
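To make the 'prepare'/'complete' shape concrete, here is a minimal sketch of what the client side of such a flow could look like. The endpoint paths, field names, and payment token are all invented for illustration; the real API your developer builds will differ.

```ruby
require 'json'

class CheckoutClient
  # In a real client these methods would POST over HTTP; here we just
  # build and return the request, to show the single-transaction shape:
  # the whole cart travels in one payload rather than item by item.
  def prepare(items)
    request('/checkout/prepare', items)  # server would reply with subtotal, tax, total
  end

  def complete(items, payment_token)
    request('/checkout/complete', items, payment_token: payment_token)
  end

  private

  def request(path, items, extra = {})
    {
      path: path,
      body: JSON.generate({ items: items }.merge(extra))
    }
  end
end

cart   = [{ sku: 'SKU-1', qty: 2 }, { sku: 'SKU-2', qty: 1 }]
client = CheckoutClient.new
quote  = client.prepare(cart)             # one request carries the entire cart
order  = client.complete(cart, 'tok_123') # second call actually processes it
```

Note that the server never holds a half-built transaction between calls, which is the point of the wrapper approach: each request is complete and self-describing.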

Speccing out new features

I am curious as to how other development teams spec out new features. The team I have just moved up to lead has no real specification process. I have just implemented a proper development process with CI, auto deployment and logging all bugs using Trac and I am now moving on to deal with changes.
I have a list of about 20 changes to our product to be done over the next two months. Normally I would just spec out each change, going into detail about what should be done, but I am curious how other teams handle this. Any suggestions?
I think we had a successful approach in my last job as we delivered the project on time and with only a couple of issues found in production. However, there were only 3 people working on the product, so I'm not entirely sure how it would scale to larger teams.
We wrote specs upfront for the whole product but without going into too much detail and with an emphasis on the UI. This was a means for us to get a feel for what had to be done and for the scope of the project.
When we started implementing things, we had to work everything out in a lot more detail (and inevitably had to do some things differently from the spec). To that end, we got together and worked out the best approach to implementing each feature (sometimes with prototypes). We didn't update the original spec but we did make notes after the meetings as it's very easy to forget the details afterwards.
So in summary, my approach is to treat specs as an exploratory tool and to work out finer details during implementation. Depending on the project, it may also be a good idea to keep the original spec up to date as the application evolves (which we didn't need to do this time).
Good question, but it can be subjective. I guess it depends on the strategy of the product: whether it's deployed to multiple clients in the same way or to a single client on a bespoke project, the impact and dependencies these changes have on the system and on each other, and the priority with which the changes need to be made.
I would look at priority and dependency; that will naturally start to group things.

What is your iPhone app testing strategy?

Before submitting to the App Store, it is a good idea to test the app thoroughly one more time. I tend to install my app on a device and give it to a friend for a while. Then I take the feedback and change my app accordingly.
I'd like to know what your testing strategies are.
Write a test plan. If you don't have experience with doing that, start with a list of every feature and UI control in the application.
Write down a simple set of steps that could be followed to determine whether or not each feature is working correctly.
Two major points:
Use unit testing. You can use Google Toolbox for Mac for that or just roll your own.
User testing, well, it's user testing. A colleague of mine designed a 50-point walkthrough/questionnaire of the app and had some 10-20 people do it -- and then repeated certain parts when we made changes to certain sections.
You are talking about two different things: defect testing and usability testing.
Or I think you may be. The other answers are about defect testing; your approach sounds like usability testing, or a mix of both.
Defect testing is about finding errors in your code. Other people have responded about this:
Have unit tests but don't rely on them
User testing - firstly by you. Think about your code and what might break it. Pound on controls, paste a zillion lines of text into your editor
Have other people who are not familiar with the code use the app
Use tools like ObjectAlloc and the Clang static analyzer to find non-functional defects
In my mind, testing is not about tools but about attitude: how hard you look for defects and how honest you are about reporting your own.
You should also have a good defect tracking system to keep a handle on them.
Usability testing is more difficult. People do not understand their own thought process when interacting with software.
A good (cheap) approach is to give the software to a friend and ask them to speak out loud about what they are thinking. Then you get statements like "I see this screen but I don't know what to press" (you need to add help or cues), or "I'm not sure if deleting this worked" (you need to add feedback), etc.
You can buy very sophisticated tools to help with user testing but this approach gets a long way there.
First, I do functional testing to check that every feature works correctly. Then I do system testing to check the interactions between features, and I perform exploratory testing.
At the end, I run a focus group, which represents the users, to get feedback on usability. Actually, a focus group works best if it is held both at the beginning and at the end of development: the first session gathers feedback on the user interface design, and the second on the real application.
For a serious professional app that you plan on making money with -- first you do in-house "white box" alpha testing with Instruments, etc., then you hire a professional quality assurance testing company to do "black box" functional beta testing, and then you hire a professional usability testing company to do user testing on live guinea pigs with video surveillance.
In terms of unit testing, I have found that GHUnit and OCMock are two very good tools. Especially GHUnit, because it comes with its own test runner, which will run on the device or in the simulator.
I would first install Crashlytics, so that if anyone you give the app to has issues, you can see exactly what is going on. Then you could install HockeyKit, so you can push new updates just to the beta users. Those are my suggestions.
https://www.crashlytics.com/
https://github.com/TheRealKerni/HockeyKit

How to write a spec for a website

As I'm starting to develop for the web, I'm noticing that having a document between the client and myself that clearly lays out what they want would be very helpful for both parties. After reading some of Joel's advice, I've concluded that doing anything without a spec is a headache, unless of course you're billing hourly ;)
For those who have experience:
What is a good way to extract all the information possible from the client about what they want their website to do and how it should look? What are good ways to avoid feature creep?
What web-specific requirements should I be aware of? (Graphic design, perhaps?)
What do you use to write your specs in?
Anything else one should know?
Thanks!
PS: to the "StackOverflow purists": if my question sucks, I'm open to feedback on how to improve it rather than down-votes and "your question sucks" comments.
Depends on the goal of the website. If it is a site to market a new product being released by the client, it is easier to narrow down the spec; if it's a general site, then there's a lot of back and forth.
Outline the following:
What is the goal of the site / re-design.
What is the expected raise in customer base?
What is the customer retention goal?
What is the target demographic?
Outline from the start all the interactive elements - flash / movies / games.
Outline the IA (information architecture): sit down with the client and outline all the sections they want. Think about how to organize it and bring it back to them.
Get all changes in writing.
Do all spec preparation before starting development to avoid last minute changes.
Some general pointers
Be polite, but don't be too easy-going. If the client is asking for something impossible, let them know that in a polite way. Don't say YOU can't do it, say it is not possible to accomplish that in the allotted time and budget.
Avoid making comparisons between your ideas and big name company websites. Don't say your search function will be like Google, because you set a certain kind of standard for your program that the user is used to.
Follow standards in whatever area of work you are in. This will make sure that the code is not only easy to maintain later but also less likely to harbor bugs.
Stress accessibility to yourself and the client; it is a big thing.
More stuff:
Do not be afraid to voice your opinion. Of course, the client has the money and the decision at hand whether to work with you - so be polite. But don't be a push-over, you have been in the industry and you know how it works, so let them know what will work and what won't.
If the client stumbles on your technical explanations, don't assume they are stupid, they are just in another industry.
Steer the client away from cliches and buzz words. Avoid throwing words like 'ajax' and 'web 2.0' around, unless you have the exact functionality in mind.
Make sure to plan everything before you start work as I have said above. If the site is interactive, you have to make sure everything meshes together. When the site is thought up piece by piece, trust me it is noticeable.
One piece of advice that I've seen in many software design situations (not just web site design) relates to user expectations. Some people manage them well by giving the user something to see, while making sure that the user doesn't believe that the thing they're seeing can actually work.
Paper prototyping can help a lot for this type of situation: http://en.wikipedia.org/wiki/Paper_prototyping
I'm with the paper prototyping too, but I use iplotz.com for it, which has been working out fine for us so far.
It makes you think about how the application should work in more detail, and thus makes it less likely to miss out on certain things you need to build, and it makes it much easier to explain to the client what you are thinking of.
You can also ask the client to use iplotz to explain the demands to you, or cooperate in it.
I also found that searching Google for client questionnaires is a good way to generate more ideas:
Google: "web client questionnaire"
There are dozens of PDFs and other forms to learn from.