Launching new products without an experiment - A/B testing

How can you launch a new product if you can't run an experiment? Or how can you adapt a metric so you can run an experiment?
Example in this link: https://hbr.org/2018/11/using-experiments-to-launch-new-products
Uber wanted to launch Express Pool, so they did the typical A/B testing and compared metrics, but in that case they had metrics to compare before and after launching the product (revenue per user, average trips per user, etc.).
But what if this is a completely new product? Example: Uber trying to launch a Wallet?
If I don't have a counterfactual, what can I do?

There are multiple things you can do before launching a new product.
You can run surveys for different groups of users, asking them about their needs.
The scope of this survey is to identify problems that you can then solve with your product. This is a very early-stage tactic you can use to determine whether your product has a potential fit in the market.
You can create pitches and crowdfunding campaigns.
The scope of these is to determine whether there is potential demand for your solution in the market. You are basically starting to sell in the idea phase, before even building anything. Note that you don't want to scam people here; you are just trying to determine whether there is potential in the market.
You can launch an alpha or beta pre-release version of the product.
The scope of this pre-release is to invite a few users into your early application and get their feedback. Based on the feedback you get here, you can improve, change, or update your product before launching it.
You can launch an MVP (minimum viable product) and then track KPIs in the real world. The MVP can give you enough information to know where to go. Just make sure that you are tracking the right KPIs; a sketch of what that tracking might look like follows below.
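For illustration, here is a minimal sketch of KPI event tracking, assuming a simple in-memory event log and made-up event names; a real MVP would ship these events to an analytics backend instead.

```typescript
// Minimal KPI-tracking sketch (event names and storage are illustrative;
// a real MVP would send events to an analytics backend).
type KpiEvent = {
  userId: string;
  name: string;       // e.g. "signup", "trip_completed"
  value?: number;     // e.g. revenue amount
  timestamp: number;
};

const events: KpiEvent[] = [];

function track(userId: string, name: string, value?: number): void {
  events.push({ userId, name, value, timestamp: Date.now() });
}

// Example KPI: average trips per user, computed from the raw events.
function avgTripsPerUser(): number {
  const trips = events.filter(e => e.name === "trip_completed");
  const users = new Set(trips.map(e => e.userId));
  return users.size === 0 ? 0 : trips.length / users.size;
}

track("u1", "trip_completed");
track("u1", "trip_completed");
track("u2", "trip_completed");
console.log(avgTripsPerUser()); // 1.5
```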
Good luck!

You don't always need to come up with a specific hypothesis and validate it. Sometimes it's best to understand and quantify how a new feature affects the overall health of your product. Many times in the past, when we exposed a new feature to a small population of users, we'd quickly figure out whether everything was working as expected or whether there were unforeseen consequences.
This is hard to do without the right tooling. One tool that provides a holistic view of the product's health is Statsig, which shows you what to expect when you build and roll out new features (without having to set up a formal A/B experiment).
Disclaimer: I work at Statsig
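To make the idea concrete, here is a minimal sketch of a percentage-based rollout gate; the helper below is hypothetical and is not Statsig's actual API. A stable hash of the user ID keeps each user's experience consistent across sessions.

```typescript
// Hypothetical percentage-rollout gate (not Statsig's actual API).
import { createHash } from "crypto";

function isFeatureEnabled(userId: string, feature: string, rolloutPercent: number): boolean {
  // Hash user + feature so different features bucket users independently,
  // and the same user always lands in the same bucket.
  const digest = createHash("sha256").update(`${feature}:${userId}`).digest();
  const bucket = digest.readUInt32BE(0) % 100; // 0..99
  return bucket < rolloutPercent;
}

// Expose the new feature to 5% of users, then watch core health metrics
// (crash rate, latency, engagement) before widening the rollout.
if (isFeatureEnabled("user-42", "new-wallet", 5)) {
  // render the new feature
}
```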

Related

CMS for martial arts membership management, or build my own?

While I found quite a few interesting suggestions on this site (the typical WP vs. Joomla), I just couldn't find an answer that would help me get started.
I know this is close to some of the other CMS questions, but I'm missing specifics that need answering.
I'm looking for a CMS that can provide me with the following key functionalities, either through minimal programming or additional plugin installations. I'm stating this because it won't just be me, who can program, handling the site; other trainers who are not technically inclined will also handle it (in the future).
The functionalities I'm looking for:
Schedule management of training
Trainees of the club must check in before or after the training to prove attendance, so the site must be mobile friendly. This is more proof-of-concept, since not everyone has/wants a smartphone.
Each trainee has his own profile that logs said attendance
Possibility to provide feedback on training. For example: give a thumbs up on the last training, or give a "yellow card" if the trainee misbehaved; two/three/four cards and you're prohibited from training once/twice/thrice.
The attendance allows the trainee to become eligible for the next exam
Schedule management of said exam
Yearly subscription reminders for the trainees, plus required parent information if the trainee is underage
Management of trainee profiles and subscriptions
Is the above possible through a CMS or is it too specific and will I need to program this myself? Either is fine by me but I'd first like to find out if a CMS can offer this.
I've decided to go for a custom solution using ReactJS.
There are very good open-source solutions for the admin part, and the open/client part is fairly simple, so React is perfect for what I want to achieve. Additionally, it challenges me to think differently, since I have never worked with ReactJS before.
With ReactJS I have a lot of freedom in how I implement the above scenarios, while at the same time having a lot of support available online in case of issues.
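As a rough illustration of the check-in requirement, here is a minimal React sketch; the component name and the REST endpoint are hypothetical, not part of any existing solution.

```tsx
// Illustrative mobile-friendly check-in button (component name and
// endpoint are hypothetical).
import React, { useState } from "react";

export function CheckInButton({ traineeId }: { traineeId: string }) {
  const [status, setStatus] = useState<"idle" | "done" | "error">("idle");

  async function checkIn() {
    try {
      // Assumed endpoint that appends an attendance record to the
      // trainee's profile, later used for exam eligibility.
      const res = await fetch(`/api/trainees/${traineeId}/attendance`, {
        method: "POST",
      });
      setStatus(res.ok ? "done" : "error");
    } catch {
      setStatus("error");
    }
  }

  return (
    <button onClick={checkIn} disabled={status === "done"}>
      {status === "done" ? "Checked in" : "Check in"}
    </button>
  );
}
```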

Release timetables in Agile Environments [closed]

We are currently utilising an agile environment at work. One of my tasks involves setting up a release timetable. Part of this is providing a time frame for how long a project would take to go from a development environment to staging and then live.
I have conflicting thoughts regarding whether such a timetable needs to be done.
For a start, we are quickly moving into a Continuous Integration / Continuous Delivery environment where an application is tested in all environments whenever a change is made to the code base. Therefore, there is no time frame; things should be "just" deployable. (Well, we always need a little bit of contingency, as the best laid plans can always go awry.)
Can anyone steer me in the right direction on the best way to handle such timetables and timeframes, if they are needed, in release management in an agile product development environment?
Regards,
Steve
Can anyone steer me in the right direction on the best way to handle such timetables and timeframes, if they are needed, in release management in an agile product development environment?
First of all, the Scrum Framework never tells you not to have a Release Plan or timetable. What is leading you to have conflicting thoughts? I would like to know the source of this conflict.
The best way to create a Release Plan is as follows (this may take a week or so, depending on the size of your project):
Get the Stakeholders in a room and get an EPIC user story written on the board using their guidance. The EPIC user story should include the end product vision. (Ignore if already done.)
List out the types of users. (Ignore if already done.)
Break the EPIC user story into smaller and smaller chunks of user stories until they are small enough to be doable in sprints. (Ignore if already done.)
Ask the Product Owner(s) of the Scrum Team(s) to prioritize the stories in the uncommitted backlog list(s). Also do some form of effort estimation fairly quickly; do not waste a lot of time estimating.
Get the target end date or Go Live date of the project from Stakeholders.
Divide the time frame from now until the end date into Releases. Ask the stakeholders which features need to be delivered by when and include the appropriate user stories in them, and call them Releases. You can also give those Releases themes if needed.
The Release Plan is now conceptualized.
After this, draw it on a whiteboard or put it in a visible, transparent location where everyone can see it, and add user story cards to the appropriate release.
Now your initial release plan should be ready.
Ideas for implementation:
Form a Scrum Team specifically for operations activities. They could follow Scrum, though Kanban would be better.
As and when the development teams put "shippable products" on the shelf, the Operations Kanban team can do the deployment, release branching, and other such tasks as per the Release Plan.
This way the development teams don't really focus on the release plan or release work; only the Operations team does. The development teams just focus on the sprint work, and it is the Product Owner's headache to make sure the right user stories are in the right release and in the right order. The direction would be given by the Stakeholders.
To be honest, you really don't have to do much of this yourself; it's all in the stakeholders' and POs' hands, so I don't see what the fuss is about.
I hope you get the picture.
I usually maintain a release plan for the management that is mainly based on a combination of the estimated & prioritized user stories (I group them to match a main new feature of the product) and velocity.
With a well maintained product backlog it's pretty easy to do your release plan. I usually plan three to four releases a year.
What I like about Scrum is that I can potentially release after each iteration. A sketch of the velocity arithmetic behind such a plan follows below.
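For example, here is the back-of-the-envelope projection that a velocity-based release plan rests on; the numbers are made up for illustration.

```typescript
// Velocity-based release projection (illustrative numbers).
// With 120 story points left and a measured velocity of 20 points per
// two-week sprint, the release lands in 6 sprints, i.e. about 12 weeks.
function sprintsToRelease(remainingPoints: number, velocityPerSprint: number): number {
  return Math.ceil(remainingPoints / velocityPerSprint);
}

const sprints = sprintsToRelease(120, 20); // 6
const weeks = sprints * 2;                 // 12 weeks at 2 weeks per sprint
console.log(`Projected release in ${sprints} sprints (~${weeks} weeks)`);
```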
If you want to master your release management, you will need more information than a few answers from practitioners. I highly suggest this book.
If you are currently utilising an agile environment, you should check the book Agile Estimating and Planning for some suggestions. It also contains a small chapter about release planning.
Some release planning should always be done. A release is a target which usually covers 3-12 months of development, i.e., a set of iterations. It describes the target criteria for the project to succeed, usually as a combination of expected features and a date. Features in this case are usually not individual user stories but epics or whole themes, because you don't know all the user stories several months ahead. Personally, I think a release is something that says when the project, based on the vision, can be delivered. It takes the high-level expectations and constraints from the vision and converts them into an estimate. You can also divide a project into several releases.
But remember that the three forces work in agile as well. There is a direct relation among feature set, release date, and resources (plus a sometimes-mentioned fourth force: quality). Pushing on one of these forces always moves the others. It is usually modelled as an equilateral triangle (or square).
There are different approaches to planning a release. One is mentioned in the book. It is based on user story estimation, iteration length selection, and velocity estimation, but I'm a little bit sceptical of this approach, because you don't have simple user stories for the whole release, and estimating epics and themes is inaccurate. On the other hand, high-level feature definition is exactly what you need for the three forces. If you don't have enough time, you will implement only the basic features from all themes. If you have more time, you will implement more advanced features. It is the product owner's task to correctly set business priority when dividing epics and themes into small user stories.
The most important part of agile is that you will know more quite soon. After each iteration you will have better knowledge of your velocity, and you will also re-estimate some planned user stories. For this reason I think the real (accurate) estimate and release date should be planned after a few iterations. As I was told in one training, effort should not be estimated; effort should be measured. If anybody complains about that, show them Waterfall and ask when they will get a relatively accurate estimate. Hint: hardly before the end of analysis, which is, say, after 30% of the project.
It is also important what type of projects you want to implement using agile/Scrum and how long the project will be. Some projects are strictly budget or date driven; others can be more feature driven. This can affect your release planning. For short projects you usually have small user stories, and you can provide a much more accurate estimate at the beginning.
This is a very loaded question, and depends on your company to be sure. I first have to ask, why are you using 3 environments and continuous integration (your reason matters)? Are you performing automated tests at all? How are your code branches setup? Do you release for some functionality, or just routine maintenance fixes?
Answering these will give you an idea of why you need a release, and how you should go about it.
For example, if you only have a staging environment for the purpose of integration and you perform automated tests, wouldn't a separate code branch in which continuous integration tests run be sufficient?
If staging is to perform some sort of user acceptance, does your company have a dedicated testing team or are they members of the agile teams?
As you correctly stated, if the code is always integrated and tested, then why would you need a timetable and a move from environment to environment, unless you were unsure about the actual "done" condition of the features? By that, I mean it's not that you're unsure the feature was coded correctly, but are you worried it will introduce other bugs? Will it integrate well with code already in production? Address the concerns at the root of the problem. Don't just do it because you think you're supposed to have X environments or because testing should be in another group. Maybe the solution to those problems is to adjust the definition of "done" accordingly.
As you can see there are many, many factors that will make your organization unique. There is no one right way to answer this, just tradeoffs that you are willing to accept.
I find that having multiple environments with teams of people working at the various layers tends to be anti-agile and counterproductive. The best bet is to analyze your concerns, and try to find ways to solve them (such as expanding the definition of "done", or breaking up the various organizations and putting them on the teams, eliminating as many environments as possible and simplifying the process, etc). That may not be possible in your organization, so you may have to live with tradeoffs.

How to maintain multiple components for multiple clients with multiple features?

Basically, my project is product based.
We developed the product, acquired multiple clients, and deploy the application based on their needs.
We decided to package the new features and project-dependent modules as components.
Now my application has a large number of customers.
Every customer needs different features based on the components.
But we have centralized components for all clients; we move a component's additional features to a client-specific folder and deploy.
My problem is that I am unable to maintain the component features for multiple clients.
My component feature code has grown, and I am unable to track the client features.
Is there any solution for maintaining multiple component features for multiple clients?
I've worked for a couple of companies in a similar space - product software but very heavily customised.
Essentially there is a decision the company needs to make - are you a product company (that is you ship broadly the same to every client) or are you a bespoke company. At the moment it sounds like they're between two stools and wanting the economies of being a product company with the ability to meet specific client demands the way a bespoke software company can.
Assuming the company wants to be a product software company, unless there are specific technical reasons why you can't, you need to move to a single code base with the modifications for each customer being handled through customisable options (i.e. flags saying how this particular situation is being handled, whether this feature is available and so on).
These can be set at run time (so they can be changed as the client wants - think options in Word or Excel) or at build time (so code is included/excluded when you do the build), but the key thing is that every client's build has to be pulled from the same code base. A sketch of the runtime variant follows below.
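To illustrate the runtime approach, here is a minimal sketch; the client names and option flags are made up.

```typescript
// Runtime per-client configuration over a single code base
// (client names and flags are illustrative).
interface ClientConfig {
  advancedReporting: boolean;
  paymentProvider: "card" | "invoice";
  maxUsers: number;
}

const configs: Record<string, ClientConfig> = {
  acme:    { advancedReporting: true,  paymentProvider: "card",    maxUsers: 500 },
  default: { advancedReporting: false, paymentProvider: "invoice", maxUsers: 50 },
};

function configFor(clientId: string): ClientConfig {
  // Every client runs the same code; only the configuration differs.
  return configs[clientId] ?? configs.default;
}

if (configFor("acme").advancedReporting) {
  // enable the reporting module for this client
}
```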
But this needs to be agreed with the business as it limits what they can sell - every change they sell has to fit into an overall vision which can be accommodated by the single product.
The alternative is that you're essentially producing bespoke software for each client (that is coded specifically for what they want) but using many common libraries. That's fine and allows you to produce something which is exactly what they want but in the end it is going to be more work and the business needs to understand and cost for that.
We actually do a bit of both - there is a server product which is identical for all clients, and then web and mobile clients which are specific to them (in the case of mobile you can't have lots of dead code on the device - the web stuff is historic and will be moving to a standard product for all clients).
Good luck though, it's a difficult problem with no easy solution.
You are essentially talking about software product lines (SPLs): variations from a common base. Since you already package your features as components, you need a specialized tool to manage such variations.
You can then build a complete custom application based on a configuration that is unique to any given customer. Easier said than done, of course.
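As a rough sketch of what such a tool automates, a product-line build can be thought of as assembling shared components from a per-customer manifest; all names below are illustrative.

```typescript
// Assembling a product variant from shared components, driven by a
// per-customer manifest (all names are illustrative).
interface Component {
  name: string;
  init(): void;
}

const registry: Record<string, Component> = {
  core:      { name: "core",      init: () => console.log("core up") },
  billing:   { name: "billing",   init: () => console.log("billing up") },
  analytics: { name: "analytics", init: () => console.log("analytics up") },
};

// Each customer's variant is just a list of component names.
const manifests: Record<string, string[]> = {
  clientA: ["core", "billing"],
  clientB: ["core", "billing", "analytics"],
};

function assemble(customer: string): Component[] {
  return (manifests[customer] ?? ["core"]).map((n) => registry[n]);
}

assemble("clientB").forEach((c) => c.init());
```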
A model-driven software development (MDSD) approach can help a lot with this task. One system that can support this development setup is ABSE, an emerging MDSD approach that, among other things, can implement a software product line (info at http://www.abse.info - Disclaimer: I am the ABSE project lead). There is no product yet, though; an alpha preview is coming.
Again, I know some companies that, using MDSD coupled with code generation, have achieved what I understand you want: products that are half pre-packaged, half custom.

Speccing out new features

I am curious as to how other development teams spec out new features. The team I have just moved up to lead has no real specification process. I have just implemented a proper development process with CI, auto deployment and logging all bugs using Trac and I am now moving on to deal with changes.
I have a list of about 20 changes to our product to be done over the next 2 months. Normally I would just spec out each change, going into detail about what should be done, but I am curious how other teams handle this. Any suggestions?
I think we had a successful approach in my last job as we delivered the project on time and with only a couple of issues found in production. However, there were only 3 people working on the product, so I'm not entirely sure how it would scale to larger teams.
We wrote specs upfront for the whole product but without going into too much detail and with an emphasis on the UI. This was a means for us to get a feel for what had to be done and for the scope of the project.
When we started implementing things, we had to work everything out in a lot more detail (and inevitably had to do some things differently from the spec). To that end, we got together and worked out the best approach to implementing each feature (sometimes with prototypes). We didn't update the original spec but we did make notes after the meetings as it's very easy to forget the details afterwards.
So in summary, my approach is to treat specs as an exploratory tool and to work out finer details during implementation. Depending on the project, it may also be a good idea to keep the original spec up to date as the application evolves (which we didn't need to do this time).
Good question, but it can be subjective. I guess it depends on the strategy of the product (whether it's to be deployed to multiple clients in the same way or to a single client on a bespoke project), the impact and dependencies these changes have on the system and on each other, and the priority with which these changes need to be made.
I would look at the priority and the dependencies; that will naturally start grouping things.

What is your iPhone app testing strategy?

Before submitting to the App Store, it is a good idea to test the app once again thoroughly. I tend to install my app on a device and give it to a friend for a while. Then I take the feedback and start changing my app accordingly.
I'd like to know what your testing strategies are.
Write a test plan. If you don't have experience with doing that, start with a list of every feature and UI control in the application.
Write down a simple set of steps that could be followed to determine whether or not each feature is working correctly.
Two major points:
Use unit testing. You can use Google Toolbox for Mac for that or just roll your own.
User testing, well, it's user testing. A colleague of mine designed a 50-point walkthrough/questionnaire of the app and had some 10-20 people do it -- and then repeated certain parts when we made changes to certain sections.
You are talking about two different things:
Defect testing and usability testing.
Or I think you may be. The other answers are about defect testing, your approach sounds like usability testing - or a mix of both.
Defect testing is about finding errors in your code. Other people have responded about this:
Have unit tests but don't rely on them
User testing - firstly by you. Think about your code and what might break it. Throck on controls, paste a zillion lines of text into your editor
Have other people who are not familiar with the code use the app
Use tools like ObjectAlloc and clang to find non-functional defects
In my mind, testing is not about tools but about attitude: how hard you look for defects and how honest you are about reporting your own defects.
You should also have a good defect tracking system to keep a handle on them.
Usability testing is more difficult. People do not understand their own thought processes when interacting with software.
A good (cheap) approach is to give the software to a friend and ask them to speak out loud what they are thinking. Then you get statements like "I see this screen but I don't know what to press" (you need to add help or cues), or "I'm not sure if deleting this worked" (you need to add feedback), etc.
You can buy very sophisticated tools to help with user testing but this approach gets a long way there.
First, I do functional testing to check that every feature works correctly. Then I execute system testing to check the interactions between functions, and perform exploratory testing.
At the end, I run a focus group, which represents the users, to get feedback on usability. Actually, a focus group works best if it is held both at the beginning and at the end of development: the first session aims to get feedback on the user interface design, and the second gets feedback on the real application.
For a serious professional app that you plan on making money with -- first you do in-house "white box" alpha testing with Instruments, etc., then you hire a professional quality assurance testing company to do "black box" functional beta testing, and then you hire a professional usability testing company to do user testing on live guinea pigs with video surveillance.
In terms of unit testing, I have found that GHUnit and OCMock are two very good tools - especially GHUnit, because it comes with its own test runner which will run on the device or simulator.
I would first install Crashlytics, so that when anyone you give the app to has issues, you can see exactly what is going on. Then another thing you could do is install HockeyKit, so you can push new updates just to the beta users. Those are my suggestions.
https://www.crashlytics.com/
https://github.com/TheRealKerni/HockeyKit