I'm having trouble listing the goals created with the recently launched Smart Goals feature. They appear in the GA user interface in the goals list, but not in the API.
The API documentation doesn't mention anything about Smart Goals, and I assumed they would be listed alongside normal goals in the results from the goals API endpoint. However, based on a few tries, fetching all goals does not return the Smart Goals.
Has anyone else run into this issue?
There is a workaround for this problem.
You can build a segment that includes users or sessions (depending on what you are looking for) with more than 0 Smart Goal completions.
Then, in the query, set the segment to the one you just created and you will get what you are looking for.
One caveat: Smart Goals may change, and the way you build the segment reflects how they work right now. If Google makes significant changes to Smart Goals, the segment might no longer extract the correct data.
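For reference, here is a minimal sketch of what such a query could look like against the Core Reporting API (v3) using google-api-python-client. The key file, view ID, and segment ID are placeholders; the segment ID would be the one GA assigns to the segment you built in the UI.

```python
# Minimal sketch (assumptions: a service-account JSON key with read access
# to the view, the google-api-python-client package, and a saved segment
# built in the GA UI). View ID and segment ID below are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "key.json", scopes=SCOPES)

analytics = build("analytics", "v3", credentials=creds)

# Query the Core Reporting API, restricted to the saved segment that
# matches sessions with at least one Smart Goal completion.
response = analytics.data().ga().get(
    ids="ga:12345678",            # placeholder view (profile) ID
    start_date="30daysAgo",
    end_date="today",
    metrics="ga:sessions,ga:goalCompletionsAll",
    segment="gaid::-abcdefgh",    # placeholder ID of the saved segment
).execute()

for row in response.get("rows", []):
    print(row)
```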
I am doing data analysis on GitHub projects and I want to filter projects that use continuous integration (on GitHub).
GitHub has two kinds of commit indicators: Checks and Statuses. Projects can use GitHub Apps to run checks or mark their commits with external services (CI or other) [source]. My question here is: does having GitHub Checks (or Statuses) results available for a project mean that the project is using CI for sure? If not, what other factors should be present to say that a project has continuous integration?
Possibly, but you can't be sure. It means that some check runs and some status gets updated, but without looking at the automation itself there is no way to conclude that continuous integration takes place.
Maybe it checks whether the contributor has signed a contribution agreement.
Maybe it checks for the presence of an issue ID or an attachment.
Maybe it updates some external system (like ServiceNow) so the issue can be tracked there as well...
Checks and statuses are used in many different ways.
And continuous integration looks different for different technologies. Some languages need to be compiled, others don't. Projects would hopefully have some kind of tests to validate that nothing broke during integration, but there is no surefire way to know, as the check may simply be running a script, using a test framework, or doing something else entirely.
You can probably conclude that the absence of checks and statuses likely means that CI isn't being performed (even that can't be said with 100% certainty, as an external system may be performing the CI and just not reporting the status back). The presence of checks and statuses means that something happens, but you likely need to dig a bit deeper to classify whether the thing that happens constitutes CI.
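If it helps for filtering at scale, here is a rough sketch of that heuristic via the REST API: it only tells you whether any check runs or commit statuses exist on the head of the default branch, which (per the above) is a signal, not proof, of CI. The owner/repo values are placeholders, and for many repositories you'd want to add an auth token to avoid rate limits.

```python
# Minimal sketch (assumptions: a public repository and the "requests"
# package; OWNER/REPO are placeholders). It reports whether the default
# branch's head commit has any check runs or commit statuses.
import requests

OWNER, REPO = "octocat", "hello-world"
API = "https://api.github.com"
HEADERS = {"Accept": "application/vnd.github+json"}

repo = requests.get(f"{API}/repos/{OWNER}/{REPO}", headers=HEADERS).json()
branch = repo["default_branch"]

check_runs = requests.get(
    f"{API}/repos/{OWNER}/{REPO}/commits/{branch}/check-runs",
    headers=HEADERS).json().get("total_count", 0)

statuses = requests.get(
    f"{API}/repos/{OWNER}/{REPO}/commits/{branch}/status",
    headers=HEADERS).json().get("total_count", 0)

print(f"check runs: {check_runs}, statuses: {statuses}")
print("CI candidate" if (check_runs or statuses) else "probably no CI")
```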
How can you launch a new product if you can't run an experiment? Or how can you adapt a metric so you can run an experiment?
Example in this link: https://hbr.org/2018/11/using-experiments-to-launch-new-products
Uber wanted to launch Express Pool, so they did the typical A/B testing and compared metrics, but in that case they had metrics to compare before and after launching the product (revenue per user, average trips per user, etc.).
But what if it is a completely new product? Example: Uber trying to launch a Wallet.
If I don't have a counterfactual, what can I do?
There are multiple things you can do before launching a new product.
You can run surveys for different groups of users, asking them about their needs.
The scope of this survey is to identify problems that you can then solve with your product. This is a very early-stage tactic you can use to determine whether your product has a potential fit in the market.
You can create pitches and crowdfunding campaigns.
The scope of these is to determine whether there is potential demand for your solution in the market. You are basically starting to sell in the idea phase, before you even build anything. Note that you don't want to scam people here; you are just trying to determine whether there is potential in the market.
You can launch an alpha or beta pre-release version of the product.
The scope of this pre-release is to invite a few users into your early application and get their feedback. Based on the feedback you get here, you can improve, change, or update your product before launching it.
You can launch an MVP (minimum viable product) and then track KPIs in the real world. The MVP can give you enough information to know where to go. Just make sure that you are tracking the right KPIs.
Good luck!
You don't always need to come up with a specific hypothesis and validate it. Sometimes it's best to understand and quantify how a new feature affects the overall health of your product. Many times in the past, when we exposed a new feature to a small population of users, we'd quickly figure out whether everything was working as expected or whether there were unforeseen consequences.
This is hard to do without the right tooling. One such tool that provides a holistic view of the product's health is Statsig. Here's a quick screenshot of what to expect when you build and roll out new features (without having to set up a formal A/B experiment).
Disclaimer: I work at Statsig
Our company has recently introduced Azure DevOps to streamline our project management process. Currently, 140 projects have been created under our organization in Azure DevOps. As and when a requirement comes in from a client for a specific project, we create tasks/bugs for different developers under that project. Currently we use only two work item types: Bug and Task.
Now the issue is that the company's management wants to see the project-wise number of "New/Open", "Active" and "Closed" tasks and bugs in a SINGLE chart. That means that single chart must fit consolidated data for 140 projects. If a person views that single chart, they must get an idea, for example, that Project 1 has 2 new/open work items, 2 active work items and 2 closed work items, Project 2 has 1 new/open work item, 10 active work items and 3 closed work items, and so on. This is so that management can understand at a glance which projects are lagging behind on customer delivery and can act accordingly, for example by adding more manpower to those projects.
I have tried to create various charts and widgets with different queries in Azure DevOps. I used the burnup and burndown chart widgets, but they give data for the tasks of a single project only. Also, when we add multiple projects to them, they show the summation of completed/remaining tasks for those projects and NOT a project-wise breakdown of completed/remaining tasks.
I also tried the "Chart for work items" widget, but it fetches counts per assignee, state and work item type, not per project name.
I don't want to navigate through 140 project pages to see each project's open, active and closed tasks. So please help me out by suggesting ideas on how I can build a single chart from which we can get all this data. I will be forever grateful for your answers.
Thank you!
You could create a query across projects, select the Team Project column in Column options, and save the query as a shared query. Check the screenshots below:
Then add a chart widget to a dashboard, select Pivot table, and set Team Project and State as the rows and columns. Check the screenshot below:
You could expand the view to see more details:
https://learn.microsoft.com/en-us/azure/devops/report/dashboards/charts?view=azure-devops#add-a-chart-widget-to-a-dashboard
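If you prefer to pull the same breakdown programmatically instead of via the dashboard widget, here is a rough sketch using the REST API with a cross-project WIQL query. It assumes a personal access token with work-item read scope and the organization-level WIQL endpoint; the organization name and token are placeholders, and the query is deliberately simple (you would likely filter further by state, date, and so on).

```python
# Minimal sketch (assumptions: a PAT with work-item read scope, the
# "requests" package, and placeholder organization/token values).
# Runs a cross-project flat query, then counts work items per project
# and state.
import base64
from collections import Counter

import requests

ORG = "your-org"                # placeholder organization name
PAT = "your-pat"                # placeholder personal access token
auth = base64.b64encode(f":{PAT}".encode()).decode()
headers = {"Authorization": f"Basic {auth}", "Content-Type": "application/json"}

wiql = {
    "query": (
        "SELECT [System.Id] FROM WorkItems "
        "WHERE [System.WorkItemType] IN ('Task', 'Bug')"
    )
}
# Omitting the project segment of the URL runs the query across the
# whole organization (all projects).
resp = requests.post(
    f"https://dev.azure.com/{ORG}/_apis/wit/wiql?api-version=7.0",
    json=wiql, headers=headers).json()

ids = [str(wi["id"]) for wi in resp.get("workItems", [])][:200]
if not ids:                      # batch endpoint caps at 200 IDs per call; page for more
    raise SystemExit("no work items found")

fields = "System.TeamProject,System.State"
items = requests.get(
    f"https://dev.azure.com/{ORG}/_apis/wit/workitems"
    f"?ids={','.join(ids)}&fields={fields}&api-version=7.0",
    headers=headers).json()

counts = Counter(
    (wi["fields"]["System.TeamProject"], wi["fields"]["System.State"])
    for wi in items.get("value", []))
for (project, state), n in sorted(counts.items()):
    print(f"{project:30} {state:12} {n}")
```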
I'd be slightly careful based on what you've put down, because I don't think your management team quite understands what they need, and DevOps can only do so much. Personally, I'd be challenging them around the setup of your DevOps process, because I don't think it's advisable to leave user stories out of your setup. Although it simplifies some aspects of DevOps, our experience has been that people are able to group things together better with user stories as well as tasks.
I appreciate it's a good idea to be able to see what's going on across all projects, but I think there are probably further criteria to think about. For example, do you want to see estimates instead of (or as well as) the count of items, since that gives a better reflection of the effort required? In terms of completed items, and in fact probably everything you're displaying, again it depends on your project process, but is management genuinely interested in everything? For example, do they need to know that something was closed 6 months ago, or are they just interested in the last month?
I suppose what I'm getting at is that you probably need a bit more information from management about what they want to use the report for, so you can give them what they need rather than what they want. There's a temptation to ask for everything when you don't understand the capabilities of the solution or what you're going to use it for, and my recommendation would be to challenge them on this so you can better present things (giving them what they need rather than what they want).
In terms of what you're looking to do, I'll openly admit I'm not clued up on everything DevOps related, but I doubt you'll be able to report at a project level within DevOps. I think what you'd need to do is set up your query, which would look across all projects in your organisation, and then export the results to Excel. From there I'd create a pivot table (or perhaps more than one) with the data that you need: have project names down the left side (row headers), and bring in whatever else you need as columns.
I think that's probably a good quick win to get something in front of your management team, and then you could challenge from there, almost picking holes in it so that they realise that the business decisions they'd make from this may not be fully informed, and suggesting some changes. From experience, it's probably better to consider it almost as a prototype and not get bogged down with a solution at this stage, because you may be asked for changes once they can visualise what they originally asked for. Once management is happy, you could look at other solutions to provide the report, but Excel is typically a good starting point I've found in the past when working on something new like this.
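To illustrate the pivot-table step, here is a small sketch assuming you exported the query results to a CSV with columns named "ID", "Team Project" and "State" (adjust the names to whatever your export actually contains) and have pandas available; the same layout can of course be built directly in Excel.

```python
# Minimal sketch (assumptions: an exported CSV of the cross-project
# query with "ID", "Team Project" and "State" columns, pandas installed;
# the file name and column names are placeholders).
import pandas as pd

items = pd.read_csv("work_items_export.csv")

# Project names as rows, states as columns, count of work items as values.
pivot = pd.pivot_table(
    items,
    index="Team Project",
    columns="State",
    values="ID",
    aggfunc="count",
    fill_value=0,
)
print(pivot)
```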
I am currently developing a desktop application based on eclipse.
Currently the user needs to perform many redundant actions, like doing step A in View 1, then step B in View 2, then repeating. I am wondering if anybody knows of a solution that records/recommends user actions in Eclipse-based applications.
Maybe something based on the user's history, much like web-based solutions.
Any help would be good.
Thanks.
1)
Do you want to record the user clicks (actions)?
If so, Eclipse provides a Location tracker, so you can analyse the use cases from the field.
OperationHistoryActionHandler
2)
Do you want a smarter way for the user to use your tool?
Think about using wizards. In a wizard you can have a defined number of execution steps, so the user does not need to search for some button in a view.
With a wizard, a specific execution flow is very clean and easy to understand.
3)
As Jonah mentioned, you can use cheat sheets as well.
We once did something similar, where we had a rather big user interface with heaps and heaps of different functionalities. Our solution was this:
We abstracted all actions into commands. They were all implemented in a way that they could be cascaded, undone, redone, etc. See, for example, IUndoableOperation.
The commands had conditions that made it easy to decide whether two commands could be combined.
All commands have an ID and can be easily identified.
We then went on to integrate our own run configurations. We added a UI that gave the user the option to cascade multiple commands into one big one. For example, if a user wanted to create a new file, apply a template, generate some graphs, export them to a given location, etc., the user would create a run configuration adding those commands together.
That way we kept the UI comprehensive but gave the expert user the ability to create their own workflow based on what they do every day.
Our users liked that quite a bit.
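The original implementation was Java against Eclipse's operation framework (IUndoableOperation); as a language-agnostic illustration of the cascading-command idea, here is a small sketch in Python. All class and method names are hypothetical.

```python
# Language-agnostic sketch of the cascading-command idea described above
# (the real implementation used Eclipse's IUndoableOperation in Java).
# All class and method names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Command:
    id: str                                # every command is identifiable
    execute: Callable[[], None]
    undo: Callable[[], None]
    can_follow: Callable[["Command"], bool] = lambda prev: True


@dataclass
class CompositeCommand:
    """One 'run configuration': several commands cascaded into one."""
    commands: List[Command] = field(default_factory=list)

    def add(self, cmd: Command) -> None:
        # The condition decides whether this command may be combined
        # with the previous one in the cascade.
        if self.commands and not cmd.can_follow(self.commands[-1]):
            raise ValueError(f"{cmd.id} cannot follow {self.commands[-1].id}")
        self.commands.append(cmd)

    def execute(self) -> None:
        for cmd in self.commands:
            cmd.execute()

    def undo(self) -> None:
        for cmd in reversed(self.commands):  # undo in reverse order
            cmd.undo()


# Usage: build a "new file -> apply template" workflow once and let the
# expert user run (and undo) it as a single action.
workflow = CompositeCommand()
workflow.add(Command("new-file", lambda: print("create file"), lambda: print("delete file")))
workflow.add(Command("apply-template", lambda: print("apply template"), lambda: print("revert template")))
workflow.execute()
workflow.undo()
```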
Background:
JIRA offers a single set of statuses for all types of issues in a project.
Problem:
The problem is that the status set for a Task is To Do, In Progress, and Done, while for a User Story in the same project it might be Designing, Developing, Testing, Releasing, and Done. It can even be different for a Bug or an Epic.
Question:
How do you keep track of the workflow of your product and, at the same time, manage the status of your tasks using the single set of JIRA statuses?
PS: I know they can be customized for each project, but it doesn't help because you can't customize them for each issue type separately.
I think one of the reasons that JIRA offers the To Do, In Progress, and Done is that these can apply to anything. You either haven't done it, you're doing something, or you finished. That set can apply to any type of item.
That being said, I feel your pain in wanting a better view into the true state of an issue. What we have found works for our OnDemand agile boards is to set up something like the following:
To Do
In Progress
Ready for Review
In Review
Done
For most types of issues, this can work. It adds that bit of extra layer to be able to identify what is ready for testing.
One of the things that is tricky is dependent tasks. For example, I noticed you mentioned "Designing" as a stage, and I'm not sure this makes sense in an agile context. If the design is emerging from the development, it may be better to allow the design/development to flow within the development team. However, we all know that sometimes you need to get some details ironed out before you can proceed, or there may be some people who need to become involved before a dev can proceed. We made the mistake of trying to turn this into a stage, but what we found was that it was really either a sub-task for part of the team, or an impediment (blocker). By flagging stories, you can identify that a story requires something to be done before the development team can proceed.
If you are using Kanban, and not a Scrum board, the sub-task approach will not be for you. In those cases, you'll just need to make sure you have stages that make sense for all the issues you create. Stages will have to be fairly 'generic'. This sounds bad.
But it is not!
I believe teams generally use the stages for a few reasons:
Checking on the status of an iteration
Informing other team members that they can pick up an item
Getting a visual estimate of how close to Done an issue is.
More stages don't necessarily give you a better picture of an iteration's status, as you really just need to see how many points you've closed and how many are in progress. So, at least for that goal, a more generic set of stages should work.
As for informing team members, too often I've seen teams retreat to the digital board to replace communication with each other. The fewer stages you have, the more you can force your team to talk to each other and work together to get a story to done. Things will work better this way, I guarantee it! Having a bit of a break-down helps, especially if you are working on a lot of items at once or have distributed teams working in different time zones, but keeping it simple is usually better.
Tracking the "how close to Done" is the hardest to do with generic stages. However, the multiple stages can be misleading. An item that is almost all the way across might have a severe bug in it that hasn't been found yet, so no matter how many stages you have your view on this item isn't any more accurate than a single "In Progress" stage. It isn't Done until it's Done :)
This was a long way for me to recommend keeping your workflow simple and letting your team use communication to keep on top of things. Maybe I should have just started with that!
The statuses that are available to each project are determined by the workflow to which it is assigned. Not only does a workflow define the statuses, it also defines which statuses you can progress to from a particular status. You can either create your own workflows or download predefined workflows that suit your needs.
In order to have separate workflows for different issue types, we need to define a Workflow Scheme (a small sketch after the steps below shows how to inspect the result):
1- Go to Jira Administration -> Workflow Schemes
2- Edit the workflow scheme that is assigned to your project
3- Click "Add Workflow" to add a new workflow for the issue types that need a different workflow, and assign those issue types to it.
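If you want to double-check the result programmatically, here is a small sketch against the Jira Cloud REST API that lists each workflow scheme and its issue-type-to-workflow mappings. The site URL and credentials are placeholders, and it assumes Jira Cloud with an API token.

```python
# Minimal sketch (assumptions: Jira Cloud, an API token, and the
# "requests" package; the site URL and credentials are placeholders).
# Lists each workflow scheme and which workflow each issue type maps to,
# so you can verify the scheme you edited in the steps above.
import requests

SITE = "https://your-domain.atlassian.net"    # placeholder site URL
AUTH = ("you@example.com", "api-token")        # placeholder credentials

resp = requests.get(f"{SITE}/rest/api/3/workflowscheme", auth=AUTH).json()

for scheme in resp.get("values", []):
    print(scheme["name"], "- default workflow:", scheme.get("defaultWorkflow"))
    for issue_type_id, workflow in scheme.get("issueTypeMappings", {}).items():
        print(f"  issue type {issue_type_id} -> {workflow}")
```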