Availability of Apache Geode releases roadmap

Could you please tell me whether you have a releases roadmap? We are waiting for 1.14 - do you have a planned date for this release?
Thanks,
Vadim

I don't think there's an up-to-date page containing this information, other than the Release Schedule (totally outdated) or Shipping Patch Releases (no schedule whatsoever) pages within the wiki.
That said, the best option at the moment would be to monitor (or directly ask on) the Users or Devs mailing lists (you can subscribe here).

Related

Google Analytics 4 (GA4) - Analytics Data API - When will this be out of Alpha, Beta?

Based on the Analytics Data API Banner ("Keep in mind that these APIs are pre-release and subject to change. Code built using these APIs should not be pushed to production. While we will try to notify you of upcoming changes, you should expect to encounter breaking changes before the APIs are publicly released."), the APIs are pre-release and subject to change.
https://developers.google.com/analytics/trusted-testing/analytics-data
When is the Analytics Data API expected to be out of Alpha?
When is it expected to be out of Beta?
Is this timeline a few months, a few quarters, or will it take a year or more to stabilize and publish?
Follow-up question: if this is going to take some time to move out of Alpha/Beta, do you expect to allow "App+Web" upgrades to be downgraded back to "Universal Analytics"?
I have also sent an email to the address in the documentation with no response.
Thanks!
Brie,
I don't believe there is a public timeline on the API release cycle, but we hope to move on to Beta fairly soon. As for your second question, it is not possible to downgrade GA4 (formerly App+Web) properties back to "Universal Analytics", as they are fundamentally different.
Thanks,
Ilya
The Google Analytics Team
I imagine most developers are waiting until the official release of the API before incorporating it into their workflows. But I would recommend that we all spend some time testing the API and provide feedback to Google. That way we can point out any issues and suggest features that will be of value.
For example, I want to pull up to 50+ dimensions and metrics but the API limits runReport requests to 9 dimensions and 10 metrics. I doubt Google will budge on those quotas so I figured I'd run multiple queries and merge them programmatically. Unfortunately, that's not a viable approach since there is no universal key/column available to effectively join data across those queries.
However, if the Google Analytics session id were a dimension it could serve as that universal column. So I made an entry under the Google Analytics Issue tracker requesting just that (feel free to star Issue#: 188980721).
So get involved; the sooner we do, and the more we vocalize our needs (especially at this stage of development), the more likely the API will meet those goals.
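To make the quota constraint concrete, here is a minimal sketch of the split-and-merge workaround described above, assuming the current v1beta Python client (the alpha surface discussed in this thread may differ); the property ID, dimension names, and metric names are placeholders, not a recommendation.

# Minimal sketch (assumptions: google-analytics-data v1beta client installed,
# GOOGLE_APPLICATION_CREDENTIALS configured; property ID and field names are placeholders).
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import DateRange, Dimension, Metric, RunReportRequest

PROPERTY = "properties/123456789"          # placeholder GA4 property
SHARED_DIMENSIONS = ["date", "country"]    # used as the join key across queries
METRIC_BATCHES = [                         # each batch stays under the 10-metric cap
    ["sessions", "activeUsers"],
    ["screenPageViews", "conversions"],
]

def fetch(client, metrics):
    request = RunReportRequest(
        property=PROPERTY,
        dimensions=[Dimension(name=d) for d in SHARED_DIMENSIONS],
        metrics=[Metric(name=m) for m in metrics],
        date_ranges=[DateRange(start_date="28daysAgo", end_date="today")],
    )
    rows = {}
    for row in client.run_report(request).rows:
        key = tuple(v.value for v in row.dimension_values)
        rows[key] = dict(zip(metrics, (v.value for v in row.metric_values)))
    return rows

client = BetaAnalyticsDataClient()
merged = {}
for batch in METRIC_BATCHES:
    for key, values in fetch(client, batch).items():
        merged.setdefault(key, {}).update(values)
# Caveat from above: this only works when the shared dimensions uniquely
# identify a row; without something like a session id dimension, there is
# no universal key to join hit-level data across queries.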

Does a github repo have a programmatic way to report what branches / releases are supported?

Is there a way to advertise, and for consumers to tell from the available REST APIs for repos, branches, or releases, whether particular releases or major-version branches are actively supported, under maintenance support, or unsupported? Or to apply some common support policy, like "n-2 major releases supported"?
The goal is to write automation that can alert, or even automatically update dependent versions based on availability (e.g., there is a supported version to upgrade to from my maintenance version).
From the GitHub API alone, no: "support" is not a first-class notion.
You could settle on a release naming convention, which would then allow consumers to interpret the repos/releases API.
Or describe that support explicitly in the README, which consumers can fetch through the repos/contents API.
But any solution you might consider will involve some kind of convention/normalization.
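As an illustration of that convention-based approach (not a GitHub feature), here is a hedged sketch that reads the public repos/releases API and treats the newest major versions as supported under an assumed "n-2 major releases" policy; the policy itself is the convention you would have to publish, e.g. in the README fetched via the repos/contents API.

# Sketch only: the endpoint and JSON fields (tag_name, draft, prerelease) are part of the
# public GitHub REST API, but the "supported" interpretation is a convention you would
# have to document for your consumers.
import re
import requests

def supported_majors(owner, repo, keep=3):
    url = f"https://api.github.com/repos/{owner}/{repo}/releases"
    releases = requests.get(url, headers={"Accept": "application/vnd.github+json"}).json()
    majors = set()
    for rel in releases:
        if rel.get("draft") or rel.get("prerelease"):
            continue
        match = re.search(r"(\d+)\.\d+", rel["tag_name"])  # assumes semver-ish tags
        if match:
            majors.add(int(match.group(1)))
    return sorted(majors, reverse=True)[:keep]  # "n-2" => the newest three majors

if __name__ == "__main__":
    # Example against any repository that publishes GitHub releases:
    print(supported_majors("nodejs", "node"))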

How to get GitHub commit notifications by email after January

I'm part of several teams that depend heavily on GitHub's convenient "send an email every time anyone pushes commits" service, which is slated to disappear in a few weeks. I'm aware that it's been deprecated in favor of a more general WebHooks mechanism, but the docs are not very clear on exactly how one would instantiate the general mechanism to get back what the existing one does.
What is the easiest way to replicate the functionality that's going away?
Besides the original post (Replacing Services with webhooks), you have:
GitHub Actions, still in beta, but which should make it possible to accomplish this (registration here).
Efforts to find a webhook-based alternative.
For instance: pyinstaller/pyinstaller issue 3579. But there is no clear answer yet.
Update Feb. 2020: this issue is now closed (GitHub Actions are very much the standard now).
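If you do want to rebuild the old per-push email yourself, here is a hedged sketch of the webhook side: a tiny receiver you would point a "push" webhook (JSON payload) at, which mails a one-line-per-commit summary. It assumes Flask and a local SMTP relay; the addresses are placeholders, and the payload fields used (repository, pusher, commits) are from GitHub's documented push event.

# Hypothetical receiver for a GitHub "push" webhook; addresses and ports are placeholders.
import smtplib
from email.message import EmailMessage
from flask import Flask, request

app = Flask(__name__)
RECIPIENTS = ["team@example.com"]   # placeholder distribution list

@app.route("/github-push", methods=["POST"])
def github_push():
    event = request.get_json(force=True)
    repo = event["repository"]["full_name"]
    pusher = event["pusher"]["name"]
    summary = [f'{c["id"][:7]} {c["message"].splitlines()[0]}' for c in event.get("commits", [])]
    msg = EmailMessage()
    msg["Subject"] = f"[{repo}] push by {pusher} ({len(summary)} commit(s))"
    msg["From"] = "git-notifications@example.com"   # placeholder sender
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content("\n".join(summary) or "(no commit list in payload)")
    with smtplib.SMTP("localhost", 25) as smtp:     # assumes a local mail relay
        smtp.send_message(msg)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)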
As far as I can tell, GitHub has now restored the previous functionality and even documented it again: https://help.github.com/articles/about-email-notifications-for-pushes-to-your-repository

How to track deployments?

What is a good way to track deployments of our code base? I would like to be able to see when a version was deployed on a specific server, who released it, what issues were solved by it, etcetera.
Currently we have a deployment tool that generates an issue in our issue tracker with all this information. This makes it easy to link the release issue against related issues, but it also pollutes our issue database.
We also want to start with Continuous Integration internally, which would mean there would be a ton more release issues.
Are there better ways of tracking releases?
Our technology stack is PHP (Symfony2) using Phing as a build system, a custom, web-based deployment tool, Mantis for bugtracking and Bitbucket for repository hosting.
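One lightweight alternative to generating tracker issues, sketched below under assumptions (the endpoint and field names are illustrative, not an existing product API), is to have the deployment tool record each deployment as a small structured record in a dedicated deployment log instead of in Mantis; a Phing task or the deploy tool itself could post it.

# Hypothetical deployment log: the URL and field names here are illustrative.
import getpass
import json
import subprocess
import urllib.request
from datetime import datetime, timezone

def record_deployment(server, version, issues, endpoint="https://deploy-log.example.com/deployments"):
    record = {
        "version": version,
        "server": server,
        "deployed_by": getpass.getuser(),
        "deployed_at": datetime.now(timezone.utc).isoformat(),
        "issues": issues,  # e.g. Mantis issue ids resolved by this release
        # Assumes a Git checkout; adapt for whatever repository type you host on Bitbucket.
        "commit": subprocess.check_output(["git", "rev-parse", "HEAD"]).decode().strip(),
    }
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example call from the deploy tool:
# record_deployment("web-01", "2.4.1", ["0001234", "0001250"])

A simple web view over that log gives the who/what/when timeline without adding noise to the issue database.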
You can use something like Beanstalk or dploy.io to deploy your apps. It will give you the ability to manage deploy permissions, see a timeline of all deployments (who deployed what and when), trigger deployments with a single click, and notify your team via email and integrations when something is deployed.
You can get an idea from this screenshot:
http://cl.ly/image/3C1v1w2C3K2v
P.S. I work at Wildbit, the company that makes both products.
You should check out my company's product, BuildMaster; it was designed to solve every problem you've listed.
At this time we do not yet have first-class integration with Mantis, but it can be added fairly easily via extensibility, in the same way as the other bug/issue trackers we integrate with. It could be built either by your team, if you are interested in that, or by our team, contingent on an Enterprise edition purchase.

What Check-In Policies should be considered for version control?

I'm tasked with helping to set up the process templates and check-in policies for my company's TFS 2008 installation.
Aside from three check-in policies (a check-in action must have comments against it, a code file must be peer-reviewed, there must be a work item associated with a check-in), I have been asked to consider and implement any others.
What are some of the most important or useful policies to enforce for version control?
The fewer the better.
Usually in an organization you want to ease the friction of check-in to ensure that you are encouraging developers to make frequent, small, discrete check-ins rather than checking in a load of stuff at once. At the same time, you want to ensure that you have a working codebase for everyone who needs it and are capturing the data you need to improve your software delivery process.
Personally, I think a policy to enforce changeset comments and a work item association policy are fine, as they capture metadata that is very easy to record at the time but hard to reconstruct afterwards. It also encourages developers to get into the habit of having a work item to track all pieces of work - even experimental development or spikes.
The peer review process might be better performed using branching or another process rather than forcing a peer review on every check-in - however, that depends on your process. Remember as well that you can use mandatory check-in notes in TFS to capture metadata such as the code reviewer. A check-in note is slightly different from a check-in policy, and the two are often confused.
If you want to read more discussion about check-in policies, take a look at a blog post I did on the balancing act a while ago. Also, I recently recorded a podcast with a fellow Team System MVP talking about their use of TFS, which might be interesting (Radio TFS, Using TFS with Ed Blankenship). Finally, we also did a Radio TFS episode all about check-in policies in 2008 that might be of interest.
Don't break the build! Of course, finding an automated way to check on that and reject the check-in is the challenge.
Some rules that we follow in our company:
Commit all changes related to the same task at once (that will help with reviewing the changes and with future rollbacks or merges if needed).
Template-based comments (e.g. prefix all comments with a code that represents what was done: + for adds, - for removes, * for updates, ! for important modifications, etc.); a simple automated check is sketched after this list.
Obviously, always check in code that compiles, and only finished work to the mainline.
Check in unfinished work to branches daily.
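A minimal sketch of automating the comment convention above (the enforcement hook itself depends on your version control tooling; the script and its name are hypothetical):

# check_comment.py - reject check-in comments that don't follow the prefix convention.
import re
import sys

PREFIX = re.compile(r"^[+\-*!]\s")   # +, -, * or ! followed by whitespace

def valid(comment: str) -> bool:
    return bool(PREFIX.match(comment.strip()))

if __name__ == "__main__":
    message = sys.argv[1] if len(sys.argv) > 1 else ""
    if not valid(message):
        print("Check-in comment must start with +, -, * or ! (add/remove/update/important).")
        sys.exit(1)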
The ones we use where I work on TFS are:
Code Analysis
This ensures that all the code was compiled on the dev's machine before it was checked in.
Work Item Association
If you've made a change, there should have been an assigned task!
Last Build Successful
Using the TFS Build Server to check that the current code in source control compiled on an independent machine.
Check In Comments (part of the TFS Power Tools - http://msdn.microsoft.com/en-us/teamsystem/bb980963.aspx)
It's good to be able to see a summary of the check-in without having to go to the work item(s).
Try to keep the number of developers working on the same branch small. That way the branch stays stable with respect to compilation, the unit tests, and regressions. It's a nightmare if a developer does a check-in that compiles but breaks a key area of the application (such as login).
If you really have to have more than 10 developers checking code into the same branch, we've started an email policy where the developer checking in warns everyone that they're checking in, so that no one attempts to update their copy of the branch in the midst of a check-in. Sometimes, we've had to do the converse, where we set aside a time in the day to prohibit check-ins, so that updates are safe.
Frankly, the fewer policies, the better. The more policies you have, the greater the incentive for NOT using version control. What happens then is:
Code is developed in parallel, uncontrolled source control systems, and only the final revision goes into the official one.
People delay committing as much as possible, decreasing visibility of what they are doing to other developers.
People will actually avoid committing something if they can get away with it, and some will find a way to get away with it.
In fact, I think your three check-in policies are already too much. For instance:
Requiring code to be peer-reviewed before check-in makes it much more difficult to store work in progress there. Instead, if the source control system allows it (and many do), track whether the source is peer reviewed or not. With some systems you can create a life cycle for a revision, with others you might create branches, and with still others you might use tags.
Requiring a work item to be associated with every check-in makes it impossible for developers to do exploratory programming or take the initiative on possible improvements. It stifles the developers. Instead, make sure that any revision going into integration tests or user acceptance tests, not to mention production itself, is associated with a work item.
This might sound anti-Enterprise, but it's just some things we have learned in a few decades of software development. Most enterprise organizations haven't been clued in to this, but, eventually, they will. So, you might go the very opposite way, but don't say no one ever told you.
I recommend the Agile Manifesto and, particularly, Lean Software Development for general principles.
Or, taking Stack Overflow design philosophy into account, make the system reward the behavior you want.