I know that there are no fixed rules about software versioning, but I have several questions.
1) How to upgrade versions correctly
I have a small piece of software that I started a while ago, and since I started from scratch, I began with version 0.1.
As I added more functionality, I have been bumping the minor number, so now I'm at v0.5.7 (minor (.5) for new functions and revision (.7) for bug fixes and minor changes). The thing is that the program is almost complete for distribution, but now I'm "missing" several minor versions. How do you guys handle that situation? Do you simply jump the numbers?
That brings me to the second question.
2) Which is a good starting version number
I am about to start a new project. This time it is not such a small project, and it is going to be public and free to modify. I do not want to run into the issues mentioned above, so what would be a good starting point?
Bonus question:
3) Is it OK to use numbers above 10, like v1.25 or v2.2.30?
I haven't seen software with that kind of numbering (they probably show it only in the help section or on their web page). Again, I am aware that there are no rules for this, but there seems to be a general consensus on how to keep version numbers.
Version numbering policies can be a bit crazy at times (see Version numbers and JSR277, or Oracle with its Oracle Database 11g Release 2: 11.2.0.1.0; see also Software Versioning is Ridiculous).
But you can start by looking at the Eclipse Version Number policy as a good baseline.
If you really think you need more than three digits, this V.R.M.F. Maintenance Stream Delivery Vehicle terminology explanation is also interesting, but more so for post-1.0 software, where fix packs and interim fixes are in order.
1/ "Ship it already": 1.0.0
Also known as the "1.oh-oh" version. At least, it is out there and you can begin to get feedback and iterate fast.
2/ 0.x if major features are still missing; 1.0.0 if the major features are there.
3/ Yes, but I would say only for large projects with a lifespan of several years (usually a decade).
Note that "correctly" (while being described at length in Semantic Versioning 2.0.0) can also be guided by more pragmatic factors:
See the announcement for Git 1.9 (January 2014):
A release candidate Git v1.9-rc2 is now available for testing at the usual places.
I've heard rumours that various third-party tools do not like the two-digit version numbers (e.g. "Git 2.0") and started barfing left and right when the users install v1.9-rc1.
While it is tempting to laugh at them for their sloppy assumption, I am also practical and
do not mind calling the upcoming release v1.9.0 to help them.
If we go that route (and I am inclined to go that route at this moment), the versioning scheme will be:
The next release candidate will be v1.9.0-rc3, not v1.9-rc3;
The first maintenance release for v1.9.0 will be v1.9.1 (and Nth one be v1.9.N); and
The feature release after v1.9.0 will be either v1.10.0 or v2.0.0, depending on how big the feature jump we are looking at.
Update Feb. 2019: semver itself is about to evolve (again, after semver2).
See "What’s next for SemVer", and semver/semver/CONTRIBUTING.
At our company, we use a four-token versioning concept, something like a.b.c.d:
(major).(feature).(revision).(bug/refactoring)
It is tied to the issue types we use in our development life cycle, so we can track what was done or changed between two consecutive versions at a glance. By comparing two consecutive version numbers, you can identify the number and types of issues addressed.
For more information, full documentation is here.
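As an illustration, here is a minimal sketch of that bump logic in Python (the issue-type names and the bump function are my own simplification, not our actual tooling):

def bump(version: str, issue_type: str) -> str:
    # Split "a.b.c.d" into (major, feature, revision, fix), bump the
    # token matching the issue type, and reset all lower tokens.
    major, feature, revision, fix = map(int, version.split("."))
    if issue_type == "major":
        major, feature, revision, fix = major + 1, 0, 0, 0
    elif issue_type == "feature":
        feature, revision, fix = feature + 1, 0, 0
    elif issue_type == "revision":
        revision, fix = revision + 1, 0
    else:  # bug fix / refactoring
        fix += 1
    return f"{major}.{feature}.{revision}.{fix}"

print(bump("1.2.3.4", "feature"))  # -> 1.3.0.0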
I also faced this problem when defining version numbers while developing with CRM, since I wanted to deploy the same build across all the systems. I found a way to resolve it with a combination of system values, manual numbers, and auto-generated ones.
The version information for an assembly consists of four values:
Major Version . Minor Version . Build Number . Revision
This makes the initial version 1.0.0.0 by default. To make it more meaningful, I replace it with:
TFS Release . TFS Sprint . Change Number . Incremented build number
Suppose in your TFS a single release consists of 30 sprints, and after that the release becomes 2; the values for the first release will then be:
TFS Release : 1
If the current sprint is 4, the TFS Sprint part will be:
TFS Sprint : 4
Now the third part is managed by the developer: for an initial version we can have 0, which is incremented by 1 for each ChangeRequest or BugFix.
Change Number: 0 -- represents the initial version
Change Number: 1 -- represents a change
Change Number: 2 -- represents a bugfix
Admittedly, every bugfix involves a change in code, so the number represents a code change by default. But to make it more meaningful, you can use odd numbers to represent changes and even numbers to represent bugfixes.
The last part is the default number; thankfully .NET allows us to put * for a default build number. It keeps incrementing with every compile/build, which provides a signature stamp: if someone else rebuilds, it changes.
How to Implement:
Open AssemblyInfo.cs in the Properties folder and comment out the AssemblyFileVersion; this will make the AssemblyFileVersion and AssemblyVersion the same.
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
[assembly: AssemblyVersion("1.4.2.*")]
//[assembly: AssemblyFileVersion("1.0.0.0")]
So the final version will come out as something like 1.4.2.4512.
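(If I recall correctly, the auto-generated parts are derived from the clock: a defaulted build number is the number of days since January 1, 2000, and a defaulted revision is the seconds since midnight divided by two, which is why every rebuild stamps a different value.)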
The benefit of this approach is that you can trace the task for which the code was generated. Just by looking at the version number I can say, "Hey, it's release 1, sprint 4, with some code changes."
Semantic Versioning is pretty much de facto nowadays.
I'm "missing" several minor versions, how do you guys handle that situation?
You're not missing versions. It's perfectly acceptable to… (see next answer)
Which is a good starting version number
That depends on whether people are using your code in production. If it's already used in production, jump straight to v1.0.0. If your code is of alpha, beta, or rc (release candidate) quality but you're planning to move to production quickly, consider starting with v1.0.0-[alpha].N, where [alpha] is the software grade and N is a build number or other enumerable value.
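For example, under SemVer a progression might look like v1.0.0-alpha.1 → v1.0.0-beta.1 → v1.0.0-rc.1 → v1.0.0, since pre-release tags always have lower precedence than the release itself.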
Is it ok to make numbers above 10? like v1.25 or v2.2.30?
That's the idea. Lexicographical sorting may not work but that's okay.
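To see why, here is a quick Python sketch (assuming plain dotted numeric versions with no pre-release tags):

versions = ["1.9.0", "1.10.0", "1.25.0", "2.2.30"]

# Plain string sorting compares character by character, so "1.9.0"
# incorrectly lands after "1.10.0" and "1.25.0".
print(sorted(versions))
# ['1.10.0', '1.25.0', '1.9.0', '2.2.30']

# Comparing each component as an integer restores the intended order.
print(sorted(versions, key=lambda v: tuple(map(int, v.split(".")))))
# ['1.9.0', '1.10.0', '1.25.0', '2.2.30']

Any tool that treats version numbers as plain strings will trip in exactly the way the Git 1.9 announcement above describes.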
Related
I am using Eclipse (version: Kepler Service Release 1) with Prolog Development Tool (PDT) plug-in for Prolog development in Eclipse. Used these installation instructions: http://sewiki.iai.uni-bonn.de/research/pdt/docs/v0.x/download.
I am working with Multi-Agent IndiGolog (MIndiGolog) 0 (the preliminary prolog version of MIndiGolog). Downloaded from here: http://www.rfk.id.au/ramblings/research/thesis/. I want to use MIndiGolog because it represents time and duration of actions very nicely (I want to do temporal planning), and it supports planning for multiple agents (including concurrency).
MIndiGolog is a high-level programming language based on situation calculus. Everything in the language is exactly according to situation calculus. This however does not fit with the project I'm working on.
This other high-level programming language, Incremental Deterministic (Con)Golog (IndiGolog) (download from here: http://sourceforge.net/p/indigolog/code/ci/master/tree/) (also written in Prolog), is also (loosely) based on situation calculus, but uses fluents in a very different way. It makes use of causes_val predicates to denote which action changes which fluent in what way, and it does not include the situation in the fluent!
However, this is what the rest of the team actually wants. I need to rewrite MIndiGolog so that it is still an offline planner, with the nice representation of time and duration of actions, but with the causes_val predicate of IndiGolog to change the values of the fluents.
I find this extremely hard to do, as my knowledge in Prolog and of situation calculus only covers the basics, but they see me as the expert. I feel like I'm in over my head and could use all the help and/or advice I can get.
I already removed the situations from my fluents, made a planning domain with causes_val predicates, and tried to add IndiGolog code into MIndiGolog. But with no luck. Running the planner just returns "false." And I can make little sense of the trace, even when I use the GUI-tracer version of the SWI-Prolog debugger or when I try to place spy points as strategically as possible.
Thanks in advance,
Best, PJ
If you are still interested (sounds like you might not be): this isn't actually very hard.
If you look at Reiter's book, you will find that causes_vals are just effect axioms, while the fluents that mention the situation are usually successor state axioms. There is a deterministic way to convert from the former to the latter, and the correct interpretation of the causes_vals is done in the implementation of regression. This is always the same, and you can just copy that part of the Prolog code from IndiGolog to your flavor.
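For reference, the usual schema from Reiter for the successor state axiom of a functional fluent f is (my paraphrase of the notation, where \gamma_f collects the individual causes_val effect axioms):

\[ f(do(a,s)) = v \;\equiv\; \gamma_f(v,a,s) \,\lor\, \bigl( f(s) = v \,\land\, \neg\exists v'\, \gamma_f(v',a,s) \bigr) \]

In words: f has value v after action a if some effect axiom caused it, or f already had value v and no effect axiom changed it.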
We have an ERP system running in our company based on Progress 8. Can you give an indication of how compatible OpenEdge 11 is with version 8? Is it like "compile the source" and it will run (with testing, of course :-)) or more like every second line will need rework?
I know it's a general question but maybe you can provide a general answer? :o)
Thanks,
Gunter
Yes. Convert the db and recompile.
Sometimes you might run across keyword conflicts. A quick fix for that is the -k parameter (the "keyword forget list"). Using -k is a quick way to get old code that has variables or table/field names that have become new keywords to compile while you work on changing the names.
You might also see the occasional situation where the compiler has tightened up the rules a bit. For instance, there was some tightening of rules around defining shared variables in the v8/v9 time frame -- most of what I remember about that was looking at the impacted code and asking myself "how did that ever compile to start with?"
Another potential issue -- if your application uses a framework (such as "smart objects") whose API might change from release to release it is important to make sure that you compile against the version of that framework that your code requires -- not something newer but different.
Obviously you need to test but the overwhelmingly vast majority of code recompiles and runs without any issues.
We just did the conversion from Progress 8.3E to OpenEdge 11 a few days ago. It went much like Tom wrote: convert and recompile.
The only problem was one database that was originally created in Progress version 7. Here the conversion failed, but since it was a small database, it was quicker to dump, recreate, and load.
For example, NetBeans has versions like 6.9.1, 7.0.0, 7.1.1, etc. Does the second number in the hierarchy mean improvement, while the first is more about introducing new functionality (which makes it more likely to have bugs)? How do I interpret these version numbers, especially if I want the most stable version?
In the general case, version numbers have no fixed semantics. A common convention is to use a "major" version number to signify major changes, and "minor" version numbers to signify small changes, but this is not cast in stone. The recent release of Linux 3.0 is a good illustration -- the major version number was incremented purely for PR value; the release was no more significant than earlier releases in the 2.xx series.
Some projects have a stable / unstable convention; odd-numbered minor version numbers are development versions, and the even-numbered minor versions are release versions. The Linux project uses this convention, as do Gnome and various other open source projects.
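As a small illustration of that convention, a Python sketch (a hypothetical helper, assuming a dotted numeric version like '2.5.4'):

def is_development_release(version: str) -> bool:
    # Odd/even convention: an odd minor number marks a development
    # (unstable) series, an even one a stable release series.
    minor = int(version.split(".")[1])
    return minor % 2 == 1

print(is_development_release("2.5.4"))   # True  -- Linux 2.5.x was a development series
print(is_development_release("2.6.32"))  # False -- Linux 2.6.x was a stable series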
TeX has a playful version numbering convention of its own; it goes from 3.14, to 3.141, to 3.1415, to 3.14159 etc. (Hint: pi.)
See also http://en.wikipedia.org/wiki/Software_versioning
Each product has to describe its own version number policy.
For instance, NetBeans alludes to it in its Guidelines.
Usually:
the third number means: "bug fix only, no API evolution", and you can see it when checking API changes for 7.1.1, or for 7.1.2
the second number means "API changes, but still backward compatible": see 7.1 release
the first number is for "API changes, possibly not compatible with the previous versions": see 7.0 release.
You can have deprecated APIs fully removed, existing APIs modified, and so on.
A migration guide is involved.
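To make that concrete, a small Python sketch that classifies an upgrade according to the convention above (a hypothetical helper, not part of the NetBeans guidelines):

def change_kind(old: str, new: str) -> str:
    # Compare dotted numeric versions such as "7.1.1" -> "7.1.2".
    o, n = old.split("."), new.split(".")
    if o[0] != n[0]:
        return "major: API changes, possibly incompatible"
    if o[:2] != n[:2]:
        return "minor: API changes, backward compatible"
    return "patch: bug fixes only, no API evolution"

print(change_kind("7.1.1", "7.1.2"))  # patch
print(change_kind("7.0", "7.1"))      # minor
print(change_kind("6.9.1", "7.0"))    # major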
Lots of other versioning policies exist out there.
See:
"Code version change “rules”")
"What version numbering scheme do you recommend?"
The official document on such policies is in semver.org: Semantic versioning.
Why don't you use the Label feature? You can add comments to this specific version and it's easy for you to find and get it in the future.
Has anyone here tried to adopt Xtext 2 and migrate from Xtext 1.x to Xtext 2.0?
It seems Xtext 2 brings many attractive new features, such as a reusable expression language and Xtend, a code generation language. Many performance enhancements have been made to the Xtext workbench, along with a rename capability. Can anyone share their experience with Xtext 2? This is probably a bit of an early question, but I'll just wait and see.
xtext2 homepage
I updated an existing, not too complex language from Xtext 1 to Xtext 2, and tried to develop a new one using Xtext 2 and Xbase. I had to re-run the code generation step, and also had to modify the hand-written validators, because the error and warning locations are to be specified using literals instead of integers. E.g.
error("File does not exist with path: " + path, fileReference, ViatraTestDslPackage.FILE__PATH);
is to be replaced with
error("File does not exist with path: " + path, ViatraTestDslPackage.Literals.FILE__PATH);
Similarly, the workflow has to be changed as well to incorporate some new features: the outline API uses different fragments (outline.OutlineTreeProviderFragment and outline.QuickOutlineFragment), for rename and compare support new fragments are to be added (refactoring.RefactorElementNameFragment and compare.CompareFragment).
From my experiments with Xbase it seems that adding it to a language that already supports some kind of expressions can be labour-intensive: either the old expressions have to be replaced with Xbase expressions (or at least altered in a way that makes them available within Xbase expressions), or you have to maintain two kinds of expression support in your code generator or interpreter.
To conclude: I believe that if you have a simple Xtext 1.0 editor, where you mostly relied on the automatically generated features, migrating to Xtext 2.0 seems easy and recommended; however, if you customized a lot of things in manually written code, be careful, because the migration might not be straightforward, and I have found no real migration guide.
I just found this useful link: http://www.eclipse.org/Xtext/documentation/2_0_0/213-migrating-from-1.0.php#migrating_from_1_0_x_5_4
I also ran into some problems, especially in the serialization module. Luckily the mwe2 file leaves in a version 1.0 serialization; I used that and it fixed the problem I was having with the version 2.0 serialization module. I don't know why.
Another problem is a strange bug in the Xtext validation: it always complains about a ClassCastException, a cast from String to QualifiedName.
It is still early considering the recent release date:
The team has just presented/demoed Xtend 2 at DemoCamps during the last month (June 2011).
When a simple refactoring like “rename field” has been done on one branch it can be very hard to merge the changes into the other branches. (Extract method is much harder as the merge tools don’t seem to match the unchanged blocks well)
Now in my dreams, I am thinking of a tool that can record (or work out) what well defined refactoring operations have been done on one branch and then “replay” them on the other branch, rather than trying to merge every line the refactoring has affected.
see also "Is there an intelligent 3rd merge tool that understands VB.NET" for the other half of my pain!
Also, has anyone tried something like MolhadoRef (blog article about MolhadoRef and refactoring-aware SCM)? This is, in theory, refactoring-aware source control.
You could use Coccinelle to do the same kind of refactoring operations on different branches. It will not record or figure out what is being done by itself; you have to explicitly tell it what to do, but other than that it will more or less effortlessly apply the same refactoring to as many branches as you point it at.
This tool has been used in the Linux kernel for updating API usage, etc.
To quote from its web page:
"Coccinelle is a program matching and
transformation engine which provides
the language SmPL (Semantic Patch
Language) for specifying desired
matches and transformations in C code."
Darcs supports a 'token replace' operation in a commit, which replaces all instances of one token with another, and merges as you'd want it to.
Araxis Merge doesn't understand common refactoring but it is the only three way merge tool that I've used. It is available for both the Mac and Windows, and it supports an Automation API so I would imagine that you could do what you want with that if you were so inclined. For the record I have no connection with Araxis other than I've used their product.
Plastic SCM's (www.plasticscm.com) 3-way merge tool implements Xmerge, which is the only one able to assist you in merging code that has been moved.
There are now some better merge tools (for example SemanticMerge) that are based on language parsing, designed to deal with code that has been moved and modified. JetBrains (the creator of ReSharper) has just posted a blog on this.
There has been a lot of research on this over the years, and at last some products are coming to market.
On Linux you can use Meld, or on Windows, WinMerge.
In any case, both tools only "understand" lines of text. Refactoring requires a way of understanding the code, which is beyond any merging/comparing tool that I know of.