I have found the MongoDB driver for Go in two places:
http://godoc.org/gopkg.in/mgo.v2 and http://godoc.org/gopkg.in/mgo.v2/bson
http://godoc.org/labix.org/v2/mgo and http://godoc.org/labix.org/v2/mgo/bson
Are they the same distribution and version of the MongoDB driver for Go?
Why are there two pathnames for the same package?
Which one of the two shall I use?
Thanks.
The package labix.org/v2/mgo was moved to gopkg.in/mgo.v2 according to
a commit in the gopkg.in/mgo.v2 source.
The author of mgo also created gopkg.in. He moved several of his packages from his "vanity" path on labix.org to gopkg.in.
The source for labix.org/v2/mgo is at http://bazaar.launchpad.net/+branch/mgo/v2/files/head:/. The most recent update is July 1, 2014.
The source for gopkg.in/mgo.v2 is at https://github.com/go-mgo/mgo/tree/v2. This tree is a continuation of bazaar.launchpad.net/+branch/mgo/v2. The most recent update is June 9, 2016.
Use gopkg.in/mgo.v2.
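For illustration, a minimal sketch of using the current import path (the server address, database, and collection names here are assumptions, not part of the original question):

package main

import (
	"fmt"
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	// Dial a local MongoDB instance (the address is an assumption).
	session, err := mgo.Dial("localhost")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Query a hypothetical "people" collection in the "test" database.
	c := session.DB("test").C("people")
	var result struct{ Name string }
	if err := c.Find(bson.M{"name": "Ada"}).One(&result); err != nil {
		log.Fatal(err)
	}
	fmt.Println("Found:", result.Name)
}

Note that gopkg.in/mgo.v2/bson ships in the same tree as the driver, so both packages from the question resolve from one place.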
You can find more information on the official page, which links to gopkg.in/mgo.v2.
From what I can see, labix.org/v2/mgo is the older import path for the driver, whereas gopkg.in/mgo.v2 is the current one.
Go doesn't have traditional package distribution or versioning. Therefore, if you need a major refactoring and want to break backward compatibility, a common approach is to publish the new version at a different import path (gopkg.in makes this explicit by encoding the major version in the path, as in gopkg.in/mgo.v2).
I guess that's what happened here.
In the sidekick, I can create as many versions of a page as I want and can restore them as well.
What I am looking for is how to limit the number of page versions that can be created. Suppose that after 5 versions I want to display an error: "more versions are not allowed".
I followed this link for reference, but no luck: http://www.wemblog.com/2012/08/how-to-work-with-version-in-cq.html
Thanks in advance.
You have to create an osgi:Config node in the repository for this (com.day.cq.wcm.core.impl.VersionManagerImpl). You can control the number of versions created on activation by setting the versionmanager.maxNumberVersions property.
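For example, a minimal sketch of such a configuration as a sling:OsgiConfig node (the path under /apps is an assumption, and the exact property set may vary by CQ/AEM version), e.g. at /apps/myproject/config/com.day.cq.wcm.core.impl.VersionManagerImpl/.content.xml:

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
          xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="sling:OsgiConfig"
    versionmanager.createVersionOnActivation="{Boolean}true"
    versionmanager.purgingEnabled="{Boolean}true"
    versionmanager.maxNumberVersions="{Long}5"/>

Note that, as the next answer explains, this controls purging of old versions rather than blocking the creation of new ones.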
There is no proactive way to stop versions from being created in the AEM repository. The configuration you are referring to is from: https://docs.adobe.com/docs/en/aem/6-2/deploy/configuring/version-purging.html#Version Manager
versionmanager.maxNumberVersions (int, default 5)
on purge, any version older than the n-th newest version will be removed. If this value is less than 1, purging is not performed based on the number of versions
This is the setting for the version purge task, which retains a maximum of n versions after purging, where n is the number defined in the above config.
A preemptive version disabler won't work because versions are created asynchronously by background tasks like workflows. These tasks would fail without any feedback to the user, which would be problematic in most scenarios.
If you want to change the sidekick to disallow version creation, then you will have to rewrite core logic of the UI, which can be a big task. Version purging is the recommended way to set up your instance to limit the number of versions.
I built random forest models using ml.classification.RandomForestClassifier. I am trying to extract the prediction probabilities from the models, but I only see the predicted classes instead of the probabilities. According to this issue link, the issue is resolved and it leads to this GitHub pull request and this. However, it seems it was resolved in version 1.5. I'm using AWS EMR, which provides Spark 1.4.1, and I still have no idea how to get the prediction probabilities. If anyone knows how to do it, please share your thoughts or solutions. Thanks!
I have already answered a similar question before.
Unfortunately, with MLlib you can't get the per-instance probabilities for classification models up to and including version 1.4.1.
There are JIRA issues (SPARK-4362 and SPARK-6885) concerning this exact topic, which were IN PROGRESS as I wrote this answer. Nevertheless, the issue seems to have been on hold since November 2014:
There is currently no way to get the posterior probability of a prediction with a Naive Bayes model during prediction. This should be made available along with the label.
And here is a note from Sean Owen on the mailing list on a similar topic regarding the Naive Bayes classification algorithm:
This was recently discussed on this mailing list. You can't get the probabilities out directly now, but you can hack a bit to get the internal data structures of NaiveBayesModel and compute it from there.
Reference: source.
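As a rough illustration of that hack, here is a sketch only (it assumes the multinomial model, that the pi and theta members are accessible in your Spark build, and the helper name is hypothetical):

import org.apache.spark.mllib.classification.NaiveBayesModel
import org.apache.spark.mllib.linalg.Vector

// Hypothetical helper: recompute class posteriors from the model's
// internal log-priors (pi) and log-conditionals (theta).
def posteriors(model: NaiveBayesModel, features: Vector): Array[Double] = {
  val x = features.toArray
  val logProbs = model.pi.indices.map { i =>
    model.pi(i) + x.indices.map(j => x(j) * model.theta(i)(j)).sum
  }
  // Normalize with a numerically stable softmax.
  val max = logProbs.max
  val unnorm = logProbs.map(lp => math.exp(lp - max))
  val total = unnorm.sum
  unnorm.map(_ / total).toArray
}

The resulting probabilities line up with model.labels, so labels(i) gets the i-th entry of the returned array.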
This issue has been resolved with Spark 1.5.0. Please refer to the JIRA issue for more details.
Concerning AWS, there is not much you can do about that right now. A possible solution is to fork the emr-bootstrap-actions for Spark and configure it for your needs; then you'll be able to install Spark on AWS using the bootstrap step.
Nevertheless, this might be a little complicated.
There are some things you might need to consider:
Update the spark/config.file to install your Spark 1.5. Something like:
+3 1.5.0 python s3://support.elasticmapreduce/spark/install-spark-script.py s3://path.to.your.bucket.spark.installation/spark/1.5.0/spark-1.5.0.tgz
The file listed above must be a proper build of Spark located in a specified S3 bucket you own for the time being.
To build your Spark, I advise reading the examples section about building-spark-for-emr and also the official documentation. That should be about it! (I hope I haven't forgotten anything.)
EDIT: Amazon EMR release 4.1.0 offers an upgraded version of Apache Spark (1.5.0). You can check here for more details.
Unfortunately this isn't possible with version 1.4.1. If you can't upgrade, you could extend the random forest class and copy some of the code I added in that pull request, but be sure to switch back to the regular version once you are able to upgrade.
Spark 1.5.0 is now supported natively on EMR with the emr-4.1.0 release! There is no more need to use the emr-bootstrap-actions, which, by the way, only work on 3.x AMIs, not emr-4.x releases.
We want to implement semantic versioning in our process. We are at version 1.0.0, and we have added two new functions, which we will deliver soon.
The question is: should we name our next version 1.1, or should we name it 1.2 because we have created two new functions?
In general, if we add n new functions, should we increment the minor component of the version by n, or should we increment it by only one per delivery?
The version number does not depend on how many functions you have written in that particular release.
If your current version is 1.0.0, then the next should be 1.0.1 or 1.1.0, depending on the naming rules you have set for your product and its dependencies. Under semantic versioning, new backward-compatible functionality bumps the minor component by one per release, not per function: adding two new functions to 1.0.0 yields 1.1.0, not 1.2.0.
There is no absolutely right solution for version numbers.
The way most people I know do it is by increasing the number for every version they plan on making available.
Microsoft themselves, for example, use a "major, minor, build, and revision" scheme for their version numbers. Just don't change the way you assign version numbers after deciding on one, because then they become useless :-)
I'm having some issues with the installation of Rational Team Concert on my server.
The thing is that when I upload any kind of change to the server, it changes the last-modified attribute of the file, but it shouldn't.
Is there a way to avoid this behavior?
Thank you in advance!
This is something that we have tried to add to RTC SCM (and we still plan to). However, we found that it needs to be an option on load/update.
There are numerous details and discussions available in this work item on jazz.net.
Regarding the timestamp, setting aside the fact that relying on it in a version control tool isn't always considered a best practice (see "What's the equivalent of use-commit-times for git?"), it is actually a complex issue:
An SCM loader wouldn't use just the timestamp to determine which files have changed (Task 179263).
You can have various requirements for that timestamp (like in Defect 159043, where the file timestamp of the modified file on disk is that of when it was delivered, not when it was accepted). The variable JAZZ_CCM_SKIP_MOD_TIME=true is mentioned there, so check if that could improve your specific case; see the sketch after this list.
It is all based on the assumption that the timestamp is correctly set by the local workstation, which isn't always true, as illustrated in Task 77201.
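For example, a sketch (where exactly you set the variable depends on how your RTC client or build is launched):

# Set before launching the RTC client or build from this shell,
# so the loader skips rewriting file modification times.
export JAZZ_CCM_SKIP_MOD_TIME=true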
I am using VS 2008 (Professional Edition) with SP1. I am new to ADO.NET Data Services and am watching Mike Taulty's videos.
He used the [DataWebKey] attribute to specify the key field, referencing the namespace Microsoft.Data.Web. To follow that example I am trying to reference the same assembly, but it is not found on my system.
How do I fix this?
Looks like you're not the first person to come across this.
http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2008/05/19/10424.aspx
Apparently you should use DataServiceKey, which is in System.Data.Services.Common.
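For example, a minimal sketch (the Customer entity and its properties are hypothetical):

using System.Data.Services.Common;

// [DataServiceKey] replaces the older [DataWebKey] attribute
// used in Mike Taulty's earlier examples.
[DataServiceKey("ID")]
public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
}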
I see that Mike's videos mostly date from mid-2008. ADO.NET Data Services has changed since then, which may be why you're unable to find the right reference.
I think you're better off trying to find some more recent material, preferably from the last six months.