Version control for InterSystems Ensemble/Caché [closed] - version-control

Closed 10 years ago.
I'm in a group which is starting to develop using InterSystems Ensemble (an integration framework built on top of InterSystems Caché).
InterSystems has not made the Ensemble Management Portal source-control-aware, and this seems to be a source of problems for the development team that we would like to address.
I would like to know which version control system you are using for Ensemble/Caché and how you are structuring your development process around it.

I've found VC/m, a version control system designed for Caché.
Feel free to add your comments if you have had any experiences with it.

Another alternative seems to be TrackWare, which is also designed specifically for Caché.

If you are not afraid of development work, you can hook Studio into your current source control tool yourself. Caché provides hooks that let you detect modifications to files and interact with your source control tool.
Here is a link to a PDF that describes the basics:
Using the Studio Source Control Hooks
Of course, with this solution you will have to do a lot of the work on your side.

I'm using Mercurial, and though I use a Caché Studio source control hook (I'm not using Ensemble), I think basically the same solution would work for you.
The key is that it's distributed source control. So all the hook does is, on a save, export the current file to a folder on my hard drive, and check it in to my local repository. When things are working right locally, I push it to the central repository - in other words, I just use distributed source control in a normal way.
It's nice to commit each save since this gives me a way to roll things back if I mess something up, but it isn't really necessary. You could write something that pushes the code out to your local repository when you call it from the Cache command prompt.
With distributed source control the fact that check-in and check-out features aren't supported doesn't matter, you handle those issues by merging when you push to the central repository (or however you decide to structure your repositories).
One warning: Caché class definitions are exported as XML in a format you don't define. The export includes a timestamp of when the file was generated and a last-modified date, and these fool the source control system into thinking files have changed when they have not. So you will have to parse the XML at least enough to strip those out; I don't know of a flag to prevent them from being generated in the first place.
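A minimal sketch of that cleanup-plus-commit step, assuming the volatile fields are TimeCreated/TimeChanged elements and a ts attribute on the root of the export (those names are assumptions based on typical Caché class exports, so adjust them to whatever your exports actually contain):

    # Hypothetical helper: strip volatile timestamps from a Cache XML export and
    # commit the result to a local Mercurial repository.
    import subprocess
    import xml.etree.ElementTree as ET

    def strip_volatile_fields(xml_path):
        """Remove fields that change on every export but carry no real content."""
        tree = ET.parse(xml_path)
        root = tree.getroot()
        root.attrib.pop("ts", None)               # assumed export timestamp attribute
        for item in root:                          # each exported class/routine element
            for tag in ("TimeCreated", "TimeChanged"):
                for elem in item.findall(tag):
                    item.remove(elem)
        tree.write(xml_path, encoding="utf-8", xml_declaration=True)

    def commit(xml_path, message="Auto-commit from Studio save"):
        strip_volatile_fields(xml_path)
        subprocess.run(["hg", "add", xml_path], check=True)
        # hg exits non-zero when there is nothing to commit, so don't treat that as fatal.
        subprocess.run(["hg", "commit", "-m", message, xml_path], check=False)

    if __name__ == "__main__":
        commit("MyPackage.MyClass.xml")            # placeholder export file name

The Studio hook itself then only needs to export the saved item to that file and call the script.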

Late reply, but anyway: you can take a look at the CodeTools from Synerva. CodeControl works as a Studio plugin.

Caché Source Control
The Best Solution!
Good luck!

Synerva's CodeTools offer a pretty good solution for that. I have been using them on several projects for quite a while.

Related

GitHub vs Google Code for a hobby project [closed]

Closed 9 years ago.
Note: I have seen this question and tried to take as much from it as possible, but I believe my context is different.
I am working on a small-ish project; call it Foobar. I want to get this one done in a more organised way. I've tried a few projects before, mostly as an unorganised programming-as-a-light-hobby student, and 90% of them died because I either failed to document them at all or simply lost them.
As such, I've been thinking about getting version control and hosting going. Not only will it keep me more organised, but (a big if here) if the project ever reaches a usable state, it will be easier for people to get hold of.
The two places I'm considering are Google Code and GitHub. From the question I linked:
Google Code:
As with any Google page, the complexity is almost non-existent
Everyone (or almost everyone) has a Google account, which is nice if
people want to report problems using the issues system
GitHub:
May (or may not) be a little more complex (not a problem for me though) than Google's pages but...
...has a much prettier interface than Google's service
It needs people to be registered on GitHub to post about issues
I like the fact that with Git, you have your own revisions locally
From this I'm leaning towards GitHub, as Google Code doesn't look appealing to me.
For a small hobby project - basically making community features irrelevant - are there features that should take me over to one side or the other?
I prefer Google Code since it's just easier for my small personal projects. At the end of the day, for free projects, it's hard to steal time from family, friends or other commitments, and the key to making small free projects a success is being realistic with your time. (Otherwise, you get the "80% done" problem.)
Google Code now has Git support.
The biggest advantage of Google Code is that you don't need a website:
- The front page of the project is enough.
- You can add simple binary downloads in the Downloads section.
- In comparison, GitHub's interface is really confusing to non-programmers. Your front page is full of technobabble, so unless it's a coder's tool, you'll need a separate website.
- Marketing is really good: you get a good rank on Google, and you'll often be picked up and sometimes reviewed by other download sites. There's no sense donating your time if no one can find your project.
If it is entirely a coder's tool (not just a handy IT tool), then perhaps GitHub is better.
You say "I believe my context is different", but don't give any reasons why it is. As such, I can't offer you any specific suggestions other than the generic pros and cons, which are outlined in various documents and tutorials online.
My suggestion: pick a program first (git, Mercurial, or SVN) and use it. Find a hosting site that supports the software (at the time of this answer, GitHub for git, BitBucket or Google Code for Mercurial, Google Code for SVN) and use it. If you run into problems, switch to another one.
I've used all three, and typically the problem isn't the hosting, but the fact that you need to learn the program itself. All of the hosting providers listed here will suit you fine until you have a specific reason why one doesn't.
I would go with GitHub. The single reason for this is that Google Code shows your email address and your full name (name only if you have Google+, I think), and you cannot disable this at the moment.
Let's split the problem into two parts: for developers and for users.
In fact, if we consider just end users, both Google Code and GitHub have friendly interfaces, and as we all know, Google is better known among those who do not program.
But for programmers, Git is more fashionable and more comfortable.
So, personally, I would choose Google Code if I were planning an end-user-oriented product, and GitHub if I wanted to involve lots of potential collaborators or were developing a purely programmers' product, like an API.

In what ways is Mercurial better/worse than TFS? [closed]

Closed 10 years ago.
I've just joined a new company and at the moment we're using Microsoft SourceSafe as our repository. The settings aren't ideal and it's proving to be a big pain in the neck.
I've recently used Mercurial and thought it was amazing, so I'm advocating switching to that, but it looks like the company already has a Team Foundation Server licence and wants to use that instead.
Can anyone give me a list of points where one is better than the other? I've not used TFS and so I don't know what it's good/bad at.
You cannot directly compare TFS and a DVCS.
If your company leans toward TFS, that may be because of the other features TFS comes with (data collection, reporting, and project tracking, all well integrated with Microsoft products).
On the pure version-control side, Team Foundation Server 2010, with its Team Foundation Version Control (TFVC), introduces branches as first-class citizens.
See Team Foundation Server and branching characteristics, compared to others.
I still find their branching models more complex than a Mercurial or Git one.
See TFS2010 Branching into a subfolder of another branch vs. Guide to Branching Model in Mercurial (and this SO question which also details merges and branches with DVCS)
That being said, it remains a CVCS (Centralized VCS), meaning you get different working processes than with a DVCS: see Describe your workflow of using version control (VCS or DVCS).
The true killer feature of a DVCS remains its merge capability (simpler and faster than any CVCS).
But introducing a DVCS in a corporate environment remains hard.
I recommend Joel Spolsky's Hg Init (http://hginit.com) for a list of very good reasons to switch to distributed version control.
I have found a few gotchas with TFS that make it a little different from other CVCSs.
TFS is very difficult to use outside of Visual Studio. Even diffing versions is done inside VS. Personally, I only like to use VS for writing code.
We have had lots of issues with DLLs and other binary files not updating to the latest version.
TFS makes all your files under version control read-only, which makes modifying files outside of VS very painful (see the sketch after this list for one way around it). In fact, this is still causing issues with our Silverlight projects in our Continuous Integration build in TFS.
The TFS command-line tool is not easy to use. (Personally, I like to use the command line.)
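A rough sketch of a workaround for the read-only gotcha above: recursively clear the read-only flag so files can be edited outside Visual Studio. This only touches the OS-level attribute (TFS will still consider the files not checked out), and the workspace path is a placeholder.

    # Clear the read-only attribute that TFS sets on files in a server workspace,
    # so they can be edited outside Visual Studio. The workspace path is a placeholder.
    import os
    import stat

    def make_writable(root):
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                mode = os.stat(path).st_mode
                os.chmod(path, mode | stat.S_IWRITE)   # drop the read-only flag

    make_writable(r"C:\workspaces\MyProject")           # hypothetical workspace path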
Background:
My company switched from SVN to TFS, and I use Mercurial/Git for my side projects. I also followed this blog about using Mercurial with TFS, and it has made my work with TFS much more enjoyable.
TFS is an Application Lifecycle Management tool, not only a source code repository/versioning system.
Its strengths are:
- Its natural integration into Visual Studio (+100)
- Its full application-lifecycle support, from work items through QA acceptance
- Its integration with MS Project/SharePoint, and all the other hoo-has you get
- TFS 2012 has added support for "Local Workspaces", which allow offline working, while still offering "Server Workspaces" similar to TFS 2010
- Diff on every check-in/commit
The source control side of it is also very strong. Personally, though, as long as I can see the entire history, not lose code, and not have my code "stepped on", I couldn't give a darn beyond that.
I've been using TFS since 2008, and the latest round of improvements further demonstrates Microsoft's commitment to evolving their products and keeping up with industry changes. Personally I love it, but I stay in the Microsoft environment (which I also love); outside of that, it may not meet everyone's needs.
Now, a few days into working with Mercurial professionally (Bitbucket / Mercurial / TortoiseHg / VisualHG), I have to say the tools seem a bit dated. The integration with Visual Studio is like lukewarm coffee (ho-hum), and the Explorer integration takes me back to "the good ol' days" when I was lucky to NOT be working on Visual SourceSafe.
Another thing to take note of is the ease of migrating from Visual SourceSafe to TFS; it's fairly painless. I recently moved my last company's entire VSS history into TFS, and it took just a couple of command-line utilities and an overnight run to get all the change history moved over. I was shocked (as were my colleagues) at how easy the migration was; it even kept all the history since the beginning (by request of the powers that be).
I'm definitely biased, having worked with MS tools for a long time, but there's not much to source control as long as it works.
If your organization wants to truly manage all aspects of application development, and they haven't got integrated tools or processes yet, TFS will afford them the ability to grow and manage from the get go.
Start with source control, and end up with specs originating in MS Project, tied to work items, tied to unit tests, tied to acceptance tests, tied to automated builds and deployments.
And lastly: burn-down and velocity charts.

How can I build something like Amazon S3 in Perl? [closed]

Closed 10 years ago.
I am looking to write a file storage application in Perl, similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's written in Ruby, it's old, and it isn't built for high loads. I am not really sure which modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are a lot, but I could start simple and then add more once I get it going):
Easy API implementation for client-side apps (maybe REST? a rough sketch of that surface appears after this list).
A centralized database server for the user DB (maybe PostgreSQL?).
Logging of all connections, bandwidth used, and pretty much everything else to a centralized server (maybe PostgreSQL again?).
Easy server-side configuration (config file(s) stored on the servers).
A web-based control panel for admins and users to view logs (could work just by running queries against the databases).
Fast.
High uptime.
Low memory usage.
Some sort of load distribution/load balancing (maybe DNS-based, or Pound, or Perlbal, or something else?).
Maybe a cache of some sort (memcached or Perlbal or something else?).
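To make the API requirement concrete, here is a very small sketch of the PUT/GET surface an S3-style store exposes. It uses only the Python standard library so it stays self-contained; the real service would of course be written in Perl (for example on top of a PSGI framework) and would need authentication, bucket handling, logging, and load balancing, none of which are shown here.

    # Minimal object-store sketch: PUT stores a body under /bucket/key, GET returns it.
    # STORAGE_ROOT is a placeholder, and no auth or path-traversal checks are done.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    STORAGE_ROOT = "/var/storage"

    class ObjectStoreHandler(BaseHTTPRequestHandler):
        def _object_path(self):
            # Map /bucket/key onto a path under STORAGE_ROOT.
            return os.path.join(STORAGE_ROOT, self.path.lstrip("/"))

        def do_PUT(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            path = self._object_path()
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "wb") as f:
                f.write(body)
            self.send_response(200)
            self.end_headers()

        def do_GET(self):
            path = self._object_path()
            if not os.path.isfile(path):
                self.send_response(404)
                self.end_headers()
                return
            with open(path, "rb") as f:
                data = f.read()
            self.send_response(200)
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        HTTPServer(("", 8080), ObjectStoreHandler).serve_forever()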
Thanks in advance
Perhaps MogileFS may help?
MogileFS homepage
Contributing to MogileFS
Google code repo (however note sixapart repo in contributing link).
Also, there was a recent discussion about MogileFS performance on the Google Groups mailing list which may be of interest to you.
/I3az/
Here is a Ruby implementation I found:
https://github.com/jubos/fake-s3
Hope that helps,
mike
I have created a super simple server; see the put routine in Photo::Librarian::Server.pm. It supports an s3cmd put of a file, nothing more for now.
https://github.com/h4ck3rm1k3/photo-librarian-server
https://github.com/h4ck3rm1k3/photo-librarian-server/commit/837706542e57fbbed21549cd9e59257669d0220c

How do you convert your office to build automation? [closed]

Closed 10 years ago.
The title should say it all; then I can solidify two more ticks on the Joel Test.
I've implemented build automation using a makefile and a python script already and I understand the basics and the options.
But how can I, the new guy who reads the blogs, convince my cohort of its inherent efficacy?
Ask for forgiveness, instead of permission.
Get it working in private (which it looks like you have) and then demonstrate its advantages.
One thing that always gets people is using CruiseControl's Tray utility - people love it when they can see, through their system tray, that the build succeeded. (this is assuming you're in a Windows environment, that CruiseControl will work with your existing systems, etc.)
NOTE: If asking for forgiveness instead of permission will result in instant termination, you might not want to do the above. You might also want to look for work somewhere else. Your mileage may vary.
Implement build lights ... we did something similar with lava lamps and it was a huge hit. For added bonus marks give every developer a red light over their desk and have the right light come on when the build breaks.
Grab an old spare computer and put it in the corner of your office. Set it up to build your project. Write a small script (a rough sketch follows this answer) that does the following:
Get the latest version of all files.
If anything changed, build.
Notify you if there's a failure.
When you catch a broken build, compassionately get it fixed.
Consider adding a step to run unit tests, too.
If you can avoid scolding people for their mistakes, pretty soon people will be impressed with how reliable the build has been since you arrived. Build from there.
The trick is to spend very little of your time to generate a lot of value for your team, without pissing anyone off.
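A rough sketch of that small script, assuming the spare box polls a Git repository and emails on failure; the repository location, build command, VCS commands, and addresses are all placeholders for whatever your project actually uses.

    # Poll the repository, build when something changes, and email on failure.
    # REPO_DIR, BUILD_CMD, and NOTIFY are placeholders.
    import smtplib
    import subprocess
    import time
    from email.message import EmailMessage

    REPO_DIR = "/srv/autobuild/checkout"
    BUILD_CMD = ["make", "all"]          # could also run the unit tests here
    NOTIFY = "build-watcher@example.com"

    def current_revision():
        return subprocess.run(["git", "rev-parse", "HEAD"], cwd=REPO_DIR,
                              capture_output=True, text=True).stdout.strip()

    def update():
        """Pull the latest revision; return True if anything changed."""
        before = current_revision()
        subprocess.run(["git", "pull", "--ff-only"], cwd=REPO_DIR, check=True)
        return current_revision() != before

    def build():
        """Run the build; return (succeeded, combined output)."""
        result = subprocess.run(BUILD_CMD, cwd=REPO_DIR,
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def notify(output):
        msg = EmailMessage()
        msg["Subject"] = "Build FAILED"
        msg["From"] = NOTIFY
        msg["To"] = NOTIFY
        msg.set_content(output[-5000:])        # last few KB of the build log
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

    if __name__ == "__main__":
        while True:
            if update():
                ok, output = build()
                if not ok:
                    notify(output)
            time.sleep(300)                    # poll every five minutes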
Set up an autobuilder. Once you have it building and running the tests automatically, it won't matter if you convince other people to save their own time :)
If you're using git for version control, here's an autobuilder that automatically finds the exact checkin that started causing the tests to fail: http://github.com/apenwarr/gitbuilder/
I would take a spare box, install a continuous integration server (Hudson or CruiseControl in the Java world) and set up a job that builds your application each time someone checks in some code.
You can either try to convince your coworkers or just wait until someone breaks the build. In the latter case, just send the following email:
to: all developers
Guys,
I've just noticed that I can't build our software using the
latest version because of the following error:
...
If you want to be notified by our continuous
build system (attached is the mail I received when
it failed to build our application), just let me know.
Usually it doesn't take that long until everyone is on the list.
I would set up the automated build as a nightly process such that every night it grabs the most recent code revision, builds it, and generates a report. Now you will know first thing every morning whether or not the build is broken, and if it is, you can notify the team. If broken builds are much of a problem on your project, people will probably start coming to you first to find out if it is safe to sync to the latest code, since you will be the person who tends to know on any given day whether or not the build is broken (by the way, an automated suite of unit tests helps a great deal with this as well). With any luck, people will start to realize that your nightly build is a useful thing to have, and you'll be able to just set up your daily build report as an email that goes out.
James Shore has two great links:
For hardware
http://jamesshore.com/Blog/Continuous-Integration-on-a-Dollar-a-Day.html
For "Humanware"
http://jamesshore.com/Change-Diary/
(The history of how he did it. The read is long, but changing an organization is harder.)
When the build is needed by the team on a regular basis, it's pretty easy. You appoint a team member (rotated periodically) to do the build. If the build process is complicated enough, the team will on its own come up with a way of at least partially automating the build. In the worst case, you'll have to automate the build yourself, but no-one will be against the automation.
Demonstration is the best, and really the only way to change anyone's mind who is resistant to doing things differently.
Here we showed how useful automated builds are by giving QA the ability to grab a green-light build straight from the build server, install it, and test it without any direction from the developers. The developers are able to continue working, knowing that the build at least passes its unit tests. It helped integrate testing and development, reducing the time bugs stayed in the system.

Telligent's Community Server [closed]

Closed 10 years ago.
The company I work for wants to add blog functionality to our website, and they were looking to spend an awful amount of money to have some crap built on top of a CMS they purchased (Sitecore). I pointed them to Telligent's Community Server, and we had a sales-like meeting today to get the marketing folks on board.
My question is whether anyone has had issues working with Community Server, skinning it, and extending it.
I wanted to explain a bit why I am considering Community Server: the company wants multiple blogs with multiple authors. I want to stay out of the admin side of this as much as possible, and I didn't think there were many engines where having multiple blogs didn't mean database work. I also like the other functionality that Community Server provides and think the company will find it useful, particularly the media section, as right now we have a really shoddy way of dealing with whitepapers and such.
edit: We are actually using the Sitecore blog module for a single blog on our intranet (which is what the CMS is serving). Some reasons why I don't like it for our public site: the sites are on different servers, it doesn't support multiple authors, there is no built-in syndication, it feels a little flimsy to me from looking at the source, and I personally think the other features of Community Server make its price tag worth it.
another edit: We need to stick to .NET software that runs on SQL Server in my company's case, but I don't mind seeing recommendations for others. ExpressionEngine looks promising; I will try it out on my personal box.
I've done quite a few projects using Community Server. If you're okay with the out-of-the-box functionality, or you don't mind sticking to the version you start with, I think you'll be very happy.
The times I've run into headaches using CS are when the client wants functionality CS does not provide, but also insists on keeping the ability to upgrade to the latest version whenever Telligent releases an update. You can mostly support that by making all of your changes either in a separate project or by only modifying aspx/ascx files (no code-behinds). Some kind of merge is going to be required, though, no matter how well you plan it out.
Community Server itself has been very solid for me, but if all you need is a blogging engine then it may be overkill. Skinning it, for example, is quite a bit of work (despite their quite powerful Chameleon theme engine).
I'd probably look closer at one of the dedicated blog engines out there, like BlogEngine.NET, dasBlog or SubText, if that's all you need. Go with Community Server if you think you'll want more "community-focused" features like forums etc.
You can also take a look at Telligent Graffiti CMS.
http://graffiticms.com/
It supports multiple blogs and authors.
Update: It's now open source and available at http://graffiticms.codeplex.com/
Community Server 2008.5 lets you add several members who can post articles. Also, with Community Server 2008.5 you now have wikis along with the forums and blogs. It probably has one of the better web-based admin control panels I've seen in a while; it lets you easily change several things, including the site's theme (or skin). To me it is one of the most scalable applications I have seen in a while. We are using it for our site http://knowledgemgmtsolutions.com.
Skinning is pretty straightforward, and the sidebar widgets aren't very difficult to create (if you don't mind building controls in code). The widgets also let users customize them in the control panel very easily. I doubt you'll find a strong community of widget builders for Community Server, however; nothing compared to the dev community for blog platforms like WordPress.
I recommend starting templates from scratch and adding in CS controls as needed, to get the markup you prefer for styling and to use only what you need.
Setting up different roles for users to post to different blogs is also very easy and requires no coding. You can have blog groups, and allow only certain users to post to certain blogs.
Sitecore's Forum module is powered by Community Server and integrated with Sitecore CMS.
Expression Engine with the Multi-Site Manager works great for that kind of situation.
Have you had a look at the Shared Source blog module for Sitecore?