What is continuous integration and what are its benefits?
Martin Fowler's article on Continuous Integration is by far the best explanation I have read so far.
At its simplest, continuous integration is a mechanism that rebuilds your project whenever a check-in is made to some revision control system (CVS, etc.). This can be extended, though, to include running tests, all the way through to generating a CD image, mounting it in VMs, installing the product, and running full tests on it.
It has the simple advantage of highlighting as early as possible when code changes break the system. Not only does it detect breaks in the code, it highlights who caused the break. This psychological effect is very effective in encouraging good testing prior to check-in!
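To make that concrete, here is a minimal sketch of such a mechanism in Python. It assumes a Subversion checkout; the paths and the build.sh/run_tests.sh scripts are placeholders, not anything from the answers above.

    import subprocess
    import time

    WC = "/srv/ci/checkout"   # hypothetical CI working-copy path
    POLL_SECONDS = 60

    def revision():
        # svnversion prints the working copy's current revision number.
        return subprocess.run(["svnversion", WC], capture_output=True,
                              text=True, check=True).stdout.strip()

    def step(cmd):
        # Run one build/test step inside the checkout; True on success.
        return subprocess.run(cmd, cwd=WC).returncode == 0

    last = revision()
    while True:
        subprocess.run(["svn", "update", WC], check=True)
        rev = revision()
        if rev != last:
            # A check-in arrived: rebuild, test, and name the culprit on failure.
            if not (step(["./build.sh"]) and step(["./run_tests.sh"])):
                blame = subprocess.run(["svn", "log", "-r", "HEAD", "-q", WC],
                                       capture_output=True, text=True).stdout
                print(f"Revision {rev} broke the build:\n{blame}")
            last = rev
        time.sleep(POLL_SECONDS)

Real CI servers use post-commit hooks or repository notifications rather than polling, but the shape is the same: detect a check-in, rebuild, test, report who broke it.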
It is the practice of ensuring that all aspects of your software development process are lined up to permit the daily creation of a working version of your product. It is best known as part of Extreme Programming.
This involves things as far afield as build automation, automated testing, daily check-ins, using a source code repository, etc. But the ultimate goal is to help the entire project run according to core Agile Principles so that you deliver early and often. This, in turn, helps you leverage feedback from your users, etc.
+1 for the link to Fowler's page.
Personally, I just found it "nice" to know whenever something didn't compile, because we had the poor practice of having a single build (yes, we developed on the production build; we were awesome). We hadn't gotten to the integrated-testing phase before I left.
After a while, however, it did lessen the number of massive coding changes (compared to the "check in and pray my changes don't conflict" approach that had been rampant). Eventually, most developers started making small changes frequently, just to get confirmation from the CC.Net (CruiseControl.NET) tray icon.
Overall, I found it very comforting to know that we could send out a build immediately if we had to. Had we had just a few smoke tests integrated, I think the stress level would have been substantially lower.
Just to bring this up to date: by now there is a substantial difference between Continuous Integration (CI) and Continuous Delivery (CD). While most of the posts above describe CI, I'll try to show how CD extends the CI definition. Having all the tools needed to build a package and automatically deploy a new version of the app is the crucial part of CD. Add test automation (based on three levels of verification: general health checks, detailed statistics, and historical entries) and proper governance, and you have a really good piece of CD. It is only because of such an extended definition that building extraordinary cloud tools is possible; think of Mule ESB or esbeetle.com. For both of them CD is something natural, although only the second one supports both ESB and ETL components.
I hope this was helpful.
I ask because my team previously was trying to "do" Scrum. We had two-week sprints, but no releases to go with those sprints! There were several reasons for this, but a big one was that deployments took too long and were too complicated to do that frequently.
You sort of answered your own question: if deployments take too long, they can't be done frequently, so you can't have short iterations, you can't ship the code at any time, you can't demo progress at the end of an iteration, and so on. So, in your context, not having automated deployment was a major impediment that should have been identified and removed very early (through inspection and adaptation).
Back to the question now. Is automatic deployment a crucial practice? Well, as hinted, I'd say it depends on the context: a project with a simple and straightforward manual deployment process can probably live with it. Is automatic deployment a good practice? I think so: an automated process is typically faster and less error-prone, and humans obviously don't add much value doing something that can be automated (see also Three Strikes And You Automate).
I'd say yes, it's an essential practice. If deployments are too complicated and take too long, you've got a bigger problem, and I think you should sort that out; you have to do them sometime. Doing the work to make it possible to deploy at will can only help your project.
Being able to claim the "agile" label isn't important; having an automated, hands-off, repeatable build is the point.
Crucial? No, you can get away without it, but as you've found out, everything that you have to do manually slows down your cycle time.
You should be aiming to automate as much as possible: build, deployment, regression testing, and so forth, so that you're not unnecessarily delayed.
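As a sketch of what that automation might look like, here is a hypothetical one-shot deploy script in Python. The host name, paths, scripts, and health-check URL are all invented for illustration; the point is just that build, test, upload, restart, and smoke check run as a single repeatable, hands-off step.

    import subprocess
    import sys
    import urllib.request

    HOST = "app01.example.com"            # hypothetical target server
    RELEASE_DIR = "/var/www/myapp"        # hypothetical deploy path
    SMOKE_URL = f"http://{HOST}/health"   # hypothetical health-check endpoint

    def step(name, cmd):
        # Run one pipeline step; abort the whole deploy on any failure.
        print(f"== {name}")
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"{name} failed; aborting deploy")

    step("build", ["./build.sh"])         # placeholder build script
    step("tests", ["./run_tests.sh"])     # placeholder test suite
    step("upload", ["rsync", "-az", "--delete", "dist/",
                    f"deploy@{HOST}:{RELEASE_DIR}/"])
    step("restart", ["ssh", f"deploy@{HOST}", "sudo systemctl restart myapp"])

    # Smoke test: one cheap request confirms the new version actually serves.
    with urllib.request.urlopen(SMOKE_URL, timeout=10) as resp:
        assert resp.status == 200, "smoke test failed"
    print("deployed and smoke-tested")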
The idea of a sprint without a release is an ... interesting ... one. I can't say I've ever seen that tried before :-)
You have to clearly distinguish two things: a shippable product increment and an actual shipment. The first is what your team should produce every sprint; the second is what may happen with it if it makes business sense.
In other words: what the team produces each and every sprint must be a piece of completely working code, a new increment of whatever it is that you develop. It should be fully built and tested, which is why an automated build and test environment is a must if you are to do this. However, whether it should be deployed to production servers every sprint, and whether that deployment should be automatic, is a completely different matter that has nothing to do with the development process itself.
If it is a requirement that you deploy to the live production server every sprint (by all means a very good thing to do), then it would probably make sense to automate that, but whether the process is fully automatic or not shouldn't impede your team's ability to produce fully working code every sprint.
When starting a project and using source control, I find it hard to separate the things people are working on, so they either write duplicate code or disagree about what something should be named, and so on.
This problem diminishes over time, because once the general foundation is in place it's easier to separate the tasks so they don't overlap as much.
How do you manage working with source control in the beginning phase of a project?
EDIT:
I can see that this doesn't really have anything to do with source control as such, but it becomes more apparent when you have source control too. So the question becomes more along the lines of "how do you manage to separate the tasks so they don't overlap too much?". I think it's really hard, and I haven't really seen much written about how to do it.
Well, as far as source control goes, somebody needs to take the lead and set up the basic structure of the project, directories, etc. and communicate it to the team. On projects I work on, this is usually an architect or senior developer, someone who knows the best practices for project organization for the team/company.
With respect to avoiding having multiple people working on the same tasks, that's a project management function; someone needs to determine what tasks need to be done, and communicate it to the team. If you are working in an agile/scrum environment, the team may divide and hand out work items amongst themselves, but in either case you need to communicate to avoid doing the same work twice.
EDIT
To address the issue of multiple people working on the same task, I tend to work on smaller teams, 2-6 people; in this environment, I have had a lot of success with a scrum-influenced approach using the Crystal Clear methodology:
Architect(s)/designer(s) come up with high level design
Architect(s)/designer(s) define iterations/deliveries, the first of which is a "project skeleton" which consists of architectural and back-end components and a thin slice of the app
Lead person breaks up features into 1-3 day tasks/units of work (estimated)
Team meets and discusses priority, timing and dependencies of tasks, and divides up the first set of tasks
The team has brief daily meetings to discuss status/priorities and dependencies, and change direction if necessary
With larger projects/teams, you will almost certainly need someone whose main job is dedicated to tracking status, dependencies and conflicts.
I don't think source control has much to do with the problem of coordinating people's efforts. It can catch some conflicts when people erroneously try to modify the same files in different ways, but that's not as good as preventing conflicts, and even preventing conflicts does not per se ensure that everybody is working on what they should ideally be working on right now, in terms of priorities. Coordination is properly managed with practices (and perhaps tools, e.g. Pivotal Tracker, though using the right practices is even more important than using nice tools!) that specifically focus on ensuring coordination. For example, the practices that Tracker is designed to support and enhance, such as story-based iterative planning, and other compatible ones, such as stand-ups, offer ways to meet these needs.
Start with a base version that everyone uses: check it into the repository, then make incremental changes to it. Make sure everyone works on a different part of the code, commit every working change, and resolve conflicts as and when they occur. That is how I would do it.
Currently we use FogBugz for tracking issues and have found it to be OK. I'm looking for something else that allows end users to track their cases along with us, and something that actually works well with email. I've found a few alternatives that support those features, but they don't integrate with version control. We've got all the SVN hooks in FogBugz and we use them, but I haven't really found them all that useful. Has anyone found a really good reason to need version-control integration with a bug tracker?
Clearly, this kind of integration is not something that is essential to the operation of the software. With a bit of discipline every check-in can be accompanied with a bug number manually, and every bug resolution can manually have a version control tag added to it.
All else being equal, however, I personally will always prefer automation over "discipline of the users", because the latter will sooner or later let you down. Not because the users are malicious or incompetent, but simply because people cannot be 100% alert all of the time.
I find the integration of SVN with Trac very helpful. Through SVN hooks, commits to the repository that carry a ticket number insert a comment on the ticket, with a link to a nice visual HTML representation of that revision showing inserts, deletes, and diffs.
As a supervisor of a small team of programmers, I find this a helpful tool for doing code reviews, since I can verify that the commit truly addresses the associated issue. I wouldn't exactly call this integration essential, but it was a nice free extra on my issue tracker that I've grown to love.
It is absolutely critical for us.
Here is a typical commit log for one of our projects (sample):
Make sure filedes is cleared in child list prior to reallocating
When p->child->filedes is > 0, the child list is active and cannot
be collected.
[ Impact: Closes bug 123457 ]
Note the [ Impact: ] line, which could also be "Relates-To", "Caused" or any number of other things.
This lets us use simple greps and automated scripts, allowing the person committing to automatically close, or even re-open, a bug.
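A sketch in Python of what such a script could look like, using the "[ Impact: ... ]" convention from the sample log above; close_bug is a stand-in for whatever API or command-line tool your tracker actually provides:

    import re
    import subprocess

    # Matches the "[ Impact: Closes bug 123457 ]" convention shown above.
    IMPACT = re.compile(r"\[\s*Impact:\s*(Closes|Reopens)\s+bug\s+(\d+)\s*\]",
                        re.IGNORECASE)

    def close_bug(bug_id, action):
        # Placeholder: call your tracker's API or CLI here.
        print(f"{action} bug {bug_id}")

    # Read the message of the commit being processed (here: the latest one).
    log = subprocess.run(["git", "log", "-1", "--pretty=%B"],
                         capture_output=True, text=True, check=True).stdout

    for action, bug_id in IMPACT.findall(log):
        close_bug(bug_id, action.lower())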
Though we typically use Git and Mercurial, this sort of hook would work with (almost) any VCS, even proprietary ones that lack the modular plug-in system you would otherwise need.
If you think of your bug system as just another part of your VCS, it's really easy to see how they depend upon each other.
Other stuff, such as fetching patches submitted with bugs is possible, too.
It is a question of your code size and how many bugs you need to track.
And it is also really useful for non-coders in the organisation, i.e. managers and customer support. They can find answers to questions like "When and where was this bug fixed?"
I think it's helpful to distinguish between bugs found internally to the development organization, e.g. from peer code review, and bugs found by a test group external to the development organization.
The (small) benefit of coordinating version control with bugs found by an external test group is historical reference.
The larger benefit is in coordinating bugs found via peer code review with version control: by doing so you can certify that all code is peer-review-bug-free before releasing it to external test groups, a common requirement.
FYI, Code Collaborator from SmartBear, Inc. handles this nicely.
I have found version control integration to be extremely helpful in maintaining and managing multiple versions (stable, development trunk, etc.) of a project.
Using the version control integration and a bit of discipline from coders to reference bug tickets in commits (or some pre-commit hooks to forcibly require ticket references) has allowed us to quickly and easily generate lists of changesets that are required to fix any given bug. This is instrumental when merging the fixes into various stable branches of the code.
It's not a necessity, but it certainly makes life easier for release management.
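For the pre-commit variant mentioned above, a Subversion pre-commit hook can reject commits whose log message lacks a ticket reference. Here is a minimal sketch in Python; svnlook and its "log -t" subcommand are standard Subversion tooling, while the "#1234" ticket syntax is just an assumed convention:

    #!/usr/bin/env python3
    import re
    import subprocess
    import sys

    # A pre-commit hook is invoked with the repository path and transaction id.
    repo, txn = sys.argv[1], sys.argv[2]

    # svnlook reads the log message of the not-yet-committed transaction.
    msg = subprocess.run(["svnlook", "log", "-t", txn, repo],
                         capture_output=True, text=True, check=True).stdout

    if not re.search(r"#\d+", msg):   # assumed convention: "#1234"
        sys.stderr.write("Commit rejected: reference a ticket, e.g. '#1234'.\n")
        sys.exit(1)   # non-zero exit makes Subversion refuse the commit

    sys.exit(0)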
I've used SVN + Trac and Atlassian's Jira product with Fisheye SVN plugin and have found both tools to be very good. Trac seems to be a bit simpler, but very easy to use. Jira, in my opinion, had a nicer look and feel and quite a few more bells and whistles, but was almost too much at times.
We're currently using Mantis as our bugtracker, and we're pretty much sick and tired of it. The developers want more SVN integration, and the customers want an easier system to work with.
As such, we're looking for a new bugtracker and at the moment we're looking at Redmine. However, in its default setup it doesn't match our desired workflow, or at least not much better than Mantis does.
We have the following workflow, and would like a bugtracker to match it.
A bug is reported (often by the customer), and is considered 'new'.
These bugs are regularly reviewed and either acknowledged (it's a bug) or marked as a feature (the customer often needs to pay), in which case they're delayed until the financial part has been worked out.
The bugs are then assigned to and handled by a developer.
When finished, the bug is marked 'ready-for-review' (by another developer).
When reviewed, it's marked 'reviewed'.
When marked 'reviewed', the original developer places the new code in the staging environment and marks the bug 'ready-to-be-tested' (by the bug reporter).
The bug reporter marks the bug as 'resolved'.
When the fix is placed in production, the bug reporter closes the bug.
Of course, feedback is often required, especially during the early stages. We're looking for a way to distinguish between who is required to take the next step and who the bug is assigned to (the developer). We also want the customer to be able to do this through a simple GUI: asking them to change the assignee from their own account to the developer, let alone to a third party (think: design agency), is just too much to ask with the regular GUIs.
The GUI should show them what to do and which options there are, not make them search for them.
Does anyone have any experience with a bugtracker that works this way? Is our workflow really wack? How do you make sure everyone involved understands where the bug stands, and who is required to take which step?
Last year we had the same problem, and we figured out that the best solution for us was Jira.
With respect, our workflow is more robust and complicated than yours.
We have pretty much the same kind of workflow, which we manage using Redmine with email integration. The customer logs bugs directly into Redmine. A notification goes to the project manager, who decides which developer will work on the bug.
The developer opens the bug and puts it into the Investigating state.
If it's a feature, he replies stating the reasons and puts it into the Replied state, to be revisited later.
If it's a bug, the developer starts development; before this, he puts the bug into the Coding state.
Once coding is over, he changes the state of the bug to Review and the peer reviews happen.
If there is any rework, the developer changes the state to Rework.
Once everything is OK, the developer changes the state to Delivered.
QA verifies the bug and finally closes it by changing the state to Closed.
We've defined all of this workflow in Redmine and have been using it pretty effectively without any hassles. Email integration makes it easy for the project manager to track a bug whenever it changes state.
You can also create and save custom reports, which is a cool feature as well.
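Whichever tracker you pick, the workflow itself is just a small state machine, and sketching it as one (here in Python, with the states from the answer above) is a cheap way to agree on the transitions before configuring them in Redmine or Jira:

    # Allowed transitions of the workflow described above (New -> ... -> Closed).
    TRANSITIONS = {
        "New":           {"Investigating"},
        "Investigating": {"Replied", "Coding"},   # feature request vs. real bug
        "Replied":       {"Investigating"},       # revisited later
        "Coding":        {"Review"},
        "Review":        {"Rework", "Delivered"},
        "Rework":        {"Review"},
        "Delivered":     {"Closed"},              # QA verifies, then closes
        "Closed":        set(),
    }

    def move(ticket, new_state):
        # Refuse any state change the workflow does not allow.
        if new_state not in TRANSITIONS[ticket["state"]]:
            raise ValueError(f"illegal transition {ticket['state']} -> {new_state}")
        ticket["state"] = new_state

    ticket = {"id": 42, "state": "New"}
    for state in ("Investigating", "Coding", "Review", "Delivered", "Closed"):
        move(ticket, state)
    print(ticket)   # {'id': 42, 'state': 'Closed'}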
I've been using Trac for a small personal project, and at work we used Bugzilla for this.
The workflow you described also sounds like how Red Hat utilizes Bugzilla.
As others have said, Jira is very good. I especially like its ability to create a custom issue workflow.
In an ideal world, our development processes would be perfect, resulting in regular releases that were so thoroughly tested that it would never be necessary to "hotfix" a running application.
But, unfortunately, we live in the real world, and sometimes bugs slip past us and don't rear their ugly heads until we're already busy coding away at the next release. And the bug needs to be fixed Now. Not as a part of the next scheduled release. Not tonight when the traffic dies down. Now.
How do you deal with this need? It really can run counter to good design practices, like refactoring your code into nice, discrete class libraries.
Hand-editing markup and stored procedures on a production server can be a recipe for disaster, but it can also avert disaster.
What are some good strategies for application design and deployment techniques to find a balance between maintenance needs and good coding practices?
Even though we test a lot before we release, here is what we do:
Our SVN looks like this:
/repo/trunk/
/repo/tags/1.1
/repo/tags/1.2
/repo/tags/1.3
Now whenever we release, we create a tag, which we eventually check out in production. Before production we go through staging, which is pretty much the same as production, just with fewer servers.
One reason to create a tag is that some settings of our app in production differ slightly from trunk anyway (e.g. errors are logged rather than emailed), so it makes sense to create the tag, commit those changes, and then check it out on the production cluster.
Now whenever we need to hotfix an issue, we fix it in tags/x first, then svn update from the tag on production, and we're good. Sometimes we go through staging; for some issues (e.g. minor/trivial fixes like spelling) we bypass staging.
The only thing to remember is to apply all patches from tags/x back to trunk.
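Since that back-porting step is easy to forget, it is worth scripting. A rough sketch in Python, not the poster's actual tooling: it cherry-picks one hotfix revision from a release tag into a trunk working copy with svn merge (the repository URL and paths are placeholders):

    import subprocess
    import sys

    REPO = "https://svn.example.com/repo"   # placeholder repository URL
    TRUNK_WC = "/home/dev/trunk"            # placeholder trunk working copy

    def port_hotfix(tag, revision):
        """Cherry-pick one hotfix revision from tags/<tag> into trunk."""
        subprocess.run(["svn", "update", TRUNK_WC], check=True)
        subprocess.run(["svn", "merge", "-c", str(revision),
                        f"{REPO}/tags/{tag}", TRUNK_WC], check=True)
        subprocess.run(["svn", "commit", TRUNK_WC, "-m",
                        f"Port hotfix r{revision} from tags/{tag} to trunk"],
                       check=True)

    if __name__ == "__main__":
        port_hotfix(sys.argv[1], int(sys.argv[2]))   # e.g. port_hotfix("1.3", 4711)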
If you have more than one server, Capistrano is extremely helpful for running all those operations (the link to capify.org no longer goes where intended).
One strategy is to heavily use declarative-style external configuration files for the different components.
Examples of this:
Database access/object-relational mapping via a tool like IBatis/IBatis.NET
Logging via a tool like JLog/NLog
Dependency injection via a tool like Spring/Spring.NET
In this way, you can often keep key components separated into discrete parts, hotfix a running application without recompiling, and use source control seamlessly (particularly in comparison to stored procedures, which usually take manual effort to keep under source control).
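The same idea in miniature, sketched in Python rather than the XML those tools actually use: behavior is looked up in an external file at run time, so editing the file on the server changes the running application without a recompile or redeploy. The file name and keys are invented for illustration.

    import json

    CONFIG_PATH = "/etc/myapp/behavior.json"   # hypothetical external config

    def setting(key, default):
        # Re-read on every call so an edit to the file takes effect immediately.
        # (Real tools cache and watch the file instead of re-reading each time.)
        try:
            with open(CONFIG_PATH) as fh:
                return json.load(fh).get(key, default)
        except FileNotFoundError:
            return default

    def handle_error(exc):
        # Hotfixable switch: flip "email_errors" in the file, no redeploy needed.
        if setting("email_errors", False):
            print("would email:", exc)   # stand-in for a real mailer
        else:
            print("logged:", exc)        # stand-in for a real logger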
We divide our code into framework code and business customizations. Business-customization classes are loaded using a separate classloader, and we have a tool to submit changes to a running production instance. Whenever we need a change in any class, we change it and submit it to the running instance; the running instance discards the old classloader and uses a new classloader instance to load the classes again. This is similar to JBoss hot deployment of EJBs.
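A rough Python analogue of that classloader swap (not the poster's Java tooling, just an illustration of the idea): the framework only ever dispatches through the module object, so reloading the module replaces the business code in the running process. The business_rules module is hypothetical.

    import importlib

    # Hypothetical business-customization module living next to this script.
    import business_rules

    def handle(order):
        # Always dispatch through the module object, never cache its functions,
        # so a reload is picked up on the very next call.
        return business_rules.price(order)

    def hot_deploy():
        # Equivalent of discarding the old classloader: re-executes the module's
        # source and rebinds its names in place, without restarting the process.
        importlib.reload(business_rules)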