How can I tell a GitHub repository's quality? [closed]

Many a time I find myself pointed to code in some GitHub repository, and I find it hard to assess whether I should trust and use it.
Assuming the code answers my visible needs, what other parameters should I check in order to decide whether using it is a good idea?

You should check:
documentation - Is everything clearly documented? Would you need support from the author to use the code?
activity - Authors cannot always push updates to the library constantly, but it is important that issues and pull requests are resolved reasonably quickly. Common bugs are often fixed by others in a pull request, but if it is never merged it's rather hard to keep track of all the forks.
Also check the Pulse page in the repo. It shows the activity in issues, commits and releases (a small API sketch follows this list).
extensibility - You may want to do something different with the library, or build something on top of it. Check the API (the public interface), the configuration, and whether components can be swapped for something else (think interfaces and the composite design pattern).
tests - Unit tests are important. You should write tests for your own application, and when you use an external library, make sure it is well tested, so the component keeps working the same when you update it or use it in a different environment. If the code is not tested, you should not use it, unless you write the tests yourself.
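
To make the activity check concrete, here is a minimal Java 11+ sketch that pulls a repository's metadata from GitHub's public REST API. The endpoint and the pushed_at/open_issues_count fields are part of GitHub's documented v3 API; the repository name below is just a placeholder.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RepoActivityCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder repository; substitute the one you are evaluating.
            String repo = "octocat/Hello-World";

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://api.github.com/repos/" + repo))
                .header("Accept", "application/vnd.github.v3+json")
                .build();

            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

            // The JSON body includes "pushed_at" (when code last changed) and
            // "open_issues_count" (open issues plus open pull requests).
            // A real tool would parse it; printing is enough to eyeball activity.
            System.out.println(response.body());
        }
    }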

You can check out the Chrome extension DevGib that I wrote. It automatically rates Stack Overflow questions and GitHub repositories before you access them by showing a small colored icon next to the link. It's still a work in progress, but it does the job for me.


Source control when starting up a new project [closed]

When starting a project and using source control, I find it hard to separate the things people are working on, so they either write duplicate code or disagree about what something should be named, and so on.
This problem diminishes over time, because once the general foundation is in place it's easier to separate the tasks so they don't overlap as much.
How do you manage working with source control in the beginning phase?
EDIT:
I can see that this doesn't really have anything to do with source control, but it gets more apparent when you have source control too. So the question becomes more along the lines of: "How do you manage to separate the tasks so they don't overlap too much?" I think it's really hard, and I haven't really seen much about how to do it.
Well, as far as source control goes, somebody needs to take the lead and set up the basic structure of the project, directories, etc. and communicate it to the team. On projects I work on, this is usually an architect or senior developer, someone who knows the best practices for project organization for the team/company.
With respect to avoiding having multiple people working on the same tasks, that's a project management function; someone needs to determine what tasks need to be done, and communicate it to the team. If you are working in an agile/scrum environment, the team may divide and hand out work items amongst themselves, but in either case you need to communicate to avoid doing the same work twice.
EDIT
To address the issue of multiple people working on the same task, I tend to work on smaller teams, 2-6 people; in this environment, I have had a lot of success with a scrum-influenced approach using the Crystal Clear methodology:
Architect(s)/designer(s) come up with high level design
Architect(s)/designer(s) define iterations/deliveries, the first of which is a "project skeleton" which consists of architectural and back-end components and a thin slice of the app
Lead person breaks up features into 1-3 day tasks/units of work (estimated)
Team meets and discusses priority, timing and dependencies of tasks, and divides up the first set of tasks
The team has brief daily meetings to discuss status/priorities and dependencies, and change direction if necessary
With larger projects/teams, you will almost certainly need someone whose main job is dedicated to tracking status, dependencies and conflicts.
I don't think source control has much to do with the problem of coordinating people's efforts, except that it can catch some "conflicts" when people erroneously try to modify the same files in different ways. But that's not as good as preventing conflicts, and even preventing conflicts does not per se ensure that everybody is working on what they should ideally be working on right now in terms of priorities. Coordination is properly managed with practices (and perhaps tools, e.g. Pivotal Tracker, though using the right practices is even more important than using nice tools) that specifically focus on ensuring coordination. For example, the practices that Tracker is designed to support and enhance, such as story-based iterative planning, and other compatible ones, such as stand-ups, offer ways to meet these needs.
You should have a base version that everyone uses: check that into the repository, then make incremental changes to it. Make sure that everyone works on a different part of the code, commit every working change, and resolve conflicts as and when they occur. That is how I would do it.

What is continuous integration? [closed]

What is continuous integration and what are its benefits?
This is by far the best explanation I have read so far: Martin Fowler's article on Continuous Integration (https://martinfowler.com/articles/continuousIntegration.html).
At its simplest, it is a mechanism that rebuilds your project whenever a check-in is made into some revision control system (CVS, etc.). This can be extended, though, to include running tests, all the way through to generating a CD image, mounting it within VMs, installing the product and running full tests on it.
It has the simple advantage of highlighting when code changes break the system as early as possible. Not only does it detect breaks in the code, it highlights who caused the break. This psychological effect is very effective in encouraging good testing prior to check in!
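
As a toy illustration of that rebuild-on-check-in mechanism (a sketch only, not any particular CI server: it assumes the svn, svnversion and mvn commands are on the PATH and that the working copy is a Maven project):

    import java.io.IOException;

    public class TinyCiLoop {
        // Run a command in the working copy; stream its output to the console.
        static int run(String... command) throws IOException, InterruptedException {
            return new ProcessBuilder(command).inheritIO().start().waitFor();
        }

        public static void main(String[] args) throws Exception {
            String lastBuilt = "";
            while (true) {
                run("svn", "update");   // bring the working copy up to date

                // Ask for the current working-copy revision (e.g. "1234").
                Process p = new ProcessBuilder("svnversion").start();
                String revision =
                    new String(p.getInputStream().readAllBytes()).trim();

                if (!revision.equals(lastBuilt)) {
                    // A check-in happened: rebuild and run the test suite.
                    int exit = run("mvn", "test");
                    System.out.println("r" + revision
                            + (exit == 0 ? ": build is green" : ": build BROKE"));
                    lastBuilt = revision;
                }
                Thread.sleep(60_000);   // poll once a minute
            }
        }
    }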
It is the practice of ensuring that all aspects of your software development process are lined up to permit the daily creation of a working version of your product. It is best known as part of Extreme Programming.
This involves things as far afield as build automation, automated testing, daily check-ins, using a source code repository, etc. But the ultimate goal is to help the entire project run according to core Agile Principles so that you deliver early and often. This, in turn, helps you leverage feedback from your users, etc.
+1 for the link to Fowler's page.
Personally, I just found it "nice" to know whenever something didn't compile, because we had the poor practice of having a single build (yes, we developed on the production build; we were awesome). We hadn't gotten to the integrated-testing phase before I left.
After a while, it did, however, lessen the amount of massive coding changes (compared to the "check in and pray my changes don't conflict" that was rampant). Eventually, most developers started making small changes frequently just to get confirmation from the CC.Net tray icon.
Overall, I found it very comforting to know that we could send out a build immediately if we had to. Had we had just a few smoke tests integrated, I think the stress-level would have been substantially lower.
Just to refresh: at this point there is a huge difference between Continuous Integration (CI) and Continuous Delivery (CD). While most of the posts above describe CD, I'll try to show how CI extends the CD definition. Having all the tools needed to build a package and deploy a new version of the app automatically is a crucial part of CD. Add to that test automation (based on three levels of verification: general health checks, detailed statistics and historical entries) and proper governance, and you are creating a really good piece of CI. Only because of such an extended definition is building extraordinary cloud tools possible. Think about Mule ESB or esbeetle.com: for both of them CI is something natural, although only the second supports both ESB and ETL components.
I hope that was helpful.

What are the best practices with source code control? [closed]

After a recent "accident" at work, whereby some bugs that previously had been fixed were reintroduced, I was asked to document a set of guidelines for the use of source code control (CVS in this case).
What do you consider to be best practices for using source code control? In particular, how do you manage branching and labelling and how do you ensure that the current production release can be patched while continuing to develop new features? For context, the team size is up to 10 developers in two locations.
8 Commandments of Source Control pretty much sums it up.
On the topic of branching and labeling what we do at work is:
Labeling
Whenever a release is made to an environment, it is labeled with, at the very least, the date of the release. All related bugs are then set so that their "resolved in release" field is this label.
Branching
Branches are only created on an as-needed basis. A branch is made off a label so that a change can be made against a previously released version (i.e., fixing a bug on production without including all the other bug fixes).
Eric Sink already put one together in his Source Control Howto.
I'm not sure that I would put "CVS" and "best practice" in the same sentence. There are many other, better, more modern choices for source control that are well-supported by the community.
The mainline model. The tofu scale.
Read this: http://oreilly.com/catalog/practicalperforce/chapter/ch07.pdf
Update as often as possible (depending on how fast the project is growing); this way, already-fixed files can't be reintroduced.
Instruct the developers to perform an update before committing.
There are different kinds of workflows; you will have to consider which best meets your team's needs.
Also I always recommend the SVN Book.
The book "Pragmatic Version Control (using Subversion)" is a really nice place to start. Even though its examples are specific to Subversion, it's a good intro to all the important concepts and practices.
We try very, very hard not to branch. If we do create a branch it's a team decision and is carefully scrutinized. So I guess the practice would be "don't branch lightly".

Is our bugtracking workflow so unique? [closed]

We're currently using Mantis as our bugtracker, and we're pretty much sick and tired with it. The developers want more SVN integration, the customers want an easier system to work with.
As such, we're looking for a new bugtracker and at the moment we're looking at Redmine. However, in its default setup it doesn't match our desired workflow, or at least not much better than Mantis does.
We have the following workflow, and would like a bugtracker to match it (sketched as a small state machine after the list):
A bug is reported (often by the customer), and is considered 'new'.
These bugs are regularly reviewed and either acknowledged (it's a bug) or marked as a feature (customer often needs to pay) and delayed until the financial part has been worked out.
The bugs are then assigned to and handled by a developer.
When finished, the fix is marked as 'ready-for-review' (by another developer).
When reviewed, it is marked as 'reviewed'.
When marked as 'reviewed', the original developer places the new code on the staging environment and marks the bug as 'ready-to-be-tested' (by the bug reporter).
The bug reporter marks the bug as 'resolved'.
When the fix is placed on production, the bug reporter closes the bug.
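
Written out as a tiny state machine, the workflow above looks roughly like this (a sketch only: the states come straight from the list, and the nextActor strings are one reading of who must act next, which is exactly the "whose turn is it" field we want the tracker to surface separately from the assignee):

    /** The bug states from the workflow above, each knowing whose turn it is. */
    enum BugState {
        NEW("triager"),                      // just reported, awaiting review
        ACKNOWLEDGED("developer"),           // confirmed as a bug and assigned
        FEATURE_PENDING("customer"),         // reclassified; waits on financials
        READY_FOR_REVIEW("second developer"),
        REVIEWED("original developer"),      // who then deploys to staging
        READY_TO_BE_TESTED("bug reporter"),
        RESOLVED("bug reporter"),            // closed once it is on production
        CLOSED("nobody");

        /** The role required to take the next step, distinct from the assignee. */
        final String nextActor;

        BugState(String nextActor) {
            this.nextActor = nextActor;
        }
    }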
Of course, feedback is often required, especially during the early stages. We're looking for a way to distinguish between who is required to take the next step and who the bug is assigned to (the developer). We also want the customer to do this through a simple GUI: asking them to change the assignee from their own account to the developer, or, even more difficult, to a third party (think: design agency), is just too much to ask of the regular GUIs.
The GUI should show them what to do and which options there are, not make them search for them.
Does anyone have any experience with a bugtracker that works this way? Is our workflow really wack? How do you make sure everyone involved understands where the bug stands, and who is required to take which step?
Last year we had the same problem, and we figured out that the best solution for us was Jira.
With respect, our workflow is more robust and complicated than yours.
We have pretty much the same kind of workflow which we are managing using Redmine with email integration. The customer logs bugs into Redmine directly. Notification comes to the project manager who decides which developer can work on the bug.
The developer opens the bug and puts it into the Investigating state.
If it's a feature, he replies to it stating the reasons and puts it into the Replied state, which is then revisited later.
If it's a bug, then the developer starts development. Before this, he puts the bug into the Coding state.
Once the coding is over, he changes the state of the bug to Review and the peer reviews happen.
If there is any rework, then the developer changes the state to Rework.
Once everything is ok, the developer changes the state to Delivered.
The QA verifies the bug and finally closes it by changing the state to Closed.
We've defined all of this workflow in Redmine and have been using this pretty effectively without any hassles. Email integration makes everything easy for the project manager to track whenever any bug changes its state.
You can also create and save custom reports, which is a cool feature as well.
I've been using Trac for a small personal project, and at work we used Bugzilla for this.
The workflow you described also sounds like how Red Hat utilizes Bugzilla.
As others have said, Jira is very good. I especially like its ability to create custom issue workflows.

What are some good strategies to allow deployed applications to be hotfixable? [closed]

In an ideal world, our development processes would be perfect, resulting in regular releases that were so thoroughly tested that it would never be necessary to "hotfix" a running application.
But, unfortunately, we live in the real world, and sometimes bugs slip past us and don't rear their ugly heads until we're already busy coding away at the next release. And the bug needs to be fixed Now. Not as a part of the next scheduled release. Not tonight when the traffic dies down. Now.
How do you deal with this need? It really can run counter to good design practices, like refactoring your code into nice, discrete class libraries.
Hand-editing markup and stored procedures on a production server can be a recipe for disaster, but it can also avert disaster.
What are some good strategies for application design and deployment techniques to find a balance between maintenance needs and good coding practices?
Even though we test a lot before we release, what we do is this:
Our SVN looks like this:
/repo/trunk/
/repo/tags/1.1
/repo/tags/1.2
/repo/tags/1.3
Now whenever we release, we create a tag, which we eventually check out in production. Before we go to production, we go to staging, which is pretty much the same as production, just with fewer servers.
One reason to create a tag is that some settings of our app in production differ slightly from trunk anyway (e.g. errors are logged rather than emailed), so it makes sense to create the tag, commit those changes, and then check it out on the production cluster.
Now whenever we need to hotfix an issue, we fix it in tags/x first, and then svn update from the tag and we're good. Sometimes we go through staging; with some issues (minor/trivial fixes like spelling) we bypass staging.
The only thing to remember is to apply all patches from tags/x to trunk.
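Since that merge-back step is the easiest one to forget, it is worth scripting. Here is a rough Java sketch that shells out to the svn client (the repository URL, revision number and working-copy path are placeholders; a plain shell script would do just as well):

    import java.io.IOException;

    /** Sketch of the hotfix routine: fix in the tag, then merge back to trunk. */
    public class HotfixMergeBack {
        static void svn(String... args) throws IOException, InterruptedException {
            String[] cmd = new String[args.length + 1];
            cmd[0] = "svn";
            System.arraycopy(args, 0, cmd, 1, args.length);
            // inheritIO() streams svn's output straight to our console.
            if (new ProcessBuilder(cmd).inheritIO().start().waitFor() != 0) {
                throw new IllegalStateException("svn " + args[0] + " failed");
            }
        }

        public static void main(String[] args) throws Exception {
            // Placeholders; adjust for the real repository and revision.
            String repo  = "https://svn.example.com/repo";
            String fixed = "1234";   // revision of the hotfix commit on the tag

            // Merge the single hotfix revision from the tag into a trunk
            // working copy, then commit so the fix survives the next release.
            svn("merge", "-c", fixed, repo + "/tags/1.3", "trunk-wc");
            svn("commit", "trunk-wc", "-m",
                "Merge hotfix r" + fixed + " from tags/1.3");
        }
    }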
If you have more than one server, Capistrano is extremely helpful for running all those operations (the old capify.org link no longer goes to the intended page).
One strategy is to heavily use declarative-style external configuration files for the different components.
Examples of this:
Database access/object-relational mapping via a tool like IBatis/IBatis.NET
Logging via a tool like JLog/NLog
Dependency injection via a tool like Spring/Spring.NET
In this way, you can often keep key components separated into discrete parts, hotfix a running application without recompile, and seamlessly use source control (particularly in comparison to stored procedures, which usually require manual effort to source control).
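As a minimal hand-rolled sketch of the idea (not Spring's or iBatis's actual API; the Notifier interface and app.properties file are invented for the example), swapping an implementation can be as simple as editing a config file the application reads at startup:

    import java.io.FileReader;
    import java.util.Properties;

    interface Notifier {
        void send(String message);
    }

    class EmailNotifier implements Notifier {
        public void send(String message) { System.out.println("email: " + message); }
    }

    class LogOnlyNotifier implements Notifier {
        public void send(String message) { System.out.println("log: " + message); }
    }

    public class DeclarativeWiring {
        public static void main(String[] args) throws Exception {
            // app.properties lives outside the compiled artifact, e.g.:
            //   notifier.class=LogOnlyNotifier
            Properties config = new Properties();
            try (FileReader reader = new FileReader("app.properties")) {
                config.load(reader);
            }

            // Instantiate whichever implementation the config names; editing
            // the file changes behaviour without touching compiled code.
            String className = config.getProperty("notifier.class", "EmailNotifier");
            Notifier notifier = (Notifier) Class.forName(className)
                .getDeclaredConstructor().newInstance();

            notifier.send("deployment finished");
        }
    }

A hotfix then becomes "edit app.properties and bounce the app", with a one-line diff that is trivial to keep under source control.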
We divide our code into framework code and business customizations. Business customization classes are loaded using a separate classloader, and we have a tool to submit changes to a running production instance. Whenever we need a change in any class, we change it and submit it to the running instance. The running instance discards the old classloader and uses a new classloader instance to load the classes again. This is similar to JBoss hot deployment of EJBs.
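
A stripped-down sketch of that classloader trick (not JBoss's actual mechanism; the customizations/ directory and com.example.CustomRule class are made up for the example):

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Paths;

    public class HotReloader {
        // The stable framework side: business code is only ever seen through
        // an interface loaded once by the parent classloader.
        public interface BusinessRule {
            String apply(String input);
        }

        public static void main(String[] args) throws Exception {
            URL classesDir = Paths.get("customizations/").toUri().toURL();

            // Each pass gets a fresh classloader; once no references to the
            // old loader remain, its stale class definitions can be collected.
            while (true) {
                try (URLClassLoader loader = new URLClassLoader(
                        new URL[]{classesDir},
                        HotReloader.class.getClassLoader())) {
                    // Hypothetical customization compiled into customizations/
                    // and implementing HotReloader.BusinessRule.
                    BusinessRule rule = (BusinessRule) loader
                        .loadClass("com.example.CustomRule")
                        .getDeclaredConstructor().newInstance();
                    System.out.println(rule.apply("order-42"));
                }
                Thread.sleep(10_000);  // re-check for newly submitted classes
            }
        }
    }

The key point, as in the answer above, is that nothing outside the loop keeps a reference to the old loader or its classes, so each reload genuinely replaces the business code.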