While working on projects, I often face the dilemma of working on the UI first or on the logic first. Doing the UI first gives a nice overview of how the end product is going to look, while doing the logic first uncovers any possible roadblocks in the technology.
However, it is not always that crystal clear. Sometimes the UI needs data to be populated to really show what it means, and simulating that data can be more difficult than implementing the logic. What is your preferred approach to development, and why? Which is more efficient and effective?
(I am seeing this issue more and more with iPhone projects.)
Neither.
You will have to do some thinking at the beginning of the project anyway, deciding the general approach you will take and what operations you will support.
Do that reasonably well and you will have defined the interface between the view and the underlying logic. Look at the Model-View-Controller approach for some inspiration.
What you want to have early on is an idea of the basic operations your logic code needs to perform in order to achieve a purpose. Usually it will be a simple function call, but sometimes it may involve more than that. Have that clear first.
Next, a complex system that works is based on a simple system that works.
Which means you will need to have a basic UI you'll use to test a basic logic implementation first.
A simple form with a button which presents a message is basic enough. Then it can grow: you implement a piece of functionality, then you add a simple UI you can test it with.
It is easier to do both piece by piece as the logic and UI for a small piece of the logic are conceptually similar and it will be easy to keep track of both while you implement and test.
The most important part is to keep the UI and logic decoupled, making them talk through a common interface. This will allow you to make quick edits when needed and improve the looks of the GUI towards the end.
You will be better able to just scrap the UI if you don't like it. All you'll need to do is use the same interface, one you know well because you wrote it and have already implemented it.
If at some point you realize you've made a big mistake you can still salvage parts of the code, again because the UI and logic are decoupled and, hopefully, the logic part is also modular enough.
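As a minimal sketch of that decoupling (the names here are hypothetical, and Java is used purely for illustration), the UI only ever talks to a small interface, so either side can be reworked or scrapped without touching the other:

```java
// The common interface the UI talks to; the logic implements it.
interface TaskService {
    String addTask(String description);   // returns a confirmation message to display
}

// Logic side: can be written and unit-tested with no UI at all.
class SimpleTaskService implements TaskService {
    private final java.util.List<String> tasks = new java.util.ArrayList<>();

    @Override
    public String addTask(String description) {
        tasks.add(description);
        return "Added task #" + tasks.size() + ": " + description;
    }
}

// UI side: a throwaway console "form with a button"; a real GUI would call
// the same interface, so replacing this later costs almost nothing.
public class ConsoleUi {
    public static void main(String[] args) {
        TaskService service = new SimpleTaskService();
        System.out.println(service.addTask("Buy milk"));
    }
}
```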
In short: think first, do both the UI and the logic in small increments and keep things modular.
Iterate between both (logic and UI), back and forth. The UI may change as you understand how and what you can do with the logic and whatever constraints it may present. The logic may change as you change the features, behavior, or performance requirements of an easy-to-use, responsive, and decent-looking UI.
I usually do the minimum possible of each until I have some barely working mockups. Then I use each mockup to test where my assumptions about the proper UI and/or logic might be right or wrong. Pick the most flawed, and start iterating.
Apple suggests mocking up the UI on paper first. (Have an eraser handy...)
If possible, in parallel.
But personally, I advocate logic first.
I start with the fundamentals first and that means getting the logic coded and working first. There are two reasons for this:
If I can't get the logic working correctly, having a pretty UI is useless and a waste of my time.
You will most likely change the UI as you work on the logic, making the UI process longer and more expensive.
I usually get my UI in order first. The reason? As I prototype different designs, sometimes my idea for the app changes. If it does, then it's of no consequence - there is no code to rewrite.
However, sometimes it is helpful to get the fundamentals first in order to determine if the app is going to work or not. If it's not going to function, then why waste time making interfaces?
I like to start by laying out the different parts of my project in something like Visio.
I make boxes for the different views I expect to have, and I fill them with the information I expect them to contain.
I make another set of boxes for my expected model objects (logic). I fill them with the information I expect they will work with, and I draw lines between models and views where I think it will be necessary.
I do the same thing for object graphs (if I plan on using CoreData), and for database tables if I am going to have an external database.
Laying everything out visually helps me decide if I am missing any important features or interactions between project components. It also gives me something to quickly refer to if I later lose track of what I was doing.
From that point, I tend to work on a model until it has enough done to fill out part of a view, then I work on a view until it can interact with the model.
I also try to identify views or models that could be reused for multiple purposes so that I can reduce the overall amount of work I have to do.
Take the agile approach and work on small amounts of both in iterations, slowly building each functional piece of the program so that you never build any monolithic piece at once.
I'm working on an app for iPhone, and the data model looks a little crazy - 16 entity types. Could you offer any advice for masking complexity like this from the user? I know all of these need to be there, but I'm trying to make it look simple, because otherwise people will not understand.
The Tricks I have figured out so far:
My app is a bit secret in nature (in development), but take for example an app that deals with nature trips: hiking, mountain biking, skiing, etc. Say I have an object for a certain journey, e.g. by bike, through the Rocky Mountains (Journey: mode of transport, location). Then I have a different object type to store a specific instance of that journey, e.g. "this January, do the Rockies bike trip" (Trip: date, Journey).
Trick 1: I found that the user doesn't really grasp the difference between the two objects I just mentioned (trip and journey), and generally wouldn't care if I called them both "Trips" on the UI. (Try explaining the two distinct objects to a non-programmer, 25 minutes you'll never get back).
Trick 2: certain things, like for example pieces of camping equipment, may be objects to me, (equipment name, weight) but to the user they are just words, so I treat them as such and when they type it in my app says "You've never mentioned 'tent' before, how much does it weigh?" Then they tell me, and I have created an object without telling them it exists.
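A rough sketch of how Trick 2 might look in code (all names here are hypothetical; Java is used purely for illustration): the user only ever types a word, and the equipment object is created quietly the first time that word appears.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.DoubleSupplier;

// Hypothetical model object the user never hears about.
class Equipment {
    final String name;
    final double weightKg;
    Equipment(String name, double weightKg) { this.name = name; this.weightKg = weightKg; }
}

class EquipmentStore {
    private final Map<String, Equipment> byName = new HashMap<>();

    // Called whenever the user types an equipment word in the UI.
    // askUserForWeight stands in for the "How much does it weigh?" prompt.
    Equipment lookupOrCreate(String word, DoubleSupplier askUserForWeight) {
        return byName.computeIfAbsent(word,
                w -> new Equipment(w, askUserForWeight.getAsDouble()));
    }
}
```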
These kinds of tricks are what I'm looking for: my app needs to reduce these 16-ish objects to maybe 3-4 that the user is aware of, with the rest hiding in the mists, so anything I can get would help.
Thanks.
P.S. Before you say it, I know: keep it simple, shouldn't have so many object types, etc. I'm just looking to find a workaround for this rule (guideline) with coffee and genius.
Stop trying to look at it from a data point of view. Rather, think about what the user is trying to accomplish with your app, from the most basic point of view.
Your nature analogy is too vague for me to help any more, maybe expound on that?
Can someone please explain to me what a software framework is? Why do we need a framework? What does a framework do to make programming easier?
I'm very late to answer this, but I would like to share one example, which I only thought of today. If I told you to cut a piece of paper with dimensions 5m by 5m, then surely you would do that. But suppose I asked you to cut 1000 pieces of paper of the same dimensions. In this case, you wouldn't do the measuring 1000 times; obviously, you would make a frame of 5m by 5m, and then with its help you would be able to cut 1000 pieces of paper in less time. So what you did was make a framework which would do a specific type of task. Instead of performing the same type of task again and again for the same type of applications, you create a framework with all those facilities together in one nice package, hence providing the abstraction for your application and, more importantly, for many applications.
Technically, you don't need a framework. If you're making a really really simple site (think of the web back in 1992), you can just do it all with hard-coded HTML and some CSS.
And if you want to make a modern webapp, you don't actually need to use a framework for that, either.
You can instead choose to write all of the logic you need yourself, every time.
You can write your own data-persistence/storage layer, or - if you're too busy - just write custom SQL for every single database access.
You can write your own authentication and session handling layers.
And your own template rendering logic.
And your own exception-handling logic.
And your own security functions.
And your own unit test framework to make sure it all works fine.
And your own... [goes on for quite a long time]
Then again, if you do use a framework, you'll be able to benefit from the good, usually peer-reviewed and very well tested work of dozens if not hundreds of other developers, who may well be better than you. You'll get to build what you want rapidly, without having to spend time building or worrying too much about the infrastructure items listed above.
You can get more done in less time, and know that the framework code you're using or extending is very likely to be done better than you doing it all yourself.
And the cost of this? Investing some time learning the framework. But - as virtually every web dev out there will attest - it's definitely worth the time spent learning to get massive (really, massive) benefits from using whatever framework you choose.
The summary at Wikipedia (Software Framework) (first google hit btw) explains it quite well:
A software framework, in computer programming, is an abstraction in which common code providing generic functionality can be selectively overridden or specialized by user code providing specific functionality. Frameworks are a special case of software libraries in that they are reusable abstractions of code wrapped in a well-defined Application programming interface (API), yet they contain some key distinguishing features that separate them from normal libraries.
Software frameworks have these distinguishing features that separate them from libraries or normal user applications:
inversion of control - In a framework, unlike in libraries or normal user applications, the overall program's flow of control is not dictated by the caller, but by the framework.[1]
default behavior - A framework has a default behavior. This default behavior must actually be some useful behavior and not a series of no-ops.
extensibility - A framework can be extended by the user usually by selective overriding or specialized by user code providing specific functionality.
non-modifiable framework code - The framework code, in general, is not allowed to be modified. Users can extend the framework, but not modify its code.
You may "need" it because it may provide you with a great shortcut when developing applications, since it contains lots of already written and tested functionality. The reason is quite similar to the reason we use software libraries.
A lot of good answers already, but let me see if I can give you another viewpoint.
Simplifying things by quite a bit, you can view a framework as an application that is complete except for the actual functionality. You plug in the functionality and PRESTO! you have an application.
Consider, say, a GUI framework. The framework contains everything you need to make an application. Indeed you can often trivially make a minimal application with very few lines of source that does absolutely nothing -- but it does give you window management, sub-window management, menus, button bars, etc. That's the framework side of things. By adding your application functionality and "plugging it in" to the right places in the framework you turn this empty app that does nothing more than window management, etc. into a real, full-blown application.
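As a concrete example of that "empty app" (assuming Java Swing as the GUI framework here; any GUI framework would look similar), a minimal application is just a few lines that hand a window to the framework, with all the real functionality still to be plugged in:

```java
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class EmptyApp {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            // The framework supplies window management, the event loop, repainting...
            JFrame frame = new JFrame("Does nothing yet");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setSize(400, 300);
            frame.setVisible(true);
            // Application functionality gets "plugged in" here: menus, panels, listeners...
        });
    }
}
```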
There are similar types of frameworks for web apps, for server-side apps, etc. In each case the framework provides the bulk of the tedious, repetitive code (hopefully) while you provide the actual problem domain functionality. (This is the ideal. In reality, of course, the success of the framework is highly variable.)
I stress again that this is the simplified view of what a framework is. I'm not using scary terms like "Inversion of Control" and the like although most frameworks have such scary concepts built-in. Since you're a beginner, I thought I'd spare you the jargon and go with an easy simile.
I'm not sure there's a clear-cut definition of "framework". Sometimes a large set of libraries is called a framework, but I think the typical use of the word is closer to the definition aioobe brought.
This very nice article sums up the difference between just a set of libraries and a framework:
A framework can be defined as a set of libraries that say “Don’t call us, we’ll call you.”
How does a framework help you? Because instead of writing something from scratch, you basically just extend a given, working application. You get a lot of productivity this way - sometimes the resulting application can be far more elaborate than you could have done on your own in the same time frame - but you usually trade in a lot of flexibility.
A simple explanation is: a framework is a scaffold that you can build applications around.
A framework generally provides some base functionality which you can use and extend to make more complex applications; there are frameworks for all sorts of things. Microsoft's MVC framework is a good example of this. It provides everything you need to get off the ground building a website using the MVC pattern: it handles web requests, routing, and the like. All you have to do is implement "Controllers" and provide "Views", which are two constructs defined by the MVC framework. The MVC framework then handles calling your controllers and rendering your views.
Perhaps not the best wording but I hope it helps
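The same shape appears in other MVC frameworks. As a sketch (using Spring MVC in Java here rather than Microsoft's stack, since the pattern is identical), you only write the controller; the framework decides when to call it and how to render the view it names:

```java
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class GreetingController {
    // The framework routes matching web requests here; you never call this yourself.
    @GetMapping("/greeting")
    public String greeting(Model model) {
        model.addAttribute("message", "Hello from the controller");
        return "greeting";   // the name of the view the framework will render
    }
}
```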
At the lowest level, a framework is an environment where you are given a set of tools to work with.
These tools come in the form of libraries, configuration files, etc.
This so-called "environment" provides you with a basic setup (error reporting, log files, language settings, etc.) which can be modified, extended, and built upon.
People do not actually need frameworks; for some it's just a matter of wanting to save time, and for others a matter of personal preference.
People will justify frameworks by saying you don't have to code from scratch, but that is confusing libraries with frameworks.
I'm not being biased here; I am actually using a framework right now.
In general, a framework is a real or conceptual structure intended to serve as a support or guide for building something that expands the structure into something useful.
A framework provides functionality and solutions for a particular problem area.
Definition from the wiki:
A software framework, in computer programming, is an abstraction in which common code providing generic functionality can be selectively overridden or specialized by user code providing specific functionality. Frameworks are a special case of software libraries in that they are reusable abstractions of code wrapped in a well-defined Application programming interface (API), yet they contain some key distinguishing features that separate them from normal libraries.
A framework helps us use what has already been created. As a metaphor, think of the earth's raw materials as the programming language, and something like "a camera" as an existing program. Now you decide to create a notebook computer. You don't need to recreate the camera every time; you just use the "earth framework" (for example, a technology store), take the camera, and integrate it into your notebook.
A framework has some functions that you may need. Maybe you need some sort of array with a built-in sorting mechanism, or maybe you need a window where you can place some controls; all of that you can find in a framework. It's a kind of WORK that spans a FRAME around your own work.
EDIT:
OK, I'm starting to get what you were trying to tell me ;) perhaps you hadn't noticed the point between the lines, "WORK that spans a FRAME around ...".
Before this sinks any deeper, let me try to put a floor under it:
a good explanation of the difference between a library and a framework can be found here:
http://ifacethoughts.net/2007/06/04/difference-between-a-library-and-a-framework/
Beyond definitions, which are sometimes understandable only if you already understand, an example helped me.
I think I got a glimmer of understanding when looking at sorting a list in .Net; an example of a framework providing functionality that's tailored by user code providing specific functionality. Take List.Sort(IComparer). The sort algorithm, which resides in the .Net framework's Sort method, needs to do a series of compares: does object A come before or after object B? But Sort itself has no clue how to do the compare; only the type being sorted knows that. You couldn't write a comparison sort algorithm that can be reused by many users and anticipate all the various types you'd be called upon to sort. You've got to leave that bit of work up to the user. So here Sort, aka the framework, calls back to a method in the user code, the type being sorted, so it can do the compare. (Or a delegate can be used; same point.)
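A rough sketch of the same mechanism, using Java's Collections.sort and a Comparator as an analogue of List.Sort(IComparer) (the principle is identical): the framework owns the sort algorithm and calls back into user code for the one thing it cannot know, how two items compare.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

class Person {
    final String name;
    final int age;
    Person(String name, int age) { this.name = name; this.age = age; }
}

public class SortCallback {
    public static void main(String[] args) {
        List<Person> people = new ArrayList<>(List.of(
                new Person("Ada", 36), new Person("Grace", 29)));

        // The framework's sort algorithm calls this comparator back for every compare;
        // only our code knows how two Person objects should be ordered.
        Collections.sort(people, Comparator.comparingInt(p -> p.age));

        people.forEach(p -> System.out.println(p.name + " " + p.age));
    }
}
```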
Did I get this right?
I've noticed that pretty much every company I've worked at has a common library that is generally shared across a number of projects. More often than not this has been a single companyx-commons project that ends up as a dumping ground for common code, including:
Command Line Parsers
File Utilities
Framework Helpers
etc...
Some of these are well thought out, and some duplicate functionality found in Apache commons-lang, commons-io, etc.
What are the things you have in your common library and more importantly how do you structure the common libraries to make them easy to improve and incorporate across other projects?
In my experience, the single biggest factor in the success of a common library is user buy-in (users in this case being other developers), and the culture of your workplace/team(s) will be a big factor.
Separate libraries (projects/assemblies if you're in .Net) for different application tiers is essential (e.g: there's obviously no point putting UI and data access code together).
Keep things as simple as possible; what you don't put in a common library is often at least as important as what you do. Users of the library won't want to have to think, so usage needs to be super easy.
The golden rule we stuck to was keeping individual functions focused on a single task - do one thing and do it well (or very, very well); don't try to provide something that takes every possibility into account - the more reusable you think you're making it, the less likely it is to be used. Code Complete (the book) has some excellent content on common libraries.
A good approach to setting up or improving a library is to do regular code reviews and retrospectives; find good candidates that you've already come up with and consider refactoring them into a library for future projects. A good candidate will be something that more than one developer has had to do on more than one project, for example.
Set up some sort of simple and clear governance of the libraries - someone who can 'own' a specific library and ensure its overall quality (such as a senior dev or team lead).
I have so far written most of the common libraries we use at our office.
We have certain button classes that are just slightly more useful to us than the standard buttons
A database management class that does some internal caching and can connect to ODBC, OLEDB, SQL, and Access databases without even the flip of a parameter
Some grid and list controls that are multi threaded so we can add large amounts of data to them without the program slowing and without having to write all the multithreading code every time there is a performance issue with a list box/combo box.
These classes make it easier for all of us to work on each other's code and to know exactly how they work, since we all use the same interfaces throughout our products.
As far as organization goes, all of the DLL's are stored along with their source code on a shared development drive in the office that we all have access to. (We're a pretty small shop)
We split our libraries by function.
Common.Ui.dll has base classes for UI elements.
Common.Data.dll is sort of a wrapper around the Enterprise Library data access classes.
Common.Business is a dumping ground for other common classes that don't fit into one of those.
We create other specialized dlls as needs arise.
I'm occasionally unfortunate enough to have to make alterations to very old, poorly documented and poorly designed code.
It often takes a long time to make a simple change because there is not much structure to the existing code and I really have to read a lot of code before I have a feel for where things would be.
What I think would help a lot in cases like this is a tool that would allow one to visualise an overview of the code, and then maybe even drill down for more detail. I suspect such a tool would be very hard to get right, given that it is trying to find structure where there is little or none.
I guess this is not really a question, but rather a musing. I should make it into a question: what do others do to get their heads around other people's code, the good and the bad?
Hmm, this is a hard one, so much to say so little time ...
1) If you can run the code it makes life soooo much easier; breakpoints (especially conditional ones) are your friend.
2) A purist's approach would be to write a few unit tests for known functionality, then refactor to improve the code and your understanding, then re-test. If things break, create more unit tests - repeat until bored/old/moved to a new project.
3) ReSharper is good at showing where things are being used, what's calling a method for instance, it's static but a good start, and it helps with refactoring.
4) Many .net events are coded as public, and events can be a pain to debug at the best of times. Recode them to be private and use a property with add/remove. You can then use a breakpoint to see what is listening on an event.
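The add/remove trick in point 4 is C#-specific, but the idea carries over; a rough Java analogue (hypothetical names, just a sketch) is to funnel every listener registration through one method you can set a breakpoint on:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical event payload.
class OrderSaved {
    final String orderId;
    OrderSaved(String orderId) { this.orderId = orderId; }
}

class OrderEvents {
    private final List<Consumer<OrderSaved>> listeners = new ArrayList<>();

    // Every subscriber passes through here: a breakpoint on this line shows
    // exactly who is listening and when they sign up.
    public void addListener(Consumer<OrderSaved> listener) {
        listeners.add(listener);
    }

    public void fire(OrderSaved event) {
        listeners.forEach(l -> l.accept(event));
    }
}
```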
BTW - I'm playing in the .Net space, and would love a tool to help do this kind of stuff; like Joel, does anyone out there know of a good dynamic code reviewing tool?
I have been asked to take ownership of some NASTY code in the past - both work and "play".
Most of the amateurs I took over code for had just sort of evolved the code to do what they needed over several iterations. It was always a giant incestuous mess of library A calling B, calling back into A, calling C, calling B, etc. A lot of the time they'd use threads and not a critical section was to be seen.
I found the best/only way to get a handle on the code was start at the OS entry point [main()] and build my own call stack diagram showing the call tree. You don't really need to build a full tree at the outset. Just trace through the section(s) you're working on at each stage and you'll get a good enough handle on things to be able to run with it.
To top it all off, use the biggest slice of dead tree you can find and a pen. Laying it all out in front of you so you don't have to jump back and forward on screens or pages makes life so much simpler.
EDIT: There's a lot of talk about coding standards... they will just make poor code look consistent with good code (and usually be harder to spot). Coding standards don't always make maintaining code easier.
I do this on a regular basis. And have developed some tools and tricks.
Try to get a general overview (object diagram or other).
Document your findings.
Test your assumptions (especially for vague code).
The problem with this is that at most companies you are judged by results. That's why some programmers write poor code fast and move on to a different project. So you are left with the garbage, and your boss compares your sluggish progress with that of the quick-and-dirty guy. (Luckily my current employer is different.)
I generally use UML sequence diagrams of various key ways that the component is used. I don't know of any tools that can generate them automatically, but many UML tools such as BoUML and EA Sparx can create classes/operations from source code which saves some typing.
The definitive text on this situation is Michael Feathers' Working Effectively with Legacy Code. As S. Lott says, get some unit tests in to establish the behaviour of the legacy code. Once you have those in, you can begin to refactor. There seems to be a sample chapter available on the Object Mentor website.
I strongly recommend BOUML. It's a free UML modelling tool, which:
is extremely fast (fastest UML tool ever created, check out benchmarks),
has rock solid C++ import support,
has great SVG export support, which is important, because viewing large graphs in vector format, which scales fast in e.g. Firefox, is very convenient (you can quickly switch between "birds eye" view and class detail view),
is full featured and intensively developed (look at the development history; it's hard to believe that such fast progress is possible).
So: import your code into BOUML and view it there, or export to SVG and view it in Firefox.
See Unit Testing Legacy ASP.NET Webforms Applications for advice on getting a grip on legacy apps via unit testing.
There are many similar questions and answers. Here's the search https://stackoverflow.com/search?q=unit+test+legacy
The point is that getting your head around legacy code is probably easiest if you are writing unit tests for it.
I haven't had great luck with tools that automate the review of poorly documented/executed code, because a confusing or badly designed program generally translates to a less-than-useful model. It's not exciting or immediately rewarding, but I've had the best results with picking a spot and following the program execution line by line, documenting and adding comments as I go, and refactoring where applicable.
A good IDE (Emacs or Eclipse) can help in many cases. Also, on a UNIX platform there are some tools for cross-referencing (etags, ctags) or checking (lint), or gcc with many, many warning options turned on.
First, before trying to comprehend a function/method, I would refactor it a bit to fit your coding conventions (spaces, braces, indentation) and remove most of the comments if they seem to be wrong.
Then I would refactor and comment the parts you understood, and try to find/grep those parts over the whole source tree and refactor them there also.
Over time, you get nicer code that you like to work with.
I personally do a lot of drawing of diagrams, and figuring out the bones of the structure.
The fad du jour (and possibly quite rightly) has got me writing unit tests to test my assertions, and building up a safety net for the changes I make to the system.
Once I get to a point where I'm comfortable enough knowing what the system does, I'll take a stab at fixing bugs in the sanest way possible, and hope my safety net is near completion.
That's just me, however. ;)
I have actually been using the refactoring features of ReSharper to help me get a handle on a bunch of projects that I inherited recently. So, to figure out another programmer's very poorly structured, undocumented code, I actually start by refactoring it.
Cleaning up the code, renaming methods, classes and namespaces properly, and extracting methods are all structural changes that can shed light on what a piece of code is supposed to do. It might sound counterintuitive to refactor code that you don't "know", but trust me, ReSharper really allows you to do this. Take, for example, the issue of red-herring dead code. You see a method in a class or perhaps a strangely named variable. You can start by trying to look up usages or, ugh, do a text search, but ReSharper will actually detect dead code and color it gray. As soon as you open a file you see in gray, and with scroll bar flags, what would in the past have been confusing red herrings.
There are dozens of other tricks and probably a number of other tools that can do similar things, but I am a ReSharper junkie.
Cheers.
Get to know the software intimately from a user's point of view. A lot can be learnt about the underlying structure by studying and interacting with the user interface(s).
Printouts
Whiteboards
Lots of notepaper
Lots of Starbucks
Being able to scribble all over the poor thing is the most useful method for me. Usually I turn up a lot of "huh, that's funny..." moments while trying to make basic code structure diagrams, and those turn out to be more useful than the diagrams themselves in the end. Automated tools are probably more helpful than I give them credit for, but the value of finding those funny bits exceeds the value of rapidly generated diagrams for me.
For diagrams, I look for mostly where the data is going. Where does it come in, where does it end up, and what does it go through on the way. Generally what happens to the data seems to give a good impression of the overall layout, and some bones to come back to if I'm rewriting.
When I'm working on legacy code, I don't attempt to understand the entire system. That would result in complexity overload and subsequent brain explosion.
Rather, I take one single feature of the system and try to understand completely how it works, from end to end. I will generally debug into the code, starting from the point in the UI code where I can find the specific functionality (since this is usually the only thing I'll be able to find at first). Then I will perform some action in the GUI, and drill down in the code all the way down into the database and then back up. This usually results in a complete understanding of at least one feature of the system, and sometimes gives insight into other parts of the system as well.
Once I understand what functions are being called and what stored procedures, tables, and views are involved, I then do a search through the code to find out what other parts of the application rely on these same functions/procs. This is how I find out if a change I'm going to make will break anything else in the system.
It can also sometimes be useful to attempt to make diagrams of the database and/or code structure, but sometimes it's just so bad or so insanely complex that it's better to ignore the system as a whole and just focus on the part that you need to change.
My big problem is that I (currently) have very large systems to understand in a fairly short space of time (I pity contract developers on this point) and don't have a lot of experience doing this (having previously been fortunate enough to be the one designing from the ground up.)
One method I use is to try to understand the meaning of the naming of variables, methods, classes, etc. This is useful because it (hopefully increasingly) embeds a high-level view of a train of thought from an atomic level.
I say this because typically developers will name their elements with what they believe are meaningful names, providing insight into their intended function. This is flawed, admittedly, if the developer has a defective understanding of their program or the terminology, or (often the case, imho) is trying to sound clever. How many developers have seen keywords or class names and only then looked up the term in the dictionary, for the first time?
It's all about the standards and coding rules your company uses.
If everyone codes in a different style, then it's hard to maintain another programmer's code, and so on; if you decide what standard you'll use and have some rules, everything will be fine :) Note that you don't have to make a lot of rules, because people should have the possibility to code in the style they like; otherwise you may be very surprised.