Has anyone been to the Pragmatic Studio iPhone course, and if so, is it worth it?

I am thinking of going to the Pragmatic Studio iPhone course, but am a little wary due to the price. I currently do contract work, so there's no company to pay the course costs for me (other than my own, of course).
After travel, food, hotel, and currency conversion, the course comes to over $3,500.
Is it worth it?

I'm honestly not a fan of coding bootcamps or short duration courses in terms of cost/benefit.
If money weren't an issue I would say go for it, but since you are paying your own way and brought up the money aspect, I'm going to assume that at least some consideration must be given before dropping $3.5k.
I find that I can honestly absorb much more material, and at greater depth, by simply reading a book and attempting exercises or small projects myself. Certainly there are some tips and tricks that can be imparted through actual instruction that you might not glean right away from self-learning, but those won't be the bulk of the knowledge you have to acquire.
These programs remind me of short-duration certification bootcamps: expensive and of little substance. You have to be realistic: you are there for four days. You are going to learn how to set up the development environment and follow along with some canned examples illustrating the basics of the accelerometer, photos, and other iPhone features.
Is that really worth it to you? To me, a serious professional developer would simply sit down with a few of the Cocoa/iPhone books and start banging out some chapters on their time off.
So, in my opinion, save yourself some money: spend $90 on a few books and take a few days off work.

I hemmed and hawed about the Big Nerd Ranch bootcamp and decided against it.
Classrooms are great, and you can really learn a lot, but I think I would benefit more from an advanced iPhone bootcamp now than from one for newbies to the platform back then.
Knowing how stuff works makes it easier to learn how similar stuff works. Having a class on hardcore multi-threading, UI best practices, various networking patterns, and in-depth CoreAnimation...that is a bootcamp you could sell videos of, let alone pack to the rafters right now.
What I found is that doing real projects and trying to make stuff teaches you far more than a classroom setting would, and faster. Get your hands dirty, and start making mistakes.

I personally have not been, but I hear Bill Dudney is great. It sounds expensive, but maybe you can write it off for tax purposes; that being said, anytime you get a chance to learn something, it's probably worth it. If you think you would like to make commercial iPhone apps, AND you think you can make at least $3,500 doing this, then I say go for it. If you think you're just going to be doing this as a hobby, or you think you'll make less than $3,500, then I totally agree with Simucal.
P.S. Maybe you could write a tutorial on how to make iPhone apps after this. Also the networking section looks fun!


How to motivate team to work on legacy products

We are a team working on legacy code that is quite old and written in languages from the early days of programming. The team members are trained in the latest technology and are now being put to work on the legacy code, and they are not happy. How can I motivate them to work on the legacy code as well?
Send your team to meet users and watch them use the software. They should find out the most critical problems users have with it.
Getting to know users makes the work more real: your team will know that adding new functionality or eliminating bugs will help a real person. That should motivate programmers to get a boring job done.
Cash alone cannot make developers happy. You should also provide a good environment so that they can pay attention to their work.
Another thing: no technology is bad, legacy, or too old per se. If your company needs to maintain it, then you must keep it going, but keep up all your standards for design, coding, testing, code review, interactive sessions, etc.
You can also motivate them by converting your legacy code to a new platform for better performance and maintainability. I think every company does that at least once, because they want to compete with other products on the market.
Also provide some good sessions on other technologies that are used in your company but that they don't know or use. Let them get deeply into things, and give them proper time and support for problem solving. The main goal is to deliver on time with less rework and fewer bugs.
Provide some rewards for their work and keep them happy about it.
Thanks.
I really like "Send your team to meet users and watch them use the software."
If I have to motivate my team, I will ask my developers to visit the users and find out how happy they are with the product.
I would really like to take on the challenge of making it better than what exists.
Do you have some scope for retiring the legacy code in the foreseeable future? If so, "we only need to keep this going until..." might sweeten the pill.
Are the team members experienced in the languages/environments the legacy code is written in? If not, it might be simple reluctance to do something they don't know how to operate. Scheduling in some time for them to gain at least a passing familiarity might be in order; provided it's not too much of a paradigm shift from the latest technology, it shouldn't be all that hard.
Are the team members only allowed to work on the legacy code team, or can their time be split between different projects? I don't think anyone is going to be happy about spending a 40-hour week on FORTRAN debugging. But if you only have to spend a few hours on the legacy code, knowing that you can take breaks during the day to work on something you actually enjoy, it's a little less painful.
And I'll reiterate what was said before about making sure the team members have time to learn and gain experience with the old technologies before throwing them in there. Try to make the training enjoyable too. Our legacy code training was set up as a competition to see who could come up with the fastest/shortest/most complete/etc solution to interesting problems rather than having to look exclusively at the code we were to be working on. Really, that could be applied to the team's plan even if you don't have time set aside for training. Add a little competition to the task at hand or allow a little bit of time for challenging and competitive side projects.
How are they being rewarded for working on these legacy products? Do you know what motivates them? Some people may prefer timely recognition and praise, while others may expect cash, or an understanding that this isn't necessarily what they signed up for when they initially took the job. I'd be tempted to suggest having 1:1 meetings to see what would make them happier. Is it more money? More flexibility in time off? Training in the legacy technologies? Affirmation that they are doing good work on these ancient systems? ("Initial programming days" makes me think of mainframes and other tools so old that one may wonder, "How much longer will this really run?")
Cash isn't the answer. Free food, soft drinks, whatever, that only goes so far at alleviating the drudgery of legacy code work. What about trying to change their perspective?
"Anyone can do good work with modern code that has a nice IDE with refactoring built in, a ton of resources just one Google search away, but we proud few, we band of brothers, we are good enough to do this with ancient procedural languages. We'll tame this awful mess of code and do it with one hand behind our backs and create processes and tools to make sure the next poor bastards won't have it so bad."
I would say the simplest way to get a positive response from developers toward legacy coding would be to make the old new in some way.
Have a session or two to identify what it is that the legacy code does, and then get an idea of what it would take to do it anew on a new architecture. The "new architecture" part is key, because 9/10 times, it's the architecture that is dreaded (spaghetti code, pre-standard conventions, etc...).
If you can't get your re-write estimates approved, then at least work out a plan to get your refactoring of legacy code into the daily maintenance. At the very least your developers will feel as though they are working towards something, and something new at that, instead of just monkey-wrenching the old decay that no one wants to even remember.
Just my 2¢.
You can, for example, try to do fancy things on the testing side: try out mocking frameworks and the like.
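For instance, here is a minimal sketch in Objective-C using OCMock, a popular mocking framework for that platform (the LegacyGateway and Report classes and their methods are all invented here for illustration):

    // Hypothetical test: isolate Report from the legacy system it wraps.
    #import <SenTestingKit/SenTestingKit.h>
    #import <OCMock/OCMock.h>
    #import "Report.h"          // hypothetical class under test
    #import "LegacyGateway.h"   // hypothetical wrapper around the old system

    @interface ReportTests : SenTestCase
    @end

    @implementation ReportTests

    - (void)testReportReadsTotalFromGateway
    {
        // Stand in for the legacy system so the test can run without it.
        id gateway = [OCMockObject mockForClass:[LegacyGateway class]];
        [[[gateway stub] andReturn:[NSNumber numberWithInt:42]] totalForYear:2009];

        Report *report = [[Report alloc] initWithGateway:gateway];
        STAssertEqualObjects([report total], [NSNumber numberWithInt:42],
                             @"report should surface the gateway's total");
        [report release];
    }

    @end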
Also try to emphasize that handling legacy code is good experience if you want to become a solid programmer, since every technology eventually becomes legacy.
Extra cash? :) Don't know anything else...
Even when legacy code is written in new technologies it's not always a pleasure to work on, let alone code in "initial technologies"... I guess the only motivating thing is to discover how programming was done in those days...
The time spent motivating the team, learning the legacy code, and fixing it half-heartedly could just as easily be used to rebuild the same thing on a new platform, given the resources, IDEs, expertise, frameworks, etc. available for free. The good news is that you already have the system in place; you just need to reproduce the same behavior on the new platform, unlike building something new for a product whose behavior and user experience you don't yet know.

Encouraging good development practices for non-professional programmers? [closed]

In my copious free time, I collaborate with a number of scientists (mostly biologists) who develop software, databases, and other tools related to the work they do.
Generally these projects are built on a one-off basis, used in-house, and eventually someone decides "oh, this could be useful to other people," so they release a binary or slap a PHP interface onto it and shove it onto the web. However, they typically can't be bothered to make their source code or dumps of their databases available for other developers, so in practice, these projects usually die when the project for which the code was written comes to an end or loses funding. A few months (or years) later, some other lab has a need for the same kind of tool, they have to repeat the work that the first lab did, that project eventually dies, lather, rinse, repeat.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Similarly, any advice on how to communicate the idea that version control, bug tracking, refactoring, automated tests, continuous integration and other common practices we professional developers take for granted are good ideas worth spending time on?
Unfortunately, a lot of scientists seem to hold the opinion that programming is a dull, make-work necessary evil and that their research is much more important, not realising that these days, software development is part of scientific research, and if the community as a whole were to raise the bar for development standards, everyone would benefit.
Have you ever been in a situation like this? What worked for you?
Software Carpentry sounds like a match for your request:
Overview

Many scientists and engineers spend much of their lives programming, but only a handful have ever been taught how to do this well. As a result, they spend their time wrestling with software, instead of doing research, but have no idea how reliable or efficient their programs are.

This course is an intensive introduction to basic software development practices for scientists and engineers that can reduce the time they spend programming by 20-25%. All of the material is open source: it may be used freely by anyone for educational or commercial purposes, and research groups in academia and industry are actively encouraged to adapt it to their needs.
Let me preface this by saying that I'm a bioinformatician, so I see the things you're talking about all the time. There's some truth to the fact that many of these people are biologists-turned-coders who just don't have the exposure to best practices.
That said, the core problem isn't that these people don't know about good practices, or don't care. The problem is that there is no incentive for them to spend more time learning software engineering, or to clean up their code and release it.
In an academic research setting, your reputation (and thus your future job prospects) depends almost entirely on the number and quality of publications that you've contributed to. Publications on methods or new algorithms are not given as much respect as those that report new biological findings. So after I do a quick analysis of a dataset, there's very little incentive for me to spend lots of time cleaning up my code and releasing it, when I could be moving on to the next dataset and making more biological discoveries.
I'll also note that the availability of funding for computational development is orders of magnitude less than that available for doing the biology. In a climate where only 10% of submitted grants are getting funded, scientists don't have the luxury of taking time to clean and release their code, when doing so doesn't help them keep their lab funded.
So, there's the problem in a nutshell. As a bioinformatician, I think it's perverse and often frustrating.
That said, there is hope for the future. With second- and third-generation sequencing in particular, biology is moving into the realm of high-throughput discovery, where data mining and solid computational pipelines become integral to the success of the science. As that happens, you'll see more and more funding for computational projects, and more and more real software engineering happening.
It's not exactly simple, but demonstration by example would probably drive the point home most effectively: find a task the researcher needs done, find someone who did take the time to make a tool with source available, and point out how much time the researcher could save by having that tool available. Then point out that they could give back to the community in the same fashion.
In effect, what you are asking them to do is become professional developers (with their copious free time), in addition to their chosen profession. Their reluctance is understandable.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Give up. Seriously, this is like teaching a pig to sing. (I can say this because I used to be a physicist so I know what they're like.)
The real issue is that your colleagues are rewarded for scientific output measured in publications, not software. It's hard enough in computer science to get recognized for building software; in the other sciences, it's nearly impossible.
You can't sell good development practice to your biology friends on the grounds that "it's good for you." They're going to ask "should I invest effort in learning about good software practice, or should I invest the same effort to publish another biology paper?" No contest.
Maybe framing it in terms of academic/intellectual responsibility would help, to a degree - sharing your source is, in many ways, like properly citing your sources or detailing your research methodology. There are similar arguments to be made for some of the "professional software developer" behaviors you'd like to encourage, though I think releasing the code is probably an easier sell on these grounds than other things which could require significantly more work.
Actually, asking any busy project team to include in their schedule time for making their software suitable for adoption by another team is extremely hard in my experience.
Doing extra work for the public good is a big ask.
I've seen a common pattern of "harvesting" after the project is complete, reflecting that immediate coding for reuse tends to get lost in the urgency of the day.
The only avenue I can think of is if the reuse is within an organisation with a budget for a "hunter gatherer", someone whose reason for being there is IT.
You may have more of a win with things such as unit tests, because they have immediate payback for the development.
For one thing, could we please stop teaching biologists Perl? Teaching non-professional programmers a write-only language is practically guaranteed to lead to unmaintainable, throw-away code. Python fills the same niche, is just as easy to learn (it's even used to teach kids programming!), and is much more readable.
Draw parallels with statistics. Stats is a crucial part of scientific research, and one where the only sensible advice is: either learn to do it properly, or get an expert to do it for you. Incorrectly-done stats can completely undermine a paper, just as badly-written code can completely undermine a public database or web resource.
PS: This blog is very good, but getting them to read it will be an uphill struggle: Programming for Scientists
Chris,
I agree with you to a degree, but in my experience what ends up happening is that in their eagerness to publish you end up with too many "me too" codes and methods, which don't really add to the quality of science. If there was a little more thought about open sourcing code and encouraging others to contribute (without necessarily getting publications out of it) then everyone would benefit.
Definitely agree that a separation between the scientific programmers and the software engineers is a good thing, especially for production applications. But even for scientific programming, the quality of my code would have been so much better if I had followed good practices at the time.
In my experience the best way of getting people to program cleanly is to show a good example when you're working with them.
eg: "I never spend hopeless days debugging my code because the first things I code are automated unit tests that will pinpoint problems when they are small and easily detectable"
or: "I'm very bad at keeping track of versions of things, but sometimes my new code does break what did work before. So I use svn/git/dropbox to keep track of things for me"
In my experience that kind of statement can raise the interest of "biologists that learned how to script".
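To make the first statement concrete, here is a minimal sketch of such a test, written in Objective-C with OCUnit for concreteness (the Stats class and its medianOf: method are invented for illustration):

    #import <SenTestingKit/SenTestingKit.h>
    #import "Stats.h"   // hypothetical class under test; medianOf: returns a double

    @interface StatsTests : SenTestCase
    @end

    @implementation StatsTests

    // If this fails, the problem is pinpointed to the median calculation,
    // not buried somewhere in a day-long analysis run.
    - (void)testMedianAveragesTheTwoMiddleValuesForAnEvenCount
    {
        NSArray *values = [NSArray arrayWithObjects:
            [NSNumber numberWithDouble:1.0], [NSNumber numberWithDouble:2.0],
            [NSNumber numberWithDouble:3.0], [NSNumber numberWithDouble:4.0], nil];
        STAssertEqualsWithAccuracy([Stats medianOf:values], 2.5, 1e-9,
                                   @"median of 1,2,3,4 should be 2.5");
    }

    @end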
And if you need to collaborate on a bigger project, make it clear that you have more experience and that everything will go more smoothly if things are done your way.
Regarding publication of code, current practice is indeed frustrating. I would like to see a new journal like Source Code for Biology and Medicine, where code is peer-reviewed and can be published, but that has no (or very low) publication costs. Putting code on SourceForge or elsewhere is indeed not "scientifically worth it" because it doesn't make a line on your publication list, and most code is not revolutionary enough to warrant paying $1,000 for publication in Source Code for Biology and Medicine or PLoS One...
You could have them use a content management system, like Joomla. That way they only push content and not code.
I wouldn't so much persuade as I would streamline the process. Document it clearly, make video tutorials and bundle some kind of tool chain that makes it ridiculously easy to get source repositories set up without requiring them to become experts in something that isn't their main field.
Take a really good programmer who already knows best practices and ask your scientists to teach him what they need and what they do; eventually the programmer will have the minimum domain knowledge (I suspect it takes between 1 and 3 years, depending on the domain) to do what the scientists ask for.
Developers always learn another domain of competency, because most of their programs are not for developers, so they need to know what the "client" does.
To play devil's advocate: is teaching scientists to be good software engineers the right thing to do? Software in research is usually very purpose-specific, sometimes to the point where a piece of code needs to run successfully only once, on a single data set. The results then feed into a publication and the goal is met. And there's a high risk that your technique or algorithm will be superseded by a better one in short order. So there's a real risk that effort spent producing sparkling code will be wasted.
When you're frustrated by wading through a swamp of ill-formed Perl code, just remember that the code you're looking at is one of the rare survivors. Mountains of such code have been written, used a few times, then discarded, never to see the light of day again.
I guess I'm just saying there's a big place in research for smelly heinous one-off prototype code. There are good reasons why such code exists. It may not be pretty, but if it gets the job done, who cares? We can always hire a software engineer to write the production-ready version later, IF it turns out to be justified, and let our scientists move on.

Pitfalls of developing for iPhone

Are there any guidelines on pitfalls to avoid while developing iPhone applications?
Sure, thousands. The same is true for any software development. Unfortunately, the easiest way to enumerate them is to write them down on a sheet of paper while waiting for a friendly soul to release you from the one you just fell into.
However:
Don't try to reinvent the wheel. The iPhone API is very complete -- you just have to LOOK for the facility you need. Things are NOT always implemented the way you would expect. Read the guides, carefully. Look at the tutorials and analyze how they work. (Try changing a line here or there in the tutorial to see what difference the change makes.) The single biggest mistake I have made in 1 year of iPhone development is not trying hard enough to find the iPhone way of doing something.
Don't ignore memory management; master it early and often. Use the Object Allocation and Leaks tools in Instruments to check for memory leaks frequently. I'd recommend checking after you complete each feature or view; more often than that if you keep finding bugs. Eventually you may understand it so well you can stop doing this.
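As a minimal illustration of what the Leaks instrument catches, here is the basic ownership pattern under manual retain/release (pre-ARC; the username and label properties are hypothetical):

    - (void)showGreeting
    {
        // alloc hands us ownership of the string (+1 retain count)...
        NSString *greeting = [[NSString alloc] initWithFormat:@"Hello, %@",
                                                              self.username];
        self.label.text = greeting;   // the label keeps its own reference
        // ...so we must balance our alloc, or Leaks will flag this object.
        [greeting release];
    }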
Don't just use the default build settings. Play around with them to understand what they do. Figure out certification and distribution. GET INTO THE DEVELOPER PROGRAM QUICKLY -- it can take a while to push through that pipeline. [ AND when you get that notification that you need to renew, get it on instantly -- there have been problems with that process. ]
Don't neglect to read the Human Interface Guidelines (HIG) carefully. If they say not to do something -- DON'T DO IT. Apple will reject applications that misuse their iconography.
Don't stint on marketing. Yes, the App Store puts your app in front of millions of people... In theory. But the odds of getting front-paged are slim. There are a lot of great apps on the App Store that haven't sold much because no one knows about them.
Don't rest on your laurels. If a new technology comes out, find out if it makes your job easier; if it does, take the time to learn it. Personal example: I'm just now trying to switch from SQLite-based data management to Core Data, because I was in a hurry at the time I started my most recent project; now I wish I had slowed down and thought about it.
Don't go into your design thinking (for example) "How do I implement my concept with a table view?" It's true that table views are natural for many informational and utility applications, but don't be constrained. Instead, think about what users will want to be able to do, how you can make it easier for them -- put things together that will be used together, etc. If you've never explored the concept of Use Cases, read up on them.
Don't hesitate to build composite views. Many of the questions I have seen here on Stack Overflow have to do with putting a toolbar at the top of a table, or having an image in the background of a text field. I understand the desire to do things the easy way, and as I state in #1 above, if there is an easy way, use it. But in many cases the solution is just to layer a couple of views with appropriate placement and transparency.
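As a sketch of that layering approach, here is a text field with an image behind it (the image filename is made up; manual retain/release as above):

    // Put an image behind a text field by adding subviews back-to-front.
    UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 60)];

    UIImageView *background = [[UIImageView alloc]
        initWithImage:[UIImage imageNamed:@"field_background.png"]];
    background.frame = container.bounds;

    UITextField *field = [[UITextField alloc]
        initWithFrame:CGRectInset(container.bounds, 10, 15)];
    field.backgroundColor = [UIColor clearColor];   // let the image show through

    [container addSubview:background];   // added first, drawn behind
    [container addSubview:field];        // added second, drawn on top

    [background release];
    [field release];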
Think about what might be Apple-approved from the start.
App Rejected is one of several useful sites to help understand Apple's mostly undocumented standards. (One more.) (A previous question on app store rejection reasons.)
A few quick examples:
Using a UIWebView can get your app a 17+ rating.
Coding with an undocumented/private API = rejected
Version number < 1.0 = possibly rejected
Not enough feedback about network success/fail = rejected
Too much network use = rejected
Clearly limited free version vs full version = rejected
The word 'iPhone' in the app name = rejected
The above links contain many more examples, and more details about those examples.
Don't neglect the programming guides. While the documentation is quite extensive, the programming guides contain a veritable trove of useful tips and "insider" information that simply cannot be gleaned from reading method definitions. I spend just as much time reading the guides for a technology (say, Core Data) as I do actually implementing it.
Don't assume you know what a method does. If you have any degree of doubt about the functionality of a method, it is well worth your time to go look it up in the documentation to verify.
Wonderful examples from @Amagrammer above.
I would love to add that the first place to start in iPhone development is Photoshop. This is still the best advice I can give to anyone who is starting out. I now use OmniGraffle because it has awesome stencil templates.
What I find is that even for super-simple apps, drawing up your prototype lets you look for usability and workflow issues. It is 100x quicker to redraw your app than to re-code it. I have fallen into this trap numerous times, and now I actually draw up even some pretty simple functionality to see what it will look and feel like.
This advice will save you tens, maybe even hundreds, of hours by hopefully getting your app right the first time, and by getting you to think through what the issues are. Throwing away code sucks, and I have done it not because the code was bad but because it made the usability or the solution worse. I think the best of us end up throwing code away; prototyping your design will definitely help you avoid having to RTFM for something you did not need to build in the first place.
If you don't have a great designer, and can't do great design by yourself, then don't even start iPhone app development. This rule only applies if you want/need to make money with your apps.

Frameworks & Doneworks: Do you think it is cool to use those?

By using somebody else's work you advertise the authors of that work (at least among other programmers). Do you think that is cool?
This line of questioning could go up one more level and become "Programming Languages: Do you think it's cool to use those?" Because someone(s) wrote those too. I can continue this up to the types of computers, to the components, etc...
Monet did not make the brushes or the paint or the canvas (well maybe, not sure). But who creates those building blocks is not quite what stands out at the end.
Languages/Frameworks/etc were built and released to be utilized by the masses (or make money for the creators).
I think it's always cool. Be more efficient, reduce redundancy, promote other useful code.
If you're trying to learn though, reading and understanding the framework you're using is very helpful. There are always other things you can be programming and learning, not necessarily reinventing the wheel.
If using their work has saved you time reimplementing the same thing (but with more bugs) then don't they deserve credit?
Or put another way, stealing other peoples' work without credit (or paying them, depending on whether we're talking about free or commercial software here) isn't cool.
Of course, nobody's stopping you from writing your own framework, if that's what you want to do...
It depends on what kind of programming you're doing.
Are you doing it to achieve a finished program? Then a framework could save you a lot of time.
Are you doing it to create something truly original? Then a framework might simply tie you into an existing way of thinking.
Rembrandt made his own paints. Michelangelo selected his own marble from the quarry. Alan Kay said "People who are really serious about software should make their own hardware". The Excel team famously has their own compiler. The iPhone ain't just an alternate firmware for the Blackberry. ISTM if you want to be at the very top of your game, you've got to get down and dirty with the nitty gritty of it.
I don't know anything about advertising, other programmers, or what's "cool", so I can't respond to those parts of your question.

Which technologies/concepts do you suggest I learn before creating an iPhone game?

Sorry if this is a broad question, but other than Objective-C, Cocoa, and OpenGL ES, what technologies or concepts would you suggest I read up on before writing a game for the iPhone? I'm a beginning game developer and need all the help I can get :)
MATHS - I would advise studying this topic.
Some example areas of interest for applications in Game Development
Calculus, geometry, the Cartesian co-ordinate system, vectors, matrices, transformations, etc.
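To hint at where this pays off on the iPhone: Core Graphics exposes 2-D vectors and transformation matrices directly. A small sketch (spriteView is a hypothetical UIView):

    // Rotate 45 degrees, then scale 2x, via a 2-D affine transform
    // (matrix multiplication under the hood).
    CGAffineTransform t = CGAffineTransformMakeRotation(M_PI / 4.0);
    t = CGAffineTransformScale(t, 2.0, 2.0);
    spriteView.transform = t;

    // The same rotation done by hand on the vector (1, 0):
    CGFloat angle = M_PI / 4.0;
    CGPoint v = CGPointMake(1.0, 0.0);
    CGPoint rotated = CGPointMake(v.x * cos(angle) - v.y * sin(angle),
                                  v.x * sin(angle) + v.y * cos(angle));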
Sorry, my answer is not computing related.
A game tells a story; a great game tells a great story. So I would suggest learning the principles of storytelling.
Rather than going as scholarly as Aristotle's Poetics, I recommend the more modern Story by Robert McKee. It focuses on moviemaking, but I am pretty sure that many of the concepts he develops can be applied to game making.
You should read some articles on GameDev. Obviously, learning some of the fundamental concepts in computer graphics would be very helpful. But really, once you get to where you can write Objective-C and understand the APIs, go ahead and get started. You will learn a lot in the process; of course, keep learning and reading about these things I mentioned, but start coding. Find some books on game programming, particularly AI and so forth. Go ahead and get your feet wet programming though. Of course, be sure you learn your language thoroughly.
Quite frankly, I have found that I never know what I need to know until I actually get my hands dirty. That's why I suggested here that someone looking to jump into designing a 3-D iPhone game start with some simpler, targeted projects. These targeted projects can teach you core concepts as you put them to practical use. OpenGL seemed like this impossible-to-understand black box until I made myself perform some simple tasks with it. In a few weeks, I had an application based on it.
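For a sense of scale, a "simple task" can be as small as this (OpenGL ES 1.1, assuming the EAGL context and framebuffer from Apple's template are already set up):

    // Clear the screen and draw a single triangle.
    static const GLfloat vertices[] = {
         0.0f,  0.5f,
        -0.5f, -0.5f,
         0.5f, -0.5f,
    };

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);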
In college, I would spend weeks trying to understand the theory behind an aspect of thermodynamics, but then I would see one practical application for it and the whole thing would fall into place. Since then, I've focused on finding specific applications for concepts before spending too much time with the pure theory behind them.
A solid understanding of what makes a good gaming UI, especially on the iPhone, would be key, given the input options the device provides, be it the accelerometer or onscreen touch input.
I'd be sure to try out existing games and see what works, what doesn't, and what gets good feedback. You may also want to look at Flash and DS based games to see what works on other small screens/devices.