I have an application that contains close to 90 different projects written in at least 3 different languages (C#, Visual C++, and Visual Basic), targeting .NET 3.0, 3.5, 4.0, and 4.5, with 6 different configurations. We are about to start scripting our compilations because of some messy configuration hassles involving preprocessing out features for different clients. I have seen multiple methods of using PowerShell scripts to compile such applications, but they seem to boil down to two options: compile the entire solution, or compile each project individually.
So my question is: is there an industry best practice for such things? And if so, what is it?
The project lead seems to be leaning toward compiling and configuring each project individually, but that seems wasteful since we already have configurations in Visual Studio for the individual projects. If this question is too subjective, I'd be happy to take it down. And sorry if my information is kind of vague, but I have to walk on eggshells with this project. Thanks.
If you are dealing with a large number of projects, and you are working with the Microsoft technology stack, the "Official" way to go would be to use Microsoft's full Team Foundation Server technology and formalize the development lifecycle.
It handles source control, build management, testing, etc. I would then combine it with Release Management to automate things more fully.
The move to TFS is a big leap and will require some time to get set up and configured, but I think it is the best thing out there once you start reaching dozens of projects. PowerShell and other methods are good, but they become unmanageable when you scale out to large numbers of projects.
I'm looking for a framework that is small and reliable and works in Flex 4.
I have some suggestions (but which should I choose):
Mate
Swiz
Robotlegs
Parsley is another choice that is well documented and can be used in a very lightweight manner. I'm partial to Robotlegs personally, as I like that it is very tiny as a framework and most of the broader functionality is provided by the community through extensions and add-ons.
For what it's worth, I've used Mate on several fairly large projects and must say it works quite well. I personally found it easier to learn and use than Cairngorm.
Property injection alone has made developing some of these projects a lot cleaner/smoother/faster. If I had to choose whether to use Mate on a project or go without a framework at all, I'd choose Mate every time.
I've been working for years on a project that is extremely large. I've used Mate as the core framework of this project, and love it. I have found it to be just enough for what I need: I get the features I want without dramatically changing the design of my project. Contrast that with Cairngorm, where your project becomes a complete Frankenstein that doesn't remotely resemble how it would look without the framework.
I have years of MVC experience (mostly Java Struts, shudder) and dependency-injection experience (Spring, Guice, etc.). As mentioned, I've also dealt with Cairngorm and found it to be one of the most painful experiences of my entire career. Of the MVC and DI frameworks I've dealt with, Mate is the one I've enjoyed the most. I have no experience with Robotlegs or Swiz, so I can't directly compare them.
The only knock I would give against Mate is that it does not seem to be very actively maintained these days. However, I find it to be remarkably bug-free and not in much need of maintenance. It isn't broke, and doesn't need much fixing.
All three are solid frameworks, and I know very talented and seasoned developers who are partial to each of them for various reasons.
All three have a dependency injection mechanism built into them and that is the sweet spot.
Mate is by far the most lightweight, since it focuses primarily on dependency injection. Robotlegs and Swiz are a little more full-featured and have more MVC components built in.
So to that, I agree with Jason.
I'm moving away from strictly Android development and want to create iPhone applications as well. My understanding is that I can code the backend of iOS applications in C/C++, and also that I can use the NDK to include C/C++ code in Android apps. My question, however, is: how? I've googled quite a bit, and I can't find any clear and concise answers.
When looking at sample code for the NDK, it seems that all the function names, etc. are Android- (or at least Java-) specific; does that mean I would not be able to use this C/C++ backend to develop an iPhone frontend?
I'd appreciate some clarification on this issue and, if available, some code to help me out (even just a simple Hello World that reads a string from a C/C++ file and displays it in both an iOS and an Android app).
Thanks, guys.
Chris
Note that I almost exclusively work on "business/utility/productivity" applications; things that rely heavily on fairly standard UI elements and expect to integrate well with their platform. This answer reflects that. See Mitch Lindgren's comment to Shaggy Frog's answer for good comments for game developers, who have a completely different situation.
I believe @Shaggy Frog is incorrect here. If you have effective, tested code in C++, there is no reason not to share it between Android and iPhone; I've worked on projects that do just that, and it can be very successful. There are dangers that should be avoided, however.
Most critically, be careful of the "lowest common denominator." Self-contained, algorithmic code shares very well. Complex frameworks that manage threads, talk on the network, or otherwise interact with the OS are more challenging to share in a way that doesn't force you to break the paradigms of the platform and shoot for an LCD that works equally badly on all platforms. In particular, I recommend writing your networking code using the platform's frameworks. This often requires a "sandwich" approach where the top layer and the very bottom layer are platform-specific and the middle is portable. This is a very good thing if designed carefully.
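To make the "sandwich" concrete, here's a minimal hypothetical sketch (all class and URL names are illustrative, not from any real project): the portable middle layer is written against an abstract transport interface, and each platform supplies the bottom slice with its native networking stack.

```cpp
// Hypothetical sketch of the "sandwich" approach. The portable middle layer
// depends only on this abstract interface; each platform implements the
// bottom slice with its native networking stack (not shown here).
#include <functional>
#include <string>

// Bottom slice: one concrete subclass per platform.
class HttpTransport {
public:
    virtual ~HttpTransport() = default;
    virtual void get(const std::string& url,
                     std::function<void(int status, const std::string& body)> done) = 0;
};

// Portable middle: shared logic written purely against the interface.
class SyncEngine {
public:
    explicit SyncEngine(HttpTransport& transport) : transport_(transport) {}

    void refresh() {
        // Illustrative URL; the callback runs wherever the platform delivers it.
        transport_.get("https://example.com/api/state",
                       [](int status, const std::string& body) {
                           if (status == 200) {
                               // parse `body` and update shared state here
                           }
                       });
    }

private:
    HttpTransport& transport_;
};
```

The top slice (the UI that calls `refresh()`) stays platform-specific as well; only the middle travels between platforms.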
Thread management and timers should also be done using the platform's frameworks. In particular, Java uses threads heavily, while iOS typically relies on its run loop to avoid threads; when iOS does use threads, GCD is strongly preferred. Again, the solution here is to isolate the truly portable algorithms and let platform-specific code manage how they get called.
If you have a complex, existing framework that is heavily threaded and has a lot of network or UI code spread throughout it, then sharing it may be difficult, but my recommendation still would be to look for ways to refactor it rather than rewrite it.
As an iOS and Mac developer who works extensively with cross-platform code shared on Linux, Windows and Android, I can say that Android is by far the most annoying of the platforms to share with (Windows used to hold this distinction, but Android blew it away). Android has had the most cases where it is not wise to share code. But there are still many opportunities for code reuse and they should be pursued.
While the sentiment is sound (you are following the principle of Don't Repeat Yourself), it's only pragmatic if you can share that code in an efficient manner. In this case, it's not really possible to have a "write once" approach to cross-platform development, because the code for the two platforms needs to be written in different languages (C/C++/Obj-C on iPhone, Java for Android).
You'll be better off writing two different codebases in this case (in two different languages). Word of advice: don't write your Java code like it's C++, or your C++ code like it's Java. I worked at a company a number of years ago that had a product they "ported" from Java to C++; they didn't write the C++ code like it was C++, which caused all sorts of problems, not to mention being hard to read.
Writing a shared code base really is practical in this situation. There is some overhead to setting it up and keeping it organized, but the major benefits are these: 1) reducing the amount of code by sharing common functionality, and 2) sharing bug fixes to the common code base. I'm currently aware of two routes that I'm considering for a project: use native C/C++ (gains in speed at the expense of losing garbage collection, and you have to set targets per processor), or use MonoDroid/MonoTouch, which provide C# bindings for each OS's platform functionality (I'm uncertain how mature this is).
If I were writing a 3D game, I'd definitely use approach #1; a sketch of what that route looks like follows.
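To give the flavor of route #1, here is a minimal, hypothetical Hello World of the kind the question asks for. A platform-neutral C++ function lives in the shared codebase; the Android build adds a thin JNI wrapper via the NDK (the exported symbol name must match your Java package and class, so every name below is illustrative), while on iOS you can call the C++ function directly from an Objective-C++ (.mm) file.

```cpp
// greeting.h -- hypothetical shared, platform-neutral code
#include <string>

inline std::string GetGreeting() {
    return "Hello from shared C++!";
}

// android_bridge.cpp -- JNI wrapper compiled only into the Android build.
// It matches a hypothetical Java declaration:
//   package com.example.app;
//   class NativeLib { static native String getGreeting(); }
#include <jni.h>

extern "C" JNIEXPORT jstring JNICALL
Java_com_example_app_NativeLib_getGreeting(JNIEnv* env, jclass /*clazz*/) {
    return env->NewStringUTF(GetGreeting().c_str());
}
```

On the iOS side there is no bridge at all: a view controller renamed to .mm can include greeting.h and put the returned string straight into a UILabel.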
I posted this same answer to a similar question but I think it's relevant so...
I use BatteryTech for my platform-abstraction stuff and my project structure looks like this:
On my PC:
gamename - contains just the common code
gamename-android - holds mostly BatteryTech's android-specific code and Android config, builders point to gamename project for common code
gamename-win32 - Just for building out to Windows, uses code from gamename project
On my Mac:
gamename - contains just the common code
gamename-ios - The iPhone/iPad build, imports common code
gamename-osx - The OS X native build; imports common code.
And I use SVN to share between my PC and Mac. My only real problems come when I add classes to the common codebase on Windows and then update on the Mac to pull them down from SVN. Xcode doesn't have a way to automatically add them to the project without scripts, so I have to pull them in manually each time, which is a pain but isn't the end of the world.
All of this stuff comes with BatteryTech so it's easy to figure out once you get it.
Besides using C/C++ to share a native library, consider the following.
If you're developing a cross-platform app such as a game, I suggest using a Mono-based framework like Unity3D.
If you're developing a business app that requires a native UI and you want to share business-logic code across mobile platforms, I suggest using an embedded Lua engine as the client-side business-logic core.
The client UI stays native and gets the best UI performance: Java on Android and Objective-C on iOS.
The logic is shared via the same Lua scripts on all platforms, so the Lua layer works like a set of client-side services (comparable to server-side services). A sketch of the embedding follows.
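For illustration, here is a minimal sketch of what embedding Lua as the shared logic layer can look like, assuming the Lua 5.2+ C API; the script name and the validate_order function are hypothetical:

```cpp
// Minimal sketch of embedding Lua as the shared business-logic layer.
// Assumes the Lua 5.2+ C API; "business_logic.lua" and validate_order()
// are hypothetical names used only for illustration.
#include <lua.hpp>
#include <cstdio>

int main() {
    lua_State* L = luaL_newstate();  // one interpreter per client
    luaL_openlibs(L);                // load the standard libraries

    // Load the shared script that ships with every platform build.
    if (luaL_dofile(L, "business_logic.lua") != LUA_OK) {
        std::fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return 1;
    }

    // Call a hypothetical validate_order(amount) defined in the script.
    lua_getglobal(L, "validate_order");
    lua_pushnumber(L, 42.0);
    if (lua_pcall(L, 1, 1, 0) == LUA_OK) {
        std::printf("validate_order -> %s\n",
                    lua_toboolean(L, -1) ? "accepted" : "rejected");
        lua_pop(L, 1);
    }
    lua_close(L);
    return 0;
}
```

On Android the same engine would be reached through a thin JNI shim, while on iOS it links straight into the app, so the scripts themselves stay identical across platforms.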
-- Anderson Mao, 2013-03-28
Though I don't use these myself, as most of the stuff I write won't port well, I would recommend looking at something like Appcelerator or Red Foundry to build basic applications that can then be generated natively for either platform. In these cases you're not writing Objective-C or Java; you use some kind of intermediary. Note that if you move outside the box they've confined you to, you'll need to write your own code closer to the metal.
I'm planning on starting a new project, and am evaluating various web frameworks. There is one that I'm seriously considering, but I worry about its lasting power.
When choosing a web framework, what should I look for when deciding what to go with?
Here's what I have noticed with the framework I'm looking at:
Small community. There are only a few messages on the users list each day
No news on the "news" page since the previous release, over 6 months ago
No svn commits in the last 30 days
Good documentation, but wiki not updated since previous release
Most recent release still not in a maven repository
It is not the officially sanctioned Java EE framework, but I've seen several people mention it as a good solution in answers to various questions on Stack Overflow.
I'm not going to say which framework I'm looking at, because I don't want this to get into a framework war. I want to know what other aspects of the project I should look at in my evaluation of risk. This should apply to other areas besides just Java EE web, like ORM, etc.
I'll say that so-called "dead" projects are not that great a danger as long as the project itself is solid and you like it. The thing is that if the library or framework already does everything you can think you want, then it's not such a big deal. If you get a stable project up and running then you should be done thinking about the framework (done!) and focus only on your webapp. You shouldn't be required to update the framework itself with the latest release every month.
Personally, I think the most important point is that you find one that is intuitive for your project. What makes the most sense? MVC? Should each element in the URL be a separate object? How would interactivity (AJAX) work? It makes no sense to pick something just because it's an "industry standard" or because it's used by a lot of big-name sites; maybe they chose it for needs entirely different from yours. Read the tutorials for each framework and be critical. If it doesn't gel with your way of thinking, or you have seen it done more elegantly elsewhere, move on. What you are considering here is the design, and good design is paramount for staying flexible and scalable. There are hundreds of web frameworks out there, old and new, in every language. You're bound to find half a dozen that work just the way you want to think in your project.
Points I consider mandatory:
Extensible through plug-ins: check if there's already plug-ins for various middleware tasks such as memcache, gzip, OpenID, AJAX goodness, etc.
Simplicity and modularity: the more complex, the steeper the learning curve and the less you can trust its stability; the more "locked" to specific technologies, the higher the chances that you'll end up with a chain around your ankle.
Database agnostic: can you use sqlite3 for development and then switch to your production DB by changing a single line of code or configuration?
Platform agnostic: can you run it on Apache, lighttpd, etc.? Could you port it to run in a cloud?
Template agnostic: can you switch out the template system? Let's say you hire dedicated designers and they really want to go with something else.
Documentation: I am not that strict if it's open-source, but there would need to be enough official documentation to enable me to fully understand how to write my own plug-ins, for example. Also look to see if there's source code of working sites using the same framework.
License and source code: do you have access to the source code, and are you allowed to modify it? Consider whether you can use it commercially (even if you have no plans to do so currently).
All in all: flexibility. If I am satisfied on all of these points, I'm pretty much done. Notice how I didn't have anything about "deadness" in there? If the core design is good and there are easily installable plug-ins for doing every web-dev 3.0-beta buzzword thing you want to do, then I don't care if the last SVN commit was in 2006.
Here are the things I look for in a framework before I decide to use it for a production environment project:
Plenty of well-written, well-organized documentation. Bad documentation just means I'm wasting time trying to find out how everything works. That's OK if I'm playing around with some cool new micro-framework or something else, but not when it's for a client.
A decently sized community so that you can ask questions, etc. A fun and active IRC channel is a big plus.
Constant iteration of the product. Are bugs being closed or opened on a daily/weekly basis? Probably a good sign.
I can go through the code of the framework and understand what's going on. Good framework code means the project's long-term life has a better chance of success.
I enjoy working with it. If I play with it for a few hours and it's the worst time of my life, I sure as hell won't be using it for a client.
I can go on, but those are some primary ones off the top of my head.
Besides looking at the framework, you also need to consider a lot of things about yourself (and any other team members) when evaluating the risks:
If the framework is a new, immature, "bleeding-edge" framework, are you going to be willing and able to debug it and fix or work around whatever problems you encounter?
If there is a small community, you'll have to do a lot of this debugging and diagnosis yourself. Will you have time to do that and still meet whatever deadlines you may have?
Have you looked at the framework yourself to determine how good it is, or are you willing to rely on what others say about it? Why do you trust their judgment?
Why do you want to use this rather than the "officially sanctioned Java EE framework"? Is it a pragmatic reason, or just a desire to try something new?
If problems with the framework cause you to miss deadlines or deliver a poor product, how will you talk about it with your boss or customer?
All the signs you've cited could be bad news for your framework choice.
Another thing I look for is books available on Amazon and the like. If good books are available, it means authors believe the framework has traction, and you'll be able to find users who know it.
The only saving grace I can think of is relative maturity. If the framework or open source component is mature, there's a chance that it does the job as written and doesn't require further extension.
There should still be a bug tracker with some evidence of activity, because no software is without bugs (except for mine). But it need not be a gusher of requests in that case.
Over the years my application has grown from 1 MB to 25 MB, and I expect it to grow further to 40 or 50 MB. I don't use DLLs; I put everything into this one big executable.
Having one big executable has certain advantages:
Installing my application at a customer site is really just copy and run.
Upgrades can be easily zipped and sent to the customer.
There is no risk of conflicting DLLs (where the customer has version X of the EXE but version Y of the DLL).
The big disadvantage of the big EXE is that linking times seem to grow exponentially.
Additional problem is that a part of the code (let's say about 40%) is shared with another application. Again, the advantages are that:
There is no risk on having a mix of incorrect DLL versions
Every developer can make changes to the common code, which speeds up development.
But again, this has a serious impact on compilation times (everyone compiles the common code again on his PC) and on linking times.
The question "Grouping DLL's for use in Executable" mentions the possibility of mixing DLLs into one executable, but it looks like this still requires you to load the libraries and resolve all functions manually in your application (using LoadLibrary, GetProcAddress, ...).
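For context, the manual resolution referred to looks roughly like the following; the DLL and function names are purely illustrative:

```cpp
// Sketch of manually loading a DLL and resolving a function at run time
// (Win32). "CommonCalc.dll" and "ComputeTotal" are illustrative names.
#include <windows.h>
#include <cstdio>

typedef double (*ComputeTotalFn)(const double* values, int count);

int main() {
    HMODULE lib = LoadLibraryW(L"CommonCalc.dll");
    if (!lib) {
        std::fprintf(stderr, "could not load CommonCalc.dll\n");
        return 1;
    }

    auto compute = reinterpret_cast<ComputeTotalFn>(
        GetProcAddress(lib, "ComputeTotal"));
    if (compute) {
        const double vals[] = {1.0, 2.0, 3.0};
        std::printf("total = %f\n", compute(vals, 3));
    }
    FreeLibrary(lib);
    return 0;
}
```

Doing this for every exported function is exactly the per-function bookkeeping that makes the approach unattractive at scale.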
What is your opinion on executable sizes, the use of DLL's and the best 'balance' between easy deployment and easy/fast development?
A single executable has a huge positive impact on maintainability. It is easier to debug, deploy (size issues aside) and diagnose in the field. As you point out, it completely sidesteps DLL hell.
The most straightforward solution to your problem is to have two compilation modes: one that builds a single EXE for production, and one that builds lots of little DLLs for development.
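A common way to make one codebase support both modes is a small export macro in a shared header; everything below is a hypothetical sketch, not code from the question's project:

```cpp
// common_api.h -- hypothetical header letting the same sources build either
// as a DLL (development) or linked straight into the EXE (production).
#pragma once

#if defined(COMMON_BUILD_DLL)            // set only by the "many DLLs" configuration
  #if defined(COMMON_EXPORTS)            // defined while compiling the DLL itself
    #define COMMON_API __declspec(dllexport)
  #else
    #define COMMON_API __declspec(dllimport)
  #endif
#else
  #define COMMON_API                     // single-EXE build: ordinary static linkage
#endif

COMMON_API double ComputeTotal(const double* values, int count);
```

The development configuration defines COMMON_BUILD_DLL (plus COMMON_EXPORTS inside the DLL project itself); the production configuration defines neither, and the identical sources link directly into the executable.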
The tenet is: reduce the number of your .NET assemblies to the strict minimum; a single assembly is the ideal number. This is, for example, the case for Reflector and NHibernate, which both ship as very few assemblies. My company published two free white books on the topic of one big executable vs. many small DLLs:
Partitioning code base through .NET assemblies and Visual Studio projects (8 pages)
Defining .NET Components with Namespaces (7 pages)
The arguments developed in these white books cover valid and invalid reasons to create an assembly, along with a case study of the code base of the tool NDepend.
The problem is that MS has fostered (and is still fostering) the idea that assemblies are components, whereas assemblies are just physical artifacts for packaging code. The notion of a component is a logical one, and typically an assembly should contain several components. It is a good idea to partition components with namespaces, although this is not always practicable (especially in the case of a framework with a public API, where namespaces are used to partition the API and not necessarily the components).
One big executable is definitely beneficial: you get whole-program optimization and less overhead, and maintenance is much simpler.
As for the link time: you can have both the "many DLLs" and "one big executable" setups at the same time. For each DLL, also have a project configuration that builds a static library. When you debug, you compile the "DLL" configuration of the projects; when you need to ship, you compile the "static library" configurations. Sometimes you will see different behavior in different configurations, but that will have to be addressed case by case.
An easier way to maintain large programs is to compose them from smaller, manageable parts: a shell plus modules that add features to the shell. Large programs like Visual Studio and Outlook use this same concept. Try this approach to build more maintainable and robust programs.
At work we're developing a large-scale application with quite a few front-end, back-end and support components. Typically the front-end is developed in C# and the back-end is developed in Java, although parts of the back-end are also developed in C# and possibly later C++.
The choice of language and platform is not arbitrary; we try to weigh the relative merits of each in development time, tool-chain cost, familiarity with the language by the specific development team etc. What all these components have in common, though, is that they are all required for the complete operation of the product, and that they are being developed concurrently by independent (but highly communicative) teams.
Previously, we have used Team Foundation Server for our .NET code and Subversion for our Java code; because there was clear separation of the teams' responsibilities, this caused little problem beyond the inconvenience of placing binaries (WARs, in this case) generated from one source tree in another, and the high manual overhead of keeping the branches and revisions in sync. With this project, the degree of separation between the teams is intentionally much smaller, and the volume of branching/merging is expected to be considerably higher; as a result we're moving to a unified VCS, more specifically Subversion.
This brings me to the meat of the question: how does one mix Java and C# code effectively? In practice, we'll have .NET code dependent on a Java codebase; the Java binaries are required to run anything other than unit test code (integration tests already require the binaries, and QA, acceptance testing etc. certainly does as well). What we currently have in mind looks something like:
/trunk
/java
/component1
/component2
/library1
/library2
/net
/assembly1
/assembly2
/...
project.sln
The idea is that the entire source tree is placed under one branch; the .NET code is dependent on the Java code, so we'll add a post-build step to the solution which will (most likely) call the Ant script for the Java components. This allows branching of the entire codebase (for .NET developers) or just the Java components (for Java developers).
The problems with this solution are:
What happens when one of the two codebases becomes so large that making copies of it for every branch becomes impractical? (Our thoughts: split into separate repositories for .NET and Java code and use svn:externals; any input on this would be greatly appreciated.)
We use Eclipse for Java development. How do we manage the "shared" workspace (i.e. which projects are required for which components, the dependency graph etc.)? Up until now we've had relatively few Java components, so each developer could just keep all of them in the workspace at the same time. With the increase in Java components and Java developers I don't see how we can keep doing that; any suggestions on how to keep the workspace versioned (a la solution files) while still maintaining sync between the two code-bases?
I would love to hear your input!
1: I've found it best to group things by component rather than by language. If one component requires several languages for its interface, you still need to develop, test, and release them as one. So splitting a component across several repos is not a good idea.
If one part of the code depends tightly on another, keep it together; if you must split across repos, split along component boundaries. (This even goes for internal structure: especially as things grow, it's difficult if you package things by type rather than by function. I.e., in MVC, don't have three huge packages, one per category; rather, keep FooView, FooModel, and FooController close together.)
svn:externals might work, and with the later versions I think you can use "internals", i.e. link to other dirs in the same repo. That is miles easier than managing separate repos, especially with tagging and branching. (shudder)
2: You could always have the developers set up different workspaces, or perhaps use working sets. Commercial Eclipse releases have better support for sharing workspace settings than the open-source variant. (I haven't tried that; I've only worked with, and been frustrated by, the open-source one.)
I've done C++ (MSVS) and Java (Eclipse) in one repo, and it works pretty well. Also C++/Python similarly. Make sure your build system supports building and testing everything (even if your IDEs only build one part).