So I've been using Smalltalk for about 6 months now (Squeak and Pharo), mostly doing data analytics, and I'm about to start my first Seaside app. So my question to all you Smalltalkers out there is, what is your favorite persistence solution? I've been looking at Magma, GOODS, and GLORP. I'm a long-time python hacker, so I get ORM, but it seems like Magma or GOODS would be a better solution, since they seem object-oriented.
A quick note: I want to scale my app across multiple VMs, so just saving data to the image won't really work.
Thanks!
If you want to scale across multiple VMs, you might want to take a look at GemStone/S.
Be aware, however, that GemStone is a proprietary, commercial product, so you will have to pay for it. However, the pricing model is generally designed so that if you need a bigger edition, you will usually also have the user base to pay for that edition. Prices start at $0 for the 4 GiB disk / 1 GiB RAM / 1 CPU version.
Another thing to note is that GemStone Smalltalk is its own dialect, so your Squeak code will probably not run unmodified, but it should be fairly easy to port. (For example, the GemStone engineers have created an adapter that allows you to load Monticello (Squeak's version control system) packages into GemStone/S, and they generally make sure that Seaside runs.)
So, what is GemStone? Basically, it's a distributed VM with automatic object persistence. It's easiest to explain compared to a normal Smalltalk VM. If you have two Smalltalk VMs running side-by-side, each of them has its own Object Memory (i.e. the thing the garbage collector manages). And that Object Memory is in RAM. In GemStone, all VMs in a cluster share the same Object Memory and it lives on disk, not in RAM. So, you don't need a database, not even an object-oriented one, because your objects are "just there", everywhere, all the time.
(That's only a very simplistic description. For example, the heap is not really shared across VMs; that wouldn't make sense, because you wouldn't want to replicate every temporary object you create across the network. Instead, you have a global repository object (basically, a dictionary), and just like the garbage collector starts at some well-known root object, keeps all objects that are reachable from there, and deletes those that aren't, GemStone starts at the global repository object and persists/replicates only the objects that are reachable from there.)
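To make the reachability idea concrete, here is a toy sketch in Python (just an illustration of the concept, not GemStone code, and the file name is made up): anything reachable from the repository root gets persisted; anything else is treated as temporary.

    import pickle

    # Toy illustration of persistence-by-reachability (NOT GemStone code).
    # Only objects reachable from the `repository` root are written out.
    repository = {"customers": [{"name": "Ada"}, {"name": "Grace"}]}

    scratch = ["a temporary object; not reachable from the root, so never persisted"]

    # "Committing" walks the object graph starting at the root and saves it.
    with open("repository.db", "wb") as f:
        pickle.dump(repository, f)

    # Another session (or, in GemStone's case, another VM) sees the same objects
    # simply by loading the root again.
    with open("repository.db", "rb") as f:
        print(pickle.load(f)["customers"][0]["name"])  # -> Ada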
GemStone also has database-ish features, so access to the global repository is wrapped in ACID transactions, and there is a SQL-inspired but Smalltalkish query language.
GemStone has a nice appliance that they call "GLASS" (for GemStone, Linux, Apache, Seaside and Smalltalk), analogous to the well-known "LAMP" (Linux, Apache, MySQL and PHP). GLASS includes the gratis edition of GemStone with Seaside preinstalled and everything set up with Apache running on top of Xubuntu, all neatly packaged into a VMware disk image.
GLASS alone doesn't really give you an overview of your data; SandstoneDB does. You can use SandstoneDB with both GOODS and GLASS (or even on its own), depending on how much money you wish to spend (SandstoneDB is free in all senses; GLASS is commercial, but free as in beer for small installations).
Check out the SandstoneDB page, and there is an adaptor for GOODS. To use SandstoneDB with GLASS, just switch the store to SDMemoryStore; see the class comments on SDMemoryStore in SandstoneDB.
If you can choose, I would also choose GLASS or Magma (it depends on how big the project is).
Take into account that Glorp in Squeak only works with PostgreSQL. We developed SqueakDBX, a database driver that communicates with most databases. We are now modifying Glorp so that you can use it with all of them (not only PostgreSQL), but this won't be ready until the end of this year.
A few friends and I run a little app-creation business in our spare time. Our current development environment is 3 MacBook laptops running just Snow Leopard, 4 Asus laptops dual-booting Windows 7 and Ubuntu, and a rubbish test server box that is similar to our VPS.
Our setup currently works okay-ish, with a few minor issues, like not knowing which version of the software we are working on (caused by continually switching operating systems), and lost productivity from being too lazy to switch the laptop we are working on - having to unplug it and plug in the new one, including the second monitor, keyboard and mouse.
Our system is far from professional and we are looking to upgrade, because we wish to increase our staff and we have some cash saved up, so why not. The phones we are targeting are iOS, Android and Windows Phone 7. Our servers are written in PHP and serve JSON. So my question is basically: how do you manage with all these multiple operating systems?
iOS requires Mac OS X
Android can use any of them
the PHP/JSON back end requires Linux / Mac OS X
Windows Phone 7 requires Windows
Do you use some form of virtualization?
Or do you use libraries that compile to native binaries for each phone, such as Unity?
There are many many different ways to solve this and you may have to find what works best for you. Here are some suggestions though.
Using the MacBooks, set up Boot Camp so you can dual-boot into OS X or Windows. This will mean you can use the MacBook for all development without having to bother swapping monitors, etc. Doing this will leave your other Windows laptops spare, which you can use for the next suggestion....
Set up a central repository for your source code. Use one of the servers you have, or repurpose one of the other machines, and install a decent source-code repository system: CVS, Git, etc. There are plenty of resources about these. This will allow you to keep your code in one place, so it won't matter which machine you are working on - you can always get the most recent code. Plus it will help you track your code changes. Oh, and don't forget that having it all in one place will make backups much easier (you do do backups, don't you....?)
Don't fall into the trap of upgrading hardware just because you have some money floating around. You may just need to use the hardware you have more wisely. You mention what you have is "far from professional". You don't need the latest, greatest hardware and software to do development. I've done iOS development on a 4-year-old MacBook Pro, used an 8-year-old PC as a server for web and database, and still use Windows XP every day.
Depending on how many of you there are, you may not have enough Macbooks. If this is the case, then perhaps you have some who are specialists in the server-side stuff (ie they don't do iOS development and so don't need the Macs).
Virtualisation - using VMware or similar tools is an excellent way of getting more from what you have. For example, you could have a couple of test servers that aren't very heavily utilised. Using virtualisation, you could put both of these servers onto one machine. This will then free up the other box for something else. It also makes it very easy to back up an entire server (you are doing backups, aren't you...?) and recover it back to the exact state in the case of a hardware failure. You can also very easily create a server tailored for each client/project and switch between them quickly, without having to maintain lots of other stuff (think of having a web server configured for one project and then working on another project that needs a different configuration: you change it, then you need to change it back, etc.).
EDIT: Update in response to comments.
If using Boot Camp isn't an option, then consider running a Windows and/or Linux virtual machine inside OS X. Depending on the spec of your MacBooks, and as long as you don't need very low-level hardware access on Windows, this would probably work just as well, without the need to switch in and out using Boot Camp. The same goes for the Linux virtual machine. I'm a big fan of using virtual machines in development environments, as it allows you to copy around and switch servers in and out without having to rely on physical hardware connections. And you can very easily return to a known state with the server configuration and data.
With regard to source control "in the cloud": I'm not a fan of this approach. It's my source code and I want to control it. I don't want to be reliant on some other company, and I don't want to hope I've read some Terms and Conditions correctly and that I'm not handing over my code to some other company to do what they want with it. Aside from that, what happens if your internet access goes down and you absolutely must get some coding done for a customer? If you are relying on another service, then you are risking problems. Yes, it has advantages for multi-site work, they do the backups for you, etc. But it really isn't a problem unless you have lots of developers spread all across the world. And even then it isn't necessarily a problem. You could always back up your code to some package file, encrypt it and then throw that up in the cloud for backup storage (as well as burning it to disc, writing it to another external hard drive and storing them off-site). But I certainly wouldn't want to rely on external source control unless I was doing open-source stuff.
There's sooooo much more to these subjects and there are many other subjects you will probably encounter along the way of building up your business.
One of the most important things about software development is to keep it organised, and to get that organisation done at the start. If you are each just keeping a copy of the code on local drives, changing code and hoping that you haven't changed the same file as someone else, then this will just lead to pain. The source control aspect is key from the start.
Oh, and did I mention backups?
I would also consider the IDE you're using as part of the equation. For instance, a good cross-platform IDE (like Qt 4+) and a centralised code repository on a server will go a long way towards mitigating your working problems. Eclipse, NetBeans and Qt 4+ are cross-platform and will work with all three systems. Virtualisation, as you mentioned, is an option, but I would first decide on the IDE platforms to use before worrying about your dev infrastructure setup.
Bro, I'm not a pro, but you have two options:
Either multi-boot your system by installing multiple OSes (obviously, you need a separate MacBook)...
Or use virtual machines, like VMware etc.
Personally, I haven't heard much about libraries like Unity.
Go for dedicated systems and not just libraries.
I am about to use Apache Hadoop; the headline reads:
The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.
I can relate "scalability" to programming, but I just don't know how this "distributing" can help me in my development. According to Wikipedia:
A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal.
So does this mean I can deploy my web apps across multiple computers and do some sort of "intense computing"? The terms that come into my mind are Content Delivery Networks and Cloud Computing.
Web development has always been about distributed computing, since clients have been on different machines to the servers they talk to, web pages can pull in resources from many servers to build a page's content, and servers may talk to other machines to achieve their goals. CDNs make this more obvious than before, but really they're just an evolution, an introduction of a virtualization/indirection layer between what you ask for and the hardware used to provide it.
Clouds are about taking the concepts of virtualization and applying them to remote hosting, both of low-level OSes and higher-level software platforms. The really interesting thing about them is that this enables different business models on the part of customers (and with different risks too, but that's mostly not related to the fact that it's distributed computing but rather that it is not wholly under your control in your own jurisdiction).
I've found that the most effective use of distributed computing is when you think in terms of connecting together distinct services, each with different capabilities (which might be for technical reasons, or might not; sometimes it's for business or legal reasons that things have to be divided up), and where each of those services may be provided by many components in multiple locations. There are, and will continue to be, issues with balancing the need for performance (a force that brings components together) and the need for robustness (which tends to lead to distribution and replication) within the overall context of the general capabilities map.
My goodness! That paragraph sounds like terrible piffle! What I'm trying to say is that it's all trade-offs, and you should be prepared for not getting it right first time.
(Hadoop is a mechanism for doing a distributed file store, and for efficiently applying certain classes of operation – those that fit well with MapReduce or other similar scatter-gather algorithms – across that whole dataset. If that shoe fits, use it. But it doesn't solve all problems, and thank goodness for that! Things that can do everything tend to look very much like things that can't actually do anything at all, and usefulness and comprehensibility come in the restrictions.)
Hadoop is typically used to process massive data sets by distributing the processing of that data set across multiple machines.
What this means is you probably don't want to use it to "deploy an application". You might use it to process stats on your application, however. For instance, you might have very large logs of user data. This would happen if your user data grows to become too large to fit on a single hard drive, and/or would take too long for one machine to process stats on (using standard methods like an SQL query).
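As a concrete (if simplified) example of the kind of job Hadoop is typically used for, here is a mapper/reducer pair for Hadoop Streaming, written in Python. The log format and field position are assumptions made up for illustration, so treat it as a sketch rather than a drop-in solution; it counts hits per URL across logs far too large for one machine.

    #!/usr/bin/env python
    # mapper.py - emit one "<url> TAB 1" line per log record.
    # Assumption: the requested URL is the 7th whitespace-separated field.
    import sys

    for line in sys.stdin:
        fields = line.split()
        if len(fields) > 6:
            print("%s\t1" % fields[6])

and the matching reducer:

    #!/usr/bin/env python
    # reducer.py - sum the counts for each URL. Hadoop sorts the mapper output
    # by key, so all lines for the same URL arrive together.
    import sys

    current_url, count = None, 0
    for line in sys.stdin:
        url, _, value = line.rstrip("\n").partition("\t")
        if url != current_url:
            if current_url is not None:
                print("%s\t%d" % (current_url, count))
            current_url, count = url, 0
        count += int(value)
    if current_url is not None:
        print("%s\t%d" % (current_url, count))

You would then launch the job with the Hadoop Streaming jar, along the lines of hadoop jar hadoop-streaming.jar -input /logs -output /hits -mapper mapper.py -reducer reducer.py (the exact jar path depends on your installation), and Hadoop takes care of splitting the input across machines and collecting the results.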
Ygam, the traditional roles of "clients" and "servers" were pretty stable from about 1960 until about 2005.
I believe with every fiber of my being that distributed computing now means we all carry processors around in our pockets.
Phones do computing work. Phones do NOT need centralized servers, but they DO benefit from them.
Phones, smartphones, and tablets are an example of where distributed computation is going.
You can make a wifi base-station out of an Android device now. So now a phone becomes a server of sorts, for just that instant in the coffee shop when you turn it on for that cute person next to you without internet... and now I digress.
I'm trying to roll out a policy in my company where all developers have to work on a virtual machine (e.g. VMware Workstation) that has the dev environment - IDE, tools, service packs - already installed, to make it easier for newcomers to the team, smoother to provision new machines, etc...
Do you recommend such an approach, or do you work in a similar fashion at your company?
I've got a colleague who likes to work this way. He's got a virtual machine for each project he works on.
I personally don't like using a virtual machine to do development.
It's slower than working directly on my machine.
It doesn't do multiple monitors well.
Don't protect your devs from knowing the gritty details about IDEs, tools, and service packs. They need to know these things.
Also, don't force your devs to work a certain way. Some may not be happy about it, and unhappy devs = less productive devs.
I have worked with both methods for years. Currently I use VMs. They have many many advantages. However, don't force anyone into one particular way. They won't be productive if they are forced. If you can, convince them.
Advantages of VM for Dev:
Very quick deployment: one volunteer updates and customises the image with the latest software, and everyone gets the benefit.
Each project can get a separate copy, no interference and no conflicts.
Very simple to "freeze" everything and restart! No need to save, close, run, load...
When things go wrong, it's just an image: scrap it, clone a new one and check out your code.
Freeze while debugging or testing (sometimes you want to capture a specific state). Snapshots help if you want to go back and repeat some actions (think testing).
VMWare has remote debugging and backward execution!
Reproducibility! Your devs and testers can reproduce bugs, since the environment is controlled (assuming nothing other than work is on the image) and states are saved (assuming they use snapshots).
On the other hand, there are disadvantages:
VMs are bulkier, take a lot of space and memory.
You won't get 100% of your hardware performance.
You will lose some time on image maintenance.
Some people just hate it.
I highly recommend using virtual machines for development. Local virtual machines have very little performance penalty and make it much safer to try new ideas/software.
Just make sure you have enough RAM to allow for several VMs and the host OS.
Where I work, the policy mandates that we all have a physical machine which runs a VM. We only have admin privileges on the VM, not the physical machine. This tends to create problems when we have to run several development applications: builds tend to be slow; everything is slow, for that matter. Also, when the VM starts reaching the 15 GB limit (around a month and a half of use), things get complicated, as the VMs start crashing and we need to ask for VM compression.
My experience has been bad, so I wouldn't recommend it. We usually run the following applications in the VM: text editors, an IDE, a WebLogic instance, TOAD for database access, Explorer and Firefox, and office applications.
With modern IDEs there's a lot of graphics and disk IO going on, neither of which is performed well by VMs. So - if your VM responds fast enough for the developers to use, then I'd say there's no reason why not. If it doesn't, you either need to get faster machines for them or go back to documenting how to set up the build environment.
The other factor against VMs is that if you change the environment, you have to do it for all VMs, and document the changes anyway. If you're telling everyone how to set up their system, you might as well let them set their own system up on the bare metal.
Incidentally, we do have VMs for this - but they tend to be for old versions of the product, so we can still build it without having to install the old service packs, SDKs and compilers. It's OK, but I find installing everything locally and switching between them (using junctions to point to the build directories) is easier.
Now, IIRC, VMware has a virtualisation project called ThinApp that transparently puts an OS environment onto your local box, so you can have several conflicting applications running side-by-side. I've not used it, but I did look into it as something that might be better than whole guest VMs running in their own windows.
Personally, while I feel it's a good idea for all the reasons you mentioned, I also feel that it requires quite a bit of extra cost on machines. I was just trying out Windows 7 over the weekend on VMware on a moderate machine (AMD X2 4600, 2 GB RAM), and I find that working in a VM can be a much worse experience than working on top of the real hardware.
At our shop, we pretty much use VMs for all development. One useful strategy we've employed to increase VM performance is to always run them on a high-speed external hard drive. Doing this makes them run incredibly fast, since VMs usually demand a lot of disk IO, as the prior post mentioned.
There are valid reasons to use VMs for development. However, if you're thinking of doing this just to standardize development environments across your organization, there are better ways to accomplish that (ie, having standard machine images).
In some cases, like doing SharePoint dev work, you are more or less required to work on a server, and I just don't like the idea of turning my laptop into a 2003/2008 server :-)
We have two VMware ESX boxes that host our dev machines, and it works great as long as people remember to switch off the images that are not in use. Another advantage is that we have a complete network of ESX images in their own domain, which gives us the ability to do a lot of fun stuff :-)
Start with some developers and try to gather some actual data about productivity change.
I'm looking for a way to give out preview or demo versions of our software to our customers as easy as possible.
The software we are currently developing is a pretty big project. It consists of a client environment, an application server, various databases, web services host etc.
The project is developed incrementally and we want to ship the bits in intervals of one to two months. The first deliveries will not be used in production; their purpose is to serve as a demo and encourage the customers to give feedback.
We don't want to put burden on the customers to install and configure the system. All in all we are looking for a way to ease the deployment, installation and configuration pain.
What I thought of was to use a virtualization technique to preinstall and preconfigure a virtual machine with all the components that are necessary. Our customers would just have to mount the virtual image and run the application.
I would like to hear from folks who use this technique. I suppose there are some difficulties as well; in particular, what about licensing issues with the installed OS?
Perhaps it is possible to have the virtual machine expire after a certain period of time.
Any experiences out there?
Since you're looking at an entire application stack, you'll need to virtualize the entire server to provide your customers with a realistic demo experience. Thinstall is great for single apps, but not an entire stack....
Microsoft has licensing schemes for this type of situation; since it's only being used for demonstration purposes and not production use, a TechNet subscription might just cover you. Give your local Microsoft licensing centre a call to discuss - unlike the offshore support teams, they're really helpful and friendly.
For running the 'stack' with the least overhead for your clients, I suggest using VMware. The customers can download the free VMware player, load up the machines (or multiple machines) and get a feel for the system... Microsoft Virtual PC or Virtual Server is going to be a bit more intrusive and not quite the "plug n play" solution that you're looking for.
If you're only looking to ship the application, consider either thinstall or providing Citrix / Terminal services access - customers can remotely login to your own (test) machines and run what they need.
Personally, if it's doable, a standalone system would be best - tell your customers to install VMware Player, then run this app... which launches the various parts of your application stack (maybe off a DVD), and you've got a fully self-contained demo for the marketing guys to pimp out :)
You should take a look at Thinstall (it has been bought by VMware and is called ThinApp now); it's an application virtualizer.
It seems that you're trying to accomplish several competing goals:
"Give" the customer something.
Simplify and ease the customer experience.
Ensure the various components coexist and interact happily.
Accommodate licensing restrictions, both yours and the OS vendor's.
Allow incremental and piecewise upgrades.
Can you achieve all of these by hosting the back end (database, web server, etc.) and providing your customers with a CD (or download) that contains the client? This will give them the "download/upgrade experience" that goes along with client software, without dealing with the complexity of administering the back end.
For a near plug-and-play experience, you might consider placing your demo on a live Linux or Windows CD. Note: you need a licensed copy of Windows for the latter.
Perhaps your "serious" customers might be able to request their own demo copies of the back end as well; they'd be more amenable to the additional work on their part.
As far as OS licenses go, if your vendor(s) of choice aren't helpful, you might consider free or open-source alternatives such as FreeDOS or Linux.
Depending on whether you can fit all the needed services into a single OS instance or not...
VMware ACE, or whatever they're calling it nowadays, will let you deliver single virtual machines under strict control, with forced updates, expiration and whatnot. But it sounds easier to just set up a demo environment and allow remote access to it.
The issue here I guess is getting several virtual machines to communicate under unknown circumstances - if one is not enough?
An idea then is to ship a physical server preconfigured with virtualisation and whatever amount of virtual servers needed to demonstrate the system.
Using trial versions of the operating system might be good enough for the licensing dilemma - at least Windows Server is testable for 60 days, extendable to 240 when registering.
Thinstall is great for single apps, but not an entire stack....
I haven't tried it yet, but with the new version of Thinstall you are able to let different thinstalled applications communicate.
But I guess you're right, a VMware image would be easier.
I'm currently working at a small web development company, we mostly do campaign sites and other promotional stuff. For our first year we've been using a "server" for sharing project files, a plain windows machine with a network share. But this isn't exactly future proof.
SVN is great for code (it's what we use now), but I want to have the comfort of versioning (or at least some form of syncing) for all or most of our files.
What I essentially want is something that does what subversion does for code, but for our documents/psd/pdf files.
I realize subversion handles binary files too, but I feel it might be a bit overkill for our purposes.
It doesn't necessarily need all the bells and whistles of a full version control system, but it should remove the need for incremental naming (Notes_1.23.doc) and lessen the chance of overwriting something by mistake.
It also needs to be multi-platform, handle large files (100 MB+) and be usable by somewhat non-technical people.
SVN is great for binaries, too. If you're afraid you can't compare revisions, I can tell you that it is possible for Word docs, using Tortoise.
But I don't know what you mean by "expanding the versioning"; SVN is not a document management system.
Edit:
but I feel it might be a bit overkill for our purposes
If you are already using SVN and it fulfils your purposes, why bother with a second system?
If you have a Windows 2003 server, you can have a look at SharePoint Services 3.0 (http://technet.microsoft.com/en-us/windowsserver/sharepoint/bb684453.aspx).
It can do version control for documents and has nice integration with Office, starting with Office XP, though Office 2003 and 2007 are better. Office and PDF files can be indexed (via the Adobe IFilter) and searched. You can also add IFilters to search metadata in your documents.
Regarding large files: by default the maximum file size is 50 MB, but it can be configured.
We've just moved over to Perforce and have been really happy with it. It's a commercial product, but it's so powerful and easy to use that it's worth the price per seat IMHO.
A decent folder structure and naming scheme?
VCSs don't really handle images and such very well - would it be possible to have the code in a VCS (SVN/Git/Mercurial, etc.), alongside a sensible folder structure for the binary assets (source photos, Photoshop PSD files, Illustrator files and so on)?
It wouldn't handle syncing, but a central file-server would achieve the same thing.
It would require some enforcing and kitten-herding to get people to name things properly, but I think having a version folder for each asset (like someproject/asset/header_logo/v01/header_logo_v01.psd) will basically be like a VCS, but easier for moving between different revisions (no vcs checkout blah -r 234 when a client decides they preferred v02 over v03).
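If you do go this route, a small helper script can take some of the kitten-herding out of it. This is just a sketch (the folder layout matches the example path above; the script name and details are made up):

    #!/usr/bin/env python
    # new_version.py - copy an asset into the next vNN folder, e.g.
    #   python new_version.py someproject/asset/header_logo header_logo.psd
    import os
    import re
    import shutil
    import sys

    asset_dir, source_file = sys.argv[1], sys.argv[2]

    # Find the existing vNN folders and pick the next number.
    versions = [int(m.group(1)) for m in
                (re.match(r"v(\d+)$", name) for name in os.listdir(asset_dir))
                if m]
    next_version = max(versions, default=0) + 1

    # Create e.g. someproject/asset/header_logo/v02/header_logo_v02.psd
    new_dir = os.path.join(asset_dir, "v%02d" % next_version)
    os.makedirs(new_dir)
    base, ext = os.path.splitext(os.path.basename(source_file))
    shutil.copy2(source_file,
                 os.path.join(new_dir, "%s_v%02d%s" % (base, next_version, ext)))
    print(new_dir)

Nothing fancy, but it keeps the naming consistent without anyone having to remember the scheme.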
Your question is interesting because you're specifying that it be suitable for a small office. At the enterprise level, I would recommend something along the lines of EMC Documentum's eRoom, but obviously that's going to be way more than you need, and more than you want cost-wise as well. I'm not sure of the licensing details on this, but I've heard that if your office has MS Office, you have access to SharePoint, which might work well for you. I'm also sure there are a lot of SaaS implementations of this kind of thing, so you may want to look at those, keeping in mind that the servers will not be hosted by you; so if the material is extremely sensitive, that's obviously not the proper route.
You might want to consider using a Mac as your server and using Time Machine to backup your shared folders. Doing this gives you automatic backups and allows you to share through Samba so everyone can have a network drive on their computer. A Mac server is probably overkill. A Mac Mini would do for a small office or a repurposed desktop machine.
You might also consider Amazon's S3 service to do offline backups. Since it's a pay-as-you-go service this can scale with use, and if you feel you want to move to something else you can always download your data and take it somewhere else.
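For what it's worth, scripting the S3 side of those backups is straightforward. A minimal sketch using the boto3 library (the bucket name and file names are placeholders):

    import boto3

    s3 = boto3.client("s3")  # credentials come from your AWS config/environment

    # Upload the backup archive (bucket and key names are placeholders).
    s3.upload_file("backups/office-backup.tar.gz", "my-office-backups",
                   "office-backup.tar.gz")

    # Pulling it back down later, if you ever move elsewhere, is just as simple.
    s3.download_file("my-office-backups", "office-backup.tar.gz",
                     "restore/office-backup.tar.gz")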
Windows Vista features local file versioning in its file system, which can be useful, but is limited in terms of teamwork. However, if somebody overwrites somebody else's file, a new version is stored as it should be.
Also consider KnowledgeTree. Have a look at it; some demos/screenshots are available at
http://www.knowledgetree.com/
It has a free, open-source Community Edition, so it's cost-effective. We haven't tried it yet, but we chose this one over other systems for a small business looking for a document versioning solution.