As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I have been using Mac OS X, and since the beginning I have had problems with the memory consumption of Java IDEs. I have tried NetBeans, Eclipse, and IntelliJ, and tried to configure JVM settings (especially for Eclipse), but the memory problem remains.
Basically, although I am not doing any compiling or building, keeping a single project open makes my IDE consume more than 750 MB of RAM. This is the same for all the IDEs I listed above, and customizing the .ini files makes little impact.
Are there any low-memory IDEs around? Or something written only for the Mac, which might therefore handle RAM in a better way?
A smart IDE needs to index all of your project and SDK files to provide code completion and other intelligent features. The index needs to be stored somewhere, so there is always a tradeoff between intelligence, performance, and memory consumption.
If an IDE chooses to minimize memory usage, it has to store its caches on disk and load them whenever you invoke a feature that needs that data. You then get a delay every time you use such a feature, which slows down editing and is unacceptable.
Of course you want your IDE to be fast. To achieve this, it needs to keep most of its caches and indexes loaded in memory at all times, but then you'll see higher memory usage.
Whether you like it or not, most modern applications prefer to be fast and consume more RAM rather than slow and frugal. A Chrome browser with five open tabs will consume more memory than your IDE.
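This tradeoff is easy to feel directly. Here is a minimal sketch in Python (the "symbol index", its size, and the lookup pattern are all made up for illustration, not taken from any real IDE): the same index is queried once from an in-memory dict and once from an on-disk shelve database.

```python
import os
import shelve
import tempfile
import time

# Toy "symbol index" standing in for an IDE's code-completion cache.
# All names and sizes here are illustrative, not from a real IDE.
index = {f"symbol_{i}": i for i in range(10_000)}

# Persist the same index to disk.
path = os.path.join(tempfile.mkdtemp(), "index")
with shelve.open(path) as db:
    db.update(index)

keys = [f"symbol_{i}" for i in range(0, 10_000, 100)]

# In-memory lookups: fast, but the whole index occupies RAM.
t0 = time.perf_counter()
for k in keys:
    _ = index[k]
mem_time = time.perf_counter() - t0

# On-disk lookups: little RAM, but every hit pays for I/O and unpickling.
with shelve.open(path) as db:
    t0 = time.perf_counter()
    for k in keys:
        _ = db[k]
    disk_time = time.perf_counter() - t0

print(f"in-memory: {mem_time:.6f}s  on-disk: {disk_time:.6f}s")
```

On any machine, the on-disk lookups come out slower: that gap is exactly what an IDE avoids by keeping its indexes resident in RAM.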
Memory is a cheap resource (unless you have old hardware that you can't upgrade with more RAM, such as a laptop). Developers normally do not skimp on hardware: to be productive with an IDE, they use systems with at least 8 GB of RAM, and developers working with Java and application servers can have even more. 24 GB costs roughly $100-150 now.
Would you save on RAM and then look for a slow or feature-limited IDE that can run on your system? Or would you buy better hardware, forget about this resource for several years, and enjoy an IDE with more features and speed?
You may be better off with a text editor like Sublime Text: http://www.sublimetext.com/
Or, if you want to be old school, vi or Emacs.
You can reduce Eclipse's memory usage by tweaking its settings and removing things you don't need. In particular, don't run the EE edition, disable spell checking, and keep your workspace tidy (one project per workspace).
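You can also cap the JVM heap directly in eclipse.ini. The exact numbers below are only a hypothetical starting point for an older (Java 7-era) Eclipse, not official recommendations; tune them against your own projects:

```ini
-vmargs
-Xms256m
-Xmx512m
-XX:MaxPermSize=128m
```

Lowering -Xmx trades the risk of OutOfMemoryError on large workspaces for a smaller resident footprint, so shrink it gradually.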
Closed 9 years ago.
For the last few years I have been developing a 2D video game, which would most likely fall under the category of a single-player RPG, with a post-release goal of adding multiplayer. My goals have always been very realistic: achieve small chunks of progress before anything too serious.
My 2D game requires a hefty amount of assets (artwork, primarily images, some of which are very large). While most of the last few years went into the artwork, for the last year I have been sharpening my programming skills, learning about game engineering, and building small games in various languages and frameworks/engines. After much work with different frameworks/engines, I realized I need (or at least want) to squeeze out every last bit of performance I can. My game will most likely take 1-2 GB of RAM at maximum, with at least 4-6 GB of HDD space; with content expansions I'd expect it to grow quite large, at least on the HDD. This is why I, very unfortunately, do not believe HTML5 is ready for my game. Otherwise I would choose it, buy specific (highly rated) books, and start work right away.
I started my project with XNA Game Studio for primarily two reasons: a hefty bulk of tutorials at XNA GPA Tutorials to give me an easier start, and the ability to port to a secondary platform besides my primary target of Windows (the Xbox). Soon after some development, I ran into a few issues with the framework: the content pipeline and its poor compression into XNA's .xnb format, and the severe size limits on Xbox Live Arcade games. The XNA GPA tutorials are nice, but I realized they offer nothing so special that I couldn't build my game with another framework.
SDL had amazing performance, so I started developing my game with it. Progress was quite nice, and I did not run into any problems. However, given the way SDL is advertised (talked about), I had assumed it used OpenGL by default rather than software rendering. I implemented some OpenGL, but read online that SDL's OpenGL integration was a bit rough, and that SFML is a much better choice for those who do not mind that it is higher level.
Off to SFML I went! And after a few days, it went straight into the trash. Reading about Intel graphics rendering problems, having to abandon 1.6 due to an unfixable ATI glitch, trawling through Google to find a custom nightly build of 2.0 that works with MinGW 4.7.1, learning that I am not the only person skeptical of SFML's quality as a whole, plus some counterintuitive API changes from 1.6 to 2.0: I just wanted to give up in frustration.
So I am back to SDL, with very little developed. It is not that I want to use SDL, but that I cannot find a framework suitable for my game that is better than SDL. I fell in love with HTML5 and was very interested in the Isogenic Engine (HTML5), but I severely doubt HTML5 is capable of handling my large (and numerous) sprite sheets and asset-heavy 2D game.
Portability is important to me, but not a requirement. I figure I might as well start programming my game for Windows only, and once it is finished port it elsewhere if I want to. The best piece of advice I've ever received was, "It doesn't matter what you use. Just start, and do it. Having a completed game is more important than having a high-performance piece of incomplete software."
However, I have heard from many that it would resolve a lot of headaches and some extra heavy lifting if I picked a library/framework/engine that handles networking at a higher level than what I'd have to implement myself with SDL.
I've researched game engines for years now, so I am really looking for convincing reasons to pick [your suggestions] as opposed to a simple, "I haven't used it, but..."
I want to be convinced away from SDL and onto something higher level, but I am having trouble finding reasons to choose differently. There are hundreds of game engines on Wikipedia's list, and many more not listed, but sorting through the chaff is exhausting, especially when I have already tried so many and been quite disappointed with the results.
Think of my game as a high-resolution image version of Baldur's Gate 2, Sanitarium, Diablo 2, or Ultima Online. Not in gameplay, but in the fact that it is built entirely from 2D images, which means a distinctive art style, chunks of RAM usage, tons of HDD space, and a few quite large sprite sheets/image sequences (dragons, for example) among many very small ones (7 MB total HDD space for a human character with 15 animations across 8 directions, and under 30 MB memory usage for the sheets).
Also, thank you for letting me vent while I ask for detailed suggestions.
I have the same problem as you: I really want to find the right framework. I tried a lot of them, and I think I found it. It's Polycode: http://polycode.org/ http://polycode.tumblr.com/
But it's not finished (the IDE, that is; the library is usable), and the developer says it will be by the end of January. I have started developing my own just in case Polycode disappoints me.
Closed 10 years ago.
I am compiling various lists of competencies that self-taught programmers must have.
Among all subjects, Operating Systems is the trickiest one, because creating even a toy operating system is a rather non-trivial task. At the same time, an application developer (who may not have formally studied CS) must at least be aware of, and hopefully should have implemented, some key concepts in order to appreciate how an OS works and to be a better developer.
I have a few specific questions:
What key concepts of operating systems are important for a self taught programmer to understand so they can be better software developers (albeit working on regular application development)?
Is it even remotely possible to learn such a subject in byte-sized practical pieces? (Even a subject like compiler construction can be learned in a hands-on way, at a rather low level of complexity.)
I would suggest reading Andrew S. Tanenbaum's (http://en.wikipedia.org/wiki/Andrew_S._Tanenbaum) book Modern Operating Systems (ISBN 978-0-13-600663-3), as everything is there.
However, from the book's index we can identify the minimum key topics:
Processes
Memory management
File systems
Input/output
And the easiest way to start playing with these topics is to download MINIX:
http://www.minix3.org/
and study the code. Older versions of this operating system might be easier to understand.
Another useful resource is Mike Saunders' How to write a simple operating system, which shows you how to write and build your first operating system in x86 assembly language:
http://mikeos.sourceforge.net/write-your-own-os.html
Every OS designer should understand the concepts behind Multics. One of its most brilliant ideas is the notion of a vast virtual memory partitioned into directly readable and writable segments with full protection, with multiprocessor support to boot; with 64-bit pointers, we have enough bits to address everything on the planet directly. These ideas are from the 1960s, yet timeless IMHO.
The apparent loss of such knowledge got us "Eunuchs", now instantiated as Unix and then Linux, and an equally poor design from Microsoft, both of which organize the world as a flat process space and files. Those who don't know history are doomed to doing something dumber.
Do anything you can to get a copy of Organick's book on Multics, and read it, cover to cover. (Elliott I. Organick, The Multics System: An Examination of Its Structure).
The Wikipedia site has some good information; Corbató's papers are great.
I believe it depends on the type of application you are developing and the OS platform you are developing for. For example, if you are developing a website you don't need to know much about the OS; in that case you need to know more about your web server. There are different things you need to know when working on Windows, Linux, Android, or some embedded system, and sometimes you need to know nothing beyond what your API provides. In general, it is always good for a developer or CS person to know the following:
What lies in the responsibility of the application, the toolchain, and then the OS.
Inter-process communication and the different IPC mechanisms the OS's system calls provide.
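As a minimal illustration of one such IPC mechanism, here is a parent and child talking over an anonymous pipe. This sketch assumes a POSIX system (os.fork is not available on Windows), and the "ping" payload and exit codes are made up for the example:

```python
import os

# One classic IPC mechanism: an anonymous pipe between parent and child.
# The kernel buffers the bytes in between the two processes.
r, w = os.pipe()
pid = os.fork()

if pid == 0:
    # Child: close the unused write end, read what the parent sent.
    os.close(w)
    msg = os.read(r, 1024)
    os.close(r)
    os._exit(0 if msg == b"ping" else 1)
else:
    # Parent: close the unused read end, send a message, reap the child.
    os.close(r)
    os.write(w, b"ping")
    os.close(w)
    _, status = os.waitpid(pid, 0)
    print("child exit code:", os.WEXITSTATUS(status))
```

Pipes, sockets, shared memory, and signals all follow this same pattern: a kernel-managed object shared between processes that otherwise cannot see each other's memory.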
The OS is quite an interesting subject, but it consists mostly of theory, and that theory only really comes into action when you work on embedded systems. In average desktop applications you don't see where all of it fits in.
OK, operating system concepts that a good programmer should be aware of:
Practically speaking, if you are writing in a cross-OS language and are not concerned about performance: none.
If you do care about performance:
The cost of user/kernel transitions.
How the OS handles locking/threads/deadlocks, and how to best use them.
Virtual memory/paging/thrashing and the cost thereof.
Memory allocation: how the OS does it, and how to take advantage of that, i.e. when to use the OS allocator (see 1) and when to allocate a large block from the OS and sub-allocate it yourself.
As mentioned earlier, process creation and inter-process communication.
How the OS reads from and writes to disk by default, and how to read/write optimally (see why databases use B-trees).
Bonus, below the OS: what cache sizes and cache lines can mean for performance.
But generally it boils down to: what does the OS provide you that isn't generic, what does it cost and why, and what will cost too much (too much CPU, too much disk usage, too much I/O, too much network, etc.).
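The first item on that list, the cost of crossing from user space into the kernel, can be felt with a toy benchmark. This sketch (the call counts are arbitrary) compares a pure user-space operation against os.stat, which issues a stat(2) system call on every invocation:

```python
import os
import timeit

# Compare a pure user-space operation against one that enters the kernel.
def user_space():
    return 1 + 1

def kernel_crossing():
    return os.stat(".")   # stat(2): a real user/kernel transition

n = 50_000
t_user = timeit.timeit(user_space, number=n) / n
t_kernel = timeit.timeit(kernel_crossing, number=n) / n

print(f"user-space call : {t_user * 1e9:9.1f} ns")
print(f"kernel crossing : {t_kernel * 1e9:9.1f} ns")
```

The kernel crossing is consistently an order of magnitude or more slower, which is why batching syscalls (buffered I/O, readv/writev, larger reads) pays off.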
Well, that depends on the needs of the developer. For example:
Point. Applications such as web browsers and email tools are performing an increasingly important role in modern desktop computer systems. To fulfill this role, they should be incorporated as part of the operating system. By doing so, they can provide better performance and better integration with the rest of the system. In addition, these important applications can have the same look-and-feel as the operating system software.
Counterpoint. The fundamental role of the operating system is to manage system resources such as the CPU, memory, I/O devices, etc. In addition, its role is to run software applications such as web browsers and email applications. By incorporating such applications into the operating system, we burden the operating system with additional functionality. Such a burden may result in the operating system performing a less-than-satisfactory job at managing system resources. In addition, we increase the size of the operating system, thereby increasing the likelihood of system crashes and security violations.
Also, there are many other important points one must understand to get a better grip on operating systems, like multithreading, multitasking, virtual memory, demand paging, memory management, processor management, and more.
I would start with What Every Programmer Should Know About Memory. (Not completely OS, but all of it is useful information. And chapter 4 covers virtual memory, which is the first thing that came to mind reading your question.)
To learn the rest piecemeal, pick any system call and learn exactly what it does. This will often mean learning about the kernel objects it manipulates.
Of course, the details will differ from OS to OS... But so does the answer to your question.
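Following that advice, here is what exploring one system call family might look like (Python shown; the scratch-file path is made up, and on Linux you can run the same script under strace to watch the raw calls):

```python
import os
import tempfile

# Explore one system call family: open(2), write(2), read(2), close(2).
# The kernel object being manipulated is an entry in the process's
# file descriptor table; the path below is just a scratch file.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello, kernel\n")   # write(2): bytes go to the page cache
os.close(fd)                       # close(2): releases the descriptor

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)             # read(2): copies the bytes back out
os.close(fd)
print(data)
```

Tracing even this small program leads naturally to the kernel objects involved: descriptors, inodes, and the page cache.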
Simply put:
Threads and processes.
Kernel space/threads vs. user space/threads (probably some kernel-level programming).
Followed by the very fundamental concepts of process deadlocks.
And thereafter monitors vs. semaphores vs. mutexes.
How memory works and talks to processes and devices.
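A quick sketch of the mutex item from that list, with Python's threading.Lock playing the role of the mutex (the thread and iteration counts are arbitrary):

```python
import threading

# Two threads increment a shared counter. The read-modify-write in
# "counter += 1" is not atomic, so a mutex (threading.Lock) guards it.
counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:           # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000: the lock makes the result deterministic
```

Remove the lock and the final count can fall short on some runs, which is exactly the race condition that mutexes, semaphores, and monitors exist to prevent.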
Every self-taught programmer and computer scientist alike should know the OSI model, and know it well. It helps to identify where a problem could lie and whom to contact when there are problems: the scope is defined there, and many issues can be filtered out at that stage.
This is because there is simply too much in an operating system to learn it all. As a web developer I usually work at the application level; whenever an issue goes outside that scope, I know I need help. Also, many people simply do not care about certain components; they want to create things as quickly as possible. The OSI model is a place where someone can find their spot in the stack.
http://en.wikipedia.org/wiki/OSI_model
Closed 11 years ago.
Took about a two-week break from development and came back this week to hit it hard again. Xcode is CRAWLING. It takes FOREVER to open (5+ minutes), and then every time I click on anything in the GUI the spinning color wheel appears for 2+ minutes. I tried deleting my .xcodeproj file with no effect. I've been reading various posts around the net but can't find anything that improves my situation. Anyone have any ideas? Thanks.
While Justin provided a really good list to start with, I'd like to add something that I think can instantly improve your Xcode and Mac OS X performance (after some re-installation effort, though):
get an upgrade to a good, fast SSD drive!
From my friends' experience it gives better results than an upgrade to 8 GB of RAM. However, a combo of 8 GB and a fast SSD is the best option to go with.
Just to check, I would use Time Machine, go back in time a while, and then try Xcode again. If this happens with all projects, even a new one, then something is certainly wrong in general. Check your hard drive and other programs, and check with Activity Monitor whether Xcode is really doing something.
Anyway, I am currently working on an app with about 200k lines and about a 15 MB target size. I can use Xcode 4.2 without any delay.
P.S. Once before, with Xcode 4.0, I experienced something similar, though with a much smaller app. Even after working with Apple support, it turned out that for some reason my installation was corrupt and could not be cleaned up. I had to install another version of Xcode, and then everything was OK again.
Some things to help you get started:
Stick with the official releases (read: no betas).
Make sure you have enough RAM (8 GB with Xcode 4 is a good minimum for me) and a fast drive.
Revert to 10.6.
Revert to Xcode 3.
Reduce project interdependencies.
Reduce header dependencies (faster indexing and compile times).
Disable Live Issues (Xcode 4).
Clear your live search fields (this alone can block the UI and builds for minutes).
File bugs.
Try running it in 32-bit mode (garbage collection consumes a lot of time, and 32-bit reduces the total amount of memory required).
Provide more details of what Xcode is doing that requires so much time (take a sample).
That's not normal. Sample it and find out where it's spending all that time.
Closed 11 years ago.
What's the best operating system to study in order to write your own x86 operating system from scratch?
I think Minix was created for pretty much that exact purpose.
Have fun!
It might be difficult to comprehend the source for an entire OS all at once. The tutorials over at osdev.org have a few "bare bones" code samples to get you started.
I just wrote my own version of an x86 kernel from scratch (for my OS class project), and it was an experience I couldn't possibly describe. You can find valuable resources at the link above.
For my OS class in college we used the Nachos OS project and implemented it. I did the C++ version; I think there is also a Java port. I remember it being very interesting, and I learned a great deal even though it was a lot of work.
It all depends on how you want your operating system to function. If you want a microkernel you should probably study MINIX 3; if you want a monolithic kernel, the current Linux kernel is a good place to start (hint: look in arch/x86/boot, there is some very interesting code in there).
However I personally think that you should read through the Intel and AMD manuals, and then do a bit of reading on the osdev.org forums and wiki. They have plenty of code to study, and are generally helpful towards newbies.
Honestly, you should probably not start with the x86 architecture, or even with operating systems, but with something like an 8-bit starter kit, such as a basic Fox11 development kit. In college, I wrote my first (and only) OS in assembly for an M68HC11 processor (the one in the kit).
If you really want to build your own OS from scratch, you've got a long road ahead of you.
I think the best way is to read many different operating system sources: definitely the osdev bare-bones tutorials, white papers on OS research, and documentation on your target hardware.
I personally would recommend looking at the L4Ka::Pistachio kernel, written in pretty darn good C++. There are also multiple smaller projects definitely worth checking out, like jimix or pedigree.
It's best to stick around the osdev forums and wiki; a lot of questions are already answered there. See http://forum.osdev.org and http://wiki.osdev.org
I read this article a while back; you might find it interesting. This guy wrote MINIX back in the day for the very purpose of teaching OS concepts, so it would probably be a good, simple OS to study. http://www.cs.vu.nl/~ast/brown/
However, as Martin and Cory mentioned, it's a big chunk to chew.
There is not much point in studying obsolete OSes, which is pretty much all current OSes, as they tend to have long lives. Have a look at some fresh ideas (although based on the tried and true), like Singularity.
Closed 10 years ago.
What is the average time it would take a complete novice, whose background is mostly Windows XP, to go through the FreeBSD Handbook and gain sufficient mastery to set up a server from the ground up?
It's impossible to say. Not only is it highly dependent upon what sort of person you are, but it also depends on what exactly you are doing and how you define "sufficient mastery". Being able to get Apache operational is a simple matter of following step-by-step tutorials, you could do that in a matter of hours. Being able to run a multi-user server competently takes a hell of a lot longer, and the handbook isn't nearly enough.
It would depend on how much knowledge you have of Unix, and from the sounds of things, you probably do not have a whole lot.
Assuming you have little knowledge of Unix at all, I would say it will probably take a few days to get a grasp of what is going on, and possibly a week to have something working.
The FreeBSD handbook is pretty detailed though, and does provide you with a good grounding of everything you need to do to get things to work.
I know that this sounds like an awful lot of time, but in my experience, they really are quite different OS paradigms.
You could start with PC-BSD (an easy-to-use distribution) to get a feel for BSD and then move on to more advanced things like setting up servers.
As others have noted, configuring a service to do a couple of things isn't very hard; you just have to follow some steps (which any monkey could do), but if you want more, you'll need extra time. A competent sysadmin knows not only the how but also the why. Grandma can click around all day in Windows, and even though Windows Server has a GUI for server administration, that doesn't mean she can configure IIS or the DHCP service. By the way, it would be a good idea to learn a Unix editor, preferably vi, since it's the standard on the BSDs; emacs, joe, and pico are nice too, but they aren't as standard.
As for the time, it took about two days for me to configure a server. But I had previous Linux experience and the server didn't do anything fancy.
Look, if you've never touched a Unix platform, you will have to learn a lot of things, basically a different philosophy. The FreeBSD Handbook and the community are simply wonderful, but a reference book like the FreeBSD Handbook contains a lot of information that you must work through yourself.
Also, the BSD platform is not the easiest member of the Unix family to start with from zero.
Good sources to learn:
Absolute BSD book.
The Complete FreeBSD book (it covers Release 5, but it's good for learning as well).
Man pages. The BSDs' man pages are a LOT better than the Linux ones.
FreeBSD Handbook.
FreeBSD forums: forums.freebsd.org and daemonforums.
Any Unix/Linux resource you can get your hands on. Many things are compatible (or near-compatible). E.g., if your friend tells you, "I found an old SGI IRIX / HP-UX (or insert other Unix here) manual that I'm going to throw in the trash can," stop them and see what you can learn from it.
Keep in mind that you've got a long road ahead. But you'll enjoy it.
Depends on your reading speed :-)
Depends on your needs (I mean: what kind of server).
Once upon a time I did this (installing FreeBSD on x86, although I had some Linux knowledge already at that time), and it took me 3 hours, mainly because I was working on another machine in parallel.
Depends on your background: did you ever use PowerShell or other command-line "applications" (like batch files)? For me, one of the greatest challenges was switching from a completely GUI'd operating system to an operating system that works best with a shell (something a little bit like the DOS prompt). But the moment you get the hang of it, you'll be fine again.
Another aspect is the availability of a second computer beside the one you are setting up. If you can do web searches for additional information while in the midst of doing an install, it can save a lot of time.
As for the original topic: I've used Linux and Unix extensively, but have yet to get FreeBSD working after several tries over many years. I'd always get frustrated before I could get it fully installed and configured with a nice graphical desktop. (So personality obviously matters.) But it has been about two years since I last tried, and it may be simpler now...
Please do not consider this a flame against FreeBSD... just a true story that for some reason I couldn't seem to make it work. If it were not a good OS, I wouldn't have attempted so many times.
If you're coming from a primarily Windows background, I think FreeBSD would be a great way to dive into UNIX, but you may also want to check out Ubuntu Linux, specifically Ubuntu Server.
Got a spare Pentium 4-based system lying around at home? Burn yourself a CD and go to it.
As a fan of FreeBSD myself, I have to second the recommendation for the "Absolute FreeBSD" book above; another book worth a look is "Building a Server with FreeBSD 7."
My original rationale for choosing FreeBSD was getting better control over what gets installed: I was really tired of installing Red Hat and/or SuSE and ending up with a few gigabytes of stuff I wasn't going to use as part of the base install, stuff that wasn't easily removed after the fact. I've grown rather enamored with the BSD way of doing things, but it isn't necessarily for everyone.
Something to consider: if you have the hardware, run VMware or VirtualBox and set up a few virtual machines to get used to various distributions before committing to installing a particular one on bare hardware.