Use of learning computer system architecture? [closed] - cpu-architecture

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I have it in my course and I don't understand the practical use of learning it.

This isn't really the forum for ultra-novice questions, or, I apologize, frankly "stupid" questions like that, but I'll give it a go and answer you, hoping that understanding the "why" will fuel your motivation to learn what looks like an unnecessary class, and maybe one day keep you from making your company's sysadmin's life miserable.
Plain and simple: you need to know hardware to not program crap software. And I don't mean you need to know what GPU is recommended by Tom's Hardware, or that your machine has 8GB of RAM and a sweet SSD.
You need to know endianness (https://en.wikipedia.org/wiki/Endianness) to program efficiently. You need to know core counts to run a self-managing, self-scaling Node.js clustered server, and which onboard codecs run best for a given configuration. You need to know the same things to create a valid VirtualBox or VMware machine. You need to know the lowest common denominator so that you know how your RAID array is going to perform.
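Even a quick sanity check of the box you're sitting at tells you two of those things in one go; here's a minimal Python sketch (use whatever your own stack offers):

    import os
    import sys

    # Byte order of the host CPU: 'little' on x86/x86-64, 'big' on some other architectures.
    print("endianness:", sys.byteorder)

    # Logical core count, e.g. for sizing a worker pool or a Node.js-style cluster.
    print("cores:", os.cpu_count())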
You need to know how many write cycles SSDs can handle before breaking down, so you don't botch a company's entire data repository because "you thought Samsung SSDs were good!"
You need to know about platter space so you can keep things running nicely, and about TRIM so the SSD on that old "Vista" machine they won't get rid of doesn't fill up with untrimmed data.
You need to be able to go into Bob Joe Company, who are still running Windows XP, and tell them where they can spend their dollars in the most efficient and safe way.
You need to know input/output, you need to understand the differences between VRAM and physical RAM, clock speeds, overclocking, latency, cycles, Hyper Threading, onboard/discrete hardware. You need to know why many things once existed to know why they are done the way they are today.
For example, when a horrendous update on your physical machine blows everything up, you need to know that in a FUBAR situation you can jumper the CMOS and start over (that's a physical hardware jumper that many level II and III techs often don't even have a clue about; guess where I learned it before ever being in IT? "Intro to Computers - Hardware").
"Unlearn" while you're in school and you'll find there's much you did not "learn" as you thought you did the first time around.

If you don't get pumped up over hardware.... software is not for you bro.
If you're not frequenting Tom's and the ][-][ (or, even worse, if you don't know what those mean), you're probably wasting your time. I'm not saying you can't be a software programmer, but you should at least read the article below and reconsider why you chose "programming" as a major.
(http://blog.codinghorror.com/please-dont-learn-to-code/)
That said, I can tell you this: I failed out of software at ITT Tech, but I never stopped attending. I changed majors to "Multimedia", which included some Adobe CS3 and some JavaScript for the web, finished in IT-Multimedia with honors, and only a few years ago (about 3-4 years after graduating) started to pick up some coding in my spare time. I found that I really enjoyed it, and you might too, so I'm not discouraging you by any means. But read the page linked above and ask yourself if you're really meant to be happy as an IT worker coding software. Maybe find something you're going to grasp to get your feet wet first, and take some low-cost coding classes to get the idea of OOP, like "CodeSchool.com" ($40/mo, but lots more content) or "CodeCademy.com" (free, but not as structured or thorough as CodeSchool). You may find that you want to code, but it's going to take some time and practice to get ready for true development.
I hope this helps, man. I'd hate to see you make my job harder by forcing yourself into a career you're not meant for, or not ready for. But I'd also genuinely like to see people be happy with more than just a decent paycheck. :)
http://www.CodeSchool.com $40/mo
http://www.CodeCademy.com FREE - less content, slower to update

Related

Siemens PLC programming best practices [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don't allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 4 years ago.
My question is pretty simple. Is there any useful place for learning to work with Siemens PLCs?
Full Disclosure:
I was a Software Engineer for Rockwell Automation working with their A|B PLCs
You probably won't like my answer
To put it plainly, programming PLCs, whether you're using Ladder Logic, Structured Text, Instruction List, Sequential Function Chart, FBD, or Continuous Flow Chart, isn't the same as programming software in a language like C++, Java, JavaScript, etc.
Simply put, there is not one set of "best practices" that fits every use case. The reason is that, unlike standard software development, where you can apply principles like SOLID to always make your code easier to read, maintain, and extend, PLC programs are tied to a very real physical process and physical machinery. Often what you find in the industry is that every plant/manufacturer/facility establishes its own set of best practices to suit its needs and process.
To give an example:
Scenario 1:
The logic used to run the distillation process for a small local brewery may include sub-routines or even a loop. They may allow five or fewer warnings in their code, and allow a few unused tags. That is totally fine, because they are making beer: the process isn't critical, a bad batch won't kill anyone, and they only have 2 pumps that they're using the logic to iterate over. So if there is a problem that needs troubleshooting, the logic in the sub-routines or the loop won't be too much of a headache.
Scenario 2:
I am a global pharmaceutical company producing hundreds of millions of life-critical drugs each year (say insulin). Now my logic has zero sub-routines, no looping, zero tolerance for errors or warnings, and absolutely no unused tags. Why? Because I am in a highly regulated industry, and if there is an issue with one of my products, people may die. And why no sub-routines or looping? Because I am a huge company with hundreds of pumps, mixers, etc. When one of those pieces of equipment goes down, I don't want to dig through some horrible looping logic that is responsible for hundreds of pumps. I want to look at one select piece of the logic that I can quickly understand, correct, and get my line back up and operating.
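To make the structural difference between the two scenarios concrete, here is a rough illustration in plain Python rather than ladder logic or Structured Text; the pump names and tags are made up, and real PLC code would look quite different:

    # Illustration only -- not PLC code. Pump names and I/O tags are hypothetical.
    pumps = {"P101": {"run_cmd": True, "feedback": True},
             "P102": {"run_cmd": True, "feedback": False}}

    # Scenario 1 style: loop over a small number of pumps with shared logic.
    def check_pumps_looped(pumps):
        faults = []
        for name, io in pumps.items():
            if io["run_cmd"] and not io["feedback"]:
                faults.append(name)
        return faults

    # Scenario 2 style: one explicit, self-contained block per pump,
    # so troubleshooting one device never means reading shared loop logic.
    def check_pump_P101(io):
        return io["run_cmd"] and not io["feedback"]

    def check_pump_P102(io):
        return io["run_cmd"] and not io["feedback"]

    print(check_pumps_looped(pumps))          # ['P102']
    print(check_pump_P102(pumps["P102"]))     # True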
I am sure you can find some articles or courses out there (like the one you already took) that explain some basic "best practices", but in the real world you will need to adapt your logic to every individual scenario in order to achieve the best outcome. That is my humble two cents on the matter; best of luck to you!
Udemy - there are some courses there, though I haven't tried them myself.
I've watched lots of useful videos on YouTube.
http://www.plcdev.com/siemens_simatic_step_7_programmers_handbook - quite old, but could be useful.
Siemens forums, official manuals, guides. There is lots of info there, quality varies sometimes, but mostly good.
BTW, a nice thing about Siemens is that you can often look up things just by searching the web. That is not the case for some other PLCs...
Good luck!
If you already work in a factory, read the code that runs in the PLCs, and start modifying it if needed. That's how I started; I was initially a lowly automation guy who pulled cables, changed broken sensors, etc.
If you don't, and you need a break into the field, then as an ordinary tech worker the path usually goes through electrician or automation engineer. As an entrepreneur/independent contractor, I have seen people just do it: win a contract for some public company's request, do the schematics, write the code, do the electrical installation all by themselves, or do parts of it with other contractors. You need previous experience to pull that off.
As for some practices:
If you are modifying existing code, always use the existing style, existing functions, and existing blocks.
Do not use programming patterns from the ordinary IT world in low-level PLC code, or use them with caution. The reason is that your code probably has to live for years and years, and has to be debuggable. Patterns usually add layers of complexity, and complexity leads to harder debugging. In the automation world it's usually better to debug stuff that's closer to the hardware.
If you are starting a project where you have tens or hundreds of sensors/motors/actuators, start using reusable blocks.
All best practices are learned in the field; sadly there's no other way. I know it's kind of a catch-22 sometimes: you need work to get experience, and experience to get work. I entered the automation world, and later the IT world, the same way: get a job at the low end (maintenance guy or junior IT developer), gather experience, and in a year or two you will be at mid-level.
And don't lose sight of these constraints while you're programming a PLC:
PLC programming is very low-level programming
memory size matters; every byte counts
logic has to be concise and as short as possible: sometimes you have to be good at math!
the machine you're working on is dangerous and can cause damage to product, equipment, or people
the machine you're working on is expensive and is built to produce for years
It's the same as in computer programming: each programmer has their own way to program; there's no single truth. Sometimes you'll find interesting existing code: don't hesitate to re-use it if it looks smarter and is more efficient.
Find your own way, and keep in mind that the machine you're working on is dangerous for you and the people walking around it (it's not always the case, but it's important to keep in mind while programming).
And above all, don't forget the first rule of industrial automation: if it runs correctly, don't touch it!

Best way to organize bioinformatics projects? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I come from a computer science background, but I am now doing genomics.
My projects involve a lot of bioinformatics, typically: aligning sequences, comparing overlap between sequences and various genome annotation features, data from different classes of biological samples, time-course data, microarray and high-throughput sequencing data ("next-generation" sequencing, though it's really the current generation), that kind of stuff.
The workflow for this kind of analysis is quite different from what I experienced during my computer science studies: no UML and thoughtfully designed objects shining with sublime elegance, no version management, no proper documentation (often no documentation at all), no software engineering at all.
Instead, what everyone does in this field is hacking out one Perl-script or AWK-one-liner after the other, usually for one-time usage.
I think the reason is that the input data and formats change so fast, the questions need to be answered so soon (deadlines!), that there seems to be no time for project organization.
One example to illustrate this: let's say you want to write a raytracer. You would probably put a lot of effort into the software engineering first, then program it, finally in some highly optimized form, because you would use the raytracer countless times with different input data and would make changes to the source code over a period of years. So good software engineering is paramount when coding a serious raytracer from scratch. But imagine you want to write a raytracer where you already know that you will use it to render one single picture, ever, and that picture is of a reflecting sphere over a checkered floor. In this case you would just hack it together somehow. Bioinformatics is all about the latter case.
You end up with whole directory trees containing the same information in different formats, until you have reached the one particular format necessary for the next step, and dozens of files with names like "tmp_SNP_cancer_34521_unique_IDs_not_Chimp.csv" where you don't have the slightest idea one day later why you created the file and what exactly it is.
For a while I was using MySQL which helped, but now the speed in which new data is generated and changes formats is such that it is not possible to do proper database design.
I am aware of one single publication which deals with these issues (Noble, W. S. (2009, July). A quick guide to organizing computational biology projects. PLoS Comput Biol 5 (7), e1000424+). The author sums the goal up quite nicely:
The core guiding principle is simple: Someone unfamiliar with your project should be able to look at your computer files and understand in detail what you did and why.
Well, that's what I want, too! But I am following the same practices as that author already, and I feel it is absolutely insufficient.
Documenting each and every command you issue in Bash, commenting it with why exactly you did it, etc., is just tedious and error-prone. The steps during the workflow are just too fine-grained. Even if you do it, it can be still an extremely tedious task to figure out what each file was for, and at which point a particular workflow was interrupted, and for what reason, and where you continued.
(I am not using the word "workflow" in the sense of Taverna; by workflow I just mean the steps, commands and programs you choose to execute to reach a particular goal).
How do you organize your bioinformatics projects?
I'm a software specialist embedded in a team of research scientists, though in the earth sciences, not the life sciences. A lot of what you write is familiar to me.
One thing to bear in mind is that much of what you have learned in your studies is about engineering software for continued use. As you have observed, a lot of what research scientists do is about one-off use, and the engineered approach is not suitable. If you want to implement some aspects of good software engineering, you are going to have to pick your battles carefully.
Before you start fighting any battles, you are going to have to critically examine your own ideas to ensure that what you learned in school about general-purpose software engineering is valid for your current situation. Don't assume that it is.
In my case the first battle I picked was the implementation of source code control. It wasn't hard to find examples of all the things that go wrong when you don't have version control in place:
some users had dozens of directories each with different versions of the 'same' code, and only the haziest idea of what most of them did that was unique, or why they were there;
some users had lost useful modifications by overwriting them and not being able to remember what they had done;
it was easy to find situations where people were working on what should have been the same program but were in fact developing incompatibly in different directions;
etc etc etc
Once I had gathered the information -- and make sure you keep good notes about who said what and what it cost them -- it became relatively easy to paint a picture of a better world with source code control.
Next, well, next you have to choose your own next battle. But one of the seeds of doubt you have to sow in your scientist-colleagues' minds is 'reproducibility'. Scientific experiments are not valid if they are not reproducible; if their experiments involve software (and they always do), then careful software engineering is essential for reproducibility. A lot of this is about data provenance, but that's a topic for another day.
Part of the issue here is the distinction between documentation for software vs documentation for publication.
For software development (and research plan) design, the important documentation is structural and intentional. Thus, modeling the data, reasons why you are doing something, etc. I strongly recommend using the skills you've learned in CS for documenting your research plan. Having a plan for what you want to do gives you a lot of freedom to multi-task while long analyses are running.
On the other hand, a lot of bioinformatics work is analysis. Here, you need to treat documentation like a lab notebook, not necessarily a project plan. You want to document what you did, maybe a brief comment on why (e.g. when you are troubleshooting data), and what the outputs and results are.
What I do is fairly simple.
First, I start in a directory and create a git repo. Then, whenever I change some file, I commit it to the repo. As much as possible, I try to name data outputs in a way that lets me drop them into my .gitignore files.
Then, as much as possible, I work in a single terminal session per project at a time, and when I hit a pause point (like when I've got a set of jobs sent off to the grid), I run 'history | cut -c 8-' and paste that into a lab-notes file. I then edit the file to add comments on what I did and (remember) change the git add/commit lines to git checkout (I have a script that does this based on the commit messages). As long as I start it in the right directory, and my external data doesn't go away, this means I can recreate the entire process later.
For any even slightly complex processing tasks, I write a script to do it, so that my notebook, as much as possible, looks clean. To an approximation, a helper script can be viewed as a subroutine in a larger project, and should be documented internally to at least that level.
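If it helps, a tiny helper in that spirit might look like the sketch below. The names are my own hypothetical choices: it assumes a bash history file at ~/.bash_history (which bash typically only writes on exit or after 'history -a') and a notes file called lab_notes.md:

    import datetime
    import pathlib

    # Hypothetical helper: append the last N shell commands to a lab-notes file
    # with a date header, so each analysis step gets recorded as it happens.
    def log_history(n=20, history=pathlib.Path.home() / ".bash_history",
                    notes=pathlib.Path("lab_notes.md")):
        commands = history.read_text().splitlines()[-n:]
        with notes.open("a") as out:
            out.write("\n## %s\n" % datetime.date.today().isoformat())
            for cmd in commands:
                out.write("    %s\n" % cmd)   # indent so it reads as code in the notes

    if __name__ == "__main__":
        log_history()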
Your question is about project management. Bad project management is not unique to bioinformatics. I find it hard to believe that the entire industry of bioinformatics is committed to bad software design.
About the pressure... again, there are others in this world that have very challenging deadlines, and they still use good software designs.
In many cases, following a good software design does not hold a project back, and may even speed up its design and maintenance (at least in the long run).
Now to your real question... You can offer your manager to redesign small parts of the code that have no influence on the rest of the code as a proof of concept (POC), but it's really hard to stop a moving truck, so don't get upset if he feels "we've worked this way for years - we know what we are doing, and we don't need a kid to teach us how to do our work". Learn to work like the rest, and when you gain their trust, you can do your thing once in a while (I hope you will have the time and the devotion to do the right thing).
Good luck.

Encouraging good development practices for non-professional programmers? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
In my copious free time, I collaborate with a number of scientists (mostly biologists) who develop software, databases, and other tools related to the work they do.
Generally these projects are built on a one-off basis, used in-house, and eventually someone decides "oh, this could be useful to other people," so they release a binary or slap a PHP interface onto it and shove it onto the web. However, they typically can't be bothered to make their source code or dumps of their databases available for other developers, so in practice, these projects usually die when the project for which the code was written comes to an end or loses funding. A few months (or years) later, some other lab has a need for the same kind of tool, they have to repeat the work that the first lab did, that project eventually dies, lather, rinse, repeat.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Similarly, any advice on how to communicate the idea that version control, bug tracking, refactoring, automated tests, continuous integration and other common practices we professional developers take for granted are good ideas worth spending time on?
Unfortunately, a lot of scientists seem to hold the opinion that programming is a dull, make-work necessary evil and that their research is much more important, not realising that these days, software development is part of scientific research, and if the community as a whole were to raise the bar for development standards, everyone would benefit.
Have you ever been in a situation like this? What worked for you?
Software Carpentry sounds like a match for your request:
Overview
Many scientists and engineers spend much of their lives programming, but only a handful have ever been taught how to do this well. As a result, they spend their time wrestling with software, instead of doing research, but have no idea how reliable or efficient their programs are.
This course is an intensive introduction to basic software development practices for scientists and engineers that can reduce the time they spend programming by 20-25%. All of the material is open source: it may be used freely by anyone for educational or commercial purposes, and research groups in academia and industry are actively encouraged to adapt it to their needs.
Let me preface this by saying that I'm a bioinformatician, so I see the things you're talking about all the time. There's some truth to the fact that many of these people are biologists-turned-coders who just don't have the exposure to best practices.
That said, the core problem isn't that these people don't know about good practices, or don't care. The problem is that there is no incentive for them to spend more time learning software engineering, or to clean up their code and release it.
In an academic research setting, your reputation (and thus your future job prospects) depends almost entirely on the number and quality of publications that you've contributed to. Publications on methods or new algorithms are not given as much respect as those that report new biological findings. So after I do a quick analysis of a dataset, there's very little incentive for me to spend lots of time cleaning up my code and releasing it, when I could be moving on to the next dataset and making more biological discoveries.
I'll also note that the availability of funding for computational development is orders of magnitude less than that available for doing the biology. In a climate where only 10% of submitted grants are getting funded, scientists don't have the luxury of taking time to clean and release their code, when doing so doesn't help them keep their lab funded.
So, there's the problem in a nutshell. As a bioinformatician, I think it's perverse and often frustrating.
That said, there is hope for the future. With second- and third-generation sequencing in particular, biology is moving into the realm of high-throughput discovery, where data mining and solid computational pipelines become integral to the success of the science. As that happens, you'll see more and more funding for computational projects, and more and more real software engineering happening.
It's not exactly simple, but demonstration by example would probably drive the point home most effectively: find a task the researcher needs done, find someone who did take the time to make a tool with source available, and point out how much time the researcher could save by having that tool available. Then point out that they could give back to the community in the same fashion.
In effect, what you are asking them to do is become professional developers (with their copious free time), in addition to their chosen profession. Their reluctance is understandable.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Give up. Seriously, this is like teaching a pig to sing. (I can say this because I used to be a physicist so I know what they're like.)
The real issue is that your colleagues are rewarded for scientific output measured in publications, not software. It's hard enough in computer science to get recognized for building software; in the other sciences, it's nearly impossible.
You can't sell good development practice to your biology friends on the grounds that "it's good for you." They're going to ask "should I invest effort in learning about good software practice, or should I invest the same effort to publish another biology paper?" No contest.
Maybe framing it in terms of academic/intellectual responsibility would help, to a degree - sharing your source is, in many ways, like properly citing your sources or detailing your research methodology. There are similar arguments to be made for some of the "professional software developer" behaviors you'd like to encourage, though I think releasing the code is probably an easier sell on these grounds than other things which could require significantly more work.
Actually, asking any busy project team to include in their schedule time for making their software suitable for adoption by another team is extremely hard in my experience.
Doing extra work for the public good is a big ask.
I've seen a common pattern of "harvesting" after the project is complete, reflecting that immediate coding for reuse tends to get lost in the urgency of the day.
The only avenue I can think of is if the reuse is within an organisation with a budget for a "hunter gatherer", someone whose reason for being there is IT.
You may be onto more of a win with things such as unit tests, because they have immediate payback for the development.
For one thing, could we please stop teaching biologists Perl? Teaching non-professional programmers a write-only language is practically guaranteed to lead to unmaintainable, throw-away code. Python fills the same niche, is just as easy to learn (it's even used to teach kids programming!), and is much more readable.
Draw parallels with statistics. Stats is a crucial part of scientific research, and one where the only sensible advice is: either learn to do it properly, or get an expert to do it for you. Incorrectly-done stats can completely undermine a paper, just as badly-written code can completely undermine a public database or web resource.
PS: This blog is very good, but getting them to read it will be an uphill struggle: Programming for Scientists
Chris,
I agree with you to a degree, but in my experience what ends up happening is that in their eagerness to publish you end up with too many "me too" codes and methods, which don't really add to the quality of science. If there was a little more thought about open sourcing code and encouraging others to contribute (without necessarily getting publications out of it) then everyone would benefit.
Definitely agree that a separation between the scientific programmers and the software engineers is a good thing, especially for production applications. But even for scientific programming, the quality of my code would have been so much better if I had followed good practices at the time.
In my experience the best way of getting people to program cleanly is to show a good example when you're working with them.
eg: "I never spend hopeless days debugging my code because the first things I code are automated unit tests that will pinpoint problems when they are small and easily detectable"
or: "I'm very bad at keeping track of versions of things, but sometimes my new code does break what did work before. So I use svn/git/dropbox to keep track of things for me"
In my experience that kind of statement can raise the interest of "biologists that learned how to script".
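If you want to back up the first of those statements with something concrete, the smallest possible demonstration is usually enough. Here is a minimal sketch using Python's built-in unittest; the gc_content function is invented for the example:

    import unittest

    def gc_content(seq):
        """Fraction of G/C bases in a DNA sequence (hypothetical example function)."""
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

    class TestGCContent(unittest.TestCase):
        def test_half_gc(self):
            self.assertAlmostEqual(gc_content("ATGC"), 0.5)

        def test_empty_sequence(self):
            self.assertEqual(gc_content(""), 0.0)

    if __name__ == "__main__":
        unittest.main()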
And if you need to collaborate on a bigger project, make it clear that you have more experience and that everything will go more smoothly if things are done your way.
Regarding publication of code, current practice is indeed frustrating. I would like to see a new journal like Source Code for Biology and Medicine, where code is peer-reviewed and can be published, but with no (or very low) publication costs. Putting code on SourceForge or elsewhere is indeed not "scientifically worth it" because it doesn't add a line to your publication list, and most code is not revolutionary enough to warrant paying $1,000 for publication in Source Code for Biology and Medicine or PLoS One...
You could have them use a content management system, like Joomla. That way they only push content and not code.
I wouldn't so much persuade as I would streamline the process. Document it clearly, make video tutorials and bundle some kind of tool chain that makes it ridiculously easy to get source repositories set up without requiring them to become experts in something that isn't their main field.
Take a really good programmer who already knows best practices, and ask your scientists to teach him what they need and what they do. Eventually the programmer will have the minimum domain knowledge (I suspect it takes between 1 and 3 years depending on the domain) to do what the scientists ask for.
Developers always learn another domain of competency, because most of their programs are not for developers, so they need to know what the "client" does.
To be devil's advocate, is teaching scientists to be good software engineers the right thing to do? Software in research is usually very purpose specific - sometimes to the point where a piece of code needs to run successfully only once on a single data set. The results then feed into a publication and the goal is met. And there's a high risk that your technique or algorithm will be superseded by a better one in short order. So, there's a real risk that effort spent producing sparkling code will be wasted.
When you're frustrated by wading through a swamp of ill-formed Perl code, just remember that the code you're looking at is one of the rare survivors. Mountains of such code have been written, used a few times, then discarded, never to see the light of day again.
I guess I'm just saying there's a big place in research for smelly heinous one-off prototype code. There are good reasons why such code exists. It may not be pretty, but if it gets the job done, who cares? We can always hire a software engineer to write the production-ready version later, IF it turns out to be justified, and let our scientists move on.

The Framework/IDE Knowledge Trap [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
We don't teach children calculus first. We first teach them arithmetic, then algebra, then geometry, then analytic geometry, then finally calculus.
Why, then, do we teach our computer scientists frameworks and IDEs first? Some curricula do force students to learn computer science fundamentals, but the vast majority of graduates that I see could not compose a framework of their own to save their lives.
Where then is the next generation of tool builders?
How can we promote the understanding necessary to create frameworks and development environments?
This is of course a generality. Not all education is lacking, but it seems to be the majority and it brings down the quality of our profession as a whole.
I think the analogy is a bit off. A better analogy would be "We don't teach our kids to use calculators to add and subtract, why teach programmers to use an IDE to program?"
Get rid of HR departments that require X years experience in Y. The universities are just tailoring their course to the HR department's requirements.
I employ graduates who can code in something (I really don't care what language) and who can learn.
I see your point, although I think the math analogy doesn't quite fit. You have to know basic arithmetic to be able to get anything done in any other math discipline.
When I began programming, frameworks were mostly unheard of. If you wanted a binary tree, by God, you went and wrote one, in C or Assembler. That was basically it, so to get anything done at all you had to know a lot.
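Just to show the kind of thing you had to write yourself back then, here is a bare-bones binary search tree; it's sketched in Python for brevity rather than the C or Assembler of the day:

    # Bare-bones binary search tree: the kind of data structure you once
    # wrote by hand before frameworks and standard libraries did it for you.
    class Node:
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def insert(root, key):
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        elif key > root.key:
            root.right = insert(root.right, key)
        return root

    def contains(root, key):
        while root is not None:
            if key == root.key:
                return True
            root = root.left if key < root.key else root.right
        return False

    root = None
    for k in [8, 3, 10, 1, 6]:
        root = insert(root, k)
    print(contains(root, 6), contains(root, 7))   # True False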
Today, Frameworks and IDEs and designers make it possible for "noobs" to create actually pretty brilliant things without knowing the first thing about how to build a framework, or a compiler, or manage memory allocation.
The real issue is, what about all the dingbats that think they are awesome, great programmers because they used Frontpage or Access? Managers have a hard time telling the difference between that kind of programmer and one that really knows software development as a discipline.
So, specifically, why is it that way? Because everyone wants a job and nobody hires programmers that know how to build a binary tree. They want programmers that know .Net or J2EE, etc.
I would argue that there is probably enough work out there for 9-to-5 programmers who can start at the framework level and go up from there. The truly good ones (mostly those who program as a career and/or as a hobby) are going to pick up the knowledge they may have missed in college over time anyway. You can't force everyone to be a wonderful programmer no matter what curriculum you teach. Inquisitive students are going to learn the fundamentals whether it's taught to them in class or entirely on their own.
There are tool makers and tool breakers. And of course there are tools, but let's not go there.
If you have a good look at an automotive workshop, you will see a lot of funny little tools that you don't see on the shelves in hardware stores. Like the ones for pushing back brake caliper pistons. Or the clamps for compressing valve stems so you can get the collets out with one hand while talking to your mates about nailing the new secretary (instead of watching them fly across the room when the spring slips out from your screwdriver).
These were designed by mechanics. They're really effective, generally small and cheap, and totally incomprehensible until you've seen them in action.
Most of the profound changes in automotive technology were bottom-up, but top-down is also needed. Individual mechanics can't make fundamental technology changes like the switch from cast iron to alloy heads. A new broom sweeps clean, an old broom knows the corners. You need both.
But I digress: the point is that the mechanics couldn't design these tools if they lacked fundamental skills and knowledge. My father built me an entire motorcycle from scrap iron when I was a kid. As an adult, because I lack his skills and knowledge and modes of thought, I can barely maintain the bike I bought from Honda, much less take to it with an oxy like Mr T in a creative frenzy.
With code, I am as my father was with steel. Donald Knuth is my constant companion, and when the wireless protocol for our GPS loggers needs to be implemented in .NET it's me they come to see. The widget monkeys wouldn't know where to start.
I think the problem is in fact the GUI paradigm in general.
Microsoft made using computers much easier when they popularized the graphical user interface. They brought this interface metaphor (the desktop, the file) to the domain of programming as well, and very effectively too, with their Visual Basic tool.
But just as the GUI obscures what happens "under the hood" so does the IDE obscure the manipulation of bits and bytes. The question is, of course, risk to reward ratio - how much understanding do programmers lose in exchange for productivity?
A cursory look at "The Art of Computer Programming" might show why IDEs are useful; "The ultimate packing density is achieved when we have 1-bit items, because we can cram 64 of them into a single 64-bit word. Suppose, for example, that we want a table of all odd prime numbers less than 1024, so that we can easily decide the primality of a small integer. No problem; only eight 64-bit numbers are required:
p0 = 011101101101001100101101001001001100101100101001000101101101000000
p1 = . . ."
Programming is really hard, you can see how an IDE might help. :^)
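For the curious, the table Knuth is describing is easy to reproduce yourself; here is a small Python sketch that packs a primality bit for every odd number below 1024 into eight 64-bit words (the exact printed bit order may differ from the book's formatting):

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    # One bit per odd number below 1024: bit k stands for the number 2*k + 1.
    bits = 0
    for k in range(512):
        if is_prime(2 * k + 1):
            bits |= 1 << k

    # Split the 512-bit table into eight 64-bit words, p0..p7.
    words = [(bits >> (64 * i)) & ((1 << 64) - 1) for i in range(8)]
    for i, w in enumerate(words):
        print("p%d = %064b" % (i, w))

    # Primality of a small odd integer n is then a single bit lookup.
    def odd_is_prime(n):
        return (words[(n // 2) // 64] >> ((n // 2) % 64)) & 1 == 1

    print(odd_is_prime(997), odd_is_prime(999))   # True False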
Learning the abstraction is easier than learning the details when it comes to programming. It's harder to teach someone to hand-code assembler to print "Hello World" than it is to have them throw together a form with a button on it that shows a "Hello World" message when the button is clicked.
You didn't know how to build the engine of a car before learning to drive, did you? Because it's not necessary in order to drive. In the same vein, you don't need to learn how a linked list or binary tree works in order to maintain a list of names and search them.
There will always be those who want to get under the hood and learn the "why" of things, but I don't think it's required to get things done.
I always screen applicants by asking difficult questions that they can only answer if they understand how something really works. I think it is a real shame that colleges and universities are teaching people framework-based development without focusing on core software principles. I agree that what matters more than anything is someone who understands how programming works and has the drive to learn everything they can about it.
Most universities I know of have an introduction to computer programming course that teaches basic programming concepts. Unfortunately it is impossible to teach programming without actually writing code.
The problem is that some prefer to teach this course using an OO language such as Java or C#, and so the students must use Visual Studio (or the Java equivalent).
It is very hard to explain the basic concepts when the IDE forces you to work in a certain way.
I think the first language students learn should be a procedural language such as C. That way there are fewer layers of abstraction between them and the basic CS concepts.
Agree with cfeduke.
I looked at the coursework for the same CS courses I did from 2 years previously, and it was way harder. 5 years previously, way, way harder.
The CS bar is being lowered more and more, presumably because there are more and more jobs that don't require any working knowledge of the complicated CS subjects. There are huge numbers of jobs for people who just cut code.
Traditionally, people who wanted to be programmers did CS courses, and as coding has gotten easier this is still the case.
What really needs to happen is for CS not to be a requirement for professional software development. Instead there needs to be another curriculum that focuses more on getting people out the door and cutting code.
That would leave CS to be the course for your next generation of tool builders.

If I were to build a new operating system, what kind of features would it have? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 10 years ago.
I am toying with the idea of creating a completely new operating system, and I would like to hear what everyone on this forum's take is on that. First, is it too late? Are the big boys so entrenched in our lives that we will never be able to switch (wow - what a terrible thought...)? But if that's not the case, what should an operating system do for you? What features are the most important? Should all the components be separate installations (in other words, should the base OS have no user functionality at all, with everything added on through "plug-ins", kind of like a good flexible tool)?
Why do I want to do this... I am mostly curious about whether there is demand, and I am wondering, since the OSes we use most today (Linux, Windows, Mac OS X (FreeBSD)) were actually designed more than 20 years ago (and I am being generous - dual and quad cores did not exist back then, buses were much slower, hardware was much more expensive, etc.), whether with today's technology we would do anything differently.
I am anxious to read your comments.
To answer the first question: It's never too late. Especially when it comes to niche market segments and stuff like that.
Second though, before you start down the path of creating a new OS, you should understand the kind of undertaking it is: it'd be a massive project.
Is it just a normal programmer "scratch the itch" kind of project? If so, then by all means go ahead -- you might learn a lot of things by doing it. But if you're doing it for the resulting product, then you shouldn't start down that path until you've looked at all the current OSes under development (there are a lot more than you'd think at first) and figured out what you'd like to change in them.
Quite possibly the effort would be better spent improving/changing an existing open source system. Even for your own experimentation, it may be easier to get the results you want if you start out with something already in development.
First, a little story. In 1992, during the very first Win32 conference (what would become the MS Professional Developers Conference), I had the opportunity to sit over lunch with one Mr. Dave Cutler (chief architect of what most folks would now know as Windows NT, Windows 2000, XP, etc.).
I was at the time working in the Multimedia group at IBM Boca Raton on what some of you might remember: OS/2. Having worked on OS/2 for several years, and recognizing "the writing on the wall" of where OSes were going, I asked him, "Dave, is Windows NT going to take us into the next century, or are there other ideas on your mind?" His answer to me was as follows:
"M...., Windows NT is the last operating system anyone will ever develop from scratch!" Then he looked over at me, took a sip of his beer, and said, "Then again, you could wake up next Saturday after a particularly good night out with your girl, and have a whole new approach for an operating system that'll put this to shame."
Putting that conversation into context, and given the fact that I'm back in college pursuing my Master's degree (specializing in operating systems design), I'd say there's TONS of room for new operating systems. The thing is to put it into perspective. What are your target goals for this operating system? What problem space is it attempting to serve?
Putting this all into perspective will give you an indication of whether you're really setting your sights on an achievable goal.
That all being said, I second an earlier commenter's note about looking into things like "Singularity" (the focus of a talk I gave this past spring in one of my classes...), or, if you really want to "sink your teeth into" an OS in its infancy, look at "ReactOS".
Then again, WebOSes, like gOS, and the like, are probably where we're headed over the next decade or so. Or then again, someone particularly bright could wake up after a particularly fruitful evening with their lady or guy friend, and have the "next big idea" in operating systems.
Why build the OS directly on a physical machine? You'll just be mucking around in assembly language ;). Sure, that's fun, but why not tackle an OS for a VM?
Say an OS that runs on the Java/.NET/Parrot (you name it) VM, that can easily be passed around over the net and can run a bunch of software.
What would it include?
Some way to store data (traditional FS won't cut it)
A model for processes / threads (or just hijack the stuff provided by the VM?)
Tools for interacting with these processes etc.
So, build a simple Platform that can be executed on a widely used virtual machine. Put in some cool functionality for a specific niche (cloud computing?). Go!
For more information on the micro- versus monolithic kernel, look up Linus' 'discussion' with Andrew Tanenbaum.
I would highly suggest looking at an early version of Linux (0.01) to at least get your feet wet. You're going to be mucking about with assembly and very obscure low-level stuff just to get started (especially getting into protected mode, multi-tasking, etc.). And yes, it's probably true that the "big boys" already have the market cornered. I'm not telling you NOT to do it, but maybe doing some work on the Linux kernel would be a better stepping stone.
Check out Cosmos and Singularity, these represent what I want from a futuristic operating system ;-)
Edit: SharpOS is another managed OS effort (suggested by yshuditelu).
An OS should have no user functionality at all. User functionality should be added by separate projects, which does not at all mean that the projects should not work together!
If you are interested in user functionality maybe you should look into participating in existing Desktop Environment projects such as GNOME, KDE or something.
If you are interested in kernel-level functionality, either try hacking on a BSD derivate or on Linux, or try creating your own system -- but don't think too much about the user functionality then. Getting the core of an operating system right is hard and will take a long time -- wanting to reinvent everything does not make much sense and will get you nowhere.
You might want to join an existing OS implementation project first, or at least look at what other people have implemented.
For example AROS has been some 10 or more years in the making as a hobby OS, and is now quite usable in many ways.
Or how about something more niche? Check out SymbOS, a fully multitasking desktop operating system (in the style of Windows) for 4MHz Z80 CPUs (Amstrad CPC, MSX). Maybe you would want to write something like this, which is far less of a bite than a full next-generation operating system.
Bottom line...focus on your goals and even more importantly the goals of others...help to meet those needs. Never start with just technology.
I'd recommend against creating your own Operating System. (My own geeky interruption...Look into Cloud Computing and Amazon EC2)
I totally agree that it would help to first define what your goals are. I am a big fan of user experiences and of thinking not only of your own goals but of the goals of your audience/users/others. Once you have those goals, then move to the next step of how to meet them.
Nowadays, what is an operating system anyway? Kernel, operating system, virtual server instance, Linux, Windows Server, Windows Home, Ubuntu, AIX, zSeries OS/390, et al. I guess this is as good a definition of OS as any... Wikipedia
I like Sun's slogan "the Network is the computer" also...but their company has really fallen in the past decade.
On that note of the Network is the computer... again, I highly recommend, checking out Amazon EC2 and more generally cloud computing.
I think that building a new OS from scratch to resemble the current OSes on the market is a waste of time. Instead, you should think about what operating systems will be like 10-20 years from now. My intuition is that they will be so different as to be mostly unrecognizable by today's standards. Think of frameworks such as Facebook (gasp!) as models for how future OSes will operate.
I think you're right about our current operating systems being old. Someone said that all operating systems suck. And yes, don't we have problems with them? Call it BSOD, Sad Mac or a Kernel Panic. Our filesystems fail, there are security and reliability problems.
Microsoft pursued an interesting approach with its Singularity kernel. It isolates processes in software, using a virtual machine similar to .NET, and formal verification methods. Basically all IPC seems to be formally specified and verified, even before a program is run.
But there's another problem with it: Singularity is only a kernel. You can't run applications that weren't designed for it. This is a huge penalty, making an eventual transition (Singularity is not public) quite hard. If you manage to produce something with similar technical advantages, but with a real transition plan (think about the IPv4->IPv6 problems, or how Windows got so much desktop market share), that could be huge!
But starting small is not a bad choice either. Linux started just like this, and there are many cases when it leads to better design. Small is beautiful. Easier to change. Easier to grow. Anyway, good luck!
Check out the Singularity project, and do something revolutionary.
I've always wanted an operating system that was basically nothing but a fresh slate. It would have built in plugin support which allow you to build the user interface, applications, whatever you want.
This system would work much like a Lua sandbox in a game works, minus the limitations. You could build a plugin or module system that would have access to a variety of subsystems. For example, if you were to write a web browser application, you would need to load the networking library and use it within your plugin script. Need "security"? Load the library.
The difference between this and Linux is that Linux is an operating system with a window manager that runs on top of it. In this theoretical operating system, you would be able to implement the generic "look" and "feel" of a variety of windows within the plugin system, or you could create a completely custom interface.
The difference between this and Windows is that it's fully customizable, and by fully I mean that if you wanted to not implement any cryptography at all, you could do that, or if you wanted to customize an already existing window, you could do that. Nothing is closed to you.
In this theoretical operating system, there is an OS with a plugin system. The plugin system uses a simple and powerful language.
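To make that idea a bit more concrete, here is a rough sketch of what such a plugin/subsystem registry might feel like; it's pure speculation to match the description above, and the subsystem names are invented:

    # Hypothetical plugin registry for the "blank slate" OS idea above.
    # Subsystem names ("networking", "security") are invented for illustration.
    class Kernel:
        def __init__(self):
            self._subsystems = {}

        def register(self, name, subsystem):
            self._subsystems[name] = subsystem

        def load(self, name):
            # A plugin only gets the subsystems it explicitly asks for.
            return self._subsystems[name]

    class Networking:
        def fetch(self, url):
            return "<html>...</html>"   # stub response

    kernel = Kernel()
    kernel.register("networking", Networking())

    def browser_plugin(kernel):
        net = kernel.load("networking")   # "Need networking? Load the library."
        return net.fetch("http://example.org")

    print(browser_plugin(kernel))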
If you're asking what I'd like to see in an operating system, I can give you a list. I am just getting into programming so I'm not sure if any of this is possible, but I can give you my ideas.
I'd like to see a developed operating system (besides the main ones) in which it ISN'T a pain to get the wireless card to work. That is my #1 pet peeve with most of the ones I've tried out.
It would be cool to see an operating system designed by a programmer for other programmers, one that can run programs for all the different operating systems. I don't know if that's possible without having a copy of Windows and OS X, but it would be really damn cool if I could check the compatibility of the programs I write with all operating systems.
You could also consider going with MINIX which is a good starting point.
To the originator of this thread, my hat's off to you, sir, for daring to think in much bolder and more idealistic terms about the IT industry. First and foremost, your questions are precisely the kind you would think should engage a much broader audience, given the flourishing of computer science all over the globe and the openness taught to us by the revolutionary Linux OS, which has only begun to win hearts and minds by strengthening its user-friendly interface. So kudos on pushing the envelope.
If I'm following correctly, you are supposing that, given the fruits of our labor thus far, the development of further hardware and software could, or at least should, be less conventional; the implication, of course, is that any new development would reach its goal faster than is typical. The prospect, however, of an entirely new OS at this time would be challenging, to say the least, only because there is already so much friction between Linux and Windows. It is really a battle between the open-source and proprietary ideologies. Bart Roozendaal, in a comment above, proves my point nicely. Forget the idea of innovation and whatever possibilities may come from a much more contemporary operating system, for such things are secondary. What he is asking, essentially, is: are you going to be on the side of profit or not? He gives his position away easily here. As you know, Windows is notorious for its monopolistic approach to new markets, software, and other technology. It has maintained a death grip on its hegemony since its existence, and sadly the Windows OS is riddled with endless bugs and backdoors.
Again, I applaud you for taking a road less traveled, and I hope you forge ahead and don't become discouraged. Personally, I'd like to see another OS out there... one much more contemporary.