Task distribution for graph-based data

I am learning to program a distributed computing system, and the system I need to work on will have to perform computation on graph data (social network data) in a parallel environment.
After searching on the internet, I have come down to the issue of task distribution. It seems that many systems of this kind are designed so that workers request tasks from the master, e.g. volunteer computing [1], or Hadoop MapReduce, where workers transmit heartbeats.
My question is: is there any task distribution/assignment scheme designed in the style where the master proactively assigns tasks to slaves? What issues would I need to pay attention to when programming such a mechanism?
I may be missing some concepts I am not aware of, so I appreciate any suggestions.
Thanks for any advice.
[1] boinc.berkeley.edu/heien_09.pdf

No, you're exactly right -- the master-worker pattern is well established. For a certain flavour of it, take a look here for a message-queue based system, or look it up on Wikipedia.
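To make the push style concrete, here is a minimal single-machine sketch (in Python, with made-up partition data) of a master proactively putting tasks on a queue that workers consume. It only illustrates the control flow, not a distributed implementation; in a real system the queue would be a message broker, and the master would also have to handle worker failures, reassignment of lost tasks, and load balancing.

```python
# Minimal push-style master/worker sketch using Python's multiprocessing.
# The master proactively assigns graph partitions to workers instead of
# waiting for workers to request work.
from multiprocessing import Process, Queue

def worker(task_queue: Queue, result_queue: Queue) -> None:
    """Process assigned partitions until the master sends a sentinel."""
    while True:
        task = task_queue.get()          # block until the master pushes work
        if task is None:                 # sentinel: no more work
            break
        partition_id, nodes = task
        # Placeholder "computation": count the vertices in this partition.
        result_queue.put((partition_id, len(nodes)))

if __name__ == "__main__":
    task_queue, result_queue = Queue(), Queue()

    # Hypothetical graph partitions; in a real system these would come
    # from a partitioner (e.g. hash- or range-based vertex splitting).
    partitions = [(i, list(range(i * 100, (i + 1) * 100))) for i in range(4)]

    workers = [Process(target=worker, args=(task_queue, result_queue))
               for _ in range(2)]
    for w in workers:
        w.start()

    # The master pushes tasks without being asked (proactive assignment).
    for task in partitions:
        task_queue.put(task)
    for _ in workers:                    # one sentinel per worker
        task_queue.put(None)

    results = [result_queue.get() for _ in partitions]
    for w in workers:
        w.join()
    print(sorted(results))
```

One design trade-off to keep in mind: in a push model the master must track worker liveness and re-assign tasks from failed workers, whereas in the pull (worker-requests-task) model workers naturally self-balance.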

Tim,
Thanks for providing that link. Although it talks about a Microsoft solution, the information in the article is very useful and gives me good keywords for searching for further information.
Thanks again for your help!

Related

Maintaining reliable service state history

The Reliable Services Overview topic has a section, at the bottom, called When to use Reliable Services APIs. In there, one list item says:
Your application needs to maintain change history for its units of state*.
The star at the end is explained just a little bit further down:
* Features available at SDK general availability.
The SDK has reached general availability by now, but I cannot find any information about how to make use of the "maintain change history for its units of state" feature, or even a suggestion of what that actually means.
I'm asking here in the hope that someone can shed some light on this. I'm interested in knowing whether this feature is indeed available, or if not, when it is supposed to be available, or whether it has been abandoned.
Some insights on the intended design and functionality of this feature would also be much appreciated.
It describes the scenarios in which you might use the reliable collections. They can be used to keep multiple versions of an object and build a history of changes (like deltas or snapshots).
This way you can create an event source (it would require some coding on your end, though).
Unit of state: an entry in the State Manager.
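As a rough illustration of the idea (plain Python, not the actual Reliable Collections API; the keys and values are made up), "maintaining a change history for a unit of state" can look like this: every write is stored as an immutable event, and the current value is derived by replaying them.

```python
# Illustrative event-sourcing sketch: each change to a "unit of state"
# (a key in a state store) is appended as an event; current state and
# per-key history are both derived from the event log.
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass(frozen=True)
class Change:
    version: int
    key: str
    value: Any          # could equally be a delta instead of a full snapshot

class StateHistory:
    def __init__(self) -> None:
        self._events: List[Change] = []

    def set(self, key: str, value: Any) -> None:
        self._events.append(Change(len(self._events) + 1, key, value))

    def current(self) -> Dict[str, Any]:
        """Fold the change history into the latest state per key."""
        state: Dict[str, Any] = {}
        for event in self._events:
            state[event.key] = event.value
        return state

    def history(self, key: str) -> List[Change]:
        return [e for e in self._events if e.key == key]

history = StateHistory()
history.set("order-42", {"status": "created"})
history.set("order-42", {"status": "shipped"})
print(history.current())             # latest value per unit of state
print(history.history("order-42"))   # full change history for one unit
```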

find task priority levels of rtos

I am working on research into the performance evaluation of various RTOSes for embedded systems, but I have found that the information is not easy to get. I've read a paper which discusses the number of priority levels of different RTOSes. I then tried to find out how many levels the AVIX operating system has, but I can't find it. Are there any tips for searching for this kind of information?
Not only is performance data not always easy to obtain, but for commercial RTOSes, publication of performance data derived from comparative testing is in many cases explicitly prohibited in the licence agreement, so you will be restricted in what you can do with this information.
You can probably determine what you need from the API documentation. You have to register with AVIX to get access to their downloads section, but I would imagine the information is available there.
That said, the number of priority levels has no bearing on performance.

Scenario for evaluating source/version control tools

We're currently researching different source control tools, and want to test each one with some light-weight but meaningful scenario to get a feel for the capabilities of each one.
Since terminology and internal logic vary wildly between tools, it would be nice to have the scenario expressed in terms of use cases ("We have to correct a bug on Release 1.3"), rather than in potentially tool-specific terms ("Create a branch named Release 1.3").
It's true that different things are important for different teams, but it would be interesting to have some kind of canonical test case from which different scenarios could be picked. Or am I being too optimistic?
Is anyone aware of something like this? Have you used a similar approach when investigating source control tools?
These are the requirements that Mozilla had when they set out to evaluate version control systems for internal use in 2006. You might find a similar approach useful.
If you find scenarios that are specific to your company, perhaps you can translate them to requirements like the ones above.
You have some general criteria in the Google DVCS analysis, which can give you some ideas.
But first you need to decide whether you want to evaluate:
CVCS (Central Version Control): update-merge-commit
DVCS (Distributed Version Control): commit-rebase/merge
For more on the different scenarios to test (one answer for CVCS, one for DVCS), see the SO question:
" Describe your workflow of using version control (VCS or DVCS) "
You have to ask questions like: do you have only a single line of release/development, or do you create multiple releases in parallel? The mentioned scenario alone is not enough; you also need to think beyond it, e.g. merging the changes back into the development line, or into multiple other lines as well. This can influence the approach. The approach you selected sounds very good, because you try to understand the process rather than using tool-specific terms. I've done this multiple times for customers of mine. Different teams and companies handle things differently, so the problem is to figure out what your process actually is (sometimes people are not aware of this).

Old concepts with new names (namely REST and Cloud computing)

It seems that SaaS and Cloud computing are old concepts with new names, and I am curious if I am wrong.
For cloud computing you can look at: Difference between cloud computing and distributed computing?
Basically, it seems that what we have been doing with hosting all along is now called cloud computing; it is just that now some companies have put in much greater resources to ensure better uptime than my local ISP. But it seems that there is nothing really new here.
For REST, it seems that it is what we have been doing with CGI scripts for 15 years.
Here is a question on REST: What am I not understanding about REST?
It appears that REST is an old concept, and I am curious how it differs from what has been done since the early days of the web and, to a large extent, the early days of using telnet (which HTTP is on top of).
Am I mistaken in my simplification of these? I try to see how what is new is like what I know so I can see what more has to be learned in that topic, but for cloud computing and REST it seems that very little needs to be learned.
You are both right and wrong. You are right in the sense that new ideas are normally similar to old ideas, and indeed cloud computing is based significantly on distributed computing.
What is new in cloud computing is
virtualization
self-service
With virtualization, you can run multiple operating systems on a single piece of hardware. While that, in itself, isn't new either, it was never considered a relevant piece of the architecture in distributed systems. Using virtualization allows self-service: users can create their own clusters of nodes without the administrator of the hardware taking any action. This allows a significant acceleration of deployment, and a significant reduction in cost.
For ReST, what you are missing is the client API. It is true that on the server side, a ReST service can be implemented with CGI. What is new here is that it is not an end user who retrieves the URL, but a program.
Saying that HTTP is on top of telnet ignores realities; this is like saying that we have made no progress since the introduction of copper wires for communication. Strictly speaking, HTTP is not on top of telnet, but on top of TCP (which telnet is also on top of, these days).
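To make the client-API point concrete, here is a tiny sketch in Python of a program (rather than a person with a browser) consuming a ReST resource. The endpoint api.example.com/users/42 is made up purely for illustration; only the standard library is used.

```python
# A program acting as the ReST client: fetch a resource representation
# over plain HTTP and work with the data directly.
import json
import urllib.request

def get_user(user_id: int) -> dict:
    # Hypothetical resource URL for illustration only.
    url = f"https://api.example.com/users/{user_id}"
    request = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    user = get_user(42)
    print(user.get("name"))
```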
Considering Roy's dissertation coined the term REST back in 2000, you can definitely argue that there is nothing new about REST. Additionally, the REST architectural style was synthesized from successful existing practices, so REST implementations pre-date the definition. Having said that, there is nothing simple about designing REST interfaces. Ever since Netscape first abused cookies to allow servers to maintain session state, people have been swimming upstream against the web.
REST's recent resurrection has come mainly from people becoming disillusioned with SOAP-based web services. SOAP tried to hide HTTP instead of embracing it, and I think people are starting to realize how effective HTTP can be as a distributed application protocol that can do more than just deliver HTML to web browsers.
RESTful web applications don't use session state, so one could argue that by that virtue alone it is different than most web applications in existence at the moment.
As for Cloud Computing, I find myself agreeing with Larry Ellison for once in my life.
I'm in agreement on what you've posted. You might consider making this community wiki since it's likely to garner many answers based on opinion. Cloud computing seems to have taken off as a buzzword, and this is largely due to a decrease in cost for mass quantities of hardware. And then there is REST which is really just a formal name and definition for something that has been in place for a long time. Some people like to encapsulate ideas with buzzwords and acronyms. Sometimes it's useful to put a name to an idea though.
Not only this, the concept of things being old concepts with new names is old. It's hard to be original these days :P
You are right about REST -- it's mostly old concepts with a lot of added pedantry and not much added substance.
Cloud computing has a small but fundamental difference from distributed computing. In distributed computing you had servers dedicated to particular functions, and usually some sort of directory service to locate the correct server. In cloud computing any server is capable of any task and usually the servers queue up for work which is distributed from a central point.

What's a good way to train employees on how to use the software you've just created? [closed]

I'm working in a small company and weeks away from deploying a web-app that will be used a lot. Everyone at one location will have to learn to use it, and although I think it's pretty easy and intuitive I may be biased.
I've written a help guide with plenty of screenshots that's available on every page, but I'll still need to train everyone. What's the best way? How do you take a step back and explain code you've been working on for weeks?
First try to avoid the training:
Perform usability testing to ensure your web app is intuitive. Usability testing is a very important aspect of testing and it is often ignored. How you see your system will probably be very different from how a new user sees it.
Also add contextual help as often as you can. For example, when I hover over a tag on Stack Overflow, I know exactly what clicking it will do, because it tells me.
Also, this may seem obvious, but make sure you link to your documentation from the site itself. People may not think of looking in your documentation unless it's right in front of their eyes.
About training documentation:
Try to split up your material according to how your users would use the system. I personally like the "trails" option that Sun created for their Java tutorials: you can do several things, and you can choose which trail you'd like to follow.
Support random reads in your help documentation. If they have a task to do in your web app, then they should be able to get help on that without reading a bunch of unrelated content.
Make sure your documentation is searchable.
About actual training sessions:
If you are actually performing training sessions, stay away from explaining anything related to your code at all. You don't need to know about the engine to drive a car.
Try to split up your training sessions into very focused aspects of your system. If you only have 1 training session available to you then just do one specialized use case of your system + the overall description of the system. Refer to the different parts of documentation where they can get help.
Letting the community help itself:
No matter how extensive your documentation is, you'll always have cases that you didn't cover. That's why it's a good idea to have a forum available to all users of the system. Allow them to ask each other questions.
You can review this forum and add content to your documentation as needed.
You could also open up a wiki for the documentation itself, but this is probably not desirable if your user base isn't very large.
Few ideas:
Do you have some canned walk-through scenarios? Don't know if it is applicable for your product, but I built a pretty substantial product a couple years ago and developed some training modules that they'd work through - nothing long, maybe 15 minutes tops for each one.
I put together a slide presentation that hit the highlights to talk about what it does. I would spend about 10 minutes going through the app's highlights to familiarize them with it before doing the hands-on stuff.
People don't tend to read stuff, unfortunately. You could put hours and hours into a help document and still find that folks simply don't read it or just skim it. That can be frustrating. Expect that things already answered in your guide will still be the topic of questions from your users.
Break up any training you do into manageable chunks. I've been to a full-day training exercise before and the trainer broke it into short pieces and made it easy for me to get the training topic in my head. You don't want to data-dump on them because their eyes will gloss over and you'll lose them.
Ultimately, if your app is highly usable, it should be a piece of cake. If it isn't, you'll find out. You might want to have a few folks you know run through your training ahead of time and give you constructive criticism on it. Better to fix it before the big group is trained. You'll be more confident in the product and the training materials (whatever they are) and you'll likely have a better training experience.
If applicable, provide an online help/wiki/faq for them. Sometimes that is helpful.
Best of luck!
You should really have addressed this issue a lot earlier in the development cycle than you are doing.
In my view the ideal scenario for corporate software is one where the users design their own application and write their own documentation, and I always try to strive for this. You should have identified key users early on and designed the system with them (I try to get my users to do basic screen designs and menu layouts in Excel or similar, then I implement that as static pages and review them before writing a line of significant code; obviously they won't get the design right the first time, but it's your job to guide them - ideally in a way where they think they came up with the correct design decisions, not you :-) ).
These users should then write the user documentation from this design in parallel with you developing the system. I have never seen help documentation delivered by an IT department/software company used significantly in a corporate setting. Instead, what happens is the users will create their own folder of notes and workarounds and refer to this (in fact, if you're ever doing system analysis to replace an existing system, finding the 'user bible' for the old system is a key strategy). Getting the users to write their own documentation up front simply harnesses what will happen anyway - but this is vastly easier if the users feel they have ownership of the system because they designed it themselves in the first place.
Of course this approach needs commitment and time from your users, but generally it's not that hard a sell. It's trite, but working as a facilitator so the users can develop their own system, rather than as a third party giving them a system, pretty much guarantees user acceptance.
As you are where you are you're too late to implement all of this, but if you can identify a couple of keen, key, users and get time from them to write their own documentation then that would be a good move. If you can't get even that then you need to identify an evangelist who you can train to be the 'departmental' expert and give them 110% of your energy to support them.
The bottom line is that user acceptance is based on perception, and this does not necessarily correlate with how usable a system actually is. You have to focus on the group psychology of this as much as on the reality of the system, which tends to be tricky for developers as we're much more factually based than most people.
I'll be looking into something like this too in the next few months.
In your case, hopefully the UI has already undergone user acceptance testing. You say you work in a small company. Is it possible to get the least tech-savvy person there to try it out? In fact, get them to try it out without any guidance from yourself except for questions they ask. Document the questions and make sure your user-guide answers them.
The main thing for me would be logic and consistency. If the app's workflow relates logically to the task it has been designed to accomplish and the UI is consistent you should be OK.
Create a wiki page to describe the use of your system. Giving edit rights to the users of your system lets the users:
update the documentation to correct any errors in the initial release of documentation.
share any tips on usage they may have found.
share unusual uses for the system that you may not have thought of.
request features.
provide any workarounds they've found while waiting for the new functionality to be implemented.
Try a few users first, one or two in a small company. Mostly watch, help as little as possible. This tells you what needs to be fixed, and it creates an experienced user base - so you are not the "training bottleneck" anymore.
Turn core requirements/use cases/storycards into HowTo / walkthroughs for your documentation.
For a public training session, prepare a 10-15 minute presentation (just that, not more!) that covers the key concepts the users absolutely must understand, then show your core walkthroughs. Reserve extra time for questions about how to solve various tasks.
Think as a user, not as a techie: no one cares that it's a SQL database and that you spent a lot of time getting the locking mechanisms right. They do care about "does it slow me down" and "does something bad happen when two people do that at the same time". Our job is to make complicated things look easy.
It may help to put the documentation on the intranet in an editable form - page "comments", or a wiki maybe. And/or put up an "error wiki" for error messages and blips, where you or your users can quickly add recommendations, workarounds and reasons for anything that does not go as expected.
Rather than train all those people, I have chosen a few superusers (at least one person from each department) and trained them to teach the rest of the employees. It is of course vital that those superusers:
are well respected in their departments
are able to teach
like the application
The easy way to ensure that they like the app is to have them define the way it should work :-). Since they will work with this app each and every day, they are the prime stakeholders, no matter what management states.