What is the difference between a routine and a process? - operating-system

I am studying memory management in operating systems, and in a general context I just want to know: is there any difference between a routine and a process?
Thanks in advance

Yes, there is.
A routine usually means a piece of code, such as a subroutine, coroutine or function, that is called by other code in some way.
A process is code that is actually in execution. This implies that one routine could be part of the code being executed in two (or more) processes.
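A minimal sketch of that last point, assuming POSIX and using a made-up greet() routine: after fork() there are two processes, and both execute the very same routine.

#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

// One routine: a single piece of code that can be called from anywhere.
void greet(const char* who) {
    std::printf("%s: pid %d\n", who, (int)getpid());
}

int main() {
    pid_t pid = fork();              // after this call there are two processes
    if (pid == 0) {
        greet("child");              // the same routine, running in process #2
    } else {
        greet("parent");             // the same routine, running in process #1
        waitpid(pid, nullptr, 0);    // wait for the child to finish
    }
    return 0;
}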

Related

Task scheduling in Flink: how to place subtasks in a slot of a specified task manager?

Recently I have been studying the problem of task scheduling in Flink. My goal is to schedule subtasks to a slot on a specific node, according to my own needs, by modifying some of the source code in the scheduling part. Through remote debugging and reading the source code, I found the following method call stack, most of which I can't understand (the comments are a little sparse), especially this method: org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateMultiTaskSlot. I guess the code that allocates slots to tasks is around here. Because the source code is too difficult to read, I have to ask you for help. Of course, if there is a better way to achieve what I need, please point it out. I sincerely look forward to your reply. Thank you very much!
The method call stack is as follows (the version of Flink I use is 1.11.1):
org.apache.flink.runtime.jobmaster.JobMaster#startJobExecution
org.apache.flink.runtime.jobmaster.JobMaster#resetAndStartScheduler
org.apache.flink.runtime.jobmaster.JobMaster#startScheduling
org.apache.flink.runtime.scheduler.SchedulerBase#startScheduling
org.apache.flink.runtime.scheduler.DefaultScheduler#startSchedulingInternal
org.apache.flink.runtime.scheduler.strategy.EagerSchedulingStrategy#startScheduling
(The method call chain of the PipelinedRegionSchedulingStrategy class is similar; for simplicity I write it as the call chain of the EagerSchedulingStrategy class, which should make no difference.)
org.apache.flink.runtime.scheduler.strategy.EagerSchedulingStrategy#allocateSlotsAndDeploy
org.apache.flink.runtime.scheduler.DefaultScheduler#allocateSlotsAndDeploy
org.apache.flink.runtime.scheduler.DefaultScheduler#allocateSlots
org.apache.flink.runtime.scheduler.DefaultExecutionSlotAllocator#allocateSlotsFor
org.apache.flink.runtime.executiongraph.SlotProviderStrategy.NormalSlotProviderStrategy#allocateSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateSlotInternal
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#internalAllocateSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateSharedSlot
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl#allocateMultiTaskSlot
(I feel that this is the key method that allocates a slot to a subtask, i.e. an execution vertex, but there are no comments and I can't follow the idea behind the process, so I can't understand it.)

Overloading SystemVerilog system tasks

In all of our test cases, I see fixed wait() calls. I need to reduce everything to small delays without making much impact. Is there a way I can overload the wait task with my own custom task and then call the SystemVerilog wait to pass in smaller delays?
Thanks & Regards,
Kiran
wait() is not a system task call, it is an event control statement. What you are asking would be the same as changing the behavior of if().
What you can do is move the statement into a virtual method, then override the method where needed.
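To make that suggestion concrete, here is the same pattern sketched in C++ rather than SystemVerilog (the names are invented; SystemVerilog virtual class methods are overridden in the same spirit): the fixed delay moves into a virtual method, and a derived class substitutes a shorter one without touching the call sites.

#include <chrono>
#include <thread>

class TestBase {
public:
    virtual ~TestBase() = default;
    // The fixed delay, wrapped in a virtual method instead of written inline.
    virtual void settle() {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    void run() {
        // ... drive stimulus ...
        settle();   // the call site stays the same in every test case
        // ... check results ...
    }
};

class FastTest : public TestBase {
    // Override with a much smaller delay where needed.
    void settle() override {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
};

int main() {
    FastTest t;
    t.run();   // run() is unchanged, but now uses the 1 ms settle()
}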

Structured Text: Function and Function Block (Pros and Cons)

I come from a computer science background and am used to traditional IT programming. I have relatively little experience with Structured Text. In my current project I am extensively using many function blocks. I am aware that this involves some memory issues and so on. Could anyone give me some advantages and disadvantages of each of them? Should I avoid them and write everything in a single program? Practical hints would be welcome, as I am about to release my application.
System : Codesys
I also come from the PC programming world, and there are certain object tricks I miss when programming in Codesys. The function blocks go a long way towards object thinking, though. They're too easy to peek into from the outside, so some discipline from the user is necessary to encapsulate the functionality or objects.
You shouldn't write a single piece of program to handle all functionality, but instead use the Codesys facilities to divide the program into objects where possible. This also means identifying which objects are alike and can be programmed as function blocks. The instance of a function block is created in memory when the program is downloaded, i.e. it is always visible for monitoring.
I normally use POU's to divide the project into larger parts, e.g. Machine1(prg), Machine2(prg) and Machine3(prg). If each machine has one or more motors of similar type, this is where the function blocks come in, so that I can program one motor object called FB_Motor, and reuse it for the necessary motor instances inside the 3 machine programs. Each instance can then hold its own internal states, timers, input output, or whatever a motor needs.
The structure for the above example is now:
MAIN, calls
  Machine1(prg), calls
    fbMotor1 (an instance of FB_Motor, local to Machine1)
    fbMotor2 (an instance of FB_Motor, local to Machine1)
  Machine2(prg), calls
    fbMotor1 (an instance of FB_Motor, local to Machine2)
  Machine3(prg), calls
    fbMotor1 (an instance of FB_Motor, local to Machine3)
    fbMotor2 (an instance of FB_Motor, local to Machine3)
    fbMotor3 (an instance of FB_Motor, local to Machine3)
The functions are another matter. Their data exists on the stack when the function is called, and when the function has returned its value, the data is released. There are lots of built-in functions, e.g. BOOL_TO_INT(), SQR(n) and so on.
I normally use functions for lookup and conversion functions. And they can be called from all around the program.
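To make the contrast concrete, here is a rough analogy in C++ terms (not CODESYS syntax; FB_Motor mirrors the example above, and scale() is an invented conversion-style function): a function block behaves like a class instance whose members persist between cycles, while a function's data lives only on the stack during the call.

#include <cstdio>

// FUNCTION_BLOCK analogue: state persists between cyclic calls.
class FB_Motor {
public:
    bool start = false;        // like VAR_INPUT
    bool running = false;      // like VAR_OUTPUT
    void operator()() {        // the FB body, called once per PLC cycle
        if (start && rampCounter < 100) ++rampCounter;
        if (!start) rampCounter = 0;
        running = (rampCounter >= 100);
    }
private:
    int rampCounter = 0;       // like VAR: internal state kept across cycles
};

// FUNCTION analogue: stateless, all data on the stack, gone after return.
double scale(double raw, double lo, double hi) {
    return lo + raw * (hi - lo) / 32767.0;
}

int main() {
    FB_Motor fbMotor1;         // instance memory exists for the program's lifetime
    fbMotor1.start = true;
    for (int cycle = 0; cycle < 150; ++cycle)
        fbMotor1();            // cyclic calls accumulate state inside the instance
    std::printf("running=%d scaled=%.2f\n",
                fbMotor1.running, scale(16384, 0.0, 10.0));
}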
Clarity, robustness and maintainability are everything in the PLC world. Function blocks help you achieve that if the structure is kept relatively flat (so one should avoid a function block inside a function block inside a function block, as opposed to true objects and their inheritance).
Also, the graphical languages are there for a reason: they visualise complex systems in an easy-to-digest form, so that the maintenance personnel of the future have an easier life following what is wrong with the PLC program and that part of the factory.
As for ST, it is an advantage to remember that it is based on strongly typed Wirthian languages (Ada, Pascal, etc.). Also, what is often more important than memory usage is the constant cycle time of the program (since it is a real-time system). Another cup of tea is the electrical layer of the control system, plus the physical layer and all the relations on that layer, which can backfire somewhere else in your program if not taken into account.

What is the architecture behind Scratch programming blocks?

I need to build a mini version of the programming blocks that are used in Scratch or later in snap! or openblocks.
The code in all of them is big and hard to follow, especially in Scratch, which is written in some kind of subset of Smalltalk, which I don't know.
Where can I find the algorithm they all use to parse the blocks and transform it into a set of instructions that work on something, such as animations or games as in Scratch?
I am really interested in the algorithmic or architecture behind the concept of programming blocks.
This is going to be just a really general explanation, and it's up to you to work out specifics.
Defining a block
There is a Block class that all blocks inherit from. They get initialized with their label (name), shape, and a reference to the method. When they are run/called, the associated method is passed the current context (sprite) and the arguments.
Exact implementations differ among versions. For example, in Scratch 1.x, methods took arguments corresponding to the block's arguments, and the context (this or self) is the sprite. In 2.0, they are passed a single argument containing all of the block's arguments and context. Snap! seems to follow the 1.x method.
Stack (command) blocks do not return anything; reporter blocks do.
Interpreting
The interpreter works somewhat like this. Each block contains a reference to the next one, and any subroutines (reporter blocks in arguments; command blocks in a C-slot).
First, all arguments are resolved. Reporters are called, and their return values stored. This is done recursively for reporter blocks nested inside each other.
Then, the command itself is executed. Ideally this is a simple command (e.g. move). The method is called, the Stage is updated.
Continue with the next block.
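A minimal sketch of that loop in C++ (Block, evaluate() and run() are my own names, not Scratch's): reporters plugged into the argument slots are resolved recursively, the command's method is executed, and execution follows the next reference.

#include <functional>
#include <iostream>
#include <memory>
#include <vector>

using Value = double;

struct Block {
    std::function<Value(std::vector<Value>&)> method;  // the block's behaviour
    std::vector<std::shared_ptr<Block>> args;          // reporters in its slots
    std::shared_ptr<Block> next;                       // block snapped below it
};

std::shared_ptr<Block> makeBlock(std::function<Value(std::vector<Value>&)> m,
                                 std::vector<std::shared_ptr<Block>> args = {}) {
    auto b = std::make_shared<Block>();
    b->method = std::move(m);
    b->args = std::move(args);
    return b;
}

// Resolve a reporter: recursively evaluate its own slots, then call it.
Value evaluate(const std::shared_ptr<Block>& b) {
    std::vector<Value> resolved;
    for (auto& a : b->args) resolved.push_back(evaluate(a));
    return b->method(resolved);
}

// Run a stack of command blocks: resolve args, execute, follow `next`.
void run(std::shared_ptr<Block> b) {
    while (b) {
        std::vector<Value> resolved;
        for (auto& a : b->args) resolved.push_back(evaluate(a));
        b->method(resolved);
        b = b->next;
    }
}

int main() {
    auto two   = makeBlock([](std::vector<Value>&)   { return 2.0; });
    auto three = makeBlock([](std::vector<Value>&)   { return 3.0; });
    auto plus  = makeBlock([](std::vector<Value>& a) { return a[0] + a[1]; },
                           {two, three});
    auto move  = makeBlock([](std::vector<Value>& a) {
        std::cout << "move " << a[0] << " steps\n"; return 0.0; }, {plus});
    run(move);   // prints "move 5 steps"
}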
C blocks
C blocks have a slightly different procedure. These are the if <> style blocks and the repeat <> ones. In addition to their ordinary arguments, they reference their "miniscript" subroutine.
For a simple if/else C block, just execute the subroutine normally if applicable.
When dealing with loops though, you have to make sure to thread properly, and wait for other scripts.
Events
Keypress/click events can be dealt with easily enough. Just execute them on keypress/click.
Something like broadcasts can be done by executing the hat when the broadcast stack is run.
Other events you'll have to work out on your own.
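For the broadcast case, here is a tiny dispatch sketch in C++ (whenIReceive() and broadcast() are invented names): each hat registers its script under the message name, and broadcasting runs every script registered for that message.

#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// message name -> scripts hanging under matching "when I receive" hats
std::map<std::string, std::vector<std::function<void()>>> hats;

void whenIReceive(const std::string& msg, std::function<void()> script) {
    hats[msg].push_back(std::move(script));
}

void broadcast(const std::string& msg) {
    for (auto& script : hats[msg]) script();  // execute each matching hat
}

int main() {
    whenIReceive("game over", [] { std::cout << "stop the music\n"; });
    whenIReceive("game over", [] { std::cout << "show the score\n"; });
    broadcast("game over");
}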
Wait blocks
This, along with threading, is the most confusing part of the interpretation to me. Basically, you need to figure out when to continue with the script. Perhaps set a timer to execute after the time, but you still need to thread properly.
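One way to sketch it (my own guess at an approach, not Scratch's actual code) is cooperative scheduling: each script records when it may resume, and the main loop only steps scripts whose wake-up time has passed, so a waiting script never blocks the others.

#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Script {
    int id = 0;
    Clock::time_point wakeAt = Clock::now();  // when this script may run again
    int step = 0;
    bool done = false;
};

int main() {
    std::vector<Script> scripts{{1}, {2}};
    while (true) {
        bool allDone = true;
        for (auto& s : scripts) {
            if (s.done) continue;
            allDone = false;
            if (Clock::now() < s.wakeAt) continue;   // still in its wait block
            std::cout << "script " << s.id << " runs step " << s.step << "\n";
            if (++s.step == 3) { s.done = true; continue; }
            s.wakeAt = Clock::now() + std::chrono::milliseconds(500); // "wait 0.5 secs"
        }
        if (allDone) break;
        std::this_thread::sleep_for(std::chrono::milliseconds(10));  // don't spin hot
    }
}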
I hope this helps!

Objective-C: Calling and copying the same block from multiple threads

I'm dealing with neural networks here, but it's safe to ignore that, as the real question has to do with blocks in Objective-C. Here is my issue. I found a way to convert a neural network into one big block that can be executed all at once. However, it runs really, really slowly relative to activating the network. This seems a bit counterintuitive.
If I gave you a group of nested functions like
CGFloat answer = sin(cos(gaussian(1.5*x + 2.5*y)) + (.3*d + bias));
// or in block notation
^(CGFloat x, CGFloat y, CGFloat d, CGFloat bias) {
    return sin(cos(gaussian(1.5*x + 2.5*y)) + (.3*d + bias));
};
In theory, running that function multiple times should be easier/quicker than looping through a bunch of connections, setting nodes active/inactive, etc., all of which essentially computes this same function in the end.
However, when I create a block (see thread: how to create function at runtime) and run this code, it is slow as all hell for any moderately sized network.
Now, what I don't quite understand is:
When you copy a block, what exactly are you copying?
Let's say I copy a block twice, copy1 and copy2. If I call copy1 and copy2 on the same thread, is the same function called? I don't understand exactly what the docs mean for block copies: Apple Block Docs
Now if I make those copies again, copy1 and copy2, but instead call the copies on separate threads, how do the functions behave? Will this cause some sort of slowdown, as each thread attempts to access the same block?
When you copy a block, what exactly are you copying?
You are copying any state the block has captured. If that block captures no state -- which that block appears not to -- then the copy should be "free" in that the block will be a constant (similar to how @"" works).
Let's say I copy a block twice, copy1 and copy2. If I call copy1 and copy2 on the same thread, is the same function called? I don't understand exactly what the docs mean for block copies: Apple Block Docs
When a block is copied, the code of the block is never copied. Only the captured state. So, yes, you'll be executing the exact same set of instructions.
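A tiny illustration of that point (assuming clang with -fblocks and the <Block.h> runtime header, as on Apple platforms): the copies all execute the same code; each copy just carries its own snapshot of the captured state.

#include <Block.h>
#include <cstdio>

int main() {
    int captured = 42;   // captured by value by the block below
    int (^original)(void) = ^{ return captured * 2; };

    // Copying duplicates only the captured state, never the code.
    int (^copy1)(void) = Block_copy(original);
    int (^copy2)(void) = Block_copy(original);

    // All three run the exact same instructions: prints "84 84 84".
    std::printf("%d %d %d\n", original(), copy1(), copy2());

    Block_release(copy1);
    Block_release(copy2);
    return 0;
}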
Now if I make those copies again, copy1 and copy2, but instead call the copies on separate threads, how do the functions behave? Will this cause some sort of slowdown, as each thread attempts to access the same block?
The data captured within a block is not protected from multi-threaded access in any way so, no, there would be no slowdown (but there will be all the concurrency synchronization fun you might imagine).
Have you tried sampling the app to see what is consuming the CPU cycles? Also, given where you are going with this, you might want to become acquainted with your friendly local disassembler (otool -TtVv binary/or/.o/file) as it can be quite helpful in determining how costly a block copy really is.
If you are sampling and seeing lots of time in the block itself, then that is just your computation consuming lots of CPU time. If the block were to consume CPU during the copy, you would see the consumption in a copy helper.
Try creating a source file that contains a bunch of different kinds of blocks: with parameters, without, with captured state, without, with captured blocks with/without captured state, etc., and a function that calls Block_copy() on each.
Disassemble that and you'll gain a deep understanding of what happens when blocks are copied. Personally, I find x86_64 assembly easier to read than ARM. (This all sounds like good blog fodder -- I should write it up.)