Difference between Isolate vs Thread vs Process? - flutter

As we know, a Flutter app runs in an isolate. Somewhere I read that isolates are not system processes. So what really is an isolate, and how is it different from a process?

Good question. But there is no definite answer, because it depends on the language implementation.
Let me just quote the docs (emphasis mine):
Within an app, all Dart code runs in an isolate. Each Dart isolate has
a single thread of execution and shares no mutable objects with other
isolates. To communicate with each other, isolates use message
passing. Although Dart’s isolate model is built with underlying
primitives such as processes and threads that the operating system
provides, the Dart VM’s use of these primitives is an implementation
detail that this page doesn’t discuss.
Many Dart apps use only one isolate (the main isolate), but you can
create additional isolates, enabling parallel code execution on
multiple processor cores.
As you might know, threads and processes are managed by the OS.
Isolates are managed by the Dart runtime, and they use the underlying OS threads and processes.
The Dart runtime controls the main isolate (where code normally runs)
and any other isolates that the app creates.
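To make the message-passing part concrete, here is a minimal plain-Dart sketch; heavySum and its loop are just illustrative stand-ins for real work:

```dart
import 'dart:isolate';

// Runs in the new isolate: does some heavy work, sends the result back.
void heavySum(SendPort replyTo) {
  var sum = 0;
  for (var i = 0; i < 100000000; i++) {
    sum += i;
  }
  replyTo.send(sum);
}

Future<void> main() async {
  final receivePort = ReceivePort();
  // The new isolate gets its own heap and event loop; the only link
  // between the two isolates is the SendPort we pass in.
  await Isolate.spawn(heavySum, receivePort.sendPort);
  final result = await receivePort.first;
  print('Result from the other isolate: $result');
}
```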

Related

Using a VkFence with a std::condition_variable

A VkFence can be waited upon or queried about its state. Is it possible to have a callback invoked by the Vulkan implementation when the fence is ready instead?
This would allow it to be used with objects such as a std::condition_variable. When the fence would be ready, the condition_variable would get notified.
Such an approach would also allow integration with libraries like Boost.Fiber, which would completely remove the need for the thread to sleep, but rather it could do useful work while waiting upon the fence.
If this is not possible in base Vulkan, is there an extension that allows it?
Vulkan doesn't work that way. Vulkan devices and queues execute independently of the CPU. Indeed, with one or two exceptions, Vulkan implementations only ever use CPU resources within the scope of a particular function call and only on the thread on which this call was made. Even debug callbacks are made within the scope of the function that caused the error.
There is no mechanism for Vulkan implementations to use CPU resources without the explicit consent of the user of the API (again, minus one or two exceptions). So no callbacks that act outside of an API call.
Vulkan does have a way to extract a native synchronization object from a VkFence, but it is surprisingly not useful on Windows. While you can get a HANDLE, it cannot be waited on through the Win32 API; it is mainly for interop with other APIs (like converting it to a D3D12 sync object), not for waiting on it yourself. But the file descriptor extraction operation can get you a fully functional sync object... if the implementation lets you.

Flutter & Dart : What is an isolate in flutter?

As I study Flutter, I notice that there is a thing called an isolate.
What is it for, and how do we implement it? Can you give me a simple example?
Thank you in advance.
An isolate in Flutter is similar to a thread.
"Flutter is single-threaded but it is capable of doing multi-threading
stuff using Isolates (many processes). When Dart starts, there will be
one main Isolate(Thread). This is the main executing thread of the
application, also referred to as the UI Thread. In simple Flutter apps
you will only ever use one Isolate, and your app will run smoothly.
Isolates are:
- Dart's version of threads.
- They do not share memory with each other.
- They use ports and messages to communicate between them.
- They may use another processor core if available.
- They run code in parallel."
Docs & Simple Example
"Concurrent programming using isolates: independent workers that are
similar to threads but don't share memory, communicating only via
messages."
Official Docs
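As a concrete starting point, here is a minimal sketch using Flutter's compute helper; parseBigJson and rawJson are made-up names, and the heavy work here is just JSON decoding:

```dart
import 'dart:convert';

import 'package:flutter/foundation.dart';

// Must be a top-level (or static) function so it can run in another isolate.
// Assumes the payload is a JSON array.
List<dynamic> parseBigJson(String raw) => jsonDecode(raw) as List<dynamic>;

Future<void> handleResponse(String rawJson) async {
  // compute() spawns a short-lived isolate, runs parseBigJson there,
  // and sends the result back, so the UI isolate never blocks.
  final items = await compute(parseBigJson, rawJson);
  print('Parsed ${items.length} items off the UI thread');
}
```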

Difference between Thread, Isolate and Process in Dart

What's the difference between Thread, Isolate and a Process in Dart?
As far as I know, Dart is a single-threaded language, but it can spawn many isolates which don't share memory with each other, and we can do the heavy lifting on them and return the result without blocking the UI.
But what is a Process for? Is it part of an Isolate? Can anyone describe the above three in more detail?
And when we do asynchronous programming using a Future and, let's say, we are doing heavy lifting in it, will that block the UI thread if it is awaited using the await keyword?
A Process is a native OS (Unix, Windows, macOS) construct: it has its own address space and execution environment and contains one or more threads. In Dart, an application consists of one or more threads, one of which is the main UI thread, while the rest are typically called Isolates.
In contrast to what the previous answer said, all Dart code runs in an isolate.
Actually, your understanding is correct, based on what you said in the comments.
From the docs (emphasis mine):
Within an app, all Dart code runs in an isolate. Each Dart isolate has
a single thread of execution and shares no mutable objects with other
isolates. To communicate with each other, isolates use message
passing. Although Dart’s isolate model is built with underlying
primitives such as processes and threads that the operating system
provides, the Dart VM’s use of these primitives is an implementation
detail that this page doesn’t discuss.
Many Dart apps use only one isolate (the main isolate), but you can
create additional isolates, enabling parallel code execution on
multiple processor cores.
Your question hasn't been answered yet, and it will not be answered easily, because it depends on the language implementation.
As for your other question, "And when we do asynchronous programming using a Future and, let's say, we are doing heavy lifting in it, will that block the UI thread if it is awaited using the await keyword?"
-> Yes, it will block the UI, unless you use another isolate.
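To make that concrete, here is a small sketch; the recursive fib is just a stand-in for heavy work, and Isolate.run requires Dart 2.19 or newer:

```dart
import 'dart:isolate';

int fib(int n) => n < 2 ? n : fib(n - 1) + fib(n - 2);

// Even though this function is async, the call below runs on the main
// isolate's single thread, so awaiting it still freezes the UI while
// fib() is computing.
Future<int> blockingVersion() async {
  return fib(40);
}

// Isolate.run moves the same work to a separate isolate, so the main
// isolate stays free to handle UI events while awaiting the result.
Future<int> nonBlockingVersion() {
  return Isolate.run(() => fib(40));
}
```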

Asynchronous Computation in scalajs Diode

I have a user interface with a button, which executes the function longComputation(x: A): A and then updates the user interface (in particular the model) with the new result. This function may take a long time to compute the result and should therefore run in parallel.
Diode provides me with Effect, PotAction, and AsyncAction. I read the documentation about Effects and PotActions/AsyncActions, but I cannot even get a simple example to work.
Can someone point me to or provide a simple working example?
I created a ScalaFiddle based on the SimpleCounter example. There is a LongComputation button, which should run in parallel, but it does not.
In JavaScript you cannot run things in parallel without using Web Workers because the JS engine is single-threaded. Web Workers are more like separate processes than threads, as they don't share memory and you need to send messages to communicate between workers and the main thread.
I have less than 50 reputation so I cannot comment, and I have to create a new answer instead of commenting on ochrons' answer:
As mentioned, Web Workers communicate via message passing and share no state. This concept is somewhat similar to Akka; there is even Akka.js, which enables you to use actor systems in Scala.js and therefore in the browser.

When deadlocks occur in modern operating systems?

I know deadlocks were a hot research topic in the past. But even though I have studied lots of modern operating systems, I cannot see any major problem with deadlocks now. I know that some (most) resources on which deadlocks can occur are strictly managed by the operating system itself, which seems to prevent deadlocks somehow, and I really haven't seen any case related to a deadlock. I know that popular systems with different design principles handle some resources differently from others, but they all seem to keep the system deadlock-free.
Try to use two mutexes in your program and lock them in the first thread in this sequence: mutex1, sleep(500 ms), mutex2; and in the second thread: mutex2, sleep(1000 ms), mutex1.
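Sketched in Dart (the language most of this page is about): the core libraries have no blocking mutex, so the Lock class below is a hypothetical minimal async lock, but the lock-ordering problem is exactly the one described above:

```dart
import 'dart:async';

// Hypothetical minimal async lock, just to illustrate lock ordering.
class Lock {
  Completer<void>? _held;

  Future<void> acquire() async {
    while (_held != null) {
      await _held!.future; // wait until the current holder releases
    }
    _held = Completer<void>();
  }

  void release() {
    final holder = _held;
    _held = null;
    holder?.complete();
  }
}

final m1 = Lock();
final m2 = Lock();

Future<void> worker1() async {
  await m1.acquire();
  await Future<void>.delayed(const Duration(milliseconds: 500));
  await m2.acquire(); // never returns: worker2 already holds m2
  m2.release();
  m1.release();
}

Future<void> worker2() async {
  await m2.acquire();
  await Future<void>.delayed(const Duration(milliseconds: 1000));
  await m1.acquire(); // never returns: worker1 already holds m1
  m1.release();
  m2.release();
}

void main() {
  // Both futures hang forever: each worker holds one lock and waits
  // for the other, a classic circular-wait deadlock.
  worker1();
  worker2();
}
```

Make both workers take the locks in the same order (always m1 before m2) and the deadlock disappears.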
At the system level: in Windows (including 8.1), if your application uses SendMessage with HWND_BROADCAST and one application is hung, your application will also hang. The same goes for some cases of DDE communication (including ShellExecute for some programs): if one application is not responsive, your application can end up hung as well.
But you can use SendMessageTimeout...
Deadlock will always be possible if processes or threads are synchronized, and synchronization of processes and threads is a "must-have" element of applications.
AND... SYSTEM-WIDE deadlock (Windows):
Save all your documents before this action.
Create HWND h1 with parent=0 or parent=GetDesktopWindow and styles 0x96cf0000
Create HWND h2 with parent=h1 and styles 0x96cf0000
Create HWND h3 with parent=h2 and styles 0x56cf0000 (this one must be a child window).
Use ::SetParent(h1, h3);
Then click any of these windows.
The system will try to reorder the windows in a cyclic (triangular) order. The application hangs, and if any other application tries to call SetWindowPos, it will never return from that function. Task Manager won't help, Ctrl+Alt+Del also stops working, and the CPU sits at 100% usage... Only a hard reset will help you.
There is a way to prevent it, but the situation must be detected ASAP.
Operating system deadlocks still happen. When a system has limited, contended resources that it can't reclaim, a deadlock is still possible.
In Linux, look at kernel stalls; these happen when I/O doesn't complete in a timely manner. Kernel stalls are particularly interesting between VMware and guest operating systems.
As for external instigators, deadlocks happen when SAN systems and networks have issues.
New-release deadlocks happen fairly often while a kernel is maturing, not for any single user, but across the community as a whole.
Ever get a blue screen or instant reboot? Some of those are caused by lost resources.
Kernels are fairly mature, and have gotten good at reclaiming resources, but aren't perfect.
Most modern resource handlers tend to present themselves as services now instead of lockable objects. Most resource sharing within the operating system relies on separate channels, alleviating much of the overlap, and there is a higher reliance on queues and toggles instead of direct locking contention on shared buffers. These are general trends in OS components that leave less opportunity for deadlocks, but there is no way to guarantee a deadlock-free system.