Is reading files from disk a real-time operation on VxWorks?

There is no doubt that we can read files from the disk on VxWorks.
The question is whether this is a real-time API or not.
If it is not, one more question arises:
Is it possible to implement a real-time API to read files from the disk?

Can you elaborate more on the context for the question and what you intend to address?
If you are asking whether a read operation on a disk can block, then the answer is yes, but it depends on how the I/O layer is implemented for the driver being used.
If you do not want a read to block, you can still use select() to detect whether a read operation can be performed on the device.
In addition, you can use ioctl() commands to get information about the device or the file.
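For illustration, here is a minimal sketch of that pattern using standard POSIX calls, which VxWorks's I/O system also provides; the 100 ms timeout is an arbitrary example value:

    #include <sys/ioctl.h>
    #include <sys/select.h>
    #include <unistd.h>

    /* Attempt a read only if the descriptor becomes readable within a
     * bounded time, instead of blocking indefinitely inside read().
     * (For plain disk files select() reports "ready" immediately; the
     * check matters most for device descriptors.) */
    ssize_t bounded_read(int fd, void *buf, size_t len)
    {
        fd_set readfds;
        struct timeval timeout = { 0, 100 * 1000 };   /* 100 ms upper bound */

        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);

        /* select() returns 0 on timeout, >0 when fd is ready, <0 on error. */
        if (select(fd + 1, &readfds, NULL, NULL, &timeout) <= 0)
            return -1;                                /* no data within the deadline */

        int pending = 0;
        ioctl(fd, FIONREAD, &pending);                /* bytes readable without blocking */

        return read(fd, buf, len);
    }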

"real-time" OS means you have defined time-slices and monitoring of time effort, that's it.

Related

Sending custom data to socket using eBPF

I'm trying, finally, to understand eBPF and maybe use it in an upcoming project.
For the sake of simplicity I started by reading the bcc documentation.
In my project I'll need to send some data over network upon some kernel function calls.
Can that be done without sending the data to userspace first?
I see that I can redirect skbs from one socket to another etc., and I see that I can submit custom data to user space. Is there a way to get the best of both worlds?
EDIT: I'm trying to log some file system events to another server that'll collect this data from multiple machines. Those machines can be fairly busy in some situations. It should be real-time and low-latency.
I'd love to avoid going through userspace, to prevent copying the data back and forth and to reduce software overhead as much as possible.
Thank you all!
It seems this question can be summarized to: is it possible to send data over the network from a BPF tracing program (kprobes, tracepoints, etc.)?
The answer to that question is no. As far as I know, there is currently no way to craft and send packets over the network from BPF programs. You can resend a received packet to the network with some helpers, but those are only available to networking BPF programs.
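Since tracing programs cannot craft packets, the usual pattern is the one the question hoped to avoid: push the event to user space (for example through a perf buffer, as the question mentions) and let a small daemon forward it over the network. A minimal libbpf-style sketch, where the event layout and the kprobe on vfs_unlink are illustrative assumptions:

    #include <linux/bpf.h>
    #include <linux/ptrace.h>
    #include <bpf/bpf_helpers.h>

    /* Perf event array used to hand events to a user-space reader. */
    struct {
        __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
        __uint(key_size, sizeof(int));
        __uint(value_size, sizeof(int));
    } events SEC(".maps");

    struct event {
        __u32 pid;
        char  comm[16];
    };

    SEC("kprobe/vfs_unlink")
    int trace_unlink(struct pt_regs *ctx)
    {
        struct event e = {};

        e.pid = bpf_get_current_pid_tgid() >> 32;
        bpf_get_current_comm(&e.comm, sizeof(e.comm));

        /* Copy the event out to user space; a daemon there forwards
         * it over the network. */
        bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &e, sizeof(e));
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";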

Can applications and hardware interact directly?

I am a new student taking an OS course. I already know that the OS mediates communication between applications and hardware in modern computers. But sometimes it seems more time-efficient if applications could control the hardware directly. May I ask whether that is possible?
Yes, it is possible, but the result would be a single-application computer: a machine that can run only that one particular application.
Applications handling hardware directly are faster because they avoid the overhead of the OS's management.
You can take the example of DMA - Direct Memory Access. This feature is useful at any time that the CPU cannot keep up with the rate of data transfer, or when the CPU needs to perform work while waiting for a relatively slow I/O data transfer.
But you should keep in mind the importance of the operating system in managing the rest of the hardware, as not everything can be handled that trivially; some devices need real processing and decision-making.
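As a concrete illustration of what "directly" can mean, under stated assumptions: on Linux, a privileged process can map a device's registers into its own address space and poke them with no driver in between. The register address below is hypothetical, and getting it wrong can hang the machine, which is exactly the kind of mistake the OS normally protects against:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REG_BASE 0xFE200000u   /* hypothetical device register block */
    #define REG_SIZE 4096u

    int main(void)
    {
        /* Requires root; bypasses every driver the OS would normally
         * interpose between us and the device. */
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        volatile uint32_t *regs = mmap(NULL, REG_SIZE,
                                       PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, REG_BASE);
        if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        regs[0] = 0x1;             /* poke a device register directly */

        munmap((void *)regs, REG_SIZE);
        close(fd);
        return 0;
    }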

How much data can a socket store in its buffer before read?

I have a client / server application where clients send messages to the server. Due to a legacy library that I use, my server cannot read immediately but must wait for a condition to come true until it reads the messages. How much data can a socket store? Is there a fixed buffer size/limit?
Thanks.
It depends on the size of the socket receive buffer, whose default value varies among operating systems. You can control it from your application via setsockopt() and the SO_RCVBUF option.
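For example, a sketch of adjusting and then querying the buffer from C (the 1 MiB figure is an arbitrary example; the kernel may round or clamp the value, and Linux in particular doubles it and caps it at net.core.rmem_max for unprivileged processes):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        /* Ask the kernel for a larger receive buffer. */
        int requested = 1 << 20;                  /* 1 MiB */
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));

        /* Read back what the kernel actually granted. */
        int actual = 0;
        socklen_t len = sizeof(actual);
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &actual, &len);
        printf("receive buffer is now %d bytes\n", actual);

        close(fd);
        return 0;
    }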
That depends on many factors, many of which you can't control. This is not the correct approach.
You should read the data as soon as it is available, but only process it once the conditions are met.
Edit: I guess I misinterpreted the question; see EJP's answer.

I'm writing an application that implements a questionnaire. Does it qualify as a real-time application?

Keeping it simple, I have a server and a client. The server sends questions one by one, and the client sends the answers as soon as they are given.
So, would you say this application is real time?
Based on this quote from Wikipedia, which summarizes my understanding of what a real-time application is:
"A system is said to be real-time if the total correctness of an operation depends not
only upon its logical correctness, but also upon the time in which it is performed. The classical conception is that in a hard real-time or immediate real-time system, the completion of an operation after its deadline is considered useless - ultimately, this may cause a critical failure of the complete system. A soft real-time system on the other hand will tolerate such lateness, and may respond with decreased service quality (e.g., omitting frames while displaying a video)."
I would say no, it is not real-time.
No. Real-time systems are ones where the OS/application has to respond to the environment within a known period, for example an embedded flight-control system on a fighter jet.
Wikipedia has a fairly good article on Real-time computing.
If you are using a protocol like TCP/IP for the communication, that isn't a real-time system, because such communication links are not deterministic in their response times by nature. The only sure thing is that the message will arrive; when? Who knows...

What's the best IPC mechanism for medium-sized data in Perl? [closed]

I'm working on designing a multi-tiered app in Perl and I'm wondering about the pros and cons of the various IPC mechanisms available to me. I'm looking at handling moderately-sized data, typically a few dozen kilobytes but up to a couple of megabytes, and the load is pretty light, at most a couple of hundred requests per minute.
My primary concerns are maintainability and performance (in that order). I don't think I'll need to scale up to more than one server, or port off of our main platform (RHEL), but I suppose it's something to consider.
I can think of the following options:
Temporary files - Simplistic, probably the worst option in terms of speed and storage requirements
UNIX domain sockets - Not portable, not scalable
Internet Sockets - Portable, scalable
Pipes - Portable, not scalable (?)
Considering that scalability and portability are not my primary concerns, I need to learn more. What's the best choice, and why? Please comment if you need additional information.
EDIT: I'll try to give more detail in response to ysth's questions (warning, wall of text follows):
Are readers/writers in a one-to-one relationship, or something more complicated?
What do you want to happen to the writer if the reader is no longer there or busy?
And vice versa?
What other information do you have about your desired usage?
At this point, I'm contemplating a three-tiered approach, but I'm not sure how many processes I'll have in each tier. I think I need to have more processes towards the left side and fewer toward the right, but maybe I should have the same number across the board:
.---------. .----------. .-------.
| Request | -----> | Business | -----> | Data |
| Manager | <----- | Logic | <----- | Layer |
`---------' `----------' `-------'
These names are still generic and probably won't make it into the implementation in these forms.
The request manager is responsible for listening for requests from different interfaces, for example web requests and CLI (where response time is important) and e-mail (where response time is less important). It performs logging and manages the responses to the requests (which are rendered in a format appropriate to the type of request).
It sends data about the request to the business logic which performs logging, authorization depending on business rules, etc.
The business logic (if it needs to) then requests data from the data layer, which can talk either to (most often) the internal MySQL database or to some other data source outside our team's control (e.g., our organization's primary LDAP servers, or our DB2 employee information database, etc.). This is mostly just a wrapper which formats the data in a uniform way so that it can be handled more easily in the business logic.
The information then flows back to the request manager for presentation.
If, when data is flowing to the right, the reader is busy, for the interactive requests I'd like to simply wait a suitable period of time, and return a timeout error if I don't get access in that amount of time (e.g. "Try again later"). For the non-interactive requests (e.g. e-mail), the polling system can simply exit and try again on the next invocation (which will probably be once per 1-3 minutes).
When data is flowing in the other direction, there shouldn't be any waiting situations. If one of the processes has died when trying to travel back to the left, all I can really do is log and exit.
Anyway, that was pretty verbose, and since I'm still in early design I probably still have some confused ideas in there. Some of what I've mentioned is probably tangential to the issue of which IPC system to use. I'm open to other suggestions on the design, but I was trying to keep the question limited in scope (for example, maybe I should consider collapsing down to two tiers, which is much simpler for IPC). What are your thoughts?
If you're unsure about your exact requirements at the moment, try to think of a simple interface that you can code to, that any IPC implementation (be it temporary files, TCP/IP or whatever) needs to support. You can then choose a particular IPC flavour (I would start with whatever's easiest and/or easiest to debug -- probably temporary files) and implement the interface using that. If that turns out to be too slow, implement the interface using e.g. TCP/IP. Actually implementing the interface does not involve much work as you will essentially just be forwarding calls to some existing library.
The point is that you have a high-level task to perform ("transmit data from program A to program B") which is more or less independent of the details of how it is performed. By establishing an interface and coding to it, you isolate the main program from changes in the event that you need to change the implementation.
Note that you don't need to use any heavyweight Perl language mechanisms to capitalise on the idea of having an interface. You could simply have e.g. 3 different packages (for temp files, TCP/IP, Unix domain sockets), each of which exports the same set of methods. Choosing which implementation you want to use in your main program amounts to choosing which module to use.
Temporary files (and related things, like a shared memory region) are probably a bad bet. If you ever want to run your server on one machine and your clients on another, you will need to rewrite your application. If you pick any of the other options, at least the semantics are essentially the same if you need to switch between them at a later date.
My only real advice, though, is to not write this yourself. On the server side, you should use POE (or Coro, etc.) rather than doing select on the socket yourself. Also, if your interface is going to be RPC-ish, use something like JSON-RPC-Common from CPAN.
Finally, there is IPC::PubSub, which might work for you.
Temporary files have other problems besides that. I think Internet sockets are really the best choice. They are well documented, and as you say, scalable and portable. Even if that is not a core requirement, you get it nearly for free. Sockets are pretty easy to deal with, and again there is copious documentation. You can build your data-sharing mechanism and protocol out into a library and never have to look at it again!
UNIX domain sockets are portable across Unices; they are no less portable than pipes, and they are also more efficient than IP sockets.
Anyway, you missed a few options, shared memory for example. Some would add databases to that list but I'd say that's a rather heavyweight solution.
Message queues would also be a possibility, though you'd have to change a kernel option for it to handle such large messages. Otherwise, they have an ideal interface for a lot of things, and IMHO they are greatly underused.
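To make the kernel-option caveat concrete: with System V message queues, a single message larger than kernel.msgmax (8192 bytes by default on Linux) is rejected outright, so for multi-megabyte payloads you would raise kernel.msgmax and kernel.msgmnb first. A minimal C sketch of the send side (Perl's core IPC::SysV module hits the same limits):

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct big_msg {
        long mtype;
        char mtext[65536];     /* larger than the default 8 KiB msgmax */
    };

    int main(void)
    {
        /* Private queue for illustration; real code would use ftok(). */
        int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);

        struct big_msg msg = { .mtype = 1 };
        strcpy(msg.mtext, "payload...");

        /* Fails with EINVAL unless the limits have been raised, e.g.
         *   sysctl -w kernel.msgmax=65536 kernel.msgmnb=131072 */
        if (msgsnd(qid, &msg, sizeof(msg.mtext), 0) < 0)
            perror("msgsnd");

        msgctl(qid, IPC_RMID, NULL);              /* remove the queue */
        return 0;
    }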
I generally agree, though, that using an existing solution is better than building something of your own. I don't know the specifics of your problem, but I'd suggest you check out the IPC section of CPAN.
There are so many different options because most of them are better for some particular case, but you haven't really given any information that would identify your case.
Are readers/writers in a one-to-one relationship, or something more complicated?
What do you want to happen to the writer if the reader is no longer there or busy? And vice versa?
What other information do you have about your desired usage?
For "interactive" requests (holding the connection open while waiting for a response (asynchronously or not): HTTP + JSON. JSON::XS is insanely fast. Everyone and everything can speak HTTP and it's easy to load balance, debug, ...
For queued requests ("please do this, thanks!"): Beanstalkd and Beanstalk::Client. Serialize the requests in the beanstalk queue with JSON.
Thrift might also be worth looking into depending on your application.