I am looking for a block that works as a digital de-multiplexer: I have one input signal and one selector input that determines which output line the input gets passed through to.
I have looked at the Demux block in Simulink, but it does not seem to do this:
"Demux
Split vector signals into scalars or smaller vectors."
I have also looked at "Output Switch" but this only seems to take queues as input.
You can probably achieve what you want with 2 multiport switch blocks and one ground block:
Configure each multiport switch block to have 2 data inputs
Connect your data input (in) to data input port 1 of the first multiport switch and to data input port 2 of the second multiport switch
Connect the ground block to the two remaining data input ports
Connect your selector input (sel) to the control port of each multiport switch
Connect the output of the first multiport switch block to your first output (O_0), and the output of the second multiport switch block to your second output (O_1).
You can wrap up everything into a subsystem and even include it in a library if you intend to reuse it in different models. I would obviously test it first to make sure it has the intended behaviour.
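If it helps to sanity-check the wiring, here is the routing logic it implements, written as a small Go program (my own illustration, not Simulink; it assumes the multiport switches use the default one-based control values 1 and 2):

```go
// A minimal sketch of the behaviour the two multiport switches implement:
// the selected output carries the input, the other output is held at
// ground (zero).
package main

import "fmt"

// demux2 routes in to one of two outputs based on sel (1 or 2).
func demux2(in float64, sel int) (o0, o1 float64) {
	switch sel {
	case 1:
		return in, 0 // first switch passes in, second passes ground
	case 2:
		return 0, in // first switch passes ground, second passes in
	default:
		return 0, 0
	}
}

func main() {
	o0, o1 := demux2(3.14, 1)
	fmt.Println(o0, o1) // 3.14 0
	o0, o1 = demux2(3.14, 2)
	fmt.Println(o0, o1) // 0 3.14
}
```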
I want to create a customized block in Simulink where only the input and output ports are defined, and then add any other block (machine learning, if possible) to process the input and generate the output. I have searched a lot, but the examples on MathWorks are very specific. I am looking for a very general solution where a simple, empty block with only input/output ports can be added, and the output is generated based on the input.
Thanks in advance,
D.
When a process is waiting for some user input and ^C is pressed, a signal is sent that kills that process. However, the same does not happen when that process is a bash/python interpreter. Also, echo ^c prints something on the console, so I am assuming that it is a valid Unicode character.
So how do some character inputs get redirected into the input stream for the process to consume, while others get used as signals? Where is this decided, which predefined configuration gets used, and when are those configuration values set?
You've stumbled into the magical world of the TTY layer.
The TL;DR is that there is a big distinction between using a pipe (e.g. a file or another command piped to stdin) and having a console attached to stdin. The console's line discipline is what hijacks the ^C character (which is just a normal, 8-bit character) and sends a signal to the foreground process group.
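To see the line discipline at work, here is a minimal sketch in Go (my choice of language, not the question's; it assumes the golang.org/x/term package is available). Putting the terminal into raw mode clears ISIG, so ^C stops being turned into SIGINT and arrives as the ordinary byte 0x03:

```go
// Read raw bytes from the terminal: in raw mode, pressing Ctrl-C delivers
// the byte 0x03 to the process instead of sending SIGINT to the
// foreground process group.
package main

import (
	"fmt"
	"os"

	"golang.org/x/term"
)

func main() {
	fd := int(os.Stdin.Fd())

	// Save the current line-discipline settings and switch to raw mode
	// (this clears ISIG, ICANON, ECHO, ... via termios under the hood).
	oldState, err := term.MakeRaw(fd)
	if err != nil {
		panic(err)
	}
	defer term.Restore(fd, oldState) // put the terminal back on exit

	buf := make([]byte, 1)
	for {
		if _, err := os.Stdin.Read(buf); err != nil {
			return
		}
		fmt.Printf("got byte 0x%02x\r\n", buf[0])
		if buf[0] == 0x03 { // ^C arrives as a plain byte in raw mode
			return
		}
	}
}
```

If the program crashes before restoring the old state, stty sane will rescue the terminal.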
I intend to set the "don't fragment" flag bit in Go, the same thing as in this post, which does it in C. I checked the constant list but I didn't find this option. So what is the corresponding option in Go?
Thanks in advance!
How to set "don't fragment" flag bit for TCP packet in Go?
First up you should know that TCP really doesn't like IP fragments. Most if not all major implementations avoid fragmentation for TCP segments by using path MTU discovery.
The TL;DR is that the typical IP packet containing a TCP segment has a DF bit set. You can (and should) try this out. Here I am sniffing a few seconds of traffic between my machine and stackoverflow.com:
% tshark -w /tmp/tcp.pcap tcp and host stackoverflow.com
<wait a few seconds>
% tshark -r /tmp/tcp.pcap -T fields -e ip.flags | sort | uniq -c
186 0x00000002
0x02 means the DF bit is set. I confess in other captures I have seen the occasional TCP segment in an IP packet without a DF bit; I suspect rfc1191 has an explanation for this.
Now back to your question: I think there's no portable way to set the DF bit, and this is a more widespread problem (there isn't even a POSIX-portable way to do it).
There is (likely) an escape hatch in the relevant package for your implementation under golang.org/x/sys.
For example, on a Unix that supports IP_DONTFRAG, such as FreeBSD, one could use unix.SetsockoptInt and dig up the relevant constant value.
On Linux there is no IP_DONTFRAG, as you discovered from the question you linked. The workaround seems to be to use IP_MTU_DISCOVER, which happens to be available as a constant in the unix package. You can use that same unix.SetsockoptInt to set it.
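As a rough sketch of that workaround (Go, Linux only, untested on my side; the host and port are just an example): grab the connection's raw file descriptor and set IP_MTU_DISCOVER to IP_PMTUDISC_DO, which tells the kernel to set DF on outgoing packets:

```go
// Set the DF bit on a TCP connection's outgoing packets (Linux).
package main

import (
	"log"
	"net"

	"golang.org/x/sys/unix"
)

func main() {
	conn, err := net.Dial("tcp4", "stackoverflow.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Get access to the underlying socket descriptor.
	raw, err := conn.(*net.TCPConn).SyscallConn()
	if err != nil {
		log.Fatal(err)
	}

	var sockErr error
	err = raw.Control(func(fd uintptr) {
		// IP_PMTUDISC_DO: always set DF, never fragment locally.
		sockErr = unix.SetsockoptInt(int(fd), unix.IPPROTO_IP,
			unix.IP_MTU_DISCOVER, unix.IP_PMTUDISC_DO)
	})
	if err != nil {
		log.Fatal(err)
	}
	if sockErr != nil {
		log.Fatal(sockErr)
	}
}
```

If you need the option in place before the handshake, net.Dialer's Control callback lets you run the same SetsockoptInt call before connect() happens.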
I'd like to create a named pipe, like the one created by "mkfifo", but with one caveat: I want the pipe to be bidirectional. That is, I want process A to write to the fifo and process B to read from it, and vice versa. A pipe created by "mkfifo" allows process A to read back the data it has written to the pipe. Normally I'd use two pipes, but I am trying to simulate an actual device, so I'd like the semantics of open(), read(), write(), etc. to be as similar to the actual device as possible. Does anyone know of a technique to accomplish this without resorting to two pipes or a named socket?
Or a pty ("pseudo-terminal interface"). See man pty.
Use a Unix-domain socket.
Oh, you said you don't want to use the only available solution - a Unix-domain socket.
In that case, you are stuck with opening two named pipes, or doing without. Or write your own device driver for them, of course - you could do it for the open source systems, anyway; it might be harder for the closed source systems (Windows, AIX, HP-UX).
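For what it's worth, here is a minimal Unix-domain socket sketch in Go (my language choice; the socket path /tmp/fakedev.sock is made up). The point is that a single named endpoint carries traffic in both directions, which is exactly what one FIFO can't give you:

```go
// Run once with "server" as the argument, then again with no argument:
// both sides read and write over the same named, bidirectional endpoint.
package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
)

func main() {
	const path = "/tmp/fakedev.sock" // hypothetical socket name

	if len(os.Args) > 1 && os.Args[1] == "server" {
		os.Remove(path) // clean up a stale socket file
		ln, err := net.Listen("unix", path)
		if err != nil {
			panic(err)
		}
		defer ln.Close()
		conn, err := ln.Accept()
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		// Read a request and write a reply on the same connection.
		line, _ := bufio.NewReader(conn).ReadString('\n')
		fmt.Fprintf(conn, "echo: %s", line)
		return
	}

	// Client side: also reads and writes on the one connection.
	conn, err := net.Dial("unix", path)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Fprintln(conn, "hello device")
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	fmt.Print(reply)
}
```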
There are so many possible errors in the POSIX environment. Why do some of them (like writing to an unconnected socket in particular) get special treatment in the form of signals?
This is by design, so that simple programs producing text (e.g. find, grep, cat) used in a pipeline die when their consumer dies. That is, if you're running a chain like find | grep | sed | head, head will exit as soon as it reads enough lines. That will kill sed with SIGPIPE, which will kill grep with SIGPIPE, which will kill find with SIGPIPE. If there were no SIGPIPE, naively written programs would continue running and producing content that nobody needs.
If you don't want to get SIGPIPE in your program, just ignore it with a call to signal(SIGPIPE, SIG_IGN). After that, syscalls like write() that hit a broken pipe will return -1 with errno set to EPIPE instead.
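The same advice translates to other languages. Here is a minimal sketch in Go (my addition, not from the original answer): ignore SIGPIPE and check the write error for EPIPE. As far as I understand the Go runtime's SIGPIPE handling, writes to descriptors other than stdout/stderr already surface a broken pipe as EPIPE, so the Ignore call mainly matters for writes to standard output and standard error:

```go
// Demonstrate EPIPE: create a pipe, close the read end, and observe that
// the write fails with EPIPE instead of killing the process.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Closest analogue to signal(SIGPIPE, SIG_IGN) in C.
	signal.Ignore(syscall.SIGPIPE)

	r, w, err := os.Pipe()
	if err != nil {
		panic(err)
	}
	r.Close() // nobody will ever read: the pipe is now "broken"

	if _, err := w.Write([]byte("hello\n")); errors.Is(err, syscall.EPIPE) {
		fmt.Println("write failed with EPIPE, as expected")
	} else if err != nil {
		fmt.Println("write failed:", err)
	}
}
```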
See this SO answer for a detailed explanation of why writing a closed descriptor / socket generates SIGPIPE.
Why is writing a closed TCP socket worse than reading one?
SIGPIPE isn't specific to sockets; as the name suggests, it is also sent when you try to write to a pipe (anonymous or named). I guess the reason for having separate error-handling behaviour is that broken pipes shouldn't always be treated as an error (whereas, for example, trying to write to a file that doesn't exist should always be treated as an error).
Consider the program less. This program reads input from stdin (unless a filename is specified) and only shows part of it at a time. If the user scrolls down, it will try to read more input from stdin, and display that. Since it doesn't read all the input at once, the pipe will be broken if the user quits (e.g. by pressing q) before the input has all been read. This isn't really a problem, though, so the program that's writing down the pipe should handle it gracefully.
It's up to the design.
In the beginning, signals were the mechanism for notifying user space of events. Later that became less necessary, because more popular patterns such as polling took over, and they don't require the caller to install a signal handler.