In Simulink, are Goto and From blocks generally considered bad style? - matlab

I was working on a Simulink model recently and was using Goto and From blocks to keep a very busy system from becoming a twisted mess of wires. I was informed that I was not to use Goto and From blocks as they are considered bad style (at least, according to my employer).
While I hold that wires should be kept connected whenever possible, I believe that Goto and From blocks can significantly improve the readability of a system/subsystem if the model would otherwise result in lots of crossed wires, especially if the blocks can be color-coded (e.g. a purple Goto block goes to all the purple From blocks).
I'd supply an image of the subsystem I'm working with, but I'm not sure I can put it on here. The subsystem itself has about 12 subsystem blocks (and possibly more later) within it, each with two bus-type outputs. The first output of each subsystem goes to a Bus Creator block, and the second output of each goes to a second Bus Creator block. Since the subsystems are aligned vertically and the Bus Creators are to the right, this results in many crossed wires. I was using Goto and From blocks to clean up the system.
I can supply an image of a smaller, but similar model that I put together for this question.
For a system with on the order of 12 subsystems, this becomes very busy. I was using Goto and From blocks to connect the subsystems and the Bus Creators without a plethora of crossed wires.
I believe my employer may be carrying the stigma of using goto statements from text-based languages and applying it to Goto/From blocks in Simulink. Generally speaking, is using Goto and From blocks in this way (or any way) considered to be bad style?

The MathWorks Automotive Advisory Board has published some modeling guidelines (PDF) that include usage of Goto/From. The rules they list are:
Do not have subsystems that are floating, i.e. with all input/output ports connected via Gotos. One of the great things about Simulink is the ability to determine signal flow with only a cursory visual inspection; do not destroy this by linking everything with Gotos. At least have one feed-forward and one feedback loop between subsystems connected by signal lines.
My personal opinion on feedback signals is that they should all be connected with signal lines, but I'm sure you can come up with cases where drawing all of them clutters the model.
The second guideline is about the scope of the Goto tag; keep the visibility local as much as possible.
I feel setting visibility to scoped is also acceptable, as long as you're not using the matching From more than a couple of levels downstream from the Goto. I've yet to come across a legitimate need for a global Goto tag.
So, not all Goto usage is bad, and you're right that it can improve readability in some cases. That being said, I don't think Gotos are justified for the picture above. I realize it is just an example, but I should point out that if the buses being created are virtual, the order of the inputs at the Bus Creator doesn't matter, and rearranging Bus Creator and Mux block inputs can work wonders for readability.
The problem with the guidelines above is that there's room for bending them, and developers on your team might do just that. Even if everyone is diligent about following them at first, you may run afoul of these guidelines one day, a long time from now, when you redraw that section of the model for refining / adding functionality. Rearranging inputs and outputs can be especially irritating in the middle of implementing some cool new feature. That may be the reason your employer chose to impose a blanket ban. It is inconvenient in some cases, but it is easier to enforce.

Related

What is the difference between Program Status Word (PSW) and Program Counter (PC)?

In an Operating Systems course, the instructor introduced PSW and PC when he talked about Interrupt Handling.
His explanation was
PC holds the address of the next instruction to be fetched
PSW contains execution status information
But later I searched online and found that PSW = PC + status register. This makes me quite confused.
On the one hand, I am not sure what "execution status information" refers to. On the other hand, if PSW has the functions of a PC, why do we still need it?
Appreciate any explanation.
This isn't really standardized terminology. Most architectures have some register that plays the role of a status word, containing bits to indicate things like whether an add instruction caused a carry. But different architectures give it different names, and what exactly is included can vary widely. I'm not aware of any architecture that includes the program counter as part of its status word, but if they want to do that, well, who's going to stop them?
This is the kind of thing where you just have to look at the definition given by whatever book or article you are reading (or infer it from context), and realize that a different author may use the word differently.
In general, interrupts are hardware-level subroutine calls. They do the same thing as a subroutine call (change the algorithm that the processor is executing), but they do it without warning the "executing code" that they are now operating.
In order not to damage the "executing code", all information that it was using must be stored. This includes the Program Counter (usually saved to the stack by the interrupt hardware in the same way that a subroutine call does) and all of the registers that the interrupt function will alter - these must be saved by pushing them onto the stack. The registers etc. must be restored before the return from interrupt (RETI) instruction - the PC is restored by the RETI itself.
The PSW (often called the flag register) is a very important register and must generally be saved first. It contains bits like Zero (the last calculation resulted in a zero result), Carry (the last calculation resulted in a carry, i.e. the result is bigger than the register can hold), and several other flags. I suggest that you read the data sheet of an 8-bit microcontroller for an idea of what these flags might be. Suffice it to say that these flags are needed in order to perform conditional jumps, and whilst they will often be ignored, you can't take that chance.
You are probably correct that your instructor is using the term PSW to mean all of the registers.
The subject of interrupts involves concepts that are common to subroutine calls in general (e.g. don't leave data that you don't want overwritten in a register before entering a subroutine), and, later on in operating systems, to the context switches that occur during multi-tasking.
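To make the link between the flags and conditional jumps concrete, here is a small illustration in C; the instruction and flag names in the comments are assumptions for an x86-like CPU, not the output of any particular compiler:

/* Illustration only: how a C comparison ends up depending on the status
   word (flags). Instruction and flag names assume an x86-like CPU. */
int is_negative(int x)
{
    /* A compiler will typically emit something along the lines of:
         cmp  eax, 0     ; the compare updates the flags (ZF, SF, OF, CF)
         jl   negative   ; the conditional jump reads the sign/overflow flags
       If an interrupt fires between the compare and the jump, the flag
       register must be preserved by the handler (or the hardware),
       otherwise the jump may go the wrong way after the return. */
    return x < 0;
}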
Peter

problems with implementation of 0000-9999 counter on FPGA (seven segment)

EDIT 1
Okay, I couldn't post a long comment (I am new to the website, so please accept my apologies), so I am editing my earlier question. I have tried to implement multiplexing in 2 attempts:
- 2nd attempt
- 3rd attempt
In the 2nd attempt I have tried to send the seven-seg variables of each module to the module which is just one step ahead of it, and when they all reach the final top module I have multiplexed them. There is also a clock module which generates a clock for the units module (which makes the units place change 2 times per second) and a clock for multiplexing (multiplexing between the displays 500 times per second). Of course, I read that my board has a clock frequency of 50 MHz, so these calculations for the clocks are based on that figure.
In the 3rd attempt I have done the same thing, in one single module. See the 2nd attempt first and then the 3rd one.
Both give errors right after synthesis, and lots of unfamiliar warnings.
EDIT 2
I have been able to synthesize and implement the program in attempt 4 (which I am not allowed to post since my reputation is low), using the save flag for variables, variables1, variables2 and variables3 (which were giving warnings of unused pins), but the program doesn't run on the FPGA - it simply shows the number 3777. Also, there are still warnings of "combinatorial loops" for some things that are related to some variables (I am sorry, I am new to all this Verilog thing), but you can see all of them in attempt 3 as well.
You cannot implement counters with loops. Neither can you implement cascaded counters with nested loops.
Writing HDL is not writing software! Please read a book or tutorial on VHDL or Verilog on how to design basic hardware circuits. There is also the Synthesis and Simulation Guide 14.4 - UG626 from Xilinx. Have a look at page 88.
Edit1:
Now it's possible to access your zip file without any Dropbox credentials, and I have looked into your project. Here are my comments on your code.
I'll number my bullets for better reference:
Your project has 4 mostly identical UCF files. The only difference is in assigning different anode control signals to the same pin location. This will cause errors in the post-synthesis steps (multiple nets assigned to one pin). Normally, simple projects have only one UCF file.
The Nexys 2 board has a 4-digit 7-segment display with common cathodes and switchable common anodes. In total these are 8+4 wires to control. A time-multiplexing circuit is needed to switch through every digit of your 4-digit output vector at 25 Hz < f < 1 kHz.
Choosing a nested hierarchy is not so good. One major drawback is the passing of many signals from every level to the topmost level for connecting them to the FPGA pins. I would suggest a top-level module and 4 counters on level one. The top-level module can also provide the time-multiplexing circuit and the binary-to-7-segment encoding.

Where does the code for dealing with critical sections originate?

While learning about operating systems, I came across the topic of critical sections. To solve this problem, certain methods are provided, like semaphores, certain software solutions, etc. But I have a question: where does the code implementing these solutions originate? Programmers are never found writing such code for their programs. Suppose I write a simple program executing printf in C; I never write any code for the critical section problem. The code is converted into low-level instructions and executed by the OS, which behaves as our obedient servant. So, where does the code dealing with critical sections originate and fit in? Take a resource like the frame buffer as the critical section.
The OS kernel supplies such inter-thread synchronization mechanisms: mutexes, semaphores, events, critical sections, condition variables, etc. It has to, because the kernel needs to block threads that cannot proceed. Many languages provide convenient wrappers around such calls.
Your app accesses them, directly or indirectly, via system calls, i.e. interrupts that enter kernel state and ask for such services.
In some cases, a short-term user-space spinlock may get plastered on top, but such code should defer to a system call if the spinner is not quickly satisfied.
In the case of C printf, the relevant library (usually stdio) will make the calls to lock/unlock the I/O stream (assuming you have linked in a multithreaded version of the library).
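To make that concrete, here is a minimal sketch assuming POSIX threads (the counter and function names are invented for the example); the application only calls the library wrapper, which in turn relies on the kernel to block and wake threads when there is contention:

#include <pthread.h>
#include <stdio.h>

/* Hypothetical shared resource, protected by a mutex. */
static long shared_counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);   /* may enter the kernel to block */
        shared_counter++;                    /* the critical section */
        pthread_mutex_unlock(&counter_lock); /* may enter the kernel to wake a waiter */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter); /* printf locks the stream internally */
    return 0;
}

Compile with -pthread. Without the lock, the two threads could interleave their increments and lose updates; with it, the kernel-backed mutex serializes the critical section.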

trying to know more about the Verilog language, VHDL, and assembly language

I would like to know what the difference is between Verilog and assembly language.
Next semester we will be working with micro-controllers, but I would like to learn a little bit about them before the semester begins. I've been doing a lot of research on low-level programming, and so far I have gained a good understanding of assembly language, but I get confused trying to understand Verilog and VHDL.
Verilog and VHDL are a completely different kind of language: they describe hardware, for purposes such as programming FPGAs.
FPGAs are devices that can be on-the-fly programmed to implement any sort of digital logic (and sometimes analog too).
So using Verilog or VHDL, you can design a circuit that creates a couple of latches, some two's-complement adders, a mux, and a clock source, and suddenly you've just designed a circuit that can calculate. You could then take the output from the VHDL compiler (or whatever it's called), "download" it to the FPGA, and now you actually have some hardware that can be used to do calculation.
Of course, you can use FPGAs to implement all sorts of complicated stuff - even a full custom CPU. One uses Verilog and VHDL to design the circuits that are programmed onto FPGAs. Those circuits could implement something simple like a ripple counter, or something more complex like an LCD driver, or something even more complex like a USB transceiver. You can go from as simple as a few latches to as complicated as a fully operating CPU; as long as it's digital hardware, you can make whatever you want with VHDL and some FPGAs.
To clarify further -
"Assembly language" typically refers to raw instructions given to some sort of CPU. Of course, there are many different types of CPUs (x86, ARM, SPARC, MIPS) and further many different variants of those types of CPUs. Each CPU has its own instruction set.
Machine code is complete, fully specified, ready-to-be-executed instructions. Assembly languages allow you to type instructions from your CPU's instruction set in plain text, use labels and such, and describe the memory layout of the program. Put the assembly through an assembler and out comes machine code in your CPU's machine instruction set.
You could design your own CPU from scratch using VHDL. As you're designing the CPU, you would have it implement your own custom instruction set. From there, you could take the VHDL for your CPU, compile it, write it to an FPGA and have your own custom CPU. Then you could start writing programs for your made-up CPU using your custom instruction set by writing a custom assembler. Some friends of mine in college did this for giggles.
For example, you know how most CPUs are load-store, register based CPUs? Instructions tend to go something like this:
Load the value '1' into register A
Load the value '2' into register B
Add register A and register B, storing result in register A
(You just added 1 + 2! Heh)
That sort of model of computation happens to be the most popular, but it's not the only way you could do computation. What if you had a stack-based CPU, where you push values onto a hardware stack, and computations work with the values on the top of the stack, pushing results back onto the stack?
For instance:
Push 1 onto the stack (stack currently contains: 1)
Push 2 onto the stack (stack currently contains: 2 1)
Push 3 onto the stack (stack currently contains: 3 2 1)
Add
'Add' takes the top two elements on the stack, adds them together, and pushes the result on the top of the stack.
Stack now contains: 5 1
Add
Stack now contains: 6
Neat, isn't it? As far as computation models go, it has its advantages - operands tend to be short and need fewer bits. Smaller instructions mean that the CPU can be faster.
The problem is that no processor like this really exists anymore.
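If it helps to see that model in code, here is a tiny throwaway interpreter sketch; the opcodes and layout are invented for illustration and aren't taken from any real stack machine. It evaluates the push 1, push 2, push 3, add, add sequence above:

#include <stdio.h>

/* Invented mini stack machine: PUSH takes one operand, ADD takes none. */
enum { OP_PUSH, OP_ADD, OP_HALT };

int main(void)
{
    /* Program: push 1, push 2, push 3, add, add -> leaves 6 on the stack */
    int program[] = { OP_PUSH, 1, OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_ADD, OP_HALT };
    int stack[16];
    int sp = 0;  /* index of the next free slot */

    for (int pc = 0; program[pc] != OP_HALT; ) {
        if (program[pc] == OP_PUSH) {
            stack[sp++] = program[pc + 1];   /* operand follows the opcode */
            pc += 2;
        } else {                             /* OP_ADD */
            int a = stack[--sp];
            int b = stack[--sp];
            stack[sp++] = a + b;             /* result replaces the top two values */
            pc += 1;
        }
    }
    printf("top of stack: %d\n", stack[sp - 1]);  /* prints 6 */
    return 0;
}

Note how the add carries no operand fields at all, which is where the smaller-instruction argument comes from.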
But if you knew what you were doing, you could design one in VHDL, program it onto an FPGA, and suddenly you'd have one of the only operating stack-based processors in existence.
Say you were doing a master's thesis, for instance: you might dig around and find out that virtual-machine-based programming languages like C# and Java compile down to a bytecode for a CPU that doesn't really exist, but the model for that CPU proves useful for making code portable. You might find out that the imaginary machines used by these languages are based on stack-based processor models. If you were looking for something interesting to do, perhaps you could write in VHDL a processor that natively implements the Java bytecode language. Now you'd be the only person with a computer that can directly run Java.
Verilog and VHDL are both HDLs (hardware description languages) used mainly for describing digital electronics. Their targets may be FPGAs or ASICs (custom silicon).
Assembly, on the other hand, is using a processor's instruction set to perform a series of calculations. Everything executed on a computer eventually ends up as assembly-level instructions. One example of an instruction set would be the x86 ISA.
Summary: Verilog, VHDL describe hardware. Assembly is the low level program being executed on a processor.

Looking for the best equivalents of prefetch instructions for ia32, ia64, amd64, and powerpc

I'm looking at some slightly confused code that's attempted a platform abstraction of prefetch instructions, using various compiler builtins. It appears to be based on powerpc semantics initially, with Read and Write prefetch variations using dcbt and dcbtst respectively (both of these passing TH=0 in the new optional stream opcode).
On ia64 platforms we've got for read:
__lfetch(__lfhint_nt1, pTouch)
whereas for write:
__lfetch_excl(__lfhint_nt1, pTouch)
This (read vs. write prefetching) appears to match the powerpc semantics fairly well (with the exception that ia64 allows for a temporal hint).
Somewhat curiously the ia32/amd64 code in question is using
prefetchnta
Not
prefetcht1
as it would if that code were to be consistent with the ia64 implementations (#ifdef variations of that in our code for our (still live) hpipf port and our now dead windows and linux ia64 ports).
Since we are building with the Intel compiler, I should be able to make many of our ia32/amd64 platforms consistent by switching to the xmmintrin.h builtins:
_mm_prefetch( (char *)pTouch, _MM_HINT_NTA )
_mm_prefetch( (char *)pTouch, _MM_HINT_T1 )
... provided I can figure out what temporal hint should be used.
Questions:
Are there read vs. write ia32/amd64 prefetch instructions? I don't see any in the instruction set reference.
Would one of the nt1, nt2, nta temporal variations be preferred for read vs. write prefetching?
Any idea if there would have been a good reason to use the NTA temporal hint on ia32/amd64, yet T1 on ia64?
Are there read vs. write ia32/amd64 prefetch instructions? I don't see any in the instruction set reference.
Some systems support the prefetchw instruction for writes.
Would one of the nt1, nt2, nta temporal variations be preferred for read vs. write prefetching?
If the line is exclusively used by the calling thread, it shouldn't matter how you bring the line in; both reads and writes would be able to use it. The benefit of prefetchw mentioned above is that it will bring the line and give you ownership of it, which may take a while if the line was also used by another core. The hint level, on the other hand, is orthogonal to the MESI states, and only affects how long the prefetched line will survive. This matters if you prefetch long ahead of the actual access and don't want the prefetch to get lost in that duration, or alternatively prefetch right before the access and don't want the prefetches to thrash your cache too much.
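For what it's worth, here is a hedged sketch of how the read/write and hint-level options can be expressed from C, assuming GCC/Clang's __builtin_prefetch and the xmmintrin.h intrinsics; whether a real prefetchw is emitted depends on the target (e.g. building with -mprfchw), and the pointer names are made up for the example:

#include <xmmintrin.h>   /* _mm_prefetch, _MM_HINT_* */

/* Sketch: issuing prefetches ahead of an upcoming read and write. */
void touch_soon(const char *read_ptr, char *write_ptr)
{
    /* Read prefetches with different temporal hints. */
    _mm_prefetch(read_ptr, _MM_HINT_T1);    /* T1: skip L1, fill the outer cache levels */
    _mm_prefetch(read_ptr, _MM_HINT_NTA);   /* NTA: minimize cache pollution */

    /* Write prefetch: the second argument of __builtin_prefetch is 1 for
       a write; on CPUs with PREFETCHW this can request ownership of the
       line, otherwise it falls back to an ordinary prefetch. The third
       argument is the temporal-locality level (0 = none .. 3 = high). */
    __builtin_prefetch(write_ptr, 1, 3);
}

The two read prefetches are only there to show both hints side by side; in real code you would pick one based on how soon and how often the line will be reused.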
Any idea if there would have been a good reason to use the NTA temporal hint on ia32/amd64, yet T1 on ia64?
Just speculating - perhaps the larger caches and aggressive memory BW are more vulnerable to bad prefetching, and you'd want to reduce the impact through the non-temporal hint. Consider that if your prefetcher is suddenly set loose to fetch anything it can, you'd end up swamped in junk prefetches that would throw away lots of useful cache lines. The NTA hint makes them overrun each other, leaving the rest undamaged.
Of course, this may also just be a bug - I can't tell for sure, only whoever developed the compiler can - but it might make sense for the reason above.
The best resource I could find on x86 prefetching hint types was the good ol' article What Every Programmer Should Know About Memory.
For the most part on x86 there aren't different instructions for read and write prefetches. The exceptions seem to be those that are non-temporal aligned, where a write can bypass the cache but as far as I can tell, a read will always get cached.
It's going to be hard to backtrack through why the earlier code owners used one hint and not the other on a certain architecture. They could be making assumptions about how much cache is available on processors in that family, typical working set sizes for binaries there, long-term control flow patterns, etc. - and there's no telling how much any of those assumptions were backed up with good reasoning or data. From the limited background here, I think you'd be justified in taking the approach that makes the most sense for the platform you're developing on now, regardless of what was done on other platforms. This is especially true when you consider articles like this one, which is not the only context where I've heard that it's really, really hard to get any performance gain at all with software prefetches.
Are there any more details known up front, like typical cache miss ratios when using this code, or how much prefetches are expected to help?