I was hoping someone could answer my question. I came across this statement in some VHDL code and was not sure what exactly it does. Could someone clarify the following?
if ( element1 = (element1'range => '0')) then
Given that element1 is a 4-bit std_logic_vector, what is this condition saying? I could not find a direct answer for this in the few books I have or on Google.
Thanks!
It's saying: create a temporary array aggregate the size of the specified range, with every element set to '0', whatever that range is.
This prevents accidents when the size of element1 changes.
EVERY time you see magic numbers like 3 downto 0, or for i in 0 to 3 loop ... try to replace them with this or equivalent, because for i in element1'range loop ... will never loop off the end of your array.
The defined range is necessary because the relational operator = (like <, > and the others) doesn't restrict its arguments to the same length, so the simpler form of aggregate (others => '0') doesn't work, because its size is undefined.
The condition will return true if element1 contains only '0'. It is a way of writing this that does not depend on the size of element1. In this case element1'range is 3 downto 0. If you were to change this to, for example, 5 downto 0, the if condition would still work.
(element1'range => '0') is an array aggregate with element choices for the range of element1, associating those elements with the value '0', creating a composite value that takes its type from context - the left hand side of the "=" operator (IEEE Std 1076-2008 9.3.3 Aggregates, 9.3.3.3 Array aggregates).
The if statement (10.8 If statement) condition element1 = (element1'range => '0') determines whether element1 is all '0's in a range-independent manner, with the equality relational operator (9.2.3 Relational operators) returning a BOOLEAN value.
This method of evaluating the value of element1 is immune to the declaration of element1 changing (6.4.2 Object declarations, 6.4.2.3 Signal declarations, 6.4.2.4 Variable declarations, 6.5.2 Interface object declarations).
The outer pair of parentheses around the condition (an expression) are superfluous in VHDL (10.2 Wait statement gives the BNF for condition; 9. Expressions, 9.1 General, where the BNF for primary allows a parenthesized expression).
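To see it in context, here is a minimal sketch; the entity and signal names are made up for illustration:
library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical wrapper showing the range-independent all-zeros test.
entity zero_check is
  port (
    clk      : in  std_logic;
    element1 : in  std_logic_vector(3 downto 0);  -- widen this later; the test below still works
    is_zero  : out std_logic
  );
end entity zero_check;

architecture rtl of zero_check is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if element1 = (element1'range => '0') then  -- aggregate sized by element1'range, all '0'
        is_zero <= '1';
      else
        is_zero <= '0';
      end if;
    end if;
  end process;
end architecture rtl;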
I have a couple of lines of code which compare some values in two different matrices, and even when the condition is true it doesn't enter the if branch.
for i = 1:ux
    for j = 1:SIR
        if ShelfInfo{SIR, 2} == uniquexy(ux, 1) && uniquexy{ux, 2} == ShelfInfo{SIR, 3}
            shelf = ShelfInfo{j,5};
            shelves = [shelves; shelf];
            1
        end
    end
end
This code runs, but it doesn't enter the if branch. I believe it is because of the braces. When I changed everything to curly braces I received the error Brace indexing is not supported for variables of this type. When I changed the braces to parentheses I received the error Undefined operator '==' for input arguments of type 'table'.
I can't figure out what to do; can you help me with it?
()-indexing subsets an array by elements, and works on any type of array.
{}-indexing subsets a cell array, and extracts the cells' contained values. Basically, it "reaches into" the cells and pulls out their contents. It only works on cell arrays, or objects that have overloaded subsref() to provide this behavior.
I'm guessing you're accidentally applying {}-indexing to your uniquexy in one of your references there, when both of them should be ()-indexing:
... uniquexy(ux, 1) && uniquexy{ux, 2} ...
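For illustration, a small sketch of the difference, using made-up data:
c = {42, 'hello'};   % a 1x2 cell array

c(1)                 % ()-indexing returns a 1x1 cell array: {[42]}
c{1}                 % {}-indexing extracts the contents: 42

c{1} == 42           % valid: compares the contained double, returns logical 1
% c(1) == 42         % error: == is not defined for cell arrays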
Besides the indexing problem (which depends on the data type of your matrices, and which would be handy to show as part of a minimal working example), your if statement never uses the loop variables. I assume you want to use the indices i and j instead of SIR and ux, which point at a fixed position in your arrays. Otherwise, why put the if statement inside two for loops at all?
Maybe check these links on accessing array elements depending on array type:
Basic array indexing
Cell vs. structure arrays
Tables
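Since one of your errors mentions type table: with tables, ()-indexing returns a sub-table while {}-indexing (or dot access) extracts the underlying data, so == only works on the extracted values. A small sketch with made-up data:
T = table([1;2], [10;20], 'VariableNames', {'x','y'});

T(1, 'x')           % a 1x1 table; == is undefined for tables
T{1, 'x'} == 1      % logical 1: braces extract the double
T.x(1) == 1         % logical 1: dot access also extracts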
Perl 6 has lazy lists, but it also has unbounded Range objects. Which one should you choose for counting up by whole numbers?
And there's the unbounded Range with two dots:
0 .. *
There's the Seq (sequence) with three dots:
0 ... *
A Range generates lists of consecutive thingys using their natural order. It inherits from Iterable, but also Positional, so you can index a range. You can check if something is within a Range, but that's not part of the task.
A Seq can generate just about anything you like as long as it knows how to get to the next element. It inherits from Iterable, but also PositionalBindFailover, which fakes the Positional stuff through a cache and list conversion. I don't think that's a big deal if you're only moving from one element to the next.
I'm going back and forth on this. At the moment I'm thinking it's Range.
Both 0 .. * and 0 ... * are fine.
Iterating over them, for example with a for loop, has exactly the same effect in both cases. (Neither will leak memory by keeping around already iterated elements.)
Assigning them to an @ variable produces the same lazy Array.
So as long as you only want to count up numbers to infinity by a step of 1, I don't see a downside to either.
The ... sequence construction operator is more generic though, in that it can also be used to
count with a different step (1, 3 ... *)
count downwards (10 ... -Inf)
follow a geometric sequence (2, 4, 8 ... *)
follow a custom iteration formula (1, 1, *+* ... *)
so when I need to do something like that, then I'd consider using ... for any nearby and related "count up by one" as well, for consistency.
On the other hand:
A Range can be indexed efficiently without having to generate and cache all preceding elements, so if you want to index your counter in addition to iterating over it, it is preferable. The same goes for other list operations that deal with element positions, like reverse: Range has efficient overloads for them, whereas using them on a Seq has to iterate and cache its elements first.
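For example (a sketch; the index is arbitrary):
my $r := 0 .. *;
say $r[1_000_000];   # computed directly from the bounds, no iteration

my $s := 0 ... *;
say $s[1_000_000];   # same value, but iterates and caches a million elements first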
If you want to count upwards to a variable end-point (as in 1 .. $n), it's safer to use a Range because you can be sure it'll never count downwards, no matter what $n is. (If the endpoint is less than the startpoint, as in 1 .. 0, it will behave as an empty sequence when iterated, which tends to get edge-cases right in practice.)
Conversely, if you want to safely count downwards ensuring it will never unexpectedly count upwards, you can use reverse 1 .. $n.
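A small sketch of that endpoint behavior:
my $n = 0;
.say for 1 .. $n;         # no output: a Range never counts downwards
.say for 1 ... $n;        # 1, then 0: the Seq infers a downward step
.say for reverse 1 .. 3;  # 3, 2, 1: safe downward counting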
Lastly, a Range is a more specific/high-level representation of the concept of "numbers from x to y", whereas a Seq represents the more generic concept of "a sequence of values". A Seq is, in general, driven by arbitrary generator code (see gather/take) - the ... operator is just semantic sugar for creating some common types of sequences. So it may feel more declarative to use a Range when "numbers from x to y" is the concept you want to express. But I suppose that's a purely psychological concern... :P
Semantically speaking, a Range is a static thing (a bounded set of values), a Seq is a dynamic thing (a value generator) and a lazy List a static view of a dynamic thing (an immutable cache for generated values).
Rule of thumb: Prefer static over dynamic, but simple over complex.
In addition, a Seq is an iterable thing, a List is an iterable positional thing, and a Range is an ordered iterable positional thing.
Rule of thumb: Go with the most generic or most specific depending on context.
As we're dealing with iteration only and are not interested in positional access or bounds, using a Seq (which is essentially a boxed Iterator) seems like a natural choice. However, ordered sets of consecutive integers are exactly what an integer Range represents, and personally that's what I would see as most appropriate for your particular use case.
When there is no clear choice, I tend to prefer ranges for their simplicity anyway (and try to avoid lazy lists, the heavy-weight).
Note that the language syntax also nudges you in the direction of ranges, which are rather heavily Huffman-coded (two-char infix .., one-char prefix ^).
There is a difference between ".." (Range) and "..." (Seq):
$ perl6
> 1..10
1..10
> 1...10
(1 2 3 4 5 6 7 8 9 10)
> 2,4...10
(2 4 6 8 10)
> (3,6...*)[^5]
(3 6 9 12 15)
The "..." operator can intuit patterns!
https://docs.perl6.org/language/operators#index-entry-..._operators
As I understand, you can traverse a Seq only once. It's meant for streaming where you don't need to go back (e.g., a file). I would think a Range should be a fine choice.
I have an array that I would like to initialize to all 1. To do this, I used the following code snippet:
logic [15:0] memory [8];
always_ff @(posedge clk or posedge reset) begin
    if (reset) begin
        memory <= '{default:'1};
    end
    else begin
        ...
    end
end
My simulator does what I think is the correct thing and sets the registers to 16'hFFFF on reset. However, my lint tool gives me a warning that bit 0 has an async set while bits 1 through 15 have async resets. This implies that the linter thinks that this code assigns 16'h0001 to the registers.
Since both tools come from the same vendor I file a bug report since they can't both be right.
The question is: Which behavior is correct according to the spec? There is no example that shows this exact situation. Section 5.7.1 mentions that:
An unsized single-bit value can be specified by preceding the single-bit value with an apostrophe ( ' ), but without the base specifier. All bits of the unsized value shall be set to the value of the specified bit. In a self-determined context, an unsized single-bit value shall have a width of 1 bit, and the value shall be treated as unsigned.
'0, '1, 'X, 'x, 'Z, 'z // sets all bits to specified value
If this is a "self-determined context" then the answer is a 1-bit value zero-extended to 16'h0001, but if it is not, then I guess the example which says it "sets all bits to the specified value" applies. I am not sure whether this is a self-determined context.
The simulator is correct: memory <= '{default:'1}; assigns every bit in memory to 1. The linting tool has a bug. See IEEE Std 1800-2012 § 10.9.1 Array assignment patterns:
The default:value applies to elements or subarrays that are not matched by either index or type key. If the type of the element or subarray is a simple bit vector type, matches the self-determined type of the value, or is not an array or structure type, then the value is evaluated in the context of each assignment to an element or subarray by the default and shall be castable to the type of the element or subarray; otherwise, an error is generated. ...
The LRM goes beyond what is synthesizable when it comes to assignment patterns, and most tools are still playing catch-up with supporting all the SystemVerilog features. Experiment to make sure your tools (simulator, synthesizer, lint tool, logic-equivalency checker, etc.) all have the necessary support for the features you want.
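For reference, a sketch of what the pattern boils down to for the declaration in the question (clk and reset as in the original snippet); each element of memory is a simple bit-vector type, so '1 is evaluated per element and fills all 16 bits:
logic [15:0] memory [8];

always_ff @(posedge clk or posedge reset) begin
    if (reset) begin
        // equivalent to: memory <= '{default:'1};
        for (int i = 0; i < 8; i++)
            memory[i] <= '1;  // all bits set: 16'hFFFF, not 16'h0001
    end
end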
I am tutoring someone in basic search and sorts. In insertion sort I iterate backwards through the list while the current value is less than the one before it. Now of course this approach can cause issues, because the check ends up asking for array[-1], which does not exist.
As shown in bold below, adding the and x > 0 check prevents the index issue.
My question is how is this the case? Wouldn't the call for array[-1] still be made to ensure the validity of both booleans?
the_list = [10,2,4,3,5,7,8,9,6]
for x in range(1, len(the_list)):
    value = the_list[x]
    while value < the_list[x-1] **and x > 0**:
        the_list[x] = the_list[x-1]
        x = x - 1
    the_list[x] = value
print the_list
I'm not sure I completely understand the question, and I don't know what programming language this is, but most modern programming languages use so-called short-circuit Boolean evaluation by default so that the logical expression isn't evaluated further once the outcome is known.
You can use that to guard against range overflow, like this:
while x > 0 and value < the_list[x-1]
but the check of x's range here must come before the use.
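Here's a tiny sketch of that short-circuit behavior, with made-up values:
the_list = [10, 2, 4]
x = 0
# x > 0 is False, so Python never evaluates the_list[x-1]
if x > 0 and the_list[x-1] < 5:
    print "shift"
else:
    print "guard stopped the lookup"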
An AND operation returns true if and only if both arguments are true, so if one argument is false there is no point in checking the others: the final value is already known. As for your example, evaluation does go from left to right in Python, as in most languages. The reason the original order does not crash is that the_list[-1] is actually legal in Python: negative indices count from the end of the list, so the lookup silently returns the last element instead of raising an error. Putting the x > 0 test first is still the right fix, because short-circuiting then skips the lookup entirely whenever x is 0.
If you have an object like NSString *someString, what is the difference, if any, between
if (!someString)
vs
if (someString == nil)
Thanks!
The first syntax you use:
if (!someString)
exploits a sort of "ambiguity" of C deriving from the fact that the original standard of C lacked a proper boolean type. Therefore, any integer value equalling 0 was interpreted as "false", and any integer value different from "0" was taken as "true". The meaning of ! is therefore defined based on this convention and current versions of the C standard have kept the original definition for compatibility.
In your specific case, someString is a pointer, so it is first converted to an integer; !someString is then interpreted as a boolean value that is true when someString points at location 0x000000, and false otherwise.
This is fine in most conditions (I would say always), but in theory, NULL/nil could be different from 0x000000 under certain compilers, so (in theory, at least) it would be better to use the second syntax, which is more explicit:
if (someString == nil)
It is anyway more readable, and since someString is a pointer rather than an integer, it is IMO better practice in general.
EDIT: about the definition of NULL...
Whether the C standard defines NULL to be 0 is an interesting topic for me...
According to the C99 standard, section 7.17, "Common definitions":
NULL [which] expands to an implementation-defined null pointer constant;
So, NULL is defined in stddef.h to an implementation-defined null pointer constant...
The same document on page 47 states:
An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant.55) If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.
So, the null pointer constant (which is (void*)0) can be converted to a null pointer and this is guaranteed to compare unequal to a pointer to any object or function.
So, I think that basically it depends on whether the implementation decides that converting a null pointer constant to a null pointer produces a pointer which, converted back to an integer, gives 0. It is not guaranteed that a null pointer interpreted as an integer equals 0.
I would say that the standard really tries to enforce the null pointer being 0, but leaves the door open to systems where the null pointer is not 0.
For most pointers, they're equivalent, though most coders I know prefer the former as it's more concise.
For weakly linked symbols, the former resolves the symbol (and will cause a crash if it's missing) while an explicit comparison against nil or NULL will not.
The bang, exclamation, ! prefix operator in C is a logical not. At least, it's a version of it. If you looked at a typical logical not truth table you would see something like this:
Input Result
1 0
0 1
However in C the logical not operator does something more like this:
Input Result
non-zero 0
0 1
So when you consider that both NULL and nil in Objective-C evaluate to 0, you know that the logical not operator applied to them will result in a 1.
Now, consider the equality == operator. It compares the value of two items and returns 1 if they are equal, 0 if they are not. If you compared a value against 0 and mapped the results to a truth table, it would look exactly like the results for logical not.
In C and Objective-C programs, conditionality is actually determined by ints, as opposed to real booleans, because C had no boolean data type until C99 added _Bool. So writing something like this works perfectly fine in C:
if(5) printf("hello\n"); // prints hello
and in addition
if(2029) printf("hello\n"); // also prints hello
Basically, any non-zero int will evaluate as 'true' in C. Combine that with the truth tables for logical negation and equality, and you quickly realize that:
(! someString) and (someString == nil)
are for all intents and purposes identical!
So the next logical question is, why prefer one form over another? From a pure C view-point it would be mostly a point of style, but most (good) developers would choose the equality test for a number of reasons:
It's closer to what you are trying to express in code: you are trying to check whether the someString variable is nil.
It's more portable. Languages like Java have a real boolean type; you cannot use bang notation on their variables or their NULL definition. Using equality where it's needed makes it easier to port the C to such languages later on.
Apple may change the definition of nil. OK, no they won't! But it never hurts to be safe!
In your case it means the same thing. Any pointer that is not nil will evaluate as YES (true).
Normally the exclamation mark operator negates a BOOL value.
If you mean to test the condition "foo is nil" you should say that: foo == nil.
If you mean to test a boolean value for falsehood, !foo is okay, but personally I think that a skinny little exclamation point is easy to miss, so I prefer foo == NO.
Writing good code is about clearly conveying your intention not just to the compiler, but to the next programmer that comes along (possibly a future you). In both cases, the more explicit you can be about what you're trying to do, the better.
All that aside, ! and ==nil have the same effect in all the cases I can think of.
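For completeness, a minimal sketch (the string contents are made up) showing both forms taking the same branches:
NSString *someString = nil;

if (!someString)        NSLog(@"bang form: nil");         // fires
if (someString == nil)  NSLog(@"explicit form: nil");     // fires

someString = @"hello";

if (!someString)        NSLog(@"never printed");
if (someString != nil)  NSLog(@"explicit form: non-nil"); // fires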
! is a negation operator. If your object is nil, you will reach the same result from a truth table as you would with an == nil comparison.
But ! is more usually used for boolean operations:
if (!isFalse) {
    // if isFalse == NO, then this condition evaluates to YES (true)
    [self doStuff];
}
When you use ! on an object, as in !something, it just checks whether the pointer is nil; if it is, the expression evaluates to true and the if statement fires.