Type conversion in VHDL: real to integer - Is the rounding mode specified?

While debugging the handling of user-defined physical types in Vivado, I found differing behavior in type conversions from real to integer.
Here is my example code:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
--use IEEE.MATH_REAL.all;

entity Top_PhysicalTest_Simple is
  port (
    Clock  : in  STD_LOGIC;
    Input  : in  STD_LOGIC;
    Output : out STD_LOGIC
  );
end;

architecture top of Top_PhysicalTest_Simple is
  constant int_1 : INTEGER := natural(0.5);
  constant int_2 : INTEGER := integer(-0.5);
  -- constant int_2 : INTEGER := natural(-0.5);
begin
  assert FALSE report "16 - int_1 (natural(0.5)): " & INTEGER'image(int_1) severity note;
  assert FALSE report "17 - int_2 (natural(-0.5)): " & INTEGER'image(int_2) severity note;
  Output <= Input when rising_edge(Clock);
end;
The dummy flip-flop is there to prevent some tools from complaining about an empty design.
XST 14.7:
Elaborating entity <Top_PhysicalTest_Simple> (architecture <top>) from library <work>.
Note: "16 - int_1 (natural(0.5)): 1"
Note: "17 - int_2 (natural(-0.5)): 0"
XST seems to use round-half-up mode, and it performs the type conversion including the range check. So I must use integer(-0.5) instead of natural(-0.5).
Vivado 2014.4:
[Synth 8-63] RTL assertion: "16 - int_1 (natural(0.5)): 1" ["D:/Temp/PhysicalTest_Vivado2014.4/vhdl/Top_PhysicalTest_Simple.vhdl":80]
[Synth 8-63] RTL assertion: "17 - int_2 (natural(-0.5)): -1" ["D:/Temp/PhysicalTest_Vivado2014.4/vhdl/Top_PhysicalTest_Simple.vhdl":81]
Vivado synthesis seems to round halves away from zero ("round to infinity"), and it performs the type conversion without a range check. So maybe natural(..) is just an alias for integer(..).
The commented-out line constant int_2 : INTEGER := natural(-0.5); throws no error.
GHDL 0.29:
GHDL 0.29 does no range check in natural(..).
I know it's outdated, but since 0.31 hates me I can't tell whether this has already been fixed.
GHDL 0.31:
I'll present the results later. GHDL refuses to analyse my code because:
Top_PhysicalTest_Simple.vhdl:29:14: file std_logic_1164.v93 has changed and must be reanalysed
My questions:
Does VHDL define a rounding mode? And if so which one?
How should I handle rounding if no mode is defined?

From IEEE Std 1076-2002, section 7.3.5 "Type conversions":
The conversion of a floating point value to an integer type rounds to
the nearest integer; if the value is halfway between two integers,
rounding may be up or down.
If you want something else, maybe functions in IEEE.MATH_REAL can be of some use (notably CEIL, FLOOR and/or TRUNC).
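For example, a minimal sketch of forcing a specific rounding (the package and constant names here are invented; FLOOR and CEIL are the IEEE.MATH_REAL functions):
library IEEE;
use IEEE.MATH_REAL.all;

package rounding_sketch is
  -- FLOOR/CEIL return a REAL holding an exact integer value, so the
  -- subsequent integer(..) conversion is unambiguous at .5 boundaries.
  constant c_floor   : INTEGER := integer(floor(-0.5));        -- -1, toward minus infinity
  constant c_ceil    : INTEGER := integer(ceil(-0.5));         --  0, toward plus infinity
  constant c_half_up : INTEGER := integer(floor(-0.5 + 0.5));  --  0, explicit round-half-up
end package;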

(Posting this as an answer because I can't post a comment inline...)
Here are the results using the prebuilt ghdl-0.31-mcode-win32:
C:\brian\jobs\ghdl_test\paebbels>md work.ghd
C:\brian\jobs\ghdl_test\paebbels>ghdl -a --workdir=work.ghd Top_PhysicalTest_Simple.vhd
C:\brian\jobs\ghdl_test\paebbels>ghdl -r --workdir=work.ghd Top_PhysicalTest_Simple
Top_PhysicalTest_Simple.vhd:18:3:#0ms:(assertion note): 16 - int_1 (natural(0.5)): 1
Top_PhysicalTest_Simple.vhd:19:3:#0ms:(assertion note): 17 - int_2 (natural(-0.5)): -1
"0.31 is my Windows machine (mcode version)"
"GHDL refuses to analyse my code "
If you're having trouble with libraries on the Windows mcode build of 0.31, try uninstalling any 0.29 or earlier NSIS-installer versions of GHDL on that machine.
Also make sure you ran through the whole setup process as described in the 0.31 Windows INSTALL, particularly reanalyze_libraries.bat.
Here's the version used for the above test:
C:\brian\jobs\ghdl_test\paebbels>ghdl -v
GHDL 0.31 (20140108) [Dunoon edition] + ghdl-0.31-mcode-win32.patch
Compiled with GNAT Version: GPL 2013 (20130314)
mcode code generator
Written by Tristan Gingold.
Copyright (C) 2003 - 2014 Tristan Gingold.
GHDL is free software, covered by the GNU General Public License. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
And the library path info:
C:\brian\jobs\ghdl_test\paebbels>ghdl --dispconfig
command line prefix (--PREFIX): (not set)
environment prefix (GHDL_PREFIX): C:\Ghdl\ghdl-0.31-mcode-win32\lib
default prefix: C:\Ghdl\ghdl-0.31-mcode-win32\lib
actual prefix: C:\Ghdl\ghdl-0.31-mcode-win32\lib
command_name: C:\Ghdl\ghdl-0.31-mcode-win32\bin\ghdl.exe
default library pathes:
C:\Ghdl\ghdl-0.31-mcode-win32\lib\v93\std\
C:\Ghdl\ghdl-0.31-mcode-win32\lib\v93\ieee\

Related

How can I return true in COBOL

So I am making a program that checks whether a number is divisible by another number. If it is, it's supposed to return true, otherwise false. Here's what I have so far.
P.S.: I'm using the IBM dialect (GnuCOBOL v2.2, -std=ibm-strict -O2) to run this.
IDENTIFICATION DIVISION.
PROGRAM-ID. CHECKER.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 BASE   PIC 9(5).
01 FACTOR PIC 9(2).
01 RESULT PIC 9(5).
   88 TRU VALUE 0.
   88 FAL VALUE 1 THRU 99.
PROCEDURE DIVISION.
CHECK-FOR-FACTOR SECTION.
    IF FUNCTION MOD(BASE, FACTOR) = 0 THEN
        SET TRU TO TRUE
    ELSE
        SET FAL TO TRUE
    END-IF.
END PROGRAM CHECKER.
It gives me an error saying "invalid use of level 88". I'm sure I'm making a mistake, and I've searched for a couple of days without finding anything that helps. Any ideas whether this is possible in COBOL, or does COBOL handle all the boolean stuff some other way?
(Kindly do not reply with "look up level 88" or similar; I have already looked that up and it hasn't been helping.)
To return TRUE from a program you'd need an implementation that has a boolean USAGE: define that in LINKAGE, specify it in PROCEDURE DIVISION RETURNING true-item, and also use CALL 'yourprog' RETURNING true-item.
Your specified environment, GnuCOBOL, has no boolean USAGE as of 2021 and cannot handle the RETURNING phrase of PROCEDURE DIVISION in programs.
But you can use a very common extension to COBOL which is available in both IBM and GnuCOBOL:
Before the program ends MOVE RESULT TO RETURN-CODE (which is a global register) and in the calling program check its value (and reset it to zero).
Then it is only up to you what value means "true" (in your program it is 0).
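A minimal sketch of that technique, reusing the question's names (untested; the caller and its DISPLAY texts are invented):
IDENTIFICATION DIVISION.
PROGRAM-ID. CHECKER.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 BASE   PIC 9(5).
01 FACTOR PIC 9(2).
PROCEDURE DIVISION.
    IF FUNCTION MOD(BASE, FACTOR) = 0
        MOVE 0 TO RETURN-CODE *> "true" by this program's convention
    ELSE
        MOVE 1 TO RETURN-CODE *> "false"
    END-IF
    GOBACK.
END PROGRAM CHECKER.
And in the caller:
CALL 'CHECKER'
IF RETURN-CODE = 0
    DISPLAY 'true'
ELSE
    DISPLAY 'false'
END-IF
MOVE 0 TO RETURN-CODE *> reset it, as noted above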
As an alternative you could create a user-defined function (FUNCTION-ID instead of PROGRAM-ID, using the RETURNING phrase to pass your result) - but that would mean using IF FUNCTION instead of CALL + IF RETURN-CODE in each caller.

R/exams Moodle: Cloze with numeric and a string item

I am trying to use the exams package to create my Moodle exams. I want to create a cloze question with three numeric sub-items and one string sub-item, but I am having problems with exams2moodle().
Here is a simplification of my code:
```{r data generation, echo = FALSE, results = "hide"}
## DATA GENERATION
options(scipen = 999)
#here in my version, I generated the data and solutions, but I simplified the code for a better understanding
cambio_delta <- 20.1
r2 <- 0.97
y0_1 <- 19.56
sol_str <- "Not possible"
```
Question
========
Here goes the question speech
Answerlist
----------
* Question 1 (this is numeric)
* Question 2 (this is numeric)
* Question 3 (this is numeric)
* Question 4 (this is STRING, the answer is supposed to be "Not possible")
Meta-information
================
extype: cloze
exclozetype: num|num|num|string
exsolution: `r 100*r2`|`r cambio_delta`|`r y0_1`|`r sol_str`
extol: 0.05|0.05|0.05
exname: regresion
When I knit this R Markdown file it works well, but not with exams2moodle():
exams2moodle("regresion.Rmd", n = 8, name = "Exam reg")
I get the error message:
Error in split.default(solutionlist, gr) :
first argument must be a vector
I will appreciate any suggestions! Thank you!
I put the R/Markdown exercise into a file regresion.Rmd and then ran your code with both exams 2.3-6 (the current CRAN release version at the time of writing) and 2.4-0 (the current R-Forge devel version). Everything worked fine without error and the exercises worked as intended after import into Moodle.
I suggest that you update your version of the exams package and, if necessary, R itself. Then you should be fine.
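For example (the repos argument is the usual R-Forge repository URL for the devel version):
install.packages("exams")                                          ## CRAN release
install.packages("exams", repos = "http://R-Forge.R-project.org")  ## R-Forge devel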

how to stop at an internal Maple proc?

I want to see how Maple determines the type of an ODE, but I can't set a breakpoint at an internal Maple proc:
restart;
ode:=2*sqrt(a*diff(y(x),x))+x*diff(y(x),x)-y(x) = 0;
DEtools:-odeadvisor(ode);
#[[_homogeneous, `class G`], _Clairaut]
But when I do
stopat(DEtools:-odeadvisor);
it gives an error:
Error, invalid input: stopat expects its 1st argument, p, to be of type {`::`, name, string}, but received proc () option `Copyright (c) 1997 Waterloo Maple Inc. All rights reserved.`; `ODEtools/initialized` <> 'true' and `ODEtools/init`() <> 0; `ODEtools/odeadv`(args) end proc
Is it possible to set a breakpoint at DEtools:-odeadvisor? showstat does not show much. I thought it was possible to view all Maple library code (other than the builtin procedures).
But maybe some others cannot be viewed in addition to the builtins? How does one know which ones can be seen and which can't? And how does one see the code of DEtools:-odeadvisor?
Maple 2018.1
Try this. The error message you got actually shows the body of DEtools:-odeadvisor: it is just a thin wrapper that initializes the package and dispatches to `ODEtools/odeadv`, so put the breakpoint on that inner proc instead:
restart;
ode:=2*sqrt(a*diff(y(x),x))+x*diff(y(x),x)-y(x) = 0:
showstat(DEtools[odeadvisor]);
stopat(`ODEtools/odeadv`);
DEtools:-odeadvisor(ode);

WSOCK32.DLL htons function

In a Visual FoxPro app using sockets, we are using wsock32.dll and its htons() function to convert a port number to TCP/IP network byte order. It should return an unsigned short between 0 and 65535. When testing this with port 63333 it used to return 26103, but after installing the Windows Fall Creators Update it returns a bigger value: 16213495.
Sample FoxPro program:
DECLARE INTEGER htons IN "wsock32.dll" INTEGER hostshort
LOCAL portNumber, htonsNumber
portNumber = 63333
htonsNumber = htons( portNumber )
? htonsNumber
The resulting value should go into a "sockaddr" structure used by the connect() function but there is only space for 2 bytes for the port.
Does anyone know what happened to the wsock32 functions in this Windows update, and/or have a suggestion for solving this?
I compared the Windows 10 FCU function with the Windows 8 version: Windows has reordered the register usage and saved one AND instruction. This is most likely a compiler optimization, not a source code change. Because the left-shifted half is not masked you get garbage in bits 16-23, but those bits are supposed to be ignored. The function is still correct for anyone who follows the Windows ABI.
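To see that the big number is just the old result plus garbage (my own arithmetic, for illustration): 63333 is 0xF765, so an unmasked (x << 8) | (x >> 8) gives 0xF76500 | 0xF7 = 0x00F765F7 = 16213495, whose low 16 bits are 0x65F7 = 26103, the value you saw before the update.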
The best solution is to update the function declaration so it uses a 16-bit integer type. If that is not possible you can cast the number to a 16-bit type in languages that support casting. The final option is to truncate the value yourself by ANDing with 0xffff:
htonsNumber = BitAnd(htons(portNumber), 0xffff)
SHORT is listed as a valid return type so that should work as well:
DECLARE SHORT htons IN "wsock32.dll" INTEGER
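Applied to the sample program from the question (a sketch; I have not run this against the FCU build):
DECLARE SHORT htons IN "wsock32.dll" INTEGER hostshort
LOCAL portNumber, htonsNumber
portNumber = 63333
htonsNumber = htons( portNumber )
? htonsNumber   && 26103 again; the SHORT return type discards bits 16-31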

COBOL add 0 to a Variable in COMPUTE

I ran into a strange statement when working on a COBOL program from $WORK.
We have a paragraph that is opening a cursor (from DB2), and then looping over it until it hits an EOT (in pseudocode):
... working storage ...
01 I PIC S9(9) COMP VALUE ZEROS.
01 WS-SUB PIC S9(4) COMP VALUE 0.
... code area ...
PARA-ONE.
PERFORM OPEN-CURSOR
PERFORM FETCH-CURSOR
PERFORM VARYING I FROM 1 BY 1 UNTIL SQLCODE = DB2EOT
do stuff here...
END-PERFORM
COMPUTE WS-SUB = I + 0
PERFORM CLOSE-CURSOR
... do another loop using WS-SUB ...
I'm wondering why that COMPUTE WS-SUB = I + 0 line is there. My understanding is that I will always be at least 1, because of the perform block above it (i.e., even if there is an EOT to start with, I will be set to 1 on that initial iteration).
Is that COMPUTE line even needed? Is it doing some implicit casting that I'm not aware of? Why would it be there? Why wouldn't you just MOVE I TO WS-SUB?
Call it stupid, but with some compilers (with the correct options in effect), given
01 SIGNED-NUMBER PIC S99 COMP-5 VALUE -1.
01 UNSIGNED-NUMBER PIC 99 COMP-5.
...
MOVE SIGNED-NUMBER TO UNSIGNED-NUMBER
DISPLAY UNSIGNED-NUMBER
results in: 255. But...
COMPUTE UNSIGNED-NUMBER = SIGNED-NUMBER + ZERO
results in: 1 (unsigned)
So to answer your question, this could be classified as a technique used to cast signed numbers into unsigned numbers. However, in the code example you gave it makes no sense at all.
Note that the definition of "I" was (likely) coded by one programmer and of WS-SUB by another (naming is different, VALUE clause is different for same purpose).
Programmer 2 looks like "old school": PIC S9(4), signed and taking up all the digits which "fit" in a half-word. The S9(9) is probably "far over the top" as per range of possible values, but such things concern Programmer 1 not at all.
Probably Programmer 2 had concerns about using an S9(9) COMP for something requiring (perhaps many) fewer than 9999 "things". "I'll be 'efficient' without changing the existing code". It seems to me unlikely that the field was ever defined as unsigned.
A COMP/COMP-4 with nine digits does have a performance penalty when used for calculations. Try "ADD 1" to a 9(9) and a 9(8) and a 9(10) and compare the generated code. If you can have nine digits, define with 9(10), otherwise 9(8), if you need a fullword.
Programmer 2 knows something of this.
The COMPUTE with + 0 is probably deliberate. Why did Programmer 2 use the COMPUTE like that (the original question)?
Now it is going to get complicated.
There are two "types" of "binary" fields on the Mainframe: those which will contain values limited by the PICture clause (USAGE BINARY, COMP and COMP-4); those which contain values limited by the field size (USAGE COMP-5).
With BINARY/COMP/COMP-4, the size of the field is determined from the PICture, and so are the values that can be held. PIC 9(4) is a halfword, with a maximum value of 9999. PIC S9(4) is a halfword with values -9999 through +9999.
With COMP-5 (Native Binary), the PICture just determines the size of the field, all the bits of the field are relevant for the value of the field. PIC 9(1) to 9(4) define halfwords, pic 9(5) to 9(9) define fullwords, and 9(10) to 9(18) define doublewords. PIC 9(1) can hold a maximum of 65535, S9(1) -32,768 through +32,767.
All well and good. Then there is the compiler option TRUNC. It has three settings: STD (the default), BIN, and OPT.
BIN can be considered to have the most far-reaching effect. BIN makes BINARY/COMP/COMP-4 behave like COMP-5. Everything becomes, in effect, COMP-5. PICtures for binary fields are ignored, except in determining the size of the field (and, curiously, with ON SIZE ERROR, which "errors" when the maxima according to the PICture are exceeded). Native Binary, in IBM Enterprise COBOL, generates, in the main though not exclusively, the "slowest" code. Truncation is to field size (halfword, fullword, doubleword).
STD, the default, is "standard" truncation. This truncates to "PICture". It is therefore a "decimal" truncation.
OPT is for "performance". With OPT, the compiler truncates in whatever way is the most "performant" for a particular "code sequence". This can mean intermediate values and final values may have "bits set" which are "outside of the range" of the PICture. However, when used as a source, a binary field will always only reflect the value specified by the PICture, even if there are "excess" bits set.
It is important when using OPT that all binary fields "conform to PICture" meaning that code must never rely on bits which are set outside the PICture definition.
Note: Even though OPT has been used, the OPTimizer (OPT(STD) or OPT(FULL)) can still provide further optimisations.
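An illustrative sketch of the difference for a halfword (my field names; the outcomes follow the rules just described, they are not from an actual compile listing):
01 HW PIC 9(4) BINARY.
01 FW PIC 9(8) BINARY VALUE 12345.

COMPUTE HW = FW + 0
*> TRUNC(STD): decimal truncation to the PICture -> HW = 2345
*> TRUNC(BIN): truncation to the halfword size   -> HW = 12345 (fits in 16 bits)
*> TRUNC(OPT): whatever is fastest here; code must not rely on any excess bits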
This is all well and good.
However, a "pickle" can readily ensue if you "mix" TRUNC options, or if the binary definition in a CALLing program is not the same as in the CALLed program. The "mix" can occur if modules within the same run-unit are compiled with different TRUNC options, or if a binary field on a file is written with one TRUNC option and later read with another.
Now, I suspect Programmer 2 encountered something like this: Either, with TRUNC(OPT) they noticed "excess bits" in a field and thought there was a need to deal with them, or, through the "mix" of options in a run-unit or "across file usage" they noticed "excess bits" where there would be a need to do something about it (which was to "remove the mix").
Programmer 2 developed the COMPUTE A = B + 0 to "deal" with a particular problem (perceived or actual) and then applied it generally to their work.
This is a "guess", or, better, a "rationalisation" which works with the known information.
It is a "fake" fix. There was either no problem (the normal way that TRUNC(OPT) works) or the correct resolution was "normalisation" of the TRUNC option across modules/file use.
I do not want loads of people now rushing off and putting COMPUTE A = B + 0 in their code. For a start, they don't know why they are doing it. For a continuation it is the wrong thing to do.
Of course, do not just remove the "+ 0" from any of these that you find. If there is a "mix" of TRUNCs, a program may stop "working".
There is one situation in which I have used "ADD ZERO" for a BINARY/COMP/COMP-4. This is in a "Mickey Mouse" program, a program with no purpose but to try something out. Here I've used it as a method to "trick" the optimizer, as otherwise the optimizer could see unchanging values so would generate code to use literal results as all values were known at compile time. (A perhaps "neater" and more flexible way to do this which I picked up from PhilinOxford, is to use ACCEPT for the field). This is not the case, for certain, with the code in question.
I wonder if a testing version of the sources ever had
COMPUTE WS-SUB = I + 0
ON SIZE ERROR
DISPLAY "WS-SUB overflow"
STOP RUN
END-COMPUTE
with the range test discarded when the developer was satisfied and cleaning up? MOVE doesn't allow an ON SIZE ERROR phrase. That's as much of a reason as I can see. Or perhaps it was developer habit of using COMPUTE to move, as a subtle reminder to question the need for defensive code at every step? Or perhaps not knowing, as Joe pointed out, that the SIZE ERROR phrase would be just as effective without the + 0? Or a maintainer struggled with off-by-one errors and there was a corrective change from 1 to 0 after testing?