I have developed a simple UVM testbench to verify a simple adder, and I use functional coverage to monitor the coverage as well. The adder is 8-bit, with inputs a and b; the output is c, which is 9 bits.
The transaction declares a and b as 8-bit rand logic.
In the sequence, I run it with repeat(100), which randomizes a and b and drives them to the DUT. The best case for functional coverage in this scenario is (100/256)*100%, i.e. around 40%, assuming that no value is repeated. I sample the coverage in my scoreboard and read the coverage result in the env.
Here are my code snippets:
// scoreboard class
covergroup cg;
  a : coverpoint sb_item.a;
  b : coverpoint sb_item.b;
endgroup
...
function void write(input input_seq_item i);
  sb_item = i;
  if (sb_item.c == sb_item.a + sb_item.b)
    begin
      `uvm_info("SB", "OK!", UVM_LOW)
      cg.sample();
    end
  else
    // `uvm_error takes no verbosity argument, so UVM_LOW is dropped here
    `uvm_error("SB", $sformatf("ERROR! %b + %b = %b", sb_item.a, sb_item.b, sb_item.c))
endfunction
// env class
...
task run_phase(uvm_phase phase);
  sb.cg.stop();
  phase.raise_objection(this);
  sb.cg.start();
  seq.start(sqr);
  phase.drop_objection(this);
  sb.cg.stop();
  `uvm_info("env", $sformatf("The coverage collected is %f", sb.cg.a.get_coverage()), UVM_LOW)
endtask
...
When I run the code, I get coverage of around 81%. The results are shown below:
# KERNEL: UVM_INFO /home/runner/monitor.sv(56) # 996: uvm_test_top.env.sb [SB] OK!
# KERNEL: UVM_INFO /home/runner/env.sv(34) # 996: uvm_test_top.env [env] The coverage collected is 85.937500
# KERNEL: UVM_INFO /home/build/vlib1/vlib/uvm-1.2/src/base/uvm_objection.svh(1271) # 996: reporter [TEST_DONE] 'run' phase is ready to proceed to the 'extract' phase
# KERNEL: UVM_INFO /home/build/vlib1/vlib/uvm-1.2/src/base/uvm_report_server.svh(855) # 996: reporter [UVM/REPORT/SERVER]
# KERNEL: --- UVM Report Summary ---
# KERNEL:
# KERNEL: ** Report counts by severity
# KERNEL: UVM_INFO : 204
# KERNEL: UVM_WARNING : 0
# KERNEL: UVM_ERROR : 0
# KERNEL: UVM_FATAL : 0
# KERNEL: ** Report counts by id
# KERNEL: [Driver] 100
# KERNEL: [RNTST] 1
# KERNEL: [SB] 100
# KERNEL: [TEST_DONE] 1
# KERNEL: [UVM/RELNOTES] 1
# KERNEL: [env] 1
# KERNEL:
# RUNTIME: Info: RUNTIME_0068 uvm_root.svh (521): $finish called.
# KERNEL: Time: 996 ns, Iteration: 61, Instance: /top, Process: #INITIAL#14_0#.
# KERNEL: stopped at time: 996 ns
# VSIM: Simulation has finished. There are no more test vectors to simulate.
exit
# FCOVER: Covergroup Coverage data has been saved to "fcover.acdb" database.
# VSIM: Simulation has finished.
Can anyone explain what mistake I am making here? Is the coverage cumulative over all runs?
Whether the coverage is cumulative over all runs depends on what you're analyzing; I'm guessing you're analyzing only one simulation, though. Your calculation is correct: the maximum coverage you could get per test is about 40% (basically 40% for each coverpoint, averaged together), but that is highly unlikely to be reached.
What you also need to look at (aside from the percentage) is which bins are actually getting created. I don't think you're getting a bin for each value of a or b; some of them are probably clumped together (i.e. a in [0:3] would be one bin and so on, leaving you with 256/4 = 64 bins instead of 256). Each coverpoint has an option called auto_bin_max, whose default value is 64. If you set this to 256, or explicitly declare a (range) bin for each value that a or b could take, you'll get the coverage percentage you'd expect.
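For instance, a minimal sketch of raising auto_bin_max so that each 8-bit value gets its own bin (the names match the snippets above):
covergroup cg;
  a : coverpoint sb_item.a { option.auto_bin_max = 256; }
  b : coverpoint sb_item.b { option.auto_bin_max = 256; }
endgroup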
As a side note, you typically don't create a coverage bin for every value of a data item, since this doesn't really make sense: in a typical device there are so many values the data items could take that you can't verify them all. What you would do, however, is declare bins for the more "interesting" situations. In your case, the interesting values are 0, 8'hff and anything in between. What's also particularly interesting is crossing a and b and checking the combinations, especially the case where a and b are both 8'hff (as that's where your result would overflow 8 bits and output a carry).
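As an illustration, a sketch of such a covergroup with "interesting" bins and a cross (the bin names are made up):
covergroup cg;
  a : coverpoint sb_item.a {
    bins zero   = {0};
    bins max    = {8'hff};
    bins mid[4] = {[1:8'hfe]};  // a few range bins for everything in between
  }
  b : coverpoint sb_item.b {
    bins zero   = {0};
    bins max    = {8'hff};
    bins mid[4] = {[1:8'hfe]};
  }
  a_x_b : cross a, b;  // the a == b == 8'hff combination exercises the carry-out
endgroup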
Could someone please give me advice on making an OpenBMC image for the Raspberry Pi platform?
Before trying, I looked through the related documents and came to believe an OpenBMC image can work on a Raspberry Pi,
like OpenBMC with Raspberry Pi (2 or 3) and build bmcweb?
and https://kevinleeblog.github.io/project1/2019/11/25/openbmc-for-raspberry-pi-zero/.
So, I followed these instructions and tried the following steps.
#1: Git clone openbmc.git to my local PC.
tm@tm-VB1:~/Rpi4-64$ git clone https://github.com/openbmc/openbmc.git
I've snipped the logs, but there was no problem.
Receiving objects: 100% (182121/182121), 84.10 MiB | 5.55 MiB/s, done.
Resolving deltas: 100% (96860/96860), done.
#2: set TEMPLATECONF for raspberrypi
tm@tm-VB1:~/Rpi4-64$ export TEMPLATECONF=meta-evb/meta-evb-raspberrypi/conf
tm@tm-VB1:~/Rpi4-64$ echo $TEMPLATECONF
meta-evb/meta-evb-raspberrypi/conf
#3: set up the environment by "openbmc-env"
tm@tm-VB1:~/Rpi4-64/openbmc$ . openbmc-env
### Initializing OE build env ###
I've snipped the logs, but there was no problem. As you know, the script automatically creates a subdirectory, build, under openbmc.
Common targets are:
obmc-phosphor-image
tm@tm-VB1:~/Rpi4-64/openbmc/build$
#4: Change the directory and edit local.conf for my Raspberrypi platform.
tm@tm-VB1:~/Rpi4-64/openbmc/build$ cat ./conf/local.conf
I've snipped the unchanged parts of the log.
MACHINE ??= "raspberrypi4-64" <<< Change here for my platform.
DL_DIR ?= "/home/tm/Yocto/downloads" <<< Add here for build-time reduction at retry.
SSTATE_DIR ?= "/home/tm/Yocto/sstate-cache" <<< Add here for build-time reduction at retry.
#5: Change the FLASH_SIZE variable based on the following suggestion: https://github.com/openbmc/openbmc/issues/3590
tm@tm-VB1:~/Rpi4-64/openbmc/meta-phosphor/classes$ cat image_types_phosphor.bbclass
Snip the log.
# Flash characteristics in KB unless otherwise noted
FLASH_SIZE ?= "131072" <<< I changed only this variable from 32768 to 131072.
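(As an aside, since the variable is declared with ?=, the same override should also work from conf/local.conf in the build directory, without editing the class file; a sketch, untested:)
FLASH_SIZE = "131072"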
#6: bitbake starts.
tm@tm-VB1:~/Rpi4-64/openbmc$ bitbake obmc-phosphor-image
Then, an ERROR happened.
ERROR: Logfile of failure stored in: /home/tm/Rpi/openbmc/build/tmp/work/raspberrypi-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2055074
DEBUG: Executing python function do_generate_static
DEBUG: Executing shell function do_mk_static_nor_image
32768+0 records in
32768+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 0.09147 s, 367 MB/s
DEBUG: Shell function do_mk_static_nor_image finished
DEBUG: Considering file size=495980 name=/home/tm/Rpi/openbmc/build/tmp/deploy/images/raspberrypi/u-boot.bin
DEBUG: Spanning start=0K end=512K
DEBUG: Compare needed=495980 available=524288 margin=28308
484+1 records in
484+1 records out
495980 bytes (496 kB, 484 KiB) copied, 0.00120141 s, 413 MB/s
DEBUG: Considering file size=8266960 name=/home/tm/Rpi/openbmc/build/tmp/deploy/images/raspberrypi/fitImage-obmc-phosphor-initramfs-raspberrypi-raspberrypi
DEBUG: Spanning start=512K end=4864K
>>>DEBUG: Compare needed=8266960 available=4456448 margin=-3810512
ERROR: Image '/home/tm/Rpi/openbmc/build/tmp/deploy/images/raspberrypi/fitImage-obmc-phosphor-initramfs-raspberrypi-raspberrypi' is too large!
DEBUG: Python function do_generate_static finished
It said margin=-3810512.
Now, my 2nd try.
I removed the whole openbmc directory and repeated the same steps as above,
but this time I changed FLASH_SIZE from 32768 to 262144.
The result was the same, as shown below.
ERROR: obmc-phosphor-image-1.0-r0 do_generate_static: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
ERROR: Logfile of failure stored in: /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2061792
ERROR: Task (/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3915 tasks of which 2633 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static
Summary: There were 2 WARNING messages shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
tm@tm-VB1:~/Rpi4/openbmc/build$ cat /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2061792
DEBUG: Executing python function do_generate_static
DEBUG: Executing shell function do_mk_static_nor_image
32768+0 records in
32768+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 0.177223 s, 189 MB/s
DEBUG: Shell function do_mk_static_nor_image finished
DEBUG: Considering file size=548224 name=/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin
DEBUG: Spanning start=0K end=512K
>>>DEBUG: Compare needed=548224 available=524288 margin=-23936
ERROR: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
DEBUG: Python function do_generate_static finished
tm@tm-VB1:~/Rpi4/openbmc/build$
It said margin=-23936.
OK, the image is too large. So, my 3rd try.
I removed the whole openbmc directory and repeated the same steps as above,
but this time I changed FLASH_SIZE from 32768 to 9437184.
The result was the same, as shown below.
ERROR: obmc-phosphor-image-1.0-r0 do_generate_static: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
ERROR: Logfile of failure stored in: /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2058361
ERROR: Task (/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3935 tasks of which 0 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static
Summary: There were 4 WARNING messages shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
tm@tm-VB1:~/Rpi4/openbmc$
tm@tm-VB1:~/Rpi4/openbmc$ cat /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2058361
DEBUG: Executing python function do_generate_static
DEBUG: Executing shell function do_mk_static_nor_image
32768+0 records in
32768+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 0.173685 s, 193 MB/s
DEBUG: Shell function do_mk_static_nor_image finished
DEBUG: Considering file size=548224 name=/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin
DEBUG: Spanning start=0K end=512K
>>>DEBUG: Compare needed=548224 available=524288 margin=-23936
ERROR: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
DEBUG: Python function do_generate_static finished
tm@tm-VB1:~/Rpi4/openbmc$
It said the same margin as in the 256 MB case.
My 4th try.
I removed the whole openbmc directory and repeated the same steps as above.
I changed MACHINE ??= "raspberrypi4-64" to "raspberrypi2",
and this time I changed FLASH_SIZE from 32768 to 33554432.
The result was the same as before.
My 5th try.
I removed the whole openbmc directory and repeated the same steps as above.
I used MACHINE ??= "raspberrypi2",
and this time I changed FLASH_SIZE from 32768 to 67108864.
The result was the same as before.
After I tried several variations, it always said "image is too large", even though I changed FLASH_SIZE to much, much larger values.
So, I am wondering if I have missed some important configuration, or whether some parameter other than FLASH_SIZE needs fixing.
By the way, I tried romulus and it built successfully.
My environment is ubuntu-20.04.2.0-desktop-amd64.
I would really appreciate it if someone could kindly give me advice to make this work.
Interesting. I don't have a quick fix for you, but I did notice that the partition that is oversized is the u-boot partition. U-boot is a smaller, separate binary installed on the machine. It looks as if your u-boot build is over 512K and the partition is set to 512K. Your flash size is massive:
FLASH_SIZE = "9437184" is more than a gig (because FLASH_SIZE is in KB, that works out to 9 GiB).
If I were you, I would first try to build an older version of openbmc for Raspberry Pi. (It used to work, so you just need to find the commit before u-boot grew too big.) Use git to move back a month at a time until you find one that works.
If that does not work, I would try to modify the partition table.
Here is where you are failing.
This looks fine; building the u-boot image looks fine.
Increasing the kernel offset makes it build, but the other targets in openbmc will not be happy with this solution. So maybe meta-raspberrypi will have to override the partition table (if u-boot cannot be shrunk).
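For illustration, here is a sketch of what such an override might look like in conf/local.conf. The variable names are the ones I recall from image_types_phosphor.bbclass, and the offsets (in KB) are hypothetical, untested values that give u-boot 1 MB instead of 512 KB and shift the later partitions accordingly:
FLASH_SIZE = "131072"
FLASH_UBOOT_OFFSET = "0"
FLASH_KERNEL_OFFSET = "1024"  # default 512; doubles the space available to u-boot
FLASH_ROFS_OFFSET = "5376"    # default 4864 per the logs, shifted by the same 512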
Whatever you do, open an issue on the GitHub and share your changes. Also use the Discord and Gerrit.
I just replicated this issue. We should fix it.
I'm trying to run a peak calling tool within a conda environment using snakemake.
The Snakefile looks like this (I've only included the rules connected to the problem):
rule all:
    input:
        expand('{project}/{organism}/{mapper}/seacr/{pattern}.auc.threshold.bed',
               pattern = PATTERN, sample = IDS, organism = config['org'],
               project = config['project'], mapper = config['mapper'])

# SEACR - run the peak calling
rule seacr_run:
    input:
        IP = '{project}/{organism}/{mapper}/seacr/IP_{PATTERN}.bedgraph',
        IgG = '{project}/{organism}/{mapper}/seacr/IgG_{PATTERN}.bedgraph',
    output:
        bed1 = '{project}/{organism}/{mapper}/seacr/{PATTERN}.auc.threshold.bed',
    shell:
        '''
        bash /fs/home/yeroslaviz/SEACR/SEACR_1.3.sh {input.IP} 0.01 non stringent {output.bed1}
        '''
When running the -nps dry run of the snakemake command, I get the correct command printed to STDOUT:
> snakemake -nps /fs/pool/pool-bcfngs/scripts/P193.ChipSeq.Snakemake -j 100
...
Building DAG of jobs...
Job counts:
count jobs
1 all
1 seacr_run
2
[Tue Mar 3 13:56:19 2020]
rule seacr_run:
input: P193/Mmu.GrCm38/bowtie2/seacr/IP_H3K4m3.bedgraph, P193/Mmu.GrCm38/bowtie2/seacr/IgG_H3K4m3.bedgraph
output: P193/Mmu.GrCm38/bowtie2/seacr/H3K4m3.auc.threshold.bed
jobid: 22
wildcards: project=P193, organism=Mmu.GrCm38, mapper=bowtie2, PATTERN=H3K4m3
bash /fs/home/yeroslaviz/SEACR/SEACR_1.3.sh P193/Mmu.GrCm38/bowtie2/seacr/IP_H3K4m3.bedgraph 0.01 non stringent P193/Mmu.GrCm38/bowtie2/seacr/H3K4m3.auc.threshold.bed
[Tue Mar 3 13:56:19 2020]
localrule all:
...
Job counts:
count jobs
1 all
1 seacr_run
2
This was a dry-run (flag -n). The order of jobs does not reflect the order of execution.
When running the command above on the command line, the tool works without problems. But when I try to run it within the snakemake workflow, I get the following error:
Waiting at most 5 seconds for missing files.
MissingOutputException in line 67 of /fs/pool/pool-bcfngs/scripts/P193.ChipSeq.Snakemake:
Missing files after 5 seconds:
P193/Mmu.GrCm38/bowtie2/seacr/H3K4m3.auc.threshold.bed
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
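(The wait time the message refers to can be raised with the flag it names; a sketch, where 60 seconds is an arbitrary value:)
snakemake --latency-wait 60 -ps /fs/pool/pool-bcfngs/scripts/P193.ChipSeq.Snakemake -j 100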
Can anyone explain what is happening?
Thanks
I have a problem. We use the MATLAB testing framework to analyze our codebase. To track the results in our CI system, TeamCity, we use the TAP format. Here we have the following problem:
If a test includes a TestClassSetup section, the TAP results show up only at the end, and not during the execution. This results in a few issues for us:
Timestamps created by the CI system might not be correct
If informative output is given within a test case, it is not shown together with the assertion statement.
We use the following (simplified) snippet to identify our TestSuite and execute it:
testSuite = matlab.unittest.TestSuite.fromFolder('.');
runner = matlab.unittest.TestRunner.withNoPlugins();
runner.addPlugin(matlab.unittest.plugins.TAPPlugin.producingOriginalFormat());
results = runner.run(testSuite);
With the following two classes the issue is reproducible (the content is of course made up & meaningless...):
classdef SomeTest < matlab.unittest.TestCase
    properties (TestParameter)
        param = {1, 2};
        param2 = {1, 2};
    end
    methods (TestClassSetup)
        function someSetup(testCase)
            pause(0.1);
        end
    end
    methods (Test)
        function testMethod(self, param, param2)
            fprintf('I''m here, with the params: %f/%f\n', param, param2);
            pause(0.1);
            self.assertGreaterThan(param, param2);
        end
    end
end
classdef SomeOtherTest < matlab.unittest.TestCase
    properties (TestParameter)
        param = {1, 2};
        param2 = {1, 2};
    end
    methods (Test)
        function testMethod(self, param, param2)
            fprintf('I''m here, with the params: %f/%f\n', param, param2);
            pause(0.1);
            self.assertGreaterThan(param, param2);
        end
    end
end
If you copy all three files into one folder and execute the runner, you'll see the following output (assertions are simplified):
1..8
I'm here, with the params: 1.000000/1.000000
not ok 1 - SomeOtherTest/testMethod(param=1,param2=1)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=1,param2=1) and it did not run to completion.
# ================================================================================
#
I'm here, with the params: 1.000000/2.000000
not ok 2 - SomeOtherTest/testMethod(param=1,param2=2)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=1,param2=2) and it did not run to completion.
# ================================================================================
#
I'm here, with the params: 2.000000/1.000000
ok 3 - SomeOtherTest/testMethod(param=2,param2=1)
I'm here, with the params: 2.000000/2.000000
not ok 4 - SomeOtherTest/testMethod(param=2,param2=2)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=2,param2=2) and it did not run to completion.
# ================================================================================
#
I'm here, with the params: 1.000000/1.000000
I'm here, with the params: 1.000000/2.000000
I'm here, with the params: 2.000000/1.000000
I'm here, with the params: 2.000000/2.000000
not ok 5 - SomeTest/testMethod(param=1,param2=1)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=1,param2=1) and it did not run to completion.
# ================================================================================
#
not ok 6 - SomeTest/testMethod(param=1,param2=2)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=1,param2=2) and it did not run to completion.
# ================================================================================
#
ok 7 - SomeTest/testMethod(param=2,param2=1)
not ok 8 - SomeTest/testMethod(param=2,param2=2)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=2,param2=2) and it did not run to completion.
# ================================================================================
What I would expect is that in the second case, too, the assertion statements (and the ok / not ok TAP flags) are aligned with the fprintf statements.
Does anyone have an idea?
The reason the presence of TestClassSetup "defers" the printing of the TAP output is that TAP is a streaming format, and when there is any TestClassSetup code the framework does not yet know whether the tests will pass or not. For example, if you have a failure in TestClassTeardown (or through an addTeardown function call in TestClassSetup), the end result is that all the tests that shared the TestClassSetup code will fail.
However, given it's a streaming format, the TAPPlugin wants to print each result as soon as it knows it. There is actually a TestRunnerPlugin method specifically designed for this case, the reportFinalizedResult method.
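For completeness, a minimal sketch of a custom plugin hooking that method (the class name and message format are made up; the override signature follows the TestRunnerPlugin documentation):
classdef StreamingResultPlugin < matlab.unittest.plugins.TestRunnerPlugin
    methods (Access = protected)
        function reportFinalizedResult(plugin, pluginData)
            % Invoked as soon as each test result is finalized, even when
            % TestClassSetup/Teardown defers the pass/fail decision.
            r = pluginData.TestResult;
            fprintf('finalized: %s (passed=%d)\n', r.Name, r.Passed);
            reportFinalizedResult@matlab.unittest.plugins.TestRunnerPlugin(plugin, pluginData);
        end
    end
end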
The fundamental issue here is printing to the log using disp or fprintf, which I would recommend you avoid. It is less than ideal because the plugins don't have any insight into the information printed using fprintf, and you can't redirect this information anywhere other than the MATLAB command line.
If you instead use the testCase.log method, you will get the diagnostics in the right place, and it will be more flexible: you can log at different levels, so you can turn the output on or off as you please and control whether you want to see it. It will also not only go to the command line, but will flow much more nicely into the TAP stream, as well as the JUnit XML and the PDF/HTML test reports and so on. For your case it looks like the following:
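A sketch of the test method using log instead of fprintf (logging at level 3, "Detailed", is my choice to match the verbosity discussion below):
function testMethod(self, param, param2)
    % Routed through the framework, so plugins can place it correctly
    self.log(3, sprintf('I''m here, with the params: %f/%f', param, param2));
    pause(0.1);
    self.assertGreaterThan(param, param2);
end
Then run as before: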
runner = matlab.unittest.TestRunner.withNoPlugins();
runner.addPlugin(matlab.unittest.plugins.TAPPlugin.producingOriginalFormat());
results = runner.run(testSuite);
First you run it, and you don't see any of the log calls, because they were logged at verbosity 3 and the default is lower (level 1, I believe):
1..8
not ok 1 - SomeOtherTest/testMethod(param=value1,param2=value1)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value1,param2=value1) and it did not run to completion.
# ================================================================================
not ok 2 - SomeOtherTest/testMethod(param=value1,param2=value2)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value1,param2=value2) and it did not run to completion.
# ================================================================================
ok 3 - SomeOtherTest/testMethod(param=value2,param2=value1)
not ok 4 - SomeOtherTest/testMethod(param=value2,param2=value2)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value2,param2=value2) and it did not run to completion.
# ================================================================================
not ok 5 - SomeTest/testMethod(param=value1,param2=value1)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value1,param2=value1) and it did not run to completion.
# ================================================================================
not ok 6 - SomeTest/testMethod(param=value1,param2=value2)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value1,param2=value2) and it did not run to completion.
# ================================================================================
ok 7 - SomeTest/testMethod(param=value2,param2=value1)
not ok 8 - SomeTest/testMethod(param=value2,param2=value2)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value2,param2=value2) and it did not run to completion.
# ================================================================================
However, if you configure the TAP plugin (or the version 13 TAP plugin, or the report plugin, etc.) to log at level three, then you see these diagnostics, and they appear at the expected location as well:
runner = matlab.unittest.TestRunner.withNoPlugins();
runner.addPlugin(matlab.unittest.plugins.TAPPlugin.producingOriginalFormat('Verbosity', 3));
results = runner.run(testSuite);
You see the output. Also try TAP version 13; the structured output it provides might give an even better result.
1..8
not ok 1 - SomeOtherTest/testMethod(param=value1,param2=value1)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:18): I'm here, with the params: 1.000000/1.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value1,param2=value1) and it did not run to completion.
# ================================================================================
not ok 2 - SomeOtherTest/testMethod(param=value1,param2=value2)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:19): I'm here, with the params: 1.000000/2.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value1,param2=value2) and it did not run to completion.
# ================================================================================
ok 3 - SomeOtherTest/testMethod(param=value2,param2=value1)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:19): I'm here, with the params: 2.000000/1.000000
# ================================================================================
not ok 4 - SomeOtherTest/testMethod(param=value2,param2=value2)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:19): I'm here, with the params: 2.000000/2.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value2,param2=value2) and it did not run to completion.
# ================================================================================
not ok 5 - SomeTest/testMethod(param=value1,param2=value1)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:19): I'm here, with the params: 1.000000/1.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value1,param2=value1) and it did not run to completion.
# ================================================================================
not ok 6 - SomeTest/testMethod(param=value1,param2=value2)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:19): I'm here, with the params: 1.000000/2.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value1,param2=value2) and it did not run to completion.
# ================================================================================
ok 7 - SomeTest/testMethod(param=value2,param2=value1)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:20): I'm here, with the params: 2.000000/1.000000
# ================================================================================
not ok 8 - SomeTest/testMethod(param=value2,param2=value2)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:20): I'm here, with the params: 2.000000/2.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value2,param2=value2) and it did not run to completion.
# ================================================================================
Hope that helps!
I just installed DrRacket, and tried out the language "How To Design Programs - Beginning Student".
Racket - A programmable programming language
Racket - Getting Started
I run (+ 1 1), and it takes over ten seconds for this to show up:
Welcome to DrRacket, version 6.5 [3m].
Language: Beginning Student; memory limit: 128 MB.
2
>
As far as I can tell, my installation is pretty much "out of the box".
What I'm wondering is if my experience is unusual, and if there's any obvious way to troubleshoot it if so (I've looked around the settings and didn't find anything obvious to tweak), or if maybe the whole HTDP language was quietly abandoned or something...?
EDIT 1
I have these files:
/usr/share/racket $
find -iname "*htdp*.zo"
./pkgs/htdp-lib/lang/private/compiled/create-htdp-executable_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-reader_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-beginner-abbr-reader_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-langs-save-file-prefix_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-advanced-reader_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-intermediate-lambda-reader_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-advanced_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-beginner-abbr_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-intermediate_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-intermediate-reader_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-langs_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-beginner_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-intermediate-lambda_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-beginner-reader_rkt.zo
./pkgs/htdp-doc/scribblings/htdp-langs/compiled/htdp-langs_scrbl.zo
./pkgs/htdp-doc/scribblings/htdp-langs/compiled/htdp-ptr_scrbl.zo
./pkgs/htdp-doc/htdp/compiled/htdp_scrbl.zo
./pkgs/htdp-doc/htdp/compiled/htdp-lib_scrbl.zo
./pkgs/htdp-doc/teachpack/htdp/scribblings/compiled/htdp_scrbl.zo
./pkgs/htdp-doc/teachpack/2htdp/scribblings/compiled/2htdp_scrbl.zo
EDIT 2 - CPU and hard drive specs
CPU
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 28
model name : Intel(R) Atom(TM) CPU N450 @ 1.66GHz
stepping : 10
microcode : 0x107
cpu MHz : 1000.000
cache size : 512 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm movbe lahf_lm dtherm
bugs :
bogomips : 3325.00
clflush size : 64
cache_alignment : 64
address sizes : 32 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 28
model name : Intel(R) Atom(TM) CPU N450 @ 1.66GHz
stepping : 10
microcode : 0x107
cpu MHz : 1000.000
cache size : 512 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm movbe lahf_lm dtherm
bugs :
bogomips : 3325.00
clflush size : 64
cache_alignment : 64
address sizes : 32 bits physical, 48 bits virtual
power management:
HD
$ sudo hdparm -I /dev/sda
/dev/sda:
ATA device, with non-removable media
Model Number: Hitachi HTS545016B9A300
Serial Number: 100324PBPB06ECC0K6XL
Firmware Revision: PBBOC60F
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6; Revision: ATA8-AST T13 Project D1697 Revision 0b
Standards:
Used: unknown (minor revision code 0x0028)
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 312581808
Logical/Physical Sector size: 512 bytes
device size with M = 1024*1024: 152627 MBytes
device size with M = 1000*1000: 160041 MBytes (160 GB)
cache/buffer size = 7208 KBytes (type=DualPortCache)
Form Factor: 2.5 inch
Nominal Media Rotation Rate: 5400
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Vendor, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Advanced power management level: 254
Recommended acoustic management value: 128, current value: 254
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* NOP cmd
* DOWNLOAD_MICROCODE
* Advanced Power Management feature set
Power-Up In Standby feature set
* SET_FEATURES required to spinup after power up
SET_MAX security extension
Automatic Acoustic Management feature set
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
* IDLE_IMMEDIATE with UNLOAD
* WRITE_UNCORRECTABLE_EXT command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Native Command Queueing (NCQ)
* Host-initiated interface power management
* Phy event counters
* NCQ priority information
Non-Zero buffer offsets in DMA Setup FIS
* DMA Setup Auto-Activate optimization
Device-initiated interface power management
In-order data delivery
* Software settings preservation
* SMART Command Transport (SCT) feature set
* SCT Write Same (AC2)
* SCT Error Recovery Control (AC3)
* SCT Features Control (AC4)
* SCT Data Tables (AC5)
Security:
Master password revision code = 65534
supported
not enabled
not locked
frozen
not expired: security count
supported: enhanced erase
64min for SECURITY ERASE UNIT. 66min for ENHANCED SECURITY ERASE UNIT.
Logical Unit WWN Device Identifier: 5000cca5ffc040a7
NAA : 5
IEEE OUI : 000cca
Unique ID : 5ffc040a7
Checksum: correct
EDIT 3 - command-line times
I ran (+ 1 1) in HTDP-beginner 3 times -- over 5 seconds each.
$ time racket -t racket_HTDP_beginner.rkt
2
5.60user 1.04system 0:08.46elapsed 78%CPU (0avgtext+0avgdata 127968maxresident)k
5496inputs+0outputs (46major+40955minor)pagefaults 0swaps
$ time racket -t racket_HTDP_beginner.rkt
2
5.51user 0.67system 0:06.71elapsed 92%CPU (0avgtext+0avgdata 128124maxresident)k
24inputs+0outputs (0major+41790minor)pagefaults 0swaps
$ time racket -t racket_HTDP_beginner.rkt
2
5.41user 0.67system 0:06.55elapsed 92%CPU (0avgtext+0avgdata 128180maxresident)k
0inputs+0outputs (0major+36683minor)pagefaults 0swaps
I ran (+ 1 1) in #lang racket 3 times -- a bit over 2 seconds each.
$ time racket -t racket_lang_racket.rkt
2
2.13user 0.25system 0:02.71elapsed 87%CPU (0avgtext+0avgdata 64996maxresident)k
0inputs+0outputs (0major+12437minor)pagefaults 0swaps
$ time racket -t racket_lang_racket.rkt
2
2.15user 0.25system 0:02.63elapsed 91%CPU (0avgtext+0avgdata 61700maxresident)k
0inputs+0outputs (0major+15853minor)pagefaults 0swaps
$ time racket -t racket_lang_racket.rkt
2
2.28user 0.29system 0:02.89elapsed 89%CPU (0avgtext+0avgdata 61500maxresident)k
0inputs+0outputs (0major+15015minor)pagefaults 0swaps
EDIT 4
Running free -h every second while DrRacket is running (+ 1 1) in lang HTDP-beginner
(not running any other applications besides DrRacket and my basic system, i.e. window manager etc.)
(this is fish shell, by the way):
http://pastebin.com/2RdZAuXj
At that point I had to kill DrRacket because everything was freezing up.
Anyway, yeah, it's leaking, obviously.
Every time I re-ran the code in DrRacket, memory usage went up and stayed up.
I had only run it about twenty-two-ish (?) more times (so maybe thirty-ish in total?) by the point where it started getting near the limit, and I killed it to unfreeze the system.
I guess I should try this with normal #lang racket and see what happens...
EDIT 5
Yup, it leaks the same way:
http://pastebin.com/373PNnY7
I am 90% sure that the reason you had to wait is that your installation of Racket wasn't done properly.
During installation the program setup-plt needs to be run. It precompiles all Racket files (.rkt) into so-called zo-files. If this step is omitted, then DrRacket does the compilation for you the first time a file is needed.
In your case (I am guessing) it is your first time using the Beginner language, so all the files relating to it need to be compiled. And that takes a while.
The best solution is to use the official installers from http://download.racket-lang.org/; they all include precompiled zo-files.
If you happen to have used such an installer, then try again, and if the problem persists, do file a bug report (use the bug report item in the Help menu in DrRacket).
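(If you want to trigger the precompilation by hand, a minimal sketch, assuming the raco launcher from the same installation is on your PATH; expect it to take a long while on a slow machine:)
raco setup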
I have this little nonsense script here which I am executing in MATLAB R2013b:
clear all;
n = 2000;
times = 50;
i = 0;

tCPU = tic;
disp 'CPU::'
A = rand(n, n);
B = rand(n, n);
disp '::Go'
for i = 0:times
    CPU = A * B;
end
tCPU = toc(tCPU);

tGPU = tic;
disp 'GPU::'
A = gpuArray(A);
B = gpuArray(B);
disp '::Go'
for i = 0:times
    GPU = A * B;
end
tGPU = toc(tGPU);

fprintf('On CPU: %.2f sec\nOn GPU: %.2f sec\n', tCPU, tGPU);
Unfortunately, after execution I receive a message from Windows saying: "Display driver stopped working and has recovered."
I assume this means that Windows did not get a response from my graphics card's driver or something. The script returned without errors:
>> test
CPU::
::Go
GPU::
::Go
On CPU: 11.01 sec
On GPU: 2.97 sec
But no matter whether the GPU runs out of memory or not, MATLAB is unable to use the GPU device again until I restart it. If I don't restart MATLAB, I just receive warnings from CUDA:
>> test
Warning: An unexpected error occurred during CUDA
execution. The CUDA error was:
CUDA_ERROR_LAUNCH_TIMEOUT
> In test at 1
Warning: An unexpected error occurred during CUDA
execution. The CUDA error was:
CUDA_ERROR_LAUNCH_TIMEOUT
> In test at 1
Warning: An unexpected error occurred during CUDA
execution. The CUDA error was:
CUDA_ERROR_LAUNCH_TIMEOUT
> In test at 1
Warning: An unexpected error occurred during CUDA
execution. The CUDA error was:
CUDA_ERROR_LAUNCH_TIMEOUT
> In test at 1
CPU::
::Go
GPU::
Error using gpuArray
An unexpected error occurred during CUDA execution.
The CUDA error was:
the launch timed out and was terminated
Error in test (line 21)
A = gpuArray(A);
Does anybody know how to avoid this issue or what I am doing wrong here?
If needed, my GPU Device:
>> gpuDevice
ans =
CUDADevice with properties:
Name: 'GeForce GTX 660M'
Index: 1
ComputeCapability: '3.0'
SupportsDouble: 1
DriverVersion: 6
ToolkitVersion: 5
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 2.1475e+09
FreeMemory: 1.9037e+09
MultiprocessorCount: 2
ClockRateKHz: 950000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
The key piece of information is this part of the gpuDevice output:
KernelExecutionTimeout: 1
This means that the host display driver is active on the GPU you are running the compute jobs on. The NVIDIA display driver contains a watchdog timer which kills any task that takes more than a predefined amount of time without yielding control back to the driver for screen refresh. This is intended to prevent the situation where a long-running or stuck compute job renders the machine unresponsive by freezing the display. The runtime of your MATLAB script is clearly exceeding the display driver watchdog timer limit. Once that happens, the compute context held on the device is destroyed and MATLAB can no longer operate with the device. You might be able to reinitialise the context by calling reset, which I guess runs cudaDeviceReset() under the covers.
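(A minimal sketch of that reset call; reset on the GPUDevice object is part of the Parallel Computing Toolbox API:)
g = gpuDevice;  % handle to the currently selected GPU
reset(g);       % tears down the CUDA context and clears the device's memory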
There is a lot of information about this watchdog timer on the interweb, for example this Stack Overflow question. How to modify the timeout depends on your OS and hardware. The simplest way to avoid it is to not run CUDA code on a display GPU, or to increase the granularity of your compute jobs so that no single operation has a runtime which exceeds the timeout limit. Or just write faster code...