Jest code coverage with tests written in CoffeeScript

Is there a way to run the Jest coverage tool over unit tests that are written in CoffeeScript? The coverage report always tells me 100% coverage.
jest --coverage
----------|----------|----------|----------|----------|----------------|
File      |  % Stmts | % Branch |  % Funcs |  % Lines |Uncovered Lines |
----------|----------|----------|----------|----------|----------------|
All files |      100 |      100 |      100 |      100 |                |
----------|----------|----------|----------|----------|----------------|
I am already transpiling CoffeeScript to JavaScript with the following Jest preprocessor:
var coffee = require('coffee-script');

module.exports = {
  // CoffeeScript files can be .coffee or .litcoffee
  process: function(src, path) {
    if (coffee.helpers.isCoffee(path)) {
      return coffee.compile(src, {
        bare: true,
        literate: coffee.helpers.isLiterate(path)
      });
    }
    return src;
  }
};
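For reference, this is roughly how such a preprocessor is wired up in package.json — a sketch based on older Jest releases, where the relevant keys were scriptPreprocessor and testFileExtensions; newer Jest versions use transform instead, so check the docs for your version:
{
  "jest": {
    "scriptPreprocessor": "<rootDir>/preprocessor.js",
    "testFileExtensions": ["coffee", "litcoffee", "js"],
    "moduleFileExtensions": ["coffee", "litcoffee", "js"]
  }
}
Note that coverage is then measured against the transpiled JavaScript, so uncovered line numbers refer to the compiled output rather than the original .coffee source.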

Related

How to solve "Step definition is not found" error: StepDefinitionNotFoundError

Here is my feature file - a.feature:
Scenario Outline: Some outline
    Given something
    When <thing> is moved to <position>
    Then something else

    Examples:
    | thing | position |
    | 1     | 1        |
I save it as /tmp/a.feature.
Here is my pytest step file (/tmp/a.py):
from pytest_bdd import (
    given,
    scenario,
    then,
    when,
)

@scenario('./a.feature', 'Some outline')
def test_some_outline():
    """Some outline."""

@given('something')
def something():
    """something."""
    pass

@when('<thing> is moved to <position>')
def thing_is_moved_to_position(thing, position):
    assert isinstance(thing, int)
    assert isinstance(position, int)

@then('something else')
def something_else():
    """something else."""
    pass
When I run it:
$ pwd
/tmp
$ pytest ./a.py
............
............
E pytest_bdd.exceptions.StepDefinitionNotFoundError: Step definition is not found: When "1 is moved to 1". Line 3 in scenario "Some outline" in the feature "/tmp/a.feature"
/home/cyan/.local/lib/python3.10/site-packages/pytest_bdd/scenario.py:192: StepDefinitionNotFoundError
============= short test summary info =============
FAILED a.py::test_some_outline[1-1] - pytest_bdd.exceptions.StepDefinitionNotFoundError: Step definition is not found: When "1 is moved to 1". Line 3 in scenario "Some outli...
============ 1 failed in 0.09s ============
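In recent pytest-bdd releases, angle-bracket placeholders in outline steps are not matched literally; the substituted step text (here: When "1 is moved to 1") has to be captured with a step parser. A hedged sketch of the step file rewritten that way, assuming pytest-bdd 6 or later:
from pytest_bdd import given, parsers, scenario, then, when

@scenario('./a.feature', 'Some outline')
def test_some_outline():
    """Some outline."""

@given('something')
def something():
    """something."""

# parsers.parse captures the example values; :d converts them to int
@when(parsers.parse('{thing:d} is moved to {position:d}'))
def thing_is_moved_to_position(thing, position):
    assert isinstance(thing, int)
    assert isinstance(position, int)

@then('something else')
def something_else():
    """something else."""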

Yocto (Zeus) perf build fails

I want to build perf on Yocto (Zeus branch), for an image without python2. The recipe is this one:
https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/recipes-kernel/perf/perf.bb?h=zeus-22.0.4
Running this recipe yields this error:
| ERROR: Execution of '/home/yocto/poseidon-build/tmp/work/imx6dl_poseidon_revb-poseidon-linux-gnueabi/perf/1.0-r9/temp/run.do_compile.19113' failed with exit code 1:
| make: Entering directory '/home/yocto/poseidon-build/tmp/work/imx6dl_poseidon_revb-poseidon-linux-gnueabi/perf/1.0-r9/perf-1.0/tools/perf'
| BUILD: Doing 'make -j4' parallel build
| Warning: arch/x86/include/asm/disabled-features.h differs from kernel
| Warning: arch/x86/include/asm/required-features.h differs from kernel
| Warning: arch/x86/include/asm/cpufeatures.h differs from kernel
| Warning: arch/arm/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/arm64/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/powerpc/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/x86/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/x86/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/x86/include/uapi/asm/kvm_perf.h differs from kernel
| Warning: arch/x86/include/uapi/asm/svm.h differs from kernel
| Warning: arch/x86/include/uapi/asm/vmx.h differs from kernel
| Warning: arch/powerpc/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/s390/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/s390/include/uapi/asm/kvm_perf.h differs from kernel
| Warning: arch/s390/include/uapi/asm/sie.h differs from kernel
| Warning: arch/arm/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/arm64/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/x86/lib/memcpy_64.S differs from kernel
| Warning: arch/x86/lib/memset_64.S differs from kernel
|
| Auto-detecting system features:
| ... dwarf: [ on ]
| ... dwarf_getlocations: [ on ]
| ... glibc: [ on ]
| ... gtk2: [ OFF ]
| ... libaudit: [ OFF ]
| ... libbfd: [ on ]
| ... libelf: [ on ]
| ... libnuma: [ OFF ]
| ... numa_num_possible_cpus: [ OFF ]
| ... libperl: [ OFF ]
| ... libpython: [ on ]
| ... libslang: [ on ]
| ... libcrypto: [ on ]
| ... libunwind: [ on ]
| ... libdw-dwarf-unwind: [ on ]
| ... zlib: [ on ]
| ... lzma: [ on ]
| ... get_cpuid: [ OFF ]
| ... bpf: [ on ]
|
| Makefile.config:352: DWARF support is off, BPF prologue is disabled
| Makefile.config:547: Missing perl devel files. Disabling perl scripting support, please install perl-ExtUtils-Embed/libperl-dev
| Makefile.config:594: Python 3 is not yet supported; please set
| Makefile.config:595: PYTHON and/or PYTHON_CONFIG appropriately.
| Makefile.config:596: If you also have Python 2 installed, then
| Makefile.config:597: try something like:
| Makefile.config:598:
| Makefile.config:599: make PYTHON=python2
| Makefile.config:600:
| Makefile.config:601: Otherwise, disable Python support entirely:
| Makefile.config:602:
| Makefile.config:603: make NO_LIBPYTHON=1
| Makefile.config:604:
| Makefile.config:605: *** . Stop.
| Makefile.perf:205: recipe for target 'sub-make' failed
| make[1]: *** [sub-make] Error 2
| Makefile:68: recipe for target 'all' failed
| make: *** [all] Error 2
| make: Leaving directory '/home/yocto/poseidon-build/tmp/work/imx6dl_poseidon_revb-poseidon-linux-gnueabi/perf/1.0-r9/perf-1.0/tools/perf'
| WARNING: exit code 1 from a shell command.
|
ERROR: Task (/home/yocto/sources/poky/meta/recipes-kernel/perf/perf.bb:do_compile) failed with exit code '1'
NOTE: Tasks Summary: Attempted 1947 tasks of which 1946 didn't need to be rerun and 1 failed.
Looking at the recipe, NO_LIBPYTHON seems like it should be getting set:
PACKAGECONFIG ??= "scripting tui libunwind"
PACKAGECONFIG[dwarf] = ",NO_DWARF=1"
PACKAGECONFIG[scripting] = ",NO_LIBPERL=1 NO_LIBPYTHON=1,perl python3"
# gui support was added with kernel 3.6.35
# since 3.10 libnewt was replaced by slang
# to cover a wide range of kernel we add both dependencies
PACKAGECONFIG[tui] = ",NO_NEWT=1,libnewt slang"
PACKAGECONFIG[libunwind] = ",NO_LIBUNWIND=1 NO_LIBDW_DWARF_UNWIND=1,libunwind"
PACKAGECONFIG[libnuma] = ",NO_LIBNUMA=1"
PACKAGECONFIG[systemtap] = ",NO_SDT=1,systemtap"
PACKAGECONFIG[jvmti] = ",NO_JVMTI=1"
# libaudit support would need scripting to be enabled
PACKAGECONFIG[audit] = ",NO_LIBAUDIT=1,audit"
PACKAGECONFIG[manpages] = ",,xmlto-native asciidoc-native"
Why does it not pick up the flag?
PACKAGECONFIG has scripting in it by default.
PACKAGECONFIG options are defined as follows:
PACKAGECONFIG[f1] = "--with-f1, \
    --without-f1, \
    build-deps-for-f1, \
    runtime-deps-for-f1, \
    runtime-recommends-for-f1, \
    packageconfig-conflicts-for-f1 \
"
PACKAGECONFIG[scripting] is set to ",NO_LIBPERL=1 NO_LIBPYTHON=1,perl python3". Note the leading comma: the (empty) field before it is what gets added when scripting is enabled, NO_LIBPERL=1 NO_LIBPYTHON=1 is what gets added when scripting is not selected, and perl python3 are the build dependencies pulled in when it is enabled.
So if you do not want the Python dependency to be pulled in, just set PACKAGECONFIG to a value without scripting in it, for example as sketched below.
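A minimal sketch, using the pre-honister override syntax that Zeus expects (the bbappend location is just the usual convention, not taken from the question):
# recipes-kernel/perf/perf_%.bbappend in your own layer:
PACKAGECONFIG_remove = "scripting"
or, scoped to just this recipe from conf/local.conf:
PACKAGECONFIG_remove_pn-perf = "scripting"
With scripting removed, NO_LIBPERL=1 NO_LIBPYTHON=1 are passed to the perf makefiles and the Python check above is skipped.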
Though I'm actually surprised the default does not build; that's definitely something the autobuilders test. There's probably something else going on?
c.f.: https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#var-PACKAGECONFIG

Collecting output from Apache Beam pipeline and displaying it to console

I have been working with Apache Beam for a couple of days. I want to iterate quickly on the application I am working on and make sure the pipeline I am building is error free. In Spark we can use sc.parallelize, and when we apply an action we get back a value we can inspect.
Similarly, when I was reading about Apache Beam, I found that we can create a PCollection and work with it using the following syntax:
with beam.Pipeline() as pipeline:
    lines = pipeline | beam.Create(["this is test", "this is another test"])
    word_count = (lines
                  | "Word" >> beam.ParDo(lambda line: line.split(" "))
                  | "Pair of One" >> beam.Map(lambda w: (w, 1))
                  | "Group" >> beam.GroupByKey()
                  | "Count" >> beam.Map(lambda (w, o): (w, sum(o))))
    result = pipeline.run()
I actually want to print the result to the console, but I couldn't find any documentation around it.
Is there a way to print the result to the console instead of saving it to a file each time?
You don't need the temp list. In Python 2.7 the following should be sufficient:
def print_row(row):
    print row

(pipeline
 | ...
 | "print" >> beam.Map(print_row)
)
result = pipeline.run()
result.wait_until_finish()
In Python 3.x, print is a function, so the following is sufficient:
(pipeline
 | ...
 | "print" >> beam.Map(print)
)
result = pipeline.run()
result.wait_until_finish()
After exploring further and understanding how to write test cases for my application, I figured out a way to print the result to the console. Please note that right now I am running everything on a single-node machine, trying to understand the functionality Apache Beam provides and how I can adopt it without compromising industry best practices.
So here is my solution. At the very last stage of our pipeline we can introduce a map function that will either print the result to the console or accumulate the result in a variable; later we can print the variable to see the value.
import apache_beam as beam

# let's have a sample string
data = ["this is sample data", "this is yet another sample data"]

# create a pipeline
pipeline = beam.Pipeline()
counts = (pipeline | "create" >> beam.Create(data)
                   | "split" >> beam.ParDo(lambda row: row.split(" "))
                   | "pair" >> beam.Map(lambda w: (w, 1))
                   | "group" >> beam.CombinePerKey(sum))

# let's collect our result with a map transformation into an output array
output = []
def collect(row):
    output.append(row)
    return True

counts | "print" >> beam.Map(collect)

# Run the pipeline
result = pipeline.run()

# let's wait until the result is available
result.wait_until_finish()

# print the output
print output
Maybe logging info instead of print? Note that Python's root logger only shows WARNING and above by default, so INFO messages need the level set (e.g. logging.getLogger().setLevel(logging.INFO)):
import logging

def _logging(elem):
    logging.info(elem)
    return elem

P | "logging info" >> beam.Map(_logging)
Here is an example that follows the one from PyCharm Edu:
import apache_beam as beam

class LogElements(beam.PTransform):
    class _LoggingFn(beam.DoFn):
        def __init__(self, prefix=''):
            super(LogElements._LoggingFn, self).__init__()
            self.prefix = prefix

        def process(self, element, **kwargs):
            print self.prefix + str(element)
            yield element

    def __init__(self, label=None, prefix=''):
        super(LogElements, self).__init__(label)
        self.prefix = prefix

    def expand(self, input):
        return input | beam.ParDo(self._LoggingFn(self.prefix))

class MultiplyByTenDoFn(beam.DoFn):
    def process(self, element):
        yield element * 10

p = beam.Pipeline()

(p | beam.Create([1, 2, 3, 4, 5])
   | beam.ParDo(MultiplyByTenDoFn())
   | LogElements())

p.run()
Output
10
20
30
40
50
Out[10]: <apache_beam.runners.portability.fn_api_runner.RunnerResult at 0x7ff41418a210>
I know it isn't what you asked for, but why don't you store it in a text file? It's generally better than printing via stdout, and it isn't volatile. One way to do that is sketched below.
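A minimal sketch of that approach using beam.io.WriteToText (the /tmp output path and suffix are just examples):
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | beam.Create(["this is test", "this is another test"])
     | beam.FlatMap(lambda line: line.split(" "))  # one word per element
     | beam.Map(lambda w: (w, 1))
     | beam.CombinePerKey(sum)
     | beam.Map(repr)  # WriteToText expects strings
     | beam.io.WriteToText("/tmp/word_counts", file_name_suffix=".txt"))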

How would I test that a PowerShell function properly streams input from the pipeline?

I know how to write a function that streams input from the pipeline. I can reasonably tell by reading the source for a function if it will perform properly. However, is there any method for actually testing for the correct behavior?
I accept any definition of "testing"... be that some manual test that I can run or something more automated.
If you need an example, let's say I have a function that splits text into words.
PS> Get-Content ./warandpeace.txt | Split-Text
How would I check that it streams input from the pipeline and begins splitting immediately?
You can write a helper function, which shows you how pipeline items pass through it and get processed by the next command:
function Print-Pipeline {
    param($Name, [ConsoleColor]$Color)
    begin {
        $ColorParameter = if($PSBoundParameters.ContainsKey('Color')) {
            @{ ForegroundColor = $Color }
        } else {
            @{ }
        }
    }
    process {
        Write-Host "${Name}|Before|$_" @ColorParameter
        ,$_
        Write-Host "${Name}|After|$_" @ColorParameter
    }
}
Suppose you have some functions to test:
$Text = 'Some', 'Random', 'Text'
function CharSplit1 { $Input | % GetEnumerator }
filter CharSplit2 { $Input | % GetEnumerator }
And you can test them like this:
PS> $Text |
>>> Print-Pipeline Before` CharSplit1 |
>>> CharSplit1 |
>>> Print-Pipeline After` CharSplit1
Before CharSplit1|Before|Some
Before CharSplit1|After|Some
Before CharSplit1|Before|Random
Before CharSplit1|After|Random
Before CharSplit1|Before|Text
Before CharSplit1|After|Text
After CharSplit1|Before|S
S
After CharSplit1|After|S
After CharSplit1|Before|o
o
After CharSplit1|After|o
After CharSplit1|Before|m
m
After CharSplit1|After|m
After CharSplit1|Before|e
e
After CharSplit1|After|e
After CharSplit1|Before|R
R
After CharSplit1|After|R
After CharSplit1|Before|a
a
After CharSplit1|After|a
After CharSplit1|Before|n
n
After CharSplit1|After|n
After CharSplit1|Before|d
d
After CharSplit1|After|d
After CharSplit1|Before|o
o
After CharSplit1|After|o
After CharSplit1|Before|m
m
After CharSplit1|After|m
After CharSplit1|Before|T
T
After CharSplit1|After|T
After CharSplit1|Before|e
e
After CharSplit1|After|e
After CharSplit1|Before|x
x
After CharSplit1|After|x
After CharSplit1|Before|t
t
After CharSplit1|After|t
PS> $Text |
>>> Print-Pipeline Before` CharSplit2 |
>>> CharSplit2 |
>>> Print-Pipeline After` CharSplit2
Before CharSplit2|Before|Some
After CharSplit2|Before|S
S
After CharSplit2|After|S
After CharSplit2|Before|o
o
After CharSplit2|After|o
After CharSplit2|Before|m
m
After CharSplit2|After|m
After CharSplit2|Before|e
e
After CharSplit2|After|e
Before CharSplit2|After|Some
Before CharSplit2|Before|Random
After CharSplit2|Before|R
R
After CharSplit2|After|R
After CharSplit2|Before|a
a
After CharSplit2|After|a
After CharSplit2|Before|n
n
After CharSplit2|After|n
After CharSplit2|Before|d
d
After CharSplit2|After|d
After CharSplit2|Before|o
o
After CharSplit2|After|o
After CharSplit2|Before|m
m
After CharSplit2|After|m
Before CharSplit2|After|Random
Before CharSplit2|Before|Text
After CharSplit2|Before|T
T
After CharSplit2|After|T
After CharSplit2|Before|e
e
After CharSplit2|After|e
After CharSplit2|Before|x
x
After CharSplit2|After|x
After CharSplit2|Before|t
t
After CharSplit2|After|t
Before CharSplit2|After|Text
Add some Write-Verbose statements to your Split-Text function, and then call it with the -Verbose parameter. You should see the verbose output in real time, as each pipeline item arrives; a sketch of what that can look like follows.
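A minimal sketch (this Split-Text body is a stand-in implementation, not the one from the question):
function Split-Text {
    [CmdletBinding()]
    param(
        [Parameter(ValueFromPipeline)]
        [string]$Line
    )
    process {
        # With proper streaming this fires once per line, as the line arrives
        Write-Verbose "Splitting: $Line"
        $Line -split '\s+'
    }
}

Get-Content ./warandpeace.txt | Split-Text -Verbose
If the verbose messages are interleaved with the words, the function streams; if they all appear before any output, it is collecting the whole input first.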
Ah, I've got a very simple solution. The concept is to insert your own step into the pipeline with obvious side-effects before the function that you're testing. For example...
PS> 1..3 | %{ Write-Host $_; $_ } | function-under-test
If your function-under-test is "bad" (it collects all pipeline input before emitting anything), you will see all of the output from 1..3 twice, in two separate blocks, like this:
1
2
3
1
2
3
If the function-under-test is processing items lazily from the pipeline, you'll see the output interleaved.
1
1
2
2
3
3

How to extract 2 strings on one line and compare them using PowerShell?

I have lots of txt files, and each file has contents like the sample below. I want to compare the Run value with the Passed value, e.g. 00001A22 vs 00001A22.
I could do this in Excel, but I have more than 200 files, so it's a big job.
I tried to use PowerShell to build a script that extracts the 2 strings and compares them. I tried Select-String but failed.
Is there a method that works?
DRAM Test Run: 00001A22 Passed: 00001A22
Ethernet Test Run: 000011E2 Passed: 000011E2
DRAM Test Run: 00001BA7 Passed: 00001BA7
Ethernet Test Run: 000012EC Passed: 000012EC
DRAM Test Run: 00001CA3 Passed: 00001CA3
Ethernet Test Run: 00001399 Passed: 00001399
I would use a regex pattern with named capture groups for this:
# Define pattern that captures the hex strings
$CaptureTestPattern = 'Run: (?<run>[0-9A-F]{8}).*Passed: (?<passed>[0-9A-F]{8})'

# Go through each line in the file
Get-Content tests.txt | ForEach-Object {
    # Check if the line contains both a "run" and a "passed" result
    if(($m = [regex]::Match($_, $CaptureTestPattern)).Success)
    {
        # Compare them
        if($m.Groups['run'].Value -eq $m.Groups['passed'].Value)
        {
            # they are the same!
        }
        else
        {
            # they are NOT the same!
        }
    }
}
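Since there are more than 200 files, here is a hedged sketch extending the same pattern over a whole directory (the C:\logs path is just an example):
$CaptureTestPattern = 'Run: (?<run>[0-9A-F]{8}).*Passed: (?<passed>[0-9A-F]{8})'

Get-ChildItem C:\logs\*.txt | ForEach-Object {
    $file = $_
    $lineNumber = 0
    Get-Content $file | ForEach-Object {
        $lineNumber++
        $m = [regex]::Match($_, $CaptureTestPattern)
        # Report only the mismatches
        if($m.Success -and $m.Groups['run'].Value -ne $m.Groups['passed'].Value)
        {
            "$($file.Name): line $lineNumber Run=$($m.Groups['run'].Value) Passed=$($m.Groups['passed'].Value)"
        }
    }
}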
Assuming the file test.txt contains the following:
DRAM Test Run: 00001A22 Passed: 00001A22
Ethernet Test Run: 000011E2 Passed: 000011E2
DRAM Test Run: 00001BA7 Passed: 00001BA7
Ethernet Test Run: 000012EC Passed: 000012ED
DRAM Test Run: 00001CA3 Passed: 00001CA3
Ethernet Test Run: 00001399 Passed: 00001399
You can use:
Get-Content D:\Temp\test.txt | % {$n=0}{$_ -match '^.*Run: (.*) Passed: (.*)$' | Out-Null; if ($Matches[1] -ne $Matches[2]){"line $n KO"}; $n++}
It returns:
line 3 KO