I'm making a custom task to run Perl unit tests with yath. The output of that command contains details about failed tests, which I would like to filter and display as problems.
I've written the following matcher for my output:
"problemMatcher": {
"owner": "yath",
"fileLocation": [ "relative", "${workspaceFolder}" ],
"severity": "error",
"pattern": [
{
"regexp": "\\[\\s*FAIL\\s*\\]\\s*job\\s*\\d+\\s*\\+?\\s*(.+)",
"message": 1,
},{
"regexp": "\\(\\s*DIAG\\s*\\)\\s*job\\s*\\d+\\s*\\+?\\s*at (.+) line (\\d+)\\.",
"file": 1,
"line": 2
}
]
}
This is supposed to match two different lines in the following output, shown here as plain text for copying:
** Defaulting to the 'test' command **
( LAUNCH ) job 1 t/foo.t
( NOTE ) job 1 Seeded srand with seed '20220414' from local date.
[ PASS ] job 1 + passing test
[ FAIL ] job 1 + failing test
( DIAG ) job 1 Failed test 'failing test'
( DIAG ) job 1 at t/foo.t line 57.
[ PLAN ] job 1 Expected assertions: 2
( FAILED ) job 1 t/foo.t
( TIME ) job 1 Startup: 0.30841s | Events: 0.01992s | Cleanup: 0.00417s | Total: 0.33250s
< REASON > job 1 Test script returned error (Err: 1)
< REASON > job 1 Assertion failures were encountered (Count: 1)
The following jobs failed:
+--------------------------------------+-----------------------------------+
| Job ID | Test File |
+--------------------------------------+-----------------------------------+
| e7aee661-b49f-4b60-b815-f420d109457a | t/foo.t |
+--------------------------------------+-----------------------------------+
Yath Result Summary
-----------------------------------------------------------------------------------
Fail Count: 1
File Count: 1
Assertion Count: 2
Wall Time: 0.74 seconds
CPU Time: 0.76 seconds (usr: 0.20s | sys: 0.00s | cusr: 0.49s | csys: 0.07s)
CPU Usage: 103%
--> Result: FAILED <--
In the terminal, though, the output is actually prettified with colours, so I suspect there are ANSI escape sequences in it. I could pass a flag to yath to disable colours, but I would like the output to stay readable as well, so that isn't ideal.
Do I have to change my pattern to match the escape sequences (I could read the source of the program that prints them, but that's annoying), or are they in fact stripped out, meaning my pattern is wrong in some way I can't see?
Here's the first pattern as a regex101 match, and here's the second.
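If the escape sequences do reach the problem matcher, one option is to make the pattern tolerate them. Here is a sketch, assuming the colours are standard SGR sequences (ESC [ ... m) appearing before the tokens of interest; note that the trailing captures could still pick up reset sequences, so this is only a starting point:
"pattern": [
    {
        "regexp": "(?:\\u001b\\[[0-9;]*m)*\\[\\s*FAIL\\s*\\]\\s*job\\s*\\d+\\s*\\+?\\s*(.+)",
        "message": 1
    },
    {
        "regexp": "(?:\\u001b\\[[0-9;]*m)*\\(\\s*DIAG\\s*\\)\\s*job\\s*\\d+\\s*\\+?\\s*at (.+) line (\\d+)\\.",
        "file": 1,
        "line": 2
    }
]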
Here is my feature file - a.feature:
Scenario Outline: Some outline
    Given something
    When <thing> is moved to <position>
    Then something else

    Examples:
    | thing | position |
    | 1     | 1        |
and saved it in /tmp/a.feature.
Here is my pytest step file (/tmp/a.py):
from pytest_bdd import (
given,
scenario,
then,
when,
)
@scenario('./x.feature', 'Some outline')
def test_some_outline():
    """Some outline."""

@given('something')
def something():
    """something."""
    pass

@when('<thing> is moved to <position>')
def thing_is_moved_to_position(thing, position):
    assert isinstance(thing, int)
    assert isinstance(position, int)

@then('something else')
def something_else():
    """something else."""
    pass
When I run it:
$ pwd
/tmp
$ pytest ./a.py
............
............
E pytest_bdd.exceptions.StepDefinitionNotFoundError: Step definition is not found: When "1 is moved to 1". Line 3 in scenario "Some outline" in the feature "/tmp/x.feature"
/home/cyan/.local/lib/python3.10/site-packages/pytest_bdd/scenario.py:192: StepDefinitionNotFoundError
============= short test summary info =============
FAILED x.py::test_some_outline[1-1] - pytest_bdd.exceptions.StepDefinitionNotFoundError: Step definition is not found: When "1 is moved to 1". Line 3 in scenario "Some outli...
============ 1 failed in 0.09s ============
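For what it's worth, one plausible cause (an assumption on my part, not verified against this exact setup): recent pytest-bdd releases do not treat <thing>-style placeholders in plain string step names as parameters; the step has to be declared with a parser. A hypothetical fix using parsers.parse, with :d converting the example values to int (note, incidentally, that the code references ./x.feature while the feature file was saved as /tmp/a.feature):
from pytest_bdd import given, parsers, scenario, then, when

@scenario('./x.feature', 'Some outline')
def test_some_outline():
    """Some outline."""

@given('something')
def something():
    """something."""

# parsers.parse binds the values from the Examples table; :d casts them to int
@when(parsers.parse('{thing:d} is moved to {position:d}'))
def thing_is_moved_to_position(thing, position):
    assert isinstance(thing, int)
    assert isinstance(position, int)

@then('something else')
def something_else():
    """something else."""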
I have the following simple workflow:
workflow {
    Channel.fromPath(params.file_list)
        .splitText() { it.trim() }
        .set { file_list }

    data = GetFromHPSS(file_list)
    data_pairs = CoupleDETXToFile(data, file(params.detx_path))
    SingleDUTimeResFit(data_pairs)
}
Here, file_list is a list of paths on a tape-drive system. GetFromHPSS is the process that retrieves files from the tape system, and I need to limit its parallelism to a fairly low number.
Currently, I am using
executor {
    queueSize = 100
}
in the configuration file, but there are two problems:
it limits the overall maximum number of parallel jobs, while I could run thousands of SingleDUTimeResFit processes in parallel
it always waits until everything from GetFromHPSS has been processed before continuing with the subsequent processes
Here is an example:
N E X T F L O W ~ version 21.04.3
Launching `workflows/singledu_timeresfit.nf` [wise_galileo] - revision: 8084ac1482
executor > sge (502)
[13/ca3e8a] process > GetFromHPSS (426) [ 18%] 402 of 22840
[- ] process > CoupleDETXToFile [ 0%] 0 of 402
[- ] process > SingleDUTimeResFit -
Is there a way to limit GetFromHPSS to a specific number of parallel executions and let the remaining processes run with another queue-limit set?
EDIT: This is one of my best tries, I guess, but Nextflow does not accept the configuration:
process {
    executor {
        queueSize = 100
        submitRateLimit = "10sec"
    }
    withName: GetFromHPSS {
        executor.queueSize = 10
    }
}
With this process top-level configuration, I get:
N E X T F L O W ~ version 21.04.3
Launching `workflows/singledu_timeresfit.nf` [confident_pasteur] - revision: 8084ac1482
Unknown config attribute `process.withName:GetFromHPSS` -- check config file: /sps/km3net/users/tgal/dev/PhD/workflows/nextflow.config
I think what you're looking for here is the maxForks directive, which can be applied to just the 'GetFromHPSS' process without the need to change the executor's queueSize:
process 'GetFromHPSS' {
    maxForks 1

    """
    <your script here>
    """
}
You could even parameterize it, if you think it makes sense:
params.hpss_forks = 5

process 'GetFromHPSS' {
    maxForks params.hpss_forks

    """
    <your script here>
    """
}
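If you would rather keep this in the configuration file instead of the process definition, the same directive can be set per process with a withName selector (a sketch; note it is maxForks, not executor.queueSize, inside the selector):
process {
    withName: GetFromHPSS {
        maxForks = 10
    }
}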
I am following the tutorial in the documentation (https://snakemake.readthedocs.io/en/stable/tutorial/advanced.html) and have been stuck on the "Step 4: Rule parameters" exercise. I would like to access a float from my config file using a wildcard in my params directive.
I get the same error whenever I run snakemake -np on the command line:
InputFunctionException in line 46 of /mnt/c/Users/Matt/Desktop/snakemake-tutorial/Snakefile:
Error:
AttributeError: 'Wildcards' object has no attribute 'sample'
Wildcards:
Traceback:
File "/mnt/c/Users/Matt/Desktop/snakemake-tutorial/Snakefile", line 14, in get_bcftools_call_priors
This is my code so far:
import time

configfile: "config.yaml"

rule all:
    input:
        "plots/quals.svg"

def get_bwa_map_input_fastqs(wildcards):
    print(wildcards.__dict__, 1, time.time())  # I have this print as a check
    return config["samples"][wildcards.sample]

def get_bcftools_call_priors(wildcards):
    print(wildcards.__dict__, 2, time.time())  # I have this print as a check
    return config["prior_mutation_rates"][wildcards.sample]

rule bwa_map:
    input:
        "data/genome.fa",
        get_bwa_map_input_fastqs
        #lambda wildcards: config["samples"][wildcards.sample]
    output:
        "mapped_reads/{sample}.bam"
    params:
        rg=r"@RG\tID:{sample}\tSM:{sample}"
    threads: 2
    shell:
        "bwa mem -R '{params.rg}' -t {threads} {input} | samtools view -Sb - > {output}"

rule samtools_sort:
    input:
        "mapped_reads/{sample}.bam"
    output:
        "sorted_reads/{sample}.bam"
    shell:
        "samtools sort -T sorted_reads/{wildcards.sample} "
        "-O bam {input} > {output}"

rule samtools_index:
    input:
        "sorted_reads/{sample}.bam"
    output:
        "sorted_reads/{sample}.bam.bai"
    shell:
        "samtools index {input}"

rule bcftools_call:
    input:
        fa="data/genome.fa",
        bam=expand("sorted_reads/{sample}.bam", sample=config["samples"]),
        bai=expand("sorted_reads/{sample}.bam.bai", sample=config["samples"])
        #prior=get_bcftools_call_priors
    params:
        prior=get_bcftools_call_priors
    output:
        "calls/all.vcf"
    shell:
        "samtools mpileup -g -f {input.fa} {input.bam} | "
        "bcftools call -P {params.prior} -mv - > {output}"

rule plot_quals:
    input:
        "calls/all.vcf"
    output:
        "plots/quals.svg"
    script:
        "scripts/plot-quals.py"
and here is my config.yaml:
samples:
    A: data/samples/A.fastq
    #B: data/samples/B.fastq
    #C: data/samples/C.fastq

prior_mutation_rates:
    A: 1.0e-4
    #B: 1.0e-6
I don't understand why my input function call in bcftools_call says that the wildcards object is empty of attributes, yet an almost identical function call in bwa_map has the sample attribute that I want. From the documentation it seems like the wildcards would be propagated before anything is run, so why is it missing?
This is the full output of the command-line call snakemake -np:
{'_names': {'sample': (0, None)}, '_allowed_overrides': ['index', 'sort'], 'index': functools.partial(<function Namedlist._used_attribute at 0x7f91b1a58f70>, _name='index'), 'sort': functools.partial(<function Namedlist._used_attribute at 0x7f91b1a58f70>, _name='sort'), 'sample': 'A'} 1 1628877061.8831172
Job stats:
job count min threads max threads
-------------- ------- ------------- -------------
all 1 1 1
bcftools_call 1 1 1
bwa_map 1 1 1
plot_quals 1 1 1
samtools_index 1 1 1
samtools_sort 1 1 1
total 6 1 1
[Fri Aug 13 10:51:01 2021]
rule bwa_map:
input: data/genome.fa, data/samples/A.fastq
output: mapped_reads/A.bam
jobid: 4
wildcards: sample=A
resources: tmpdir=/tmp
bwa mem -R '@RG\tID:A\tSM:A' -t 1 data/genome.fa data/samples/A.fastq | samtools view -Sb - > mapped_reads/A.bam
[Fri Aug 13 10:51:01 2021]
rule samtools_sort:
input: mapped_reads/A.bam
output: sorted_reads/A.bam
jobid: 3
wildcards: sample=A
resources: tmpdir=/tmp
samtools sort -T sorted_reads/A -O bam mapped_reads/A.bam > sorted_reads/A.bam
[Fri Aug 13 10:51:01 2021]
rule samtools_index:
input: sorted_reads/A.bam
output: sorted_reads/A.bam.bai
jobid: 5
wildcards: sample=A
resources: tmpdir=/tmp
samtools index sorted_reads/A.bam
[Fri Aug 13 10:51:01 2021]
rule bcftools_call:
input: data/genome.fa, sorted_reads/A.bam, sorted_reads/A.bam.bai
output: calls/all.vcf
jobid: 2
resources: tmpdir=/tmp
{'_names': {}, '_allowed_overrides': ['index', 'sort'], 'index': functools.partial(<function Namedlist._used_attribute at 0x7f91b1a58f70>, _name='index'), 'sort': functools.partial(<function Namedlist._used_attribute at 0x7f91b1a58f70>, _name='sort')} 2 1628877061.927639
InputFunctionException in line 46 of /mnt/c/Users/Matt/Desktop/snakemake-tutorial/Snakefile:
Error:
AttributeError: 'Wildcards' object has no attribute 'sample'
Wildcards:
Traceback:
File "/mnt/c/Users/Matt/Desktop/snakemake-tutorial/Snakefile", line 14, in get_bcftools_call_priors
If anyone knows what is going wrong, I would really appreciate an explanation. Also, if there is a better way of getting information out of config.yaml into the different directives, I would gladly take those tips.
Edit:
I have searched around the internet quite a bit, but have yet to understand this issue.
Wildcards for each rule are based on that rule's output file(s). The rule bcftools_call has one output file (calls/all.vcf), which has no wildcards. Because of this, when get_bcftools_call_priors is called, it throws an exception when it tries to access the unset wildcards.sample attribute.
You should probably set a global prior_mutation_rate in your config file and then access that in the bcftools_call rule:
rule bcftools_call:
    ...
    params:
        prior=config["prior_mutation_rate"],
I am trying to run a Hive query using gcloud compute ssh via Scala.
First, here is what I tried:
scala> import sys.process._
scala> val results = Seq("hive", "-e", "show databases;").!!
asd
zxc
qwe
scala>
which is good. Now, I want to run the same hive command, but against a GCP cluster. I have gcloud set up on my VM, and from the command line I can easily do:
$ gcloud compute ssh --zone myZone myNode --internal-ip -- 'hive -e "show databases;"'
Updating project ssh metadata...⠶Updated [https://www.googleapis.com/compute/v1/projects/myProject].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
Warning: Permanently added 'compute.2746937995265952194' (RSA) to the list of known hosts.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 19 100 19 0 0 2982 0 --:--:-- --:--:-- --:--:-- 3166
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j2.properties Async: true
OK
asd
zxc
qwe
Now, I want to run the above using Scala. Here is what I tried:
scala> val results = Seq("gcloud", "compute", "ssh", "--zone", "myZone", "myNode", "--internal-ip", "--", "hive", "-e" ,"show databases").!!
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 19 100 19 0 0 3270 0 --:--:-- --:--:-- --:--:-- 3800
Pseudo-terminal will not be allocated because stdin is not a terminal.
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j2.properties Async: true
NoViableAltException(-1#[846:1: ddlStatement : ( createDatabaseStatement | switchDatabaseStatement | dropDatabaseStatement | createTableStatement | dropTableStatement | truncateTableStatement | alterStatement | descStatement | showStatement | metastoreCheck | createViewStatement | createMaterializedViewStatement | dropViewStatement | dropMaterializedViewStatement | createFunctionStatement | createMacroStatement | createIndexStatement | dropIndexStatement | dropFunctionStatement | reloadFunctionStatement | dropMacroStatement | analyzeStatement | lockStatement | unlockStatement | lockDatabase | unlockDatabase | createRoleStatement | dropRoleStatement | ( grantPrivileges )=> grantPrivileges | ( revokePrivileges )=> revokePrivileges | showGrants | showRoleGrants | showRolePrincipals | showRoles | grantRole | revokeRole | setRole | showCurrentRole | abortTransactionStatement );])
at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
at org.antlr.runtime.DFA.predict(DFA.java:144)
at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:3757)
at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:2382)
at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1333)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:208)
at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:77)
at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:70)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:468)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:787)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:244)
at org.apache.hadoop.util.RunJar.main(RunJar.java:158)
FAILED: ParseException line 1:4 cannot recognize input near 'show' '<EOF>' '<EOF>' in ddl statement
java.lang.RuntimeException: Nonzero exit value: 64
at scala.sys.package$.error(package.scala:27)
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.slurp(ProcessBuilderImpl.scala:132)
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.$bang$bang(ProcessBuilderImpl.scala:102)
... 50 elided
scala>
Why am I getting this error? I also tried:
scala> val results = Seq("gcloud", "compute", "ssh", "--zone", "myZone", "myNode", "--internal-ip", "--", "hive", "-e" ,"show databases;").!!
but got the same error. Then I tried:
scala> val results = Seq("gcloud", "compute", "ssh", "--zone", "myZone", "myNode", "--internal-ip", "--", "'hive -e \"show databases;\"'").!!
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 19 100 19 0 0 3245 0 --:--:-- --:--:-- --:--:-- 3800
Pseudo-terminal will not be allocated because stdin is not a terminal.
bash: hive -e "show databases;": command not found
java.lang.RuntimeException: Nonzero exit value: 127
at scala.sys.package$.error(package.scala:27)
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.slurp(ProcessBuilderImpl.scala:132)
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.$bang$bang(ProcessBuilderImpl.scala:102)
... 50 elided
How can I run gcloud compute ssh properly using Scala?
You don't need the single quotes in your last example. You're trying to pass the string:
hive -e "show databases;"
For fun, I would use triple quotes in Scala to avoid the backslashes:
"""hive -e "show databases;""""
The single quotes in your working command line are processed by bash; they never reach gcloud.
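Putting that together, something along these lines should work (a sketch, not tested against a real cluster):
scala> val results = Seq("gcloud", "compute", "ssh", "--zone", "myZone", "myNode", "--internal-ip", "--", """hive -e "show databases;"""").!!
Each element of the Seq reaches gcloud as one argument, so the whole remote command travels as the single argument after --, and the remote shell then parses the embedded double quotes.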
This is what worked in bash:
$ gcloud compute ssh --zone myZone myNode --internal-ip -- 'hive -e "show databases;"'
scala.sys.process gained some basic command-line parsing at some point. The file name in the examples below contains a space, so it must be quoted, and amazingly, the parser understands shell-style quotes:
$ scala
Welcome to Scala 2.13.0 (OpenJDK 64-Bit Server VM, Java 11.0.3).
Type in expressions for evaluation. Or try :help.
scala> import scala.sys.process._
import scala.sys.process._
scala> "ls -l /tmp/skypeforlinux Crashes".!!
ls: cannot access '/tmp/skypeforlinux': No such file or directory
ls: cannot access 'Crashes': No such file or directory
java.lang.RuntimeException: Nonzero exit value: 2
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.slurp(ProcessBuilderImpl.scala:155)
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.$bang$bang(ProcessBuilderImpl.scala:112)
... 28 elided
scala> """ls -l "/tmp/skypeforlinux Crashes"""".!!
res1: String =
"total 0
"
scala> """ls -l '/tmp/skypeforlinux Crashes'""".!!
res2: String =
"total 0
"
scala> """ls -l /tmp/skypeforlin'ux Cr'ashes""".!!
res3: String =
"total 0
"
scala> """echo 'hive -e "show databases;"'""".!!
res4: String =
"hive -e "show databases;"
"
The double quotes around "my house" are part of the file name:
scala> """ls '/tmp/"my house"'""".!!
res5: String =
"/tmp/"my house"
"
I guess that code is where I learned how shell-style quotes work, though I never get a chance to use that knowledge. Except for this answer, so thanks for the opportunity.
I need some help searching for a string between repeating tags.
I have a text file with the following format repeating many times within the file:
========== File Name: fixed_am_7bitLI_HE10.txt Test Case: Test_Case_1 START ==========
Block 1
TC ID OK
Block 2
Input section OK
data section OK
Block 3
Input section OK
data section OK
Block 4
Input section OK
data section OK
========== File Name: fixed_am_7bitLI_HE10.txt Test Case: Test_Case_2 START ========
Block 1
TC ID OK
Block 2
Input section OK
line mismatch:
output line: "Buffer allocated from pool: MIF_CTRL_POOL, buffer_id: 1, size (words): 4"
reference line: "pdcp_pdu_delete_count = 0, reset_cip_rdy = 1"
line mismatch:
output line: "pdcp_pdu_delete_count = 0, reset_cip_rdy = 1"
reference line: "-- MIF CTRL output: --------------------------------------------------------------------------------"
line mismatch:
output line: "-- MIF CTRL output: --------------------------------------------------------------------------------"
reference line: "mif_ctrl_rlc_am_um_reset_reestablish_ind_t.rlc_reset_reestablish_ind = 3 (0x0003)"
line mismatch:
output line: "mif_ctrl_rlc_am_um_reset_reestablish_ind_t.rlc_reset_reestablish_ind = 3 (0x0003)"
reference line: "Buffer released from pool: MIF_CTRL_POOL, buffer_id 0, size (words) 6 (used 5)"
line count mismatch:
last output line: "mif_ctrl_rlc_am_um_reset_reestablish_ind_t.rlc_reset_reestablish_ind = 3 (0x0003)"
last reference line: "Buffer released from pool: MIF_CTRL_POOL, buffer_id 0, size (words) 6 (used 5)"
data section DIFFERS
Block 3
Input section OK
data section OK
Block 4
Input section OK
data section OK
========== File Name: fixed_am_7bitLI_HE10.txt Test Case: Test_Case_3 START ========
Block 1
TC ID OK
Block 2
Input section OK
data section OK
Block 3
Input section OK
data section OK
Block 4
Input section OK
data section OK
I need to find whether anything other than 'OK' exists between the START tags, and if so, mark that particular test case as failed.
For example, if I find anything other than OK between Test Case: Test_Case_1 START and Test Case: Test_Case_2 START, I have to mark Test Case: Test_Case_1 as failed.
UPDATED
Expected output, in roughly this format:
File Name: fixed_am_7bitLI_HE10.txt Test Case: Test_Case_1 Status: PASS
(if the string 'DIFFERS' does not appear between the == tags)
File Name: fixed_am_7bitLI_HE10.txt Test Case: Test_Case_2 Status: FAILED
(if the string 'DIFFERS' appears between the == tags)
UPDATE 2
If a test case fails, also print the block that differs:
File Name: fixed_am_7bitLI_HE10.txt Test Case: Test_Case_2 Status: FAILED
Section of block differing is:
Block 2
Input section OK
line mismatch:
output line: "Buffer allocated from pool: MIF_CTRL_POOL, buffer_id: 1, size (words): 4"
reference line: "pdcp_pdu_delete_count = 0, reset_cip_rdy = 1"
line mismatch:
output line: "pdcp_pdu_delete_count = 0, reset_cip_rdy = 1"
reference line: "-- MIF CTRL output: --------------------------------------------------------------------------------"
line mismatch:
output line: "-- MIF CTRL output: --------------------------------------------------------------------------------"
reference line: "mif_ctrl_rlc_am_um_reset_reestablish_ind_t.rlc_reset_reestablish_ind = 3 (0x0003)"
line mismatch:
output line: "mif_ctrl_rlc_am_um_reset_reestablish_ind_t.rlc_reset_reestablish_ind = 3 (0x0003)"
reference line: "Buffer released from pool: MIF_CTRL_POOL, buffer_id 0, size (words) 6 (used 5)"
line count mismatch:
last output line: "mif_ctrl_rlc_am_um_reset_reestablish_ind_t.rlc_reset_reestablish_ind = 3 (0x0003)"
last reference line: "Buffer released from pool: MIF_CTRL_POOL, buffer_id 0, size (words) 6 (used 5)"
data section DIFFERS
Wild guess:
while(<>) {print '**FAIL**' unless /TC ID OK/; print $_; }
But really, you should specify your requirements.
Maybe something quirky along the lines of the following:
#!/usr/bin/env perl
use strict;
use warnings;

my ($failed, $file, $test);
while (<>)
{
    chomp;
    next if /^$/;
    if (/^=/)
    {
        print "$file $test Status: $failed\n" if $failed;
        ($file, $test) = ($_ =~ /(File Name:\s+\S+).+\b(Test Case: Test_Case_\d+)\b/);
        $failed = 'PASSED';
        next;
    }
    $failed = 'FAILED' if /\bDIFFERS\b/;
}
print "$file $test Status: $failed\n";
Data
$ cat testdata
========== File Name: fixed_am_7bitLI_HE10.txt Test Case: Test_Case_1 START ==========
Block 1 TC ID OK
Block 2 Input section OK data section OK
Block 3 Input section OK data section OK
Block 4 Input section OK data section OK
========== File Name: fixed_am_7bitLI_HE10.txt Test Case: Test_Case_2 START ========
Block 1 TC ID OK
Block 2 Input section OK data section DIFFERS
Block 3 Input section OK data section OK
Block 4 Input section OK data section OK
Invocation
$ ./t.pl < testdata
File Name: fixed_am_7bitLI_HE10.txt Test Case: Test_Case_1 Status: PASSED
File Name: fixed_am_7bitLI_HE10.txt Test Case: Test_Case_2 Status: FAILED
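The question's UPDATE 2 also wants the differing block printed. A possible extension of the same approach (an untested sketch), collecting lines under each Block heading and reporting only the blocks that contain DIFFERS:
#!/usr/bin/env perl
use strict;
use warnings;

my ($header, $cur, %blocks);

sub report
{
    # a block fails if any of its lines contains DIFFERS
    my @bad = grep { grep { /\bDIFFERS\b/ } @{ $blocks{$_} } } sort keys %blocks;
    printf "%s Status: %s\n", $header, @bad ? 'FAILED' : 'PASS';
    for my $block (@bad)
    {
        print "Section of block differing is:\n";
        print "$_\n" for @{ $blocks{$block} };
    }
}

while (<>)
{
    chomp;
    next if /^\s*$/;
    if (/^=/)
    {
        report() if defined $header;
        ($header) = /(File Name:\s+\S+.+\bTest Case: Test_Case_\d+)\b/;
        %blocks = ();
        undef $cur;
        next;
    }
    $cur = $_ if /^Block\s+\d+/;
    push @{ $blocks{$cur} }, $_ if defined $cur;
}
report() if defined $header;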