OMNeT++ multiple submodules - simulation

I am trying to simulate 100 submodules. Is there a simple way to define 100 submodules in a few lines?
submodules:
    sampler1: Sam1;
    sampler2: Sam1;
    sampler3: Sam1;
    sampler4: Sam1;
    sampler5: Sam1;
    sampler6: Sam1;
Without doing as above is there any other simple way?
Thanks

Yes, write:
submodules:
    sampler[100]: Sam1;
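For context, here is how such a submodule vector might sit in a complete network definition. This is only a sketch: `Sam1` is from the question, while the network name and the `numSamplers` parameter are assumptions added so the count can be changed from omnetpp.ini instead of being hard-coded.

```ned
// Sketch of a network using a submodule vector.
// Sam1 is the module type from the question; SamplerNet and
// numSamplers are hypothetical names for illustration.
network SamplerNet
{
    parameters:
        int numSamplers = default(100);
    submodules:
        sampler[numSamplers]: Sam1;
}
```

Individual instances can then be addressed with index patterns in omnetpp.ini, e.g. `**.sampler[0..49].someParam = 1` (parameter name hypothetical).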


Can I reference input events in the "post-if" condition in GitHub Action?

I have a GitHub Action that can be triggered via the GitHub UI (with workflow_dispatch), which has a boolean flag as one of the inputs.
What I want to do is this - set a condition in post-if that is evaluated to true only if the user-provided flag is set to true.
I tried to write it like this, but it didn't work:
post-if: "${{ github.event.inputs.cache }}"
I got the following error in CI:
Unrecognized named-value: 'github'. Located at position 1 within expression: github.event.inputs.cache
Is this doable? Or can event inputs simply not be used in post-if?
Yup, looks like this is doable.
I just had to drop the ${{ }} expression wrapper:
post-if: "github.event.inputs.cache"
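For context, `post-if` belongs in the action's metadata file (action.yml), not in a workflow file. A minimal sketch of where the fix goes, assuming a JavaScript action; the file names `main.js`/`post.js` and the action name are made up, while the `cache` input matches the question:

```yaml
# action.yml (sketch) - a JavaScript action with a conditional post step.
name: 'My cached action'
inputs:
  cache:
    description: 'Whether to run the post step'
    required: false
    default: 'false'
runs:
  using: 'node16'
  main: 'main.js'
  post: 'post.js'
  # Bare expression, without the ${{ }} wrapper:
  post-if: 'github.event.inputs.cache'
```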

Is it possible to define rule all again in Snakemake?

I process 8 (fastq) files in parallel with Snakemake. Each of these files is then demultiplexed, and I again use Snakemake to process the generated demultiplexed files in parallel.
My first attempt (which is working well) was to use 2 snakefiles.
a snakefile for processing 8 files in parallel
a snakefile for processing generated demultiplexed files in parallel
I would like to use only one snakefile.
Here is the solution with 2 snakefiles:
snakefile #1 for processing 8 files (wildcard {run}) in parallel
configfile: "config.yaml"

rule all:
    input:
        expand("{folder}{run}_R1.fastq.gz", run=config["fastqFiles"], folder=config["fastqFolderPath"]),
        expand('assembled/{run}/{run}.fastq', run=config["fastqFiles"]),
        expand('assembled/{run}/{run}.ali.fastq', run=config["fastqFiles"]),
        expand('assembled/{run}/{run}.ali.assigned.fastq', run=config["fastqFiles"]),
        expand('assembled/{run}/{run}.unidentified.fastq', run=config["fastqFiles"]),
        expand('log/remove_unaligned/{run}.log', run=config["fastqFiles"]),
        expand('log/illuminapairedend/{run}.log', run=config["fastqFiles"]),
        expand('log/assign_sequences/{run}.log', run=config["fastqFiles"]),
        expand('log/split_sequences/{run}.log', run=config["fastqFiles"])

include: "00-rules/assembly.smk"
include: "00-rules/demultiplex.smk"
snakefile #2 for processing generated demultiplexed files in parallel
SAMPLES, = glob_wildcards('samples/{sample}.fasta')

rule all:
    input:
        expand('samples/{sample}.uniq.fasta', sample=SAMPLES),
        expand('samples/{sample}.l.u.fasta', sample=SAMPLES),
        expand('samples/{sample}.r.l.u.fasta', sample=SAMPLES),
        expand('samples/{sample}.c.r.l.u.fasta', sample=SAMPLES),
        expand('log/dereplicate_samples/{sample}.log', sample=SAMPLES),
        expand('log/goodlength_samples/{sample}.log', sample=SAMPLES),
        expand('log/clean_pcrerr/{sample}.log', sample=SAMPLES),
        expand('log/rm_internal_samples/{sample}.log', sample=SAMPLES)

include: "00-rules/filtering.smk"
This solution is working well.
Is it possible to merge these 2 snakefiles into one this way ?
configfile: "config.yaml"

rule all:
    input:
        expand("{folder}{run}_R1.fastq.gz", run=config["fastqFiles"], folder=config["fastqFolderPath"]),
        expand('assembled/{run}/{run}.fastq', run=config["fastqFiles"]),
        expand('assembled/{run}/{run}.ali.fastq', run=config["fastqFiles"]),
        expand('assembled/{run}/{run}.ali.assigned.fastq', run=config["fastqFiles"]),
        expand('assembled/{run}/{run}.unidentified.fastq', run=config["fastqFiles"]),
        expand('log/remove_unaligned/{run}.log', run=config["fastqFiles"]),
        expand('log/illuminapairedend/{run}.log', run=config["fastqFiles"]),
        expand('log/assign_sequences/{run}.log', run=config["fastqFiles"]),
        expand('log/split_sequences/{run}.log', run=config["fastqFiles"])

include: "00-rules/assembly.smk"
include: "00-rules/demultiplex.smk"

SAMPLES, = glob_wildcards('samples/{sample}.fasta')

rule all:
    input:
        expand('samples/{sample}.uniq.fasta', sample=SAMPLES),
        expand('samples/{sample}.l.u.fasta', sample=SAMPLES),
        expand('samples/{sample}.r.l.u.fasta', sample=SAMPLES),
        expand('samples/{sample}.c.r.l.u.fasta', sample=SAMPLES),
        expand('log/dereplicate_samples/{sample}.log', sample=SAMPLES),
        expand('log/goodlength_samples/{sample}.log', sample=SAMPLES),
        expand('log/clean_pcrerr/{sample}.log', sample=SAMPLES),
        expand('log/rm_internal_samples/{sample}.log', sample=SAMPLES)

include: "00-rules/filtering.smk"
So I have to define rule all again,
and I got the following error message:
The name all is already used by another rule
Is there a way to have several rule all definitions, or is using multiple snakefiles the only possible solution?
I would like to use Snakemake in the most appropriate way possible.
You are not limited in the naming of the top-level rule. You may call it all, or you may rename it: the only thing that matters is the order of definition. By default, Snakemake takes the first rule as the target and then constructs the graph of dependencies.
Taking that into consideration, you have several options. First, you can merge both top-level rules of the two workflows into one; at the end of the day, your all rules do nothing except define the target files. Next, you could rename your rules to all1 and all2 (making it possible to run a single workflow if you specify it on the command line) and provide an all rule with the merged input. Finally, you could use subworkflows, but as long as your intention is to squash two scripts into one, that would be overkill.
One more hint that could help: you don't need to specify the pattern expand('filename{sample}',sample=config["fastqFiles"]) for each file if you define a distinct output for each run. For example:
rule sample:
    input:
        'samples/{sample}.uniq.fasta',
        'samples/{sample}.l.u.fasta',
        'samples/{sample}.r.l.u.fasta',
        'samples/{sample}.c.r.l.u.fasta',
        'log/dereplicate_samples/{sample}.log',
        'log/goodlength_samples/{sample}.log',
        'log/clean_pcrerr/{sample}.log',
        'log/rm_internal_samples/{sample}.log'
    output:
        temp('flag_sample_{sample}_complete')
In this case the all rule becomes trivial:
rule all:
    input: expand('flag_sample_{sample}_complete', sample=SAMPLES)
Or, as I advised before:
rule all:
    input:
        expand('flag_run_{run}_complete', run=config["fastqFiles"]),
        expand('flag_sample_{sample}_complete', sample=SAMPLES)

rule all1:
    input: expand('flag_run_{run}_complete', run=config["fastqFiles"])

rule all2:
    input: expand('flag_sample_{sample}_complete', sample=SAMPLES)

Fiddler Autoresponder: Regex replacement not working

I have a regex rule and an action that returns a file from a local cache. The rule captures what I want it to, but the problem is that $2 in the action is not handled, so Fiddler tries to return D:\path\$2 (and fails). What could be wrong?
Rule:
regex:(?insx).*(host1.com|host2.com)/folder1/folder2/(.*)\?rev=.*
Action:
D:\path\$2
Any help would be appreciated.
P.S. I'm using Fiddler v2.4.8.0
After losing a fair amount of hair over this, I got it working by naming the group used in the replacement, like this:
Rule:
regex:(?insx).*(host1.com|host2.com)/folder1/folder2/(?'mygroup'.*)\?rev=.*
Action:
D:\path\${mygroup}
When you're using group replacements like this, it's important to put ^ at the front of the Rule expression and $ at the end.
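The same named-group idea can be illustrated outside Fiddler with Python's re module. Fiddler itself uses .NET regex, where the group syntax is (?'name'...) and the replacement is ${name}; the Python equivalents are (?P<name>...) and m.group("name"). The URL below is made up for the example:

```python
import re

# .NET-style (?'mygroup'...) corresponds to (?P<mygroup>...) in Python.
# Anchored with ^ and $, as recommended for Autoresponder group replacements.
pattern = re.compile(
    r"^.*(host1\.com|host2\.com)/folder1/folder2/(?P<mygroup>.*)\?rev=.*$"
)

url = "http://host1.com/folder1/folder2/scripts/app.js?rev=123"
m = pattern.match(url)

# Map the captured relative path to a local cache file, the way the
# Autoresponder action D:\path\${mygroup} would.
local_path = "D:\\path\\" + m.group("mygroup").replace("/", "\\")
print(local_path)  # D:\path\scripts\app.js
```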

Testing mailers in RSpec

Is there no simple way to do the equivalent of response.should render_template(:foo) in a mailer spec? Here's what I want to do:
mail.should render_template(:welcome)
Is that so much to ask? Am I stuck in the dark ages of heredocs or manually reading fixtures in to match against?
Have you tried looking at email-spec? It doesn't have exactly that syntax, but it is used for testing various aspects of sending emails.
# IMPORTANT!
# must copy https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/helpers/next_instance_of.rb
it 'renders foo_mail' do
  allow_next_instance_of(described_class) do |mailer|
    allow(mailer).to receive(:render_to_body).and_wrap_original do |m, options|
      expect(options[:template]).to eq('foo_mail')
      m.call(options)
    end
  end

  subject.body.encoded # force the mail to render so the expectation above runs
end

mIRC script, sockread

I'm trying to make a script which will take data from a webpage and post it on irc.
I've managed to do it, but I'm not able to make it repeat the job multiple times.
This is what I tried:
alias mib {
  var %i = 0
  %bitka = $1
  while (%i <= 4) {
    $myb(%bitka)
    inc %i 1
  }
}
Alias "myb" works, it gets the data and posts it.
I tried to make it repeat what alias "myb" does 5 times, but it does it only once. Ideally I want it to keep posting that data until I turn it off, but I wanted to go with baby steps. Not successfully, though.
Help is appreciated. Thanks.
Value-returning functions in mIRC are called identifiers; they are prefixed with the $ sigil. A non-value-returning function is called a command, and its syntax is a bit different from that of identifiers. If alias "myb" is not supposed to return anything, as your code suggests, it should be called with command syntax. If your alias does return something, then calling it as in your code makes mIRC execute the returned value as if it were a command, which can result in undesired behavior.
alias mib {
  var %i = 0
  while (%i < 5) {
    myb $1-
    inc %i
  }
}
There is no recursion in mIRC; try using signals.
/timer 4 1 myb %bitka
This repeats it 4 times with a 1-second delay in between (the /timer arguments are repetitions first, then the interval in seconds).
I guess this is what you want. You can also check my website for another sockread script.
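Since the original goal was to keep posting the data until it is switched off, a named repeating timer is closer to that. A sketch, assuming "myb" has been rewritten in command form as the first answer suggests; the timer name and the 60-second interval are arbitrary choices:

```
; 0 repetitions means repeat indefinitely, every 60 seconds
/timermyb 0 60 myb %bitka

; stop it later with:
/timermyb off
```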