How to load multiple modules implementing the same behaviour - import

I do not understand how one is supposed to use multiple modules that each implement the same behaviour, since I get this error at compile time:
Function is already imported from
In my case I have two modules implementing the gen_event behaviour, and I am trying to import them into a third module.
I get the error message whenever I try to compile this code:
-module(mgr).
-import(h1, [init/1]). % implements gen_event
-import(h2, [init/1]). % implements gen_event

You can't do that. Import is a simple trick to avoid writing the fully qualified name of a function. It does nothing but tell the compiler: when you see init(P) in this module, replace it with h1:init(P).
Thus it is not possible to import several functions with the same name/arity.
For short names, I do not see any benefit in using import.
If you are using module:function with long names and you want to shorten the lines in your code, it is possible to use macros instead, and there is no such limitation (but also little chance that the function names are the same :o):
-define(Func1(Var1,...,VarN), module1:func(Var1,...,VarN)).
-define(Func2(Var1,...,VarN), module2:func(Var1,...,VarN)).
...
?Func1(A1,...,AN);
...
?Func2(B1,...,BN);
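Applied to the question, a minimal sketch could look like the following (h1 and h2 are the handler modules from the question; the macro names and the start/2 function are made up for illustration):
-module(mgr).
-export([start/2]).

%% Shorthand macros instead of -import: each one expands to a fully
%% qualified call, so the identical name/arity is not a problem.
-define(H1_INIT(Args), h1:init(Args)).
-define(H2_INIT(Args), h2:init(Args)).

start(Args1, Args2) ->
    {?H1_INIT(Args1), ?H2_INIT(Args2)}.
Note that in practice gen_event handlers are usually installed with gen_event:add_handler/3 rather than by calling init/1 directly; the macros here only illustrate the expansion trick.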
Edit
The next example illustrates how it works. First, I create the module mod1 as follows:
-module(mod1).
-export([test/1]).

test(P) ->
    case P of
        1 -> ok;
        2 -> mod2:test()
    end.
and I test it in the shell:
1> c(mod1).
{ok,mod1}
2> mod1:test(1).
ok
3> mod1:test(2).
** exception error: undefined function mod2:test/0
4> % this call failed because mod2 was not defined.
4> % let's define it and compile.
mod2 is created as:
-module(mod2).
-export([test/0]).

test() ->
    io:format("now it works~n").
Continue in the shell:
4> c(mod2).
{ok,mod2}
5> mod1:test(1).
ok
6> mod1:test(2).
now it works
ok
7>
As you can see, it is not necessary to modify mod1, but only to create and compile mod2 (note that it would be the same if mod2 already existed but the function test/0 was not exported).
If you want to verify that your code does not call undefined functions, you can use external tools. As I am using rebar3 to manage my projects, I use the command rebar3 xref to perform this check. Note that calling an undefined function is only a warning; it is meaningful in the context of application upgrading. This verification is not bullet-proof: it is done at build time, so it does not guarantee that the modules you need will be present, with the right version, on a production system. That opens up a lot more interesting questions about versioning, code loading...
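For illustration, a minimal rebar.config entry enabling just that check might look like this (rebar3 reads the xref_checks list from rebar.config; undefined_function_calls is the check relevant here):
%% rebar.config (sketch) - then run: rebar3 xref
{xref_checks, [undefined_function_calls]}.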

Related

Pytest-bdd - Fixture 'self' not found

I am using pytest-bdd.
Here is my feature file:
# recon_test.feature
Feature: This is used to run recon
  Scenario: Run Recon
Test File
```python
# recon_test.py
from pytest_bdd import scenario

class Recon_Tests():
    @scenario('recon_test.feature', 'Run Recon')
    def test_run_recon(self):
        # do something
        pass
```
When I run this using the command pytest, I get the error **fixture 'self' not found.**
Maybe because, due to the @scenario decorator, it treats this function as a fixture and expects **'self'** to be another fixture.
I want to use '@scenario' on my test functions inside test classes. Is there any way?
Also, I have found a workaround for this: I have created a fixture
```python
import pytest

@pytest.fixture
def self():
    pass
```
to avoid this, and the error is gone.
But it gives another error saying that 'Recon_Tests' does not have an attribute config,
as pytest-bdd tries to read the fixture's config object for pre-test hooks.
Please suggest a solution.
This is because pytest has no way of knowing whether self refers to the class instance or to a fixture.
This is fixed when you inherit your class from unittest.TestCase.
Meaning instead of class Recon_Tests() you specify
class ReconTests(unittest.TestCase).
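For illustration, a minimal sketch of that change (the feature file and scenario name are taken from the question; the step definitions are assumed to live elsewhere, and behaviour may vary between pytest-bdd versions):
# recon_test.py - sketch of the class-based test inheriting unittest.TestCase
import unittest

from pytest_bdd import scenario


class ReconTests(unittest.TestCase):
    @scenario('recon_test.feature', 'Run Recon')
    def test_run_recon(self):
        # do something
        pass
With unittest.TestCase as the base class, pytest treats self as the test instance instead of trying to resolve it as a fixture.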

D receives location in args?

I'm pretty new to looking at D (like...yesterday, after looking for Kotlin benchmarks...) and currently trying to decide if it's a language I want to cope with.
I'm trying to pass some arguments from the command line, and I'm a little surprised. Let's say I pass "-Foo -Bar".
My program is quite simple:
import std.stdio;

void main(string[] args) {
    foreach (arg; args) {
        writeln(arg);
    }
}
Coming from Java, I expected it to print
-Foo
-Bar
But my D program seems to receive its location as the first argument?
The output is:
/home/(username)/Java_Projects/HelloD/hellod
-Foo
-Bar
I tried searching for this, but all Google hits refer to Java's -D switch...
So, is this intended behaviour? If yes, does anyone know why?
That's normal in D, inherited from C and C++. The first argument is the name of the program, so you can use it to decide how the program should behave when one binary serves several purposes.
The busybox unix toolset https://busybox.net/ uses this (well, at least it used to; I'm not sure if it has changed), so one program, busybox, can be called as various unix commands like ls or cp.
Using args[0], it can tell which one it was called as, even though they all point to the same binary program, and respond accordingly.
TIP: if you're not interested in this, you can loop over just your own arguments with foreach(arg; args[1 .. $]) {}
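For example, a small sketch that skips the program name (this mirrors the program from the question):
import std.stdio;

void main(string[] args) {
    // args[0] is the path the program was invoked as; slice it off
    // to iterate only over the arguments passed on the command line
    foreach (arg; args[1 .. $]) {
        writeln(arg);
    }
}
With this version, passing -Foo -Bar prints only those two arguments.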

The difference between scala script and application

What is the difference between a Scala script and a Scala application? Please provide an example.
The book I am reading says that a script must always end in a result expression, whereas an application ends in a definition. Unfortunately, no clear example is shown.
Please help clarify this for me.
I think that what the author means is that a regular Scala file needs to define a class or an object in order to be useful; you can't use top-level expressions (because the entry points to a compiled file are pre-defined). For example:
println("foo")
object Bar {
// Some code
}
The println statement is invalid at the top level of a .scala file, because the only logical interpretation would be to run it at compile time, which doesn't really make sense.
Scala scripts, in contrast, can contain expressions at the top level, because those are executed when the script is run, which makes sense again. If a Scala script file only contained definitions, on the other hand, it would be useless as well, because the script wouldn't know what to do with them. If you use the definitions in some way, however, that's fine again, e.g.:
object Foo {
  def bar = "test"
}

println(Foo.bar)
The latter is valid as a Scala script, because the last statement is an expression using the previous definition, not a definition itself.
Comparison
Features of scripts:
Like applications, scripts get compiled before running. Actually, the compiler translates scripts to applications before compiling, as shown below.
No need to run the compiler yourself - scala does it for you when you run your script.
The feel is very similar to scripting languages like bash, python, or ruby - you directly see the results of your edits and get a very quick debug cycle.
You don't need to provide a main method, as the compiler will add one for you.
Scala scripts tend to be useful for smaller tasks that can be implemented in a single file.
Scala applications, on the other hand, are much better suited when your projects start to grow more complex. They allow you to split tasks into different files and namespaces, which is important for maintaining clarity.
Example
If you write the following script:
#!/usr/bin/env scala
println("foo")
The Scala 2.11.1 compiler will pretend (source on GitHub) that you had written:
object Main {
  def main(args: Array[String]): Unit =
    new AnyRef {
      println("foo")
    }
}
Well, I always thought this is a Scala script:
$ cat script
#!/usr/bin/scala
!#
println("Hello, World!")
And run it simply with:
$ ./script
An application, on the other hand, has to be compiled to .class files and executed explicitly using the Java runtime.
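For contrast, a minimal sketch of the application form (the object and file names are arbitrary):
// HelloApp.scala - an application: only definitions at the top level
object HelloApp {
  def main(args: Array[String]): Unit =
    println("Hello, World!")
}
Compile and run it explicitly:
$ scalac HelloApp.scala
$ scala HelloApp
Hello, World!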

Calling methods from task in Cakefile

I'm setting up a Cakefile that will compile and minify my CoffeeScript and minify my Vanilla libs.
I created different tasks for each case (whether it was a coffee file or not) but I want to combine them into one task.
The problem I'm having is calling a method from the task; I can call a method with no problem under some circumstances, but otherwise I receive
TypeError: undefined is not a function
The object I'm working on looks like
source =
  libs: [
    'lib/jquery-1.7.1.min.js'
    'lib/backbone.js'
    'lib/underscore.js'
  ]
  coffees: [
    'app/800cart.coffee'
    'app/models/coffee/cart.coffee'
    'app/models/coffee/contact.coffee'
  ]
And I want to do this, but I get the error:
task 'build', 'Concat, compile, and minify files', ->
  for fileType, files of source
    concatinate files
  concatinate = (files) ->
    console.log 'concatinating'
The part that really confuses me is that if I call the method with a condition, it runs fine:
task 'build', 'Concat, compile, and minify files', ->
  for fileType, files of source
    concatinate files if fileType is 'coffees'
  concatinate = (files) ->
    console.log 'concatinating'
What am I doing wrong here?
The problem is that you're trying to call concatinate before you define it with the line concatinate =. Just move the definition up, or better yet, move it outside of the task definition.
You're probably used to JavaScript's function concatinate syntax, which automatically hoists the function to the top of the scope. CoffeeScript compiles to the concatinate = function syntax instead, mainly because the function concatinate syntax behaves inconsistently across different JS runtimes (particularly IE). So CoffeeScript functions simply obey ordinary variable assignment rules.
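A minimal sketch of that reordering, using the source object from the question (the log message is just a placeholder):
# Define the helper first, outside the task, so it is assigned
# before any task body runs.
concatinate = (files) ->
  console.log 'concatinating', files.length, 'files'

task 'build', 'Concat, compile, and minify files', ->
  for fileType, files of source
    concatinate files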

GtkAda simple chat error

I'm writing a simple chat program in Ada, and I'm having a problem with the chat window simulation - on button click it reads text from an entry and puts it into a text_view. Here is the code I've written, and here is the compile output:
gnatmake client `gtkada-config`
gcc -c -I/usr/include/gtkada client_pkg.adb
client_pkg.adb:14:19: no candidate interpretations match the actuals:
client_pkg.adb:14:37: expected private type "Gtk_Text_Iter" defined at gtk-text_iter.ads:48
client_pkg.adb:14:37: found type "Gtk_Text_View" defined at gtk-text_view.ads:58
client_pkg.adb:14:37: ==> in call to "Get_Buffer" at gtk-text_buffer.ads:568
client_pkg.adb:14:37: ==> in call to "Get_Buffer" at gtk-text_buffer.ads:407
client_pkg.adb:15:34: no candidate interpretations match the actuals:
client_pkg.adb:15:34: missing argument for parameter "Start" in call to "Get_Text" declared at gtk-text_buffer.ads:283
client_pkg.adb:15:34: missing argument for parameter "Start" in call to "Get_Text" declared at gtk-text_buffer.ads:270
gnatmake: "client_pkg.adb" compilation error
Can anyone tell me what the problem is? I have no idea why the procedure Get_Buffer expects a Gtk_Text_Iter, or why Get_Text is missing a Start parameter.
You have to call the correct procedures/functions.
In your example, you call Gtk.Text_Buffer.Get_Buffer, not the correct Gtk.Text_View.Get_Buffer. This is because you with and use Gtk.Text_Buffer, but don't use Gtk.Text_View. You should be careful about what you use. The same goes for Get_Text.
If you add use clauses for Gtk.Text_View and Gtk.GEntry, those errors should disappear.
But let me give you one piece of advice: use as few use clauses as possible. That way you always know which function is really being called.
TLDR: Add use Gtk.Text_View; use Gtk.GEntry; to the declaration part of the On_Btn_Send_Clicked procedure.