D receives location in args? - command-line

I'm pretty new to looking at D (like...yesterday, after looking for Kotlin benchmarks...) and currently trying to decide if it's a language I want to cope with.
I'm trying to pass some arguments from the command line and I'm a little surprised. Let's say I pass "-Foo -Bar".
My program is quite simple:
import std.stdio;

void main(string[] args) {
    foreach (arg; args) {
        writeln(arg);
    }
}
Coming from Java, I expected it to print
-Foo
-Bar
But my D program seems to receive its location as the first argument?
The output is:
/home/(username)/Java_Projects/HelloD/hellod
-Foo
-Bar
I tried searching for this, but all Google hits refer to Java's -D switch...
So, is this intended behaviour? If yes, does anyone know why?

That's normal in D, inherited from C and C++. The first argument is the name of the program so you can use it to determine which function you want in a multi-use program.
The busybox unix toolset https://busybox.net/ uses this (well, at least used to, I'm not sure if it has changed) so one program, busybox, can be called as various unix commands like ls or cp.
Using args[0], it can tell which one it was called as, though they all point to the same binary program, and respond accordingly.
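For illustration only (this sketch is not from the original reply), here is a minimal D example of that multi-call idea; the command names "hello" and "bye" are made up:

import std.stdio;
import std.path : baseName;

void main(string[] args) {
    // Dispatch on the name this binary was invoked (or symlinked) as.
    switch (baseName(args[0])) {
        case "hello":
            writeln("Hello!");
            break;
        case "bye":
            writeln("Goodbye!");
            break;
        default:
            writeln("Called as: ", args[0]);
            break;
    }
}

Symlinking the same binary as hello and bye and running each link would then give different behaviour from one executable.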
TIP: if you're not interested in this, you can loop just your args with foreach(arg; args[1 .. $]) {}

Related

Swift: macro for __attribute__((section))

This is kind of a weird and un-Swift-thonic question, so bear with me.
I want to do in Swift something like what I'm currently doing in Objective-C/C++, so I'll start by describing that.
I have some existing C++ code that defines a macro that, when used in an expression anywhere in the code, will insert an entry into a table in the binary at compile time. In other words, the user writes something like this:
#include "magic.h"
void foo(bool b) {
    if (b) {
        printf("%d\n", MAGIC(xyzzy));
    }
}
and thanks to the definition
#define MAGIC(Name) \
[]{ static int __attribute__((used, section("DATA,magical"))) Name; return Name; }()
what actually happens at compile time is that a static variable named xyzzy (modulo name-mangling) is created and allocated into the special magical section of my Mach-O binary, so that running nm -m foo.o to dump the symbols shows something a lot like this:
0000000000000098 (__TEXT,__eh_frame) non-external EH_frame0
0000000000000050 (__TEXT,__cstring) non-external L_.str
0000000000000000 (__TEXT,__text) external __Z3foob
00000000000000b0 (__TEXT,__eh_frame) external __Z3foob.eh
0000000000000040 (__TEXT,__text) non-external __ZZ3foobENK3$_0clEv
00000000000000d8 (__TEXT,__eh_frame) non-external __ZZ3foobENK3$_0clEv.eh
0000000000000054 (__DATA,magical) non-external [no dead strip] __ZZZ3foobENK3$_0clEvE5xyzzy
(undefined) external _printf
Through the magic of getsectbynamefromheader(), I can then load the symbol table for the magical section, scan through it, and find out (by demangling every symbol I find) that at some point in the user's code, he calls MAGIC(xyzzy). Eureka!
I can replicate the whole second half of that workflow just fine in Swift — starting with the getsectbynamefromheader() part. However, the first part has me stumped.
Swift has no preprocessor, so spelling the magic as elegantly as MAGIC(someidentifier) is impossible. I don't want it to be too ugly, though.
As far as I know, Swift has no way to insert symbols into a given section — no equivalent of __attribute__((section)). This is okay, though, since nothing in my plan requires a dedicated section; that part just makes the second half easier.
As far as I know, the only way to get a symbol into the symbol table in Swift is via a local struct definition. Something like this:
func foo(b: Bool) -> Void {
    struct Local { static var xyzzy = 0; };
    println(Local.xyzzy);
}
That works, but it's a bit of extra typing, and can't be done inline in an expression (not that that'll matter if we can't make a MAGIC macro in Swift anyway), and I'm worried that the Swift compiler might optimize it away.
So, there are three questions here, all about how to make Swift do things that Swift doesn't want to do: Macros, attributes, and creating symbols that are resistant to compiler optimization.
I'm aware of #asmname but I don't think it helps me since I can already deal with demangling on my own.
I'm aware that Swift has "generics", but they seem to be closer to Java generics than to C++ templates; I don't think they can be used as a substitute for macros in this particular case.
I'm aware that the code for the Swift compiler is now open-source; I've skimmed bits of it in vain; but I can't read through all of it looking for tricks that might not even be there.
Here is an answer to the preprocessor (and macros) part of your question.
Swift has no preprocessor, so spelling the magic as elegantly as MAGIC(someidentifier) is impossible. I don't want it to be too ugly, though.
The Swift project has a preprocessor (but, AFAIK, it is not distributed with the Swift binary distribution).
From swift-users mailing list:
What are .swift.gyb files?
It’s a preprocessor the Swift team wrote so that when they needed to build, say, ten nearly-identical variants of Int, they wouldn’t have to literally copy and paste the same code ten times. If you open one of those files, you’ll see that they’re mainly Swift code, but with some lines of code intermixed that are written in Python.
It is not as beautiful as C macros, but, IMHO, is more powerful.
You can see the available commands with the ./swift/utils/gyb --help command after cloning Swift's git repo.
$ swift/utils/gyb --help
usage, etc (TL;DR)...
Example template:
- Hello -
%{
x = 42
def succ(a):
    return a+1
}%
I can assure you that ${x} < ${succ(x)}
% if int(y) > 7:
% for i in range(3):
y is greater than seven!
% end
% else:
y is less than or equal to seven
% end
- The End. -
When run with "gyb -Dy=9", the output is
- Hello -
I can assure you that 42 < 43
y is greater than seven!
y is greater than seven!
y is greater than seven!
- The End. -
My example of GYB usage is available as a GitHub Gist.
For more complex examples, look for *.swift.gyb files in apple/swift under stdlib/public/core.

Dynamic arg types for a python function when embedding

I am adding to Exim an embedded python interpreter. I have copied the embedded perl interface and expect python to work the same as the long-since-coded embedded perl interpreter. The goal is to allow the sysadmin to do complex functions in a powerful scripting language (i.e. python) instead of trying to use exim's standard ACL commands because it can get quite complex to do relatively simple things using the exim ACL language.
My current code as of the time of this writing is located at http://git.exim.org/users/tlyons/exim.git/blob/9b2c5e1427d3861a2154bba04ac9b1f2420908f7:/src/src/python.c . It is working properly in that it can import the sysadmin's custom python code, call functions in it, and handle the returned values (simple return types only: int, float, or string). However, it does not yet handle values that are passed to a python function, which is where my question begins.
Python seems to require that any args I pass to the embedded python function be explicitly cast to one of int, long, double, float, or string using the C API. The problem is the sysadmin can put anything in that embedded python code, and on the C side of things in exim, I won't know what those variable types are. I know that python is dynamically typed, so I was hoping to maintain that behaviour when passing values to the embedded code. But it's not working that way in my testing.
Using the following basic super-simple python code:
def dumb_add(a,b):
    return a+b
...and the calling code from my exim ACL language is:
${python {dumb_add}{800}{100}}
In my C code below, reference counting is omitted for brevity; count is the number of args I'm passing:
pArgs = PyTuple_New(count);
for (i = 0; i < count; ++i)
{
    pValue = PyString_FromString((const char *)arg[i]);
    PyTuple_SetItem(pArgs, i, pValue);
}
pReturn = PyObject_CallObject(pFunc, pArgs);
Yes, **arg is a pointer to an array of strings (two strings in this simple case). The problem is that the two values are treated as strings in the python code, so the result of that c code executing the embedded python is:
${python {dumb_add}{800}{100}}
800100
If I change the python to be:
def dumb_add(a,b):
    return int(a)+int(b)
Then the result of that c code executing the python code is as expected:
${python {dumb_add}{800}{100}}
900
My goal is that I don't want to force a python user to manually cast all of the numeric parameters they pass to an embedded python function. Instead of PyString_FromString(), if there were a PyDynamicType_FromString(), I would be ecstatic. Exim's embedded perl parses the args and does the casting automatically; I was hoping for the same from the embedded python. Can anybody suggest whether python can do this arg parsing to provide the dynamic typing I was expecting?
Or if I want to maintain that dynamic typing, is my only option going to be for me to parse each arg and guess at the type to cast it to? I was really really REALLY hoping to avoid that approach. If it comes to that, I may just document "All parameters passed are strings, so if you are actually trying to pass numbers, you must cast all parameters with int(), float(), double(), or long()". However, and there is always a comma after however, I feel that approach will sour strong python coders on my implementation. I want to avoid that too.
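For illustration only (this sketch is not part of the original post), that parse-and-guess approach could look roughly like this in C: try an integer parse, then a float parse, and only fall back to a string. The helper name pyobject_from_arg is made up:

#include <Python.h>
#include <stdlib.h>

/* Hypothetical helper: guess the type of a textual argument.
   Tries integer first, then float, then falls back to a plain string. */
static PyObject *pyobject_from_arg(const char *s)
{
    char *end;
    long l;
    double d;

    l = strtol(s, &end, 10);
    if (*s != '\0' && *end == '\0')
        return PyInt_FromLong(l);        /* whole string parsed as an integer */

    d = strtod(s, &end);
    if (*s != '\0' && *end == '\0')
        return PyFloat_FromDouble(d);    /* whole string parsed as a float */

    return PyString_FromString(s);       /* anything else stays a string */
}

In the loop above, pValue = pyobject_from_arg((const char *)arg[i]); would then replace the PyString_FromString() call, so {800} and {100} would arrive in dumb_add() as ints rather than strings.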
Any and all suggestions are appreciated, aside from "make your app into a python module".
The way I ended up solving this was by finding out how many args the function expected, and exiting with an error if the number of args passed to the function didn't match. Rather than trying to synthesize missing args or simply omit extra args, for my use case I felt it was best to enforce matching arg counts.
The args are passed to this function as an unsigned char ** arg:
int count = 0;

/* Identify and call appropriate function */
pFunc = PyObject_GetAttrString(pModule, (const char *) name);
if (pFunc && PyCallable_Check(pFunc))
{
    PyCodeObject *pFuncCode = (PyCodeObject *)PyFunction_GET_CODE(pFunc);

    /* Should not fail if pFunc succeeded, but check to be thorough */
    if (!pFuncCode)
    {
        *errstrp = string_sprintf("Can't check function arg count for %s",
                                  name);
        return NULL;
    }

    while (arg[count])
        count++;

    /* Sanity check: calling a python object requires stating the number of
       args being passed; bail if it doesn't match the function declaration. */
    if (count != pFuncCode->co_argcount)
    {
        *errstrp = string_sprintf("Expected %d args to %s, was passed %d",
                                  pFuncCode->co_argcount, name, count);
        return NULL;
    }
The string_sprintf is a function within the Exim source code which also handles memory allocation, making life easy for me.
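As a small aside (not part of the original answer), the co_argcount value checked above is the same number you can see from the Python side:

def dumb_add(a, b):
    return a + b

print dumb_add.func_code.co_argcount  # prints 2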

Python: outside function applying changes to a class object's unique namespace

My question is how to write, in Python (2.6), a function that uses the namespace of an object while the function itself is defined outside the object/class. In addition, that function should only change the variables in the object's namespace; it should not take over the namespace (because with multiple objects they would all end up using the same namespace).
My reason for pursuing this is that I wish to write a very small class where, during construction, all necessary functions for future use are already supplied, and subsequent function calls (self.__call__) on the object itself can be applied directly.
I realize that this idea is not very pythonic (as they say), and I have thought of various other solutions (such as putting the functions in another class and connecting them), but I can't help but feel that each of these solutions is a lot more work than I would think makes sense.
One simple way that accomplishes what I want is the following:
class A:
    def __init__(self, some_dict, func_a):
        self.memory = some_dict
        self.__call__ = func_a

def test_func(obj, some_input):
    if some_input in obj.memory:
        return obj.memory[some_input]
    else:
        obj.memory[some_input] = 0.  # some default value
        return 0.

first_object = A({}, test_func)
print first_object(first_object, '3')
This will work fine, but what bothers me is that when I make function calls to the object, I also have to pass the object itself (see the last line). I hope to be able to make calls like this:
print first_object('3')
So far, my ideas to avoid this have been unsuccessful (e.g. copying the function method and linking its namespace via self.__call__.memory = self.memory). I wish to find a way to change the def __init__ part so that it 'adopts' a function and links their namespaces.
I have rigorously searched for an answer on the internet, but a definite solution has not yet been found. The following thread http://www.velocityreviews.com/forums/t738476-inserting-class-namespace-into-method-scope.html seeks the same, but is also not successful.
Anyone have a solution to tackle this?
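As a purely illustrative sketch (not from the thread): one approach that is sometimes suggested is to bind the external function to the instance with types.MethodType, so the instance is supplied automatically. Note that this relies on the class above being a classic (old-style) Python 2 class, which looks special methods up on the instance:

import types

class A:
    def __init__(self, some_dict, func_a):
        self.memory = some_dict
        # Bind the externally defined function to this instance, so that
        # calling the object passes the instance as the first argument.
        self.__call__ = types.MethodType(func_a, self)

def test_func(obj, some_input):
    if some_input in obj.memory:
        return obj.memory[some_input]
    obj.memory[some_input] = 0.  # some default value
    return 0.

first_object = A({}, test_func)
print first_object('3')  # no need to pass first_object explicitly

New-style classes look special methods up on the type rather than the instance, so this particular trick would not carry over unchanged.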

The difference between scala script and application

What is the difference between a Scala script and a Scala application? Please provide an example.
The book I am reading says that a script must always end in a result expression whereas the application ends in a definition. Unfortunately no clear example is shown.
Please help clarify this for me
I think that what the author means is that a regular Scala file needs to define a class or an object in order to be useful; you can't use top-level expressions (because the entry points to a compiled file are pre-defined). For example:
println("foo")
object Bar {
// Some code
}
The println statement is invalid at the top level of a .scala file, because the only logical interpretation would be to run it at compile time, which doesn't really make sense.
Scala scripts in contrast can contain expressions on the top-level, because those are executed when the script is run, which makes sense again. If a Scala script file only contains definitions on the other hand, it would be useless as well, because the script wouldn't know what to do with the definitions. If you'd use the definitions in some way, however, that'd be okay again, e.g.:
object Foo {
  def bar = "test"
}

println(Foo.bar)
The latter is valid as a scala script, because the last statement is an expression using the previous definition, but not a definition itself.
Comparison
Features of scripts:
Like applications, scripts get compiled before running. Actually, the compiler translates scripts to applications before compiling, as shown below.
No need to run the compiler yourself - scala does it for you when you run your script.
The feel is very similar to scripting languages like bash, python, or ruby - you directly see the results of your edits, and get a very quick debug cycle.
You don't need to provide a main method, as the compiler will add one for you.
Scala scripts tend to be useful for smaller tasks that can be implemented in a single file.
Scala applications, on the other hand, are much better suited when your projects start to grow more complex. They allow you to split tasks into different files and namespaces, which is important for maintaining clarity.
Example
If you write the following script:
#!/usr/bin/env scala
println("foo")
the Scala 2.11.1 compiler will pretend (source on GitHub) that you had written:
object Main {
  def main(args: Array[String]): Unit =
    new AnyRef {
      println("foo")
    }
}
Well, I always thought this was a Scala script:
$ cat script
#!/usr/bin/scala
!#
println("Hello, World!")
Run it simply with:
$ ./script
An application, on the other hand, has to be compiled to .class files and executed explicitly using the Java runtime.
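For contrast, here is a minimal sketch (not from the original answers) of the application route; the HelloApp file and object names are made up:

// HelloApp.scala
object HelloApp {
  def main(args: Array[String]): Unit = {
    println("Hello, World!")
  }
}

$ scalac HelloApp.scala
$ scala HelloApp
Hello, World!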

GtkAda simple chat error

I'm writing a simple chat program in Ada, and I'm having a problem with the chat window simulation - when the button is clicked, it reads text from an entry and puts it into a text_view. Here is the code I've written, and here is the compile output:
gnatmake client `gtkada-config`
gcc -c -I/usr/include/gtkada client_pkg.adb
client_pkg.adb:14:19: no candidate interpretations match the actuals:
client_pkg.adb:14:37: expected private type "Gtk_Text_Iter" defined at gtk-text_iter.ads:48
client_pkg.adb:14:37: found type "Gtk_Text_View" defined at gtk-text_view.ads:58
client_pkg.adb:14:37: ==> in call to "Get_Buffer" at gtk-text_buffer.ads:568
client_pkg.adb:14:37: ==> in call to "Get_Buffer" at gtk-text_buffer.ads:407
client_pkg.adb:15:34: no candidate interpretations match the actuals:
client_pkg.adb:15:34: missing argument for parameter "Start" in call to "Get_Text" declared at gtk-text_buffer.ads:283
client_pkg.adb:15:34: missing argument for parameter "Start" in call to "Get_Text" declared at gtk-text_buffer.ads:270
gnatmake: "client_pkg.adb" compilation error
Can anyone tell me what the problem is? I have no idea why Get_Buffer expects a Gtk_Text_Iter, or why Get_Text is missing its Start parameter.
You have to call the correct procedures/functions.
In your example, you call Gtk.Text_Buffer.Get_Buffer, not the correct Gtk.Text_View.Get_Buffer. This is because you with and use Gtk.Text_Buffer, but don't use Gtk.Text_View. You should be careful what you use. Same for Get_Text.
If you add use clauses for Gtk.Text_View and Gtk.GEntry, those errors should disappear.
But let me give you one piece of advice: try to use as few use clauses as possible. That way you always know which function is really being called.
TLDR: Add use Gtk.Text_View; use Gtk.GEntry; to the declaration part of the On_Btn_Send_Clicked procedure.