Macro name expanded from another macro in makefile - macros

I have a makefile with the following format. First I define what my outputs are:
EXEFILES = myexe1.exe myexe2.exe
Then I define what the dependencies are for those outputs:
myexe1.exe : myobj1.obj
myexe2.exe : myobj2.obj
Then I have some macros that define extra dependencies for linking:
DEP_myexe1 = lib1.lib lib2.lib
DEP_myexe2 = lib3.lib lib4.lib
Then I have the target for transforming .obj to .exe:
$(EXEFILES):
$(LINK) -OUT:"Exe\$#" -ADDOBJ:"Obj\$<" -IMPLIB:$($($(DEP_$*)):%=Lib\\%)
What I want to happen is (example for myexe1.exe)
DEP_$* -> DEP_myexe1
$(DEP_myexe1) -> lib1.lib lib2.lib
$(lib1.lib lib2.lib:%=Lib\\%) -> Lib\lib1.lib Lib\lib2.lib
Unfortunately this is not working. When I run make --just-print, the -IMPLIB: arguments are empty. However, if I run $(warning DEP_$*) I get
DEP_myexe1
And when I run $(warning $(DEP_myexe1)) I get
lib1.lib lib2.lib
So for some reason, make does not like the combination of $(DEP_$*). Perhaps it cannot resolve macro names dynamically like this. What can I do to get this to work? Is there an alternative?

Where exactly does $(warning DEP_$*) give you DEP_myexe1 as output? Given your makefile above, it shouldn't.
$* is the stem of the target pattern that matched. In your case, because you have explicit target names, there is no pattern match, hence no stem, and so $* is always empty.
Additionally, you are attempting a few too many expansions. You expand $* to get myexe1 directly (assuming for the moment that the variable works the way you intended). You then prefix that with DEP_ and use $(DEP_$*) to get lib1.lib lib2.lib. You then expand that result with $($(DEP_$*)), and then expand that (empty) result again (to do your substitution): $($($(DEP_$*)):%=Lib\\%).
You want to either use $(@:.exe=) instead of $* in your rule body, or use %.exe as your target and then use $* to get myexe1/myexe2.
You then want to drop two levels of expansion from $($($(DEP_$*)):%=Lib\\%) and use $(DEP_$*:%=Lib\\%) instead.
So (assuming you use the pattern rule) you end up with:
%.exe:
$(LINK) -OUT:"Exe\$#" -ADDOBJ:"Obj\$<" -IMPLIB:$(DEP_$*:%=Lib\\%)

I managed to get it working without needing to resolve macros in the way described above. I modified the linking dependencies like this:
myexe1.exe : myobj1.obj lib1.lib lib2.lib
myexe2.exe : myobj2.obj lib3.lib lib4.lib
Then I need to filter these files by extension in the target recipe:
$(EXEFILES):
$(LINK) -OUT:"$(EXE_PATH)\$#" -ADDOBJ:$(patsubst %, Obj\\%, $(filter %.obj, $^)) -IMPLIB:$(patsubst %, Lib\\%, $(filter %.lib, $^))
The $(patsubst ...) calls are used to prepend the directory that the relevant files live in.
In the case of myexe1.exe, the link command expands to:
slink -OUT:"Exe\myexe1.exe" -ADDOBJ: Obj\myexe1.obj -IMPLIB: Lib\lib1.lib Lib\lib2.lib
For interest's sake, I would still like to know whether it is possible to resolve macro names the way the question describes.
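For the record, computed variable names such as $(DEP_$*) do work in GNU Make; the only thing that broke in the original rule is that $* is empty for an explicit (non-pattern) target. A minimal sketch of my own (assuming GNU Make, and remembering that the recipe line must start with a tab) that keeps the explicit targets and derives the variable name from $@ instead:

EXEFILES = myexe1.exe myexe2.exe
DEP_myexe1 = lib1.lib lib2.lib
DEP_myexe2 = lib3.lib lib4.lib

myexe1.exe : myobj1.obj
myexe2.exe : myobj2.obj

$(EXEFILES):
	$(LINK) -OUT:"Exe\$@" -ADDOBJ:"Obj\$<" -IMPLIB:$(DEP_$(@:.exe=):%=Lib\\%)

Here $(@:.exe=) strips the .exe suffix from the target name, so for myexe1.exe the variable looked up is DEP_myexe1 and the -IMPLIB: argument becomes Lib\lib1.lib Lib\lib2.lib.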

Related

Simple `to_tsvector` configuration - postgres

How can I change the to_tsvector configuration to use a simple tokenization rule like:
lowercase
split by spaces only
Executing the following query:
SELECT to_tsvector('english', 'birthday=19770531 Name=John-Oliver Age=44 Code=AAA-345')
I get these lexemes:
'-345':9 '19770531':2 '44':6 'aaa':8 'age':5 'birthday':1 'code':7 'john':4 'name':3
The kind of searching I'm looking for is like:
(!birthday | birthday=19770531) & (code=AAA-345)
It means: get me all records that have the text "birthday=19770531" or don't have "birthday" at all, and that have text equal to "code=AAA-345". The way the lexemes are being created, this is not possible. I was expecting to have something like this:
'birthday=19770531':1 'age=44':2 'code=aaa-345':4 'name=john-oliver':3
You would have to code a custom parser. This can only be done in C.
But you might be able to use the existing testing parser test_parser; it seems to do what you want. If not, it would at least be a good starting point.
The problem may be that this is in src/test/modules/, and I don't think it ships with most installation packaging. So it might take some effort to get it to install. It would depend on your OS, version, and package manager.
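If lexemes without position information are enough for this kind of query, a workaround sketch of my own (not the test_parser route above; array_to_tsvector needs PostgreSQL 9.6 or later) is to lowercase and split on spaces in plain SQL and feed the result to array_to_tsvector:

SELECT array_to_tsvector(
         string_to_array(
           lower('birthday=19770531 Name=John-Oliver Age=44 Code=AAA-345'),
           ' '));
-- 'age=44' 'birthday=19770531' 'code=aaa-345' 'name=john-oliver'

The search side then has to produce its lexemes the same way, since to_tsquery would otherwise run the terms through the regular parser again.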

Drools filtering stream within DRL

I would like to create a rule with a lambda expression like the one below:
when
rule_id: String() from "784acba8-32e5-41de-bd73-04f9ce2bfaff"
$: DroolsEventWrapper(Arrays.stream("testing".split("")).filter(element->(element!=null && element.contains("e"))).findFirst().isPresent())
then
System.out.println("Qualified for "+rule_id);
end
It is just a simple check; "testing" will also be supplied as a parameter. However, when I use it in the DRL, it gives an error like:
mismatched input '.' in rule "784acba8-32e5-41de-bd73-04f9ce2bfaff" in pattern
When I check the location of that dot, it belongs to filter(element->(element!=null && element.contains("e"))); when I omit that part, the rule works.
I am using the latest version of Drools, 7.55.0.Final. I found some tickets saying that lambda expressions were somewhat buggy in previous versions, but supposedly not in the latest ones.
Am I missing something, or is there any way to run this within a DRL?

Cannot get values from Lark AST

I don't know how to retrieve the values from an AST that I generated using the Lark parser.
My grammar is as follows, saved in a .lark file:
start: (un_handle ": ")? AMOUNT "|" p_handle ("," p_handle)* (" \"" MESSAGE* "\"")?
AMOUNT: /[0-9]+(\.[0-9][0-9]?)?/
un_handle: HANDLE
p_handle: HANDLE
HANDLE : /[A-Z][A-Z]/
MESSAGE : /[^"]+/
I then run:
testText = '10|GP "Bananas"'
testTree = parser.parse(testText)
and get:
Tree(start, [Token(AMOUNT, '10'), Tree(p_handle, [Token(HANDLE, 'GP')]), Token(MESSAGE, 'Bananas')])
But, what now?
I realize that I probably have to build a transformer, but which methods should I define and what should I call them? I just want to extract the values for AMOUNT, un_handle, p_handle (there may be more than one p_handle), and MESSAGE into Python variables.
Thank you so much in advance! Have been debugging for hours.
First off, try adding a "line" rule to provide a reference point. Yes, your application probably does not use multiple lines, but it is usually good to include one just in case.
Now, write a subroutine to find each "line" token in the AST, and append it to a list.
Finally, I suggest that you process the resulting list using a subroutine based upon the eval() subroutine in LisPy.
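As a concrete alternative to the list-based approach above, here is a sketch of my own that reuses the grammar from the question and just walks the parse tree with Lark's Tree and Token classes instead of a Transformer:

from lark import Lark, Token, Tree

grammar = r'''
start: (un_handle ": ")? AMOUNT "|" p_handle ("," p_handle)* (" \"" MESSAGE* "\"")?
AMOUNT: /[0-9]+(\.[0-9][0-9]?)?/
un_handle: HANDLE
p_handle: HANDLE
HANDLE : /[A-Z][A-Z]/
MESSAGE : /[^"]+/
'''

parser = Lark(grammar)
tree = parser.parse('10|GP "Bananas"')

amount = None
un_handle = None
p_handles = []          # there may be more than one
message = None
for child in tree.children:
    if isinstance(child, Token) and child.type == 'AMOUNT':
        amount = child.value                        # '10'
    elif isinstance(child, Token) and child.type == 'MESSAGE':
        message = child.value                       # 'Bananas'
    elif isinstance(child, Tree) and child.data == 'un_handle':
        un_handle = child.children[0].value
    elif isinstance(child, Tree) and child.data == 'p_handle':
        p_handles.append(child.children[0].value)   # ['GP']

print(amount, un_handle, p_handles, message)        # 10 None ['GP'] Bananas

If you do want a Transformer, its methods are simply named after your rules (un_handle, p_handle, start), and each one receives the list of already-transformed children.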

Get real position in lexer. Added example

I am writing an editor with NetBeans 7 and ANTLR4.
I have this line in my.g4 file:
Label : {(getCharPositionInLine()==0)}? ID;
That works well for static files, but while editing, getCharPositionInLine() often returns 0 in other places as well.
How do I get the real position in the lexer?
Update:
I created an example that demonstrates the problem:
https://github.com/daimor/SimpleANTLR
Your error is likely in the way you are constructing your lexer and/or input stream (i.e. code that you have not shown here). The predicate you describe above will work as expected for an ANTLR 4 lexer.
Also, if getCharPositionInLine()==0, then exactly one of the following conditions is also true:
_input.index()==0
_input.LA(-1)=='\n'
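For reference, a minimal Java sketch of the kind of construction the answer is talking about. MyLexer and editorText are placeholders for whatever your generated lexer and current document text are called (not names from the question), and CharStreams.fromString requires ANTLR 4.7+; older runtimes use new ANTLRInputStream(...) instead:

import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.Token;

// Build a fresh stream and lexer from the complete, current text on every
// lexing pass; handing the lexer stale or partial text is a typical reason
// for getCharPositionInLine() reporting 0 in unexpected places.
CharStream input = CharStreams.fromString(editorText);
MyLexer lexer = new MyLexer(input);
for (Token t = lexer.nextToken(); t.getType() != Token.EOF; t = lexer.nextToken()) {
    System.out.println(t.getText() + " line=" + t.getLine()
            + " col=" + t.getCharPositionInLine());
}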

SWI-Prolog cgi_get_form(Arguments) saving and handling arguments web form

I'm looking for a way to save, and later handle, the arguments of a web form in SWI-Prolog when I submit the form and call the same program to generate another form, and so on, always calling the same Prolog program from one form to the next.
The SWI-Prolog CGI library saves these arguments as a list of Name(Value) terms, i.e. [Name(Value)].
If I pass the arguments as a hidden field inside the form (TotalArguments is a list):
format('"<"input type="hidden" id="nameofform1" name="nameofform1" value="~w" />~n', TotalArguments),
I need to get rid of the id or name that ends up concatenated onto my resulting TotalArguments list when I append it. Any idea how to do this so that the final list looks like [nameofform1(value1), nameofform2(value2), ...]?
I could also write this list of arguments out to a file, appending to it, and consult it every time the program is called again, but that would always load all of them, while I only need the arguments for the specific step and form being handled at the moment; otherwise the file could accumulate unwanted data after a few runs. Any thoughts on how to do it this way?
Any other suggestions for this kind of problem?
Edit: my solution using a hidden form field
I've solved it by creating:
extract_value([], _).
extract_value([A0|_], Valor) :-
    A0 =.. [_, Value],
    Valor = Value.
and then doing:
extract_value(Arguments, Value),
and submitting the hidden value of the form like:
format('<"input type="hidden" id="nameofform1" name="nameofform1" value="~w"/>~n', [Value]),
and appending it in the next form so that it looks how I wanted:
[nameofform2(value2),nameofform1(value1)]
It's a bit unclear to me what exactly you need here, but to remove the first element of a list that unifies with a given element (especially if you know for certain that the list contains such an element), use selectchk/3. For example:
selectchk(id(_), List0, List1),
selectchk(name(_), List1, List)
in order to obtain List, which is List0 without the elements id(_) and name(_). Implicit in your question, as I understand it, is how to create a term like form1(Value) given the terms name(form1) and Value. You can do this, for example, with =../2: you can create a term T with functor N and arguments Args with
T =.. [N|Args]
It does not seem necessary to write anything to files here; I would simply pass the info through the forms, just as you outline.
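Putting both pieces together, a small sketch (the predicate name form_term/4 is mine, not from any library) could look like this:

% Drop the id(_) wrapper, take the functor out of name(Name), and build Name(Value).
form_term(List0, Value, Term, Rest) :-
    selectchk(id(_), List0, List1),
    selectchk(name(Name), List1, Rest),
    Term =.. [Name, Value].

For example:

?- form_term([id(form1), name(form1), other(x)], value1, T, Rest).
T = form1(value1),
Rest = [other(x)].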