I have two BitBake recipes with a do_install_prepend() function. The point is that about 90% of the body of this function is identical in both recipes. I would like to do something like:
Create new file
Add common variables to the file
Create new function inside the file which contains the reusable part.
Use require inside the recipes.
So it looks pretty easy. I did all of those steps, but it doesn't seem to work. I need the function from the new file to run before do_install_prepend. Of course I'm using require before the definition of do_install. Unfortunately, do_install runs first and causes the recipe to fail.
I also tried addtask (addtask new_func before do_install), but then I get an error that the new_func command is not found.
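For reference, here is a minimal sketch of the setup described above, with hypothetical names (common-install.inc, do_prepare_install). The shared code lives in an .inc file as a plain shell function, and each recipe requires that file and calls the function from its own do_install_prepend; a shell function is not a task, which is why registering it with addtask fails:

```
# common-install.inc -- hypothetical shared file, pulled in via 'require'
COMMON_CONF_DIR = "${sysconfdir}/myapp"

# Plain shell function holding the reusable part. It is NOT a task,
# so it must be called from inside a task, not added with addtask.
do_prepare_install() {
    install -d ${D}${COMMON_CONF_DIR}
}

# somerecipe.bb -- each recipe requires the .inc and calls the function
require common-install.inc

do_install_prepend() {
    do_prepare_install
    # recipe-specific steps follow
}
```

This way the shared code runs as part of do_install itself, so there is no ordering problem between a separate task and do_install_prepend.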
When I run actions list, the action is there, but for some reason it says object not found, and this is happening on the live server. file.dump_from_b64 works locally if I create a walker init and use it to test. The issue doesn't make sense.
Oh, you shouldn't use the variable name file while also referencing the file Jaseci action set; the local variable shadows the action set.
Try
for f in files:
...
instead of
for file in files:
I'm using a pretty old Matlab (version 7.1.0.246 (R14) Service Pack 3) :(
I have a toolbox I was provided which I'd like to use. When I try to execute a function, I get Undefined command/function 'test' (my function is test, stored in test.m, and the file is located in my current working directory).
If I place the file in C:\Temp\ and execute which test, I get the complete file path (C:\Temp\test.m).
If I place the file in C:\Temp\MyMap\ and execute which test, I get the complete file path (C:\Temp\MyMap\test.m) and an additional comment: %Has no license available.
If I use the following:
if exist('test')
test(...)
end
It solves the issue. However, as mentioned previously, it's a toolbox and contains many functions. I don't have the time (and don't want) to apply the workaround to all the files/functions.
Any suggestion how this could be solved?
It seems like it is impossible to run a process in Scala with both a different working directory and input redirection.
This is how I would typically run a process in Scala with a given working directory:
Process(cmd, new File("someDir")).!!
And this is how I would typically run a process on Scala with input redirect:
("someCmd -someParam" #< "myFile.txt").!!
It seems like it is impossible to combine the two.
Am I missing anything?
#< is a method on ProcessBuilder, so you can just call:
(Process("someCmd -someParam", new File("someDir")) #< new File("myFile.txt")).!!
Note that the File you pass as input has to be specified relative to the working directory of the Scala process. But if instead you pass the file path as an argument to the command, the path has to be relative to the working directory of the command.
So, for myFile.txt inside someDir, the calls may look like this:
(Process("someCmd -someParam", new File("someDir")) #< new File("someDir/myFile.txt")).!!
But when the file name is passed as an argument to the command, it is resolved relative to the command's working directory, so no someDir/ prefix is needed:
Process("cat myFile.txt", new File("someDir")).!!
I have several Octave script files that run tests, named test_1, test_2, etc. I want to have a script file that will run all the tests, without having to switch all the test_n files to function files. I've tried several variations on this:
#!/path/to/octave -q
addpath('/path/to/directory/containing/all/scripts/');
source(test_1.m);
source(test_2.m);
but I always get "error: invalid call to script /path/to/directory/containing/all/scripts/test_1.m".
(I've tried source_file(), run(), and just having the filename alone on the line.)
Is there any way to run script files from a script file in Octave?
Try
source test_1.m
or
source('test_1.m')
instead.
Your syntax implies test_1 is a struct variable and that you're trying to access a field called m.
Same with the run command (in fact, run simply calls source under the hood).
You can also call the script directly, if it's on the path. You just have to make sure you don't include the .m extension, i.e.
test_1
test_2
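If there are many numbered tests, the same calls can be generated in a loop. A sketch, assuming scripts named test_1.m through test_10.m on the path (the count and path are hypothetical):

```
% run_all.m -- sketch: source each numbered test script in turn
addpath('/path/to/directory/containing/all/scripts/');
for k = 1:10            % adjust to the actual number of tests
  source(sprintf('test_%d.m', k));
end
```

Since source runs each file in the caller's workspace, the test scripts can stay plain scripts rather than functions.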
Just put the name of the included script, without .m extension on a separate line.
Let's have, for example, script 1, 'enclosed.m', and script 2, 'included.m'. Then enclosed.m should look like:
% begin enclosed.m
included; % sources included.m
% end enclosed.m
In Xcode, for any Objective-C header, we can view the Generated Interface, which shows how it is seen by Swift in interop.
I'd like to generate that from the command line. Any idea how to do it?
Bonus task: The header should be precompiled first, so all #imports should be replaced already.
Invoke interpreter command :type lookup on the module you are trying to inspect.
Suppose you have a header file named header.h. Put it into a separate directory so that the interpreter will recognise it as a module, and create a module map in the same directory. Let's call this directory Mod:
./
./Mod/
./Mod/header.h
./Mod/module.modulemap
Fill in the modulemap with the following:
module Mod {
    header "./header.h"
    export *
}
Once it's done, issue a command like this:
echo "import Mod\n:type lookup Mod" | swift -I./Mod | tail -n+2 >| generated-interface.swift
Alternatively, you might want to use a command like this, with equal effect:
echo "import Mod\n:print_module Mod" | swift -deprecated-integrated-repl -I./Mod >| generated-interface.swift
It's broken down as follows:
first we echo the script to be executed: import module and type-lookup it;
then we launch the interpreter and feed the script into it; the -I argument helps it find our module, which is crucial;
then we cut off the “Welcome to Swift” banner with tail;
and write the result into generated-interface.swift.
While running the above commands, make sure your working directory is set to one level higher than the Mod directory.
Note that the output might not be exactly the same as from Xcode, but it's very similar.
Just for the record, if you want to produce the interface from a Swift file, then it's just this:
swiftc -print-ast file.swift