Good afternoon
I would like to know how I can reuse, in one file, the commands that another file defines using the argparse library.
To make it easier to understand, here are some examples:
File 1 (the main file of the project; its contents are not important here):
import file2
code here
File 2 (containing the argparse arguments):
import argparse
class file2():
    def function():
        argumentos = argparse.ArgumentParser(description = 'My Program')
        argumentos.add_argument('-command')
        argumentos_recebe = argumentos.parse_args()
I would like to call file 1, instead of file 2, with one of those commands.
For example:
arquivo1.py -command
instead of:
arquivo2.py -command
I think this is possible using a class, but most of all I would like to find out how!
When you import file2 from file1.py, you can then refer to its contents as attributes on the file2 module object. So, your file1 code can be something like:
import file2
object = file2.file2() # create an instance of the class
object.function() # call the function to parse the arguments
Now, this won't quite work as written with the file2 contents you showed, because your function method doesn't take a self argument. You may not actually need a class (unlike Java, you don't need to pack all of your functions into classes all the time in Python). If that's the case, you can move function to the top level of the module and drop the object parts of my code. Then just call file2.function() and it will do the parsing. Note that you might want to return something from the function that's doing the parsing, if you need to access the results in the rest of your code in file1.
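For instance, here is a minimal sketch of that module-level approach (the function name parse_arguments is just illustrative):
# file2.py
import argparse

def parse_arguments():
    argumentos = argparse.ArgumentParser(description = 'My Program')
    argumentos.add_argument('-command')
    return argumentos.parse_args()  # hand the parsed arguments back to the caller

# file1.py
import file2

args = file2.parse_arguments()  # running "python file1.py -command something" parses it here
print(args.command)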
Suppose I have:
+MyPackage/+MySubPackage2/some_function.m
How can I generate the string 'MyPackage.MySubPackage2.some_function' from within this some_function.m when it's executing?
mfilename(), dbstack(), what(), etc. all just give 'some_function'
meta.package.fromName requires the string we're after as its input
parsing the full path (mfilename('fullpath')) or meta.package.getAllPackages() etc. seems to be the only way...
Seems that calling mfilename('class') in a class inside a package gives the right answer, but there's no equivalent for plain functions...
...or is there? Feels like I'm missing something obvious...
If it is possible to import the containing package (say p1/p2), then:
function outputArg1 = some_function()
    import p1.p2.*
    t = @some_function;
    func2str(t)
    % ans = 'p1.p2.some_function'
    outputArg1 = ...;
end
The method in this answer may also be used (with some changes possibly) to automate the import process.
I am using the command line to create Swift files and the "swift" command to run one of them. However, I would like one file to be able to access functions from another file. If this were C, I could use the #include directive and specify where the file is, but Swift's import statement doesn't seem to allow that. There should be a way, and I would like to know how to do it.
For example:
If I have a file with a function in it, and then I make another file that uses that function, how do I allow it to use it?
// file1.swift
import Foundation
func sayHello() -> String {
return "hello"
}
// file2.swift
import file1 // <-- trying to import file1 but it doesn't work
println(sayHello())
Once the files have been made, I then write "swift file2.swift" in the terminal, but it tells me:
error: no such module 'file1.swift'
Clearly the Swift compiler is looking for a module. How do I make file1 into a module? I've seen some solutions, but they all take place in Xcode. What I'm looking for is a way to do it all from the command line.
// file1.swift
func sayHello() -> String {
return "hello"
}
// main.swift
println(sayHello())
and then from the terminal:
$ swiftc file1.swift main.swift
$ ./main
hello
See http://computers.tutsplus.com/tutorials/alfred-workflows-in-swift--cms-21807 for an example of creating a module (an Alfred helper library) from the command line, also http://www.alfredforum.com/topic/4902-alfred-workflows-in-swift-tutorial-is-available/?p=35662 where the author of the tutorial explains the four step compilation process in more detail.
I'm new to Scala/Java and I'm trying to understand the code below, which returns a list of files in a directory (function source):
1. It takes an argument called dir - is the type of dir a File or a File object?
2. It returns an array of type File.
3. It calls the method listFiles on dir.
4. What does the last line do?
def getRecursiveListOfFiles(dir: File): Array[File] = {
val these = dir.listFiles
these ++ these.filter(_.isDirectory).flatMap(getRecursiveListOfFiles)
}
This code does a depth-first search, using recursion.
A File can be either a file or a directory.
The code dir.listFiles lists all the entries in the directory. Remember that this will be a list of both files and directories!
We can then break down the last line into 3 things. These could easily be separate lines.
these.filter(_.isDirectory) will return a list of directories that need searching in. It filters out the files.
flatMap(getRecursiveListOfFiles) takes this list of directories, and calls getRecursiveListOfFiles for every single directory. It then flattens these results into one list.
++ adds together two arrays. We add these to the result of the recursive call.
flatMap is key here. Read up about it, and how it differs from the map function to fully understand what's going on.
In short:
these ++ these.filter(_.isDirectory).flatMap(getRecursiveListOfFiles)
is fundamentally:
val allSubDirectories: Array[File] = these.filter(_.isDirectory)
allSubDirectories.flatMap(getRecursiveListOfFiles)
// i.e. for each sub-directory, again find all files in that sub-directory
these ++ (files of all sub-directories)
// ultimately, add the files of the sub-directories to the actual list
Another alternative way to understand would be:
def getAllFiles(dir: File): List[File] = {
val these = dir.listFiles.toList
these ::: these.filter(_.isDirectory).map(x => getAllFiles(x)).flatten
}
Fundamentally the same control flow: for each sub-directory you get a list of all its files, and then you add the files of that sub-directory to the same list.
I'm going to try to walk you through how I would think of this as someone trying to read a piece of unknown code.
It should be pretty clear by context that File can either be a regular file or a directory. It doesn't contain the file contents, but represents any entry in the file system, and you can open the file with other library commands. For the purposes of this function, File is just something that happens to contain more Files in the case that it's a directory, and these can be accessed via the method listFiles: List[File]. Presumably, it also provides other info, so that the original caller of getRecursiveListOfFiles could do something with the resulting list.
Also by context, these is pretty clearly the entries in the current directory.
The last line is the most subtle. But to break it down, it augments these with the Files found in those entries in these which happen to be directories.
To explain this step, the signature of flatMap on a List[File] can be thought of as flatMap[B](f: File => List[B]): List[B], where B is a type variable. In this case, because the same function getRecursiveListOfFiles, which is of type File => List[File], is being passed recursively, B is just File, so we can think of this particular call as flatMap(f: File => List[File]): List[File].
Roughly speaking, flatMap applies a function f to each item in a container, where f is required to return the same type of container. The "flat" part is simply the fact that these individual containers get combined, instead of being nested, which is what map would do. This is what allows the function to recursively add all the files found in its subdirectories. Pretty slick.
1. What's the distinction you're drawing? The type of dir is File.
2. An array of Files, yes.
4. It's probably implicitly converting these to a Scala List or Buffer, which is the confusing part (e.g. it may import scala.collection.JavaConversions._ - it's better to use JavaConverters, which makes the conversions explicit calls to .asScala or .asJava). You can find the definitions of ++, filter and flatMap in the scaladoc, which will hopefully be enough to understand what's happening.
(note: my 4. is coming out as a 3. because of markdown :()
Is it possible to use Scala's import without specifying a main function in an object, and without using the package keyword in the source file with the code you wish to import?
Some explanation: In Python, I can define some functions in some file "Lib.py", write
from Lib import *
in some other file "Run.py" in the same directory, use the functions from Lib in Run, and then run Run with the command python Run.py. This workflow is ideal for small scripts that I might write in an hour.
In Scala, it appears that if I want to include functions from another file, I need to start wrapping things in superfluous objects. I would rather not do this.
Writing Python in Scala is unlikely to yield satisfactory results. Objects are not "superfluous" -- it's your program that is not written in an object oriented way.
First, methods must be inside objects. You can place them inside a package object, and they'll then be visible to anything else that is inside the package of the same name.
Second, if one considers solely objects and classes, then all package-less objects and classes whose class files are present in the classpath, or whose scala files are compiled together, will be visible to each other.
This is as minimal as I could get it:
[$]> cat foo.scala
object Foo {
def foo(): Boolean = {
return true
}
}
// vim: set ts=4 sw=4 et:
[$]> cat bar.scala
object Bar extends App {
import Foo._
println(foo)
}
// vim: set ts=4 sw=4 et:
[$]> fsc foo.scala bar.scala
[$]> export CLASSPATH=.:$CLASSPATH # Or else it can't find Bar.
[$]> scala Bar
true
When you just write simple scripts, use Scala's REPL. There, you can define functions and call them without having any enclosing object or package, and without a main method.
Objects/classes don't have to be in packages, though it's highly recommended. That said, you can also treat singleton objects like packages, i.e., as namespaces for standalone functions, and import their contents as if they were packages.
If you define your application as an object that extends App, then you don't have to define a main method. Just write your code in the body of the object, and the App trait (which extends the special DelayedInit trait) will provide a main method that will execute your code.
If you just want to write a script, you can forgo the object altogether and just write code without any container, then pass your source file to the interpreter (REPL) in non-interactive mode.
Hello Pythoneers: the following code is only a mock up of what I'm trying to do, but it should illustrate my question.
I would like to know if this is dirty trick I picked up from Java programming, or a valid and Pythonic way of doing things: basically I'm creating a load of instances, but I need to track 'static' data of all the instances as they are created.
class Myclass:
    counter=0
    last_value=None
    def __init__(self,name):
        self.name=name
        Myclass.counter+=1
        Myclass.last_value=name
And here is some output from using this simple class, showing that everything works as I expected:
>>> x=Myclass("hello")
>>> print x.name
hello
>>> print Myclass.last_value
hello
>>> y=Myclass("goodbye")
>>> print y.name
goodbye
>>> print x.name
hello
>>> print Myclass.last_value
goodbye
So is this a generally acceptable way of doing this kind of thing, or an anti-pattern ?
[For instance, I'm not too happy that I can apparently set the counter both from within the class (good) and from outside of it (bad); I'm also not keen on having to use the full namespace 'Myclass' from within the class code itself - it just looks bulky; and lastly, I'm initially setting the values to 'None' - probably I'm aping statically typed languages by doing this?]
I'm using Python 2.6.2 and the program is single-threaded.
Class variables are perfectly Pythonic in my opinion.
Just watch out for one thing. An instance variable can hide a class variable:
x.counter = 5 # creates an instance variable in the object x.
print x.counter # instance variable, prints 5
print y.counter # class variable, prints 2
print Myclass.counter # class variable, prints 2
Do. Not. Have. Stateful. Class. Variables.
It's a nightmare to debug, since the class object now has special features.
Stateful classes conflate two (2) unrelated responsibilities: state of object creation and the created objects. Do not conflate responsibilities because it "seems" like they belong together. In this example, the counting of created objects is the responsibility of a Factory. The objects which are created have completely unrelated responsibilities (which can't easily be deduced from the question).
Also, please use Upper Case Class Names.
class MyClass( object ):
    def __init__(self, name):
        self.name=name

def myClassFactory( iterable ):
    for i, name in enumerate( iterable ):
        yield MyClass( name )
The sequence counter is now part of the factory, where the state and counts should be maintained. In a separate factory.
[For folks playing Code Golf, this is shorter. But that's not the point. The point is that the class is no longer stateful.]
It's not clear from the question how Myclass instances get created. Lacking any clue, there isn't much more that can be said about how to use the factory. An iterable is the usual culprit. Perhaps something that iterates through a list, a file, or some other iterable data structure.
Also - for folks just off the boat from Java - the factory object is just a function. Nothing more is needed.
Since the example in the question is perfectly unclear, it's hard to know why (1) two unique objects are created with (2) a counter. The two unique objects are already two unique objects, and a counter isn't needed.
For example, the static variables in Myclass are never referenced anywhere. That makes it very, very hard to understand the example.
x, y = myClassFactory( [ "hello", "goodbye" ] )
If the count or last value were actually used for something, then perhaps a meaningful example could be created.
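As a rough illustration of that idea, here is a minimal sketch (the factory name and the choice to yield the running count are made up for the example) that keeps the bookkeeping in the factory while the created objects stay plain:
class MyClass(object):
    def __init__(self, name):
        self.name = name

def my_class_factory(iterable):
    # the factory, not the class, owns the counting state
    for count, name in enumerate(iterable, 1):
        obj = MyClass(name)
        yield count, obj  # expose the running count alongside each new instance

for count, obj in my_class_factory(["hello", "goodbye"]):
    print count, obj.name  # Python 2, matching the question: "1 hello" then "2 goodbye"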
You can solve this problem by splitting the code into two separate classes.
The first class will be for the object you are trying to create:
class MyClass(object):
    def __init__(self, name):
        self.Name = name
And the second class will create the objects and keep track of them:
class MyClassFactory(object):
    Counter = 0
    LastValue = None

    @classmethod
    def Build(cls, name):
        inst = MyClass(name)
        cls.Counter += 1
        cls.LastValue = inst.Name
        return inst
This way, you can create new instances of the class as needed, but the information about the created instances will still be correct.
>>> x = MyClassFactory.Build("Hello")
>>> MyClassFactory.Counter
1
>>> MyClassFactory.LastValue
'Hello'
>>> y = MyClassFactory.Build("Goodbye")
>>> MyClassFactory.Counter
2
>>> MyClassFactory.LastValue
'Goodbye'
>>> x.Name
'Hello'
>>> y.Name
'Goodbye'
Finally, this approach avoids the problem of instance variables hiding class variables, because MyClass instances have no knowledge of the factory that created them.
>>> x.Counter
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'MyClass' object has no attribute 'Counter'
You don't have to use a class variable here; this is a perfectly valid case for using globals:
_counter = 0
_last_value = None

class Myclass(object):
    def __init__(self, name):
        self.name = name

        global _counter, _last_value
        _counter += 1
        _last_value = name
I have a feeling some people will knee-jerk against globals out of habit, so a quick review may be in order of what's wrong--and not wrong--with globals.
Globals traditionally are variables which are visible and changeable, unscoped, from anywhere in the program. That is a problem with globals in languages like C, but it's completely irrelevant to Python; these "globals" are scoped to the module. The class name "Myclass" is equally global; both names are scoped identically, in the module that contains them. Most variables - in Python just as in C++ - are logically part of instances of objects or locally scoped, but this is clearly shared state across all users of the class.
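To make the scoping point concrete, here is a small sketch, assuming the snippet above lives in a file named mymodule.py (a made-up name): from any other module, both the "global" counter and the class itself are reached the same way, as attributes of that module.
# another_script.py (hypothetical)
import mymodule

x = mymodule.Myclass("hello")  # the class name is a module-level name...
print mymodule._counter        # ...and so is the counter; prints 1
print mymodule._last_value     # prints 'hello'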
I don't have any strong inclination against using class variables for this (and using a factory is completely unnecessary), but globals are how I'd generally do it.
Is this pythonic? Well, it's definitely more pythonic than having global variables for a counter and the value of the most recent instance.
It's said in Python that there's only one right way to do anything. I can't think of a better way to implement this, so keep going. Despite the fact that many will criticize you for "non-pythonic" solutions to problems (like the needless object-orientation that Java coders like or the "do-it-yourself" attitude that many from C and C++ bring), in most cases your Java habits will not send you to Python hell.
And beyond that, who cares if it's "pythonic"? It works, and it's not a performance issue, is it?