Python generating Python - code-generation

I have a group of objects for which I am creating a class, and I want to store each object as its own text file. I would really like to store each one as a Python class definition that subclasses the main class I am creating. So, I did some poking around and found a Python Code Generator on effbot.org. I did some experimenting with it and here's what I came up with:
#
# a Python code generator backend
#
# fredrik lundh, march 1998
#
# fredrik#pythonware.com
# http://www.pythonware.com
#
# Code taken from http://effbot.org/zone/python-code-generator.htm
import sys, string

class CodeGeneratorBackend:

    def begin(self, tab="\t"):
        self.code = []
        self.tab = tab
        self.level = 0

    def end(self):
        return string.join(self.code, "")

    def write(self, string):
        self.code.append(self.tab * self.level + string)

    def indent(self):
        self.level = self.level + 1

    def dedent(self):
        if self.level == 0:
            raise SyntaxError, "internal error in code generator"
        self.level = self.level - 1
class Point():
    """Defines a Point. Has x and y."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def dump_self(self, filename):
        self.c = CodeGeneratorBackend()
        self.c.begin(tab="    ")
        self.c.write("class {0}{1}Point():\n".format(self.x, self.y))
        self.c.indent()
        self.c.write('"""Defines a Point. Has x and y"""\n')
        self.c.write('def __init__(self, x={0}, y={1}):\n'.format(self.x, self.y))
        self.c.indent()
        self.c.write('self.x = {0}\n'.format(self.x))
        self.c.write('self.y = {0}\n'.format(self.y))
        self.c.dedent()
        self.c.dedent()
        f = open(filename, 'w')
        f.write(self.c.end())
        f.close()

if __name__ == "__main__":
    p = Point(3, 4)
    p.dump_self('demo.py')
That feels really ugly; is there a cleaner/better/more Pythonic way to do this? Please note, this is not the class I actually intend to do this with; this is a small class I can easily mock up in not too many lines. Also, the subclasses don't need to have the generating function in them; if I need that again, I can just call the code generator from the superclass.

We use Jinja2 to fill in a template. It's much simpler.
The template looks a lot like Python code with a few {{something}} replacements in it.
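For example, a minimal sketch of that approach (the template text, subclass name, and values here are illustrative, not taken from any real project) might look like this:

from jinja2 import Template

# Hypothetical template: ordinary Python source with {{ ... }} placeholders.
point_template = Template('''\
class {{ name }}(Point):
    """Defines a Point. Has x and y."""
    def __init__(self, x={{ x }}, y={{ y }}):
        self.x = x
        self.y = y
''')

with open('demo.py', 'w') as f:
    f.write(point_template.render(name='DemoPoint', x=3, y=4))

The template reads like the code you want to end up with, which is usually easier to maintain than a generator that assembles the source line by line.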

This is pretty much the best way to generate Python source code. However, you can also generate Python executable code at runtime using the ast library. You can build code using the abstract syntax tree, then pass it to compile() to compile it into executable code. Then you can use eval() to run the code.
I'm not sure whether there is a convenient way to save the compiled code for use later though (ie. in a .pyc file).
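For instance, a minimal sketch of that route using only the standard library (the expression built here is just an illustration) could look like this:

import ast

# Build an AST for the expression 1 + 2 by hand.
tree = ast.Expression(body=ast.BinOp(
    left=ast.Constant(value=1),   # ast.Constant requires Python 3.8+
    op=ast.Add(),
    right=ast.Constant(value=2),
))
ast.fix_missing_locations(tree)   # fill in the lineno/col_offset fields compile() expects

code = compile(tree, filename='<ast>', mode='eval')
print(eval(code))                 # 3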

Just read your comment to wintermute - ie:
"What I have is a bunch of planets that I want to store each as their own text files. I'm not particularly attached to storing them as python source code, but I am attached to making them human-readable."
If that's the case, then it seems like you shouldn't need subclasses but should be able to use the same class and distinguish the planets via data alone. And in that case, why not just write the data to files and, when you need the planet objects in your program, read in the data to initialize the objects?
If you needed to do stuff like overriding methods, I could see writing out code - but shouldn't you just be able to have the same methods for all planets, just using different variables?
The advantage of just writing out the data (it can include label-type info for readability that you'd skip when you read it in) is that non-Python programmers won't get distracted when reading the files, you could reuse the same files from some other language if necessary, and so on.
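Here is a rough sketch of that data-only approach, assuming a hypothetical Planet class with made-up fields and using JSON as one human-readable format:

import json

class Planet:
    def __init__(self, name, radius_km, mass_kg):
        self.name = name
        self.radius_km = radius_km
        self.mass_kg = mass_kg

    def dump(self, filename):
        # Write the instance data as labelled, human-readable JSON.
        with open(filename, 'w') as f:
            json.dump(self.__dict__, f, indent=4)

    @classmethod
    def load(cls, filename):
        # Rebuild a Planet from a previously dumped file.
        with open(filename) as f:
            return cls(**json.load(f))

mars = Planet('Mars', 3389.5, 6.39e23)
mars.dump('mars.json')
same_mars = Planet.load('mars.json')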

I'm not sure whether this is especially Pythonic, but you could use operator overloading:
class CodeGenerator:
    def __init__(self, indentation='\t'):
        self.indentation = indentation
        self.level = 0
        self.code = ''

    def indent(self):
        self.level += 1

    def dedent(self):
        if self.level > 0:
            self.level -= 1

    def __add__(self, value):
        temp = CodeGenerator(indentation=self.indentation)
        temp.level = self.level
        temp.code = str(self) + ''.join([self.indentation for i in range(0, self.level)]) + str(value)
        return temp

    def __str__(self):
        return str(self.code)

a = CodeGenerator()
a += 'for a in range(1, 3):\n'
a.indent()
a += 'for b in range(4, 6):\n'
a.indent()
a += 'print(a * b)\n'
a.dedent()
a += '# pointless comment\n'
print(a)
This is, of course, far more expensive to implement than your example, and I would be wary of too much meta-programming, but it was a fun exercise. You can extend or use this as you see fit; how about:
- adding a write method and redirecting stdout to an object of this to print straight to a script file
- inheriting from it to customise output
- adding attribute getters and setters
Would be great to hear about whatever you go with :)
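As a rough sketch of the first suggestion, building on the CodeGenerator class above (the ScriptFile name and generated.py path are just illustrative):

class ScriptFile(CodeGenerator):
    def write(self, text):
        # print() hands over the text and the trailing newline separately;
        # only indent non-empty chunks, reusing __add__ from the base class.
        if text.strip():
            self.code = str(self + text)
        else:
            self.code = str(self) + text

gen = ScriptFile()
print('for a in range(3):', file=gen)
gen.indent()
print('print(a)', file=gen)

with open('generated.py', 'w') as f:
    f.write(str(gen))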

From what I understand you are trying to do, I would consider using reflection to dynamically examine a class at runtime and generate output based on that. There is a good tutorial on reflection (A.K.A. introspection) at http://diveintopython3.ep.io/.
You can use the dir() function to get a list of names of the attributes of a given object. The doc string of an object is accessible via the __doc__ attribute. That is, if you want to look at the doc string of a function or class you can do the following:
>>> def foo():
...     """A doc string comment."""
...     pass
...
>>> print foo.__doc__
A doc string comment.
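For example, applied to a hypothetical Planet class (the class and field names here are made up for illustration), dir() and __doc__ expose the data and documentation directly:

>>> class Planet(object):
...     """A planet with a name and a radius."""
...     def __init__(self, name, radius):
...         self.name = name
...         self.radius = radius
...
>>> p = Planet('Mars', 3389)
>>> [name for name in dir(p) if not name.startswith('__')]
['name', 'radius']
>>> p.__doc__
'A planet with a name and a radius.'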

Related

Scala var best practice - Encapsulation

I'm trying to understand what's the best practice for using vars in scala, for example
class Rectangle() {
  var x: Int = 0
}
Or something like:
class Rectangle() {
  private var _x: Int = 0
  def x: Int = _x
  def x_(newX: Int): Unit = _x = newX
}
Which one can be considered as better? and why?
Thank you!
As Luis already explained in the comments, vars should be avoided whenever you can avoid them, and a simple case like yours can be designed better using something like this:
// Companion object is not necessary in your case
object Rectangle {
  def fromInt(x: Int): Option[Rectangle] = {
    if (x > 0) {
      Some(Rectangle(x))
    } else None
  }
}

final case class Rectangle(x: Int)
It would be a very rare situation in which you can't avoid using vars in Scala. The general Scala idiom is: "Make your variables immutable, unless there is a good reason not to."
I'm trying to understand what's the best practice for using vars in scala, […]
Best practice is to not use vars at all.
Which one can be considered as better? and why?
The second one is basically equivalent to what the compiler would generate for the first one anyway, so it doesn't really make sense to use the second one.
It would make sense if you wanted to give different accessibility to the setter and the getter, something like this:
class Rectangle {
  private[this] var _x = 0
  def x = _x
  private def x_=(x: Int) = _x = x
}
As you can see, I am using different accessibility for the setter and the getter, so it makes sense to write them out explicitly. Otherwise, just let the compiler generate them.
Note: I made a few other changes to the code:
- I changed the visibility of the _x backing field to private[this].
- I changed the name of the setter to x_=. This is the standard naming for setters, and it has the added advantage that it allows you to use someRectangle.x = 42 syntactic sugar to call it, making it indistinguishable from a field.
- I added some whitespace to give the code room to breathe.
- I removed some return type annotations. (This one is controversial.) The community standard is to always annotate your return types in public interfaces, but in my opinion, you can leave them out if they are trivial. It doesn't really take much mental effort to figure out that 0 has type Int.
Note that your first version can also be simplified:
class Rectangle(var x: Int = 0)
However, as mentioned in other answers, you really should make your objects immutable. It is easy to create a simple immutable data object with all the convenience functions generated automatically for you by using a case class:
final case class Rectangle(x: Int = 0)
If you now want to "change" your rectangle, you instead create a new one which has all the properties the same except x (in this case, x is the only property, but there could be more). To do this, Scala generates a nifty copy method for you:
val smallRectangle = Rectangle(3)
val enlargedRectangle = smallRectangle.copy(x = 10)

Why I cannot extend a scipy rv_discrete class successfully?

I'm trying to extend the scipy rv_discrete class, since extending a class is supposed to work in any case.
I just want to add a couple of instance attributes.
from scipy.stats import rv_discrete

class Distribution(rv_discrete):
    def __init__(self, realization):
        self._realization = realization
        self.num = len(realization)
        # stuff to obtain random alphabet and probabilities from realization
        super().__init__(values=(alphabet, probabilities))
This should allow me to do something like this:
realization = ...  # some values
dist = Distribution(realization)
print(dist.mean())
Instead, I receive this error
ValueError: rv_discrete.__init__(..., values != None, ...)
If I simply create a new rv_discrete object as in the following line of code
dist = rv_discrete(values=(alphabet,probabilities))
It works just fine.
Any idea why? Thank you for your help.

Scala macros - storing global state?

Is it possible for a macro implementation to maintain some form of global state (during the entire compilation run)? Specifically, I want to create a separate instance of IMain, but I don't want to create it anew in every macro expansion, so I would like to have a form of lazy val, ThreadLocal or anything where I can cache that instance. For simplicity, just imagine I want to share an object during compilation between all expansions of the same macro:
object Foo {
  def next: Int = macro ???
}

trait Test {
  val a = Foo.next
  val b = Foo.next
  val c = Foo.next
  assert(a == 1 && b == 2 && c == 3)
}
Since in the actual case, the state is quite complex and not serializable, reading and writing to disk is not an option.
I can't seem to see any way to achieve that through the only context provided, scala.reflect.macros.blackbox.Context. Does that mean I have to write a full-fledged compiler plugin? Can I trick sbt into giving me some object I can write to?
Use an object and a var. As long as the file it's in doesn't use your macros, it should work. I'm not sure whether it's guaranteed anywhere that scalac will keep the state of macros between compilation units, but this seems to work in your case.
object Foo {
  def next: Int = macro next_impl

  var state = ???

  def next_impl(...): ...
}

Map an instance using function in Scala

Say I have a local method/function
def withExclamation(string: String) = string + "!"
Is there a way in Scala to transform an instance by supplying this method? Say I want to append an exclamation mark to a string. Something like:
val greeting = "Hello"
val loudGreeting = greeting.applyFunction(withExclamation) //result: "Hello!"
I would like to be able to invoke (local) functions when writing a chain transformation on an instance.
EDIT: Multiple answers show how to program this possibility, so it seems that this feature is not present on an arbitrary class. To me this feature seems incredibly powerful. Consider a case where, in Java, I want to execute a number of operations on a String:
appendExclamationMark(" Hello ".trim().toUpperCase()); // "HELLO!"
The order of operations is not the same as how they read. The last operation, appendExclamationMark, is the first word that appears. Currently in Java I would sometimes do:
Function.<String>identity()
        .andThen(String::trim)
        .andThen(String::toUpperCase)
        .andThen(this::appendExclamationMark)
        .apply(" Hello "); // "HELLO!"
Which reads better in terms of expressing a chain of operations on an instance, but also contains a lot of noise, and it is not intuitive to have the String instance at the last line. I would want to write:
" Hello "
.applyFunction(String::trim)
.applyFunction(String::toUpperCase)
.applyFunction(this::withExclamation); //"HELLO!"
Obviously the name of the applyFunction function can be anything (shorter please). I thought backwards compatibility was the sole reason Java's Object does not have this.
Is there any technical reason why this was not added on, say, the Any or AnyRef classes?
You can do this with an implicit class which provides a way to extend an existing type with your own methods:
object StringOps {
  implicit class RichString(val s: String) extends AnyVal {
    def withExclamation: String = s"$s!"
  }

  def main(args: Array[String]): Unit = {
    val m = "hello"
    println(m.withExclamation)
  }
}
Yields:
hello!
If you want to apply any functions (anonymous, converted from methods, etc.) in this way, you can use a variation on Yuval Itzchakov's answer:
object Combinators {
  implicit class Combinators[A](val x: A) {
    def applyFunction[B](f: A => B) = f(x)
  }
}
A while after asking this question, I noticed that Kotlin has this built in:
inline fun <T, R> T.let(block: (T) -> R): R
Calls the specified function block with this value as its argument and returns its result.
Many more quite useful variations of this function are provided on all types, like with, also, apply, etc.

Modifying value returned from a method in Scala (Highcharts lib)

I am using an external library in Scala, which uses a set of traits to pass around complex configuration options to different methods. This is Highcharts Scala API, but the problem seems to be more general.
The library defines a trait (HighchartsOptions in the actual usage), which is just a data transfer object that stores a number of fields and allows them to be passed around. Code simplified and generalized for clarity looks like this:
trait Opts {
  def option1: Int = 3
  def option2: String = "abc"
  // Many more follow, often of more complex types
}
As long as the complete set of options can be generated in one place, this allows for a neat syntax:
val opts = new Opts() {
  override val option1 = 5
  // And so on for more fields
}
doSomething(opts)
However, there are a few situations where one piece of code prepares such a configuration but another piece of code needs to adjust just one option extra. It would be nice to be able to pass some Opts instance to a method and let the method modify a value or two.
Since the original trait is based on defs rather than vars, it's easy to override an option's value only if the type of the object is known, like in the example above. If a method receives only an instance of some anonymous subclass of Opts, how can it create another instance or modify the received one so that a call to e.g. option2 could return a different value? The desired operation is similar to what Mockito's spy does, however I feel there should be some less contrived way than using a mocking framework to achieve this effect.
PS: Actually I am a bit surprised by the use of such an interface by the library's authors, so perhaps I'm missing something and there is some completely different way of achieving my goal of building a single set of options from several different places in the code (e.g. some builder object that is mutable and that I can pass around instead of the finished HighchartsOptions)?
I would first check whether using the Opts trait (solely) is an absolute necessity. Hopefully it's not, and then you can just extend the trait, overriding defs with vars, like you said.
When Opts is mandatory and you have its instance that you want to copy modifying some fields, here's what you could do:
Write a wrapper for Opts, which extends Opts, but delegates every call to the wrapped Opts excluding the fields that you want to modify. Set those fields to values you want.
Writing the wrapper for a broad-interface trait can be a boring task, so you may consider using http://www.warski.org/blog/2013/09/automatic-generation-of-delegate-methods-with-macro-annotations/ to let macros generate most of it automatically.
The shortest, simplest way.
Define a case class:
case class Options(
  override val option1: Int,
  override val option2: String
  /* ... */
) extends Opts
and implicit conversion from Opts to your Options
object OptsConverter {
  implicit def toOptions(opts: Opts) = Options(
    option1 = opts.option1,
    option2 = opts.option2
    /* ... */
  )
}
That way you get all copy methods (generated by compiler) for free.
You can use it like that:
import OptsConverter.toOptions

def usage(opts: Opts) = {
  val improvedOpts = opts.copy(option2 = "improved")
  /* ... */
}
Note, that Options extends Opts, so you can use it whenever Opts is required. You'll be able to call copy to obtain a modified instance of Opts in every place where you import the implicit conversion.
The simplest solution is to allow the trait to define its own "copy" method, and allow its subclasses (or even the base class) to work with that. However, the parameters can really only match the base class unless you recast it later. Incidentally, this doesn't work as "mixed in", so your root might as well be an abstract class, but it works the same way. The point of this is that the subclass type keeps getting passed along as it's copied.
(Sorry I typed this without a compiler so it may need some work)
trait A {
  type myType <: A

  def option1: Int
  def option2: String

  def copyA(option1_ : Int = option1, option2_ : String = option2): myType = new A {
    def option1 = option1_
    def option2 = option2_
  }
}

trait B extends A { me =>
  type myType = B

  def option3: Double

  // callable from A but properly returns B
  override def copyA(option1_ : Int = option1, option2_ : String = option2): myType = new B {
    def option1 = option1_
    def option2 = option2_
    def option3 = me.option3
  }

  // this is only callable if you've cast to type B
  def copyB(option1_ : Int = option1, option2_ : String = option2, option3_ : Double = option3): myType = new B {
    def option1 = option1_
    def option2 = option2_
    def option3 = option3_
  }
}