For example, I have an abstract type for the base type, and I want to implement a type factory that dynamically creates concrete types under this abstract type, with names (and other type traits) given by user input arguments.
abstract type AbstractFoo end

function TypeFactory(typename::String, supertype::DataType)
    ...
    return ConcreteSubtype
end
The function TypeFactory takes the typename and supertype arguments and creates a concrete type whose name is given by typename and which is a subtype of supertype.
I know this kind of class factory is not too difficult to implement in Python (e.g., How can I dynamically create derived classes from a base class?). In Julia, it can be imitated by using eval(parse()) to evaluate string expressions (Is it possible to create types in Julia at runtime?). But that solution does not look to me like a true type factory in the object-oriented sense. Is it possible to have a type factory in Julia that behaves like those in OOP languages (Python, C#, etc.)?
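For reference, the Python approach I have in mind is essentially a thin wrapper around the built-in type(); a minimal sketch (AbstractFoo and make_concrete are illustrative names, not from any library):

```python
class AbstractFoo:
    pass

def make_concrete(typename, supertype, **traits):
    # type(name, bases, namespace) builds a new class object at runtime
    return type(typename, (supertype,), dict(traits))

ConcreteFoo = make_concrete("ConcreteFoo", AbstractFoo, bar=42)
print(issubclass(ConcreteFoo, AbstractFoo))  # True
print(ConcreteFoo().bar)                     # 42
```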
Edit [8 Feb 2018]:
My bad for not expressing things clearly. I'm new to Julia and only recently started coding my project in it. I knew that inheritance is not supported and did not mean to work around that in Julia.
Coming from a Python background, I had a feeling that eval() is usually meant for prototyping, not for production code. Of course Julia is different and far more efficient than pure Python, but the code given to eval() still has to be compiled at runtime (correct me if I'm wrong). And its use is sort of discouraged too, from a performance perspective (Julia, speeding up eval).
And by 'user input' I didn't mean solely command-line input. It could also be supplied in a user's config file. (That being said, @SalchiPapa's macro solution is both apt and elegant!)
Is it possible to implement a type factory in Julia without using eval()?
You could use a macro:
https://docs.julialang.org/en/stable/manual/metaprogramming/#man-macros-1
Macros provide a method to include generated code in the final body of a program. A macro maps a tuple of arguments to a returned expression, and the resulting expression is compiled directly rather than requiring a runtime eval() call.
julia> VERSION
v"0.7.0-DEV.2098"
julia> module Factory
           export @factory

           macro factory(type_name::Symbol, super_type::Symbol)
               # ...
               return quote
                   struct $type_name <: $(esc(super_type))
                       # ...
                       bar
                   end
                   return $(esc(type_name))
               end
           end
       end
Main.Factory
julia> using Main.Factory: @factory

julia> abstract type AbstractFoo end

julia> @factory ConcreteFoo AbstractFoo
ConcreteFoo
julia> foo = ConcreteFoo(42)
ConcreteFoo(42)
julia> foo.bar
42
julia> ConcreteFoo <: AbstractFoo
true
julia> supertype(ConcreteFoo)
AbstractFoo
Edit, following @Gnimuc's understanding in the comments, using input:
julia> module Factory
           export @factory

           function input(prompt::String = "")::String
               print(prompt)
               return chomp(readline())
           end

           macro factory(type_name = input("Type name: "))
               AbstractT = Symbol(:Abstract, type_name)
               ConcreteT = Symbol(:Concrete, type_name)
               return quote
                   abstract type $(esc(AbstractT)) end
                   struct $ConcreteT <: $(esc(AbstractT))
                       bar
                   end
                   return $(esc(AbstractT)), $(esc(ConcreteT))
               end
           end
       end
Main.Factory
julia> using Main.Factory: @factory

julia> @factory
Type name: Foo
(AbstractFoo, ConcreteFoo)

julia> @factory
Type name: Bar
(AbstractBar, ConcreteBar)

julia> @factory Baz
(AbstractBaz, ConcreteBaz)
julia> foo = ConcreteFoo(42)
ConcreteFoo(42)
julia> foo.bar
42
julia> ConcreteFoo <: AbstractFoo
true
julia> supertype(ConcreteFoo)
AbstractFoo
julia> @macroexpand @factory
Type name: Qux
quote
    #= REPL[1]:13 =#
    abstract type AbstractQux end
    #= REPL[1]:14 =#
    struct ConcreteQux <: AbstractQux
        #= REPL[1]:15 =#
        bar
    end
    #= REPL[1]:17 =#
    return (AbstractQux, ConcreteQux)
end
julia> eval(ans)
(AbstractQux, ConcreteQux)
With PostgreSQL, we can do something like this:
CREATE TYPE order_status AS ENUM ('placed','shipping','delivered')
According to Ecto's official docs, there is no native type that maps to Postgres' enumerated type. This module provides a custom type for enumerated structures, but it maps to an integer in the database. I could easily use that library, but I would prefer using the native enumerated type that ships with the database.
Ecto provides also a way to create custom types, but as far as I can see, the custom type must map to a native Ecto type...
Anyone knows if this can be done in a schema with Ecto? If yes, how would the migration work?
Maybe I did something wrong but I just created the type and field like this:
# creating the database type
execute("create type post_status as enum ('published', 'editing')")

# creating a table with the column
create table(:posts) do
  add :post_status, :post_status, null: false
end
and then just made the field a string:
field :post_status, :string
and it seems to work.
Small enhancement to @JustMichael's answer. If you need to rollback, you can use:
def down do
  drop table(:posts)
  execute("drop type post_status")
end
Summarizing all the bits and pieces here and there in the answers and comments. See the "Enumerated Types" in the PostgreSQL manual for more on the SQL commands used.
Ecto 3.0.0 and above
Since Ecto 3.0.0, there is Ecto.Migration.execute/2 that "Executes reversible SQL commands" therefore it can be used in change/0:
Migration
After generating a migration with mix ecto.gen.migration create_orders:
defmodule CreateOrders do
  use Ecto.Migration

  @type_name :order_status

  def change do
    execute(
      """
      CREATE TYPE #{@type_name}
      AS ENUM ('placed','shipping','delivered')
      """,
      "DROP TYPE #{@type_name}"
    )

    create table(:orders) do
      add :order_status, @type_name, null: false
      timestamps()
    end
  end
end
Schema
This is the same as under "Ecto 2.x.x and below".
Ecto 2.x.x and below
Migration
After generating a migration with mix ecto.gen.migration create_orders:
defmodule CreateOrders do
  use Ecto.Migration

  @type_name :order_status

  def up do
    execute("""
    CREATE TYPE #{@type_name}
    AS ENUM ('placed','shipping','delivered')
    """)

    create table(:orders) do
      add :order_status, @type_name, null: false
      timestamps()
    end
  end

  def down do
    drop table(:orders)
    execute("DROP TYPE #{@type_name}")
  end
end
Schema
Because the schema cannot see the database type created in the migration, use Ecto.Changeset.validate_inclusion/4 in Order.changeset/2 to ensure valid input.
defmodule Order do
  use Ecto.Schema
  import Ecto.Changeset

  schema "orders" do
    field :order_status, :string
    timestamps()
  end

  def changeset(%__MODULE__{} = order, %{} = attrs) do
    fields = [:order_status]

    order
    |> cast(attrs, fields)
    |> validate_required(fields)
    |> validate_inclusion(
      :order_status,
      ~w(placed shipping delivered)
    )
  end
end
You need to create an Ecto type for each PostgreSQL enum. In the schema definition, you simply have the type be :string. In migrations, you set the type to be the module name. This can get really tedious, though, so I have the following macro in my project that uses PostgreSQL enums:
defmodule MyDB.Enum do
  alias Postgrex.TypeInfo

  defmacro defenum(module, name, values, opts \\ []) do
    quote location: :keep do
      defmodule unquote(module) do
        @behaviour Postgrex.Extension
        @typename unquote(name)
        @values unquote(values)

        def type, do: :string
        def init(_params, opts), do: opts
        def matching(_), do: [type: @typename]
        def format(_), do: :text

        def encode(%TypeInfo{type: @typename} = typeinfo, str, args, opts) when is_atom(str),
          do: encode(typeinfo, to_string(str), args, opts)

        def encode(%TypeInfo{type: @typename}, str, _, _) when str in @values, do: to_string(str)
        def decode(%TypeInfo{type: @typename}, str, _, _), do: str
        def __values__(), do: @values

        defoverridable init: 2, matching: 1, format: 1, encode: 4, decode: 4

        unquote(Keyword.get(opts, :do, []))
      end
    end
  end
end
Possible usage:
import MyDB.Enum
defenum ColorsEnum, "colors_enum", ~w"blue red yellow"
ColorsEnum will be the module name, "colors_enum" will be the enum name internal to Postgresql: you will need to add a statement to create the enum type in your database migrations. The final argument is a list of enum values. I used a ~w sigil that will split the string by whitespace to show how concise this can be. I also added a clause that converts atom values to string values when they pass through an Ecto schema.
EctoEnum now supports Postgres' enum type: https://github.com/gjaldon/ecto_enum#using-postgress-enum-type
Adding to what @JustMichael and @swennemen have said: as of Ecto 2.2.6 we have Ecto.Migration.execute/2, which takes an up and a down argument. So we can put:

execute("create type post_status as enum ('published', 'editing')", "drop type post_status")

inside the change block of our migration file, and Ecto will be able to roll back effectively.
Let's say I create the following class in Python:
class SomeClass(object):
    variable_1 = "Something"

    def __init__(self, variable_2):
        self.variable_2 = variable_2 + self.variable_1
I do not understand why variables lying outside the method (should I say function?) definition but inside the body of the class, such as variable_1, are not written with dot notation, while variables inside the method definition are assigned (e.g. self.variable_2) or referenced (e.g. self.variable_1) with dot notation. Is this just notation specific to Python, or is there some other reason?
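To make the question concrete, here is how the two kinds of variables behave (a small sketch):

```python
class SomeClass(object):
    variable_1 = "Something"            # class attribute: lives on the class itself

    def __init__(self, variable_2):
        # instance attribute: stored on this particular object
        self.variable_2 = variable_2 + self.variable_1

obj = SomeClass("Say ")
print(obj.variable_2)        # 'Say Something' -- found on the instance
print(obj.variable_1)        # 'Something' -- not on the instance, falls back to the class
print(SomeClass.variable_1)  # 'Something' -- class attribute, no instance needed
```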
It is not something you will find in C++ or Java, and it is not only a notation. See the documentation:
classes partake of the dynamic nature of Python: they are created at
runtime, and can be modified further after creation.
And it is the same for functions.
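For instance, a rough sketch of that dynamic nature (the names are illustrative):

```python
class C(object):
    pass

# the class object exists at runtime and can be extended after creation
C.text = 'hello world'
C.g = lambda self: self.text   # attach a method after the fact

print(C().g())   # 'hello world'
```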
Here is a valid way to define a class:
def f1(self, x, y):
    return min(x, x + y)

class C:
    text = 'hello world'
    f = f1

    def g(self):
        return self.text

    h = g
f, g and h are attributes of C. To make it work, it is important to pass the object as the first argument of the functions. By convention, it is named self.
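Under those definitions, the three attributes behave the same way when called on an instance; a quick check:

```python
def f1(self, x, y):
    return min(x, x + y)

class C:
    text = 'hello world'
    f = f1            # a function defined outside the class body

    def g(self):
        return self.text

    h = g             # another name bound to the same function

c = C()
print(c.f(2, -1))   # 1 : f1 is called with self=c, returns min(2, 1)
print(c.g())        # hello world
print(c.h())        # hello world
print(C.h is C.g)   # True : h and g are the very same function object
```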
This is very powerful. It allows you to really use a function as an attribute. For example:
class B:
    text = 'hello word 2'
    g = C.g
The call to B().g() will return 'hello word 2' (in Python 3, C.g is a plain function, so it binds to whatever instance it is accessed on). It is only possible because of the use of self.
For the record, you may also read the documentation of global and of the variable scope in general.
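One scope detail worth illustrating (a small sketch): the class body is a namespace, not an enclosing scope for its methods, which is why the bare name variable_1 is not visible inside them:

```python
class SomeClass(object):
    variable_1 = "Something"

    def show(self):
        # Writing just `variable_1` here would raise NameError: the class
        # body is a namespace, not an enclosing scope for its methods.
        return self.variable_1   # attribute lookup via the instance/class

print(SomeClass().show())   # Something
```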
If I evaluate
(def ^:macro my-defn1 #'defn)
a macro named 'my-defn1' is defined, which I can use just like 'defn'.
However, if I evaluate instead
(if true
  (def ^:macro my-defn2 #'defn))
the var for 'my-defn2' doesn't have the :macro metadata set and I can't use it as a macro, even though the 'def' form is equal to the previous case.
Here is the complete code (http://cljbin.com/paste/52322ba5e4b0fa645e7f9243):
(def ^:macro my-defn1 #'defn)

(if true
  (def ^:macro my-defn2 #'defn))

(println (meta #'my-defn1)) ; => contains :macro
(println (meta #'my-defn2)) ; => doesn't contain :macro!

(my-defn1 hello1 []
  (println "hello 1"))

(hello1) ; => prints "hello 1"

(my-defn2 hello2 []     ; => CompilerException: Unable to resolve
  (println "hello 2"))  ;    symbol: hello2 in this context
What makes the behaviour different?
Clojure's def cannot really be applied conditionally. The documentation for def is, in my opinion, insufficiently strong on that point: it's not just bad style, it can cause all kinds of subtle issues.
You should only use def at the top level, or in a do or let form at the top level. Conditionally applying def results in the functionality being split into something like declare plus a subsequent conditional def, but not always in the way that you'd expect or like.
You might be better off using def at the top level here and then conditionally using alter-var-root. Or use (def my-var (if .. .. ..)). Think about why you'd ever want to have a global definition "disappear".
Is there an ability to define scopes similar to Rails ActiveRecord way?
http://datamapper.org/docs/find.html
class Zoo
  # all the keys and property setup here
  def self.open
    all(:open => true)
  end

  def self.big
    all(:animal_count => 1000)
  end
end

big_open_zoos = Zoo.big.open
So there is nothing special about scopes: it's just Ruby.