EnumType.STRING ignored by Ebean and Play 2 - JPA

I have a field definition that is set as an enumeration with EnumType.STRING.
Typically, this works nicely, but on two occasions, it has ignored the EnumType attribute and used the ordinal value for the enumeration.
My declaration looks like this:
@Basic(optional = true)
@Enumerated(EnumType.STRING)
public StationFormat stationFormat;
I've tried:
Changing the name of the field (it still creates it as an ordinal)
Doing a clean compile (it still uses the ordinal value)
Adding a second field on the same class (it still uses the ordinal value)
What the heck? I had this happen before, and at some point it magically resolved itself.
-John

I found a solution to this problem, though I believe the underlying issue is a bug.
To work around it, add the same enum to a DIFFERENT model class. It does not matter which one, and you can delete it immediately afterwards. It will be added correctly to the new class, and the existing class will be amended to use the name() value rather than the ordinal.
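For reference, a minimal sketch of what that workaround might look like in a Play 2.x / Ebean model. The class, enum and field names here (Station, SomeOtherModel, temporaryWorkaround) are hypothetical, not from the original post, and each top-level type would normally live in its own file under app/models:

import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.EnumType;
import javax.persistence.Enumerated;
import javax.persistence.Id;
import play.db.ebean.Model;

// The enum in question (values are made up for this sketch).
public enum StationFormat { ANALOG, DIGITAL }

// Existing model whose enum column keeps being generated as an ordinal.
@Entity
public class Station extends Model {
    @Id
    public Long id;

    @Basic(optional = true)
    @Enumerated(EnumType.STRING)
    public StationFormat stationFormat;
}

// Workaround: declare the same enum, also as EnumType.STRING, on ANY other
// model class, regenerate the DDL/evolution once, then delete this field.
// Per the answer above, Ebean then maps the enum by name() on the existing
// class as well.
@Entity
public class SomeOtherModel extends Model {
    @Id
    public Long id;

    @Enumerated(EnumType.STRING)
    public StationFormat temporaryWorkaround;
}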

What are the functional differences between Core Data's codegen 'manual/none + create NSManagedObject subclass' vs. 'category/extension'?

I've read Subclassing NSManagedObject with swift 3 and Xcode 8 beta and this great tutorial. I still have questions on some points.
The similarities are:
I can customize both classes however I like.
I can add new attributes or remove or rename attributes, i.e. for Category/Extension it will get updated upon a new build (in derived data), and in the case of Manual/None it will leave the class file intact and update the extension in the file navigator, i.e. I won't end up with a duplicate file. This is all handled by Xcode because the properties are marked with @NSManaged.
Dumping something like @NSManaged public var name: String? straight into an existing NSManagedObject subclass is not allowed. I tried to do entity.name = "John" but I got the following error: reason: '-[SomeEntity setName:]: unrecognized selector sent to instance 0x60400009b120'. I believe that's reasonable; I think that without using the Core Data model editor the setter/getter accessor methods are not created.
The differences are:
For Category/Extension you just need to create the class yourself and add any extra functions/properties you need.
For Category/Extension the attributes are created in derived data, which is enough because you never need to see that file; its existence is enough to get things working.

And specifically in the context of making changes to your NSManaged properties:
Changing a property type, e.g. NSDate to Date, is allowed only for Manual/None. Example here
Changing the optionality of a type, e.g. String? to String, is allowed only for Manual/None. Example here
Changing a property's access level, e.g. from public to private, is allowed only for Manual/None. Example here
That said, there is a significant difference if I choose Manual/None codegen but don't select 'create NSManagedObject subclass'. In that case I have to start writing all the code myself (subclass NSManagedObject and write @NSManaged for every property)... or, if I don't write all that code myself, I can still access/set fields using KVC, which is awkward!
In a nutshell, I'm just trying to figure out the full extent of the capabilities that I can get from using Manual/None.
Question: aside from the 9 notes, which I need to know whether I have validated correctly, an important question would be: how does changing NSDate to Date, or optional to non-optional, not break the mapping between my NSManagedObject class and my object graph, while changing an NSDate property to String does break it? Does this have something to do with types that have guaranteed casting between Swift and Objective-C, i.e. things that can be cast with as, without ? or !?
To address each of your notes and considering the cases where codegen is set to Manual/None and Category/Extension:
Yes, in either case you can customise the classes however you like (within limits - for example, the class must be a subclass - directly or indirectly - of NSManagedObject).
Correct. You can add, amend or delete attributes in the model editor. In the Category/Extension case, the relevant changes will be made automatically. In the Manual/None case, you can either manually update the Extension (or the class file) or you can redo "create NSManagedObject subclass", which will update the Extension with the amended attribute details. If you do not do this, Xcode will not recognise the new attribute details and will not provide code completion for them (nor will it compile successfully if you try to override code completion). But contrary to what you think, this has nothing to do with the properties being marked as @NSManaged.
Correct. Adding an @NSManaged property to the class definition (or Extension) is enough to tell Xcode that the property exists (so you can reference it in code) but does not create the corresponding getter/setter. So your code will crash.
Yes, for Category/Extension just create and tailor the class file as you require.
Yes, for Category/Extension the properties are declared in the automatically created Extension file in Derived Data.
Changing the property definition in any way - from Date to NSDate, or marking it private, or whatever - can only be done in the Manual/None case because the Extension file in Derived Data is overwritten with each new build so any changes are lost.
Ditto
Ditto
Correct. You could write your app without ever creating separate NSManagedObject subclasses (automatically or manually), if you use KVC to access the properties.
As to your final point: you cannot arbitrarily change the type of the property definition: the type specified in the model editor must correspond to the type specified in the property definition. You can switch between optional and non-optional versions of the same type, and you can switch between Date and NSDate, etc., but switching from Date to String will not work. I suspect you are correct that this is due to the bridging between a Swift value type and the corresponding Objective-C reference type using as. See here.

Hierarchy in Python 3.4 [duplicate]

I know, there are no 'real' private/protected methods in Python. This approach isn't meant to hide anything; I just want to understand what Python does.
class Parent(object):
    def _protected(self):
        pass

    def __private(self):
        pass

class Child(Parent):
    def foo(self):
        self._protected()  # This works

    def bar(self):
        self.__private()   # This doesn't work, I get an AttributeError:
                           # 'Child' object has no attribute '_Child__private'
So, does this behaviour mean that 'protected' methods will be inherited but 'private' methods won't be at all?
Or did I miss something?
Python has no privacy model; there are no access modifiers like in C++, C# or Java. There are no truly 'protected' or 'private' attributes.
Names with a leading double underscore and no trailing double underscore are mangled to protect them from clashes when inherited. Subclasses can define their own __private() method and these will not interfere with the same name on the parent class. Such names are considered class private; they are still accessible from outside the class but are far less likely to accidentally clash.
Mangling is done by prepending any such name with an extra underscore and the class name (regardless of how the name is used or if it exists), effectively giving them a namespace. In the Parent class, any __private identifier is replaced (at compilation time) by the name _Parent__private, while in the Child class the identifier is replaced by _Child__private, everywhere in the class definition.
The following will work:
class Child(Parent):
    def foo(self):
        self._protected()

    def bar(self):
        self._Parent__private()
See Reserved classes of identifiers in the lexical analysis documentation:
__*
Class-private names. Names in this category, when used within the context of a class definition, are re-written to use a mangled form to help avoid name clashes between “private” attributes of base and derived classes.
and the referenced documentation on names:
Private name mangling: When an identifier that textually occurs in a class definition begins with two or more underscore characters and does not end in two or more underscores, it is considered a private name of that class. Private names are transformed to a longer form before code is generated for them. The transformation inserts the class name, with leading underscores removed and a single underscore inserted, in front of the name. For example, the identifier __spam occurring in a class named Ham will be transformed to _Ham__spam. This transformation is independent of the syntactical context in which the identifier is used.
Don't use class-private names unless you specifically want to avoid having to tell developers that want to subclass your class that they can't use certain names or risk breaking your class. Outside of published frameworks and libraries, there is little use for this feature.
The PEP 8 Python Style Guide has this to say about private name mangling:
If your class is intended to be subclassed, and you have attributes
that you do not want subclasses to use, consider naming them with
double leading underscores and no trailing underscores. This invokes
Python's name mangling algorithm, where the name of the class is
mangled into the attribute name. This helps avoid attribute name
collisions should subclasses inadvertently contain attributes with the
same name.
Note 1: Note that only the simple class name is used in the mangled
name, so if a subclass chooses both the same class name and attribute
name, you can still get name collisions.
Note 2: Name mangling can make certain uses, such as debugging and
__getattr__(), less convenient. However the name mangling algorithm
is well documented and easy to perform manually.
Note 3: Not everyone likes name mangling. Try to balance the need to
avoid accidental name clashes with potential use by advanced callers.
An attribute with a double leading underscore is changed to _ClassName__method_name, which makes it more private than the semantic privacy implied by _method_name.
You can technically still get at it if you'd really like to, but presumably no one is going to do that, so for maintenance of code abstraction reasons, the method might as well be private at that point.
class Parent(object):
    def _protected(self):
        pass

    def __private(self):
        print("Is it really private?")

class Child(Parent):
    def foo(self):
        self._protected()

    def bar(self):
        self.__private()  # would raise AttributeError: this mangles to _Child__private

c = Child()
c._Parent__private()  # prints "Is it really private?"
This has the additional upside (or some would say primary upside) of allowing a method to not collide with child class method names.
By declaring your data member private:
__private()
you simply can't access it from outside the class
Python supports a technique called name mangling.
This feature turns a class member prefixed with two underscores into:
_ClassName__memberName
If you want to access it from Child you can use: self._Parent__private()
PEP 8 also says:
Use one leading underscore only for non-public methods and instance
variables.
To avoid name clashes with subclasses, use two leading underscores to
invoke Python's name mangling rules.
Python mangles these names with the class name: if class Foo has an
attribute named __a, it cannot be accessed by Foo.__a. (An insistent
user could still gain access by calling Foo._Foo__a.) Generally,
double leading underscores should be used only to avoid name conflicts
with attributes in classes designed to be subclassed.
You should stay away from _such_methods too, by convention; I mean, you should treat them as private.
Although this is an old question, I encountered it and found a nice workaround.
Suppose you used name mangling on the parent class because you wanted to mimic a protected function, but you still want to access the function easily from the child class. Inside the child's __init__ you can rebind the mangled names:
parent_class_private_func_list = [func for func in dir(Child) if func.startswith('_Parent__')]
for parent_private_func in parent_class_private_func_list:
    # Rebind e.g. _Parent__private as _Child__private so that self.__private()
    # resolves inside Child's own methods.
    setattr(self, parent_private_func.replace("_Parent__", "_Child__"),
            getattr(self, parent_private_func))
The idea is to manually rewrite the parent's mangled function names into names that fit the current class's namespace. After adding this in the __init__ of the child class, you can call the function in an easy manner:
self.__private()
AFAIK, in the second case Python performs "name mangling", so the name of the __private method of the parent class is really:
_Parent__private
and you cannot use it in the child class under its original name either.

Intersystems Cache - Correct syntax for %ListOfObjects

The documentation says this is allowed:
ClassMethod GetContacts() As %ListOfObjects(ELEMENTTYPE="ContactDB.Contact")
[WebMethod]
I want to do this:
Property Permissions As %ListOfObjects(ELEMENTTYPE="MyPackage.MyClass");
I get an error:
ERROR #5480: Property parameter not declared:
MyPackage.Myclass:ELEMENTTYPE
So, do I really have to create a new class and set the ELEMENTTYPE parameter in it for each list I need?
The correct syntax for %ListOfObjects in properties is this one:
Property Permissions As list of MyPackage.MyClass;
Yes, a property does sometimes work differently from a method when it comes to types. That is the issue here: you can set a class parameter on the return type of a method declaration in a straightforward way, but that doesn't always work for class parameters on the type of a property.
I don't think the way it does work is documented completely, but here are some of my observations:
You can put in class parameters on a property if the type of the property is a data-type (which are often treated differently than objects).
If you look at the %XML.Adaptor class it has the keyword assignment statement
PropertyClass = %XML.PropertyParameters
This appears to add its parameters to all the properties of the class that declares it as its PropertyClass. It looks like an example of Intersystems wanting to implement something (an XML adaptor), realizing the object implementation didn't provide it cleanly, and hacking something new into the class compiler. I can't really find much documentation, so it isn't clear whether it's considered a usable API or an implementation detail subject to breakage.
You might be able to hack something this way - I've never tried anything similar.
A possibly simpler work around might be to initialize the Permissions property in %OnNew and %OnOpen. You will probably want a zero element array at that point anyway, rather than a null.
If you look at the implementation of %ListOfObjects you can see that the class parameter which you are trying to set simply provides a default value for the ElementType property. So after you create an instance of %ListOfObjects you could just set its ElementType property to the proper element type.
This is a bit annoying, because you have to remember to do it every time by hand, and you might forget. Or a maintainer down the road might not know to do it.
You might hope to maybe make it a little less annoying by creating a generator method that initializes all your properties that need it. This would be easy if Intersystems had some decent system of annotating properties with arbitrary values (so you could know what ElementType to use for each property). But they don't, so you would have to do something like roll your own annotations using an XData block or a class method. This probably isn't worth it unless you have more use cases for annotations than just this one, so I would just do it by hand until that happens, if it ever does.

Morphia annotation

I am using MongoDB with Java and also Morphia.
For my use case I get the collection name at run time. So I have an enum of collection names, and based on some value I get the corresponding collection name from the enum. My entity annotation is as follows:
@Entity(EnumName.getCollectionName())
But I get the following error:
"The value for annotation attribute Entity.value must be a constant expression"
I am actually returning a constant expression only. Could anyone let me know what the issue is?
You can't use something dynamic within annotations, as those are compile-time features which can't be changed afterwards. So you can only use constants declared there, enums and classes. A smart compiler might in theory be able to figure out that you are passing something that can never change, but most won't and will simply raise an error as soon as they see that you're trying to assign a method's return value to an annotation property.
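To illustrate the distinction, here is a rough sketch; the class and constant names are made up, and the @Entity import path depends on your Morphia version:

import org.mongodb.morphia.annotations.Entity;

public class CollectionNames {
    // A compile-time constant: static final, initialised with a literal.
    public static final String CONTACTS = "contacts";
}

// OK: the compiler can fold the constant into the annotation at compile time.
@Entity(CollectionNames.CONTACTS)
class Contact {
}

// NOT OK: a method call is only evaluated at runtime, so the compiler rejects it
// with "The value for annotation attribute Entity.value must be a constant expression".
// @Entity(EnumName.getCollectionName())
// class BrokenContact {
// }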
I don't really understand what you're trying to do, but it looks like you are trying to use one "generic" entity class for several concrete entities. I think this is really bad design.
If you can tell more details, we may be able to give you a proper solution for your problem.
If you simply don't know which class you have to operate on until runtime, try this:
Declare your concrete entities and fill your enum with those classes. At runtime you can call Datastore.find() with the class stored in your enum constant, and Morphia will query the appropriate class (see the sketch below).
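A rough sketch of that idea, assuming Morphia's Datastore.find(Class) API; the entity and enum names are hypothetical, and the import paths depend on your Morphia version:

import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;

@Entity("users")
class UserDoc {
    @Id
    ObjectId id;
    String name;
}

@Entity("orders")
class OrderDoc {
    @Id
    ObjectId id;
    double total;
}

// The enum carries the concrete entity class instead of a raw collection name.
enum CollectionKind {
    USERS(UserDoc.class),
    ORDERS(OrderDoc.class);

    private final Class<?> entityClass;

    CollectionKind(Class<?> entityClass) {
        this.entityClass = entityClass;
    }

    Class<?> entityClass() {
        return entityClass;
    }
}

class Repository {
    private final Datastore datastore;

    Repository(Datastore datastore) {
        this.datastore = datastore;
    }

    // Pick the collection at runtime via the enum; the annotation values themselves
    // stay compile-time constants on each entity class.
    Object findFirst(CollectionKind kind) {
        return datastore.find(kind.entityClass()).get();
    }
}

This keeps each @Entity value constant while still letting the calling code choose the collection at runtime.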

What is exactly the point of auto-generating getters/setters for object fields in Scala?

As we know, Scala automatically generates getters and setters for any public field and makes the actual field private. Why is this better than just making the field public?
For one, this allows swapping a public var/val for a (couple of) def(s) while maintaining binary compatibility. Secondly, it allows overriding a var/val in derived classes.
First, keeping the field public allows a client to read and write the field. Since it's beneficial to have immutable objects, I'd recommend making the field read-only (which you can achieve in Scala by declaring it as a "val" rather than a "var").
Now back to your actual question. Scala allows you to define your own setters and getters if you need more than the trivial versions. This is useful to maintain invariants. For setters you might want to check the value the field is set to. If you keep the field itself public, you have no chance to do so.
This is also useful for fields declared as "val". Assume you have a field of type Array[X] to represent the internal state of your class. A client could get a reference to this array and modify it; again, you have no chance to ensure the invariant is maintained. But since you can define your own getter, you can return a copy of the actual array.
The same argument applies when you make a field of a reference type "final public" in Java: clients can't reset the reference but can still modify the object the reference points to, as the sketch below illustrates.
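To make the Java comparison concrete, here is a small sketch (the class names are made up):

import java.util.Arrays;

class Scores {
    // 'final' only prevents reassigning the reference, not mutating the array.
    public final int[] values = {1, 2, 3};
}

class FinalFieldDemo {
    public static void main(String[] args) {
        Scores s = new Scores();
        // s.values = new int[] {9};   // does not compile: the reference is final
        s.values[0] = 99;              // compiles and runs: the array itself is still mutable
        System.out.println(Arrays.toString(s.values));  // prints [99, 2, 3]
    }
}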
On a related note: accessing a field via a getter in Scala looks like accessing the field directly. The nice thing about this is that it makes accessing a field and calling a parameterless method on the object look like the same thing. So if you decide you don't want to store a value in a field anymore but calculate it on the fly, the client does not have to care, because it looks the same to them; this is known as the Uniform Access Principle.
In short: the Uniform Access Principle.
You can use a val to implement an abstract method from a superclass. Imagine the following definition from some imaginary graphics package:
abstract class Circle {
  def bounds: Rectangle
  def centre: Point
  def radius: Double
}
There are two possible subclasses, one where the circle is defined in terms of a bounding box, and one where it's defined in terms of the centre and radius. Thanks to the UAP, details of the implementation can be completely abstracted away, and easily changed.
There's also a third possibility: lazy vals. These would be very useful to avoid recalculating the bounds of our circle again and again, but it's hard to imagine how lazy vals could be implemented without the uniform access principle.