What is a good naming convention for Unity? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I'm pretty much a noob with Unity. As a C++ programmer, the naming conventions in Unity bother me a little, and having OCD on top of that makes me go crazy ;)
Objects have, say, a property Transform, which in turn has a property Position.
But these properties must be accessed in code as transform.position, in lower case. This is not very intuitive to me, so I wonder how I should look at it in order to more easily avoid complications, and what conventions I should use to be able to tell everything apart with a quick look at the variables.

The Unity convention is actually rather simple: types (classes, structs, enums) and methods start with an upper-case letter (PascalCase), fields and properties with a lower-case letter (camelCase). Enum values are upper-cased, constants lower-cased (usually).
So ClassName, MethodName, myField, myProperty { get; set; }, MyEnum.CaseA... that's it.
As for your example, Transform is a class, whereas transform is an accessor to the instance of Transform in that particular GameObject/Component.
Also, Transform doesn't have a Position property, it has a position property (always lower-case).
This is more or less based on C#'s conventions and the standard .NET library (MS has very precise guidelines about it), except standard .NET uses UpperCase for public/protected methods AND properties, and lower-case for private (again, usually; what's private is more left to the taste of the coder I think).
As a side-note, with any codebase, in any language, the best way is ALWAYS to follow the existing convention. Every seasoned programmer will tell you this. I understand about OCD, believe me, but in this case I suggest you let it go.
There are very few objective arguments as to why one convention would be better than another (by definition a convention is arbitrary), and even if there were, the absolute worst thing you can do is mix several conventions, because then you have no convention at all and never know what to expect.
At least C# tries to standardize; I've worked on several C++ codebases and I fail to see a common denominator: UpperCaseClassNames, lowerCaseClassNames, underscore_separated, tClassName, ENUMS_IN_UPPER, or not... it's rarely consistent, so the less you mix the better.

Related

Is this a good alternative to Moose Perl? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I have been searching for an alternative to Moose (modern object-oriented Perl), because Moose is slow. I have seen several posts about this issue, and I don't want that.
Example from the same creator: https://www.youtube.com/watch?v=ugEry1UWg84&feature=youtu.be&t=260
So I found this alternative from the same creator of Moose:
https://metacpan.org/pod/MOP#DESCRIPTION
MOP - A Meta Object Protocol for Perl 5
This module implements a Meta Object Protocol for Perl 5 with minimal overhead and no non-core dependencies (eventually).
It works with UNIVERSAL::Object:
https://metacpan.org/pod/UNIVERSAL::Object
Is this a good choice as an alternative to Moose? Has anyone tested this software?
Related post:
https://www.perlmonks.org/?node_id=1220917
Thanks.
Note: I forgot to mention that I know about Moo, Mouse, etc. Maybe something better exists?
MOP is very low-level; Moxie is based on it, but it's still a proof of concept.
There are faster and lighter alternatives that have been tested in production: Moo and Mouse.
In which context do you use Moose and find it slow? There is of course an overhead involved, but most of it happens at startup time (compilation); then, at runtime, most features are cheap (as long as you make your classes immutable), as explained in the documentation. Over time, Moose has become the de facto standard for object-oriented programming in Perl, and it has a very, very wide ecosystem (a search on MooseX on metacpan returns 820 results). Don't give up on it too early.
If you really need faster startup time (as in a vanilla CGI environment, for example), the most relevant alternative to Moose is Moo, Minimal Object Orientation. It is really lightweight and has no XS dependency, while implementing a significant subset of Moose (also, its syntax is fully compatible with Moose, so you can upgrade to Moose any time later if you find some piece of functionality missing in Moo). It also has a rich ecosystem.

Swift strictness [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Well, it says on their website that Swift is a strict language. However, I am not sure in what ways it is considered to be strict. Can you please elaborate on that?
Statements about the nature of Swift are often expressed in terms meaningful to people accustomed to the previous language, Objective-C. So in this case, the statement that Swift is "strict" typically refers to how things like variables are typed. But unless you have used another language like Objective-C or Ruby that is not strict about typing, you probably won't appreciate the difference.
For example, in Objective-C, programmers often use "dynamic typing", where a variable is typed as id and you can assign to it a value of any type, even different types at different times — now an NSString, now an NSNumber, now a UIView. But in Swift you can't do that; once we've established that this variable is a String, its value can only ever be a String.
Similarly, in Objective-C, NSArray is just "a collection of objects" of any old type. But in Swift, an Array is a collection of just one type of object and you have to say in advance exactly what type it is.

What is the primary technical challenge that Scala's implicit solves? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
While learning Scala, I found the concept of implicit difficult to rationalize. It allows one to pass values implicitly, without explicitly mentioning them.
What is its reason for being, and what problem does it seek to solve?
At its heart, implicit is a way of extending the behavior of values of a type in a way that's fully controllable on a local level in your program, and external to the original code that defines those values. It's one approach to solving the expression problem.
It lets you keep your core classes focused on their most fundamental structure and behavior, and factor out higher-level behaviors. It's used to achieve ad hoc polymorphism, where two formally unrelated data types can be seamlessly adapted to the same interface, so that they can be treated as instances of the same type.
For example, rather than your data model classes containing JSON serialization behavior, you can store that behavior elsewhere and implicitly augment an object with the ability to serialize itself. This amounts to defining an implicit instance, which specifies how your object can be viewed as "JSON serializable" rather than as its original type, and it's done without editing the object's real type.
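To make that concrete, here is a minimal sketch of the idea in Scala 2 syntax; the names (JsonSerializable, User, Json.serialize) are hypothetical, made up for illustration rather than taken from any particular library:

// Typeclass: describes how a value of type A can be rendered as JSON.
trait JsonSerializable[A] {
  def toJson(a: A): String
}

// A plain data class that knows nothing about JSON.
final case class User(name: String, age: Int)

object JsonInstances {
  // The implicit instance lives outside User and can be swapped per scope.
  implicit val userJson: JsonSerializable[User] = new JsonSerializable[User] {
    def toJson(u: User): String = s"""{"name":"${u.name}","age":${u.age}}"""
  }
}

object Json {
  // The capability is requested as an implicit parameter instead of being
  // hard-wired into the data model.
  def serialize[A](a: A)(implicit ser: JsonSerializable[A]): String = ser.toJson(a)
}

object Demo extends App {
  import JsonInstances._                    // select the instance by bringing it into scope
  println(Json.serialize(User("Ada", 36)))  // {"name":"Ada","age":36}
}

The data model stays untouched; the knowledge of how to become JSON is attached from the outside and chosen by the import, and you can always bypass the mechanism by passing an instance explicitly, e.g. Json.serialize(user)(JsonInstances.userJson).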
There are several forms of implicit, which are pretty thoroughly covered elsewhere. Use cases include the enhance-my-library pattern, the typeclass pattern, implicit conversions, and dependency injection.
What's really interesting to me, in the context of this question, is how this differs from approaches in other languages.
Enhance-my-library and typeclasses
In many other languages, you accomplish this by monkey patching (typically where there is no type checking) or extension methods. These approaches have the downside of composing unpredictably and applying globally. In statically typed languages without a way of opening classes, you usually have to make explicit adapters. This has the downside of a lot of boilerplate. In both static and dynamic languages, you may also be able to use reflection, but usually with a lot of ceremony and complexity.
In Haskell, typeclasses exist as a first-class concept. They're global, though, so you don't get the local control over what typeclass is applied in a given situation. In Scala, you control what implicits are in scope locally, through the modules you import. And you can always opt out of implicit resolution entirely by passing parameters explicitly.
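As a hedged sketch of the enhance-my-library pattern (the names StringExtensions, RichShout, and shout are made up for illustration), note how the added method exists only where the implicit is imported:

object StringExtensions {
  // An implicit class adds methods to String without touching String itself
  // (implicit classes arrived in Scala 2.10).
  implicit class RichShout(val s: String) extends AnyVal {
    def shout: String = s.toUpperCase + "!"
  }
}

object ExtensionDemo extends App {
  import StringExtensions._   // the extension is visible only after this import
  println("hello".shout)      // prints: HELLO!
  // In a scope without that import, "hello".shout would not compile,
  // which is exactly the local control described above.
}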
People advocate for global versus local resolution of typeclasses one way or the other, depending on who you ask.
Implicit conversions
A lot of other languages have no way to accomplish this. But implicit conversions have become pretty frowned upon in Scala, so maybe that is for good reason.
There's a paper about type classes with older slides and discussion.
Being able implicitly to pass an object that encodes a type class simplifies the boilerplate.
Odersky just responded to a critique of implicits that "Scala would not be Scala if it did not have implicit parameters and classes".
That suggests they solve a challenge that is central to the design of the language. In other words, supporting type classes is not an ancillary concern.
It's a deep question, really. Implicits are something very powerful, and you can use them to write abstract code (e.g. typeclasses). I can recommend some tutorials that you may look into, and then maybe we can have a chat sometime :)
It is all about providing sensible defaults in your code.
Also, there's the magic of invoking apparently non-existent methods on objects, which just seems to work! All that good stuff is done via implicits.
But for all its power it may cause people to write some really bad code as well.
Please do watch Nick Partridge's presentation here, and I am sure that if you code along with him you will understand why and how to approach implicits.
Watch it here
Dick Wall's excellent presentation, with live coding.
Watch both parts.

Matlab function command [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Assume that I have the subfunction seen below. What is the difference between these two:
function a=b(x,y)
.
.
.
a=output
and
function b(x,y)
......
If I write it in the second form, how can I use it in my main function, and how can I see its outputs?
Another question,
I found some code here (http://www.mathworks.com/matlabcentral/fileexchange/21443-multiple-rapidly-exploring-random-tree--rrt-) that includes a function like:
%% SetObstacleFilename
function SetObstacleFilename(self,value)
    if isa(value,'char')
        self.obstacleFilename = value;
        self.GenerateObstacles();
    end
end
How can I use it in my main function? Moreover, what is the self.GenerateObstacles() command? It has no assignment (equals sign) in it?
I think I see how both of your questions are related to the same thing. You really should've asked something along the lines of:
I always saw MATLAB functions written in the form function a=b(x,y), however recently I came across some code which included functions in the form function b(x,y) (e.g. function SetObstacleFilename(self,value)).... so what's up with that?
In order to understand the 2nd type of functions, you need to consider object-oriented programming (OOP).
The code example you found is taken from within a MATLAB class. Class-related functions are known in OOP as "methods", and this specific code in another programming language would take the shape of a void-return-type function/method.
Now consider the term object that refers to an instance of a class.
Traditionally, methods are limited to a single output. For this reason, some methods are designed to operate on objects (actually pointers, AKA "passing by reference") such that returning a value is not necessary at all, because the input objects are directly manipulated. Other cases when methods don't need to return anything may include functions that have some "utility" functionality (e.g. initialize something, plot something, output something to the console etc. - just like the self.GenerateObstacles() method you mentioned).
Regarding your other questions:
The self in SetObstacleFilename(self,value) looks like an instance of the considered class.
Usually to use class methods you need to instantiate an object using a constructor (which should be a function with the same name of the class), unless these methods are static.
To conclude - above are just the most fundamental basics of OOP. I won't attempt to teach you the whole OOP Torah while standing on one leg, so I am providing some additional materials below, should you be interested to further your understanding of the topic.
Hopefully, what's going on is a bit clearer now!
Here are some resources for you:
MATLAB's OOP Manual.
MATLAB's documentation on OOP.

Which new features are (or will be) added to Scaladoc in Scala 2.10? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
Among all the various incomplete lists of features going into Scala 2.10, there are various mentions of improvements to Scaladoc. But it's unclear which ones there are, and which ones are actually going in -- e.g. one of the lists of improvements says "fixes to Scaladoc" with links to various pull requests, some of which got rejected.
Can anyone summarize what's actually changed between Scala 2.9 and 2.10 milestone 4, and maybe indicate what else is planned for 2.10 itself?
Also, are they finally going to fix the problem of not being able to link to methods? E.g. littered throughout my code I have things like this:
/**
 * Reverse the encoding computed using `encode_ngram`.
 */
def decode_ngram(ngram: String): Iterable[String] = {
  DistDocument.decode_ngram_for_counts_field(ngram)
}
where I want to refer to another method in the same class, but AFAIK there's simply no way to do it. IMO it should be something obvious like [[encode_ngram]] -- i.e. I definitely shouldn't need to give an absolute class (which would make everything break as soon as I pull out a class and stick it somewhere else), and I shouldn't need to give the parameter types if the method name itself is unambiguous (i.e. non-polymorphic).
Several new features, as well as many bugfixes, are coming, but there's no definitive list of all the fixes that are in yet. Of the more notable new features:
Implicitly added members will now be visible. A good example is to look at scala.Array, where methods like map, which you might've assumed you had, are now visible in the Scaladoc (see the short sketch after this list).
Automatically-generated SVG inheritance diagrams, for a bird's eye view of relationships between classes/traits/objects at the package-level and then also at the level of individual classes etc. For example, see the Scaladoc diagrams nightly at both the package-level (click "Content Hierarchy") as well as at the class-level.
Method-linking in some limited form should go into 2.10 (not in the nightly yet). (It's actually not totally trivial to implement in its complete form, due to practical stuff like overloading, as you noted.)
Improved use cases: a member with a use case isn't doubly generated anymore, and use cases are now a bit clearer and simpler than before.
(Less notable) Keyboard shortcuts for navigating Scaladoc have been added; they're explained here and here.
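As a small illustration of the first point (assuming Scala 2, where Predef supplies the ArrayOps wrapper; ArrayMapDemo is just an illustrative name): map is not defined on Array itself, so it is exactly the kind of implicitly added member that the 2.10 Scaladoc now lists on scala.Array:

object ArrayMapDemo extends App {
  // Array has no `map` of its own; the call compiles because Predef
  // implicitly wraps the array (ArrayOps in Scala 2), and Scaladoc 2.10
  // now shows such implicitly added members on the scala.Array page.
  val doubled: Array[Int] = Array(1, 2, 3).map(_ * 2)
  println(doubled.mkString(", "))   // prints: 2, 4, 6
}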
For a more exhaustive list of bugfixes, it might be a good idea to write to scala-internals -- there's a good chance someone there will compile a list of all major bugfixes in the past year for you.