Is the pods file structure the way to go in Ember 2.x apps? - ember-cli

Is the pod structure recommended over the traditional way of organizing files in Ember projects using version 2.x?

Ember pods are a way of structuring your project by feature instead of by type. Instead of having a directory structure split into several types (controllers, models, templates...), everything is grouped around a feature (comments, posts...).
So it's your decision to go that way. As your application grows you'll be able to easily find the route, model and template for each feature without having to look through a directory with a long list of files.
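For illustration, here is roughly how a posts feature lands in each layout (a sketch; exact paths depend on your ember-cli version and your podModulePrefix setting):
    app/routes/posts.js      ->  app/posts/route.js
    app/controllers/posts.js ->  app/posts/controller.js
    app/templates/posts.hbs  ->  app/posts/template.hbs
    app/models/post.js       ->  app/post/model.js
You can also pass --pod to the generators (for example, ember generate route posts --pod) to create new files in the pod layout.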

Related

Can I structure my Moodle plugin in directories?

I am creating a new Moodle plugin and have ended up with more than 180 files. I think it would be better to reorganize them into a directory structure. Is there any rule or best practice for doing this?
It depends on the type of plugin you are building. Moodle only enforces the folder structure for certain components, such as strings, db and templates. The documentation is somewhat incomplete on the subject: https://docs.moodle.org/dev/Tutorial#Let.27s_make_a_plugin
You should take a look at core plugins that have recently been modernized with the introduction of Mustache templates. The mod_forum plugin should be a suitable example of how to structure a large plugin (classes, libs, templates, etc.).
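As a rough sketch, a hypothetical local plugin named local_myplugin might look like this (only the locations Moodle enforces are marked as such; the rest is convention):
    local_myplugin/
        version.php                  (required: plugin version and dependencies)
        lang/en/local_myplugin.php   (enforced location for language strings)
        db/                          (enforced: install.xml, access.php, upgrade.php)
        classes/                     (autoloaded classes; sub-folders map to namespaces)
        templates/                   (enforced location for Mustache templates)
        lib.php                      (callbacks and hooks)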

How to create several Flash applications sharing a common codebase in FlashDevelop/ActionScript 3.0?

Situation:
I need several swf/exe output files compiled in FlashDevelop from several projects. More than 60% of the ActionScript 3.0 source is common to all projects; the rest is project-specific. How can I organize that in FlashDevelop? I want a "one click to build all" setup without duplicating the common codebase (so when I need to fix something I do not need to copy-paste the fix into several files).
All sources are under development and will change very often.
A straightforward solution is to make an external classpath, for instance:
c:\dev\shared_src\
c:\dev\project1\
c:\dev\project2\
Then configure each project:
Project Properties > Classpath
Add Classpath > select '../shared_src'
PS: of course you should keep everything under source control.
Using svn:externals you could structure your repository in such a way that the common parts are stored just once in the source control system, so changes can be synchronised with a single commit-and-update cycle.
For example, imagine that you have ^/ProjectA and ^/ProjectB, each of which requires ^/Common as a subdirectory.
Using svn:externals, pull ^/Common into both projects.
The exact way of doing this will depend on the version of svn you use and on any client you use (such as TortoiseSVN). Refer to the relevant edition of the svn book for specifics.
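For example, with an svn 1.5+ client the new-style external definition (URL first, then local directory) looks roughly like this, run from a working copy of ProjectA (paths are illustrative):
    svn propset svn:externals "^/Common shared_src" .
    svn commit -m "Pull Common in as an external"
    svn update    (fetches Common into ./shared_src)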
How easy this is to implement will depend quite a lot on how well separated the common code currently is in your application. Pulling in directories as directories is much more practical than trying to pull files into an existing directory, and unfortunately wildcards in file paths are not supported.
However, based on your description of your aim, this is the most straightforward solution I can imagine.
Hope this helps.

What are the Xcode solution organisation best practices and guidelines?

Are there any best practices for how to organise one's solution in Xcode?
This is mine at the moment from the root:
A folder for each 3rd party framework e.g. KissXML
A folder for my unit tests
A folder for frameworks, products and resources
A folder for MyApp which has sub folders for model, view, controller, database, supporting files and domain.
Mine is:
Main application
    Model
    Singletons
    Helper+managers
    Controllers    // I keep nibs with their respective class files
    View
Resources
    images
    plists
    // ... groups for other types of resources if needed
Supporting files
Unit tests
Frameworks
For reusable code on iOS I use static libraries and add these as separate projects in the Xcode workspace. Even for third-party code, if there is no static library target, I create one. That way, I treat third-party code the same way as I treat my own library code. Further, I then don't have to worry about versioning of third-party code.
I've found it important to have Xcode mirror the file system organization of the code, at least up to some level. I adopted this practice after reading this blog post. I don't do this below the levels I've listed above, though. This helps when you share code on github, for example. Rather than have downloaders or contributors have to dig through all of your source dumped into a single directory, it is organized into functional buckets. I've seen some projects where the Xcode organization is OK, but every single source file in the file system is dumped into a single directory.
Although no particular method can be devoid of disadvantages, here is what we use:
Folder for the application core or model. This includes sub-folders for any third-party libraries used and folders for specialized model classes. For example, there would be a folder for web service handling.
Folder for one major module, which would include sub-folders for each screen containing class files, nibs and resources (this may include more sub-folders according to need).
Folder for the second major module, and so on.
This model serves one major purpose for us. Our application core contains things like logging and data encryption/decryption, so it is very unlikely to change across the many applications we develop. Similarly, some applications will need the functionality of major module one plus some additions. Therefore these three folder groups are maintained as separate repositories in Subversion.
Now when we start a new project, we create a new repository for it and link it with the application core repository and with other major module repositories as needed. So any change made to the application core by one project team is reflected in the other projects as well. The same goes for the other major modules. This also helps us achieve complete modularity.
Of course there would be disadvantages to this scheme, but this scheme has suited us well for many years now :)

How to organize Scala code in Lift project?

After 1.5+ years of Ruby and Rails programming, I have finally started working on one of the new projects in Scala and Lift. Basically I'm trying to write an API for accessing information from a huge database (millions of rows). Lift should help me code the frontend of this project (the API part). But this also involves a module that would read from a compressed ZIP XML file to initially populate the database with rows. This module would need to run once every 3 months.
Where should I place this module code? Or rather, how should I organise my Lift and Scala code? Where do the background processes go? Any pointers in this regard are welcome.
I'm a little uncertain whether this is what you're after, but I'm using SBT (http://code.google.com/p/simple-build-tool/). It sets up a default project structure. You should especially look at sub-projects (http://code.google.com/p/simple-build-tool/wiki/SubProjects).
For scheduled processes you could use an actor and ActorPing to restart the process at regular intervals. For intervals as long as 3 months you could keep track of the last invocation by touching a file and checking its date on application restart. The ActorPing needs to be initiated on application startup; this can be done in the Lift boot. If you need to modularise it more you could create a servlet that initiates the ActorPing on servlet init.
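A minimal sketch of that pattern, assuming a Lift 1.x-era API where ActorPing lives in net.liftweb.util (later versions renamed it Schedule); runImportIfDue is a hypothetical helper that checks a marker file's date and runs the ZIP/XML import when 3 months have elapsed:
    import scala.actors.Actor
    import net.liftweb.util.ActorPing
    import net.liftweb.util.Helpers._   // enables the "24 hours" TimeSpan syntax

    case object CheckImport

    object ImportWorker extends Actor {
      def act = loop {
        react {
          case CheckImport =>
            runImportIfDue()   // hypothetical: compare marker-file date, import if due
            ActorPing.schedule(this, CheckImport, 24 hours)   // re-check daily
        }
      }
    }

    // in Boot.boot:
    ImportWorker.start
    ActorPing.schedule(ImportWorker, CheckImport, 0 seconds)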
Lift follows (at least the versions I use) a standard Maven 2 structure, so there is nothing special there. Just add the code in the src folder. The packages to create will depend on your design/preferences, we can't really help you with that :)
The "standard" Lift project using SBT as the build usually calls for the following project structure:
project
src
    main
        scala
            bootstrap
                liftweb
                    Boot.scala
            project-name
                comet
                lib
                model
                snippet
                view
        resources
        webapp
            WEB-INF/web.xml
            index.html
    test
        resources
        scala
            RunWebApp.scala
If you are using the Lift Mapper ORM, you generally put your models in the src/main/scala/project-name/model directory. Likewise, any of your CometActors should go in src/main/scala/project-name/comet. Any custom Snippets you write should be in src/main/scala/project-name/snippet and any custom View components in the view directory under project-name. All of the code related to booting up your application and establishing database connectors, etc, should go in src/main/scala/bootstrap/liftweb/Boot.scala. The rest of the structure falls out like the previous answers have said, which follows the general Maven 2 structure.
This is just the general structure that is provided by the default Lift app. The only thing that is required is the bootstrap.liftweb.Boot.scala file, as the Lift Servlet looks for that class during boot.
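A minimal sketch of that one required class (Lift 1.x-era API; the exact Loc/Menu signatures vary between Lift versions, and the package name projectname is illustrative):
    package bootstrap.liftweb

    import net.liftweb.http.LiftRules
    import net.liftweb.sitemap.{SiteMap, Menu, Loc}

    // Lift instantiates bootstrap.liftweb.Boot and calls boot() at startup
    class Boot {
      def boot {
        // where Lift searches for snippets, comet actors and views
        LiftRules.addToPackages("projectname")
        // a one-page sitemap; add further menu entries as needed
        val entries = Menu(Loc("Home", List("index"), "Home")) :: Nil
        LiftRules.setSiteMap(SiteMap(entries: _*))
      }
    }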

Salesforce - How to Deploy between Environments (Sandboxes, Live etc)

We're looking into setting up a proper deployment process.
From what I've read there seem to be 4 methods of doing this.
Copy & Paste -- We don't want to do this
Using the "Package" mechanism built into the Salesforce Web Interface
Eclipse Force IDE "Deploy to Server" option
Ant Script (haven't tried this one yet)
Does anyone have advice on the limitations of the various methods?
Can you include everything in a Web Interface package?
We're looking to deploy the following items:
Apex Classes
Apex Triggers
WorkFlows
Email Templates
MailMerge Templates -- Can't seem to find these in Eclipse
Custom Fields
Page Layout
RecordTypes (can't seem to find these in Website or Eclipse)
PickList items?
SControls
I recommend the Force.com Migration Tool.
For reference:
Force.com Migration Tool Documentation
Migration Tool Guide
The Migration Tool allows you to use Ant targets to move your metadata between salesforce.com organizations.
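As a rough sketch of a deployment target (this assumes the ant-salesforce.jar shipped with the Migration Tool is on Ant's classpath and that credentials live in build.properties; all names here are illustrative):
    <project name="sf-deploy" default="deployToSandbox" xmlns:sf="antlib:com.salesforce">
      <property file="build.properties"/>  <!-- defines sf.username, sf.password, sf.serverurl -->
      <target name="deployToSandbox">
        <!-- deployRoot points at a folder holding package.xml plus the metadata files -->
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" deployRoot="src"/>
      </target>
    </project>
The package.xml inside deployRoot lists what to move; for the items in the question it would contain types blocks along these lines (member names for folder-based types like EmailTemplate must be listed explicitly; the version number is illustrative):
    <?xml version="1.0" encoding="UTF-8"?>
    <Package xmlns="http://soap.sforce.com/2006/04/metadata">
      <types><members>*</members><name>ApexClass</name></types>
      <types><members>*</members><name>ApexTrigger</name></types>
      <types><members>*</members><name>Workflow</name></types>
      <types><members>MyFolder/MyTemplate</members><name>EmailTemplate</name></types>
      <types><members>*</members><name>CustomObject</name></types>  <!-- custom fields and record types travel with their object -->
      <types><members>*</members><name>Layout</name></types>
      <version>16.0</version>
    </Package>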
I can speak to this from recent painful experience.
Packaging: this is a very old method that predates the Metadata API on which both Ant and Eclipse rely. In our experience, packaging's only benefit is in defining your project. If you're using Eclipse (which we do, and I recommend), you can define your project as being based on a particular package. As long as you remember to add new components to your package, your project hangs together.
One thing that baffled us for a while, btw, is the many uses of "package". We've noted the following:
Installed packages: these come in managed and unmanaged flavors and are really, in the words of a recent post on the SFDC boards, for ISVs to deploy their stuff into various unknown orgs "out there". Both managed and unmanaged packages have limitations that make them unsuitable and unneeded for deployment from development to production within an org, or in any case where you're doing custom development and don't intend to distribute code to a large anonymous base.
Non-installed packages: this is what you see when you click "Packages" in the web UI. These, which we sometimes call "development packages", seem to be just a convenient way to keep a project definition together.
Anyway, the conclusion I'm coming toward is that our team (custom development, not an ISV) does not need packages in any form.
The other forms of deployment, both Eclipse and Ant, rely on the Metadata API. In theory they are capable of exactly the same things. In reality they appear to be complementary. The Force.com migration tool, built into the Force.com IDE for Eclipse, makes deployment as easy as it can be (which is not very) and gives you a nice look at what it intends to deploy. On the other hand, we've seen Ant do some things the IDE could not. So it's probably worthwhile to learn both.
The process we're leaning toward is to keep all our projects in SVN, and use the SVN structure as the project definition (Eclipse will work with this and respect it). And we use Eclipse and sometimes Ant for migration. No apparent need for packages anywhere.
By the way, one more thing to be aware of -- not all components are migratable. Some things must be reconfigured by hand in the target environment. One example would be time-based workflows. Queues and Groups also need to be hand-created, I think. Likewise the metadata API can't directly process field deletions, so if you deleted a field in your source you need to delete it by hand in the target. There are other cases as well.
Hope that's useful --
-- Steve Lane
As of Spring '09, mail merge templates are not supported in metadata but record types are. You will find record types as an XML element in the file for the object they belong to. Everything else on your list is supported with a small exception. Picklist values for standard fields cannot be edited in Spring '09. Stay tuned for news on Summer '09 feature announcements.
Update: Standard picklists on standard objects are now metadata exposed (as of API v16):
http://www.salesforce.com/us/developer/docs/api_meta/Content/meta_picklist.htm
Otherwise, Steve Lane's response is pretty accurate. The advantage of using unmanaged packages (what Steve calls non-installed packages) is that when you add metadata to a package, the metadata it depends on will automatically be added. So it's easier to grab a full set of metadata containing all its dependencies. If you are repeatedly moving metadata from one org (sandbox) to another (production), Steve's approach is probably the best way to go and certainly the most common today. I frequently use unmanaged "developer" packages to move something I've developed in one org to another unrelated org. For my purpose, I like to have the package defined in the org as opposed to an Eclipse project / SVN. But that probably doesn't make sense if you are doing team development across many dev/sandbox orgs and are using SVN already.
Jesper
Another option is to use Change Sets if you want to move metadata from a sandbox to production.
There are currently some limitations on how change sets can be used:
Sending a change set between two organizations requires a deployment connection. Currently, change sets can only be sent between organizations that are affiliated with a production organization, for example, a production organization and a sandbox, or two sandboxes created from the same organization.
From the docs:
A package must be managed for it to be published publicly on AppExchange, and for it to support upgrades. An organization can create a single managed package that can be downloaded and installed by many different organizations. They differ from unmanaged packages in that some components are locked, allowing the managed package to be upgraded later. Unmanaged packages do not include locked components and cannot be upgraded. In addition, managed packages obfuscate certain components (like Apex) on subscribing organizations, so as to protect the intellectual property of the developer.
The advantage of a managed package is that it allows you to easily version and distribute things across multiple SFDC organizations.