Magento: app/code/local vs. app/etc/modules - magento-1.7

A lot of literature on the web teaches the difference between app/design and skin
and goes to great lengths to describe all of their subdirectories and how they relate
to each other. However, I am struggling with another part of the Magento directory tree.
What is the difference in purpose between app/code/local and app/etc/modules?
While custom code goes in app/code/local, is app/etc/modules where the code needs
to be declared in order to actually appear in the admin panel?
Thanks.

In principle your assumptions are correct.
The .xml files in app/etc/modules are used to manage all the modules in general (assigning code pools, declaring dependencies, switching them on and off).
The declarations in these .xml files in app/etc/modules are what make modules "appear" in the admin panel under System > Configuration > Advanced (but they do not make module menus, tabs, grids or the like appear in the admin panel; that's the job of your module's configuration files).
The app/code/local folder contains the local modules and their files themselves, like the controllers, observers, blocks, helpers and configuration files.
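For illustration, a minimal declaration file along these lines (say app/etc/modules/Mycompany_Mymodule.xml; the vendor and module names here are made up) is what makes the module show up under System > Configuration > Advanced:

    <?xml version="1.0"?>
    <config>
        <modules>
            <!-- Hypothetical name; must match the folder app/code/local/Mycompany/Mymodule -->
            <Mycompany_Mymodule>
                <active>true</active>          <!-- switches the module on or off -->
                <codePool>local</codePool>     <!-- tells Magento to look in app/code/local -->
            </Mycompany_Mymodule>
        </modules>
    </config>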

app/code/local is the local code pool for Magento modules. If you need to change or extend logic somehow, the local code pool is where that work should be done.
app/etc/modules is where you put the main declaration file for your module, which helps the system know about your module.
For a better idea, visit: http://knowledge.santanu.net/developer-guide-for-magento-module-structure-and-codepool/

Related

Sharing common helper scripts among projects in VSCode

I have a few utility functions, snippets and scratches that I want to be able to use in every project.
Currently I have the following setup for Clojure projects and Intellij IDEA/Cursive:
I have a user profile defined in .lein, where I have source-paths pointing to
when I sync and run a REPL for a project in IDEA/Cursive, I check that the :user profile is selected (it is the default)
Cursive shows both project files and common files (i.e. the scratches folder) in the project pane
I can edit and eval both real project files and my local helpers/scratchpads in the REPL seamlessly.
Is there a way to achieve this behaviour in VSCode?
Notes:
My Clojure setup in Cursive is for illustration purposes only. I would like to find a way to get a similar feature in other scripting languages (e.g. Python, Groovy). So ideally I don't want to use Leiningen for that, but rather find a generic way to add a common source folder to the VS Code Explorer pane.
I know that with multi-root workspaces I can achieve what I want, but it must be done manually per project (see the sketch below). It would be much better if my folder with common utils were added to every workspace automatically. Something like a default workspace template would solve this, but I couldn't find anything similar. Am I missing something?
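For reference, the manual multi-root setup mentioned above boils down to a .code-workspace file along these lines (the common-utils path is made up):

    {
        // every folder listed here appears in the Explorer pane
        "folders": [
            { "path": "." },
            { "path": "/home/me/common-utils" }
        ]
    }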

Importing source files and folders into IAR Workbench

I have a couple of source files in a certain folder structure in my file system. I want to use this structure for a project in the IAR Workbench. In Eclipse this would be so easy! But in the IAR Workbench, the folders become "Groups", which are only a kind of virtual folder. The Workbench doesn't care about folders.
Is there some easy and fast way to import them?
Up to now I have had to add each group manually and then add the files to the groups, and that's really annoying!
Is there maybe a tool to generate a proper project file (*.ewp) from a file/folder structure path?
This would help me a lot!
You should have a look at the IAR Project > Add Project Connection command.
Although IAR doesn't seem to have any public documentation on the XML syntax (at least I couldn't find any), you can find Infineon DAVE (Config.xml) and Freescale PE (ProjectInfo.xml) files if you search around. These can be used as examples to figure out how to write your own XML files for one of these interfaces, allowing you to specify where all your C, header, assembly and library files are, wherever they may be in your file system. They also let you define preprocessor includes for the compiler/assembler, and DAVE allows you to define a path variable, which is also very useful.
See: https://mcuoneclipse.com/2013/11/01/iar-arm-v6-7-comes-with-improved-processor-expert-support/
I have modified a DAVE Config.xml file and found it EXTREMELY useful for managing and migrating even just a handful of project files. For example, to upgrade to a new release where all files have a new directory root, you just change a single line in the xml file (defining the new root), and all source files, compiler includes etc. are updated to the new level. No more manually editing the preprocessor includes or replacing all the files in the project. And no more fiddling around with ../../ file-system-hierarchy navigation stuff; you just specify directly (or indirectly via a path variable) where the files are, no longer relative to wherever your project happens to be. VERY NICE.
IAR should consider opening this up (documenting it) for general users, as it is very useful for project management and migration. While at it, they should also consider generalizing the xml syntax a little bit and allowing for the definition of IAR group heading names, specifying the linker file name, and definitely allowing multiple xml files to be included (connected) (so that subprojects can be easily added or removed without affecting the other subproject definition files), and a few basic things like that.
If they were to do a bang-up job on this, they might consider allowing most/all aspects of IAR project configuration that might be required by a subproject to be defined in these xml files; then entire (sub)projects could just be plopped down anywhere and be up and running extremely quickly (OK, just let me dream a bit :)
For anyone who happens upon this, you may want to check out https://github.com/IARSystems/project-migration-tools. They have a tool for pulling in file trees there.

Managing shared code amongst PowerShell modules

I've been diving into some of the more advanced features of PowerShell modules and manifests recently, with a view to handling scenarios more advanced than just a basic export of a few functions. It sounds like it should be obvious, but I'm struggling to find a nice solution for sharing common 'helper' type functions across several large, non-trivial modules. In particular, I'm looking for a solution that:
Allows sharing of 'helper' type functions without necessarily being exported by anyone
Allows installation via PsGet from a local repo path
Let me go into some of the challenges I see.
First of all, as far as I can tell, PsGet does not handle module dependencies well. This implies that sharing between modules is going to be a struggle. Maybe a solution to this is to avoid PsGet, and use a custom script to 'install' modules to the local module path, which might be more tolerant of dependencies and load order.
My point about not using module exports to share helper functions also seems to be an issue. The reason I see for this is wanting aliases, helpers etc. for common internal actions (needed inside useful functions) that are either useless or unsafe to expose. For example, a nice brief alias for getting the local script path (commonly used, noisier than it should be). Or, I recently made a nice simple wrapper around PromptForChoice with fewer options. Maybe this whole thing isn't a real issue. But I can't help but feel that shipping a 'utils' module that exports low-level functions that are useful inside real modules, but not to an end user, seems like the wrong way to go.
What I've been playing with is a small build structure that tests and then packs modules, and I want to make some code sharing possible. I've been looking at an alternative using ScriptsToProcess in the manifest, but these seem to be absolute paths, not relative.
Imagine a folder structure:
modules
    utils
        console_helpers.ps1
    moduleA
        moduleA.psm1
        moduleA.psd1
    moduleB
        moduleB.psm1
        moduleB.psd1
packed_modules
    moduleA.zip
    moduleB.zip
What I was considering was that you could list relative paths in each ScriptsToProcess, and then my pack phase would go and drag those relative paths into each zip.
Is this a horrible, crazy idea? Am I right that PowerShell modules and PsGet really don't have decent dependency support? I would love to hear feedback from anyone who has looked into this space. The answers I'm hoping to get, in rough priority order, might be:
Here's an example of sharing code without exposing it (probably a build/pack level solution)
Here's how to make module dependencies work nicely, using PsGet
Here's how to make module dependencies work nicely, but you can't use PsGet
Just expose everything from modules
This is a terrible idea and you're terrible
Thanks!
UPDATE as suggested by CalebB
Here's another example to illustrate what I'm trying to resolve. I find it useful to wrap up '&'-style execution of commands with a wrapper function, to deal with stuff like checking exit codes etc. If I'm building half a dozen modules, many of them will want to make use of that helper (obviously).
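A minimal sketch of the kind of wrapper I mean (the function name is made up):

    # Run a native command and fail loudly on a non-zero exit code.
    function Invoke-Native {
        param(
            [Parameter(Mandatory)][string]$Command,
            [string[]]$Arguments = @()
        )
        & $Command @Arguments
        if ($LASTEXITCODE -ne 0) {
            throw "'$Command $Arguments' failed with exit code $LASTEXITCODE"
        }
    }

    # Usage: Invoke-Native -Command git -Arguments @('status')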
My options today seem to be to put it in a module and export it, but maybe I don't want it exported; I want more of a dot-source style of access. And if I've got a family of modules all trying to use this stuff, the options for module dependency management are limited (the PsGet limitation etc.).
If I'm 'building' all the modules at once (with some decent psake and pester infrastructure), maybe I can use a hack at this point to embed scripts into my zipped modules to 'solve' all these problems?
Allows sharing of 'helper' type functions without necessarily being exported by anyone
Mhm... what is wrong with dot-sourcing the scripts you need within a particular module (see the sketch after this list)? You could:
Keep your folder structure and symlink the desired functions into the module folder.
Try to use an absolute path with ScriptsToProcess that has a 'relative part' in it, for example $PSScriptRoot\..\utils (not tried in that context, but it generally works). If not, you can always add a preprocessing step to fix the paths for you.
Delete undesired imported elements manually via the function:, alias:, and variable: providers.
Import extra utilities only when you use them, then remove them at the end? If the desire is that the user can't see them, you can encrypt them.
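As a concrete sketch of the dot-sourcing option (using the folder structure from the question; the exported function name is made up), inside moduleA.psm1:

    # Load the shared helpers into the module's scope without exporting them.
    . (Join-Path $PSScriptRoot '..\utils\console_helpers.ps1')

    # Only the public surface gets exported; the dot-sourced helpers stay internal.
    Export-ModuleMember -Function Get-SomethingPublic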
Here's how to make module dependencies work nicely, but you can't use PsGet
Chocolatey uses NuGet under the hood, so it handles dependencies and can load from a local store. As a bonus, OneGet supports it, which is something everybody will use eventually.
I've posted the solution I've come up with on GitHub. I've rolled in a few other features I want when building modules, but the key solution for this question uses reading and updating the psd1 of each module.
You include the scripts that you want to embed in the NestedModules property of your manifest. My build phase will find each script and copy it into the module folder for packing and zipping. The manifest that ships in the package has the script paths converted to the now-local file names.
I'm still not sure if this is ideal, but it seems to be a nice compromise for dealing with the issues here.
A key issue I encountered along the way was that the ScriptsToProcess list is executed literally at module import time, so it is only useful for bootstrapping the import of your functionality. The NestedModules property is actually the list of additional scripts you want dot-sourced and available when your module is used.
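As a rough sketch, the manifest that ships after the pack phase might contain something like this (names and version are made up):

    # moduleA.psd1 (excerpt)
    @{
        RootModule        = 'moduleA.psm1'
        ModuleVersion     = '1.0.0'
        # .ps1 entries here are dot-sourced into the module's session state at
        # import time; after the pack phase the path is local to the module folder.
        NestedModules     = @('console_helpers.ps1')
        FunctionsToExport = @('Get-SomethingPublic')
    }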

Custom Eclipse (CDT) project layout, different from folder structure

A good hello to you fellow Stackoverflow people.
I am stuck with a small dilemma here.
At my work we used to work with UltraEdit projects, but we want to migrate to using Eclipse CDT. (We are not using its compiler/build options; we need an external SDK for that.)
On the hard disk we have a specific folder structure to keep things separate between two teams, namely the 'productcode' + 'applicationcode' group and the 'drivercode' group.
Both groups have their own folder where they place source code.
application
drivercode
productcode
The filenames are given a specific prefix, denoting to which 'layer' they belong.
os (operating system)
application
system
unit
component
IO
hardware
All of these files (except for application files, which are only allowed in the application folder) can be in the productcode or drivercode folder.
In UltraEdit, all of these files are grouped under their respective layer. So our project has the following folders:
0 Operating System
1 Application Layer
2 System Safety Layer
3 Unit Layer
4 Component Layer
5 IO Layer
6 Hardware Layer
Generic
XML
The virtual folder '0 Operating System' holds all os_xxx files from the real folders 'drivercode' and 'productcode', and the same goes for 2, 3, 4, 5 and 6.
TL;DR:
Is it possible to get the same (virtual) folder structure within Eclipse CDT?
To make things more complex, this whole folder structure is divided over 3 projects, e.g. proj-1, proj-2, proj-3, and there is also a shared folder that holds code that is shared among the projects.
I had a similar situation. Rather than doing a bunch of hunt-and-peck for linked resources, which tends to break the ability to reuse the .*project files elsewhere, I made a 'workspace setup' script that just symlinked the sources into the directories where their projects were. That way the default Eclipse mechanisms (build all source within a tree) just work out of the box.
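A workspace-setup script in that spirit might look roughly like this (all paths are made up; on Windows, creating symbolic links may require elevation or developer mode):

    # Symlink the real source folders into the Eclipse project directory so the
    # default 'build everything under the project tree' mechanism sees them.
    $projectDir = 'C:\work\eclipse\proj-1'
    $sources = @{
        'drivercode'  = 'C:\work\src\drivercode'
        'productcode' = 'C:\work\src\productcode'
    }
    foreach ($name in $sources.Keys) {
        New-Item -ItemType SymbolicLink `
            -Path (Join-Path $projectDir $name) `
            -Target $sources[$name]
    }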
I have found one way, but it is quite cumbersome.
I can create the structure I want using Linked Resource folders and files.
However, this means I need to go through all the dialogs per folder/file in order to add them to the list. I hope there is another way, though, so I'll not accept my own answer as of yet.
Eclipse CDT plays well with existing projects.
I guess you probably also have a manually maintained Makefile? Then you only need to use File -> Import -> C/C++ -> Existing Code as Makefile Project.
This will leave all your source where it was, and team members that prefer not to use Eclipse can still use whatever they want and build from the command line.

Tool to list all source safe link files

My client is migrating from SourceSafe to ClearCase. They need to list all the link files in the SourceSafe database so the links can be carried over to ClearCase, as apparently all the source must be checked into ClearCase on day 1, losing any existing links.
Are there any tools for creating this report, or perhaps even for doing the full import into ClearCase?
My plan is to write a PowerShell script to recurse the SourceSafe folders, finding links using COM.
Thanks.
As I have mentioned in this question, clearexport_ssafe should be used for imports from SourceSafe to ClearCase.
However, the documentation for that tool explicitly mentions:
Shares. There is no feature in Rational ClearCase equivalent to a Visual SourceSafe share. clearexport_ssafe does not preserve shares as hard links during conversion. Instead, shares become separate elements
So your script would need to list all links, and create soft links between their initial directory and the newly created separate element.
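A rough sketch of the script idea from the question, using the classic SourceSafe COM automation interface (the ProgID, server path and credentials below are assumptions; IVSSItem exposes Type, Links and Spec):

    # Walk the VSS tree and print the specs of every shared (linked) file.
    $vss = New-Object -ComObject SourceSafe
    $vss.Open('\\server\vss\srcsafe.ini', 'builduser', 'secret')

    function Find-Links($item) {
        if ($item.Type -eq 1) {                   # 1 = file, 0 = project
            $links = @($item.Links)
            if ($links.Count -gt 1) {             # more than one spec => the file is shared
                ($links | ForEach-Object { $_.Spec }) -join ' <-> '
            }
        }
        else {
            foreach ($child in $item.Items($false)) { Find-Links $child }
        }
    }

    Find-Links $vss.VSSItem('$/')

Note that each shared file is reported once per occurrence, so you may want to de-duplicate the output.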
But I believe you may want to consider another organization for the target ClearCase repository, one in which all share files are no longer directly used, as illustrated by this answer (for SVN repository in this instance):
We have eliminated all of our linked files. All class files that were previously linked have been placed into class libraries which are shared to our other projects as shared project references in the solution. So in essence you share libraries, not class files.
There was a bit of an adjustment process getting used to this, but I haven't missed links since then. It really does promote a better design practice by having your code setup like this.
I work mainly with UCM, and all those "shares" are natural candidates for UCM components, with UCM baselines to refer to their different versions; you can then make your own "configuration" (list of labels) in order to select the different components you need, making them easily reusable across projects.
As VonC mentioned, the import from VSS to ClearCase is truly atrocious:
The export/import takes forever to complete, so much so that we opened a PMR with IBM for it (that didn't help, btw)
The SourceSafe shares are transformed into files, which creates duplicates all over the place (the horror!).
I work on ClearCase UCM myself, and we took the same decision as you (which, in my 10 years of experience in CM, is ALWAYS the best decision): leave the history behind for reference and import at most a couple of versions, one on top of the other, by hand (like: current in development; current in test; current in live).
The way we solved the shares problem is as follows:
The "shares" were isolated from the source tree, to be imported independently from the other sources
The other sources were imported (without the history and without the shares) from scratch, let's say into a component called MAIN_SRC
The shares were imported (without the history) from scratch, let's say into a component called SHARE_SRC
A project was created containing both components: MAIN_SRC and SHARE_SRC.
Now, the problem is not solved, because your shares live apart from your main source code, while your IDE (e.g. Visual Studio) fully expects them to be in the same folders they were in before (i.e. in Visual Studio all your projects become wrong if you don't solve this issue, and all the files would need to be relinked from within Visual Studio itself, etc. A lot of work).
This is resolved by using ClearCase VOB symbolic links:
Let's say that in MAIN_SRC you need to use a file called mySharedFile that lives in SHARE_SRC.
From within the folder needing to use the myShared file, use the command line interface and run:
cleartool ln -s ..\..\SHARE_SRC\(myPath)\mySharedFile .
You need as many ..\.. as necessary to go up to the component folder level in ClearCase, and then down following your path (myPath) in the SHARE_SRC component folder.
Remember the ClearCase path is composed of:
M:\View_name\VOB_name\Component_name\Your first level of files and folders
(VOB_name\Component_name is the "root" of the component, unless you have a single-component VOB, in which case VOB_name\Component_name becomes just VOB_name)
The easiest way is to have a mapping of all the VOB symbolic links that need to be created, and put all the necessary "cleartool ln -s" command lines in a script to run once, as sketched below.
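A sketch of such a script (the view, VOB and file names are made up; note that the parent directory must be checked out before a link can be created in it):

    # Hypothetical mapping: directory that needs the link -> relative path to the shared element.
    $links = @{
        'M:\myview\MYVOB\MAIN_SRC\app\io' = '..\..\SHARE_SRC\io\mySharedFile'
        'M:\myview\MYVOB\MAIN_SRC\app\hw' = '..\..\SHARE_SRC\hw\otherSharedFile'
    }
    foreach ($entry in $links.GetEnumerator()) {
        Push-Location $entry.Key
        cleartool co -nc .               # check out the parent directory
        cleartool ln -s $entry.Value .   # create the VOB symbolic link
        cleartool ci -nc .               # check the directory back in
        Pop-Location
    }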
After that, you should be fine, and your IDE will think the sources are where they used to be.
Cheers,
Thomas