How to "pack" an Ember CLI component? - ember-cli

I'm using ember-cli and I made a custom component using ember-cli syntax & naming conventions. This is a highly reusable component, and I'd like to know the best way to put it all into a "package" so it's easy to integrate into other projects.
My component uses a .js file for the Ember.Component subclass, along with a .hbs file for the template and another couple of .js files for the necessary Ember.View subclasses. Right now, every file is in its respective folder along with the files for the rest of my project.
How can I isolate the files related to the component and package them for reuse? In Ruby on Rails I use gems for this matter, and in jQuery I used to write plugins by extending $.fn in a single file.

Take advantage of the Ember CLI addon system. It was designed for cases like this one. The process should be easy if you are already familiar with Ember CLI. Because the addon system has been reworked recently and its API has been changing, older articles and guides on this topic may be out of date.
The most comprehensive and up-to-date guide on this topic is kristianmandrup's gist, Converting libraries to Ember CLI addons.
There is also an Addons tutorials section on the official Ember CLI site.
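A minimal sketch of the workflow (the addon and component names here are hypothetical):
ember addon my-component-addon
cd my-component-addon
ember generate component my-reusable-component
Move the component's .js, .hbs, and view files into the addon's tree; consuming apps can then pull the addon in through npm (npm link during development, or a published package) and use the component as if it were defined locally.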

Related

How to create Python libraries and how to import them in Palantir Foundry

In order to generalize my Python functions, I want to add them to Python libraries so that I can use these functions across multiple repositories. Could anyone please answer the questions below?
1) How do we create our own Python libraries?
2) How do we import those libraries across multiple repositories?
How to create Python libraries in Palantir Foundry?
To create a new library, create a new repository. When prompted to initialise the repository, you should see an option that reads:
Python Library
Template for publishing a Python library package. Consuming new libraries has changed,
please read README in library repository.
The readme will contain instructions on how to publish a library. It is recommended that you understand how conda publishing channels work for this.
A note: avoid using _ in the library name, since it can cause problems; - is safe to use, though.
How to import a library in code authoring?
Once your library is published, you can add it to the conda recipe of the repository you want to consume the library in. You can find this in: transforms-python/conda_recipe/meta.yaml
Afterwards, just add it to the run list under requirements:
requirements:
  run:
    - python
    - pandas
    - your-library-name
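Once the dependency is in place, the import works like any other Python import. A minimal sketch (the library, function, and dataset names here are hypothetical; note that a - in the package name generally corresponds to _ in the importable module name):
from transforms.api import transform_df, Input, Output
from your_library_name import clean_columns  # hypothetical helper from your library

@transform_df(
    Output("/Project/folder/output_dataset"),      # hypothetical dataset paths
    source_df=Input("/Project/folder/input_dataset"),
)
def compute(source_df):
    # Reuse the shared function across any repository that declares the dependency.
    return clean_columns(source_df)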
In addition to fmsf's answer, in the second step you might have to add the following content to the build.gradle of your transforms-python directory:
transformsPython {
    sharedChannels "libs"
}

How do you use/reference libraries in IBM Bluemix OpenWhisk?

As of today, I could not find any clues in the IBM Bluemix docs for the IBM Bluemix OpenWhisk service as to how to use libraries.
Am I missing something obvious? Virtually all apps require a library, so why isn't that at least mentioned in the OpenWhisk docs?
If libraries are called something else, or the concept doesn't apply in the usual way (maybe libraries need to be converted into "OpenWhisk packages"?), the OpenWhisk docs should at least say something about the word/term/concept "libraries".
You can use webpack to bundle all your dependencies and create the final .js file you'll use as your OpenWhisk action.
See this example:
These are all the actions before the webpack build: https://github.com/IBM-Bluemix/logistics-wizard-recommendation/tree/dev/actions
Webpack is invoked from the scripts in https://github.com/IBM-Bluemix/logistics-wizard-recommendation/blob/dev/package.json with: webpack --config webpack.config.js
Here is another, simpler example: https://github.com/IBM-Bluemix/openwhisk-webpack
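For reference, a minimal webpack.config.js for this pattern might look like the following (the entry and output names are my own assumptions, not taken from the repositories above):
// webpack.config.js - bundle an action and its npm dependencies into one file
const path = require('path');

module.exports = {
  entry: './actions/my-action.js',   // hypothetical action source file
  target: 'node',                    // OpenWhisk runs JS actions on Node.js
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'my-action.bundle.js'  // upload this file as the action
  }
};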
To cover another language for anyone who finds this question…
For Swift, OpenWhisk comes with the Kitura-net, SwiftyJSON & swift-watson-sdk packages (the Swift term for libraries) built in.
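So an action can use a built-in package directly. A minimal sketch using SwiftyJSON (the greeting logic is just an illustration):
import SwiftyJSON

// Minimal OpenWhisk Swift action using the built-in SwiftyJSON package.
func main(args: [String: Any]) -> [String: Any] {
    let json = JSON(args)
    let name = json["name"].string ?? "stranger"
    return ["greeting": "Hello \(name)!"]
}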
If you want to include any other packages, then you have to either build your own Docker container for your action, or concatenate all the Swift source files in those packages together with your action file to create a single .swift file for upload with wsk action update. I've used cat to do this:
cat lib/Redis/Redis*.swift actions/_common.swift actions/counts.swift > build/counts.swift
which creates a single build/counts.swift containing Kitura-Redis, some common code and my counts action.
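The combined file can then be uploaded in one step, e.g.:
wsk action update counts build/counts.swift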

Ember CLI - use Sass addon in Less project

I use broccoli-less in my ember-cli project and would like to use an addon (ember-cli-materialize) which uses broccoli-sass.
After installing the addon, I get: File to read not found or unreadable ../app.scss, because I also have an app.less file in my styles dir.
As I understand it, the commit Allow multiple preprocessors per type should make this possible, although I might be missing something. Has anyone managed to use ember-cli with multiple preprocessors, and what changes are needed?
Ember-cli version: 1.13.1
Ember version: 1.12.0
Thanks
I know your circumstance is different from mine, but this may help others or spur a better solution. I was added to a dev team to polish up an app already styled using LESS. I favor SASS and tried to use ember-cli-sass alongside ember-cli-less without any success.
You may want to look further into Ember CLI's app.import
By adding your import configuration to ember-cli-build.js as above, you can leverage either your bower_components directory (if used) or your vendor directory to import a compiled CSS file (built from your Sass source files). It will build alongside the project quite nicely with a simple sass --watch <input:output> command.
The LESS files are ultimately compiled to app.css, and your SASS files to vendor.css (make sure you link to the stylesheet in your index page/template).
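As a sketch, the import in ember-cli-build.js might look like this (the compiled file's path and name are assumptions):
// ember-cli-build.js
var EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function (defaults) {
  var app = new EmberApp(defaults, {});

  // CSS compiled from Sass by `sass --watch <input:output>`;
  // imported files are concatenated into vendor.css.
  app.import('vendor/compiled-sass.css'); // hypothetical output file

  return app.toTree();
};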

How to read/write settings from a Visual Studio project file using NuGet?

I'm interested in creating a NuGet package for a documentation tool I'm writing. Ideally, I'd like the user to not have to configure my tool in any way. In order to do this, I need to be able to read some settings from the project's .csproj or .vbproj file to get the path of the XML documentation file generated by the compiler. I also need to add a post-build step to the project.
I've looked through their documentation but I haven't seen any mention of being able to do this. Is this possible? If so, is there any documentation or examples of this available?
Using a PowerShell script, you can access the rich VS DTE object model. This allows your package to do all kinds of things that NuGet doesn't have specific features for. Look at this help topic for some info on using init.ps1 or install.ps1.
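For example, an install.ps1 along these lines could read the documentation path and append a post-build step. This is only a sketch: it assumes the project sets DocumentationFile, and the tool executable name is hypothetical:
param($installPath, $toolsPath, $package, $project)

# Read the XML documentation file path from the active build configuration.
$config = $project.ConfigurationManager.ActiveConfiguration
$docFile = $config.Properties.Item("DocumentationFile").Value

# Append a post-build step that runs the documentation tool (hypothetical exe).
$postBuild = $project.Properties.Item("PostBuildEvent")
$postBuild.Value = $postBuild.Value + "`r`n`"$toolsPath\MyDocTool.exe`" `"$docFile`""

$project.Save()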

How to setup a DotNetNuke Development Environment with Source Control?

My team is developing a new DotNetNuke web application and would like to know what is recommended to setup a development environment with source control and automated builds? We would like to keep the DNN source code separate from our custom modules and extensions source code.
The DotNetNuke Compiled Module template for Visual Studio wants us to store the source code in the DesktopModules directory of the DNN source code and output to the DNN source code bin directory. Is this the recommended structure? I would rather keep the files in different locations, but then it becomes more difficult to run and debug locally as it would require an install of the module for each change. Also, how should an automated build deploy any changes?
How have others set this up? Is there a recommended best practice?
For my source control, I develop modules in their own project. This contains the module code, test code, data provider code (if applicable) and anything else. This is checked into source control like any other project. Note that the module project contains no links to a specific DNN website, and DNN references are made in the project to a common "bin" directory that holds your target build. For example, in my projects folder, I have \bin460, \bin480, \bin510, \bin520 etc. Each of these folders holds a set of binaries for a specific DNN version. That way you can build against a particular version but test against any version you like.
The problems with source-controlling a module in place in a DNN install are:
- sometimes not all of the module code is easily isolated under a single parent directory
- it doesn't lend itself well to a PA module approach
- it's not easy to shift the project to a different DNN version for development or testing
- it's easy to inadvertently source-control parts of the DNN solution, particularly with integrated VS source control solutions
This approach compiles quickly because you're not trying to compile the entire DNN solution. For test deployment I have a build script that copies the various parts of the module into a target website. This can be hooked into the compile (link in the build script) or just run after a successful compile in a cmd window. My build script has a 'target' environment switch, so that I can say 'dnn520' to deploy the build to my test dnn520 install. Note that you need to manually create the module configuration before this will work, but this is a one-time effort, and you can use the export feature to create your .dnn module manifest.
To build your module package, invest the time in a comprehensive script which will take the various parts from your source directory and zip them into an install package. Keep all of the parts in your source control folder, copy them into a temp directory, then run a command-line zip utility (I use an ancient version of pkzip) to pack it into an installable file.
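A bare-bones sketch of such a script as a Windows batch file (all paths, file names, and the zip utility here are assumptions; substitute your own layout and tool):
rem package.bat - stage module files and zip them into an install package
set SRC=C:\work\MyModule
set STAGE=%TEMP%\MyModule-stage

rmdir /s /q "%STAGE%"
mkdir "%STAGE%"

rem copy the parts that belong in the install package
copy "%SRC%\MyModule.dnn" "%STAGE%"
copy "%SRC%\bin\MyModule.dll" "%STAGE%"
xcopy /s /i "%SRC%\controls\*.ascx" "%STAGE%"

rem pack it into an installable zip (any command-line zip utility works)
7z a "%SRC%\install\MyModule_Install.zip" "%STAGE%\*"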
The benefits of this approach are:
- separation of module code from installed code
- a simple way of keeping only the module code in source control (you don't have to exclude all the website code)
- the ability to quickly test out modules in different DNN versions
- the packaging script allows you to quickly and easily build a new version of a module for install testing/deployment
The drawbacks are:
- you can't use the magic green 'go' button in VS (you have to manually attach the debugger)
- more setup time than developing in-place
We typically stick to keeping the module code in a folder under DesktopModules and building to the website's bin directory.
In source control, we just map the individual modules, rather than the entire website. Depending on what we're working on, a module may be an entire project in source control, or we may have multiple related modules in the same project, living next to each other.
Automatically deploying changes is somewhat difficult in DNN. It's highly recommended to have a build script that packages your module into an installable form. You can then copy installable packages into the website's Install/Module folder and request the URL /Install/Install.aspx?mode=InstallResources, which will install any packages in that folder.
In response to bduke's answer: you should not, and don't want to, build projects in the DesktopModules folder.
- That's where all of the out-of-the-box source code for the site goes.
- That's where your modules will be "installed", and thus if someone "updates" or re-installs one, it will be overwritten.
- It can make upgrading your application far more difficult. Many developers don't understand the idea of not touching the original source code files to modify their behavior, because those files will just be overwritten when you perform an upgrade.
If you want to build modules, create a solution folder called Modules and place your separate module projects there.
If you want to debug them, make the target debug output point to the web\bin folder.
If you want to install/deploy them, build in release mode and install them through the Module/Extension filter.