Accessing an Ember CLI app's config from within an addon

I'm developing an addon for Ember CLI that needs to dynamically load files from the consuming app. I should be able to do this with a call like require('my-app/models/load-me'). The only problem is that my-app could be anything, depending on what the developer named their application. If I had access to the my-app/config/environment module, I could just get the modulePrefix from there, but unfortunately that's also namespaced under my-app.
So does anyone know of another way to access the modulePrefix? I'm assuming there must be a way, because Ember CLI itself needs to get that prefix before it can load any files.

Found the answer here. Basically, you can look it up through the container like so:
this.container.lookupFactory('config:environment').modulePrefix;
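With the prefix in hand, the dynamic lookup from the question becomes possible. A minimal sketch, assuming the AMD-style require that Ember CLI's loader exposes and the hypothetical my-app/models/load-me module from the question:
var modulePrefix = this.container.lookupFactory('config:environment').modulePrefix;
// resolves to e.g. "my-app/models/load-me" when modulePrefix is "my-app"
var LoadMe = require(modulePrefix + '/models/load-me')['default'];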

Related

Angular module federation: Webcomponents and assets static file sharing between MFE and Shell application in polyrepo scenario

Currently I am facing two problems.
I have two repos: one for the shell and another for the MFE application.
Here is the version of module federation:
"@angular-architects/module-federation": "^14.3.12"
First problem:
I have a web component (a custom grid) that was created separately in a different repo. This component is installed via npm (added to the package.json file) in the MFE application project, and the necessary files are added to the scripts section of angular.json. The MFE application runs properly when standalone, but when we use the MFE in the shell application, the shell application doesn't show the custom web component. The only way to make it work is to install the same web component in the shell as well.
So does anybody have an idea how I can share the web components used by the MFE with the shell application, instead of installing them in the shell application project?
Second problem:
Another problem I am currently facing is sharing static files between the MFE application and the shell.
I have an env.js file containing the API URLs used by the MFE application. With the MFE application on its own it works correctly, but when I load the MFE application in the shell application, the API calls aren't resolved because the shell looks for the file in its own assets folder instead of the MFE application's assets folder.
Please let me know what I am missing here.
I tried the webpack.config.js files that Angular module federation created in the MFE application, but didn't get any breakthrough on these problems.
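Not a verified fix, but for the first problem the direction usually suggested with @angular-architects/module-federation is to declare the grid package as a shared dependency in the federation config, so the shell loads the copy provided by the MFE instead of needing its own install. A rough sketch, where my-custom-grid is a placeholder for the npm package name:
// webpack.config.js of the MFE (sketch; names are placeholders)
const { shareAll, withModuleFederationPlugin } = require('@angular-architects/module-federation/webpack');

module.exports = withModuleFederationPlugin({
  name: 'mfe1',
  exposes: {
    './Module': './src/app/feature/feature.module.ts',
  },
  shared: {
    // share the Angular packages plus the custom grid so only one copy is loaded at runtime
    ...shareAll({ singleton: true, strictVersion: true, requiredVersion: 'auto' }),
    'my-custom-grid': { singleton: true, requiredVersion: 'auto' },
  },
});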

Why is the gcloud sdk's deploy command looking at my home directory for files?

I'm attempting to deploy a python server to Google App Engine.
I'm trying to use the gcloud sdk to do so.
It appears the command I need to use is gcloud app deploy.
I get the following error:
me@mymachine:~/development/some-app/backend$ gcloud app deploy
ERROR: (gcloud.app.deploy) Error Response: [3] The directory [~/.config/google-chrome/Default/Cache] has too many files (greater than 1000).
I had to add ~/.config to my .gcloudignore to get past this error.
Why was it looking there at all?
The full repo of my project is public but I believe I've included the relevant portion.
I looked at your linked repo and there aren't any yaml files. As far as I know, a GAE project needs an app.yaml file because that file tells GAE what your runtime is so that GAE knows how to deploy/run your code. In fact, according to the gcloud app deploy documentation, if you don't specify any yaml files to be deployed, it will default to app.yaml in the current directory. If it can't find any in the current directory, it will try to build one.
Your repo also shows you have a Dockerfile. GAE documentation for custom runtimes says ...Custom runtimes let you build apps that run in an environment defined by a Dockerfile... In the app.yaml file for custom runtimes, you will have the following entries:
runtime: custom
env: flex
Since you don't have an app.yaml file, and you have a Dockerfile in which you are downloading and installing Chrome, it seems to me that gcloud app deploy is trying to infer your runtime, and this has led to it executing some or all of the contents of the Dockerfile before it attempts to push it to production. That is what makes it take a peek at the Chrome config files on your local machine until you explicitly tell it to ignore them. To be clear, I'm not 100% sure of this; I'm just trying to see if I can draw a logical conclusion.
My suggestion would be to create an app.yaml file and specify a custom runtime, or just use the Python runtime on the flexible environment.
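For reference, a minimal app.yaml for the Python flexible environment might look like the following; gunicorn and main:app are placeholders for whatever server and WSGI entrypoint the repo actually uses:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3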

Use Workbox without using the CDN

Does anybody know how to use Workbox without getting it from the CDN? Here's what I tried.
First, I added workbox-cli to my dependencies:
"workbox-cli": "^3.6.3"
which gets me all of the following dependencies
$ ls node_modules | grep workbox
workbox-background-sync
workbox-broadcast-cache-update
workbox-build
workbox-cacheable-response
workbox-cache-expiration
workbox-cli
workbox-core
workbox-google-analytics
workbox-navigation-preload
workbox-precaching
workbox-range-requests
workbox-routing
workbox-strategies
workbox-streams
workbox-sw
Then I replaced this line in the examples:
importScripts('https://storage.googleapis.com/workbox-cdn/releases/3.6.1/workbox-sw.js');
with this:
importScripts('workbox-sw.js');
after copying node_modules/workbox-sw/build/workbox-sw.js to the public folder
But now I realise, by looking at the network tab, that that file still fetches all the other modules from the CDN.
(I thought it would be built with everything inside it.)
Can anybody tell me if there is an npm package somewhere that already has everything inside it? Or should I copy the modules I need from node_modules and somehow tie them all together myself? Or do I have to use the webpack plugin? (Which I guess will only bundle the modules that I actually use.)
(Update: Workbox v5 makes the process of using a local copy of the Workbox runtime much simpler, and in most cases, it's the default.)
There's one more step that's required. The "Using Local Workbox Files Instead of CDN" section of the docs has the details:
If you don't want to use the CDN, it's easy enough to switch to Workbox files hosted on your own domain.
The simplest approach is to get the files via workbox-cli's copyLibraries command, or from a GitHub release, and then tell workbox-sw where to find these files via the modulePathPrefix config option.
If you put the files under /third_party/workbox/, you would use them like so:
importScripts('/third_party/workbox/workbox-sw.js');
workbox.setConfig({modulePathPrefix: '/third_party/workbox/'});
With this, you’ll use only the local Workbox files.
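For completeness, the copyLibraries step mentioned above looks roughly like this; the destination folder is just an example, and the CLI puts the files into a versioned sub-folder (e.g. workbox-v3.6.3), which is what modulePathPrefix should point at:
npx workbox copyLibraries public/third_party/workbox/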

How to "pack" an Ember CLI component?

I'm using ember-cli and I made a custom component using ember-cli syntax and naming conventions. This is a highly reusable component, and I'd like to know the best way to put it all into a "package" so it's easy to integrate into other projects.
My component uses a .js file for the Ember.Component subclass, along with an .hbs file for the template and another couple of .js files for the necessary Ember.View subclasses. Right now, every file is in its respective folder along with the files for the rest of my project.
How can I isolate the files related to the component and package them for reuse? In Ruby on Rails I use gems for this matter, and in jQuery I used to write plugins by extending $.fn in a single file.
Take advantage of the Ember CLI addon system; it's been designed for cases like this one. The process should be easy if you are already familiar with Ember CLI. Because the addon system has been reworked in the recent past and its API was changing, older articles or guides on this topic may be out of sync.
The most comprehensive and up-to-date guide on this topic is kristianmandrup's gist, Converting libraries to Ember CLI addons.
There is also an Addons tutorials section on the official Ember CLI site.
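For orientation, generating the addon skeleton and moving the component into it usually looks something like the following; my-reusable-component is a placeholder name and the exact layout can differ between Ember CLI versions:
ember addon my-reusable-component
# then move the component's files into the new project, roughly:
#   addon/components/my-reusable-component.js          (the Ember.Component subclass)
#   addon/templates/components/my-reusable-component.hbs
#   app/components/my-reusable-component.js            (re-exports the addon component so consuming apps can resolve it)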

Creating a new CocoaPods pod

I'm trying to create my first pod and, as per the recommendation on the website, am doing so at the command line with pod lib create <mylib>. The trouble is that pod lib create assumes I want to create an iOS library, when in fact I'm developing for OS X. I've grepped my way through the CocoaPods files on my computer looking for the template the generated project is based on, but have come up empty-handed. Does anyone know how I might fiddle with these settings, wherever they are, to get the configuration I'm after?
If you already have your library created and just want to create a sample podspec, you should use
pod spec create
instead. You can also pass this a URL to set that as the source automatically. See
pod spec create --help
for more info.
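If the goal is specifically an OS X pod, the platform is declared in the podspec itself. A minimal sketch, where MyLib, the URLs, and the deployment target are placeholders:
Pod::Spec.new do |s|
  s.name         = 'MyLib'
  s.version      = '0.1.0'
  s.summary      = 'A reusable OS X library.'
  s.license      = 'MIT'
  s.author       = { 'Your Name' => 'you@example.com' }
  s.homepage     = 'https://example.com/MyLib'
  s.source       = { :git => 'https://example.com/MyLib.git', :tag => s.version.to_s }
  s.platform     = :osx, '10.9'          # targets OS X instead of the default iOS
  s.source_files = 'Classes/**/*.{h,m}'
end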