How to host single-spa root-config and modules on a central server during development (localstack)

I've been experimenting with single-spa for a while and understand the basics of the developer experience: create a parcel, yarn start on a unique port, add the reference to the import map declaration, and so on. The challenge is that as my root-config accrues more and more modules, managing ports and import maps starts to get tedious. What I want is to publish these modules to a central repository and load them from there (e.g., http://someserver.com/repository/moduleA/myorg-modulea.js, etc.).
I was recently introduced to localstack and started thinking a local S3 bucket might serve this purpose. I have a configuration where builds (yarn build) are automatically published to an S3 bucket running on localstack. But when I try to load the root-config's index.html from the bucket, I get the following JS error:
Unable to resolve bare specifier '#myorg/root-config'
I can access the JS files for each parcel and the root-config just fine via curl, so I suppose this would be a problem with any HTTP server used in the same way. If I flip the root-config to use the standard webpack-dev-server instead (on port 9000), it works fine. So I'm guessing there's a difference between how a production build resolves these modules and how the local build does.
Has anyone tried something like this and got it working?

I had a similar issue that I got working with http-server, by adding each child .js file to a sub-folder under the root-config directory and launching the web server at the root-config directory level.
"imports": {
"#myorg/root-config": "http://someserver.com/root-config.js",
"#myorg/moduleA": "http://someserver.com/modules/moduleA/myorg-modulea.js",
"#myorg/moduleB": "http://someserver.com/modules/moduleB/myorg-moduleb.js",
"#myorg/moduleC": "http://someserver.com/modules/moduleC/myorg-modulec.js",
}
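The on-disk layout implied by that map (serving from the root-config directory) would look roughly like this, with file names taken from the URLs above:

root-config.js
modules/
  moduleA/myorg-modulea.js
  moduleB/myorg-moduleb.js
  moduleC/myorg-modulec.js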
Note: by default, single-spa's root-config wraps the local import mappings in an "isLocal" check. You'll need to remove this if using a production build, or it won't load the correct mappings:
<% if (isLocal) { %>
In my case I was using localhost instead of someserver, so I could navigate into the 'repository' folder and run npx http-server to get everything running correctly.
I was hung up on this for a little while so hopefully this leads you in the right direction.

For the record, after trying several approaches to hosting a repository as described above, I found the unresolved-bare-specifier problem went away, so I have to chalk it up to simply not having had the right URL in the import map. Measure twice, cut once.

Related

Has anyone tried the HLF 2.0 feature "External Builders and Launchers" and wants to get in touch?

I'm working my way through the HLF 2.0 docs and would love to discuss and try out the new features "External Builders and Launchers" and "Chaincode as an external service".
My goal is to run HLF 2.0 on a K8s cluster (OpenShift). Does anyone want to get in touch, or has anyone already figured their way through?
Cheers from Germany
I'm also trying to use the ExternalBuilder. I set up core.yaml and rebuilt the containers to use it, but on "peer lifecycle chaincode install .tgz..." I get an error that the path to the scripts in core.yaml cannot be found.
I've added volume bind commands in peer-base.yaml and in docker-compose-cli.yaml, and I'm using the first-network setup. I dropped the part of byfn.sh that would connect to the cli container so I can do that part manually; the create, join, and update-anchors steps succeed, and then the install fails. Specifically, it fails on /bin/detect, because the peer can't find that file to fork/exec it. To get that far, the peer was able to read my external builder configuration and the core.yaml file. At the moment I'm trying "mode: dev" in core.yaml, which seems to indicate that the scripts and the chaincode will be run "locally" (which I think means in the cli container). I've also tried to walk the code to see how the Docker containers are created dynamically, and from what image, but I haven't been able to nail that down yet.
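For anyone comparing notes, the external builder declaration in core.yaml looks roughly like this; the path and name here are placeholders, and the path must resolve inside the peer's own filesystem (or container), which is why a wrong volume mount makes the /bin/detect fork/exec fail:

chaincode:
  externalBuilders:
    - path: /opt/hyperledger/mybuilder   # must contain bin/detect, bin/build, bin/release
      name: mybuilder
      environmentWhitelist:
        - GOPROXY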

Nexus proxy to node-sass-binaries

I am behind a corporate proxy, so my build can't download the node-sass binaries directly from GitHub. For now I have a Nexus3 raw repository (hosted). The binary files are downloaded from https://github.com/sass/node-sass/releases/download/{version}/{artifact} and I upload them manually to the repository. In the .npmrc I reference my repository with node-sass-binary={path to repo} and it works fine. But I don't want to manually download and upload the files every time a new one is needed.
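For reference, node-sass documents a sass_binary_site override, so the working manual setup above presumably amounts to something like this in .npmrc (the repository URL is a placeholder):

sass_binary_site=https://myserver/nexus/repository/node-sass-binary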
Now I want to set up a proxy repository that fetches the artifacts automatically (as it works with Maven Central).
What have I tried? I created a raw (proxy) repository and entered the download URL https://github.com/sass/node-sass/releases/download/, but this isn't working.
The error I get:
node-sass@4.11.0 install C:\Project\ng\src\node_modules\node-sass
node scripts/install.js
Downloading binary from https://myserver/nexus/repository/node-sass-binary//v4.11.0/win32-x64-72_binding.node
Cannot download "https://myserver/nexus/repository/node-sass-binary//v4.11.0/win32-x64-72_binding.node":
HTTP error 404 Not Found
Hint: If github.com is not accessible in your location
try setting a proxy via HTTP_PROXY, e.g.
export HTTP_PROXY=http://example.com:1234
or configure npm proxy via
npm config set proxy http://example.com:8080
node-sass@4.11.0 postinstall C:\Project\ng\src\node_modules\node-sass
node scripts/build.js
In my opinion this error message makes sense, because if I open https://github.com/sass/node-sass/releases/download/ directly in the browser I also get a 404.
So am I using the wrong URL, or am I missing something else? Is it even possible to do this? Thanks for your help.
I had a similar problem.
I believe you are using Node v12, which is not supported by node-sass@4.11.0:
https://github.com/sass/node-sass/releases/tag/v4.11.0
For that reason win32-x64-72_binding.node is not found (the 72 refers to Node v12's module version).
So use node-sass 4.12.0 or higher to fix this issue.
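For example, with npm:

npm install node-sass@4.12.0 --save-dev

or bump the version range in your package.json and reinstall.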

Running mapbox-gl-js locally (unable to serve debug page)

Edit:
To summarize, I tried to follow only the steps listed in the two links below, as they apply to Windows:
https://github.com/mapbox/mapbox-gl-js/blob/master/CONTRIBUTING.md
https://github.com/stackgl/headless-gl#windows
Here I have reattached the screenshot of the commands that I had problems with:
https://imgur.com/RCQCNU5
One more step I took that I should mention: I also did not find headless-gl when I downloaded the repository. When the headless-gl install command did not work, I manually copied the file into my local copy under the node_modules directory, thinking it would work, but it didn't solve anything. I do think this is related to access issues, but I don't know what else I should try to get it working.
First, let's clarify your problem: you want a version of mapbox-gl.js that contains a recent bug fix.
Your best option is to just wait a couple of weeks for a release.
Failing that, you should build your own, from master. You don't need to set up a debug server for that. You can skip straight to the "Creating a Standalone Build" section.
If the steps for building on Windows don't work for some reason, you could set up a local virtual machine running Ubuntu and use that.
But honestly, just wait a couple of weeks. :)
Just in case someone else needs to run this on a local server:
After cloning, run:
npm install
npm run start-debug
It will start listening on port 9966.
Test the debug HTML files by going to
localhost:9966/debug/FILE_NAME_TO_TEST.html
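Put together, starting from a fresh clone (repository URL as on GitHub):

git clone https://github.com/mapbox/mapbox-gl-js.git
cd mapbox-gl-js
npm install
npm run start-debug
# then browse to http://localhost:9966/debug/FILE_NAME_TO_TEST.html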

Deploy resources using PowerShell DSC pull server

I am trying to deploy PowerShell modules from my HTTPS pull server but can't. I don't know what I'm missing here. Here's what I've already done or tried:
Set up an HTTPS-based pull server using the instructions outlined at https://msdn.microsoft.com/en-us/powershell/dsc/pullserver
Registered a pull client using the instructions at https://msdn.microsoft.com/en-us/powershell/dsc/pullclientconfignames
On my pull server I've placed modules under C:\Program Files\WindowsPowerShell\DscService\Modules as xWebAdministration_1.12.0.0.zip and xWebAdministration_1.12.0.0.zip.checksum
If I open xWebAdministration_1.12.0.0.zip, it contains DSCResources, Examples, Tests, HighQualityResourceKitPlan.md, README.md and xWebAdministration.psd1 at the root level; under DSCResources I have all the MSFT_* folders and other content.
When I run a custom configuration on my client node that requires the xWebAdministration module, I get a module-not-found exception.
I looked at client's event viewer for errors but don't see anything related.
Any help is appreciated.
Thanks!
Have you tried Publish-DSCModuleAndMof from xPSDesiredStateConfiguration?
You have to run Install-Module xPSDesiredStateConfiguration first.
You can find an example of using this here.
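A minimal sketch of that approach; the staging folder and module name below are illustrative assumptions:

# Install the helper module that ships Publish-DSCModuleAndMof
Install-Module xPSDesiredStateConfiguration
# Package the named modules and publish them, with generated checksums, to the pull server
Publish-DSCModuleAndMof -Source 'C:\DscStaging' -ModuleNameList @('xWebAdministration')

As I understand it, this produces the versioned .zip and .checksum files in the pull server's Modules folder, so you don't have to build them by hand.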

Advice on creating a self-contained project and distributing a web server with the source code

I need some advice on configuring a project so it works in development, staging and production environments:
I have a web app project, MainProject, that contains two sub-projects, ProjectA and ProjectB, as well as some common code, Common. It's in a Subversion repository. It's nearly all HTML, CSS and JavaScript.
In our current development environment we check MainProject out, then set up Apache virtual hosts to point at each of the sub-projects' directories, as paths within each project are relative to its root. We also have a build process that compiles each of the sub-projects into its own deliverable package, with the common code copied into each.
So I'm trying to make development of this project a bit easier. At the moment there is a lot of file-path configuration in the Apache httpd.conf files, as well as in the build.xml file and in a couple of other places too.
Ideally I'd like to be able to check the project out of SVN onto a fresh computer, with a fully configured web server included as part of the project, so it can be run from the checkout directory with very little extra configuration, on either a PC or a Mac. And I'd like anyone to be able to run the build to compile it too.
I'd love to hear from anyone who has done something like this, and any advice you have.
Thanks,
Paul
If you can add Python as a dependency, you can get a minimal HTTP server running in less than ten lines of code. If you need basic server-side code, there is a CGI server as well.
The following snippet is adapted from the BaseHTTPServer documentation:
import BaseHTTPServer

def run(server_class=BaseHTTPServer.HTTPServer,
        handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
    # Serve on all interfaces, port 8000
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()

run()
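Note that BaseHTTPServer is the Python 2 module; in Python 3 it was merged into http.server. For purely static content you can skip the script entirely and run the stock file server from the project root:

python -m SimpleHTTPServer 8000   # Python 2
python -m http.server 8000        # Python 3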
I've done this with Jetty, from within Java. Basically you write a simple Java class that starts Jetty (which is a small web server); you can then make this run via an Ant task. I used it with automated tests: Java code made requests to the server and checked the results in various ways.
Not sure it's appropriate here because you don't mention Java at all, so apologies if it's not the kind of thing you're looking for.