I have been trying to get my portfolio working on GitHub Pages with a custom domain; I have built everything with Sapper / Svelte. Locally everything works great, but when I deploy the site, I get my 404 error page when first loading the domain. If I then use the links to navigate the site, it works perfectly. What surprises me is that even the index works perfectly, but if I then reload the page, I get the 404 again.
I followed this Sapper and GitHub tutorial.
But I am using a CNAME file in the static folder (it is deployed at the root) to get the domain name to work, and I also changed the following places to include the domain.
In server.js I have the following line for the base url:
const dev = NODE_ENV === 'development';
const url = dev ? '/' : '/';

polka() // You can also use Express
  .use(
    url,
    compression({ threshold: 0 }),
    sirv('static', { dev }),
    sapper.middleware()
  )
In package.json I have the following:
"scripts": {
"dev": "sapper dev",
"build": "sapper build --legacy",
"export": "sapper export --basepath <custom-domain> --legacy",
"start": "node __sapper__/build",
"deploy": "npm run export && node ./scripts/gh-pages.js"
},
I have tried different combinations for the basepath and url, for example with and without https, and I also tried the GitHub repo name. I also tried it with and without the CNAME file.
I probably don't understand the basepath well enough, but the documentation was not extensive enough for a beginner like me.
Does anybody know what I'm doing wrong?
After looking into the issue with colleagues, it turns out that the problem was slightly different.
For a custom domain, no base path adjustments need to be made: no url prefix in server.js and no --basepath in package.json.
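With a custom domain, the export and deploy scripts therefore reduce to something like this (a sketch based on the scripts block above, showing only the affected entries):

  "export": "sapper export --legacy",
  "deploy": "npm run export && node ./scripts/gh-pages.js"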
The reason it did not update was that my gh-pages.js still used the wrong sapper export command.
scripts/gh-pages.js needs to look like:
const ghpages = require('gh-pages');

ghpages.publish(
  '__sapper__/export/',
  {
    branch: 'master',
    repo: 'https://github.com/<user>/<repo>.git',
    user: {
      name: '<user-name>',
      email: '<email-address>'
    }
  },
  () => {
    console.log('Deploy Complete!');
  }
);
I am currently using Next.js and multi zones for our web apps, and I'm hitting an issue where webpackDevMiddleware only sees the current app I am on for changes. I use Docker to create my network. I'm hoping to change the scope to watch all apps in my workspace and refresh when any of them change.
My main issue is that when I'm accessing app2 and make changes to app2, app1 doesn't see that changes have been made, so it doesn't refresh the screen to update the view from app2.
I did verify that when going directly to app2 and making a change, the page does refresh, but I'd like developers to access app1 and route to app2 from there. This will prevent them from needing to know which port (localhost:3000, localhost:3001, localhost:3002, etc.) to access for the right app.
Here is my next.config.js:
const { APP2_URL } = process.env;

module.exports = {
  webpackDevMiddleware: (config) => {
    config.watchOptions = {
      poll: 1000,
      aggregateTimeout: 300,
    };
    return config;
  },
  output: "standalone",
  async rewrites() {
    return [
      {
        source: "/app1",
        destination: "/",
      },
      {
        source: "/app2",
        destination: `${APP2_URL}/app2`,
      },
      {
        source: "/app2/:path*",
        destination: `${APP2_URL}/app2/:path*`,
      },
    ];
  },
};
Each web app is in its own Docker container, so I'm guessing I need to add additional settings to watch the remote container for app2. Any guidance to get me started would be appreciated.
This ended up being easier than anticipated; Next.js handles it itself. Confirmed it works as expected with Next.js version 12.2.5.
I integrated Vue Storefront with Magento 2. The frontend works fine, but product images do not display on the frontend. It throws the error: Unable to compile TypeScript: src/image/action/local/index.ts(27,18): error TS2339: Property 'query' does not exist on type 'Request<any, any, any, any>'. ImageMagick is also installed, and imgUrl in local.json is also defined.
Does anyone know why this error is displayed?
It is about this.req, which is of type Request from Express; it has a query property. Please make sure you have the yarn.lock from the original repo and reinstall dependencies.
If you are using Docker, you might need to add:
- './yarn.lock:/var/www/yarn.lock'
to the volumes section in docker-compose.nodejs.yml.
I have found a simple solution you can try: copy all your Magento 2 pub/media data into vue-storefront-api/var/magento-folder/pub/media, or create a symlink if you are working on localhost.
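If you go the symlink route, a hypothetical one-off Node script could look like this (both paths are placeholders for your actual Magento and API locations):

// create a directory symlink so the API can serve Magento's media files
const fs = require('fs');
fs.symlinkSync(
  '/path/to/magento2/pub/media',
  '/path/to/vue-storefront-api/var/magento-folder/pub/media',
  'dir'
);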
vue-storefront-api/config/local.json
"magento2": {
"imgUrl": "http://magento-domain/pub/media/catalog/product",
"assetPath": "/../var/magento-folder/pub/media",
}
vue-storefront/config/local.json
"images": {
"useExactUrlsNoProxy": false,
"baseUrl": "http://localhost:8080/img/",
"useSpecificImagePaths": false,
"paths": {
"product": "/catalog/product"
},
"productPlaceholder": "/assets/placeholder.jpg"
},
Then rerun the start command in both vue-storefront and vue-storefront-api.
I was wondering if anyone here has experience with implementing a service worker in SFCC/Demandware.
I generate a service worker with Webpack using the sw-precache-webpack-plugin.
The problem is: a service worker should be available from the root of the domain, i.e. site.com/sw.js.
JS files normally end up in the static/ folder.
Does anyone have an idea how to serve this JS file from the root of the project in Demandware/SFCC?
Unfortunately, registering a service worker under a scope that is at a higher path than the service worker file itself does not work (as stated on MDN):
The service worker will only catch requests from clients under the service worker's scope.
The max scope for a service worker is the location of the worker.
(Source: https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
Solution
Here is a suggestion for a working approach for serving "/sw.js" in Demandware (Salesforce):
Create a new controller (or pipeline), e.g. "ServiceWorker-GetFile"; the response should be the file content, which can be read from whatever source you wish (see the sketch after this list):
- a content asset (dw.content.ContentMgr.getContent());
- a library file (dw.content.ContentMgr.getContent() or directly reading a file with dw.io.File / dw.io.FileReader);
- even a site preference (although I wouldn't recommend it).
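A minimal sketch of such a controller, assuming SFRA-style controllers and a hypothetical content asset with ID "SERVICE_WORKER_JS" holding the service worker source (names and paths are placeholders, not a drop-in implementation):

'use strict';

// cartridge/controllers/ServiceWorker.js (hypothetical location)
var server = require('server');

server.get('GetFile', function (req, res, next) {
    var ContentMgr = require('dw/content/ContentMgr');

    // 'SERVICE_WORKER_JS' is an assumed content asset ID for the worker source
    var asset = ContentMgr.getContent('SERVICE_WORKER_JS');
    var body = (asset && asset.custom && asset.custom.body) ? asset.custom.body.markup : '';

    // serve it as JavaScript so the browser accepts it as a worker script
    res.setContentType('application/javascript');
    res.print(body);
    next();
});

module.exports = server.exports();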
Create an entry in Business Manager / Merchant Tools / SEO / Aliases to route "/sw.js" to "ServiceWorker-GetFile", i.e. use something along the lines of:
{
  ...
  "your-host": [
    ...,
    {
      "if-site-path": "/sw.js",
      "pipeline": "ServiceWorker-GetFile"
    }
  ]
}
This may seem like unnecessary overhead, but it was the only way I could find for serving files with a root path in the URI.
Serving other root files as well
By expanding the controller (renaming it to, say, "Content-GetFile" and adding GET/POST parameters like "name" and/or "source"), this could conveniently be used for other files as well ("/manifest.json", "/.well-known/assetlinks.json", etc.). In the next example of Business Manager / ... / Aliases, let Content-GetFile accept two parameters: "name" (which would be a file name in the content library or a content asset ID) and "source" (which would be "file" or "asset"):
...
{
  "if-site-path": "/sw.js",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "/ServiceWorker/sw.js",
    "source": "file"
  }
},
{
  "if-site-path": "/manifest.json",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "MANIFEST_JSON",
    "source": "asset"
  }
}
Note that your code should appropriately handle the base paths of the resources (e.g. "/ServiceWorker/sw.js" from the above example does not say much; you should know whether this is a path in a content library or a path relative to "cartridges//static/default/js/").
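Extending the earlier sketch, the "params" from the alias entry arrive as query parameters; a hypothetical dispatch on "source" could look like this (again only a sketch, and the library path handling is an assumption):

'use strict';

var server = require('server');

server.get('GetFile', function (req, res, next) {
    var name = req.querystring.name;
    var source = req.querystring.source;
    var body = '';

    if (source === 'asset') {
        // "name" is treated as a content asset ID
        var ContentMgr = require('dw/content/ContentMgr');
        var asset = ContentMgr.getContent(name);
        body = (asset && asset.custom.body) ? asset.custom.body.markup : '';
    } else if (source === 'file') {
        // assumption: "name" is a path inside the content library
        var File = require('dw/io/File');
        var FileReader = require('dw/io/FileReader');
        var file = new File(File.LIBRARIES + name); // name already starts with '/'
        if (file.exists()) {
            var reader = new FileReader(file, 'UTF-8');
            var line = reader.readLine();
            while (line !== null) {
                body += line + '\n';
                line = reader.readLine();
            }
            reader.close();
        }
    }

    // naive content-type guess; a real implementation should map extensions properly
    res.setContentType(name && name.indexOf('.json') > -1 ? 'application/json' : 'application/javascript');
    res.print(body);
    next();
});

module.exports = server.exports();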
Dynamic content
Since the suggested approach uses a controller, you can dynamically process the content before serving it to the user (e.g. if you need to add/remove the "/v12435145145/" part from DMW links). The sky is the limit. :)
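For instance, a hypothetical one-liner inside the controller, applied to the body before it is written to the response:

// hypothetical: strip a "/v<digits>/" segment from links in the served content
body = body.replace(/\/v\d+\//g, '/');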
I'm currently messing around with the service workers on DW as well.
In my case I have directly added the script inside a footer.isml file like this:
<script>
    // init service worker
    if ('serviceWorker' in navigator) {
        window.addEventListener('load', () => {
            navigator.serviceWorker
                .register("${URLUtils.staticURL('/lib/sw/sw.js')}")
                .then(registration => {
                    console.log(
                        `Service Worker registered! Scope: ${registration.scope}`
                    );
                })
                .catch(err => {
                    console.log(`Service Worker registration failed: ${err}`);
                });
        });
    }
</script>
This works for me; at least I can see the "Service Worker registered!" message.
I also had some issues due to the SSL certificate, since my development environment doesn't have a proper SSL certificate but uses HTTPS routes, so Chrome was complaining about it. I needed to run Chrome via the terminal using this command:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/foo --ignore-certificate-errors --unsafely-treat-insecure-origin-as-secure=[YOUR DOMAIN]
Unfortunately, I'm not able to make any line of code inside that service worker file work. I even tried Safari, since it has a Service Workers option in the Develop menu, but it's not showing any service worker running.
I hope it helps you.
I'm trying to do a navigation test in Protractor and don't see any consistency between the baseUrl in the config and the URL used in the test.
protractor.conf.js
exports.config = {
  baseUrl: 'http://localhost:4200/'
};
navbar.e2e-spec.ts
import { browser } from 'protractor';

import { NavbarPage } from './navbar.po';
import * as protractor from './../protractor.conf.js';

describe('navbar', () => {
  let navbar: NavbarPage;
  const baseUrl = protractor.config.baseUrl;

  beforeEach(() => {
    navbar = new NavbarPage();
    browser.get('/');
  });

  it(`should see showcase nav item, be able to (click) it,
      and expect to be navigated to showcase page`, () => {
    const anchorShowcase = navbar.anchorShowcase;
    expect(anchorShowcase.isDisplayed()).toBe(true);
    anchorShowcase.click();
    browser.waitForAngular();
    // baseUrl already ends with a slash, so don't prepend another one
    expect(browser.getCurrentUrl()).toBe(baseUrl + 'showcase');
  });
});
However, when I run the e2e test, it uses a different port:
** NG Live Development Server is listening on localhost:49154, open your browser on http://localhost:49154/ **
Why is the test URL set to port 49154? This apparently is the default if you start a new angular-cli project: https://github.com/angular/angular-cli
How can I get control over the baseUrl? Or is http://localhost:49154/ safe to use for all my Angular CLI projects?
By default, when you do ng e2e, the command takes the --serve value as true. This means it will build and serve the app at a particular URL, not the baseUrl you passed in protractor.conf.js.
That is why you are getting a random URL like http://localhost:49154/ served when testing your app.
Now, as you don't want a build during the test and want to test an existing build (URL) like http://localhost:4200/, you need to pass --no-serve on the command line, and it will pick up baseUrl from protractor.conf.js.
You can also pass a base URL on the command line like below; note that this is not baseUrl but --base-href=:
ng e2e --no-serve --base-href=https://someurl.com:8080
When running Angular CLI's ng e2e command, the wiki states that the default port will be random, as seen here under the serve submenu:
https://github.com/angular/angular-cli/wiki/e2e
The e2e command can take all the same arguments as serve, so to keep the port the same, just pass --port my-port-number to the ng e2e command.
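For example (the port number here is arbitrary, just an illustration):
ng e2e --port 4200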
As far as that port being safe to use: I wouldn't rely on it, since it is just a random port after all. I would stick to the default unless you have a use case for changing it. The port is mainly relevant for the dev server, not so much for wherever the production code runs.
Aniruddha Das's solution doesn't work anymore, as this option isn't there from Angular CLI 6.x onwards; you can try the following:
ng e2e --dev-server-target=
Please see the following reference.
Alright, I've tried to look up my question on Stack Overflow, but I can't find anything that helps me, since everything I've tried doesn't have any effect on the result (Application error).
I'm really stumped because the app works perfectly fine on my localhost, but I can't get it to work on Heroku; it just gives me an Application error, so I have no idea what the issue is.
My package.json file looks like this:
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "nodemon --use_strict index.js",
"bundle": "webpack"
},
I've already tried to change "nodemon" to "node" and got rid of --use_strict, and it runs perfectly fine on localhost, but the Heroku app still gives me an Application Error.
In index.js, the only thing that I can think of being bad (I changed it and it runs here):
// start the server
app.listen(3000, () => {
  console.log('Server is running.');
});
webpack.config.js:
const path = require('path');

module.exports = {
  // the entry file for the bundle
  entry: path.join(__dirname, '/client/src/app.jsx'),
  // the bundle file we will get in the result
  output: {
    path: path.join(__dirname, '/client/dist/js'),
    filename: 'app.js',
  },
  module: {
    // apply loaders to files that meet given conditions
    loaders: [{
      test: /\.jsx?$/,
      include: path.join(__dirname, '/client/src'),
      loader: 'babel-loader',
      query: {
        presets: ["react", "es2015"]
      }
    }],
  },
  // start Webpack in watch mode, so Webpack will rebuild the bundle on changes
  watch: true
};
It deployed properly after git push heroku master:
https://c1.staticflickr.com/3/2873/33519283263_3d9a711311_z.jpg
I'm pretty much trying to make this app work on Heroku:
https://vladimirponomarev.com/blog/authentication-in-react-apps-creating-components
I think a possible problem might be that you have to run "npm run bundle" in one shell and "npm start" in another shell.
Another thing: this app had a lot of things that were npm installed manually into node_modules, which Heroku does not accept; if I try to push it to GitHub, it will crash. So I'm thinking that might be an issue as well, though I have no idea how to get around that.
This also uses Express and MongoDB. I added my MongoDB info into the index.js file and ran the application, and it worked perfectly fine; after checking the DB, the correct info was also inside it, so it's not that either.
You should use process.env.PORT instead of the custom port 3000.
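A minimal sketch of the change in index.js, assuming an Express-style app as in the question (Heroku injects the port at runtime through the PORT environment variable):

// use Heroku's assigned port, falling back to 3000 for local development
const port = process.env.PORT || 3000;

app.listen(port, () => {
  console.log(`Server is running on port ${port}.`);
});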
Check that you have a MongoDB add-on provisioned; you can get one for free, but with limited storage!
And use the config vars of that database, if you haven't done that already!
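For example, a sketch assuming the add-on exposes its connection string through a config var (the exact variable name, MONGODB_URI here, depends on the add-on):

// hypothetical: read the MongoDB connection string from a Heroku config var
const mongoUri = process.env.MONGODB_URI || 'mongodb://localhost:27017/myapp';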