I am currently using Next.js and multi-zones for our web apps and I'm hitting an issue where webpackDevMiddleware only sees changes in the app I am currently on. I use Docker to create my network. I'm hoping to change the scope to watch all apps in my workspace and refresh when any of them change.
My main issue is that when I'm accessing app2 through app1 and make changes to app2, app1 doesn't detect that changes were made, so the screen never refreshes to show the updated view from app2.
I did verify that when going directly to app2 and making a change, the page does refresh, but I'd like developers to access app1 and route to app2 from there. This saves them from needing to know which port (localhost:3000, localhost:3001, localhost:3002, etc.) serves which app.
Here is my next.config.js:
const { APP2_URL } = process.env;

module.exports = {
  webpackDevMiddleware: (config) => {
    config.watchOptions = {
      poll: 1000,
      aggregateTimeout: 300,
    };
    return config;
  },
  output: "standalone",
  async rewrites() {
    return [
      {
        source: "/app1",
        destination: "/",
      },
      {
        source: "/app2",
        destination: `${APP2_URL}/app2`,
      },
      {
        source: "/app2/:path*",
        destination: `${APP2_URL}/app2/:path*`,
      },
    ];
  },
};
Each web app is in its own Docker container, so I'm guessing I need additional settings to watch the remote container for app2. Any guidance to get me started would be appreciated.
This ended up being easier than anticipated: Next.js handles it out of the box. Confirmed it works as expected with Next.js version 12.2.5.
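If you are on a Next.js version where the webpackDevMiddleware option in next.config.js is deprecated, the same polling-based watch settings can be applied through the webpack hook instead. A minimal sketch, assuming you only want polling in development (the option values mirror the config above):

// next.config.js
module.exports = {
  webpack: (config, { dev }) => {
    if (dev) {
      // Poll for changes instead of relying on native file-system events,
      // which often don't propagate through Docker bind mounts.
      config.watchOptions = {
        poll: 1000,
        aggregateTimeout: 300,
      };
    }
    return config;
  },
};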
I have been trying to get my portfolio working on GitHub Pages with a custom domain; I built everything with Sapper / Svelte. Locally everything works great, but when I deploy the site, I get my 404 error page when first loading the domain. If I then use the links to navigate the site, it works perfectly. What surprises me is that even the index works perfectly, but if I then reload the page, I get the 404 again.
I followed this Sapper and GitHub tutorial, but I am using a CNAME file in the static folder (it is deployed at the root) to get the domain name to work, and I also changed the following places to include the domain.
In server.js I have the following line for the base url:
const dev = NODE_ENV === 'development';
const url = dev ? '/' : '/';

polka() // You can also use Express
  .use(
    url,
    compression({ threshold: 0 }),
    sirv('static', { dev }),
    sapper.middleware()
  )
In package.json I have the following:
"scripts": {
"dev": "sapper dev",
"build": "sapper build --legacy",
"export": "sapper export --basepath <custom-domain> --legacy",
"start": "node __sapper__/build",
"deploy": "npm run export && node ./scripts/gh-pages.js"
},
I have tried different combinations for the basepath and url, for example with and without https, and I also tried the GitHub repo name, as well as with and without the CNAME file.
I probably don't understand the basepath well enough, but the documentation was not extensive enough for a beginner like me.
Does anybody know what I'm doing wrong?
After looking into the issue with colleagues, it turns out that the problem was slightly different.
For a custom domain, no base path adjustments need to be made: no url in server.js and no --basepath in package.json.
The reason it did not update was that my gh-pages.js still used the wrong sapper export command.
scripts/gh-pages.js needs to look like:
ghpages.publish(
  '__sapper__/export/',
  {
    branch: 'master',
    repo: 'https://github.com/<user>/<repo>.git',
    user: {
      name: '<user-name>',
      email: '<email-address>'
    }
  },
  () => {
    console.log('Deploy Complete!')
  }
)
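For completeness, with a custom domain the export script from the package.json above then simply drops the --basepath flag; a sketch of the relevant line, keeping the --legacy flag from the original:

"export": "sapper export --legacy",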
I'm working on a GatsbyJS site using gatsby-plugin-offline, which is available at example.com, and I would like to make the PDF files that I link to on example.com, but which live at download.example.com/example.pdf, available offline. Is that possible?
Yes, it's possible. I'm not 100% familiar with gatsby-plugin-offline's configuration, but it looks like https://www.gatsbyjs.org/packages/gatsby-plugin-offline/#available-options describes a process for appending additional service worker logic to the end of its default configuration:
plugins: [
  {
    resolve: `gatsby-plugin-offline`,
    options: {
      appendScript: require.resolve(`src/custom-sw-code.js`),
    },
  },
]
Then in src/custom-sw-code.js:
workbox.routing.registerRoute(
  ({ url }) => url.pathname.endsWith('.pdf'),
  // Use StaleWhileRevalidate, CacheFirst, etc. as desired.
  new workbox.strategies.StaleWhileRevalidate({ cacheName: 'pdfs' })
);
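Since the PDFs live on a different subdomain, it may be safer to also match on the origin, so the route doesn't capture PDFs from arbitrary hosts. A minimal sketch, assuming the files are served from https://download.example.com:

workbox.routing.registerRoute(
  ({ url }) =>
    url.origin === 'https://download.example.com' &&
    url.pathname.endsWith('.pdf'),
  // Note: cross-origin responses may be stored as opaque responses unless
  // the file server sends appropriate CORS headers.
  new workbox.strategies.StaleWhileRevalidate({ cacheName: 'pdfs' })
);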
I have a very simple Next.js 9.3.5 project.
For now, it has a single pages/users page and a single pages/api/users API route that retrieves all users from a local MongoDB table.
It builds fine locally using 'next dev'.
But it fails on 'next build' with an ECONNREFUSED error.
pages/users
import fetch from "node-fetch"
import Link from "next/link"

export async function getStaticProps({ params }) {
  const res = await fetch(`http://${process.env.VERCEL_URL}/api/users`)
  const users = await res.json()
  return { props: { users } }
}

export default function Users({ users }) {
  return (
    <ul>
      {users.map(user => (
        <li key={user.id}>
          <Link href="/user/[id]" as={`/user/${user._id}`}>
            <a>{user.name}</a>
          </Link>
        </li>
      ))}
    </ul>
  );
}
pages/api/users
import mongoMiddleware from "../../lib/api/mongo-middleware";
import apiHandler from "../../lib/api/api-handler";

export default mongoMiddleware(async (req, res, connection, models) => {
  const { method } = req

  apiHandler(res, method, {
    GET: (response) => {
      models.User.find({}, (error, users) => {
        if (error) {
          connection.close();
          response.status(500).json({ error });
        } else {
          connection.close();
          response.status(200).json(users);
        }
      })
    }
  });
})
yarn build
yarn run v1.22.4
$ next build
Browserslist: caniuse-lite is outdated. Please run next command `yarn upgrade`
> Info: Loaded env from .env
Creating an optimized production build
Compiled successfully.
> Info: Loaded env from .env
Automatically optimizing pages ..
Error occurred prerendering page "/users". Read more: https://err.sh/next.js/prerender-error:
FetchError: request to http://localhost:3000/api/users failed, reason: connect ECONNREFUSED 127.0.0.1:3000
Any ideas what is going wrong, particularly when it works fine with 'next dev'?
Thank you.
I tried the same a few days ago and it didn't work, because when we build the app, localhost is not available. Check this part of the docs (https://nextjs.org/docs/basic-features/data-fetching#write-server-side-code-directly), which says: "You should not fetch an API route from getStaticProps..."
(Next.js 9.3.6)
Just to be even more explicit on top of what Ricardo Canelas said:
When you do next build, Next goes over all the pages it detects that it can build statically, i.e. all pages that don't define getServerSideProps, but which possibly define getStaticProps and getStaticPaths.
To build those pages, Next calls getStaticPaths to decide which pages you want to build, and then getStaticProps to get the actual data needed to build the page.
Now, if in either of getStaticPaths or getStaticProps you do an API call, e.g. to a JSON backend REST server, then this will get called by next build.
However, if you've integrated both frontend and backend nicely into a single server, chances are that you have just quit your development server (next dev) and are now trying out a build to see if things still work, as a sanity check before deployment.
So in that case, the build will try to access your server, and it won't be running, so you get an error like that.
The correct approach is, instead of going through the REST API, to do database queries directly from getStaticPaths or getStaticProps. That code only ever runs on the server, never on the client, so it will also be slightly more efficient than making a useless round trip to the API, which then queries the database indirectly. I have a demo that does that here: https://github.com/cirosantilli/node-express-sequelize-nextjs-realworld-example-app/blob/b34c137a9d150466f3e4136b8d1feaa628a71a65/lib/article.ts#L4
export const getStaticPathsArticle: GetStaticPaths = async () => {
  return {
    fallback: true,
    paths: (await sequelize.models.Article.findAll()).map(
      article => {
        return {
          params: {
            pid: article.slug,
          }
        }
      }
    ),
  }
}
Note how in that example, both getStaticPaths and getStaticProps (here generalized as HoCs for reuse; see also: Module not found: Can't resolve 'fs' in Next.js application) do direct database queries via the sequelize ORM and don't make any HTTP calls to the external server API.
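A minimal sketch of the matching getStaticProps, assuming the same sequelize models and the pid route parameter from the example above (the toJSON serialization is an assumption):

export const getStaticPropsArticle: GetStaticProps = async ({ params }) => {
  // Query the database directly instead of fetching /api/... over HTTP.
  const article = await sequelize.models.Article.findOne({
    where: { slug: params.pid },
  })
  return {
    props: { article: article ? article.toJSON() : null },
  }
}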
You should then only make client-side API calls from the React components in the browser after the initial page load (i.e. from useEffect et al., as sketched below), not from getStaticPaths or getStaticProps. BTW, note that, as mentioned at: What is the difference between fallback false vs true vs blocking of getStaticPaths with and without revalidate in Next.js SSR/ISR?, reducing client calls as much as possible and prerendering on the server greatly reduces application complexity.
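A minimal sketch of such a client-side call, assuming an /api/users route like the one in the question:

import { useEffect, useState } from "react"

export default function Users() {
  const [users, setUsers] = useState([])

  // Runs only in the browser, after the initial render.
  useEffect(() => {
    fetch("/api/users")
      .then(res => res.json())
      .then(setUsers)
  }, [])

  return (
    <ul>
      {users.map(user => (
        <li key={user._id}>{user.name}</li>
      ))}
    </ul>
  )
}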
I was wondering if anyone here has experience with implementing a service worker in SFCC/Demandware.
I generate a service worker with Webpack using sw-precache-webpack-plugin.
The problem is that a service worker should be available from the root of the domain, i.e. site.com/sw.js.
JS files normally end up in the static/ folder.
Does anyone have an idea how to serve this JS file from the root of the project in Demandware/SFCC?
Unfortunately, registering a service worker under a scope that is an upper path relative to the service worker file itself does not work (as stated on MDN):
The service worker will only catch requests from clients under the service worker's scope.
The max scope for a service worker is the location of the worker.
(Source: https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
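In other words, if sw.js is served from somewhere under /static/, trying to claim the root scope fails by default. A quick illustration (the path below is just an example of a typical static asset location):

// This rejects with a SecurityError when sw.js is served from /static/lib/sw/:
navigator.serviceWorker
  .register('/static/lib/sw/sw.js', { scope: '/' })
  .catch(err => console.error(err));
// The server would have to send a "Service-Worker-Allowed: /" header with
// sw.js for the broader scope to be permitted.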
Solution
Here is a suggestion for a working approach for serving "/sw.js" in Demandware (Salesforce):
Create a new controller (or pipeline), e.g. "ServiceWorker-GetFile"; the response should be the file content, which can be read from whatever source you wish (a sketch follows this list):
Content asset (dw.content.ContentMgr.getContent());
Library file (dw.content.ContentMgr.getContent() or directly reading a file with dw.io.File / dw.io.FileReader);
even a Site Preference (although I wouldn't recommend it).
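A minimal sketch of such a controller in SFRA style, assuming the service worker source is stored in a content asset with the standard body attribute (the controller name, asset ID, and attribute are assumptions):

'use strict';

var server = require('server');

server.get('GetFile', function (req, res, next) {
    var ContentMgr = require('dw/content/ContentMgr');
    // Hypothetical content asset ID holding the service worker source.
    var asset = ContentMgr.getContent('sw-js');

    res.setContentType('application/javascript');
    res.print(asset && asset.custom.body ? asset.custom.body.markup : '');
    next();
});

module.exports = server.exports();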
Create an entry in Business Manager / Merchant Tools / SEO / Aliases to route "/sw.js" to "ServiceWorker-GetFile", i.e. use something along the lines of:
{
  ...
  "your-host": [
    ...,
    {
      "if-site-path": "/sw.js",
      "pipeline": "ServiceWorker-GetFile"
    }
  ]
}
This may seem like unnecessary overhead, but it was the only way I could find for serving files with a root path in the URI.
Serving other root files as well
By expanding the controller (renaming it to, say, "Content-GetFile" and adding GET/POST parameters like "name" and/or "source"), this could conveniently be used for other files as well ("/manifest.json", "/.well-known/assetlinks.json", etc.). In the next example of Business Manager / ... / Aliases, Content-GetFile accepts two parameters: "name" (which would be a file name in the content library or a content asset ID) and "source" (which would be "file" or "asset"):
...
{
  "if-site-path": "/sw.js",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "/ServiceWorker/sw.js",
    "source": "file"
  }
},
{
  "if-site-path": "/manifest.json",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "MANIFEST_JSON",
    "source": "asset"
  }
}
Note that your code should handle the base paths of the resources appropriately (e.g. "/ServiceWorker/sw.js" from the above example is not self-explanatory; you should know whether this is a path in a content library or a path relative to "cartridges//static/default/js/").
Dynamic content
Since the suggested approach uses a controller, you can dynamically process the content before serving it to the user (e.g. if you need to add/remove the "/v12435145145/" part from DMW links). The sky is the limit. :)
I'm currently messing around with service workers on DW as well.
In my case, I have added the script directly inside a footer.isml file like this:
<script>
//init service worker
if ('serviceWorker' in navigator) {
window.addEventListener('load', () => {
navigator.serviceWorker
.register("${URLUtils.staticURL('/lib/sw/sw.js')}")
.then(registration => {
console.log(
`Service Worker registered! Scope: ${registration.scope}`
);
})
.catch(err => {
console.log(`Service Worker registration failed: ${err}`);
});
});
}
</script>
This works for me, or at least I can see the Service Worker registered message.
I also had some issues due to the SSL certificate, since my development environment doesn't have a proper SSL certificate but uses HTTPS routes, so Chrome was complaining about it. I needed to run Chrome via the terminal using this command:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/foo --ignore-certificate-errors --unsafely-treat-insecure-origin-as-secure=[YOUR DOMAIN]
Unfortunately, I'm not able to get any line of code inside that service worker file to run. I even tried Safari, since it has a Service Workers option in the Develop menu, but it's not showing any service worker running.
I hope this helps you.
I am phasing Ember into a project that has its content linked from the server root (as it is in prod).
E.g. I have HTML files with links like this:
<img src="/content/foo.svg">
How can I set up Ember CLI so that these URLs work when I run ember server, without having to move the ember-cli project to the directory in my file system that contains /content? I could get around this by moving content into the Ember folder, but I don't want to do that at present.
my folder structure:
/content
/anotherFolder
/theEmberCliApp
/app
/etc etc..
but when I run it I get this error:
[Report Only] Refused to connect to 'ws://127.0.0.1:35729/livereload' because it violates the following Content Security Policy directive: "connect-src 'self' ws://localhost:35729 ws://0.0.0.0:35729".
livereload.js?ext=Chrome&extver=2.0.9:193 __connector.Connector.Connector.connect
livereload.js?ext=Chrome&extver=2.0.9:193 Connector
livereload.js?ext=Chrome&extver=2.0.9:176 LiveReload
livereload.js?ext=Chrome&extver=2.0.9:862 (anonymous function)
livereload.js?ext=Chrome&extver=2.0.9:1074 (anonymous function)
I think the issue is this: baseURL: '../../'. How can I get around this? For other non-Ember sites I just point Apache's httpd config to the location of the parent of /content, but I don't want to stick the whole ember-cli project in there.
my environment.js:
/* jshint node: true */

module.exports = function(environment) {
  var ENV = {
    modulePrefix: 'ember-app',
    environment: environment,
    baseURL: '../../',
    locationType: 'auto',
    EmberENV: {
      FEATURES: {
        // Here you can enable experimental features on an ember canary build
        // e.g. 'with-controller': true
      }
    },
    APP: {
      // Here you can pass flags/options to your application instance
      // when it is created
    }
  };

  if (environment === 'production') {
  }

  return ENV;
};