NextJS app builds fine locally but fails when deploying to Vercel (FetchError) - deployment

I'm fairly new to web dev and trying to build full-stack for the first time.
I've built a simple blog using Next as the frontend and Strapi as the backend. The app works fine locally but I'm stuck at deployment. I figure it's some sort of connection issue with the backend (which is still running locally), or I'm totally missing something on the frontend. Is it that I need to make Strapi available online first and then use that domain in my fetch?
Vercel throws this error during build:
FetchError: request to http://localhost:1337/api/places?populate=%2A failed, reason: connect ECONNREFUSED 127.0.0.1:1337
at ClientRequest.
This is how I'm getting data from Strapi
export async function getStaticProps() {
  // const postRes = await axios.get("http://localhost:1337/api/articles?populate=*");
  const reviewsRes = await fetchAPI("/articles", { populate: ["image", "place"] });
  return {
    // props: { reviews: postRes.data.data },
    props: { reviews: reviewsRes.data },
  };
}
I'm also using this line to bring in images and data
backgroundImage: `url(${process.env.NEXT_PUBLIC_STRAPI_API_URL}${place.attributes.image.data.attributes.url})`,
I've also tried changing localhost to 127.0.0.1:1337 in my env/config

You've actually answered your question in this line:
Is it that I need to make Strapi available online first and then use that domain in my fetch?
Yes! You have to deploy your Strapi backend first, and use the domain/URL inside your deployed NextJS frontend.
Strapi Deployment Documentation: https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/deployment.html
What you're doing with the backgroundImage URL is exactly what you need to do with your data-fetching URL.
Use that NEXT_PUBLIC_STRAPI_API_URL env variable so you don't have to change the code before deploying.
Locally, you can have a .env file in your NextJS root folder with something like this:
NEXT_PUBLIC_STRAPI_API_URL=http://localhost:1337/api
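For the data fetching, the fetchAPI helper should build its request URL from that same variable. The question doesn't show the fetchAPI implementation, so this is only a rough sketch of what it could look like (it assumes the qs package for building the query string):
// lib/api.js — hypothetical helper, not the OP's actual implementation
import qs from "qs";

export async function fetchAPI(path, urlParamsObject = {}) {
  // Turn { populate: ["image", "place"] } into a URL-encoded populate[...] query string
  const queryString = qs.stringify(urlParamsObject);
  const requestUrl = `${process.env.NEXT_PUBLIC_STRAPI_API_URL}${path}${queryString ? `?${queryString}` : ""}`;

  const response = await fetch(requestUrl);
  if (!response.ok) {
    throw new Error(`Strapi request failed with status ${response.status}`);
  }
  return response.json();
}
That way, switching the env variable between local and production changes where the data comes from without touching the code.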
Then, once you've deployed the NextJS app to Vercel, you have to set the NEXT_PUBLIC_STRAPI_API_URL variable in the project settings.
Set env variables in Vercel: https://vercel.com/docs/concepts/projects/environment-variables
Welcome to the web dev world! Hope it helps!

Related

Netlify deploy can't connect to Heroku backend

I've built a wee program that works fine when I run it locally. I've deployed the backend to Heroku, and I can access that either by going straight to the URL (http://gymbud-tracker.herokuapp.com/users) or when running the frontend locally. So far so good.
However, when I run npm run-script build and deploy it to Netlify, something goes wrong, and any attempt to access the server gives me the following error in the console:
auth.js:37 Error: Network Error
at e.exports (createError.js:16)
at XMLHttpRequest.p.onerror (xhr.js:99)
The action that is producing that error is the following, if it's relevant:
export const signin = (formData, history) => async (dispatch) => {
  try {
    const { data } = await api.signIn(formData);
    dispatch({ type: AUTH, data });
    history.push("../signedin");
  } catch (error) {
    console.log(error);
  }
};
I've been tearing my hair out trying to work out what changes when I build and deploy, but can't figure it out.
As I say, if I run the front end locally then it accesses the Heroku backend with no problem - no errors, and working exactly as I'd expect. The API call is correct, I believe: const API = axios.create({baseURL: 'http://gymbud-tracker.herokuapp.com/' });
I wondered if it was an issue with network access to the MongoDB database that Heroku is linked to, but it's open to "0.0.0.0/0" (I've not taken any security precautions yet, don't kill me!). The MongoDB database is actually in the same collection as other projects I've used, which haven't had this problem at all.
Any ideas what I need to do?
The front end is live here: https://gym-bud.netlify.app/
And the whole thing is deployed to GitHub here: https://github.com/gordonmaloney/gymbud
Your issue is CORS (Cross-Origin Resource Sharing). When I visit your site and inspect the page, I can see a CORS error in the JavaScript console, which is how I know this.
This error essentially means that your public-facing application (running live on Netlify) is trying to make an HTTP request from your JavaScript front-end to your Heroku backend deployed on a different domain.
CORS dictates which frontend origins are ALLOWED to make requests to your backend API.
What you need to do to fix this is to modify your Heroku application and have it return the appropriate Access-Control-Allow-Origin header. This article on MDN explains the header and how you can use it.
Here's a simple example of the header you could set on your Heroku backend to allow this to work:
Access-Control-Allow-Origin: *
Please be sure to read the MDN documentation, however, as this example will allow any front-end application to make requests to your Heroku backend when in reality, you'll likely want to restrict it to just the front-end domains you build.
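For example, if the Heroku backend happens to be an Express app (the question doesn't show the server code, so this is an assumption), the cors middleware can set that header for you; a minimal sketch:
// server.js on Heroku — sketch assuming an Express backend and the cors package
const express = require("express");
const cors = require("cors");

const app = express();

// Allow only the deployed frontend origin instead of the wildcard "*"
app.use(cors({ origin: "https://gym-bud.netlify.app" }));

app.get("/users", (req, res) => {
  res.json([]); // placeholder route for illustration
});

app.listen(process.env.PORT || 5000);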
God I feel so daft, but at least I've worked it out.
I looked at the console in a different browser (Edge), and it said it was blocking the request because of mixed content. I realised I had just missed the s in https in my API call, so it wasn't actually a CORS issue (I don't think?), just a typo on my part!
So I changed:
const API = axios.create({baseURL: 'http://gymbud-tracker.herokuapp.com' });
To this:
const API = axios.create({baseURL: 'https://gymbud-tracker.herokuapp.com' });
And now it is working perfectly ☺️
Thanks for your help! Even if it wasn't the issue here, I've definitely learned a lot more about CORS along the way, so that's good.

Vercel creates new DB connection for every request

I'm working on a new website, and although things were working well as we developed locally, we've run into an issue when we tried to deploy on Vercel. The app uses the Sapper framework for both the pages and an API, and a database in MongoDB Atlas that we access through Mongoose. The behavior we have locally is that we npm run dev and a single DB connection is made, which persists until we shut the app down.
When it gets deployed to Vercel, though, the code that makes the DB connection and prints that "DB connection successful" message, and is only supposed to run once, is instead run on every API request.
This seems to quickly get out of hand, reaching our database's limit of 500 connections:
As a result, after the website is used briefly, even by a single user, some of our API requests start failing with this error (we have the DB accepting any connection rather than an IP whitelist, so the suggestion the error gives isn't helpful):
Our implementation is that we have a call to mongoose.connect in a .js file:
mongoose.connect(DB, {
  useNewUrlParser: true,
  useCreateIndex: true,
  useFindAndModify: false,
  useUnifiedTopology: true
}).then(() => console.log("DB connection successful!")).catch(console.error);
and then we import that file in Sapper's server.js. The recommendation we've been able to find is "just cache the connection", but that hasn't been successful and seems to be more of a node-mongodb-native thing. Regardless, this is what we tried which didn't work better or worse locally, but also didn't fix the problems on Vercel:
let cachedDb = {};

exports.connection = async () => {
  if (cachedDb.isConnected)
    return;
  try {
    const db = await mongoose.connect(DB, {
      useNewUrlParser: true,
      useCreateIndex: true,
      useFindAndModify: false,
      useUnifiedTopology: true
    });
    cachedDb.isConnected = db.connections[0].readyState;
    console.log("DB connection successful!");
    return cachedDb;
  } catch (error) {
    console.log("Couldn't connect to DB", error);
  }
};
So... is there a way to make this work without replacing at least one of the pieces? The website isn't live yet so replacing something isn't the end of the world, but "just change a setting" is definitely preferred to starting from scratch.
Summary
Serverless Functions on Vercel work like self-contained processes. While it is possible to cache the connection "per function," it is not a good idea to deploy a serverful-ready library to a serverless environment. Here are a few questions that you need to answer:
Is your framework or DB library caching the connection?
Is your code prepared for Serverless?
What type of workload is Vercel optimized for?
Further Context
Vercel is an excellent platform for your frontend, using Serverless Functions as helpers. The CDN, in conjunction with the workflow, makes the deployment process very quick and allows you to move faster. Deploying a full-blown API or serverful workload will never be a good idea. Let's suppose I need to use MySQL with Vercel. Instead of mysql, you should use serverless-mysql, which is optimized for serverless primitives. Even with that in mind, it will probably be cheaper to just use a VM/container for the API, depending on the level of requests you are expecting. Therefore, we would end up with the following ideal solution:
Frontend (Vercel - Serverless) --> Backend (Serverful - External provider) --> DB
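For illustration, a minimal sketch of that serverless-oriented pattern using the serverless-mysql package (MySQL is only the example from the paragraph above, not the OP's MongoDB stack, and the env vars are placeholders):
// api/example.js — sketch of a Vercel Serverless Function using serverless-mysql
const mysql = require("serverless-mysql")({
  config: {
    host: process.env.MYSQL_HOST,
    user: process.env.MYSQL_USER,
    password: process.env.MYSQL_PASSWORD,
    database: process.env.MYSQL_DATABASE
  }
});

module.exports = async (req, res) => {
  const rows = await mysql.query("SELECT 1 AS ok");
  await mysql.end(); // lets the library manage and reuse connections between invocations
  res.json(rows);
};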
Disclaimer: At the moment, I work for Vercel.
If you are using the MongoDB Atlas cloud database, you can use the mongodb-data-api library, which is a wrapper around the MongoDB Atlas Data API. All data operations are performed over HTTPS, so there is no connection problem.
import { MongoDBDataAPI, Region } from 'mongodb-data-api'

const api = new MongoDBDataAPI({
  apiKey: '<your_mongodb_api_key>',
  appId: '<your_mongodb_app_id>'
})

api
  .findOne({
    dataSource: '<target_cluster_name>',
    database: '<target_database_name>',
    collection: '<target_collection_name>',
    filter: { name: 'Surmon' }
  })
  .then((result) => {
    console.log(result.document)
  })
The example code provided by NextJS says to cache the database connection, yet this is the issue that happens for me as well.
Both here
https://github.com/vercel/next.js/blob/canary/examples/with-mongodb-mongoose/utils/dbConnect.js
And here
https://github.com/vercel/next.js/blob/canary/examples/with-mongodb/util/mongodb.js
are caching the connection, and if I copy the example I get the same issue as the OP.
It also says here
https://nextjs.org/docs/basic-features/data-fetching#getstaticprops-static-generation
that I can interact directly with my database. Massively conflicting information: on one hand I'm told to cache the connection, while a member of the team tells me it's not suitable for this approach, despite the docs & examples telling me otherwise. Is this a bug-report type situation?
I was struggling with a similar issue, but I came across an example here:
https://github.com/vercel/next.js/blob/canary/examples/with-mongodb/util/mongodb.js
Apparently the trick is to use the global variable:
let cached = global.mongo

if (!cached) {
  cached = global.mongo = { conn: null, promise: null }
}
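Putting the whole pattern together for mongoose, roughly following that example (MONGODB_URI is a placeholder env var):
// utils/dbConnect.js — sketch of the global-caching pattern from the Next.js example
import mongoose from "mongoose";

let cached = global.mongoose;
if (!cached) {
  cached = global.mongoose = { conn: null, promise: null };
}

export default async function dbConnect() {
  // Reuse the connection if this serverless instance already opened one
  if (cached.conn) return cached.conn;

  if (!cached.promise) {
    cached.promise = mongoose.connect(process.env.MONGODB_URI, {
      useNewUrlParser: true,
      useUnifiedTopology: true
    });
  }
  cached.conn = await cached.promise;
  return cached.conn;
}
Note that this only caches per serverless instance: cold starts will still open new connections, it just prevents one connection per request within a warm instance.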

Prisma 1 + MongoDB Atlas deploy to Heroku returns error 404

I've deployed a Prisma 1 GraphQL server app on Heroku, connected to a MongoDB Atlas cluster.
Running prisma deploy locally with the default endpoint http://localhost:4466, the action runs successfully and all the schemas are generated correctly.
But if I change the endpoint to the Heroku remote host https://<myapp>.herokuapp.com, prisma deploy fails, returning this exception:
ERROR: GraphQL Error (Code: 404)
{
"error": "\n<html lang="en">\n\n<meta charset="utf-8">\nError\n\n\nCannot POST /management\n\n\n",
"status": 404
}
I think this could be related to an authentication problem, but I'm getting confused because I've defined both the security token in prisma.yml and the management API secret key in docker-compose.yml.
Here's my current configs if it could be helpful:
prisma.yml
# The HTTP endpoint for your Prisma API
# Tried with just https://<myapp>.herokuapp.com too, with the same result
endpoint: https://<myapp>.herokuapp.com/dinai/staging
secret: ${env:PRISMA_SERVICE_SECRET}
# Points to the file that contains your datamodel
datamodel: datamodel.prisma
databaseType: document
# Specifies language & location for the generated Prisma client
generate:
  - generator: javascript-client
    output: ../src/generated/prisma-client
# Ensures Prisma client is re-generated after a datamodel change.
hooks:
  post-deploy:
    - prisma generate
docker-compose.yml
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        managementApiSecret: ${PRISMA_MANAGEMENT_API_SECRET}
        databases:
          default:
            connector: mongo
            uri: mongodb+srv://${MONGO_DB_USER}:${MONGO_DB_PASSWORD}@${MONGO_DB_CLUSTER}/myapp?retryWrites=true&w=majority
            database: myapp
Plus, a weird situation happens too. In both cases, if I try to navigate the resulting API with GraphQL Playground, clicking on the "Schema" tab returns an error. On the other hand, the "Docs" tab is populated correctly. Apparently, the exception is blocking the script from finishing generating the rest of the schemas.
A little help by someone experienced with Prisma/Heroku would be awesome.
Thanks in advance.
To date, I'm still not clear on what was causing the exception in detail. But looking more deeply at the Prisma docs, I discovered that in version 1 you need to proxy the app through the Prisma Cloud.
So probably, deploying straight to Heroku without it was generating the main issue: basically, there wasn't any Prisma container service running on the server.
What I did was follow, step by step, the official doc about how to deploy your server on Prisma Cloud (here's the video version). As in the example shown in the guide, I already had my own project, which is actually split into two different apps: one for the client (front-end) and one for the API (back-end). So, instead of generating a new one, I pointed the back-end API endpoint to the remote URL of the Prisma server generated by the cloud (the Heroku container created by following the tutorial). Then, leaving the management API secret only in the Prisma server container configuration (which was generated automatically by the cloud) and the service secret only in the back-end app, I was finally able to run prisma deploy correctly and run my project remotely.

Hosting a static Sapper site in a subfolder in Google Cloud Storage

I have a page that uses fetch in onMount, which I export using sapper export and then upload to Google Cloud Storage to be hosted as a static site.
When a page is requested with a trailing / everything works great, but when it is requested without a trailing /, GCS redirects to /index.html. When this happens the fetch doesn't run; it looks like a second index.[hash].js file that contains the fetch isn't being requested. All the styles load and routing works fine.
I'm not worried about GCS redirecting to /index.html, that's expected. What I'm wondering is whether svelte/sapper is able to run normally if index.html is in the URL.
[EDIT]
Important info: I'm hosting the site under a subfolder! Hosting it at the root works perfectly, like joshnuss mentions below.
Before exporting, I updated src/server.js to this:
polka()
  .use(
    "/test1", // <----- THIS IS THE SUBFOLDER
    compression({ threshold: 0 }),
    sirv("static", { dev }),
    sapper.middleware()
  )
  .listen(PORT, err => {
    if (err) console.log("error", err);
  });
I export the site with the following command:
yarn run sapper export --legacy --basepath="/test1"

Hiding API key info from public facing github for Twitter bot running on Heroku? [duplicate]

This question already has answers here:
Deploying to Heroku with sensitive setting information
(2 answers)
Is it safe to publish an app on Heroku that has api keys on there?
(1 answer)
Closed 3 years ago.
I've been teaching myself node.js using some tutorials online. I successfully made a Twitter bot and deployed it using Heroku and everything works great.
However, my Twitter API keys are contained in a config.js file that is freely available in the GitHub repository that my Heroku app is linked to. I've since removed this sensitive data from GitHub.
I have searched for answers on this and have found a lot of conflicting and confusing solutions, and was hoping somebody could direct me to an easy-to-follow solution. If my API keys are not available in the Git repo, where do I store them and how do I instruct my app to retrieve them?
This is the main app.js file, note I've combined a couple of different tutorials and so what it does is provide a "Hello World" output on screen and also Tweets "Hello, learning node.js!" on my chosen Twitter account:
const http = require('http');
const port = process.env.PORT || 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/html');
  res.end('<h1>Hello World</h1>');
});

server.listen(port, () => {
  console.log(`Server running at port ` + port);
});

var Twit = require('twit');
var fs = require('fs'),
  path = require('path'),
  Twit = require('twit'),
  config = require(path.join(__dirname, 'config.js'));

var T = new Twit(config);

T.post('statuses/update', { status: 'Hello, learning node.js!' },
  function (err, data, response) {
    console.log(data);
  });
The config.js file referenced above looks like:
var config = {
  consumer_key: 'xxx',
  consumer_secret: 'xxx',
  access_token: 'xxx',
  access_token_secret: 'xxx'
}
module.exports = config;
This all works with the correct keys in the config.js file, but obviously this is not ideal security-wise!
I'm a bit of a novice here as you can tell, but keen to learn what the correct approach would be to resolve this. Many Thanks in advance!
Heroku lets you set environment variables (more details here), and you can get them with process.env.MY_ENV_VAR.
This is a recommended way of building applications, per the Twelve-Factor App.
I don't know a lot about Heroku, but I guess you can set environment variables.
And to have access to these variables on your dev machine, you can set them in a .env file or directly in your computer's environment variables. If you want to use a .env file, you'll need the npm dotenv module (and obviously add .env to your .gitignore).
For your example you could have the following .env file:
consumer_key=xxx
consumer_secret=xxx
access_token=xxx
access_token_secret=xxx
Then you can use them with process.env.VAR_NAME, so if you want the consumer key you can do process.env.consumer_key. Usually these variables are named in uppercase, though.
It's also common to set a NODE_ENV variable, which lets you determine whether you are running in development, production, test, ... mode.
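To actually load that .env file during local development, here's a small sketch assuming the dotenv package is installed (Heroku ignores this and injects its own config vars):
// At the very top of app.js, before anything reads process.env
if (process.env.NODE_ENV !== 'production') {
  require('dotenv').config();
}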
Thanks for this. I added the environment variables on Heroku (via the dashboard, not using the CLI), and then changed my config.js file to:
var config = {
  consumer_key: process.env.consumer_key,
  consumer_secret: process.env.consumer_secret,
  access_token: process.env.access_token,
  access_token_secret: process.env.access_token_secret
}
module.exports = config;