AWS nextjs-serverless set node_env = development - deployment

I would like to set NODE_ENV=development when I deploy with nextjs-serverless, in order to set up a staging server.
It seems the default NODE_ENV is production.
Thanks in advance.
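Not an answer from the thread, just a sketch of the general Serverless Framework pattern for passing NODE_ENV through to the deployed functions; it assumes a plain serverless.yml provider section rather than the serverless-next.js component inputs:
# serverless.yml (sketch): take NODE_ENV from the deploying shell,
# defaulting to development for the staging stack.
provider:
  stage: ${opt:stage, 'staging'}
  environment:
    NODE_ENV: ${env:NODE_ENV, 'development'}
It would then be deployed with something like NODE_ENV=development npx serverless deploy --stage staging.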

Related

Load environment variables when running locally via serverless offline

I want to load env variables from a .env file when running locally.
Here's my serverless.yaml file:
functions:
  health:
    handler: src/api/health.check
    name: ${self:provider.stackName}-health
    environment:
      USER_POOL_APP_CLIENT_ID: !Ref UserPoolClient
You see, it sets a user pool client ID, which gets created in the resources section, as an environment variable on a Lambda. This works perfectly fine when deployed, as expected.
However, when I try to run it locally via serverless-offline, no matter how I set the env variables, via dotenv or manually, they seem to get overridden by serverless; in this case all I see is "[object Object]".
Other workflows that I've seen load all env variables from files, like below:
functions:
  health:
    handler: src/api/health.check
    name: ${self:provider.stackName}-health
    environment:
      USER_POOL_APP_CLIENT_ID: ${env:USER_POOL_APP_CLIENT_ID}
But wouldn't this require us to have the variables for all stages stored locally in files?
I was hoping to store only the dev version locally and have all the rest resolved from the serverless file itself, automatically via !Ref as shown at the beginning.
So, how do I prevent serverless from populating/polluting my env variables when I run locally, while sticking to the first format?
Or are there other, better ways to handle this?
This happened to me with the new version of serverless-offline (v12.0.4).
My solution was to use: https://www.serverless.com/plugins/serverless-plugin-ifelse
See the example below:
serverlessIfElse:
  - If: '"${env:NODE_ENV}" == "development"'
    Set:
      functions.health.environment.USER_POOL_APP_CLIENT_ID: "${env:USER_POOL_APP_CLIENT_ID, ''}"
You can change it for your use case.
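A possible way to exercise that condition locally, assuming both serverless-plugin-ifelse and serverless-offline are registered under plugins in serverless.yaml, and with the client ID below as a placeholder local value:
# Sketch of a local run; the condition above only fires when NODE_ENV=development.
NODE_ENV=development USER_POOL_APP_CLIENT_ID=local-dev-client-id npx serverless offline start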

How to configure Quarkus Dev Services besides prod config?

I want to use Dev Services to start a Postgres DB on local application startup (Quarkus profile = local).
Dev Services are active in test and dev mode, so by not defining quarkus.datasource.jdbc.url it already works with the following config:
quarkus.datasource.db-kind=postgresql
quarkus.datasource.devservices.port=5432
quarkus.datasource.username=postgres
quarkus.datasource.password=postgres
However, I don't know how to configure this alongside a prod configuration that has quarkus.datasource.jdbc.url defined, e.g.:
%local.quarkus.datasource.devservices.enabled=true
%local.quarkus.datasource.db-kind=postgresql
%local.quarkus.datasource.devservices.port=5432
%local.quarkus.datasource.username=postgres
%local.quarkus.datasource.password=postgres
quarkus.datasource.db-kind=postgresql
quarkus.datasource.jdbc.url=jdbc:postgresql://myProdDB.com:5432/mydb
If I start the application with Quarkus profile = local, instead of spinning up Postgres in Docker, Quarkus falls back to the prod property quarkus.datasource.jdbc.url, which I don't want for my local dev environment.
If I prefix the prod properties with a prod profile like %prod.quarkus.datasource.jdbc.url, this fallback can be prevented; however, I want to follow the convention that prod properties are the default and thus not prefixed with a profile.
I have already tried, without success, to set %local.quarkus.datasource.jdbc.url to an empty value to prevent the fallback to quarkus.datasource.jdbc.url:
%local.quarkus.datasource.jdbc.url="" -> java.sql.SQLException: Driver does not support the provided URL: ""
%local.quarkus.datasource.jdbc.url=null -> java.sql.SQLException: Driver does not support the provided URL: null
%local.quarkus.datasource.jdbc.url= -> io.quarkus.runtime.configuration.ConfigurationException: Model classes are defined for the default persistence unit default but configured datasource default not found: the default EntityManagerFactory will not be created. To solve this, configure the default datasource. Refer to https://quarkus.io/guides/datasource for guidance.
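For comparison, here is a sketch of the %prod-prefix workaround described above, i.e. the convention the question is trying to avoid; it simply scopes the JDBC URL to the prod profile so the local profile has no URL and Dev Services can start Postgres:
# application.properties (sketch of the workaround, not the desired convention)
quarkus.datasource.db-kind=postgresql
%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://myProdDB.com:5432/mydb
%local.quarkus.datasource.devservices.enabled=true
%local.quarkus.datasource.devservices.port=5432
%local.quarkus.datasource.username=postgres
%local.quarkus.datasource.password=postgres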

Capistrano: organizing folders for different environments

I'm getting started with Capistrano and I want to better understand how I should organize the folder structure on the server.
Let's suppose I have two branches:
master
develop
These are visible respectively at:
www.example.org
develop.example.org
Currently (without Capistrano), on the server I have:
/home/sites/example.org/www
/home/sites/example.org/develop
But, with Capistrano, I will have only /home/sites/example.org/current.
How can I manage the "production/development" situation with Capistrano?
Thanks
You can override the deployment folder in your stage config. For example, you have the default deployment location in config/deploy.rb with set :deploy_to, '/home/sites/example.org/www'. Then you set up config/deploy/develop.rb and config/deploy/production.rb (these names are arbitrary and don't need to map to the branch names):
server 'servername', user: 'username', roles: %w(app db web)
set :deploy_to, '/home/sites/example.org/develop'
In general, anything you set in deploy.rb can be overridden in deploy/[env].rb.
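For completeness, a minimal sketch of how the two stage files could also pin their branches (Capistrano 3 style syntax is assumed; server names and paths are placeholders):
# config/deploy/production.rb (sketch)
server 'servername', user: 'username', roles: %w(app db web)
set :branch, 'master'
set :deploy_to, '/home/sites/example.org/www'
# config/deploy/develop.rb (sketch)
server 'servername', user: 'username', roles: %w(app db web)
set :branch, 'develop'
set :deploy_to, '/home/sites/example.org/develop'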

strongloop slc deploy env var complications

I've been deploying a loopback app via a custom init.d/app.conf script, using slc run --detach --cluster "cpu", but want to move to using strong-pm, as recommended.
But I've come across some limitations and am looking for any guidance on how to replicate the setup with which I'm currently familiar.
Currently I set app-specific configuration inside server/config.local.js and server/datasources.local.js, most importantly the port on which the app should listen for connections. This works perfectly using slc run for local development and remote deploys for staging; all I do is set different env vars for each distinct app:
datasources.local.js:
module.exports = {
  "mysqlDS": {
    name: "mysqlDS",
    connector: "mysql",
    host: process.env.PROTEUS_MYSQL_HOST,
    port: process.env.PROTEUS_MYSQL_PORT,
    database: process.env.PROTEUS_MYSQL_DB,
    username: process.env.PROTEUS_MYSQL_USER,
    password: process.env.PROTEUS_MYSQL_PW
  }
};
config.local.js:
module.exports = {
  port: process.env.PROTEUS_API_PORT
};
When I deploy using strong-pm, I am not able to control this port, and it always gets set to 3000+N, where N is just incremented based on the service ID assigned to the app when it's deployed.
So even when I deploy and then set env using
slc ctl -C http://localhost:8701 env-set proteus-demo PROTEUS_API_PORT=3033 PROTEUS_DB=demo APP_DOMAIN=demo.domain.com
I see that strong-pm completely ignores PROTEUS_API_PORT when it redeploys with the new env vars:
ENV has changed, restarting
Service "1" listening on 0.0.0.0:3001
Restarting next commit Runner: commit 1/deploy/default/demo-deploy
Start Runner: commit 1/deploy/default/demo-deploy
Request (status) of current Runner: child 20066 commit 1/deploy/default/demo-deploy
Request {"cmd":"status"} of Runner: child 20066 commit 1/deploy/default/demo-deploy
3001! Not 3033 as I want and as specified in config.local.js. Is there a way to control this explicitly? I don't want to have to run an slc inspection command to determine the port for my nginx upstream block each time I deploy an app. It would also be great to be able to specify the listen port by service name.
FWIW, this is on an AWS instance that will host demo and staging apps pointing to separate DBs and listening on different ports.
strong-pm only sets a PORT environment variable, which the app is responsible for honouring.
Based on loopback-boot/lib/executor:109, it appears that LoopBack actually prefers the PORT environment variable over the value in the config file. In that case it seems your best bet is to either:
pass a port to app.listen() yourself (see the sketch below), or
set one of the higher-priority environment variables such as npm_config_port (which would normally be set via npm start --port 1234).
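For the first option, here is a minimal sketch of what that could look like in a standard LoopBack 2.x server/server.js scaffold; PROTEUS_API_PORT comes from the question, everything else is generic scaffolding, so treat this as an illustration rather than the project's actual file:
var loopback = require('loopback');
var boot = require('loopback-boot');

var app = module.exports = loopback();

app.start = function() {
  // Prefer the app-specific variable so the PORT value that strong-pm
  // injects does not decide which port the server binds to.
  var port = process.env.PROTEUS_API_PORT || app.get('port');
  return app.listen(port, function() {
    app.emit('started');
    console.log('Web server listening on port %d', port);
  });
};

boot(app, __dirname, function(err) {
  if (err) throw err;
  if (require.main === module) app.start();
});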

capistrano ssh connection - doesn't work when run from cron or teamcity

I've been researching this all day and can't seem to find an answer, so I'm posting here. We are using Capistrano multistage to deploy our Ruby on Rails app and all is well until we get to automated deployments.
Right now, whenever this is run interactively there are no issues; the deploy completes just fine. We are now looking at using CI (TeamCity) to deploy to our staging environment after each successful build.
On the CI server, running "ssh server1" or "ssh deploy@server1" works without issue.
My ci stage looks like this, and again it works fine from the command line:
set :branch, "development"
set :rails_env, "staging"
set :user, "deploy"
$:.unshift(File.expand_path('./lib', ENV['rvm_path']))
require 'rvm/capistrano'
set :rvm_ruby_string, 'ruby-1.9.2-p290'
set :rvm_bin_path, "/usr/local/rvm/bin/"
default_run_options[:pty] = true
ssh_options[:verbose] = :debug
role :app, "server1"
role :web, "server1"
role :utility, "server2"
role :db, "server1", :primary => true
My deploy.rb is very large, but these are the relevant settings:
# Repo Settings
set :repository, "git@github.com:myrepo/myrepo.git"
set :scm, "git"
set :checkout, 'export'
set :copy_exclude, ".git/*"
set :deploy_via, :remote_cache
# General Settings
default_run_options[:pty] = true
set :ssh_options, { :forward_agent => true }
set :keep_releases, 20
set :use_sudo, false
In TeamCity, as a final build step, I have added a command-line task that simply runs "cap ci deploy:setup" as an easier test than a full deploy.
The cap log shows me this:
[03:27:38]: [Step 4/10] D, [2011-11-21T03:27:38.103284 #22035] DEBUG -- net.ssh.authentication.session[70ca88]: allowed methods: publickey,password
[03:27:38]: [Step 4/10] E, [2011-11-21T03:27:38.103328 #22035] ERROR -- net.ssh.authentication.session[70ca88]: all authorization methods failed (tried publickey)
The same thing seems to happen from a cron job; however, I don't have the logs there.
To me this seems like an environment issue, as TeamCity (and likely cron) aren't loading my full environment. I've tried specifying my SSH key directly in the Capistrano file, among other things, and it does not seem to have any effect.
The other odd thing is that on the remote server I am trying to deploy to, the auth.log shows no attempted connections, so troubleshooting this from the server side doesn't seem to be an option.
So my question is: how do I get this working? Any ideas on tests to determine where the issue is, or environment variables I need to set, are appreciated.
Any answer that leads me to the solution will be accepted.
Thanks.
I fixed this by modifying the SSH connection settings in .ssh/config, running ssh-agent with a specific PID, adding environment variables, and adding a build step to add the key to the ssh-agent in the running build.
http://petey5king.github.com/2011/12/09/deploying-with-capistrano-from-teamcity.html
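As an illustrative sketch only (the linked post has the actual details), the ssh-agent build step tends to look something like this; the key path is a placeholder:
# TeamCity command-line build step (sketch): start an agent, load the key,
# and run the Capistrano task in the same shell so the agent env vars apply.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/deploy_key
cap ci deploy:setup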