I've been deploying a loopback app via a custom init.d/app.conf script, using slc run --detach --cluster "cpu", but want to move to using strong-pm, as recommended.
But I've come across some limitations and am looking for any guidance on how to replicate the setup with which I'm currently familiar.
Currently I set app-specific configuration inside server/config.local.js and server/datasources.local.js, most importantly the PORT on which the app should listen for connections. This works perfectly using slc run for local development and with remote deploys for staging; all I do is set different env vars for each distinct app:
datasources.local.js:
module.exports = {
  "mysqlDS": {
    name: "mysqlDS",
    connector: "mysql",
    host: process.env.PROTEUS_MYSQL_HOST,
    port: process.env.PROTEUS_MYSQL_PORT,
    database: process.env.PROTEUS_MYSQL_DB,
    username: process.env.PROTEUS_MYSQL_USER,
    password: process.env.PROTEUS_MYSQL_PW
  }
};
config.local.js:
module.exports = {
  port: process.env.PROTEUS_API_PORT
};
When I deploy using strong-pm, I am not able to control this port, and it always gets set to 3000+N, where N is just incremented based on the service ID assigned to the app when it's deployed.
So even when I deploy and then set env using
slc ctl -C http://localhost:8701 env-set proteus-demo PROTEUS_API_PORT=3033 PROTEUS_DB=demo APP_DOMAIN=demo.domain.com
I see that strong-pm completely ignores PROTEUS_API_PORT when it redeploys with the new env vars:
ENV has changed, restarting
Service "1" listening on 0.0.0.0:3001
Restarting next commit Runner: commit 1/deploy/default/demo-deploy
Start Runner: commit 1/deploy/default/demo-deploy
Request (status) of current Runner: child 20066 commit 1/deploy/default/demo-deploy
Request {"cmd":"status"} of Runner: child 20066 commit 1/deploy/default/demo-deploy
3001! Not 3033, which is what I want and what is specified in config.local.js. Is there a way to control this explicitly? I do not want to have to run an slc inspection command to determine the port for my nginx upstream block each time I deploy an app. It would also be great to be able to specify the listen PORT by service name.
FWIW, this is on an AWS instance that will host demo and staging apps pointing to separate DBs and listening on different ports.
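For context, the kind of nginx upstream block in question would look something like the following (the upstream name and address are hypothetical; 3033 is the value PROTEUS_API_PORT is meant to pin):

upstream proteus_demo {
    # the port has to be known in advance, which is why I want to control what the app listens on
    server 127.0.0.1:3033;
}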
strong-pm only sets a PORT environment variable, which the app is responsible for honouring.
Based on loopback-boot/lib/executor:109, it appears that loopback actually prefers the PORT environment variable over the value in the config file. In that case it seems your best bet is to either:
pass a port into app.listen() yourself (see the sketch below)
set one of the higher priority environment variables such as npm_config_port (which would normally be set via npm start --port 1234).
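For the first option, here is a minimal sketch of how server/server.js could pass an explicit port to app.listen(), preferring the PROTEUS_API_PORT variable from the question over whatever PORT the process manager injects (an assumed layout, not the stock generated file):

// server/server.js (sketch; assumes the PROTEUS_API_PORT convention from the question)
var loopback = require('loopback');
var boot = require('loopback-boot');

var app = module.exports = loopback();

boot(app, __dirname, function(err) {
  if (err) throw err;

  // Prefer our own env var over the PORT set by the process manager,
  // falling back to whatever loopback-boot resolved from config.
  var port = process.env.PROTEUS_API_PORT || app.get('port');

  app.listen(port, function() {
    app.emit('started');
    console.log('Web server listening on port %s', port);
  });
});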
I'm trying to get a local env to run/debug Python Lambdas with VSCode (Windows). I'm using a provided HelloWorld example to get the hang of this, but I'm not able to invoke it.
Steps used to set up SAM and invoke the Lambda:
I have Docker installed and running
I have installed the SAM CLI
My AWS credentials are in place and working
I have no connectivity issues and I'm able to connect to AWS normally
I create the SAM application (HelloWorld) with all the files and resources; I didn't change anything.
I run "sam build" and it finishes successfully
I run "sam local invoke" and it fails with a timeout. I increased the timeout to 10s and it still times out. The HelloWorld Lambda code only prints and does nothing else, so I'm guessing the code isn't the problem, but something else relating to the container or the SAM env itself.
C:\xxxxxxx\lambda-python3.8>sam build
Your template contains a resource with logical ID "ServerlessRestApi", which is a reserved logical ID in AWS SAM. It could result in unexpected behaviors and is not recommended.
Building codeuri: C:\xxxxxxx\lambda-python3.8\hello_world runtime: python3.8 metadata: {} architecture: x86_64 functions: ['HelloWorldFunction']
Running PythonPipBuilder:ResolveDependencies
Running PythonPipBuilder:CopySource
Build Succeeded
Built Artifacts  : .aws-sam\build
Built Template   : .aws-sam\build\template.yaml
C:\xxxxxxx\lambda-python3.8>sam local invoke
Invoking app.lambda_handler (python3.8)
Skip pulling image and use local one: public.ecr.aws/sam/emulation-python3.8:rapid-1.51.0-x86_64.
Mounting C:\xxxxxxx\lambda-python3.8\.aws-sam\build\HelloWorldFunction as /var/task:ro,delegated inside runtime container
Function 'HelloWorldFunction' timed out after 10 seconds
No response from invoke container for HelloWorldFunction
Any hints on what's missing here?
Thanks.
Mostly, a Lambda function gets timed out because of some resource dependency. Are you using any external resource, maybe a DB connection or some REST API call?
Put more prints in lambda_handler (your function handler) before calling any resource; then you'll know exactly where it is waiting. Also increase the timeout to 1 minute or more, because most external resource calls over HTTPS have 30-second timeouts.
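For instance, a minimal sketch of the HelloWorld handler with extra prints around a hypothetical external call (call_external_resource is just a placeholder, not part of the generated sample):

import json

def lambda_handler(event, context):
    print("handler entered")                 # confirms the container actually reached the code
    # print("calling external resource...")  # add a print before each external call
    # result = call_external_resource()      # placeholder for a DB connection / REST API call
    # print("external resource returned")    # and one right after it
    print("returning response")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello world"}),
    }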
The log suggests that either the container wasn't started, or SAM couldn't connect to it.
Sometimes the hostname resolution on Windows can be affected by hosts file or system settings.
Try running the invoke command as follows (this will make the container ports bind to all interfaces):
sam local invoke --container-host-interface 0.0.0.0
...additionally try setting the container-host parameter (set to localhost by default):
sam local invoke --container-host-interface 0.0.0.0 --container-host host.docker.internal
The next piece of the puzzle is incorporating these settings into VSCode. This can be done in two places:
create samconfig.toml in the root dir of the project with the following contents. This will allow running sam local invoke from the terminal without having to add the command line argument:
version=0.1
[default.local_invoke.parameters]
container_host_interface = "0.0.0.0"
update launch configuration as follows to enable VSCode debugging:
...
"sam": {
"localArguments": ["--container-host-interface","0.0.0.0"]
}
...
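For reference, an assumed full launch.json entry using the AWS Toolkit's aws-sam debug type could look like the following; the template path and logical ID are guesses based on the default sam init HelloWorld project:

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "aws-sam",
      "request": "direct-invoke",
      "name": "HelloWorldFunction (local)",
      "invokeTarget": {
        "target": "template",
        "templatePath": "${workspaceFolder}/template.yaml",
        "logicalId": "HelloWorldFunction"
      },
      "sam": {
        "localArguments": ["--container-host-interface", "0.0.0.0"]
      }
    }
  ]
}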
I have set up an Arango instance on Kubernetes nodes, which were installed on a VM, as described in the ArangoDB docs (ArangoDB on Kubernetes). Keep in mind, I skipped the ArangoLocalStorage and ArangoDeploymentReplication steps. I can see 3 pods each of agents, coordinators, and dbservers in kubectl get pods.
The arango-cluster-ea service, however, shows the external IP as pending. I can use the master node's IP address and the service port to access the Web UI, connect to the DB, and make changes. But I am not able to access the Arango shell, nor am I able to use my Python code to connect to the DB. I am using the master node IP and the service port shown for arango-cluster-ea in services to try to make the Python code connect to the DB. Similarly, for arangosh, I am trying the command:
kubectl exec -it *arango-cluster-crdn-pod-name* -- arangosh --server.endpoint tcp://masternodeIP:8529
In the case of Python, since the Connection call is in a try block, it goes to the except block. In the case of arangosh, it opens the Arango shell with the error:
Cannot connect to tcp://masternodeIP:port
thus not connecting to the DB.
Any leads about this would be appreciated.
Posting this community wiki answer to point to the GitHub issue where this question was resolved.
Feel free to edit/expand.
Link to github:
Github.com: Arangodb: Kube-arangodb: Issues: 734
Here's how my issue got resolved:
To connect with arangosh, what worked for me was to use ssl:// instead of tcp:// in front of the localhost:8529 host-port combination in --server.endpoint. Here's the command that worked:
kubectl exec -it _arango_cluster_crdn_podname_ -- arangosh --server.endpoint ssl://localhost:8529
For the web browser, since my external access was based on a NodePort-type service, I put in the master node's IP and the 30000-range port number that was generated (in my case, it was 31200).
For Python, in the case of PyArango's Connection class, it worked when I used the arango-cluster-ea service. I put the following line in the connection call:
conn = Connection(arangoURL='https://arango-cluster-ea:8529', verify= False, username = 'root', password = 'XXXXX')
The verify=False flag is important to skip SSL certificate validation; otherwise it will throw an error again.
Hopefully this helps somebody else who faces a similar issue.
I've tested the following solution and managed to successfully connect to the database via:
arangosh from localhost:
Connected to ArangoDB 'http+ssl://localhost:8529', version: 3.7.12 [SINGLE, server], database: '_system', username: 'root'
Python code
from pyArango.connection import *
conn = Connection(arangoURL='https://ABCD:8529', username="root", password="password", verify=False)
db = conn.createDatabase(name="school")
Additional resources:
Arangodb.com: Tutorials: Tutorial Python
Arangodb.com: Docs: Stable: Tutorials Kubernetes
I'm trying to set up my own instance of Nextcloud on my server, but I'm running into a problem as I want Nextcloud to be available under https://example.com/cloud/.
Nextcloud is running in a CoreOS virtual machine called, let's say, myvm.
So this is the way I set up my Caddyfile:
example.com {
    gzip
    proxy /cloud myvm:8080 {
        transparent
        without /cloud
    }
}
I have other proxies that work fine for other services or VMs and are written similarly.
With this, and publishing port 8080 in my docker-compose file, I manage to connect to the Nextcloud instance. But every time I go to example.com/cloud/ it redirects me to example.com/apps/files/ instead of example.com/cloud/apps/files/.
If I enter this last URL manually, I can access Nextcloud, but the page doesn't load properly because the assets cannot be loaded: they are not prefixed with cloud/.
Is there a way to tell Nextcloud about this prefix through the configuration of the docker-compose file? (It's the only configuration I created; it works with just that and no extra work. I use one similar to the one available here (the Apache one).)
Or maybe I can improve the Caddyfile config? (By the way, if I don't use the without option, it doesn't work at all and returns 404 when I go to the URL.)
Whilst I've tried several solutions to related problems on SO, nothing appears to fix my problem when deploying a Meteor project to a VM on Google Compute Engine.
I set up mupx to handle the deployment and don't have any apparent issues when running
sudo mupx deploy
My mup.json is as follows:
{
  // Server authentication info
  "servers": [
    {
      "host": "104.199.141.232",
      "username": "simonlayfield",
      "password": "xxxxxxxx"
      // or pem file (ssh based authentication)
      // "pem": "~/.ssh/id_rsa"
    }
  ],
  // Install MongoDB in the server, does not destroy local MongoDB on future setup
  "setupMongo": true,
  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,
  // WARNING: If nodeVersion omitted will setup 0.10.36 by default. Do not use v, only version number.
  "nodeVersion": "0.10.36",
  // Install PhantomJS in the server
  "setupPhantom": true,
  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance in Shippable CI
  "enableUploadProgressBar": true,
  // Application name (No spaces)
  "appName": "simonlayfield",
  // Location of app (local directory)
  "app": ".",
  // Configure environment
  "env": {
    "ROOT_URL": "http://simonlayfield.com"
  },
  // Meteor Up checks if the app comes online just after the deployment
  // before mup checks that, it will wait for no. of seconds configured below
  "deployCheckWaitTime": 30
}
When navigating to my external IP in the browser, I can see the Meteor site template; however, the MongoDB data isn't showing up.
http://simonlayfield.com
I have set up a firewall rule on the VM to allow traffic through port 27017:
Name: mongodb
Description: Allow port 27017 access to http-server
Network: default
Source filter: Allow from any source (0.0.0.0/0)
Allowed protocols and ports: tcp:27017
Target tags: http-server
I've also tried passing the env variable MONGO_URL, but after several failed attempts I found this post on the Meteor forums suggesting that it is not required when using a local MongoDB database.
I'm currently connecting to the VM using ssh rather than the gcloud SDK but if it will help toward a solution I'm happy to set that up.
I'd really appreciate it if someone could provide some guidance on how I can find out specifically what is going wrong. Is the firewall rule I've set up sufficient? Are there other factors that need to be considered when using a Google Compute Engine VM specifically? Is there a way for me to check logs on the server via ssh to gain extra clarity around a connection/firewall/configuration problem?
My knowledge in this area is limited and so apologies if there's an easy fix that has evaded me.
Thanks in advance.
There were some recent meteord updates; please rerun your deployment.
Also, as a side note: I always specify a port in my mup / mupx files:
"env": {
"PORT": 5050,
"ROOT_URL": "http://youripaddress"
},
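After editing mup.json, redeploy; and, assuming the usual mupx CLI commands, you can tail the app's logs over SSH to see Mongo or connection errors directly:

sudo mupx deploy
sudo mupx logs -f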
If I understand correctly, the standard git deploy implementation in Capistrano v3 deploys the same repository to all roles. I have a more complex app that has several types of servers, and each type has its own code base with its own repository. My database server, for example, does not need any code deployed to it.
How do I tackle such a problem in capistrano v3?
Should I write my own deployment tasks for each of the roles?
How do I tackle such a problem in capistrano v3?
All servers get the code, as in certain environments the code is needed to perform some actions. For example in a typical setup the web server needs your static assets, the app server needs your code to serve the app, and the db server needs your code to run migrations.
If that's not true in your environment and you don't want the code on the servers in some roles, you could easily send a pull request to add the no_release feature back from Cap2 in to Cap3.
You can of course take the .rake files out of the Gem, and load those in your Capfile, which is a perfectly valid way to use the tool, and modify them for your own needs.
The general approach is that if you don't need code on your DB server, for example, why is it listed in your deployment file?
I can confirm that you can use no_release: true to prevent a server from having the repository code deployed to it.
I needed to do this so I could specifically run a restart task for a different server.
Be sure to give your server a role so that you can target it. There is a handy function called release_roles() you can use to target servers that have your repository code.
Then you can keep any tasks (like my restart) independent from the deploy procedure.
For example:

server '10.10.10.10', port: 22, user: 'deploy', roles: %w{web app db assets}
server '10.10.10.20', port: 22, user: 'deploy', roles: %w{frontend}, no_release: true

namespace :nginx do
  desc 'Reloading PHP will clear OpCache. Remove Nginx cache files to force regeneration.'
  task :reload do
    on roles(:frontend) do
      execute "sudo /usr/sbin/service php7.1-fpm reload"
      execute "sudo /usr/bin/find /var/run/nginx-cache -type f -delete"
    end
  end
end

after 'deploy:finished', 'nginx:reload'
after 'deploy:rollback', 'nginx:reload'

# Example of a task for release_roles() only
namespace :composer do
  desc 'Update composer'
  task :update do
    on release_roles(:all) do
      execute "cd #{release_path} && composer update"
    end
  end
end

before 'deploy:publishing', 'composer:update'
I can think of many scenarios where this would come in handy.
FYI, this link has more useful examples:
https://capistranorb.com/documentation/advanced-features/property-filtering/