Configuring Meteor deployment to Google Compute Engine VM using mupx - mongodb

Whilst I've tried several solutions to related problems on SO, nothing appears to fix my problem when deploying a Meteor project to a VM on Google Compute Engine.
I set up mupx to handle the deployment and see no apparent issues when running
sudo mupx deploy
My mup.json is as follows:
{
  // Server authentication info
  "servers": [
    {
      "host": "104.199.141.232",
      "username": "simonlayfield",
      "password": "xxxxxxxx"
      // or pem file (ssh based authentication)
      // "pem": "~/.ssh/id_rsa"
    }
  ],
  // Install MongoDB in the server, does not destroy local MongoDB on future setup
  "setupMongo": true,
  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,
  // WARNING: If nodeVersion omitted will setup 0.10.36 by default. Do not use v, only version number.
  "nodeVersion": "0.10.36",
  // Install PhantomJS in the server
  "setupPhantom": true,
  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance in Shippable CI
  "enableUploadProgressBar": true,
  // Application name (No spaces)
  "appName": "simonlayfield",
  // Location of app (local directory)
  "app": ".",
  // Configure environment
  "env": {
    "ROOT_URL": "http://simonlayfield.com"
  },
  // Meteor Up checks if the app comes online just after the deployment
  // before mup checks that, it will wait for no. of seconds configured below
  "deployCheckWaitTime": 30
}
When navigating to my external IP in the browser I can see the Meteor site template; however, the MongoDB data isn't showing up.
http://simonlayfield.com
I have set up a firewall rule on the VM to allow traffic through port 27017:
Name: mongodb
Description: Allow port 27017 access to http-server
Network: default
Source filter: Allow from any source (0.0.0.0/0)
Allowed protocols and ports: tcp:27017
Target tags: http-server
I've also tried passing the env variable MONGO_URL, but after several failed attempts I found this post on the Meteor forums suggesting that it is not required when using a local MongoDB database.
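For reference, a hedged sketch of what an explicit external connection would look like in the env block (the host and credentials below are hypothetical):
"env": {
  "ROOT_URL": "http://simonlayfield.com",
  "MONGO_URL": "mongodb://dbuser:dbpassword@mongo-host.example.com:27017/simonlayfield"
}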
I'm currently connecting to the VM using ssh rather than the gcloud SDK but if it will help toward a solution I'm happy to set that up.
I'd really appreciate it if someone could provide some guidance on how I can find out specifically what is going wrong. Is the firewall rule I've set up sufficient? Are there other factors that need to be considered when using a Google Compute Engine VM specifically? Is there a way for me to check logs on the server via ssh to gain extra clarity about a connection/firewall/configuration problem?
My knowledge in this area is limited and so apologies if there's an easy fix that has evaded me.
Thanks in advance.

There were some recent meteord updates; please rerun your deployment.
Also, as a side note: I always specify a port in my mup / mupx files:
"env": {
"PORT": 5050,
"ROOT_URL": "http://youripaddress"
},
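On the logging question: mupx runs the app and MongoDB in Docker containers, so something along these lines should surface the relevant logs (a sketch assuming the default mupx Docker setup; container names may differ on your VM):
# From your local machine, tail the app's logs
mupx logs -f
# Or, from an ssh session on the VM, inspect the containers directly
docker ps
docker logs --tail 100 simonlayfield   # the app container is named after appName
docker logs --tail 100 mongodb         # the Mongo container created by setupMongo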

Related

cannot connect to self-hosted gRPC Windows service

I published my first gRPC Windows service onto a test server. Please excuse my cluelessness.
Long story short:
When I tried to connect to it w/ a client, I got the error below:
No connection could be made because the target machine actively
refused it. SocketException: No connection could be made because the
target machine actively refused it.
Here's my appsettings.json Kestrel section:
"HttpsInlineCertStore": {
  "Url": "https://localhost:5001",
  "Certificate": {
    "Subject": "CN=<secret>",
    "Store": "My",
    "Location": "LocalMachine",
    "AllowInvalid": "true"
  }
}
On my client, I have this:
readonly static GrpcChannel channel =
    GrpcChannel.ForAddress("https://full server name and domain:5001");
Question:
I keep seeing port 5000 being opened, but no 5001. Why?
Thanks!
UPDATE:
By default, HTTP is on port 5000. Here's the MS link... search for Endpoint Configuration.
Make sure the proper environment name (ASPNETCORE_ENVIRONMENT) is set, so the Kestrel configuration in the matching appsettings.{Environment}.json is loaded.
Andrew Lock has an article on how to set the environment variables.
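For a Windows service, one hedged option is to set it machine-wide from an elevated command prompt (the service, or possibly the machine, must be restarted before it takes effect):
setx ASPNETCORE_ENVIRONMENT "Production" /M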
Also, here you can find the different options you can set for Kestrel. Searching for "Replace the default certificate from configuration" will lead you to the section.
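For reference, a minimal sketch of a Kestrel section that binds both endpoints explicitly. Note that a URL of https://localhost:5001 makes Kestrel listen on loopback only, which by itself would explain remote connections being refused; the 0.0.0.0 bindings below are assumptions so the service listens beyond localhost:
"Kestrel": {
  "Endpoints": {
    "Http": {
      "Url": "http://0.0.0.0:5000"
    },
    "HttpsInlineCertStore": {
      "Url": "https://0.0.0.0:5001",
      "Certificate": {
        "Subject": "CN=<secret>",
        "Store": "My",
        "Location": "LocalMachine",
        "AllowInvalid": "true"
      }
    }
  }
}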

"host not allowed" error when deploying a play framework application to Amazon AWS with Boxfuse

I am trying to deploy a simple web application written using Play Framework in Scala to Amazon Web Services.
The web application runs OK in development mode and production mode on my local machine, and I've changed its default port to 80.
I used Boxfuse to deploy to AWS as suggested.
I first run "sbt dist"
then "boxfuse run -env=prod"
Things went well as desired: the image was fused and pushed to AWS, the AMI was created, the instance was started, and my application was running.
i-0f696ff22df4a2b71 => 2017-07-13 01:28:23.940 [info] play.api.Play - Application started (Prod)
Then came the error message:
WARNING: Healthcheck (http://35.156.38.90/) returned 400 instead of 200. Retrying for the next 300 seconds ...
i-0f696ff22df4a2b71 => 2017-07-13 01:28:24.977 [info] p.c.s.AkkaHttpServer - Listening for HTTP on /0.0.0.0:80
i-0f696ff22df4a2b71 => 2017-07-13 01:28:25.512 [warn] p.f.h.AllowedHostsFilter - Host not allowed: 35.156.38.90
The instance was terminated after repeated retries, about 3 minutes later, with a warning like:
Ensure your application responds with an HTTP 200 at / on port 80
But I've made sure the application responds on my local machine; I tried both Windows and Ubuntu, and all works well.
Also, running "boxfuse run" on my local machine, I can connect to it using "http://localhost", but the error persists.
Hope someone with experience can give me some suggestions. Thanks in advance.
PS: not sure if it's relevant, but I added these settings to application.conf:
http {
  address = 0.0.0.0
  port = 80
}
Judging from the error message, it looks like the problem might be related to play.filters.hosts.allowed not being set up in application.conf. The filter lets you configure which hosts can access your application. More details about the Play filter are available here.
Here's a configuration example:
play.filters.hosts {
  allowed = ["."]
}
Note that allowed = ["."] matches all hosts and is hence not recommended in a production environment.
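A more restrictive production setup would list your real domains explicitly (hypothetical domains shown; a leading dot matches the domain and all its subdomains):
play.filters.hosts {
  allowed = [".example.com", "localhost"]
}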
As stated in the Boxfuse Play Documentation:
If your application uses the allowed hosts filter you must ensure play.filters.hosts.allowed in application.conf allows connections from anywhere as this filter otherwise causes ELB healthchecks to fail. For example:
play.filters.hosts {
  allowed = ["."]
}
More info in the official Play documentation.
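To verify the fix, you can likely simulate the ELB healthcheck by sending the raw Host header it would use (the IP is the one from the logs above); with the filter still blocking, this should return 400, and after widening the allowed list it should return 200:
curl -i -H "Host: 35.156.38.90" http://localhost/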

Issue connecting composer to Blockchain on Bluemix - identity or token does not match

I have Fabric Composer 0.7.2 installed on my Mac, and I was able to follow this thread to get it connected to my Blockchain (v0.6.1 of Fabric) on Bluemix.
fabric-composer-integration-with-bluemix-blockchain-service
Now I am trying to build an Ubuntu (16.04) Docker container and run composer-rest-server there. When I try to connect to my Blockchain service from my Docker container (using the same ID, WebAppAdmin, that I used on my Mac), I get an error:
Discovering types from business network definition ...
Connection fails: Error: Identity or token does not match.
It will be retried for the next request.
{ Error: Identity or token does not match.
    at /home/composer/.nvm/versions/node/v6.10.3/lib/node_modules/composer-rest-server/node_modules/grpc/src/node/src/client.js:417:17
  code: 2, metadata: Metadata { _internal_repr: {} } }
I tried copying the cert from my Mac to my Docker container:
/home/composer/.composer-credentials/member.WebAppAdmin
but when I did that I got a different error that says "signature does not verify". I did some additional testing, and I discovered that if I used an ID that I had not previously used with Composer (i.e. user_type1_0), then I could connect, and I could see a new cert in my .composer-credentials directory.
When I tried deleting that container and building a new one (I had dorked something else up), I could not use that same user ID again.
Does anybody know how security and these certs are supposed to work? It would seem as though something to do with certificate generation/validation is tied to the client (i.e. hardware address), such that if I try to re-use an ID on a different machine, the certs or keys or something don't match. I have a way to make things work, but it doesn't seem like the right way if I can't use the same ID from different machines.
Thanks!
Hi, I tried to recreate this by having Blockchain running on a Unix machine; I then copied my connection profile and certificate to my Mac and edited my connection profile to update the IP address and key store. I then did a composer network ping and it worked fine.
I am using composer v0.7.4, so you could try that.
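In case it helps, the rough sequence to move an identity between machines would be something like this (a sketch; the host, network, and profile names are placeholders, and the flags are from the composer 0.7-era CLI):
# Copy the whole credential store and connection profiles, not just the one cert
scp -r ~/.composer-credentials composer@docker-host:/home/composer/
scp -r ~/.composer-connection-profiles composer@docker-host:/home/composer/
# Then verify connectivity from the new machine
composer network ping -n <network-name> -i WebAppAdmin -s <secret> -p <profile-name>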
I have also faced this issue, and concluded that there is inconsistent behavior when deploying a network using Composer in a cloud environment, including Bluemix. The problem is not with Composer, but with Fabric 0.6.
I am assuming that this issue is also indirectly related to the following known bugs in Fabric 0.6, which will not be fixed in Fabric 0.6:
ERROR:
throw er; // Unhandled 'error' event
^
Error
    at ClientDuplexStream._emitStatusIfDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
    at ClientDuplexStream._readsDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:158:8)
    at readCallback (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:217:12)
So far, we have understood that the following three JIRA issues are the root cause: essentially, the cloud networking layer ends up killing the idle event hub connection after a period of inactivity, and the Fabric SDK cannot handle this.
https://jira.hyperledger.org/browse/FAB-4002
https://jira.hyperledger.org/browse/FAB-3310
https://jira.hyperledger.org/browse/FAB-2787
Conclusion:
There is no way of fixing this issue on Bluemix or any cloud environment with Fabric 0.6.
You may not experience this issue with Fabric 1.0, but it is still possible, as all the above-mentioned defects are not fixed yet.

strongloop slc deploy env var complications

I've been deploying a LoopBack app via a custom init.d/app.conf script, using slc run --detach --cluster "cpu", but I want to move to using strong-pm, as recommended.
But I've come across some limitations and am looking for guidance on how to replicate the setup I'm currently familiar with.
Currently I set app-specific configuration inside server/config.local.js and server/datasources.local.js, most importantly the PORT on which the app should listen for connections. This works perfectly using slc run for local development and remote deploys for staging; all I do is set different env vars for each distinct app:
datasources.local.js:
module.exports = {
  "mysqlDS": {
    name: "mysqlDS",
    connector: "mysql",
    host: process.env.PROTEUS_MYSQL_HOST,
    port: process.env.PROTEUS_MYSQL_PORT,
    database: process.env.PROTEUS_MYSQL_DB,
    username: process.env.PROTEUS_MYSQL_USER,
    password: process.env.PROTEUS_MYSQL_PW
  }
}
config.local.js:
module.exports = {
  port: process.env.PROTEUS_API_PORT
}
When I deploy using strong-pm, I am not able to control this port; it always gets set to 3000+N, where N is incremented based on the service ID assigned to the app when it's deployed.
So even when I deploy and then set env using
slc ctl -C http://localhost:8701 env-set proteus-demo PROTEUS_API_PORT=3033 PROTEUS_DB=demo APP_DOMAIN=demo.domain.com
I see that strong-pm completely ignores PROTEUS_API_PORT when it redeploys with the new env vars:
ENV has changed, restarting
Service "1" listening on 0.0.0.0:3001
Restarting next commit Runner: commit 1/deploy/default/demo-deploy
Start Runner: commit 1/deploy/default/demo-deploy
Request (status) of current Runner: child 20066 commit 1/deploy/default/demo-deploy
Request {"cmd":"status"} of Runner: child 20066 commit 1/deploy/default/demo-deploy
Port 3001! Not 3033, as I want and as specified in config.local.js. Is there a way to control this explicitly? I don't want to have to run an slc inspection command to determine the port for my nginx upstream block each time I deploy an app. It would be awesome to be able to specify the listen PORT by service name, too.
FWIW, this is on an AWS instance that will host demo and staging apps pointing to separate DBs and listening on different PORTs.
strong-pm only sets a PORT environment variable, which the app is responsible for honouring.
Based on loopback-boot/lib/executor:109, it appears that LoopBack actually prefers the PORT environment variable over the value in the config file. In that case it seems your best bet is to either:
pass a port to app.listen() yourself, or
set one of the higher-priority environment variables such as npm_config_port (which would normally be set via npm start --port 1234). See the sketch below.
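If you go the app.listen() route, a minimal sketch of server/server.js could look like this (assumes the standard loopback-boot scaffold; PROTEUS_API_PORT is the custom variable from the question):
// server/server.js
var loopback = require('loopback');
var boot = require('loopback-boot');

var app = module.exports = loopback();

app.start = function() {
  // Prefer the app-specific variable over the PORT injected by strong-pm
  var port = process.env.PROTEUS_API_PORT || app.get('port');
  return app.listen(port, function() {
    app.emit('started');
    console.log('Web server listening on port %s', port);
  });
};

boot(app, __dirname, function(err) {
  if (err) throw err;
  if (require.main === module) app.start();
});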

Telescope / Meteor deployment using Meteor Up: MONGO_URL mup.json configuration

I am new to developing Meteor apps, and I just set up a Telescope blog, which is based on Meteor.
I want to deploy it on my own hosting (a droplet at DigitalOcean) using "Meteor Up", but I don't know how to configure the MONGO_URL and MAIL_URL in the mup.json file.
Everything was set up transparently locally, so I have no clue where the DB is or what the user or password are... Any help or pointers on where I should look?
Here's a snippet of my mup.json file:
{
  "env": {
    "PORT": 80,
    "ROOT_URL": "",
    "MONGO_URL": "mongodb://:#:/App",
    "MAIL_URL": "smtp://postmaster%40myapp.mailgun.org:adj87sjhd7s@smtp.mailgun.org:587/"
  },
Remove the MONGO_URL and it will use an internal Mongo server. (I am sure of this.)
You will need to apply for a free account at Mailgun and use your API key here.
(Guessing here.) To get started, try eliminating that key as well and you may be fine:
{ "env": { "PORT": 80, "ROOT_URL": "" },