We've finished our first Phoenix application for internal company use, and it's now time to deploy it. It will sit behind an Apache server (yep, nginx is not available as an option), and according to the PCI DSS standard it should run under a read-only user. Node.js devs use functions like this in their startup configs to switch from 'deployer', which can write to the filesystem, to 'runner', a read-only user that can only read the code.
So, when my app starts up after a deploy, the user should be dynamically changed from 'deploy' to 'runner'. What is the best way to do this? BTW, I don't completely understand how Cowboy works, but I guess it just runs in the background like Unicorn or Puma?
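For reference, the Node.js startup pattern the question alludes to is usually built on `process.setuid`/`process.setgid` (POSIX-only Node APIs). A minimal sketch, with hypothetical helper names and the 'runner' user/group taken from the question:

```typescript
// Sketch of the Node.js privilege-drop pattern referenced above.
// The "runner" user/group comes from the question; the helper names
// here are hypothetical, not a standard API.

// Pure check: only root (uid 0) can switch to another user.
function needsPrivilegeDrop(currentUid: number | undefined): boolean {
  return currentUid === 0;
}

function dropPrivileges(user: string, group: string): void {
  // process.getuid/setuid/setgid exist only on POSIX systems.
  if (!process.getuid || !process.setuid || !process.setgid) return;
  if (!needsPrivilegeDrop(process.getuid())) return;
  // Drop the group first: after setuid() the process no longer has
  // permission to call setgid().
  process.setgid(group);
  process.setuid(user);
}

// Typical use at the very top of a startup script, while still root:
// dropPrivileges("runner", "runner");
```

The ordering matters: once `setuid()` succeeds, the process has given up the right to call `setgid()`, so the group is always dropped first.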
I have a Meteor App running on a Ubuntu Droplet on Digital Ocean (your basic virtual machine). This app was written by a company that went out of business and left us with nothing.
The database is a MongoDB currently running on IBM Compose. Compose is shutting down in a month and the Database needs to be moved and our App needs to connect to the new database.
I had no issues exporting and creating a MongoDB with all the data on a different server.
I cannot for the life of me figure out where on the live Meteor App server I would change the address of the database connection. There is no simple top-level config file where I can change this? Does anyone out there know where I would do this?
I realize that in the long term I will need to either rewrite or deprecate this aging app, but in the short term the company relies on it and IBM decided to just shut down their Compose service so please help!!
There are mainly the MONGO_URL and MONGO_OPLOG_URL, which are configured as environment variables: https://docs.meteor.com/environment-variables.html#MONGO-OPLOG-URL
Note that you don't set these within the code but during deployment. If you are running on localhost and want to connect to an external MongoDB, you can simply use:
$ MONGO_URL="mongodb://user:password@myserver.com:port" meteor
If you want to deploy the app, you should stick with the docs: https://galaxy-guide.meteor.com/mongodb.html#authentication
If you use MUP then configure the mongo appropriately: https://meteor-up.com/docs.html#mongodb
Edit: If your app was previously deployed using MUP, you can try to restore the environment variables from /opt/app-name/config (where app-name is the name of your app), which contains env.list (including all environment variables, and thus your MONGO_URL) and start.sh, which you can use to recreate the mup.js config.
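If it helps to pull the connection string out of that env.list programmatically, a minimal parser could look like this. The assumption (based on how mup writes the file) is that it contains plain KEY=value lines:

```typescript
// Minimal parser for an env.list-style file of KEY=value lines,
// e.g. the one mup leaves in /opt/app-name/config.
// The exact file layout is an assumption.
function parseEnvList(content: string): Record<string, string> {
  const env: Record<string, string> = {};
  for (const rawLine of content.split("\n")) {
    const line = rawLine.trim();
    if (!line || line.startsWith("#")) continue; // skip blanks and comments
    const eq = line.indexOf("=");
    if (eq === -1) continue; // ignore malformed lines
    env[line.slice(0, eq)] = line.slice(eq + 1);
  }
  return env;
}

const sample =
  "MONGO_URL=mongodb://user:password@myserver.com:27017/app\n" +
  "ROOT_URL=https://example.com";
// parseEnvList(sample).MONGO_URL now holds the connection string to migrate.
```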
I have created my REST host's address as a property in
config/environment.js and use it successfully during development.
But once the app goes live, the REST server may change, and even during the lifetime of the app the REST host may change to another one.
Building via
ember build -prod ...
puts the variable inside a lengthy string in index.html and in the minified, concatenated app JS file, where it cannot be changed easily.
I cannot believe that I need to re-build the whole app just to change the REST host's address.
Simple question: How to change the REST host in production easily?
Is it possible for a Sails.js app to pick up config file changes without having to restart the server? I want to add routes and change mail server config params without a server reboot. sails-hook-autoreload seems to cover only models, controllers and services.
What are my options? I really do not want to restart the server when there are so many users logged into the app.
Please help, and thanks for reading the post.
It isn't possible, because the configs are only loaded into the app during startup. Your best bet is to schedule maintenance: bring the app down, restart it, do your testing, and then reopen the app to users.
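The load-once-at-startup behaviour can be demonstrated in a few lines. This is a generic Node sketch, not Sails-specific code; the file path and key names are made up:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Generic Node sketch (not Sails code): a value read once at startup
// does not see later edits to the config file, while an accessor that
// re-reads the file on every use does.
const configPath = path.join(os.tmpdir(), "demo-config.json");
fs.writeFileSync(configPath, JSON.stringify({ mailHost: "smtp.old.example" }));

// "Startup" load: parsed once and cached, like Sails config.
const startupConfig = JSON.parse(fs.readFileSync(configPath, "utf8"));

// Alternative: re-read at the point of use.
function currentConfig(): { mailHost: string } {
  return JSON.parse(fs.readFileSync(configPath, "utf8"));
}

// Someone edits the file while the server is running:
fs.writeFileSync(configPath, JSON.stringify({ mailHost: "smtp.new.example" }));

console.log(startupConfig.mailHost);   // still "smtp.old.example"
console.log(currentConfig().mailHost); // "smtp.new.example"
```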
I am not sure how to use it, but I hear containers like Docker may be another solution: you containerize your app and use it to push updates out. I haven't used it, but it could be an option.
I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run. So how can I deploy my application to an EC2 instance without using SSH?
This was the article in question.
http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires thinking of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers, you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model. For example, at boot time pull your code from a Git/Mercurial repository and then execute scripts to set up your instance. The scripts will set up all the monitoring required to determine whether your server and application are up and running appropriately. You would still need an SSH client if you want to pull your code using SSH, although you could also do it over HTTPS.
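The boot-time steps above can be sketched as data first. A minimal sketch, where the repository URL and paths are made-up placeholders:

```typescript
// Sketch of a pull-model bootstrap: the commands an instance would run
// at first boot. Repository URL and paths are made-up placeholders.
function buildBootstrapCommands(repoUrl: string, appDir: string): string[] {
  return [
    // Pull the code over HTTPS, so no SSH key is needed on the instance.
    `git clone ${repoUrl} ${appDir}`,
    // Hand off to a setup script kept inside the repository itself,
    // which installs dependencies and wires up monitoring.
    `${appDir}/scripts/setup.sh`,
  ];
}

// On a real instance these would run in order, e.g. from EC2 user data
// via child_process.execSync(cmd) or a plain shell script.
const cmds = buildBootstrapCommands("https://example.com/org/app.git", "/opt/app");
```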
You can also use configuration management tools that don't use SSH at all, like Puppet or Chef. Essentially, your node/server will pull all your application and server configuration from the Puppet master or the Chef server. The Puppet agent or Chef client would then perform all the configuration/deployment/monitoring changes for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times whether there's something wrong with one of your servers, and in the event something goes wrong, discard the server and spin up a new one. (Even better if this whole process is automated.)
Hope this helps.
I would like to make a Worker Role in Azure that handles some behind-the-scenes processing for a Web Role. In the Web Role I would like to upload a plugin (a DLL, most likely) which becomes available for the Worker Role to use.
What about security? If I were to let third-party people upload a DLL to my Azure Worker Role, can I do anything to limit what it can do? It would not be nice if they could take control of the management API or something like that.
I am new to Azure and am exploring whether it's a platform to use for this project.
Last question: I noticed that I could remote-desktop into my cloud service. Could I upload binary programs to it and call them from the Worker Role as well? (Another kind of plugin.)
There are a few things you might want to look at. Let's assume your Worker Role is an empty shell. After starting the Worker Role you could start a timer that runs every X minutes to get the latest assemblies from a blob storage container for example.
You can download these assemblies to a folder and use MEF to scan them and import all objects implementing, for example, IWorkerRolePlugin (a custom interface you would create). MEF would be the best choice when you want to work with plugins. You could even create a custom catalog that links directly with a blob storage container.
Now about the security part. In your Worker Role you could for example create a restricted AppDomain to make sure these plugins can't do anything wrong. This code should get you started: Restricted AppDomain example
Try the Azure Plugin Library by Richard Astbury!
Sounds like Lokad.Cloud is just what you need.
It has an execution framework part consisting of worker roles capable of running what they have named a Cloud Service. It comes with a web console that allows you to add new CloudService implementations by uploading assemblies, and if you configure it to allow Azure self-management, you can also adjust the number of worker instances through the web console.