Telescope / Meteor deployment using Meteor Up: MONGO_URL and MAIL_URL in the mup.json configuration

I am new to developing Meteor apps, and I just set up a Telescope blog, which is based on Meteor.
I want to deploy it on my own hosting (a droplet at DigitalOcean) using Meteor Up, but I don't know how to configure MONGO_URL and MAIL_URL in the mup.json file.
Everything was set up transparently locally, so I have no clue where the DB is or what the user or password are. Any help or pointers on where I should look?
Here is a snippet of my mup.json file:
{
  "env": {
    "PORT": 80,
    "ROOT_URL": "",
    "MONGO_URL": "mongodb://:@:/App",
    "MAIL_URL": "smtp://postmaster%40myapp.mailgun.org:adj87sjhd7s@smtp.mailgun.org:587/"
  },

Remove MONGO_URL and Meteor Up will use an internal MongoDB server. (I am sure of this.)
You will need to sign up for a free account at Mailgun and use your API key here.
(Guessing here:) To get started, try eliminating that key as well and you may be fine.
{ "env": { "PORT": 80, "ROOT_URL": "" },

Related

How to log with Serilog to a remote server?

I'm writing a .NET 6 Web API and decided to use Serilog for logging.
This is how I configured it in appsettings.json:
"Serilog": {
"Using": [ "Serilog.Sinks.File" ],
"MinimumLevel": {
"Default": "Information"
},
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "../logs/webapi-.log",
"rollingInterval": "Day",
"outputTemplate": "[{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} {CorrelationId} {Level:u3}] {Username} {Message:lj}{NewLine}{Exception}"
}
}
]
}
This is working fine; it's logging into a logs folder in the root.
Now I've deployed my API to a Staging Kubernetes cluster and don't want my logs to be stored on the pod, but rather on the Staging server. Is that possible? I can't find many useful posts about it, so I assume there is a better way to achieve it.
Based on Panagiotis' second suggestion, I spent about a week trying to set up Elasticsearch with Fluentd and Kibana, with no success.
It turned out that the simplest and easiest solution was his first one: all I needed was a PersistentVolume and a PersistentVolumeClaim. This post helped me with the setup: How to store my pod logs in a persistent storage?
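For reference, the claim half of that setup can be as small as the sketch below; the names, size, and mount path are placeholders, and the storage class depends on your cluster. The claim is then mounted at the directory Serilog writes to:

```yaml
# Hypothetical PersistentVolumeClaim for the log directory.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webapi-logs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# In the Deployment's pod template, reference the claim and mount it
# where the Serilog File sink writes:
#
# volumes:
#   - name: logs
#     persistentVolumeClaim:
#       claimName: webapi-logs-pvc
#
# containers[].volumeMounts:
#   - name: logs
#     mountPath: /app/logs
```

With that in place the log files survive pod restarts, since they live on the volume rather than the container filesystem.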

Extending S/4HANA OData service to SCP

I want to extend a custom OData service created in an S/4HANA system. I added a Cloud Connector to my machine, but I don't know how to proceed from there. The idea is that people should access the service from SCP, and that I don't need multiple accounts accessing the service on the S/4 system, just the one coming from SCP. Any ideas?
OK, I feel silly doing this, but it seems to work. My test is actually inconclusive because I don't have a Cloud Connector handy, but it works proxying Google.
I'm still thinking about how to make it publicly accessible. There might be people with better answers than this.
Create the Cloud Connector destination.
Make a new folder in Web IDE.
Create the file neo-app.json with this content:
{
  "routes": [{
    "path": "/google",
    "target": {
      "type": "destination",
      "name": "google"
    },
    "description": "google"
  }],
  "sendWelcomeFileRedirect": false
}
The path is the proxy path in your app, so myapp.scp-account/google here. The target name is your destination; I called mine just google, but you'll put your Cloud Connector destination there.
Deploy.
My test app with destination google going to https://www.google.com came out looking like this. Paths are relative, so it doesn't fully work, but Google appears to be proxied.
You'll still have to authenticate, etc.
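For completeness, the destination the route points at is configured in the SCP cockpit under Connectivity > Destinations. A sketch of what mine looked like for the Google test; treat the values as an example, not canonical. For a real Cloud Connector destination you'd use ProxyType=OnPremise and the virtual host you configured in the Cloud Connector:

```ini
# Hypothetical destination for the proxy test.
Name=google
Type=HTTP
URL=https://www.google.com
ProxyType=Internet
Authentication=NoAuthentication
```

The Name here must match the "name" in the neo-app.json route's target.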

How do I properly set MONGO_URL and MONGO_OPLOG_URL on a meteor galaxy deployment?

I'm trying to deploy to Meteor Galaxy, but I'm getting an error that "MONGO_URL must be set in environment". I'm using the starter template from here: https://github.com/yogiben/meteor-starter
and the example settings.json file from here:
https://galaxy.meteor.com/help/setting-environment-variables
which looks like this:
{ "galaxy.meteor.com": { "env": { "ROOT_URL": "https://www.example.com", "MONGO_URL": "...", "MONGO_OPLOG_URL": "..." } }, ... }
but I don't know what to set MONGO_URL and MONGO_OPLOG_URL to. Does anyone know what I should set them to for this particular starter template?
Galaxy does not have its own database hosting, so you'll have to go through Compose or MongoLab. I believe MongoLab has a free tier, so once you set up a database there, you can take the URL they give you and set your MONGO_URL. MongoLab's free tier doesn't have access to the oplog, so you'd just omit MONGO_OPLOG_URL for now.
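Concretely, the settings file would then look something like the sketch below; the host, port, database name, and credentials are placeholders in the shape MongoLab typically hands out, not real values:

```json
{
  "galaxy.meteor.com": {
    "env": {
      "ROOT_URL": "https://www.example.com",
      "MONGO_URL": "mongodb://dbuser:dbpassword@ds012345.mlab.com:12345/mydb"
    }
  }
}
```

MONGO_OPLOG_URL is left out entirely, since the free tier doesn't expose the oplog.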

Configuring Meteor deployment to Google Compute Engine VM using mupx

While I've tried several solutions to related problems on SO, nothing appears to fix my problem when deploying a Meteor project to a VM on Google Compute Engine.
I set up mupx to handle the deployment and don't see any apparent issues when running
sudo mupx deploy
My mup.json is as follows:
{
  // Server authentication info
  "servers": [
    {
      "host": "104.199.141.232",
      "username": "simonlayfield",
      "password": "xxxxxxxx"
      // or pem file (ssh based authentication)
      // "pem": "~/.ssh/id_rsa"
    }
  ],

  // Install MongoDB on the server; does not destroy a local MongoDB on future setup
  "setupMongo": true,

  // WARNING: Node.js is required! Only skip if you already have Node.js installed on the server.
  "setupNode": true,

  // WARNING: If nodeVersion is omitted, 0.10.36 will be set up by default. Do not use "v", only the version number.
  "nodeVersion": "0.10.36",

  // Install PhantomJS on the server
  "setupPhantom": true,

  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance on Shippable CI
  "enableUploadProgressBar": true,

  // Application name (no spaces)
  "appName": "simonlayfield",

  // Location of the app (local directory)
  "app": ".",

  // Configure environment
  "env": {
    "ROOT_URL": "http://simonlayfield.com"
  },

  // Meteor Up checks if the app comes online just after the deployment;
  // before mup checks that, it will wait for the number of seconds configured below
  "deployCheckWaitTime": 30
}
When navigating to my external IP in the browser I can see the Meteor site template, but the MongoDB data isn't showing up.
http://simonlayfield.com
I have set up a firewall rule on the VM to allow traffic through port 27017:
Name: mongodb
Description: Allow port 27017 access to http-server
Network: default
Source filter: Allow from any source (0.0.0.0/0)
Allowed protocols and ports: tcp:27017
Target tags: http-server
I've also tried passing the environment variable MONGO_URL, but after several failed attempts I found this post on the Meteor forums suggesting that it is not required when using a local MongoDB database.
I'm currently connecting to the VM using ssh rather than the gcloud SDK, but if it will help toward a solution I'm happy to set that up.
I'd really appreciate it if someone could provide some guidance on how I can find out specifically what is going wrong. Is the firewall rule I've set up sufficient? Are there other factors that need to be considered when using a Google Compute Engine VM specifically? Is there a way for me to check logs on the server via ssh to gain extra clarity around a connection/firewall/configuration problem?
My knowledge in this area is limited and so apologies if there's an easy fix that has evaded me.
Thanks in advance.
There were some recent meteord updates, so please rerun your deployment.
Also, as a side note: I always specify a port in my mup / mupx files:
"env": {
"PORT": 5050,
"ROOT_URL": "http://youripaddress"
},

How to 'trigger' chef-solo and get callback/report?

I'm thinking of using Chef Solo as a PaaS orchestrator.
I'll have my own dashboard which will generate recipes, and my nodes will pull from them. I know I can do that by using:
chef-solo -i <interval>
But if I'd like to add more and more attributes, like a list of virtual hosts or MySQL users to deploy, I don't know how I can achieve this.
I'm looking for your ideas; I think Engine Yard is using Chef to deploy PHP and Node.js apps on demand. How did they achieve this?
How do I avoid re-executing an app deployment that has already been deployed?
On the first run I'll have:
"websites" : {
"site1": { "username": "dave", "password": "password123" }
},
And then, when a new site is created the attributes would become :
"websites" : {
"site1": { "username": "dave", "password": "password123" }
"site2": { "username": "bob", "password": "password123" }
}
etc.
And how do I get a report on what chef-solo is doing?
Any ingenious idea is welcome :)
Add chef-server to your PaaS stack and use knife to push your recipes there. Knife can also be used to initially provision nodes in your PaaS, taking care of installing the chef client (configured to talk to your chef server).
The chef-solo client is useful for simple use cases, but it doesn't really scale and will require additional supporting code for items like monitoring/reporting (your question), and when you move to more complex multi-tier deployment scenarios.
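On the re-execution worry specifically: Chef resources are idempotent by design, so a recipe that iterates over the websites attributes only converges what has changed; already-deployed sites are left untouched. A recipe sketch under that assumption (the paths, template name, and apache2 service are hypothetical, chosen just to illustrate the pattern):

```ruby
# Iterate over the node's "websites" attributes from the question.
# Chef only acts when the resource's desired state differs from
# what is already on disk, so re-runs are cheap no-ops.
node['websites'].each do |site, creds|
  directory "/var/www/#{site}" do
    owner creds['username']
    mode '0755'
  end

  template "/etc/apache2/sites-available/#{site}.conf" do
    source 'vhost.conf.erb'
    variables(site: site, username: creds['username'])
    notifies :reload, 'service[apache2]'
  end
end
```

When the dashboard adds site2 to the attributes, the next chef-solo run creates only site2's directory and vhost, and the apache2 reload fires only if a template actually changed.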