I have Fargate tasks that use AWS_CONTAINER_CREDENTIALS_RELATIVE_URI to get their credentials, and they are working fine.
Now I would like to recreate a similar behavior in Docker running locally on an EC2 instance.
How can I achieve this? Should I hardcode the values as ENV variables in my Dockerfile for local testing on EC2?
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
output:
{
"RoleArn": "arn:aws:iam::111111111111:role/test-service",
"AccessKeyId": "HELLOWORLD",
"SecretAccessKey": "REDACTED",
"Token": "REDACTED",
"Expiration": "2020-03-20T02:01:43Z"
}
The Amazon ECS Local Container Endpoints tool can simulate those endpoints and help you test locally.
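For a local Docker setup, a minimal docker-compose sketch along the lines of the tool's documented example might look like this (the image name, the /creds path, and the credentials-network addressing are taken from the project's README and may differ by version; my-app-image and the region are placeholders):

version: "2"
networks:
  # Bridge network whose subnet lets the endpoints container own 169.254.170.2,
  # the address the AWS SDKs combine with AWS_CONTAINER_CREDENTIALS_RELATIVE_URI.
  credentials_network:
    driver: bridge
    ipam:
      config:
        - subnet: "169.254.170.0/24"
          gateway: 169.254.170.1
services:
  ecs-local-endpoints:
    image: amazon/amazon-ecs-local-container-endpoints
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $HOME/.aws/:/home/.aws/      # vends credentials from a local AWS profile
    environment:
      HOME: "/home"
      AWS_PROFILE: "default"
    networks:
      credentials_network:
        ipv4_address: "169.254.170.2"
  app:
    image: my-app-image              # placeholder for your application image
    depends_on:
      - ecs-local-endpoints
    environment:
      AWS_DEFAULT_REGION: "eu-west-1"
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
    networks:
      credentials_network:
        ipv4_address: "169.254.170.3"

With this, the SDK inside the app container resolves credentials the same way it does on Fargate, so nothing needs to be hardcoded in the Dockerfile.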
I'm writing a .NET Core 6 Web API and decided to use Serilog for logging.
This is how I configured it in appsettings.json:
"Serilog": {
"Using": [ "Serilog.Sinks.File" ],
"MinimumLevel": {
"Default": "Information"
},
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "../logs/webapi-.log",
"rollingInterval": "Day",
"outputTemplate": "[{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} {CorrelationId} {Level:u3}] {Username} {Message:lj}{NewLine}{Exception}"
}
}
]
}
This is working fine; it's logging into a logs folder at the project root.
Now I've deployed my API to a staging K8s cluster, and I don't want my logs to be stored on the pod but rather on the staging server. Is that possible? I can't find many useful posts about it, so I assume there is a better way to achieve this.
Based on Panagiotis' second suggestion, I spent about a week trying to set up Elasticsearch with Fluentd and Kibana, with no success.
It turned out that the simplest and easiest solution was his first one: all I needed was a PersistentVolume and a PersistentVolumeClaim. This post helped me with the setup: How to store my pod logs in a persistent storage?
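For anyone looking for a starting point, a minimal sketch of that approach could be a hostPath PersistentVolume on the staging node plus a matching claim (all names, paths, and sizes below are placeholders, not from the original post):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: webapi-logs-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/log/webapi        # directory on the staging node that will hold the log files
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webapi-logs-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

The Deployment then mounts the claim at the directory the File sink writes to, e.g. a volume referencing webapi-logs-pvc with a volumeMount at /logs, and the Serilog "path" changed to /logs/webapi-.log.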
I have a Dataproc cluster running in GCP. I ran the Livy initialization script for it, and I can access the livy/sessions link through the gateway interface. This is my sparkmagic config.json:
{
  "kernel_python_credentials" : {
    "auth": "None",
    "url": "https://{SERVER}.dataproc.googleusercontent.com/livy"
  },
  "should_heartbeat": true,
  "livy_server_heartbeat_timeout_seconds": 60,
  "heartbeat_refresh_seconds": 5,
  "heartbeat_retry_seconds": 1,
  "ignore_ssl_errors": false
}
I can start the kernel, but if I try to execute a cell it seems to reply with a login page. Is there some other parameter that I need to set to make this work?
For the benefit of anyone else who comes here: I was able to get connectivity going by setting up port forwarding from the Livy service to my local machine (sketch below). However, I ran into a problem with being unable to actually execute jobs against the cluster. It appears that Livy is more or less defunct; specifically, the currently available releases (latest 0.7.1) were not built against Spark 3 / Scala 2.12. There's no easy way to make this work.
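For reference, the port forwarding can be done with an SSH tunnel to the cluster's master node; something along these lines, assuming Livy listens on its default port 8998 and the master is named <cluster-name>-m:

# Tunnel the Livy REST port from the Dataproc master node to localhost
gcloud compute ssh <cluster-name>-m --zone=<zone> -- -N -L 8998:localhost:8998

# Then point sparkmagic at the tunnel instead of the component gateway URL:
#   "url": "http://localhost:8998"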
Whilst I've tried several solutions to related problems on SO, nothing appears to fix my problem when deploying a Meteor project to a VM on Google Compute Engine.
I set up mupx to handle the deployment and don't have any apparent issues when running
sudo mupx deploy
My mup.json is as follows:
{
  // Server authentication info
  "servers": [
    {
      "host": "104.199.141.232",
      "username": "simonlayfield",
      "password": "xxxxxxxx"
      // or pem file (ssh based authentication)
      // "pem": "~/.ssh/id_rsa"
    }
  ],
  // Install MongoDB on the server; does not destroy the local MongoDB on future setups
  "setupMongo": true,
  // WARNING: Node.js is required! Only skip if you already have Node.js installed on the server.
  "setupNode": true,
  // WARNING: If nodeVersion is omitted, 0.10.36 will be set up by default. Do not use "v", only the version number.
  "nodeVersion": "0.10.36",
  // Install PhantomJS on the server
  "setupPhantom": true,
  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance in Shippable CI
  "enableUploadProgressBar": true,
  // Application name (no spaces)
  "appName": "simonlayfield",
  // Location of app (local directory)
  "app": ".",
  // Configure environment
  "env": {
    "ROOT_URL": "http://simonlayfield.com"
  },
  // Meteor Up checks if the app comes online just after the deployment.
  // Before mup checks that, it will wait for the number of seconds configured below.
  "deployCheckWaitTime": 30
}
When navigating to my external IP in the browser I can see the Meteor site template; however, the MongoDB data isn't showing up.
http://simonlayfield.com
I have set up a firewall rule on the VM to allow traffic through port 27017:
Name: mongodb
Description: Allow port 27017 access to http-server
Network: default
Source filter: Allow from any source (0.0.0.0/0)
Allowed protocols and ports: tcp:27017
Target tags: http-server
I've also tried passing the env variable MONGO_URL, but after several failed attempts I found this post on the Meteor forums suggesting that it is not required when using a local MongoDB database.
I'm currently connecting to the VM using ssh rather than the gcloud SDK, but if it will help toward a solution I'm happy to set that up.
I'd really appreciate it if someone could provide some guidance on how I can find out specifically what is going wrong. Is the firewall rule I've set up sufficient? Are there other factors that need to be considered when using a Google Compute Engine VM specifically? Is there a way for me to check logs on the server via ssh to gain extra clarity around a connection/firewall/configuration problem?
My knowledge in this area is limited, so apologies if there's an easy fix that has evaded me.
Thanks in advance.
There were some recent meteord updates; please rerun your deployment.
Also, as a side note: I always specify a port in my mup / mupx files:
"env": {
"PORT": 5050,
"ROOT_URL": "http://youripaddress"
},
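In case it helps with the "check logs over ssh" part of the question, mupx can also tail the app container's logs from your local machine (assuming the standard mupx CLI, run from the directory containing mup.json):

# Re-run the deployment so the server picks up the newer meteord image
sudo mupx setup
sudo mupx deploy

# Tail the application container's logs (Mongo connection errors, startup crashes, etc. show up here)
sudo mupx logs -f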
I am new to developing Meteor apps and I just set up a Telescope blog, which is based on Meteor.
I want to deploy it on my own hosting (a droplet at DigitalOcean) using Meteor Up, but I don't know how to configure the MONGO_URL and MAIL_URL in the mup.json file.
Everything was set up transparently in local development, so I have no clue where the DB is or what the user or password are... Any help or pointers on where I should look?
Here is a snippet of my mup.json file:
{
  "env": {
    "PORT": 80,
    "ROOT_URL": "",
    "MONGO_URL": "mongodb://:#:/App",
    "MAIL_URL": "smtp://postmaster%40myapp.mailgun.org:adj87sjhd7s#smtp.mailgun.org:587/"
  },
Remove the MONGO_URL and it will use an internal Mongo server. (I am sure of this.)
You will need to apply for a free account at Mailgun and use your API key here.
(Guessing here.) To get started, try eliminating that key as well and you may be fine:
{ "env": { "PORT": 80, "ROOT_URL": "" },
I'm thinking of using Chef Solo as a PaaS orchestrator.
I'll have my own dashboard which will generate recipes, and my nodes will pull from them. I know I can do that by using:
chef-solo -i <interval>
But if I'd like to add more and more attributes, like a list of virtual hosts or MySQL users to deploy, I don't know how I can achieve this.
I'm looking for your ideas; I 'think' Engine Yard is using Chef to deploy PHP and Node.js apps 'on demand'; how did they achieve this?
How do I avoid re-executing an app deployment if it has already been deployed?
On the first run I'll have:
"websites" : {
"site1": { "username": "dave", "password": "password123" }
},
And then, when a new site is created, the attributes would become:
"websites" : {
  "site1": { "username": "dave", "password": "password123" },
  "site2": { "username": "bob", "password": "password123" }
}
etc.
And how do I get reporting on what chef-solo is doing?
Any ingenious idea is welcome :)
Add chef-server to your PaaS stack and use knife to push your recipes there. Knife can also be used to initially provision nodes in your PaaS, taking care of installing the chef client (configured to talk to your chef server).
The chef-solo client is useful for simple use cases, but it doesn't really scale and will require additional supporting code for items like monitoring/reporting (your question) and when you move to more complex multi-tier deployment scenarios.
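As a rough sketch of what provisioning with knife looks like (the IP, SSH user, node name, and cookbook name are placeholders):

# Upload the generated cookbook to the chef server
knife cookbook upload websites

# Bootstrap a new node: installs chef-client, registers it with the server,
# and assigns a run list so the node converges on every scheduled run
knife bootstrap <node-ip> --ssh-user ubuntu --sudo \
  --node-name web1 \
  --run-list 'recipe[websites]'

# Reporting: ask the server what it knows about the node afterwards
knife node show web1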