DB reset after pushing to OpenShift - Sinatra

I have a Sinatra project that I'm publishing through OpenShift.
Every time I push changes to the OpenShift remote, the database is reset.
I'm using sqlite3 with DataMapper.
From the DataMapper init file:
DataMapper.setup(:default, "sqlite3://#{Dir.pwd}/main.db")
<my object here>
DataMapper.finalize
DataMapper.auto_upgrade!
/config.ru
require './App'
require 'rubygems'
run Sinatra::Application
What could be the reason? Thanks

Instance data is dropped during OpenShift deploys. Persistent data should be stored in the location specified by the environment variable $OPENSHIFT_DATA_DIR. Move your database file there.
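A minimal sketch of the changed setup line, assuming the init file from the question (the fallback to Dir.pwd is an addition for local development, not from the original):

```ruby
# OPENSHIFT_DATA_DIR survives deploys; instance storage does not.
# Fall back to the current directory when running locally.
data_dir = ENV['OPENSHIFT_DATA_DIR'] || Dir.pwd
db_path  = File.join(data_dir, 'main.db')

# Same call as in the question, only the path changes
# (guarded so this sketch loads even without the data_mapper gem):
DataMapper.setup(:default, "sqlite3://#{db_path}") if defined?(DataMapper)
```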


Spring cloud config with github

I just ran into a problem using Spring Cloud Config with GitHub. I'm not very good at English; I hope I can explain this problem clearly, and thank you for reading.
The problem appeared when I added some more folders and config files to the repository that holds all my cloud config files.
First, I set uri, searchPaths, username, and password under cloud.config.server.git in the config server's application.yml file.
Then I pushed all my config files to GitHub. I called the parent repository ConfigRepo, and in this repo I have two folders named A and B.
The structure looks like this:
-ConfigRepo
  -A
    -A.yml
  -B
    -B.yml
Finally, I set the other applications' application names (A and B) in their bootstrap.yml files.
After I'd done that, I started my applications. All the client services could find the config server and got the correct config YAML file by URL. For example, client A gets its configs from
github.com/user/ConfigRepo/A/A.yml
Later I needed to add a new application C, so I created a new folder C to hold application C's config file and pushed it to GitHub.
I did the configuration work for application C just like above and started it, but I found that the URL pointing to the config file had changed. I mean, it was supposed to be
github.com/user/ConfigRepo/C/C.yml
But in fact, the GitHub URL had changed to
github.com/user/ConfigRepo/tree/master/C/C.yml
The worse thing is that not only the URL for C's config file changed; the URLs for every config file in ConfigRepo changed.
And no matter how I change the uri or searchPaths under cloud.config.server.git, the client's log shows me that the name of the located MapPropertySource is always
github.com/user/ConfigRepo/C/C.yml
As a result, I can't get any configs (only null), so none of the applications can start, even A and B, which started fine before I pushed the new config file to GitHub.
So, what should I do? Is there a way to make GitHub get rid of /tree/master in the URL? Or how should I configure my config server to support my project?
Thanks again!
We are doing a microservice project and used the configuration below to retrieve configuration from GitHub. You need to set the label to master to retrieve the configuration.
spring:
  application:
    name: ####
  profiles:
    active: ####
  cloud:
    enable: true
    config:
      uri: ${CONFIG_SERVER_URL}
      failFast: true
      retry:
        maxAttempts: 20
      label: master
      profile: ######
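On the server side, a per-application folder layout like the one in the question is usually handled with search-paths. A minimal sketch of the config server's application.yml, assuming the repository from the question (the {application} placeholder is a Spring Cloud Config Server feature that expands to each client's application name):

```yaml
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/user/ConfigRepo
          # One subfolder per application: A, B, C, ...
          search-paths: '{application}'
          default-label: master
```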

Standard practice for .wsgi secret key for flask applications on github repositories [duplicate]

This question already has answers here:
Where should I place the secret key in Flask?
(2 answers)
Closed 4 years ago.
I am building a flask web application and would like to put it on a github repo.
I notice that in the .wsgi file
#!/usr/bin/python
import sys
import logging
logging.basicConfig(stream=sys.stderr)
sys.path.insert(0,"/var/www/hashchain/")
from hashchain import app as application
application.secret_key = 'super secret key'
There is an application.secret_key for encryption...
I am guessing that the standard way of putting a Flask web app on GitHub would include cloning the entire Flask app folder in its entirety, but NOT the .wsgi file?
That way, contributors can freely run Flask in debug mode on their own localhost to develop it further, and if they really want they can deploy it to their own server (but will have to write their own .wsgi file and config for the server in their control).
Is this the correct way to think about it? I'm guessing that if I put the .wsgi file on GitHub it would be open season for hackers?
I'm also guessing that if I had hypothetically already done this, I would need to change the secret key after deleting it from the GitHub repo, because people could just look at the commit history to see it!
The general way to do this is to read it from an environment variable:
import os
application.secret_key = os.getenv('SECRET_KEY', 'for dev')
Note that it also sets a default value for development.
You can set the environment variable SECRET_KEY manually:
$ export SECRET_KEY=your_key_here  # use `set ...` on Windows
Or you can save it in a .env file at project root:
SECRET_KEY=your_key_here
Add it into .gitignore:
.env
Then you can use python-dotenv or something similar to import the variable:
# pip install python-dotenv
import os
from dotenv import load_dotenv
load_dotenv()
application.secret_key = os.getenv('SECRET_KEY', 'for dev')
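For the key value itself, one option (an addition to the answer above, not from it) is Python's standard secrets module, which produces a cryptographically strong random string:

```python
import secrets

# 32 random bytes rendered as 64 hexadecimal characters;
# paste the output into your .env file or export it.
key = secrets.token_hex(32)
print(key)
```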
As commented, the secret or any other sensitive information should never be part of a Git repository.
To illustrate that, see ubuntudesign/git-mirror-service, a simple WSGI server to create a mirror of a remote git repository on another remote.
It does include the step:
Optional secret
By default the server is unsecured - anyone who can access it can use it to mirror to repositories that the server has access to.
To prevent this, you can add a secret:
echo "79a36d50-09be-4bf4-b339-cf005241e475" > .secret
Once this file is in place, the service will only allow requests if the secret is provided.
NB: For this to be an effective security measure, the server should be only accessible over HTTPS.
The file is ignored in .gitignore.
And wsgi.py reads it if present:
secret_filename = os.path.join(script_dir, ".secret")
if os.path.isfile(secret_filename):
    with open(secret_filename) as secret_file:
        real_secret = secret_file.read().strip()
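When the server later compares a client-supplied secret against real_secret, a common precaution (not shown in the repository snippet above, and an assumption about how you might use it) is a constant-time comparison:

```python
import hmac

def secret_matches(provided: str, real_secret: str) -> bool:
    # hmac.compare_digest does not short-circuit on the first
    # mismatching character, so timing reveals nothing about the secret.
    return hmac.compare_digest(provided, real_secret)
```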

Capistrano mkdir: cannot create directory

I'm trying to use Capistrano (for the first time) to deploy a website. My web hosting is with MediaTemple, where the directory structure for a website looks like this:
domains/site.com/html/index.html
It looks like Capistrano's default deployment tries to create a /var/www directory to place your application inside.
I'm getting this error when trying to run cap production deploy:
mkdir: cannot create directory `/var/www': Permission denied
I assume I don't have the privileges to create these folders, is there a way around this instead of manually creating them?
Also, would the /var/www structure be recommended, or would it be worth dumping my application in domains/site.com?
This is my first experience with Capistrano, so any help with this is appreciated. Thanks in advance!
In a default Capistrano deployment setup, there is a commented line that looks like:
# Default deploy_to directory is /var/www/my_app
# set :deploy_to, '/var/www/my_app'
You will want to uncomment the set line and change the path to be the location you want your application deployed to.
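For the MediaTemple layout in the question, that might look like the following in config/deploy.rb (the home-directory prefix is an assumption; use whatever absolute path your account actually has):

```ruby
# config/deploy.rb
# Deploy inside the existing domains/site.com directory
# instead of the default /var/www/my_app:
set :deploy_to, '/home/youruser/domains/site.com'
```

Capistrano will then create its releases/, shared/, and current directories under that path, so you may want to point the web root at current/ rather than html/.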

capistrano (v3) deploys the same code on all roles

If I understand correctly, the standard git deploy implementation in Capistrano v3 deploys the same repository to all roles. I have a more complex app that has several types of servers, and each type has its own code base with its own repository. My database server, for example, does not need any code deployed.
How do I tackle such a problem in capistrano v3?
Should I write my own deployment tasks for each of the roles?
How do I tackle such a problem in capistrano v3?
All servers get the code, as in certain environments the code is needed to perform some actions. For example in a typical setup the web server needs your static assets, the app server needs your code to serve the app, and the db server needs your code to run migrations.
If that's not true in your environment and you don't want the code on the servers in some roles, you could easily send a pull request to add the no_release feature from Cap2 back into Cap3.
You can of course take the .rake files out of the Gem, and load those in your Capfile, which is a perfectly valid way to use the tool, and modify them for your own needs.
The general approach is that if you don't need code on your DB server, for example, why is it listed in your deployment file?
I can confirm you can use no_release: true to disable a server from deploying the repository code.
I needed to do this so I could specifically run a restart task for a different server.
Be sure to give your server a role so that you can target it. There is a handy function called release_roles() you can use to target servers that have your repository code.
Then you can separate any tasks (like my restart) to be independent from the deploy procedure.
For Example:
server '10.10.10.10', port: 22, user: 'deploy', roles: %w{web app db assets}
server '10.10.10.20', port: 22, user: 'deploy', roles: %w{frontend}, no_release: true
namespace :nginx do
  desc 'Reloading PHP will clear OpCache. Remove Nginx cache files to force regeneration.'
  task :reload do
    on roles(:frontend) do
      execute "sudo /usr/sbin/service php7.1-fpm reload"
      execute "sudo /usr/bin/find /var/run/nginx-cache -type f -delete"
    end
  end
end

after 'deploy:finished', 'nginx:reload'
after 'deploy:rollback', 'nginx:reload'
# Example of a task for release_roles() only
# (wrapped in a namespace so the 'composer:update' hook below resolves)
namespace :composer do
  desc 'Update composer'
  task :update do
    on release_roles(:all) do
      execute "cd #{release_path} && composer update"
    end
  end
end

before 'deploy:publishing', 'composer:update'
I can think of many scenarios where this would come in handy.
FYI, this link has more useful examples:
https://capistranorb.com/documentation/advanced-features/property-filtering/

CherryPy : Accessing Global config

I'm working on a CherryPy application based on what I found in a BitBucket repository.
As in that example, there are two config files: server.cfg (aka "global") and app.cfg.
Both config files are loaded in the serve.py file:
# Update the global settings for the HTTP server and engine
cherrypy.config.update(os.path.join(self.conf_path, "server.cfg"))
# ...
# Our application
from webapp.app import Twiseless
webapp = Twiseless()
# Let's mount the application so that CherryPy can serve it
app = cherrypy.tree.mount(webapp, '/', os.path.join(self.conf_path, "app.cfg"))
Now, I'd like to add the Database configuration.
My first thought was to add it to server.cfg (is this the best place, or should it be located in app.cfg?).
But if I add the Database configuration in the server.cfg, I don't know how to access it.
Using :
cherrypy.request.app.config['Database']
This works only if the [Database] section is in app.cfg.
I tried printing cherrypy.request.app.config, and it shows only the values defined in app.cfg, nothing from server.cfg.
So I have two related questions:
Is it better to put the database configuration in the server.cfg or app.cfg file?
How do I access the server.cfg (aka global) configuration in my code?
Thanks for your help! :)
Put it in the app config. A good question to help you decide where to put such things is, "if I mounted an unrelated blog app at /blogs on the same server, would I want it to share that config?" If so, put it in server config. If not, put it in app config.
Note also that the global config isn't sectioned, so you can't stick a [Database] section in there anyway. Only the app config allows sections. If you wanted to stick database settings in the global config anyway, you'd have to consider config entry names like "database_port" instead. You would then access it directly by that name: cherrypy.config.get("database_port").
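For example, with a hypothetical [Database] section in app.cfg (all key names here are illustrative, not from the question):

```ini
[Database]
host = 'localhost'
port = 5432
name = 'myapp_db'
```

Inside a request handler, cherrypy.request.app.config['Database']['port'] then returns the integer 5432, since CherryPy parses app-config values as Python literals.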