Regarding the database on PythonAnywhere

I am following the Django Girls tutorial, according to which I added new posts to the blog in the Django admin. I created a template using Django templates to display this dynamic data. I checked it by opening 127.0.0.1:8000 in the browser and I was able to see the data. Then, to deploy this site on PythonAnywhere, I pushed the data to GitHub from my local repo using git push and did git pull on PythonAnywhere from GitHub. All the files, including the db.sqlite3 (database) file, were updated properly on PythonAnywhere, but I still could not see the data after running my web app on PythonAnywhere. Then I manually removed the db.sqlite3 file from PythonAnywhere and uploaded the same file from my local desktop, and it worked. Why did this work? And is there an alternative to this?

That's kind of odd; if the SQLite DB was in the Git repository and was uploaded correctly, I'd expect it to work. Perhaps the database is in a different directory? On PythonAnywhere, the working directory of your running web app might be (actually, probably is) different from the one on your local machine. And if you're specifying the database using a relative path (which you probably are), that might mean the database you created locally ends up somewhere different from where the app looks for it on PythonAnywhere.
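If that is the cause, one way to rule it out is to build the database path from the settings file's own location, so it no longer depends on the working directory. A minimal sketch of the relevant settings.py section (standard Django, nothing project-specific assumed):

    import os

    # settings.py lives one level inside the project, so two dirname()
    # calls give the project root regardless of the working directory.
    BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            # An absolute path; a bare 'db.sqlite3' here would be resolved
            # relative to wherever the process happens to be started from.
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }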
BTW, from my memory of the Django Girls tutorial (I coached a session a few months ago), you're not actually expected to put the database in your Git repository. That's not how websites are normally managed. You'd normally have one database locally, for testing, where you can put random test data, and a completely different one on your live site, with posts for public consumption.
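One way to express that split, building on the settings sketch above, is to let an environment variable pick the database. The variable name and server path below are made up, and you would also add db.sqlite3 to .gitignore so it never gets committed:

    import os

    # BASE_DIR as defined in the sketch above.
    if os.environ.get('DJANGO_ENV') == 'production':  # hypothetical variable
        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.sqlite3',
                # Hypothetical server-side path; the live data stays here
                # and is never checked in to Git.
                'NAME': '/home/myuser/myproject/db.sqlite3',
            }
        }
    else:
        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.sqlite3',
                'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
            }
        }

After pulling the code on the server, running python manage.py migrate creates the empty tables in whichever database the settings point at.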

Related

.db file from Heroku app does not sync with GitHub repository [duplicate]

I have this app on Heroku that deploys code from a GitHub repository. In this GitHub repository there is a database file, "something.db", which has all the necessary fields but is empty of any records. When I build the app on Heroku and deploy it, it pulls the code from the GitHub repository, including the "something.db" database file. When people access the website and, for example, register a user account, the app inserts a new record into the "something.db" database. However, I have a concern about this:
How would I be able to sync this "something.db" database file from the Heroku app back to the GitHub repository? I.e. how can the GitHub repository see the data changes in the "something.db" database file once I create a new user on the website?
I am asking this in the context that every time I need to push code to that GitHub repository, I would need to redeploy the app from Heroku, which will then also deploy the empty "something.db" database file. I do not want that, since I want to retain the data that was added while the website was live.
I know this method of storing data in a .db file is not professional, but I would like to keep things as simple as possible since this website is small scale and does not hold that much data.
The app I am using is a Python Flask app, and the database uses Flask-SQLAlchemy (SQLite).
I tried looking online for answers on how to push the changes from Heroku back to GitHub but could not find anything about it. All I saw was how to deploy an app FROM GitHub to Heroku, not how to push code FROM Heroku back to the GitHub repository.
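A note on the premise (this thread was closed as a duplicate, so the following is an aside, not the accepted answer): Heroku's dyno filesystem is ephemeral, so records written to a bundled SQLite file are lost on every redeploy and on routine dyno restarts, not only when the file is overwritten from GitHub. The usual approach is therefore to keep the data outside the repository entirely rather than trying to sync it back. A minimal Flask-SQLAlchemy sketch, assuming a DATABASE_URL config var is set on Heroku:

    import os

    from flask import Flask
    from flask_sqlalchemy import SQLAlchemy

    app = Flask(__name__)

    # Read the database location from the environment; fall back to the
    # bundled SQLite file only for local development. With this in place,
    # the repository never needs to carry live data at all.
    app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get(
        'DATABASE_URL', 'sqlite:///something.db')

    db = SQLAlchemy(app)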

Why are my Meteor database (MongoDB) contents not uploaded to the GitHub repo?

I uploaded a Meteor project to a GitHub repository. Once I finished uploading, I downloaded the zip file to check whether it was functioning properly. The project runs; however, there is not a single collection in MongoDB. (Note: my Meteor version is 1.8.)
Can someone please help me understand why the database collections are not stored/uploaded in the GitHub repo, and how they can be stored there?
That is the designed behaviour of the classic three-tier web app architecture: your app code is kept separate from your app data.
Technically, the MongoDB data of your Meteor project in dev mode (i.e. when you start it with meteor run) is in your Meteor project .meteor/local folder, which is correctly excluded from version control by .gitignore.
Note that in production (i.e. when you use your app after doing meteor build) you will have to provide a MONGO_URL environment variable to specify where your MongoDB can be reached, since your local dev data will not be shipped with the built app.
Now you can back up your data (e.g. with mongodump) and use the dump to fill up your new MongoDB. You can also do it automatically, typically on Meteor server startup, if you find empty collections.
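Meteor's startup hook itself would be JavaScript, but the seeding idea is framework-agnostic. A rough Python sketch with pymongo, where the database name, collection, and seed records are all made up for illustration:

    from pymongo import MongoClient

    # Hypothetical seed data; in practice this would come from a mongodump
    # (restored with mongorestore) or a JSON fixture file.
    SEED_POSTS = [{"title": "Hello", "body": "First post"}]

    client = MongoClient("mongodb://localhost:27017")  # or the MONGO_URL value
    db = client["myapp"]  # hypothetical database name

    # Only seed when the collection is empty, mirroring the "fill up if
    # you find empty collections" idea above.
    if db.posts.count_documents({}) == 0:
        db.posts.insert_many(SEED_POSTS)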

How can I put a local Moodle installation under version control

I have set up a local installation of Moodle, created some courses there, and did some customization. Now I want to:
1) put this under version control in an existing Bitbucket project, and
2) deploy it to a server.
Can someone please advise how this can be done?
The code is already under version control - just pull it down using git (see https://docs.moodle.org/en/Git_for_Administrators for more details).
If, however, you are asking how to transfer the configuration data from one site to the other, you can do that by exporting your database and importing it into the remote site (the instructions for doing this will vary depending on the database software you are using), as well as copying your moodledata directory. This assumes you are fine with completely replacing the remote site with the data from your local site.
If you just want to transfer courses, then backup each course and restore them one at a time to the live site.
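For the database-plus-moodledata route, a rough Python sketch of the export side. The paths, database name, and MySQL user are all hypothetical, and a Postgres-backed Moodle would use pg_dump instead of mysqldump:

    import shutil
    import subprocess

    DB_NAME = "moodle"                  # hypothetical database name
    DUMP_FILE = "/tmp/moodle.sql"
    MOODLEDATA = "/var/www/moodledata"  # hypothetical moodledata path

    # Export the database; -p makes mysqldump prompt for the password.
    with open(DUMP_FILE, "w") as out:
        subprocess.run(
            ["mysqldump", "-u", "moodleuser", "-p", DB_NAME],
            stdout=out, check=True,
        )

    # Pack up the moodledata directory for transfer to the remote site.
    shutil.make_archive("/tmp/moodledata", "gztar", MOODLEDATA)

Both files would then be copied to the server, where the dump is imported and the archive unpacked.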

Deployment with CakePHP

I have a CakePHP website that is currently live. I would like to keep working on the site without impacting the deployed site.
What is the best way to keep a development version separate from the deployed version, and then merge the two when appropriate?
Currently, I am using Git for version control.
Thanks!
First thing, get to know a version control system; Subversion, Git, Bazaar, and Mercurial are some examples. They are a safety net that can save your bacon, because they save EVERY change to EVERY file in your fileset.
Typically, I have a local development server and also a subdomain (staging.example.com) on the production server. I do my heavy development on the local development server and use SVN to record all my site changes. Then, using a shell account on the production server, I check out the new version of the software to the staging subdomain. If it works OK there, I can update the live site with just a single SVN checkout.
I've also heard of people placing a symbolic link at the location where the site root should be (/var/www/public_html) that points to the live directory (/var/www/site_ver_01234), then setting up the new version in a parallel directory (/var/www/site_ver_23456). Finally, you just recreate the symbolic link so it points to the new version's directory. The switch is instantaneous and transparent. I'm sorry I'm not clearer on this method; I read about it a while back but never tried it myself.
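That symlink switch can be sketched in a few lines of Python, reusing the hypothetical paths from the paragraph above:

    import os

    DOCROOT = "/var/www/public_html"         # the path the web server serves
    NEW_RELEASE = "/var/www/site_ver_23456"  # freshly prepared version

    # Create the new link under a temporary name, then rename it over the
    # old one. On POSIX systems rename() is atomic, so visitors never see
    # a half-finished switch; rolling back is the same dance with the old
    # release directory.
    tmp_link = DOCROOT + ".tmp"
    os.symlink(NEW_RELEASE, tmp_link)
    os.replace(tmp_link, DOCROOT)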
I've also looked at Bazaar (another version control system), which has a plugin that automatically FTPs any changed files to a given server every time a version is checked in.
The general idea, first of all, is to use a version control system. With one, you develop your site on your local machine, possibly with several people, with a central repository somewhere.
When you're happy with a certain revision and would like to deploy it, you "tag" it. That means you freeze the state of that revision and separate it from the continually evolving "trunk". What that means specifically depends on your version control system.
You then take that tagged revision and copy it to the live server. Possibly you may copy it to a "staging server" before to test it in another environment. This copying can be as simple as overwriting all existing files using FTP, or it can involve automated deployment systems which will take care of the details for you and allow you to roll back an unsuccessful deployment. If a database is involved as well, you're probably also looking at database schema migration scripts that need to be run.
Each of these steps can be done in many different ways, and you'll have to figure out what's the best approach for you. If you're not doing so already, start using a version control system such as SVN or git. Do it now! Then you might want to google or search on SO about different techniques to tag and branch using that system. For serious deployment, start with a keyword like Capistrano or one of its PHP clones.

Best practice for updating a website

Currently my workflow is as follows:
Locally, I maintain a Git repo for each website I am working on. When the time comes to publish something, I compress the folder and upload this single file to the production server via SSH; then I decompress it, test the changes, move them to the live folder, and get rid of the .git folder.
I was wondering if using a Git repo on the live server is a good idea. It seems to be at first, but it could be problematic if a change doesn't look the same on the production server as it does on the local development machine... this could start a fire...
What about creating a bare repo in some folder on the production server, then cloning from there to the public folder, thus pushing updates from the local machine to the bare repo and pulling from the bare repo into the public folder of the production server... could anyone please provide some feedback?
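For reference, the bare-repo idea in the previous paragraph is a common pattern; one frequent variant skips the clone in the public folder and has a post-receive hook check the files out directly after each push. Git hooks are just executables, so the hook can even be a Python script. A sketch with hypothetical paths:

    #!/usr/bin/env python3
    # Saved as hooks/post-receive inside the bare repo and made executable.
    import subprocess

    GIT_DIR = "/home/deploy/site.git"   # the bare repository
    WORK_TREE = "/var/www/public_html"  # the live public folder

    # After each push, force-check-out the latest master into the work
    # tree; no .git folder ever appears under the public folder.
    subprocess.run(
        ["git", "--git-dir", GIT_DIR, "--work-tree", WORK_TREE,
         "checkout", "-f", "master"],
        check=True,
    )

A side benefit is that the "get rid of the .git folder" step disappears, since the checkout never contains one.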
Later I read about Capistrano (http://capify.org), but I have no experience with this software...
In your experience, what is the best practice/methodology for accomplishing website deployments/updates?
Thanks in advance and for your feedback.
I don't think that our method can be called best practice, but it has served us well.
We have several large databases for our application (20 GB+), so maintaining local copies on each developer's computer has never really been an option, and even though we don't develop against the live database, we do need to develop against a database that is as close to the real thing as possible.
As a consequence we use a central web server as well, and keep a development branch of our Subversion trunk on it. Generally we don't work on the same part of the system at once, but when we do need to, or when someone is making a lot of substantial changes, we branch the trunk and create a new vhost on the dev server.
We also have a checkout of the code on the production servers, so after we're finished testing we simply do an svn update on the production servers. We've implemented a script that executes the update command on all servers over SSH. This is extremely convenient, since our code base is large and takes a long time to upload. Subversion will only copy the files that have actually changed, so it's a lot faster.
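Such a script can be very small. A sketch in Python, assuming key-based SSH authentication; the host names and checkout path are hypothetical:

    import subprocess

    SERVERS = ["web1.example.com", "web2.example.com"]  # hypothetical hosts
    CODE_PATH = "/var/www/app"                          # checkout location

    for host in SERVERS:
        # svn update only transfers changed files, which is what makes
        # this fast for a large code base.
        subprocess.run(["ssh", host, f"svn update {CODE_PATH}"], check=True)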
This has worked really well for us, and the only thing to watch out for is making changes on the production servers directly (which of course is a no-no from the beginning) since it might cause conflicts when updating.
I had never thought about having a copy of the repository on the server. After reading this, I thought it might be cool... However, updating the files directly in the live environment without testing is not a great idea.
You should always update a secondary environment that matches the live one exactly (web server + DB version, if any) and test there. If everything goes well, then put the live site into maintenance mode, update the files, and go live again.
So I wouldn't make the live site a copy of the repository, but you could do so with the test environment. You'll save the SSH + compressing time, plus you can check out any specific revision you'd like to test.
Capistrano is great. The documentation is spotty, but the mailing list is active, and getting it set up is pretty easy. Are you running Rails? Capistrano has some neat built-in stuff for Rails apps, but it is also used fairly frequently with other types of web apps.
There's also Webistrano, which is based on Capistrano but has a web front-end. Haven't used it myself. Another deployment system that seems to be gaining some traction, at least among Rails users, is Vlad the Deployer.