GitHub conflict when someone manually FTPs files to the remote server

Someone manually uploaded files directly to the web server; as a result, the GitHub repo is no longer the same as the physical file contents in production.
And there are many files, I'm afraid.
1 - How do I refresh the GitHub master so that it shows the correct contents of the production server?
2 - And how do I refresh my local dev copy from the refreshed GitHub master in step 1?

You need to somehow copy all the files from the web server to your local repo: Git will detect the new/modified/deleted files in your local repo after that copy.
Once that synchronization is done in your local repo, add, commit and push to GitHub.
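For example, a minimal sketch of that copy-and-commit cycle, assuming SSH access to the server and a web root of /var/www/html (both hypothetical):
# run inside the local clone of the repository
rsync -avz --delete --exclude='.git' user@example.com:/var/www/html/ ./   # pull the production files over the working tree
git add -A                                                                # stage new, modified and deleted files
git commit -m "Sync repository with current production contents"
git push origin master
The --exclude='.git' keeps the repository metadata intact, while --delete removes local files that no longer exist on the server.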

Related

Why does the GitHub repository keep the original files after I moved them to a new folder within the same repo?

I created a repo on Github.
I created 2 files
I created a new folder
moved the 2 files into the new folder in the local repo using Git Bash
All files are up-to-date between local and remote
But GitHub still kept the original files without deleting them.
At the same time, when I used git pull, the 2 remaining files were not downloaded and did not appear in my local repo.
I am confused.
Can someone give me an explanation and a suggestion other than just deleting them from the GitHub webpage?
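For context, GitHub only drops the old paths if the removal itself is staged and committed; a hedged sketch of the usual move sequence (file and folder names are hypothetical):
git mv file1.txt file2.txt newfolder/   # stages both the deletions and the new paths in one step
# after a plain mv you would instead need: git add -A
git commit -m "Move files into newfolder"
git push origin master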

GitHub Desktop doesn't pick and push the entire repository

I am using the GitHub Desktop application on my local machine, and when I create and complete my repository (a web directory) on my local machine, I then push it to GitHub online through the desktop application. But here is my problem:
Sometimes it doesn't pick up and push all of the files/folders from my local repository; it only picks 3 files, while my repository has 5 folders and one index.html file.
And sometimes it works perfectly fine. I never understand where my problem is. Any thoughts on this?
Do a git status before your push, as well as a git show HEAD to check the content of your last commit.
That way, you will see if some files remain that were not added to the index or not committed.
And you will see whether every file you wanted is in a commit.
If one file is consistently ignored, see if it is actually ignored by Git with:
git check-ignore -v -- a/file
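Put together, a quick pre-push check might look like this (the file path is a placeholder):
git status                                    # anything untracked or not yet staged?
git show --stat HEAD                          # which files actually made it into the last commit?
git check-ignore -v -- path/to/missing-file   # which ignore rule, if any, excludes it?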

How do I handle a large number of files as an input to a build when using VSTS?

To set expectations, I'm new to build tooling. We're currently using a hosted agent but we're open to other options.
We've got a local application that kicks off a build using the VSTS API. The hosted build includes a Get sources step that clones the GitHub repo into the file system in VSO. The next steps need to copy over a large number of files (upwards of about 10,000), build the solution, and run the tests.
The problem is that the cloned GitHub repo is in the file system in Visual Studio Online, and my 10000 input files are on a local machine. That seems like a bit much, especially since we plan on doing CI and may have many builds being kicked off per day.
What is the best way to move the input files into the cloned repo so that we can build it? Should we be using a hosted agent for this? Or is it best to do this on our local system? I've looked in the VSO docs but haven't found an answer there. I'm not sure if I'm asking the right questions here.
There are a few ways to handle this; follow the option that is closest to your situation.
Option 1. Add the large files to the GitHub repo
If the local files are only related to the code in the GitHub repo, you should add the files to the same repo so that all the required files are cloned in the Get sources step; then you can build directly without a copy-files step.
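In that case the copy is just followed by an ordinary add/commit/push (a minimal sketch; the commit message and paths are placeholders):
# inside the clone of the GitHub repo, after copying the input files in
git add .
git commit -m "Add build input files"
git push origin master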
Option 2. Manage the large files in another Git repo, and add that repo as a submodule of the GitHub repo
If the local large files are also used by other code, you can manage them in a separate repo and add it as a submodule of the GitHub repo with git submodule add <URL for the separate repo>. In your VSTS build definition, select Checkout submodules in the Get sources step. Then the large files can be used directly when you build the GitHub code.
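A minimal sketch of that setup, assuming the large files live in a hypothetical repo at https://github.com/your-account/large-files:
# inside the clone of the main GitHub repo
git submodule add https://github.com/your-account/large-files.git large-files
git commit -m "Add large-files repo as a submodule"
git push origin master
# then select "Checkout submodules" in the Get sources step of the build definition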
Option 3. Use a private agent on your local machine
If you don't want to add the large files to the GitHub repo or to a separate Git repo for some reason, you can use a private agent instead. The build run time may not improve noticeably, though, because the only difference is between copying the local files to the hosted server and copying them within the same local machine.

Connecting a GitHub repository with my webpage

Hey, how do I connect my webpage with a GitHub repository? I mean, when I merge a pull request it should immediately make the change on my webpage. I was using GitHub Pages but now I would like to include some PHP and it doesn't work. Thanks for any help.
Manual: after each push to the repo you'd have to pull on your server for the current version. (You don't want this...)
Automation: first you need a server (Linux/Windows) with Git installed, and clone the repository into your web server directory (e.g. Apache: /var/www/html). Then you need a script which automatically pulls the new changes onto your server, and a webhook to trigger that script. That way you'd have the current version of your repo on the server all the time. (Push --> webhook triggers script --> server repo gets the new changes.)
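A minimal sketch of such a pull script, assuming the clone lives in /var/www/html and the webhook endpoint on the server simply executes it (paths and branch are placeholders):
#!/bin/bash
# deploy.sh - executed by the webhook endpoint after every push
cd /var/www/html || exit 1
git pull origin master            # bring the server clone up to date
# optionally fix ownership for the web server user, e.g.:
# chown -R www-data:www-data /var/www/html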
Alternative: create NOT a GitHub BUT a plain Git repository hosted on your own server (tutorial for Linux only). You could push into it as well, and you'd have the current version of your site on the server without going through GitHub.
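A rough sketch of that self-hosted alternative, using a bare repository plus a post-receive hook that checks the pushed code out into the web root (all paths are placeholders):
# on the server: create a bare repository to push into
git init --bare /srv/git/site.git
# /srv/git/site.git/hooks/post-receive (make it executable with chmod +x)
#!/bin/bash
GIT_WORK_TREE=/var/www/html git checkout -f master
# on the dev machine: add the server as a remote and push to deploy
git remote add live ssh://user@example.com/srv/git/site.git
git push live master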

How Do I Update a Live Server Correctly Using Mercurial?

I'm new to Mercurial and version control, and although I'm only working on personal PHP application projects (until I hopefully get a job soon) I'm well overdue learning how it all works.
I've been reading about Mercurial all day, but I'm still confused on a few elements...
Firstly, I understand Mercurial CAN push my files straight to my live server, but I don't see many tutorials or examples explaining how this is done, which leads me to think it's not used often? I currently use FTP to upload my files, and it's error-prone trying to keep track of which files have been modified, so I'd obviously like to eliminate that.
I also see services like BitBucket being mentioned a lot, but if I'm pushing to BitBucket how do I then get my files to my live server? Can I get only the changed files to upload via FTP, or do I need to install Mercurial on my server too or something?
Apologies if this is a basic question; I'm just a little lost as to how companies would/should use this service, and how files and uploads are handled elegantly. How should I go about version control on a personal project?
There are many ways to do that, but I'll try to narrow it down to the basic steps involved in a scenario using BitBucket (a condensed command sketch follows the list):
1) Install Mercurial on both your dev machine and your live server.
2) Create a repository in BitBucket.
3) Clone the repository to your dev machine using the URL that appears in BitBucket, e.g.:
hg clone https://your_user@bitbucket.org/your_account/your_repos .
4) Clone the repository to your live server in the same way.
5) Do your dev and commit your code to the local repository on your dev machine (using hg commit). Then push the changesets to BitBucket using hg push.
6) Once you're ready to deploy the changes to your live server, log in to your live server and run hg pull -u.
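Condensed into commands, the cycle looks roughly like this (the URL and commit message are placeholders):
# on the dev machine
hg clone https://your_user@bitbucket.org/your_account/your_repos .
# ...edit files...
hg commit -m "Describe the change"
hg push
# on the live server (clone once as in step 4, then run per deploy)
hg pull -u          # pull the new changesets and update the working copy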
I just use rsync to upload everything. If you're working by yourself, it's simple and works fine.
I set up SSH key access, and then made a bash shortcut ,p (taking the target directory as an argument):
,p() { rsync -avz --delete ./ "user@server.com:/var/www/html/$@/"; }
Then, on my local host I can type ,p images and the current directory will be uploaded to mysite/images.
If you're always uploading to the same place, you can make a shortcut with no argument:
alias ,pm='rsync -avz --delete ./ user@server.com:/var/www/html/'
Finally, if you just want to type the command:
rsync -avz --delete ./ user@server.com:/var/www/html/