We are implementing an environment in Visual Studio Team Services (VSTS).
We have a Git Repo tied to VSTS
The problem is how to keep the config files separate so they retain their unique values in their local environments. But without uploading the configs, VSTS fails to build within its environment.
We don't want the config settings that are in VSTS to always sync to local environments, nor do we want to push our local configs to master. Obviously we can exclude them on push, but the question is how to configure VSTS in a manner that allows it to build successfully without requiring config files to be uploaded to the repo?
Reviewing this post, I'm not sure whether or not Repo based configs are required: How to handle multiple configurations in VSTS Release management?
And yes we will eventually have multiple configs to allow Staging and Production releases.
The direct answer is no. Usually Git only tracks source code for projects, and VSTS can usually build successfully without config files. I'm not sure what your project is, so let's deal with the situation where you need to push the config file to VSTS but also don't want it to affect local settings on git pull (assume the name of the config file is project.config):
Option 1:
If it's OK for you to rename project.config when you build in VSTS, you can use project.config for your local environments and projectRemote.config for the remote repo: ignore project.config in .gitignore, and create a projectRemote.config file to push to the remote.
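A minimal sketch of that setup, assuming the file names above (projectRemote.config is just a committed template that each environment copies to its ignored local project.config):
echo 'project.config' >> .gitignore        # keep the local config out of the repo
cp project.config projectRemote.config     # commit a sanitized template instead
git add .gitignore projectRemote.config
git commit -m "Ignore local project.config, track projectRemote.config template"
In the VSTS build definition you would then add a step before the build (for example a Copy Files or Command Line task) that copies projectRemote.config back to project.config.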
Option 2:
Keep the local project.config version when pulling changes from the remote. You can keep local versions by:
touch .gitattributes
echo 'project.config merge=ours' >> .gitattributes
git config --global merge.ours.driver true
Note: the custom merge driver only seems to take effect when the pull is not a fast-forward, since a fast-forward just moves the branch pointer and never performs a merge.
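If you want to be sure the driver gets a chance to run, you can force a merge commit when pulling; a hedged example, assuming the remote is origin and the branch is master:
git pull --no-ff origin master   # force a real merge so the 'ours' driver for project.config applies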
Related
How can I point the _data folder at another Git repository? The whole project is cloned from GitHub and I want to keep it that way because it is easy to update, but the _data folder needs to sync with a separate data repository. How can I configure this?
Actually it's just one YAML file: _data/authorlist.yaml
What should I put into package.json if I want a script to sync the YAML file with GitHub?
Thank you.
You can use Git submodules to keep a shared repository of data files that you can use in multiple projects. This will allow you to keep a reference to the data repository in your projects, and pull updates to this repository with a single command. The great thing is that submodules are a built-in feature of Git, so it's independent of any NPM scripts, environments (like a bash script would be) or frameworks. See the link above for documentation on how to set up and work with submodules.
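A minimal sketch, assuming a hypothetical URL for the data repository and that _data is not already tracked by the main repo (if it is, remove it from the index first with git rm -r --cached _data):
git submodule add https://github.com/your-user/data-repo.git _data   # hypothetical URL
git commit -m "Add _data as a submodule"
# later, to pull the latest data:
git submodule update --remote _data
git commit -am "Update _data submodule"
If you still want an npm script for this, it could simply wrap that update command, e.g. a "sync-data" entry in package.json's scripts section that runs git submodule update --remote _data.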
I want to run TeamCity processes based on file existence.
I have two TeamCity processes (Dev and Prod):
Dev should be run if there is DevParam file in repo (or in specified location).
Prod should be run if there is ProdParam file.
I want to run exactly one process after each push to repository.
These files will be added and removed like this:
[0] Repository has DevParam file
[1] Pushed, there is still DevParam file -> Dev process should be run
[2] Pushed, removed DevParam file and added ProdParam -> Prod process should be run
[3] Pushed, there is still ProdParam -> Prod should be run
I tried to create a trigger with rules, but I failed (a rule like +:DevParam also runs on file removal).
Git recognizes adding and removing these files as a move with rename, so that may be relevant.
File management like this is not a normal process. I strongly advise you to use a branch flow instead. For your example, use a develop branch (instead of the DevParam file) for all your developers and a master branch for prod.
Try the following advice.
The developers code in the dev branch. Each developer works only on this branch.
You should create a build configuration with a VCS trigger on your dev branch. After each new commit, the configuration will be triggered.
Once you decide that the code in the dev branch is ready for production, you just merge it all into master. And now you can also trigger the same configuration, only for the master branch.
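As a hedged sketch, the VCS trigger branch filters for the two build configurations could look like this (the exact branch spec depends on your VCS root settings, and the branch names are assumptions):
Dev configuration trigger:
+:refs/heads/dev
Prod configuration trigger:
+:refs/heads/master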
For more information about gitflow-workflow read this
To set expectations, I'm new to build tooling. We're currently using a hosted agent but we're open to other options.
We've got a local application that kicks off a build using the VSTS API. The hosted build starts with the Get Sources step, which clones a GitHub repo to the file system in VSO. The next steps are copying over a large number of files (upwards of about 10,000), building the solution, and running the tests.
The problem is that the cloned GitHub repo is on the file system in Visual Studio Online, while my 10,000 input files are on a local machine. Copying them over for every build seems like a bit much, especially since we plan on doing CI and may have many builds kicked off per day.
What is the best way to move the input files into the cloned repo so that we can build it? Should we be using a hosted agent for this? Or is it best to do this on our local system? I've looked in the VSO docs but haven't found an answer there. I'm not sure if I'm asking the right questions here.
There are several ways to handle this situation; follow the one that is closest to yours.
Option 1. Add the large files to the GitHub repo
If the local files are only related to the code in the GitHub repo, you should add the files to the same repo so that all the required files are cloned in the Get Sources step; then you can build directly without a copy-files step.
Option 2. Manage the large files in another Git repo, and then add that repo as a submodule of the GitHub repo
If the local large files are also used by other code, you can manage them in a separate repo and add it as a submodule of the GitHub repo with git submodule add <URL for the separate repo>. In your VSTS build definition, select Checkout submodules in the Get Sources step. Then the large files can be used directly when you build the GitHub code.
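For reference, Checkout submodules roughly corresponds to the following commands on a fresh local clone (URL and folder names are placeholders):
git clone https://github.com/your-org/your-repo.git
cd your-repo
git submodule update --init --recursive   # fetch the contents of the large-files submodule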
Option 3. Use a private agent on your local machine
If you don't want to add the large files to the GitHub repo or to a separate Git repo for some reason, you can use a private agent instead. But the build run time may not improve noticeably, because the only change is the difference between copying local files to a server and copying local files on the same local machine.
Someone manually uploaded files directly to the web server; as a result, GitHub no longer matches the physical file contents in production.
And there are many files, I'm afraid.
1 - How do I refresh the GitHub master so that it shows the correct contents of the production server?
2 - And how do I refresh my local dev copy according to the refreshed GitHub master from step 1?
You need to somehow copy all the files from the web server to your local repo: Git will detect the new/modified/deleted files in your local repo after that copy.
Once that synchronization is done in your local repo, add, commit and push to GitHub.
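A hedged sketch of both steps, assuming SSH access to the web server and placeholder paths and branch names:
# step 1: mirror production into your local repo, then commit and push
rsync -av --delete --exclude='.git' user@webserver:/var/www/site/ .
git add -A                          # stages new, modified and deleted files
git commit -m "Sync repository with current production contents"
git push origin master
# step 2: on your local dev clone, just pull the refreshed master
git pull origin master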
Should I use .gitignore for this?
I want to maintain an .htaccess file in my local repository which is also the root directory of apache. The .htaccess file is different from what I have in a remote repository which also serves as my production server.
What should I do so that I have a "generic" .htaccess in the github repository while all other repositories may maintain their own copies which will not be uploaded to any of the remote repositories? I want the file to be retained locally only.
This is also the same case for configuration files, for instance, config.php contains database username/host/password/name which may be different among the repositories.
Thanks!
Instead of ignoring the files, you could create a "production" branch just for the production server, always merging the "master" changes into that new branch before every deploy. In the production branch, you can commit configuration changes that apply only to the production environment.
This same strategy applies to any number of environments.
Remember: never merge the "production" branch back to "master", otherwise you'll end up with a lot of problems with the config files. If you need to fix something on production, use a "bugfix" branch, starting on the most recent master commit that is merged into production, merging it back to production AND master after the work is done.
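A minimal sketch of that flow, using assumed branch names and treating .htaccess/config.php as the environment-specific files:
# one-time setup: create the production branch and commit the production-only configs there
git checkout -b production
git commit -am "Production-specific .htaccess and config.php"
# before every deploy: bring the latest master into production (never the other way around)
git checkout production
git merge master
# bugfix flow: branch from the most recent master commit already merged into production,
# then merge the fix into both master and production
git checkout -b bugfix master
# ...fix and commit...
git checkout master && git merge bugfix
git checkout production && git merge bugfix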