I'm working in a local branch and want to try my changes on the staging server, but I don't want to commit them. Can I deploy local changes without committing?
I know about the deploy:upload recipe. I need a way to deploy several files, or the whole working directory.
Thanks.
The most important thing Capistrano does is let you execute code on a remote server; what we call "deploy" is a set of default scripts that perform the many small tasks required to set up a new version of an application on the server.
So it is possible to write your own task along these lines (an untested sketch; the task name is just an example):
desc "Deploy the local working directory without committing"
task :deploy_working_copy do
  # pack the sources locally
  system "tar -czf /tmp/package.tgz *"
  # upload the package to the server
  upload "/tmp/package.tgz", "/tmp/package.tgz"
  # remove the old files and unpack the sources on the server
  run "cd /app_path/ && rm -rf * && tar -xzf /tmp/package.tgz"
  # overwrite files with server-side configs (force recursive symlinks), e.g. database.yml
  run "cp -flrs /app_shared_path/* /app_path/"
  # restart the application -- the touch works for Passenger; use your own server's restart command
  run "cd /app_path/ && touch tmp/restart.txt"
end
I did a similar setup for deployment once, before I had access to git.
I deploy some cached (minified, etc.) JavaScript files from a Rails app. The simplest way is just to do this in a Capistrano task:
top.upload("public/javascripts/cache", "#{current_path}/public/javascripts/cache")
This will use scp to upload the entire 'cache' directory.
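For repeatability this can live in its own Capistrano task. A minimal sketch, assuming Capistrano 2 (the task name is an example):
desc "Upload the precompiled JavaScript cache to the current release"
task :upload_js_cache do
  # top.upload runs upload from the top-level namespace;
  # :via => :scp with :recursive => true copies the whole directory tree
  top.upload("public/javascripts/cache",
             "#{current_path}/public/javascripts/cache",
             :via => :scp, :recursive => true)
end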
I have an Ubuntu staging server with Apache, PHP, MySQL, Git, and Composer installed. I have a private Git repository on Bitbucket, and the project is already cloned to the staging server and to my local development machine. The Laravel setup works perfectly on both machines.
Currently, whenever there is an update to the Git repository, I log in to the staging server, pull the latest code, and run composer install, npm install, and bower install.
I want to automate this process with Capistrano. I checked the tutorials online, but all of them clone the repository afresh each time I issue a deploy command, creating a fresh installation every time. Can't Capistrano work with the existing folder that is already set up?
The basic premise of Capistrano is that a new installation is created each time, so that there is not much to be done in terms of initial setup. If you'd rather use a different mechanism, a different tool may serve you better. For such cases, you could write a script using SSHKit directly (fairly advanced), or use a makefile or some other tool to automate your process.
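As a rough sketch of the SSHKit route, the manual routine above could be scripted like this (host, user, and path are assumptions):
require "sshkit"
require "sshkit/dsl"
include SSHKit::DSL

# Mirror the manual workflow: pull into the existing clone,
# then run the package managers in place.
on "deploy@staging.example.com" do
  within "/var/www/myapp" do
    execute :git, "pull", "origin", "master"
    execute :composer, "install"
    execute :npm, "install"
    execute :bower, "install"
  end
end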
If you do want to make Capistrano work on its own terms, look into how linked_dirs and linked_files work. They let you keep some files (e.g. config files, log directories) outside of the deployment directory so they are shared between deploys.
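For illustration, in Capistrano 3 these are declared in config/deploy.rb; the paths below are only examples:
# config/deploy.rb
# Kept in shared/ and symlinked into every new release
append :linked_files, "config/database.yml", ".env"
append :linked_dirs, "storage", "vendor", "node_modules"
Anything listed here survives across deploys instead of being recreated from the repository each time.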
I'm trying to figure out the best way to ignore some folders and files in my Codeship deployment process. At the moment it compiles all my assets as part of the deployment, but I don't want it uploading node_modules to the server.
Is there a way to ignore the folder, or to remove it before deployment?
I tried deleting it after running Grunt, but that didn't work; I think it gets cached.
The method I used was to run rm -r YOUR_PATH/node_modules in the test pipeline after Grunt has run.
This seemed to have some issues with the FTP deploy, however; for the SSH-based deploys it works fine.
I'm new to Jenkins CI. I'm trying to run an SVN update of myFolder inside a job as a build step. I need to explicitly copy some files to the web root because I can't have them inside my solution.
The build steps I need to perform:
Build Solution
Publish
Copy myFolder to web root
Sync
Up to Publish it works fine. The problem comes when trying to copy/update myFolder to the web root.
myFolder is located outside the project solution folder, as I can't have it inside the solution folder.
Note: myFolder holds serialized items/objects that I need to sync in the next step; it must be copied to the web root in order to sync.
This folder is committed to SVN.
The following batch file works fine in my local CMD, but when I run it from a Jenkins "Execute Windows batch command" build step it stops at:
-- Updating source from SVN
-- Running update...
#echo off
cls
echo -- Initiating system instance variables...
echo. -- Setting the variables...
:: Here you need to make some changes to suit your system.
set SOURCE=C:\inetpub\wwwroot\Test\Website\App_Data\myFolder\
set SVN=C:\Program Files\TortoiseSVN\bin
:: Unless you want to modify the script, this is enough.
echo. %SOURCE%
echo. %SVN%
echo. ++ Done setting variables.
echo.
echo -- Updating source from SVN
echo. -- Running update...
"%SVN%\TortoiseProc.exe" /command:update /path:"%SOURCE%" /closeonend:1
echo. ++ Done.
echo. -- Cleaning up...
set SOURCE=
set SVN=
echo. ++ Done.
I have the Subversion plugin installed. Any solution to this problem?
I also tried the PowerShell script below:
#Get checkout folder
TortoiseProc.exe /command:"update" /path:"C:\inetpub\wwwroot\Test\Website\App_Data\myFolder\"
It works in my local Windows PowerShell, but not in Jenkins' Windows PowerShell build step.
In an effort to help answer your question, I will explain the configuration of a job which should accommodate what you are trying to achieve: building a project under version control after an svn update has been performed and moving the generated files to a separate directory.
Setup the Source Code Management section
Within this section of your job's configuration page, choose the appropriate version control system (i.e., Subversion) and point the job to your project's URL. Also be mindful to select the appropriate check-out strategy. This is what Jenkins will use when your job runs (i.e., svn update), as Jenkins stores a copy of your repository on the build server in the job's workspace.
Without going any further, this job will only pull down changes from your repository, using the check-out strategy configured above, when it runs.
However, you'd like the job to actually do something meaningful when it runs, such as build and publish your project. This is achieved through build steps, so let's configure those.
Configure the appropriate build step(s)
Build/Publish Website Locally
Assuming you already have a script that builds and publishes the website locally (let's call it !Publish Website.bat as an example), configure a build step underneath the Build section that invokes it.
Note: %WORKSPACE% is a built-in environment variable which resolves to the current workspace of the job. There is a link under the build-step to list all the different environment variables exposed which can be used.
With that in place, the job will now pull down any changes and execute the batch file to build/publish the website locally within the workspace when it runs.
We're not quite done, though: you want these newly generated files to end up in your website's webroot folder so the changes are reflected on the site. For simplicity's sake, we can add another build step to perform the copy.
Copy Contents to Webroot
Assuming you also have a script that copies the published files to the appropriate directory on your web server (let's call it !Copy Website.bat), configure another build step underneath the Build section that invokes it.
Now when the job runs, it will perform an svn update against the repository in its local workspace and execute the preceding build steps (i.e., build/publish the solution and copy the contents to your webroot).
I have an MVC4 + EF5.0 .NET 4.5 project (say, MyProject). I can run the project locally just fine, and when I FTP-deploy it to Azure Websites (not a cloud service) it runs fine too. However, if I do a Git deploy, the site "runs" for the most part until it hits some EF5.0 database operations, where I get the exception Unable to load the specified metadata resource.
Upon debugging I noticed that if I:
GIT deploy the entire MVC4 project (as before)
FTP in and then replace bin\MyProject.dll with the bin\MyProject.dll file that I just built locally (Windows 8 x64, VS2012, Oct'12 Azure tools) after the GIT push (i.e. same source)
then the Azure hosted website runs just fine (even the EF5.0 database functionality portion).
The locally built DLL is about 5 KB larger than the one Azure's Git deploy built, and both are "Release" mode. Evidently the project is built differently after the Git push (inside Azure) than on my own PC. I checked the portal and it's set to .NET 4.5. I'm also pushing the entire solution folder (with just one project), not just bits and pieces.
When I inspect the locally built and remotely built MyProject.dll files, I notice the following difference (FrameworkDisplayName):
local: System.Runtime.Versioning.TargetFrameworkAttribute(".NETFramework,Version=v4.5", FrameworkDisplayName = ".NET Framework 4.5"),
remote: System.Runtime.Versioning.TargetFrameworkAttribute(".NETFramework,Version=v4.5", FrameworkDisplayName = ""),
Anyone knows why this is happening and what the fix might be?
Yes, this is a bug that will be fixed in the next release. The good news is that it's possible to work around it today:
First, you need to use a custom deployment script, per this post.
Then you need to change the MSBuild command line in the custom script per this issue.
Credit goes to David above for the pointers and hints. I voted him up, but I'll also post the exact solution to the issue here. I've edited my original post because I found a major bug that I didn't notice until I started from scratch (moved Git servers). So here is the entire process that worked for me:
Download Node.js (it's needed even for .NET projects because the Git deploy tools use it)
Install the azure-cli tool (open regular command prompt => npm install azure-cli -g)
In the command prompt, cd to the root of your repository (cd \projects\MyRepoRoot)
In there, type azure site deploymentscript --aspWAP PathToMyProject\MyProject.csproj -s PathToMySolution.sln (obviously adjust the paths as needed)
This will create the .deployment and deploy.cmd files
Now edit the deploy.cmd file and find the line starting with %MSBUILD_PATH% (there will be just one)
Insert the /t:Build parameter. For example:
[Before] %MSBUILD_PATH% <blah blah> /verbosity:m /t:pipelinePreDeployCopyAllFilesToOneFolder
[After] %MSBUILD_PATH% <blah blah> /verbosity:m /t:Build /t:pipelinePreDeployCopyAllFilesToOneFolder
Push to Git (check the Git output to make sure everything went OK)
Browse to your website and confirm it works!
I'll be glad when it's fixed in the next revision so we won't have to maintain the build script.
In other Rails projects, I'd keep a local database.yml and commit only a database.sample file to the source repository. When deploying, a Capistrano script would symlink a shared copy of database.yml into each release.
When deploying to Heroku, Git is used, and Heroku seems to override database.yml altogether and do something internal.
That's all fine and good for database.yml, but what if I have S3 configuration in config/s3.yml? I'm putting my project on GitHub, so I don't want to commit s3.yml where everyone can see my credentials. I'd rather commit a sample s3.sample, which people can override with their own settings, and keep a local s3.yml uncommitted in my working directory.
What is the best way to handle this?
Heroku has some guidance on this:
http://devcenter.heroku.com/articles/config-vars
An alternative solution is to create a new local branch where you modify .gitignore so the secret file can be pushed to Heroku.
DON'T push this branch to your GitHub repo.
To push a non-master branch to Heroku, use:
git push heroku secret-branch:master
More info can be found on:
https://devcenter.heroku.com/articles/multiple-environments#advanced-linking-local-branches-to-remote-apps
Use heroku run bash and then ls to check whether your secret file has been pushed to Heroku.
Store the s3 credentials in environment variables.
$ cd myapp
$ heroku config:add S3_KEY=8N029N81 S3_SECRET=9s83109d3+583493190
Adding config vars:
S3_KEY => 8N029N81
S3_SECRET => 9s83109d3+583493190
Restarting app...done.
In your app:
AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV['S3_KEY'],
  :secret_access_key => ENV['S3_SECRET']
)
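A small variation: ENV.fetch makes a missing variable fail loudly at boot instead of silently passing nil:
AWS::S3::Base.establish_connection!(
  # ENV.fetch raises KeyError if the config var was never set
  :access_key_id     => ENV.fetch('S3_KEY'),
  :secret_access_key => ENV.fetch('S3_SECRET')
)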
See the Heroku config vars documentation, which explains development setup, etc.
If you're using the Rails 4.1 beta, try the heroku_secrets gem from https://github.com/alexpeattie/heroku_secrets:
gem 'heroku_secrets', github: 'alexpeattie/heroku_secrets'
This lets you store secret keys in Rails 4.1's config/secrets.yml (which is not checked in to source control) and then just run
rake heroku:secrets RAILS_ENV=production
to make its contents available to heroku (it parses your secrets.yml file and pushes everything in it to heroku as environment variables, per the heroku best practice docs).
You can also check out the Figaro gem.
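With Figaro, the keys live in a git-ignored config/application.yml and are mirrored into ENV, so the connection code above works unchanged. A sketch, with assumed key names:
# Gemfile
gem 'figaro'

# Figaro also exposes the values directly:
AWS::S3::Base.establish_connection!(
  :access_key_id     => Figaro.env.s3_key,
  :secret_access_key => Figaro.env.s3_secret
)
Figaro's README also documents a figaro heroku:set -e production command for pushing those values to Heroku as config vars.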
I solved this by building the credentials file from environment variables at build time and writing it where it needs to be before the slug is created.
Some use-case-specific info that you can probably translate to your situation:
I'm deploying a Node project, and in package.json's postinstall script I call "bash create-secret.sh". Since postinstall runs before the slug is created, the file gets included in the slug.
I had to use a bash script because I had trouble printing strings containing newlines correctly and couldn't get it done in Node. Probably just me not being skilled enough, but you may run into a similar problem.
Looking into this with Heroku and build/deploy-time secrets, it seems it's not something Heroku supports. This means that for a Rails app there is no way, other than committing something like BUNDLE_GITHUB__COM, to pull gems from a private repo.
I'll try to see if there is a way to have CI bundle the private dependencies before beaming the app to Heroku.