Capistrano 3: Relative symlink instead of absolute for current, linked_dirs and linked_files - deployment

I need relative symlinks instead of absolute symlinks when deploying.
I think the three tasks that need to be overridden are:
Rake::Task["deploy:symlink:linked_dirs"]
Rake::Task["deploy:symlink:linked_files"]
Rake::Task["deploy:symlink:release"] ​ ​
What I would like is a snippet I can drop into my deploy.rb so that all links created during deployment are relative.

You can just use this in your deploy.rb to override the default behaviour:
## Use relative paths instead of absolute ones
Rake::Task["deploy:symlink:linked_dirs"].clear
Rake::Task["deploy:symlink:linked_files"].clear
Rake::Task["deploy:symlink:release"].clear

namespace :deploy do
  namespace :symlink do
    desc 'Symlink release to current'
    task :release do
      on release_roles :all do
        tmp_current_path = release_path.parent.join(current_path.basename)
        execute :ln, '-s', release_path.relative_path_from(current_path.dirname), tmp_current_path
        execute :mv, tmp_current_path, current_path.parent
      end
    end

    desc 'Symlink files and directories from shared to release'
    task :shared do
      invoke 'deploy:symlink:linked_files'
      invoke 'deploy:symlink:linked_dirs'
    end

    desc 'Symlink linked directories'
    task :linked_dirs do
      next unless any? :linked_dirs
      on release_roles :all do
        execute :mkdir, '-p', linked_dir_parents(release_path)
        fetch(:linked_dirs).each do |dir|
          target = release_path.join(dir)
          source = shared_path.join(dir)
          unless test "[ -L #{target} ]"
            if test "[ -d #{target} ]"
              execute :rm, '-rf', target
            end
            execute :ln, '-s', source.relative_path_from(target.dirname), target
          end
        end
      end
    end

    desc 'Symlink linked files'
    task :linked_files do
      next unless any? :linked_files
      on release_roles :all do
        execute :mkdir, '-p', linked_file_dirs(release_path)
        fetch(:linked_files).each do |file|
          target = release_path.join(file)
          source = shared_path.join(file)
          unless test "[ -L #{target} ]"
            if test "[ -f #{target} ]"
              execute :rm, target
            end
            execute :ln, '-s', source.relative_path_from(target.dirname), target
          end
        end
      end
    end
  end
end
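After a deploy you can check on the server that the links really are relative. A quick sketch, assuming a hypothetical deploy_to of /var/www/myapp, a made-up release timestamp, and log in linked_dirs:

cd /var/www/myapp
readlink current                        # -> releases/20160101120000
readlink releases/20160101120000/log    # -> ../../shared/log

Since every target is relative, the whole deploy_to tree can be moved or mounted elsewhere without breaking the links.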

Related

"expected ")" at line 17 got "="" and "expected identifier when parsing expression got "}" " all at line 47

I have problems with the line with the property variable. It says there was a "(" at line 17, but when I checked there wasn't one. Then at the end of the variable I put a "}" to close the table, but it says it expected an identifier when parsing the expression (it's a statement).
Take a look at the entire script; there is no "(" at line 17:
local gui=script.Parent
local screengui=gui.Parent
local energy=100
local maxenergy=100
local shifton=0
local player=game.Players.LocalPlayer
local char=player.Character
local human=char:WaitForChild("Humanoid")
local userinput=game:GetService("UserInputService")

local function keystart(input,gameproccess)
    if not gameproccess then
        if input.UserInputType==Enum.UserInputType.Keyboard then
            local keycode=input.KeyCode
            if keycode==Enum.KeyCode.RightShift then
                shifton=1
            end
        end
    end
end

local function keyend(input,gameproccess)
    if not gameproccess then
        if input.UserInputType==Enum.UserInputType.Keyboard then
            local keycode=input.KeyCode
            if keycode==Enum.KeyCode.RightShift then
                shifton=0
            end
        end
    end
end

userinput.InputBegan:Connect(keystart)
userinput.InputEnded:Connect(keyend)

while true do
    if shifton==1 and energy>0 then
        human.WalkSpeed=16
        energy-=0.1
    else
        human.WalkSpeed=10
        shifton=0
        if energy<maxenergy-1 then
            energy+=0.1
        else
            energy=100
        end
    end
    local property={gui.Size.X=UDim2.new(guienergy,0,0.05,0).X}
    local part=script.Parent
    local TweenInf=TweenInfo.new(1,Enum.EasingStyle.Exponential,Enum.EasingDirection.InOut,-1,true,0)
    guienergy=((energy/maxenergy)*0.9)
    tween=game.TweenService:Create(part,TweenInf,property)
    tween:Play()
    wait(0)
end
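The parser is choking on the property line: a Lua table constructor cannot contain an assignment to an expression like gui.Size.X, so the "=" and the closing "}" both get reported as errors. For what it's worth, here is a hedged sketch of how that line is usually written (untested; TweenService tweens whole properties, so you would tween Size rather than Size.X, and guienergy must be computed before it is used):

-- Sketch of a likely fix; keys in the property table must be plain property names
local guienergy = (energy / maxenergy) * 0.9
local property = {Size = UDim2.new(guienergy, 0, 0.05, 0)}
local tween = game:GetService("TweenService"):Create(gui, TweenInf, property)
tween:Play()

It is also safer to fetch the service with game:GetService("TweenService") instead of game.TweenService.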

How to exit a REPL from the command line

I'm currently learning Lua and also learning how to work with CMD.
I know how to change the directory path and run my code and files from that path,
but what I don't know, and am here to ask, is how to get out of a programming language's REPL once you've started it in CMD.
For example, to jump into the Lua REPL you type:
lua53 (like python3 for Python)
After that the CMD session has become a Lua interpreter, and you can't access CMD commands such as
dir, cd, cls etc.; every time I need to access these commands I have to close the CMD window and open a new one.
Can you tell me whether I can access CMD commands while in the Lua REPL, or do I have to exit Lua first? And is there a command to exit a REPL?
I'd recommend EOF (Ctrl + D on Unix) rather than SIGINT (Ctrl + C), because some REPLs (node and python) choose to ignore the latter (perhaps because it's often used for copying text?); python will just print KeyboardInterrupt, whereas node will only exit if you press Ctrl + C twice (it prints a message telling you this the first time).
EOF (Ctrl + D), on the other hand, immediately exits every REPL I have used so far.
The Lua REPL stops immediately when it receives either EOF or SIGINT, so you can use both here.
Edit: As Sowban points out, EOF is apparently entered as Ctrl + Z followed by Enter in PowerShell.
You could press Ctrl + C to exit the running process, generally speaking.
I suggest writing a REPL yourself.
But be warned :-)
The main loop, with a prompt and the interpret-and-execute function or method, is mostly the easiest part.
99.99999% of the work is the error handling.
One of the earliest interpreted languages I used was (A)REXX.
A REPL without any error handling is done with...
/* REXX has to start with a comment */
do forever
    parse pull input
    interpret input
end
Now here is Lua, sandboxed to an _ENV with the io and os libraries and a little bit of error handling...
#!/usr/bin/env -S readline-editor /usr/local/bin/lua
-- ^-- SHEBANG for Linux
-- interpreter.lua
-- Sandboxed to io and os
-- Lua 5.4 >>-because-> goto label <const> load()
------------------------------------------------------------
local Lua = function(...)
    -- Info about OS
    io.write(("%s\n"):format(io.popen('uname -a'):read())):flush()
    -- Global Setting
    debug.setmetatable((1), {__index = math}) -- Add math library to numbers as methods (like string for strings)
    os.setlocale('de_DE.UTF8')
    os.setlocale('en_US.UTF8', 'time')
    io.write(os.date("[%c]\n" .. ("%s\n"):format(os.setlocale():gsub("%;", "\n")))):flush()
    -- io.write(("%s\n"):format(_VERSION)):flush()
    -- Label for goto
    ::lua::
    -- Local Setting
    local args <const> = args or {...}
    local Lua = Lua or true
    local cmd = cmd or ""
    ------------------------------------------------------------
    local env <const> = env or setmetatable({os = os, io = io}, -- The _ENV for load()
        {__call = function(self, ...)
             local self, t = ({...})[1] or self, "" -- First argument becomes self, if given
             if ({...})[1] == "help" then self = getmetatable(({...})[2]).__index end -- Show metamethod __index (table)
             for k, v in pairs(self) do
                 t = t .. ("%s => %s\n"):format(k, v)
             end
             return t
         end,
         __index = {cg = collectgarbage,
                    gt = getmetatable,
                    pairs = pairs,
                    tn = tonumber,
                    ts = tostring,
                    _V = ("%s \27[1;" .. (31):random(36) .. "m(sandboxed)\27[0m"):format(_VERSION)},
         __tostring = function(self) return self._V end}) -- end env
    ------------------------------------------------------------
    local prompt = prompt or setmetatable({}, {__tostring = function() return getmetatable(env).__index._V .. "> " end})
    local name <const> = name or _VERSION
    local result = result or true
    ------------------------------------------------------------
    while Lua do
        io.write(tostring(prompt))
        cmd = io.read() or 'quit'
        if cmd == "quit" then Lua, result = true, true break end
        Lua, result = pcall(load("return " .. cmd or false, name, "t", env))
        if Lua and result then
            io.write(("%s"):format(tostring(result)))
        else
            goto exception
        end
    end
    ------------------------------------------------------------
    -- Error handler
    ::exception::
    io.write(("%s\n"):format("Exception"))
    if not Lua or not result then
        io.output(io.stderr)
        io.write(("[%s][%s][%s]\n\27[1;31m>>-Exception->\27[0m %s\n"):format(os.date(), _VERSION, cmd, result)):flush()
        io.output(io.stdout)
        collectgarbage()
        goto lua
    end
    goto lua
end
------------------------------------------------------------
-- EXAMPLE ---------------------------------------------------
-- Lua = require("interpreter")
Lua()       -- UNCOMMENT-> For direct execution like: /bin/lua interpreter.lua
return Lua  -- UNCOMMENT-> For: Lua = require("interpreter")
Ctrl&C quits hardly
Ctrl&D is only an exception
...and os.exit(0) do an clean exit with returncode 0.

Is it possible to get inside-out ordering of provisioners in a Vagrant multi-machine setup?

Is it possible to reverse the order of provisioners from innermost to outermost when using a multi-machine setup? I want a small shell provisioner to create some facts in /etc/facter/facts.d/ before provisioning with puppet, to mimic our current setup as much as possible. (I have inherited a large puppet repo and am trying to create a Vagrant testbed for it before I start doing changes.)
The puppet settings are the same for every box, but they require the shell provisioner to run first. Here's an example Vagrantfile to show what I want to do (some names changed to protect the innocent):
$facts = <<FACTS
set -x
mkdir -p /etc/facter/facts.d
echo role=$1 > /etc/facter/facts.d/role.txt
echo location=$2 > /etc/facter/facts.d/location.txt
echo environment=$3 > /etc/facter/facts.d/environment.txt
FACTS

Vagrant.configure(2) do |config|
  config.vm.box = "centos-6.6"
  config.vm.synced_folder "hiera", "/etc/puppet/hiera"

  config.vm.provision :puppet do |puppet|
    puppet.manifest_file = "site.pp"
    puppet.module_path = ["modules", "internal"]
    puppet.hiera_config_path = "hiera.yaml"
    puppet.options = "--test"
  end

  config.vm.define :foo1 do |c|
    c.vm.hostname = "foo-1.vagrant"
    c.vm.provision :shell, inline: $facts, args: "foo testing stage"
  end

  config.vm.define :bar do |c|
    c.vm.hostname = "bar-1.vagrant"
    c.vm.provision :shell, inline: $facts, args: "bar testing stage"
  end

  # ... more machines omitted ...
end
Answering my own question as I found an acceptable workaround: I moved the puppet provisioning into the inner block. Here's what my current code looks like:
$facts = <<SET_FACTS
set -x
mkdir -p /etc/facter/facts.d
echo role=$1 > /etc/facter/facts.d/role.txt
echo location=$2 > /etc/facter/facts.d/location.txt
echo environment=$3 > /etc/facter/facts.d/environment.txt
SET_FACTS

module Vagrant
  module Config
    module V2
      class Root
        def provision(role, location, environment)
          vm.provision "set-facts",
            type: :shell,
            inline: $facts,
            args: [role, location, environment].map { |x| x.to_s }
          vm.provision :puppet do |puppet|
            puppet.manifest_file = "site.pp"
            puppet.module_path = ["modules", "internal"]
            puppet.hiera_config_path = "hiera.yaml"
          end
        end
      end
    end
  end
end

Vagrant.configure(2) do |config|
  config.vm.box = "centos-6.6"
  config.vm.synced_folder "hiera", "/etc/puppet/hiera"

  config.vm.define :foo1 do |c|
    c.vm.hostname = "foo-1.vagrant"
    c.provision(:foo, :testing, :stage)
  end

  config.vm.define :bar1 do |c|
    c.vm.hostname = "bar-1.vagrant"
    c.provision(:bar, :testing, :stage)
  end
end
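For what it's worth, reopening Vagrant's internal classes is not strictly required to get this inside-out ordering: a Vagrantfile is plain Ruby, so a top-level helper method can do the same job. A sketch under the same assumptions as above (the helper name is made up; manifest and module paths unchanged):

def provision_with_facts(vm, role, location, environment)
  # Shell provisioner first, so the facts exist before puppet runs
  vm.provision "set-facts",
    type: :shell,
    inline: $facts,
    args: [role, location, environment].map(&:to_s)
  vm.provision :puppet do |puppet|
    puppet.manifest_file = "site.pp"
    puppet.module_path = ["modules", "internal"]
    puppet.hiera_config_path = "hiera.yaml"
  end
end

Each machine block would then call provision_with_facts(c.vm, :foo, :testing, :stage) instead of c.provision(:foo, :testing, :stage).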

How to set conditional variables in capistrano's deploy.rb

Snippet from deploy.rb
task :prod1 do
  set :deploy_to, "/home/project/src/prod1"
end

task :prod2 do
  set :deploy_to, "/home/project/src/prod2"
end
I have 2 tasks like the above. Now instead of manually running either "cap prod1 deploy" or "cap prod2 deploy", I want to create a task "prod" which sets the required "deploy_to" based on the existence of a file on the server.
something like:
task :prod do
  if (A_FILE_IN_SERVER_EXISTS)
    set :deploy_to, "/home/project/src/prod2"
  else
    set :deploy_to, "/home/project/src/prod1"
  end
end
How do I do that?
You can do that like this:
task :set_deploy_to_location do
  if capture("[ -f /etc/passwd2 ] && echo '1' || echo '0'").strip == '1'
    set :deploy_to, "/home/project/src/prod2"
  else
    set :deploy_to, "/home/project/src/prod1"
  end
  logger.info "set deploy_to = #{deploy_to}"
end
This will do what you need. You can hook this task up using before and after hooks, like this:
before :deploy, :set_deploy_to_location
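Note that capture and logger are Capistrano 2 idioms. On Capistrano 3, a rough equivalent (a sketch; the :app role and the marker file path are assumptions) would be:

task :set_deploy_to_location do
  on roles(:app) do
    # test() returns true when the remote command exits 0
    if test("[ -f /etc/passwd2 ]")
      set :deploy_to, "/home/project/src/prod2"
    else
      set :deploy_to, "/home/project/src/prod1"
    end
  end
end

before "deploy", "set_deploy_to_location"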

Is it possible to copy all files from one S3 bucket to another with s3cmd?

I'm pretty happy with s3cmd, but there is one issue: how do I copy all files from one S3 bucket to another? Is it even possible?
EDIT: I've found a way to copy files between buckets using Python with boto:
from boto.s3.connection import S3Connection
import time

# awsAccessKey and awsSecretKey are assumed to be defined elsewhere
def copyBucket(srcBucketName, dstBucketName, maxKeys = 100):
    conn = S3Connection(awsAccessKey, awsSecretKey)
    srcBucket = conn.get_bucket(srcBucketName)
    dstBucket = conn.get_bucket(dstBucketName)
    resultMarker = ''
    while True:
        keys = srcBucket.get_all_keys(max_keys = maxKeys, marker = resultMarker)
        for k in keys:
            print 'Copying ' + k.key + ' from ' + srcBucketName + ' to ' + dstBucketName
            t0 = time.clock()
            dstBucket.copy_key(k.key, srcBucketName, k.key)
            print time.clock() - t0, ' seconds'
        if len(keys) < maxKeys:
            print 'Done'
            break
        resultMarker = keys[maxKeys - 1].key
Syncing is almost as straightforward as copying: ETag, size, and last-modified fields are available for each key.
Maybe this helps others as well.
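A hypothetical invocation of the function above (bucket names made up):

copyBucket('my-source-bucket', 'my-backup-bucket')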
s3cmd sync s3://from/this/bucket/ s3://to/this/bucket/
For available options, please use:
s3cmd --help
AWS CLI seems to do the job perfectly, and has the bonus of being an officially supported tool.
aws s3 sync s3://mybucket s3://backup-mybucket
http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
The answer with the most upvotes as I write this is this one:
s3cmd sync s3://from/this/bucket s3://to/this/bucket
It's a useful answer. But sometimes sync is not what you need (it deletes files, etc.). It took me a long time to figure out this non-scripting alternative to simply copy multiple files between buckets. (OK, in the case shown below it's not between buckets. It's between not-really-folders, but it works between buckets equally well.)
# Slightly verbose, slightly unintuitive, very useful:
s3cmd cp --recursive --exclude='*' --include='file_prefix*' s3://semarchy-inc/source1/ s3://semarchy-inc/target/
Explanation of the above command:
--recursive: In my mind, my requirement is not recursive. I simply want multiple files. But recursive in this context just tells s3cmd cp to handle multiple files. Great.
--exclude: It's an odd way to think of the problem. Begin by recursively selecting all files. Next, exclude all files. Wait, what?
--include: Now we're talking. Indicate the file prefix (or suffix, or whatever pattern) that you want to include.
s3://sourceBucket/ s3://targetBucket/: This part is intuitive enough. Though technically it seems to violate the documented example from s3cmd help, which indicates that a source object must be specified:
s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
You can also use the web interface to do so:
Go to the source bucket in the web interface.
Mark the files you want to copy (use shift and mouse clicks to mark several).
Press Actions->Copy.
Go to the destination bucket.
Press Actions->Paste.
That's it.
I needed to copy a very large bucket, so I adapted the code in the question into a multi-threaded version and put it up on GitHub.
https://github.com/paultuckey/s3-bucket-to-bucket-copy-py
It's actually possible. This worked for me:
import boto.s3.connection
import boto.s3.bucket

AWS_ACCESS_KEY = 'Your access key'
AWS_SECRET_KEY = 'Your secret key'

# SRC_BUCKET_NAME and DEST_BUCKET_NAME are assumed to be defined
conn = boto.s3.connection.S3Connection(AWS_ACCESS_KEY, AWS_SECRET_KEY)
bucket = boto.s3.bucket.Bucket(conn, SRC_BUCKET_NAME)

for item in bucket:
    # Note: here you can also put a path inside DEST_BUCKET_NAME,
    # if you want your item to be stored inside a folder, like this:
    # bucket.copy(DEST_BUCKET_NAME, '%s/%s' % (folder_name, item.key))
    bucket.copy(DEST_BUCKET_NAME, item.key)
Thanks - I use a slightly modified version, where I only copy files that don't exist or are a different size, and check on the destination if the key exists in the source. I found this a bit quicker for readying the test environment:
def botoSyncPath(path):
    """
    Sync keys in specified path from source bucket to target bucket.
    """
    try:
        conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
        srcBucket = conn.get_bucket(AWS_SRC_BUCKET)
        destBucket = conn.get_bucket(AWS_DEST_BUCKET)
        for key in srcBucket.list(path):
            destKey = destBucket.get_key(key.name)
            if not destKey or destKey.size != key.size:
                key.copy(AWS_DEST_BUCKET, key.name)
        for key in destBucket.list(path):
            srcKey = srcBucket.get_key(key.name)
            if not srcKey:
                key.delete()
    except:
        return False
    return True
I wrote a script that backs up an S3 bucket: https://github.com/roseperrone/aws-backup-rake-task
#!/usr/bin/env python
from boto.s3.connection import S3Connection
import re
import datetime
import sys
import time

def main():
    s3_ID = sys.argv[1]
    s3_key = sys.argv[2]
    src_bucket_name = sys.argv[3]
    num_backup_buckets = sys.argv[4]
    connection = S3Connection(s3_ID, s3_key)
    delete_oldest_backup_buckets(connection, num_backup_buckets)
    backup(connection, src_bucket_name)

def delete_oldest_backup_buckets(connection, num_backup_buckets):
    """Deletes the oldest backup buckets such that only the newest NUM_BACKUP_BUCKETS - 1 buckets remain."""
    buckets = connection.get_all_buckets()  # returns a list of bucket objects
    backup_bucket_names = []
    for bucket in buckets:
        if re.search('backup-' + r'\d{4}-\d{2}-\d{2}', bucket.name):
            backup_bucket_names.append(bucket.name)
    backup_bucket_names.sort(key=lambda x: datetime.datetime.strptime(x[len('backup-'):17], '%Y-%m-%d').date())
    # The buckets are sorted earliest to latest, so we want to keep the last NUM_BACKUP_BUCKETS - 1
    delete = len(backup_bucket_names) - (int(num_backup_buckets) - 1)
    if delete <= 0:
        return
    for i in range(0, delete):
        print 'Deleting the backup bucket, ' + backup_bucket_names[i]
        connection.delete_bucket(backup_bucket_names[i])

def backup(connection, src_bucket_name):
    now = datetime.datetime.now()
    # the year, month, and day must be zero-filled
    new_backup_bucket_name = 'backup-' + ('%04d' % now.year) + '-' + ('%02d' % now.month) + '-' + ('%02d' % now.day)
    print "Creating new bucket " + new_backup_bucket_name
    connection.create_bucket(new_backup_bucket_name)
    copy_bucket(src_bucket_name, new_backup_bucket_name, connection)

def copy_bucket(src_bucket_name, dst_bucket_name, connection, maximum_keys = 100):
    src_bucket = connection.get_bucket(src_bucket_name)
    dst_bucket = connection.get_bucket(dst_bucket_name)
    result_marker = ''
    while True:
        keys = src_bucket.get_all_keys(max_keys = maximum_keys, marker = result_marker)
        for k in keys:
            print 'Copying ' + k.key + ' from ' + src_bucket_name + ' to ' + dst_bucket_name
            t0 = time.clock()
            dst_bucket.copy_key(k.key, src_bucket_name, k.key)
            print time.clock() - t0, ' seconds'
        if len(keys) < maximum_keys:
            print 'Done backing up.'
            break
        result_marker = keys[maximum_keys - 1].key

if __name__ == '__main__':
    main()
I use this in a rake task (for a Rails app):
desc "Back up a file onto S3"
task :backup do
S3ID = "*****"
S3KEY = "*****"
SRCBUCKET = "primary-mzgd"
NUM_BACKUP_BUCKETS = 2
Dir.chdir("#{Rails.root}/lib/tasks")
system "./do_backup.py #{S3ID} #{S3KEY} #{SRCBUCKET} #{NUM_BACKUP_BUCKETS}"
end
mdahlman's code didn't work for me, but this command copies all the files in bucket1 to a new folder (the command also creates this new folder) in bucket2:
s3cmd cp --recursive --include='file_prefix*' s3://bucket1/ s3://bucket2/new_folder_name/
s3cmd won't cp with only prefixes or wildcards but you can script the behavior with 's3cmd ls sourceBucket', and awk to extract the object name. Then use 's3cmd cp sourceBucket/name destBucket' to copy each object name in the list.
I use these batch files in a DOS box on Windows:
s3list.bat
s3cmd ls %1 | gawk "/s3/{ print \"\\"\"\"substr($0,index($0,\"s3://\"))\"\\"\"\"; }"
s3copy.bat
@for /F "delims=" %%s in ('s3list %1') do @s3cmd cp %%s %2
You can also use s3funnel which uses multi-threading:
https://github.com/neelakanta/s3funnel
Example (access key and secret key parameters omitted):
s3funnel source-bucket-name list | s3funnel dest-bucket-name copy --source-bucket source-bucket-name --threads=10