Including a variable in a file directory in MATLAB?

I want to access several sequentially numbered folders.
Example:
[ndata, text, alldata] = xlsread('D:\folder\1\file');
[ndata, text, alldata] = xlsread('D:\folder\2\file');
[ndata, text, alldata] = xlsread('D:\folder\3\file');
[ndata, text, alldata] = xlsread('D:\folder\4\file');
Could I replace 1, 2, 3 and 4 with a variable i? How should the directory be written in that case?
Any recommendation is appreciated!

The fullfile command is meant for this purpose:
xlsread(fullfile('D:','folder', sprintf('%d',i) , 'file'));
The fullfile function takes care of the OS-specific file separator and ensures that exactly one separator is used between folders (i.e. fullfile('a','b') equals fullfile('a/','/b')).

[ndata, text, alldata] = xlsread(['D:/folder/' num2str(i) '/file']);

Just use forward slashes, that works everywhere.
Don't make things harder than they should be.

Yes, you can. Use the sprintf() command.
i=1;
[ndata, text, alldata] = xlsread(sprintf('D:\\folder\\%i\\file',i))
To make sure this is working right, change the sprintf to an fprintf and make sure the file exists:
>> i=1;
>> fprintf('D:\\folder\\%i\\file',i)
D:\folder\1\file
>> ls D:\folder\1\file

Related

Generate many files with wildcard, then merge into one

I have two rules in my Snakefile: one generates several sets of files using wildcards, and the other merges everything into a single file. This is how I wrote it:
chr = range(1,23)

rule generate:
    input:
        og_files = config["tmp"] + '/chr{chr}.bgen',
    output:
        out = multiext(config["tmp"] + '/plink/chr{{chr}}',
                       '.bed', '.bim', '.fam')
    shell:
        """
        plink \
            --bgen {input.og_files} \
            --make-bed \
            --oxford-single-chr \
            --out {config[tmp]}/plink/chr{chr}
        """
rule merge:
    input:
        plink_chr = expand(config["tmp"] + '/plink/chr{chr}.{ext}',
                           chr = chr,
                           ext = ['bed', 'bim', 'fam'])
    output:
        out = multiext(config["tmp"] + '/all',
                       '.bed', '.bim', '.fam')
    shell:
        """
        plink \
            --pmerge-list-dir {config[tmp]}/plink \
            --make-bed \
            --out {config[tmp]}/all
        """
Unfortunately, this does not let me track the files coming from the first rule into the second rule:
$ snakemake -s myfile.smk -c1 -np
Building DAG of jobs...
MissingInputException in line 17 of myfile.smk:
Missing input files for rule merge:
[list of all the files made by expand()]
How can I generate the 22 sets of files with the chr wildcard in generate and still track them in the input of merge? Thank you in advance for your help.
In rule generate I think you don't want to escape the {chr} wildcard, otherwise it doesn't get replaced. I.e.:
out = multiext(config["tmp"] + '/plink/chr{{chr}}',
               '.bed', '.bim', '.fam')
should be:
out = multiext(config["tmp"] + '/plink/chr{chr}',
               '.bed', '.bim', '.fam')
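For what it's worth, here is a rough plain-Python illustration of the concrete paths expand() builds for merge's input (the '/tmp' prefix only stands in for config["tmp"]):
# Plain-Python sketch of the file list expand() produces for rule merge.
# '/tmp' is only a stand-in for config["tmp"].
chromosomes = range(1, 23)
extensions = ['bed', 'bim', 'fam']
plink_chr = ['/tmp/plink/chr%d.%s' % (c, e)
             for c in chromosomes
             for e in extensions]
print(plink_chr[:4])
# ['/tmp/plink/chr1.bed', '/tmp/plink/chr1.bim', '/tmp/plink/chr1.fam', '/tmp/plink/chr2.bed']
Snakemake can only schedule a generate job for each of these concrete paths if generate's output pattern contains the single-braced {chr} wildcard, which is why the doubled braces break the DAG.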

How to run a program with variable command-line arguments?

I have the following script:
For $alpha = 1 to 10
    For $beta = 1 to 10
        Run('"C:\Users\MyProg.exe ' & alpha/10 & ' ' & beta/10 & ' 200 2 0.5'
        ;some other actions follow
    Next
Next
I have checked many times that the string is well-formed, so I have no idea why the script won't run the program. Could you please help me?
Just replace the closing ' with "') and use proper variable names including the $, like $alpha instead of just alpha. The syntax check in SciTE should have told you.
For $alpha = 1 to 10
    For $beta = 1 to 10
        Run('"C:\Users\MyProg.exe ' & $alpha/10 & ' ' & $beta/10 & ' 200 2 0.5"')
        ;some other actions follow
    Next
Next

How to merge two video parts and get a playable video file using Python?

Here I actually want to merge the two strings out1 and out2 (which contain the first and second 30-second chunks of video data) and write them to a file, so that I get a one-minute-long playable video file. But what I get is only the first 30-second video. How should I edit this code to achieve that? Please help me. Thanks a lot in advance.
import subprocess,os
ffmpeg_command1 = ["ffmpeg", "-i", "PATH/connect.webm", "-vcodec", "copy", "-ss", "00:00:00", "-t", "00:00:30","-f", "webm", "pipe:1"]
p1 = subprocess.Popen(ffmpeg_command1,stdout=subprocess.PIPE)
out1, err = p1.communicate()
ffmpeg_command2 = ["ffmpeg", "-i", "PATH/connect.webm","-vcodec", "copy", "-ss", "00:00:31", "-t", "00:00:30","-f", "webm", "pipe:1"]
p2 = subprocess.Popen(ffmpeg_command2,stdout=subprocess.PIPE)
out2, err1 = p2.communicate()
string = out1 + out2
fname = "PATH/final.webm"
fp = open(fname,'wb')
fp.write(string)
fp.close()
Please help me, I'm stuck.
If you want to concatenate two videos with ffmpeg, it works like this:
ffmpeg -vcodec copy -isync -i \
"concat:file1.mp4|file2.mp4|...|fileN.mp4" \
outputfile.mp4
#coding=utf-8
# Build a list of the mp4 files in the folder, convert them to .ts,
# concatenate the .ts segments, and clean up afterwards.
import os

s = os.sep
path = r"F:\folder_mp4_files\temp"

def create_file_list(path):
    return_list = []
    for dirpath, dirnames, filenames in os.walk(path):
        for file_name in filenames:
            if file_name.endswith(".mp4"):
                return_list.append(path + s + file_name)
    return return_list

alist = create_file_list(path)
tsString = '|'.join([i.replace('.mp4', '.ts') for i in alist])
print(tsString)

# Convert each mp4 to ts
for i in alist:
    noExtension = i.replace('.mp4', '')
    # batch processing
    os.system("ffmpeg -i %s -vcodec copy -acodec copy -vbsf h264_mp4toannexb %s.ts" % (i, noExtension))

# Remove used mp4 files
for i in alist:
    os.remove(i)

# Concatenate the ts segments into a single file (reusing the first file's name)
os.system("""ffmpeg -i concat:"{0}" -acodec copy -vcodec copy -absf aac_adtstoasc {1}""".format(tsString, alist[0]))

# Remove used ts files
for i in alist:
    os.remove(i.replace('.mp4', '.ts'))
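The "concat:" protocol used above works at the byte level, which is why the script first converts everything to .ts. For WebM, another option is ffmpeg's concat demuxer, which can stream-copy the parts directly once they exist as files. Here is a rough sketch (part1.webm and part2.webm are placeholder names, and it assumes the two segments were written to disk first, since simply concatenating the byte strings, as in the question, will not produce a playable file):
# Sketch using ffmpeg's concat demuxer to join two WebM parts by stream copy.
# part1.webm / part2.webm are placeholder names for segments that already exist on disk.
import os
import subprocess
import tempfile

parts = ['part1.webm', 'part2.webm']

# The concat demuxer reads a small text file listing the inputs, one per line.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as listing:
    for p in parts:
        listing.write("file '%s'\n" % os.path.abspath(p))
    list_path = listing.name

subprocess.check_call([
    'ffmpeg',
    '-f', 'concat',   # use the concat demuxer
    '-safe', '0',     # allow absolute paths in the list file
    '-i', list_path,
    '-c', 'copy',     # remux without re-encoding
    'final.webm',
])
os.unlink(list_path)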

Write a string as-is to a file in MATLAB

In a MATLAB script, I'm generating a LaTeX table. Part of that table, for example, looks like this:
\multirow{2}{*}{\textbf{b1}}
&
2 & 3 & 10092 & 10763 & 103390 & 2797 & 2929 & 3008 & 5\% & 8\% \\
& 4 & 2 & 20184 & 10763 & 74508 & 1830 & 1970 & 2029 & 8\% & 11\% \\
This string is saved in the variable str. Now when I try to write str to a file using the following code:
f = fopen( 'report\results.tex', 'w' );
fprintf( f, str );
fclose(f);
I get the following warning.
Warning: Invalid escape sequence appears in format string.
See help sprintf for valid escape sequences.
That is probably due to the many backslash characters in my string, which are treated as escape sequences. How can I print this string to a file as-is?
Escape the backslashes and percent signs:
str = strrep(str,'\','\\');
str = strrep(str,'%','%%');
If it's just text you're printing, this will be fine.
Minimal working example:
str = '2 & 3 & 10092 & 10763 & 103390 & 2797 & 2929 & 3008 & 5\% & 8\% \\'
str = strrep(str,'\','\\');
str = strrep(str,'%','%%');
f=fopen('testing123.txt','w');
fprintf(f,str);
fclose(f);
and the file reads:
2 & 3 & 10092 & 10763 & 103390 & 2797 & 2929 & 3008 & 5\% & 8\% \\
Or, as Ben A. suggests, use fwrite:
fwrite(f,str)
and I think
fprintf(f,'%s',str)
will also do the trick, and it's best to also include a newline:
fprintf(f,'%s\n',str)

Is it possible to copy all files from one S3 bucket to another with s3cmd?

I'm pretty happy with s3cmd, but there is one issue: How to copy all files from one S3 bucket to another? Is it even possible?
EDIT: I've found a way to copy files between buckets using Python with boto:
from boto.s3.connection import S3Connection
import time

def copyBucket(srcBucketName, dstBucketName, maxKeys = 100):
    conn = S3Connection(awsAccessKey, awsSecretKey)
    srcBucket = conn.get_bucket(srcBucketName)
    dstBucket = conn.get_bucket(dstBucketName)
    resultMarker = ''
    while True:
        keys = srcBucket.get_all_keys(max_keys = maxKeys, marker = resultMarker)
        for k in keys:
            print 'Copying ' + k.key + ' from ' + srcBucketName + ' to ' + dstBucketName
            t0 = time.clock()
            dstBucket.copy_key(k.key, srcBucketName, k.key)
            print time.clock() - t0, ' seconds'
        if len(keys) < maxKeys:
            print 'Done'
            break
        resultMarker = keys[maxKeys - 1].key
Syncing is almost as straightforward as copying; the ETag, size, and last-modified fields are available on the keys.
Maybe this helps others as well.
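For the record, here is a rough sketch of that syncing idea on top of the copy loop above (untested; it simply skips keys whose size and ETag already match on the destination, using the same placeholder credentials):
def syncBucket(srcBucketName, dstBucketName):
    # Sketch only: copy keys that are missing or whose size/ETag differ on the destination.
    conn = S3Connection(awsAccessKey, awsSecretKey)
    srcBucket = conn.get_bucket(srcBucketName)
    dstBucket = conn.get_bucket(dstBucketName)
    for srcKey in srcBucket.list():
        dstKey = dstBucket.get_key(srcKey.name)
        if dstKey and dstKey.size == srcKey.size and dstKey.etag == srcKey.etag:
            continue  # already up to date
        print 'Copying ' + srcKey.name
        dstBucket.copy_key(srcKey.name, srcBucketName, srcKey.name)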
s3cmd sync s3://from/this/bucket/ s3://to/this/bucket/
For available options, please use:
$s3cmd --help
AWS CLI seems to do the job perfectly, and has the bonus of being an officially supported tool.
aws s3 sync s3://mybucket s3://backup-mybucket
http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
The answer with the most upvotes as I write this is this one:
s3cmd sync s3://from/this/bucket s3://to/this/bucket
It's a useful answer. But sometimes sync is not what you need (it deletes files, etc.). It took me a long time to figure out this non-scripting alternative to simply copy multiple files between buckets. (OK, in the case shown below it's not between buckets. It's between not-really-folders, but it works between buckets equally well.)
# Slightly verbose, slightly unintuitive, very useful:
s3cmd cp --recursive --exclude=* --include=file_prefix* s3://semarchy-inc/source1/ s3://semarchy-inc/target/
Explanation of the above command:
--recursive: In my mind, my requirement is not recursive. I simply want multiple files. But recursive in this context just tells s3cmd cp to handle multiple files. Great.
--exclude: It's an odd way to think of the problem. Begin by recursively selecting all files. Next, exclude all files. Wait, what?
--include: Now we're talking. Indicate the file prefix (or suffix, or whatever pattern) that you want to include.
s3://sourceBucket/ s3://targetBucket/: This part is intuitive enough. Though technically it seems to violate the documented example from s3cmd help, which indicates that a source object must be specified:
s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
You can also use the web interface to do so:
Go to the source bucket in the web interface.
Mark the files you want to copy (use shift and mouse clicks to mark several).
Press Actions->Copy.
Go to the destination bucket.
Press Actions->Paste.
That's it.
I needed to copy a very large bucket, so I adapted the code in the question into a multi-threaded version and put it up on GitHub.
https://github.com/paultuckey/s3-bucket-to-bucket-copy-py
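The repository has the full code; the general shape of the idea is a queue of key names drained by several worker threads, each with its own connection. A minimal sketch (not the repository's code; the credentials and bucket names below are placeholders):
# Minimal sketch of a threaded bucket-to-bucket copy.
# AWS_KEY, AWS_SECRET, SRC_BUCKET and DST_BUCKET are placeholders.
import Queue
import threading
from boto.s3.connection import S3Connection

AWS_KEY = 'your-access-key'
AWS_SECRET = 'your-secret-key'
SRC_BUCKET = 'source-bucket'
DST_BUCKET = 'destination-bucket'

work = Queue.Queue()

def worker():
    # Each thread uses its own connection and bucket handle.
    conn = S3Connection(AWS_KEY, AWS_SECRET)
    dst = conn.get_bucket(DST_BUCKET)
    while True:
        name = work.get()
        dst.copy_key(name, SRC_BUCKET, name)
        work.task_done()

for _ in range(10):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

# Enqueue every key in the source bucket, then wait for the copies to finish.
conn = S3Connection(AWS_KEY, AWS_SECRET)
for key in conn.get_bucket(SRC_BUCKET).list():
    work.put(key.name)
work.join()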
It's actually possible. This worked for me:
import boto.s3.connection
import boto.s3.bucket

AWS_ACCESS_KEY = 'Your access key'
AWS_SECRET_KEY = 'Your secret key'

conn = boto.s3.connection.S3Connection(AWS_ACCESS_KEY, AWS_SECRET_KEY)
bucket = boto.s3.bucket.Bucket(conn, SRC_BUCKET_NAME)

for item in bucket:
    # Note: here you can also put a path inside DEST_BUCKET_NAME,
    # if you want your item to be stored inside a folder, like this:
    # item.copy(DEST_BUCKET_NAME, '%s/%s' % (folder_name, item.key))
    item.copy(DEST_BUCKET_NAME, item.key)
Thanks - I use a slightly modified version, where I only copy files that don't exist or are a different size, and check on the destination if the key exists in the source. I found this a bit quicker for readying the test environment:
def botoSyncPath(path):
    """
    Sync keys in specified path from source bucket to target bucket.
    """
    try:
        conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
        srcBucket = conn.get_bucket(AWS_SRC_BUCKET)
        destBucket = conn.get_bucket(AWS_DEST_BUCKET)
        for key in srcBucket.list(path):
            destKey = destBucket.get_key(key.name)
            if not destKey or destKey.size != key.size:
                key.copy(AWS_DEST_BUCKET, key.name)
        for key in destBucket.list(path):
            srcKey = srcBucket.get_key(key.name)
            if not srcKey:
                key.delete()
    except:
        return False
    return True
I wrote a script that backs up an S3 bucket: https://github.com/roseperrone/aws-backup-rake-task
#!/usr/bin/env python
from boto.s3.connection import S3Connection
import re
import datetime
import sys
import time

def main():
    s3_ID = sys.argv[1]
    s3_key = sys.argv[2]
    src_bucket_name = sys.argv[3]
    num_backup_buckets = sys.argv[4]
    connection = S3Connection(s3_ID, s3_key)
    delete_oldest_backup_buckets(connection, num_backup_buckets)
    backup(connection, src_bucket_name)

def delete_oldest_backup_buckets(connection, num_backup_buckets):
    """Deletes the oldest backup buckets such that only the newest NUM_BACKUP_BUCKETS - 1 buckets remain."""
    buckets = connection.get_all_buckets()  # returns a list of bucket objects
    backup_bucket_names = []
    for bucket in buckets:
        if re.search('backup-' + r'\d{4}-\d{2}-\d{2}', bucket.name):
            backup_bucket_names.append(bucket.name)
    backup_bucket_names.sort(key=lambda x: datetime.datetime.strptime(x[len('backup-'):17], '%Y-%m-%d').date())
    # The bucket names sort oldest to newest, so delete from the front
    # to keep only the newest NUM_BACKUP_BUCKETS - 1
    delete = len(backup_bucket_names) - (int(num_backup_buckets) - 1)
    if delete <= 0:
        return
    for i in range(0, delete):
        print 'Deleting the backup bucket, ' + backup_bucket_names[i]
        connection.delete_bucket(backup_bucket_names[i])

def backup(connection, src_bucket_name):
    now = datetime.datetime.now()
    # the month and day must be zero-filled
    new_backup_bucket_name = 'backup-' + str('%02d' % now.year) + '-' + str('%02d' % now.month) + '-' + str('%02d' % now.day)
    print "Creating new bucket " + new_backup_bucket_name
    new_backup_bucket = connection.create_bucket(new_backup_bucket_name)
    copy_bucket(src_bucket_name, new_backup_bucket_name, connection)

def copy_bucket(src_bucket_name, dst_bucket_name, connection, maximum_keys = 100):
    src_bucket = connection.get_bucket(src_bucket_name)
    dst_bucket = connection.get_bucket(dst_bucket_name)
    result_marker = ''
    while True:
        keys = src_bucket.get_all_keys(max_keys = maximum_keys, marker = result_marker)
        for k in keys:
            print 'Copying ' + k.key + ' from ' + src_bucket_name + ' to ' + dst_bucket_name
            t0 = time.clock()
            dst_bucket.copy_key(k.key, src_bucket_name, k.key)
            print time.clock() - t0, ' seconds'
        if len(keys) < maximum_keys:
            print 'Done backing up.'
            break
        result_marker = keys[maximum_keys - 1].key

if __name__ == '__main__':
    main()
I use this in a rake task (for a Rails app):
desc "Back up a file onto S3"
task :backup do
S3ID = "*****"
S3KEY = "*****"
SRCBUCKET = "primary-mzgd"
NUM_BACKUP_BUCKETS = 2
Dir.chdir("#{Rails.root}/lib/tasks")
system "./do_backup.py #{S3ID} #{S3KEY} #{SRCBUCKET} #{NUM_BACKUP_BUCKETS}"
end
mdahlman's code didn't work for me, but this command copies all the files in bucket1 to a new folder (which the command also creates) in bucket2.
s3cmd cp --recursive --include=file_prefix* s3://bucket1/ s3://bucket2/new_folder_name/
s3cmd won't cp with only prefixes or wildcards, but you can script the behavior with 's3cmd ls sourceBucket' and awk to extract the object name, then use 's3cmd cp sourceBucket/name destBucket' to copy each object in the list.
I use these batch files in a DOS box on Windows:
s3list.bat
s3cmd ls %1 | gawk "/s3/{ print \"\\"\"\"substr($0,index($0,\"s3://\"))\"\\"\"\"; }"
s3copy.bat
#for /F "delims=" %%s in ('s3list %1') do #s3cmd cp %%s %2
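The same list-then-copy idea can be scripted in Python as well; here is a rough sketch that shells out to s3cmd (the bucket names and prefix are placeholders):
# Sketch: list the source bucket with s3cmd, keep objects matching a prefix,
# and copy each one individually. Bucket names and prefix are placeholders.
import subprocess

src = 's3://sourceBucket/'
dst = 's3://destBucket/'
prefix = 'file_prefix'

listing = subprocess.check_output(['s3cmd', 'ls', src])
for line in listing.splitlines():
    # "s3cmd ls" prints one object per line with the s3:// URL in the last column.
    parts = line.split()
    if not parts:
        continue
    url = parts[-1]
    name = url.rsplit('/', 1)[-1]
    if url.startswith('s3://') and name.startswith(prefix):
        subprocess.check_call(['s3cmd', 'cp', url, dst + name])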
You can also use s3funnel which uses multi-threading:
https://github.com/neelakanta/s3funnel
example (without the access key or secret key parameters shown):
s3funnel source-bucket-name list | s3funnel dest-bucket-name copy --source-bucket source-bucket-name --threads=10