Scala Process for Linux is stuck

I'm trying to use Scala Process in order to concatenate two files and send the result to a new file.
The code works fine, but when I remove the permissions on the folder, it seems to get stuck.
Here is the code:
val copyCommand = Seq("bash", "-c", "cat \"" + headerPath + "\" \"" + FilePath + "\"")
Process(copyCommand).#>>(new File(FileWithHeader)).!

Maybe something like this can help (without invoking bash)?
import sys.process._
(Seq("cat", "file-1.txt", "file-2.txt") #>> new java.io.File("files-1n2.txt")).!

I performed the concatenation in the same command, redirecting inside the shell instead of appending to a new File, and it works fine:
val copyCommand = Seq("bash", "-c", "cat \"" + headerPath + "\" \"" + FilePath + "\" > \"" + FileWithHeader + "\"")
Process(copyCommand).!

Subprocess no output in Windows

I need to run an .exe file with different parameters and write the output to a file. I tried the following script, but after the program runs, the output file is empty. How do I solve this problem?
import subprocess

output_f = open('output.txt', 'a')
for i in "abcdefghijklmnopqrstuvwxyz{}_":
    for j in "abcdefghijklmnopqrstuvwxyz{}_":
        program = 'C:/Users/PC/Desktop/task2 (1).exe ' + i + " " + j
        process = subprocess.Popen(program, stdout=output_f)
        code = process.wait()
output_f.close()
print(code)
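Hard to say without the exe, but two things are worth ruling out: the unquoted space in the path ('task2 (1).exe'), and the program writing to stderr rather than stdout. Purely as a hedged sketch (assuming Python 3.5+ for subprocess.run; the path is copied from the question), passing an explicit argument list sidesteps the quoting problem:

import subprocess

# Sketch: quote-proof argument list plus stderr capture, in case the
# program writes its output to stderr rather than stdout.
program = r'C:/Users/PC/Desktop/task2 (1).exe'

with open('output.txt', 'a') as output_f:
    for i in "abcdefghijklmnopqrstuvwxyz{}_":
        for j in "abcdefghijklmnopqrstuvwxyz{}_":
            result = subprocess.run([program, i, j],
                                    stdout=output_f,
                                    stderr=subprocess.STDOUT)
            print(result.returncode)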

ImageJ macro for deinterleaving and merging colors

I'm trying to write a macro in Fiji that would deinterleave my original tif file, and then merge the two channels.
name=getTitle();
subname = substring(name, 0,14);
selectWindow(name);
dir = getDirectory("image");
fullname2 = name + " #2";
fullname1 = name + " #1";
run("Deinterleave", "how=2 keep");
selectWindow(name + " #2");
run("8-bit");
selectWindow(name + " #1");
run("8-bit");
run("Merge Channels...", "c1=["fullname2"] c2=["fullname1"] create");
saveAs("Tiff", dir + subname + "_composite.tif");
But there seems to be an error in line 12, the Merge Channels call, and I don't get why.
I also tried writing that line like this:
run("Merge Channels...", "c1=[fullname2] c2=[fullname1] create");
But it also doesn't work.
Any ideas on what I'm doing wrong?
Thanks a lot!!
Ok! I figured it out! This is the solution, in case anyone has the same problem.
run("Merge Channels...", "c1=["+fullname2+"] c2=["+fullname1+"] create");

AS400 CRTCMD command not created in library

I am making my own command, and so far the CL code that processes the .cmd source works just fine on its own: I can call it, pass in the parameters, and it does exactly what it needs to do. So I'm assuming the error must be in the .cmd:
CMD 'DISPLAY SYSTEM LEVEL (DSPSYSLVL) NADIA S.C.'
PARM KWD(OUTPUT)
MIN(1)
TYPE(*CHAR) LEN(8)
RSTD(*YES)
VALUES(*MSGLINE *DISPLAY)
PROMPT('OUTPUT FOR SYSTEM LEVEL')
PARM KWD(SOLUTION)
TYPE(*CHAR) LEN(4)
RSTD(*YES)
VALUES(*YES *NO)
DFT(*NO)
PROMPT('TELL ME HOW YOU DID IT')
PARM KWD(SHOWCMD)
TYPE(*CHAR) LEN(4)
RSTD(*YES)
VALUES(*YES *NO)
DFT(*NO)
PROMPT('SHOW COMMAND')
When I run CRTCMD and give the appropriate file names, I get the message "Command DSPSYSLVL not created in library [library name]." with a CPF0201 message ID.
I'm still fairly new to the whole system, and I'm really not sure what the problem could be. The job log doesn't provide any new information either...
It may just be a transcription issue but the first thing that stands out is the multi-line format without the continuation character (+):
CMD 'DISPLAY SYSTEM LEVEL (DSPSYSL'
PARM KWD(OUTPUT) +
     MIN(1) +
     TYPE(*CHAR) LEN(8) +
     RSTD(*YES) +
     VALUES(*MSGLINE *DISPLAY) +
     PROMPT('OUTPUT FOR SYSTEM LEVEL')
PARM KWD(SOLUTION) +
     TYPE(*CHAR) LEN(4) +
     RSTD(*YES) +
     VALUES(*YES *NO) +
     DFT(*NO) +
     PROMPT('TELL ME HOW YOU DID IT')
PARM KWD(SHOWCMD) +
     TYPE(*CHAR) LEN(4) +
     RSTD(*YES) +
     VALUES(*YES *NO) +
     DFT(*NO) +
     PROMPT('SHOW COMMAND')
Each PARM is a single entity and must be 'continued' if split onto multiple lines.
The CRTCMD command should generate a spooled file containing more details about the errors.
EDIT: Also the maximum length of the CMD prompt is 30 characters.

Scripting with Scala: How to launch an uncompiled script?

Apart from serious performance problems, Scala is a very powerful language, so I now use it frequently for scripted tasks inside Bash. Is there a way to just execute a *.scala file, exactly the way I can with Python files? As far as I know, Python uses bytecode to execute programs, just like the JVM does, yet there is no pythonc (like scalac or javac) I need to call first. Hence I expect Scala to be able to act in a similar manner.
The scala man page provides some examples of how to run Scala code fragments as scripts, on both Windows and non-Windows platforms (the examples below are copied from the man page):
Unix
#!/bin/sh
exec scala "$0" "$@"
!#
Console.println("Hello, world!")
argv.toList foreach Console.println
Windows
::#!
@echo off
call scala %0 %*
goto :eof
::!#
Console.println("Hello, world!")
argv.toList foreach Console.println
To speed up subsequent runs you can cache the compiled fragment with the -savecompiled option:
#!/bin/sh
exec scala -savecompiled "$0" "$@"
!#
Console.println("Hello, world!")
argv.toList foreach Console.println
Update: as of Scala 2.11 (as noted in this similar answer), you can now just do this on Unix:
#!/usr/bin/env scala
println("Hello, world!")
println(args.mkString(" "))
I don't use Python, but in Scala the most script-like thing I can do is this:
thinkpux:~/proj/mini/forum > echo 'println(" 3 + 4 = " + (3 + 4))' | scala
Welcome to Scala version 2.10.2 (Java HotSpot(TM) Server VM, Java 1.7.0_09).
Type in expressions to have them evaluated.
Type :help for more information.
scala> println(" 3 + 4 = " + (3 + 4))
3 + 4 = 7
scala> thinkpux:~/proj/mini/forum >
However, afterwards I get no visual feedback in the shell, so I have to call 'clear'.
But there is no problem in writing a script and executing that:
thinkpux:~/proj/mini/forum > echo 'println(" 3 + 4 = " + (3 + 4))' > print7.scala
thinkpux:~/proj/mini/forum > scala print7.scala
3 + 4 = 7
This way, there are no issues with the shell.
With just an enclosing class, however, the code isn't executed:
thinkpux:~/proj/mini/forum > echo -e 'class Foo {\nprintln(" 3 + 4 = " + (3 + 4))\n}\n' > Foo.scala
thinkpux:~/proj/mini/forum > scala Foo.scala
thinkpux:~/proj/mini/forum > cat Foo.scala
class Foo {
println(" 3 + 4 = " + (3 + 4))
}
But by instantiating the class, you can execute the code in it, without using the well-known 'main' approach:
thinkpux:~/proj/mini/forum > echo -e 'class Foo {\nprintln(" 3 + 4 = " + (3 + 4))\n}\nval foo = new Foo()' > Foo.scala
thinkpux:~/proj/mini/forum > cat Foo.scala
class Foo {
println(" 3 + 4 = " + (3 + 4))
}
val foo = new Foo()
thinkpux:~/proj/mini/forum > scala Foo.scala
3 + 4 = 7

Is it possible to copy all files from one S3 bucket to another with s3cmd?

I'm pretty happy with s3cmd, but there is one issue: How to copy all files from one S3 bucket to another? Is it even possible?
EDIT: I've found a way to copy files between buckets using Python with boto:
import time
from boto.s3.connection import S3Connection

def copyBucket(srcBucketName, dstBucketName, maxKeys = 100):
    conn = S3Connection(awsAccessKey, awsSecretKey)
    srcBucket = conn.get_bucket(srcBucketName)
    dstBucket = conn.get_bucket(dstBucketName)
    resultMarker = ''
    while True:
        keys = srcBucket.get_all_keys(max_keys = maxKeys, marker = resultMarker)
        for k in keys:
            print 'Copying ' + k.key + ' from ' + srcBucketName + ' to ' + dstBucketName
            t0 = time.clock()
            dstBucket.copy_key(k.key, srcBucketName, k.key)
            print time.clock() - t0, ' seconds'
        if len(keys) < maxKeys:
            print 'Done'
            break
        resultMarker = keys[maxKeys - 1].key
Syncing is almost as straightforward as copying: keys expose ETag, size, and last-modified fields you can compare against.
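As a rough sketch of that idea (syncBucket is a hypothetical name; credentials and imports as in the copy function above), one could skip keys whose ETag and size already match on the destination:

def syncBucket(srcBucketName, dstBucketName):
    # Sketch: copy a key only when it is missing on the destination
    # or its ETag/size differ from the source.
    conn = S3Connection(awsAccessKey, awsSecretKey)
    srcBucket = conn.get_bucket(srcBucketName)
    dstBucket = conn.get_bucket(dstBucketName)
    for srcKey in srcBucket.list():
        dstKey = dstBucket.get_key(srcKey.name)
        if dstKey is None or dstKey.etag != srcKey.etag or dstKey.size != srcKey.size:
            dstBucket.copy_key(srcKey.name, srcBucketName, srcKey.name)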
Maybe this helps others as well.
s3cmd sync s3://from/this/bucket/ s3://to/this/bucket/
For available options, please use:
s3cmd --help
AWS CLI seems to do the job perfectly, and has the bonus of being an officially supported tool.
aws s3 sync s3://mybucket s3://backup-mybucket
http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
The answer with the most upvotes as I write this is this one:
s3cmd sync s3://from/this/bucket s3://to/this/bucket
It's a useful answer. But sometimes sync is not what you need (it deletes files, etc.). It took me a long time to figure out this non-scripting alternative to simply copy multiple files between buckets. (OK, in the case shown below it's not between buckets. It's between not-really-folders, but it works between buckets equally well.)
# Slightly verbose, slightly unintuitive, very useful:
s3cmd cp --recursive --exclude='*' --include='file_prefix*' s3://semarchy-inc/source1/ s3://semarchy-inc/target/
Explanation of the above command:
--recursive: In my mind, my requirement is not recursive. I simply want multiple files. But recursive in this context just tells s3cmd cp to handle multiple files. Great.
--exclude: It's an odd way to think of the problem. Begin by recursively selecting all files. Next, exclude all files. Wait, what?
--include: Now we're talking. Indicate the file prefix (or suffix, or whatever pattern) that you want to include.
s3://sourceBucket/ s3://targetBucket/: This part is intuitive enough. Though technically it seems to violate the documented example from s3cmd help, which indicates that a source object must be specified: s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
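If you'd rather script that prefix-limited copy than wrestle with the flags, here is a hedged boto sketch of the same idea (bucket and prefix names taken from the example above, credentials assumed as in the earlier answers):

from boto.s3.connection import S3Connection

conn = S3Connection(awsAccessKey, awsSecretKey)
src = conn.get_bucket('semarchy-inc')
dst = conn.get_bucket('semarchy-inc')
# list() with a prefix replaces the --exclude/--include gymnastics
for key in src.list(prefix='source1/file_prefix'):
    newName = key.name.replace('source1/', 'target/', 1)
    dst.copy_key(newName, src.name, key.name)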
You can also use the web interface to do so:
Go to the source bucket in the web interface.
Mark the files you want to copy (use shift and mouse clicks to mark several).
Press Actions->Copy.
Go to the destination bucket.
Press Actions->Paste.
That's it.
I needed to copy a very large bucket, so I adapted the code in the question into a multi-threaded version and put it up on GitHub.
https://github.com/paultuckey/s3-bucket-to-bucket-copy-py
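Not the linked code, just a hedged sketch of the idea: a small thread pool around boto's copy_key, with one connection per worker since boto connections aren't thread-safe (bucket names hypothetical):

from multiprocessing.pool import ThreadPool
from boto.s3.connection import S3Connection

def copyOne(keyName):
    # each worker opens its own connection
    conn = S3Connection(awsAccessKey, awsSecretKey)
    conn.get_bucket(dstBucketName).copy_key(keyName, srcBucketName, keyName)

conn = S3Connection(awsAccessKey, awsSecretKey)
keyNames = [k.name for k in conn.get_bucket(srcBucketName).list()]
ThreadPool(10).map(copyOne, keyNames)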
It's actually possible. This worked for me:
import boto.s3.connection
import boto.s3.bucket

AWS_ACCESS_KEY = 'Your access key'
AWS_SECRET_KEY = 'Your secret key'

conn = boto.s3.connection.S3Connection(AWS_ACCESS_KEY, AWS_SECRET_KEY)
bucket = boto.s3.bucket.Bucket(conn, SRC_BUCKET_NAME)

for item in bucket:
    # Note: here you can also put a path inside DEST_BUCKET_NAME,
    # if you want your item to be stored inside a folder, like this:
    # item.copy(DEST_BUCKET_NAME, '%s/%s' % (folder_name, item.key))
    item.copy(DEST_BUCKET_NAME, item.key)
Thanks - I use a slightly modified version, where I only copy files that don't exist or have a different size, and delete keys from the destination that no longer exist in the source. I found this a bit quicker for readying the test environment:
def botoSyncPath(path):
    """
    Sync keys in specified path from source bucket to target bucket.
    """
    try:
        conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
        srcBucket = conn.get_bucket(AWS_SRC_BUCKET)
        destBucket = conn.get_bucket(AWS_DEST_BUCKET)
        for key in srcBucket.list(path):
            destKey = destBucket.get_key(key.name)
            if not destKey or destKey.size != key.size:
                key.copy(AWS_DEST_BUCKET, key.name)
        for key in destBucket.list(path):
            srcKey = srcBucket.get_key(key.name)
            if not srcKey:
                key.delete()
    except:
        return False
    return True
I wrote a script that backs up an S3 bucket: https://github.com/roseperrone/aws-backup-rake-task
#!/usr/bin/env python
from boto.s3.connection import S3Connection
import re
import datetime
import sys
import time

def main():
    s3_ID = sys.argv[1]
    s3_key = sys.argv[2]
    src_bucket_name = sys.argv[3]
    num_backup_buckets = sys.argv[4]
    connection = S3Connection(s3_ID, s3_key)
    delete_oldest_backup_buckets(connection, num_backup_buckets)
    backup(connection, src_bucket_name)

def delete_oldest_backup_buckets(connection, num_backup_buckets):
    """Deletes the oldest backup buckets such that only the newest NUM_BACKUP_BUCKETS - 1 buckets remain."""
    buckets = connection.get_all_buckets() # returns a list of bucket objects
    backup_bucket_names = []
    for bucket in buckets:
        if (re.search('backup-' + r'\d{4}-\d{2}-\d{2}', bucket.name)):
            backup_bucket_names.append(bucket.name)
    backup_bucket_names.sort(key=lambda x: datetime.datetime.strptime(x[len('backup-'):17], '%Y-%m-%d').date())
    # The names are now sorted oldest to newest, so delete from the front
    # of the list and keep the newest NUM_BACKUP_BUCKETS - 1
    delete = len(backup_bucket_names) - (int(num_backup_buckets) - 1)
    if delete <= 0:
        return
    for i in range(0, delete):
        print 'Deleting the backup bucket, ' + backup_bucket_names[i]
        connection.delete_bucket(backup_bucket_names[i])

def backup(connection, src_bucket_name):
    now = datetime.datetime.now()
    # the month and day must be zero-filled
    new_backup_bucket_name = 'backup-%04d-%02d-%02d' % (now.year, now.month, now.day)
    print "Creating new bucket " + new_backup_bucket_name
    new_backup_bucket = connection.create_bucket(new_backup_bucket_name)
    copy_bucket(src_bucket_name, new_backup_bucket_name, connection)

def copy_bucket(src_bucket_name, dst_bucket_name, connection, maximum_keys = 100):
    src_bucket = connection.get_bucket(src_bucket_name)
    dst_bucket = connection.get_bucket(dst_bucket_name)
    result_marker = ''
    while True:
        keys = src_bucket.get_all_keys(max_keys = maximum_keys, marker = result_marker)
        for k in keys:
            print 'Copying ' + k.key + ' from ' + src_bucket_name + ' to ' + dst_bucket_name
            t0 = time.clock()
            dst_bucket.copy_key(k.key, src_bucket_name, k.key)
            print time.clock() - t0, ' seconds'
        if len(keys) < maximum_keys:
            print 'Done backing up.'
            break
        result_marker = keys[maximum_keys - 1].key

if __name__ == '__main__': main()
I use this in a rake task (for a Rails app):
desc "Back up a file onto S3"
task :backup do
S3ID = "*****"
S3KEY = "*****"
SRCBUCKET = "primary-mzgd"
NUM_BACKUP_BUCKETS = 2
Dir.chdir("#{Rails.root}/lib/tasks")
system "./do_backup.py #{S3ID} #{S3KEY} #{SRCBUCKET} #{NUM_BACKUP_BUCKETS}"
end
mdahlman's code didn't work for me, but this command copies all the files in bucket1 to a new folder in bucket2 (the command also creates the new folder):
s3cmd cp --recursive --include=file_prefix* s3://bucket1/ s3://bucket2/new_folder_name/
s3cmd won't cp with only prefixes or wildcards, but you can script the behavior with 's3cmd ls sourceBucket' and awk to extract the object names, then use 's3cmd cp sourceBucket/name destBucket' to copy each object in the list.
I use these batch files in a DOS box on Windows:
s3list.bat
s3cmd ls %1 | gawk "/s3/{ print \"\\"\"\"substr($0,index($0,\"s3://\"))\"\\"\"\"; }"
s3copy.bat
@for /F "delims=" %%s in ('s3list %1') do @s3cmd cp %%s %2
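The same list-then-copy loop can also be sketched in Python instead of batch (illustrative only; assumes s3cmd is on PATH and that no key names contain spaces; bucket names are placeholders):

import subprocess

# 's3cmd ls' prints one object per line; the last column is the s3:// URI.
out = subprocess.check_output(['s3cmd', 'ls', 's3://sourceBucket/']).decode()
for line in out.splitlines():
    fields = line.split()
    if fields and fields[-1].startswith('s3://'):
        subprocess.call(['s3cmd', 'cp', fields[-1], 's3://destBucket/'])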
You can also use s3funnel, which uses multi-threading:
https://github.com/neelakanta/s3funnel
example (without the access key or secret key parameters shown):
s3funnel source-bucket-name list | s3funnel dest-bucket-name copy --source-bucket source-bucket-name --threads=10