I am working on a batch script that generates a deployment shell script from my input, and I found that the generated shell script has \r\n line endings, so I need to add a step that converts the file to use \n. I looked for a solution in native batch but failed, so now I am trying other approaches.
The input is a typical shell script of commands with \r\n as EOL.
The output should be the same shell script with \n as EOL.
Any method that can be done in a Windows environment will be appreciated.
nodejs approach:
const fs = require('fs');

// Read the CRLF file and rewrite it with LF-only line endings.
fs.readFile('apply_patchx.sh', 'utf8', (err, data) => {
  if (err) throw err;
  fs.writeFile('apply_patch.sh', data.replace(/\r?\n/g, '\n'), () => {
    console.log('converted');
  });
});
Call it with node rn2n.js; the input file name is embedded in the script.
Java approach:
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
public class RN2N {
    public static void main(String[] args) {
        try {
            BufferedReader br = new BufferedReader(new FileReader("apply_patchx.sh"));
            BufferedWriter bw = new BufferedWriter(new FileWriter("apply_patch.sh"));
            // readLine() strips the \r\n terminator, so writing "\n" after each
            // line is enough to produce LF-only endings in the output file.
            for (String line; (line = br.readLine()) != null;) {
                bw.write(line);
                bw.write("\n");
            }
            br.close();
            bw.flush();
            bw.close();
            System.out.println("converted");
        } catch (IOException e) {
            System.exit(1);
        }
    }
}
Call it with java RN2N; the input file name is embedded in the source.
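If you are on Java 7 or later, a shorter variant along the same lines reads the whole file at once and strips the carriage returns. This is just a sketch: the class name RN2NNio is arbitrary, and the file names are the same hard-coded ones as above.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RN2NNio {
    public static void main(String[] args) throws IOException {
        // Read the whole file, replace CRLF with LF, and write it back out.
        String content = new String(Files.readAllBytes(Paths.get("apply_patchx.sh")),
                StandardCharsets.UTF_8);
        Files.write(Paths.get("apply_patch.sh"),
                content.replace("\r\n", "\n").getBytes(StandardCharsets.UTF_8));
        System.out.println("converted");
    }
}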
I'm running OpenJDK 14 on macOS 10.15.7. I'm doing some proof-of-concept code establishing an SSH server with Apache Mina SSHD and then connecting to it. Here's what I have:
import java.io.IOException;
import java.nio.file.Paths;
import java.util.Arrays;
import org.apache.sshd.common.cipher.BuiltinCiphers;
import org.apache.sshd.common.util.logging.AbstractLoggingBean;
import org.apache.sshd.server.ServerBuilder;
import org.apache.sshd.server.SshServer;
import org.apache.sshd.server.auth.AsyncAuthException;
import org.apache.sshd.server.auth.password.PasswordAuthenticator;
import org.apache.sshd.server.auth.password.PasswordChangeRequiredException;
import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;
import org.apache.sshd.server.session.ServerSession;
import org.apache.sshd.server.shell.InteractiveProcessShellFactory;
import org.apache.sshd.server.shell.ProcessShellFactory;
public class FunctionalTest
{
private static class TestAuthenticator
extends AbstractLoggingBean
implements PasswordAuthenticator
{
@Override
public boolean authenticate(String username, String password, ServerSession session)
throws PasswordChangeRequiredException, AsyncAuthException
{
if ("test".equals(username) && "foobar".equals(password))
{
this.log.info("authenticate({}[{}]: accepted", username, session);
return true;
}
this.log.warn("authenticate({}[{}]: rejected", username, session);
return false;
}
}
public static void main(String... args) throws IOException
{
SshServer sshd = SshServer.setUpDefaultServer();
sshd.setHost("0.0.0.0");
sshd.setPort(1022);
sshd.setShellFactory(InteractiveProcessShellFactory.INSTANCE);
sshd.setPasswordAuthenticator(new TestAuthenticator());
sshd.setCipherFactories(Arrays.asList(BuiltinCiphers.aes256ctr, BuiltinCiphers.aes192ctr));
sshd.setKeyExchangeFactories(ServerBuilder.setUpDefaultKeyExchanges(false));
sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("key.ser")));
sshd.start();
try
{
Thread.sleep(3_600_000);
}
catch(InterruptedException e)
{
System.out.println("Caught interrupt ... stopping server.");
sshd.stop(true);
}
}
}
When I start this, I can ssh -p 1022 test@localhost with the password foobar and it works. After successful authentication, I first see this:
sh: no job control in this shell
Then at the prompt, characters I type (including newlines) are echoed twice instead of once, resulting in everything being dduupplliiccaatteedd:
williamsn:mina-test williamsn$ llss --aall
total 24
... (list of files)
williamsn:mina-test williamsn$ eecchhoo hheelllloo
hello
williamsn:mina-test williamsn$
Additionally, if I run an interactive command like top, it doesn't recognize my inputs, and control characters don't work. ttoopp starts (though its output is ugly and additive instead of replacing the screen), but if I type q to exit (q is not echoed twice in this case), top does not exit in response to the q. It just keeps going. Ctrl+C also does not work; top just keeps going. The only way to exit top is to kill my ssh process or shut down the MINA server.
I feel like I must be doing something terribly wrong here. Thoughts?
The "no job control" message indicates that the spawned shell Is not in a full interactive mode, and the double letters show that you have a mismatch between local and remote character echo.
I can only assume that the default shell on mac-os (/usr/bin/sh ) is not a bash implementation like it is on Linux. Try changing the shell factory to new ProcessShellFactory("/usr/bin/bash","-i")
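In the question's main method, that would look something like this (untested, and the exact ProcessShellFactory constructor arguments may differ between Mina SSHD versions):

// Replace the InteractiveProcessShellFactory line with an explicit interactive bash.
// Untested sketch; check the ProcessShellFactory constructors for your SSHD version.
sshd.setShellFactory(new ProcessShellFactory("/usr/bin/bash", "-i"));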
Sorry I don't have a mac to try this out.
Paul
I am trying to execute android CTS via this command:
./cts-tradefed run cts --shards ${no_of_devices}
When I execute the command from a plain shell in a terminal, it detects all the connected devices and executes the test suite in parallel on all of them.
But when I call this shell command from Java code (locally) or from the CI server, it detects all devices yet executes tests on only (no_of_devices - 1) of them.
The device that gets ignored is always the first device in the list. I have confirmed that the device itself is not the problem: if the same device is not the first one in the list, it is used for executing the tests.
My shell script looks like:
#!/bin/bash
./cts-tradefed run cts --shards 2 #say if I have two devices connected
The java code I use to execute the shell script is this:
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Main {
    public static void main(String[] args) {
        ProcessBuilder pb = new ProcessBuilder("temp/run-cts-with-sharding.sh");
        try {
            Process p = pb.start();
            Thread.sleep(2000);
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } catch (Exception e) {
            System.out.println("Exception on pb.start(): " + e);
        }
    }
}
I have followed the tutorial given by Amazon here, but it seems that my code fails to run.
The error that I got:
Exception in thread "main" java.lang.Error: Unresolved compilation problem:
The method withJobFlowRole(String) is undefined for the type AddJobFlowStepsRequest
at main.main(main.java:38)
My full code:
import java.io.IOException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.PropertiesCredentials;
import com.amazonaws.services.elasticmapreduce.*;
import com.amazonaws.services.elasticmapreduce.model.AddJobFlowStepsRequest;
import com.amazonaws.services.elasticmapreduce.model.AddJobFlowStepsResult;
import com.amazonaws.services.elasticmapreduce.model.HadoopJarStepConfig;
import com.amazonaws.services.elasticmapreduce.model.StepConfig;
import com.amazonaws.services.elasticmapreduce.util.StepFactory;
public class main {
public static void main(String[] args) {
AWSCredentials credentials = null;
try {
credentials = new PropertiesCredentials(
main.class.getResourceAsStream("AwsCredentials.properties"));
} catch (IOException e1) {
System.out.println("Credentials were not properly entered into AwsCredentials.properties.");
System.out.println(e1.getMessage());
System.exit(-1);
}
AmazonElasticMapReduce client = new AmazonElasticMapReduceClient(credentials);
// predefined steps. See StepFactory for list of predefined steps
StepConfig hive = new StepConfig("Hive", new StepFactory().newInstallHiveStep());
// A custom step
HadoopJarStepConfig hadoopConfig1 = new HadoopJarStepConfig()
.withJar("s3://mywordcountbuckett/binary/WordCount.jar")
.withMainClass("com.my.Main1") // optional main class, this can be omitted if jar above has a manifest
.withArgs("--verbose"); // optional list of arguments
StepConfig customStep = new StepConfig("Step1", hadoopConfig1);
AddJobFlowStepsResult result = client.addJobFlowSteps(new AddJobFlowStepsRequest()
.withJobFlowRole("jobflow_role")
.withServiceRole("service_role")
.withSteps(hive, customStep));
System.out.println(result.getStepIds());
}
}
What could be the reason that the code is not running?
Are there any tutorials based on the latest version?
Is there an elegant, easy and fast way to move data out of Hive into MongoDB?
You can do the export with the Hadoop-MongoDB connector. Just run the Hive query in your job's main method. This output will then be used by the Mapper in order to insert the data into MongoDB.
Example:
Here I'm inserting a semicolon-separated text file (id;firstname;lastname) into a MongoDB collection using a simple Hive query:
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import com.mongodb.hadoop.MongoOutputFormat;
import com.mongodb.hadoop.io.BSONWritable;
import com.mongodb.hadoop.util.MongoConfigUtil;
public class HiveToMongo extends Configured implements Tool {
private static class HiveToMongoMapper extends
Mapper<LongWritable, Text, IntWritable, BSONWritable> {
//See: https://issues.apache.org/jira/browse/HIVE-634
private static final String HIVE_EXPORT_DELIMETER = '\001' + "";
private IntWritable k = new IntWritable();
private BSONWritable v = null;
@Override
public void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
String [] split = value.toString().split(HIVE_EXPORT_DELIMETER);
k.set(Integer.parseInt(split[0]));
v = new BSONWritable();
v.put("firstname", split[1]);
v.put("lastname", split[2]);
context.write(k, v);
}
}
public static void main(String[] args) throws Exception {
try {
Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
}
catch (ClassNotFoundException e) {
System.out.println("Unable to load Hive Driver");
System.exit(1);
}
try {
Connection con = DriverManager.getConnection(
"jdbc:hive://localhost:10000/default");
Statement stmt = con.createStatement();
String sql = "INSERT OVERWRITE DIRECTORY " +
"'hdfs://localhost:8020/user/hive/tmp' select * from users";
stmt.executeQuery(sql);
}
catch (SQLException e) {
System.exit(1);
}
int res = ToolRunner.run(new Configuration(), new HiveToMongo(), args);
System.exit(res);
}
@Override
public int run(String[] args) throws Exception {
Configuration conf = getConf();
Path inputPath = new Path("/user/hive/tmp");
String mongoDbPath = "mongodb://127.0.0.1:6900/mongo_users.mycoll";
MongoConfigUtil.setOutputURI(conf, mongoDbPath);
/*
Add dependencies to distributed cache via
DistributedCache.addFileToClassPath(...) :
- mongo-hadoop-core-x.x.x.jar
- mongo-java-driver-x.x.x.jar
- hive-jdbc-x.x.x.jar
HadoopUtils is an own utility class
*/
HadoopUtils.addDependenciesToDistributedCache("/libs/mongodb", conf);
HadoopUtils.addDependenciesToDistributedCache("/libs/hive", conf);
Job job = new Job(conf, "HiveToMongo");
FileInputFormat.setInputPaths(job, inputPath);
job.setJarByClass(HiveToMongo.class);
job.setMapperClass(HiveToMongoMapper.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(MongoOutputFormat.class);
// match the mapper's declared output types (IntWritable key, BSONWritable value)
job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(BSONWritable.class);
job.setNumReduceTasks(0);
job.submit();
System.out.println("Job submitted.");
return 0;
}
}
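HadoopUtils in the snippet above is not a library class. A minimal sketch of what addDependenciesToDistributedCache could look like (hypothetical helper; it assumes the dependency jars have already been uploaded to the given HDFS directory):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopUtils {
    // Hypothetical helper: adds every jar found in the given HDFS directory
    // to the job's classpath via the distributed cache.
    public static void addDependenciesToDistributedCache(String hdfsDir, Configuration conf)
            throws Exception {
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path(hdfsDir))) {
            if (status.getPath().getName().endsWith(".jar")) {
                DistributedCache.addFileToClassPath(status.getPath(), conf);
            }
        }
    }
}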
One drawback is that a 'staging area' (/user/hive/tmp) is needed to store the intermediate Hive output. Furthermore as far as I know the Mongo-Hadoop connector doesn't support upserts.
I'm not quite sure, but you could also try to fetch the data from Hive without running hiveserver (which exposes a Thrift service), so that you can probably save some overhead.
Look at the source code of Hive's org.apache.hadoop.hive.cli.CliDriver#processLine(String line, boolean allowInterrupting) method, which actually executes the query. Then you can hack together something like this:
...
LogUtils.initHiveLog4j();
CliSessionState ss = new CliSessionState(new HiveConf(SessionState.class));
ss.in = System.in;
ss.out = new PrintStream(System.out, true, "UTF-8");
ss.err = new PrintStream(System.err, true, "UTF-8");
SessionState.start(ss);
Driver qp = new Driver();
processLocalCmd("SELECT * from users", qp, ss); //taken from CliDriver
...
Side notes:
There's also a hive-mongo connector implementation you might check.
It's also worth having a look at the implementation of the Hive-HBase connector to get some ideas if you want to implement a similar one for MongoDB.
Have you looked into Sqoop? It's supposed to make it very simple to move data between Hadoop and SQL/NoSQL databases. This article also gives an example of using it with Hive.
Take a look at the hadoop-MongoDB connector project:
http://api.mongodb.org/hadoop/MongoDB%2BHadoop+Connector.html
"This connectivity takes the form of allowing both reading MongoDB data into Hadoop (for use in MapReduce jobs as well as other components of the Hadoop ecosystem), as well as writing the results of Hadoop jobs out to MongoDB."
Not sure if it will work for your use case, but it's worth looking at.
From a Java application I run a .bat file which starts another Java application:
ProcessBuilder processBuilder = new ProcessBuilder("path to bat file");
Process process = processBuilder.start();
But the process never starts and no errors get printed. However, if I add the line:
String resultString = convertStreamToString(process.getInputStream());
after Process process = processBuilder.start();
where:
public String convertStreamToString(InputStream is) throws IOException {
/*
* To convert the InputStream to String we use the Reader.read(char[]
* buffer) method. We iterate until the Reader return -1 which means there's
* no more data to read. We use the StringWriter class to produce the
* string.
*/
if (is != null) {
Writer writer = new StringWriter();
char[] buffer = new char[1024];
try {
Reader reader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
int n;
while ((n = reader.read(buffer)) != -1) {
writer.write(buffer, 0, n);
}
} finally {
is.close();
}
return writer.toString();
} else {
return "";
}
}
it runs fine! Any ideas?
If it's really a batch file, you should run the command-line interpreter as the process (e.g. cmd.exe) with that file as a parameter.
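For example (a minimal sketch; substitute the real path to your .bat file):

// Run the batch file through the Windows command interpreter.
// "/c" tells cmd.exe to execute the given command line and then exit.
ProcessBuilder processBuilder = new ProcessBuilder("cmd.exe", "/c", "path\\to\\script.bat");
Process process = processBuilder.start();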
Solved here:
Starting a process with inherited stdin/stdout/stderr in Java 6
But, FYI, the deal is that sub-processes have a limited output buffer, so if you don't read from it they block while waiting to write more output. Your example in the original post correctly resolves this by continuing to read from the process's output stream, so it doesn't hang.
The linked-to article demonstrates one method of reading from the streams. The key take-away, though, is that you've got to keep reading the output/error streams of the subprocess to keep it from hanging due to I/O blocking.
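A minimal sketch of that idea (the class name RunBat is arbitrary and the .bat path is a placeholder): merge stderr into stdout so a single read loop drains everything and the child can never block on a full buffer.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class RunBat {
    public static void main(String[] args) throws IOException {
        ProcessBuilder pb = new ProcessBuilder("cmd.exe", "/c", "path\\to\\script.bat");
        // Merge stderr into stdout so one read loop drains both streams.
        pb.redirectErrorStream(true);
        Process process = pb.start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}

On Java 7 and later, ProcessBuilder.inheritIO() is another option: it hands the child the parent's stdin/stdout/stderr, so nothing needs to be drained manually.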