I am trying to run a Hive query using gcloud compute ssh via Scala.
First, here is what I tried:
scala> import sys.process._
scala> val results = Seq("hive", "-e", "show databases;").!!
asd
zxc
qwe
scala>
which is good. Now, I want to run the same Hive command, but against a GCP cluster. I have gcloud set up on my VM, and from the command line I can easily do:
$ gcloud compute ssh --zone myZone myNode --internal-ip -- 'hive -e "show databases;"'
Updating project ssh metadata...⠶Updated [https://www.googleapis.com/compute/v1/projects/myProject].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
Warning: Permanently added 'compute.2746937995265952194' (RSA) to the list of known hosts.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 19 100 19 0 0 2982 0 --:--:-- --:--:-- --:--:-- 3166
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j2.properties Async: true
OK
asd
zxc
qwe
Now, I want to run the above using Scala. Here is what I tried:
scala> val results = Seq("gcloud", "compute", "ssh", "--zone", "myZone", "myNode", "--internal-ip", "--", "hive", "-e" ,"show databases").!!
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 19 100 19 0 0 3270 0 --:--:-- --:--:-- --:--:-- 3800
Pseudo-terminal will not be allocated because stdin is not a terminal.
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j2.properties Async: true
NoViableAltException(-1#[846:1: ddlStatement : ( createDatabaseStatement | switchDatabaseStatement | dropDatabaseStatement | createTableStatement | dropTableStatement | truncateTableStatement | alterStatement | descStatement | showStatement | metastoreCheck | createViewStatement | createMaterializedViewStatement | dropViewStatement | dropMaterializedViewStatement | createFunctionStatement | createMacroStatement | createIndexStatement | dropIndexStatement | dropFunctionStatement | reloadFunctionStatement | dropMacroStatement | analyzeStatement | lockStatement | unlockStatement | lockDatabase | unlockDatabase | createRoleStatement | dropRoleStatement | ( grantPrivileges )=> grantPrivileges | ( revokePrivileges )=> revokePrivileges | showGrants | showRoleGrants | showRolePrincipals | showRoles | grantRole | revokeRole | setRole | showCurrentRole | abortTransactionStatement );])
at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
at org.antlr.runtime.DFA.predict(DFA.java:144)
at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:3757)
at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:2382)
at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1333)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:208)
at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:77)
at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:70)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:468)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:787)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:244)
at org.apache.hadoop.util.RunJar.main(RunJar.java:158)
FAILED: ParseException line 1:4 cannot recognize input near 'show' '<EOF>' '<EOF>' in ddl statement
java.lang.RuntimeException: Nonzero exit value: 64
at scala.sys.package$.error(package.scala:27)
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.slurp(ProcessBuilderImpl.scala:132)
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.$bang$bang(ProcessBuilderImpl.scala:102)
... 50 elided
scala>
Why am I getting this error? I also tried:
scala> val results = Seq("gcloud", "compute", "ssh", "--zone", "myZone", "myNode", "--internal-ip", "--", "hive", "-e" ,"show databases;").!!
but got the same error. Then I tried:
scala> val results = Seq("gcloud", "compute", "ssh", "--zone", "myZone", "myNode", "--internal-ip", "--", "'hive -e \"show databases;\"'").!!
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 19 100 19 0 0 3245 0 --:--:-- --:--:-- --:--:-- 3800
Pseudo-terminal will not be allocated because stdin is not a terminal.
bash: hive -e "show databases;": command not found
java.lang.RuntimeException: Nonzero exit value: 127
at scala.sys.package$.error(package.scala:27)
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.slurp(ProcessBuilderImpl.scala:132)
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.$bang$bang(ProcessBuilderImpl.scala:102)
... 50 elided
How can I run gcloud compute ssh properly using Scala?
You don't need the single quotes in your last example. You're trying to pass the string:
hive -e "show databases;"
For fun, I would use triple quotes in Scala:
"""hive -e "show databases;""""
to avoid backslash escapes. The single quotes in your working command line are processed (and removed) by bash before gcloud ever sees them.
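Putting that together, here is a minimal sketch (myZone and myNode are the placeholders from the question). With the Seq form, the whole remote command goes in as a single element, so no surrounding single quotes are needed:

import scala.sys.process._

// the remote command exactly as it should reach the VM's shell;
// triple quotes avoid backslash-escaping the inner double quotes
val remoteCmd = """hive -e "show databases;""""

// argument-list form: each element is handed to gcloud verbatim,
// and gcloud passes everything after "--" to ssh as the remote command
val results = Seq(
  "gcloud", "compute", "ssh",
  "--zone", "myZone", "myNode", "--internal-ip",
  "--", remoteCmd
).!!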
This is what worked in bash:
$ gcloud compute ssh --zone myZone myNode --internal-ip -- 'hive -e "show databases;"'
scala.sys.process gained some basic command-line parsing at some point. In the example below, there is a space in the file name that must be quoted; amazingly, it seems to handle shell-style quotes:
$ scala
Welcome to Scala 2.13.0 (OpenJDK 64-Bit Server VM, Java 11.0.3).
Type in expressions for evaluation. Or try :help.
scala> import scala.sys.process._
import scala.sys.process._
scala> "ls -l /tmp/skypeforlinux Crashes".!!
ls: cannot access '/tmp/skypeforlinux': No such file or directory
ls: cannot access 'Crashes': No such file or directory
java.lang.RuntimeException: Nonzero exit value: 2
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.slurp(ProcessBuilderImpl.scala:155)
at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.$bang$bang(ProcessBuilderImpl.scala:112)
... 28 elided
scala> """ls -l "/tmp/skypeforlinux Crashes"""".!!
res1: String =
"total 0
"
scala> """ls -l '/tmp/skypeforlinux Crashes'""".!!
res2: String =
"total 0
"
scala> """ls -l /tmp/skypeforlin'ux Cr'ashes""".!!
res3: String =
"total 0
"
scala> """echo 'hive -e "show databases;"'""".!!
res4: String =
"hive -e "show databases;"
"
The double quotes around "my house" are part of the file name:
scala> """ls '/tmp/"my house"'""".!!
res5: String =
"/tmp/"my house"
"
I guess that code is where I learned how shell-style quotes work, though I rarely get a chance to use that knowledge. Except for this answer, so thanks for the opportunity.
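Applied to the original command, this suggests a single triple-quoted string should work as well, since sys.process strips the shell-style quotes much like bash would. A sketch under the same Scala 2.13 setup shown above, not verified against a real cluster:

val results = """gcloud compute ssh --zone myZone myNode --internal-ip -- 'hive -e "show databases;"'""".!!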
Here is my feature file - a.feature:
Scenario Outline: Some outline
    Given something
    When <thing> is moved to <position>
    Then something else

    Examples:
        | thing | position |
        | 1     | 1        |
and save it in /tmp/a.feature
Here is my pytest step file (/tmp/a.py):
from pytest_bdd import (
    given,
    scenario,
    then,
    when,
)

@scenario('./x.feature', 'Some outline')
def test_some_outline():
    """Some outline."""

@given('something')
def something():
    """something."""
    pass

@when('<thing> is moved to <position>')
def thing_is_moved_to_position(thing, position):
    assert isinstance(thing, int)
    assert isinstance(position, int)

@then('something else')
def something_else():
    """something else."""
    pass
When I run it:
$ pwd
/tmp
$ pytest ./a.py
............
............
E pytest_bdd.exceptions.StepDefinitionNotFoundError: Step definition is not found: When "1 is moved to 1". Line 3 in scenario "Some outline" in the feature "/tmp/x.feature"
/home/cyan/.local/lib/python3.10/site-packages/pytest_bdd/scenario.py:192: StepDefinitionNotFoundError
============= short test summary info =============
FAILED x.py::test_some_outline[1-1] - pytest_bdd.exceptions.StepDefinitionNotFoundError: Step definition is not found: When "1 is moved to 1". Line 3 in scenario "Some outli...
============ 1 failed in 0.09s ============
I am following the tutorial in the documentation (https://snakemake.readthedocs.io/en/stable/tutorial/advanced.html) and have been stuck on the "Step 4: Rule parameter" exercise. I would like to access a float from my config file using a wildcard in my params directive.
I seem to be getting the same error whenever I run snakemake -np in the command line:
InputFunctionException in line 46 of /mnt/c/Users/Matt/Desktop/snakemake-tutorial/Snakefile:
Error:
AttributeError: 'Wildcards' object has no attribute 'sample'
Wildcards:
Traceback:
File "/mnt/c/Users/Matt/Desktop/snakemake-tutorial/Snakefile", line 14, in get_bcftools_call_priors
This is my code so far:
import time

configfile: "config.yaml"

rule all:
    input:
        "plots/quals.svg"

def get_bwa_map_input_fastqs(wildcards):
    print(wildcards.__dict__, 1, time.time())  #I have this print as a check
    return config["samples"][wildcards.sample]

def get_bcftools_call_priors(wildcards):
    print(wildcards.__dict__, 2, time.time())  #I have this print as a check
    return config["prior_mutation_rates"][wildcards.sample]

rule bwa_map:
    input:
        "data/genome.fa",
        get_bwa_map_input_fastqs
        #lambda wildcards: config["samples"][wildcards.sample]
    output:
        "mapped_reads/{sample}.bam"
    params:
        rg=r"@RG\tID:{sample}\tSM:{sample}"
    threads: 2
    shell:
        "bwa mem -R '{params.rg}' -t {threads} {input} | samtools view -Sb - > {output}"

rule samtools_sort:
    input:
        "mapped_reads/{sample}.bam"
    output:
        "sorted_reads/{sample}.bam"
    shell:
        "samtools sort -T sorted_reads/{wildcards.sample} "
        "-O bam {input} > {output}"

rule samtools_index:
    input:
        "sorted_reads/{sample}.bam"
    output:
        "sorted_reads/{sample}.bam.bai"
    shell:
        "samtools index {input}"

rule bcftools_call:
    input:
        fa="data/genome.fa",
        bam=expand("sorted_reads/{sample}.bam", sample=config["samples"]),
        bai=expand("sorted_reads/{sample}.bam.bai", sample=config["samples"])
        #prior=get_bcftools_call_priors
    params:
        prior=get_bcftools_call_priors
    output:
        "calls/all.vcf"
    shell:
        "samtools mpileup -g -f {input.fa} {input.bam} | "
        "bcftools call -P {params.prior} -mv - > {output}"

rule plot_quals:
    input:
        "calls/all.vcf"
    output:
        "plots/quals.svg"
    script:
        "scripts/plot-quals.py"
and here is my config.yaml:
samples:
    A: data/samples/A.fastq
    #B: data/samples/B.fastq
    #C: data/samples/C.fastq

prior_mutation_rates:
    A: 1.0e-4
    #B: 1.0e-6
I don't understand why my input function call in bcftools_call says that the wildcards object is empty of attributes, yet an almost identical function call in bwa_map has the sample attribute that I want. From the documentation it seems like the wildcards would be propagated before anything is run, so why is it missing?
This is the full output of the commandline call snakemake -np:
{'_names': {'sample': (0, None)}, '_allowed_overrides': ['index', 'sort'], 'index': functools.partial(<function Namedlist._used_attribute at 0x7f91b1a58f70>, _name='index'), 'sort': functools.partial(<function Namedlist._used_attribute at 0x7f91b1a58f70>, _name='sort'), 'sample': 'A'} 1 1628877061.8831172
Job stats:
job count min threads max threads
-------------- ------- ------------- -------------
all 1 1 1
bcftools_call 1 1 1
bwa_map 1 1 1
plot_quals 1 1 1
samtools_index 1 1 1
samtools_sort 1 1 1
total 6 1 1
[Fri Aug 13 10:51:01 2021]
rule bwa_map:
input: data/genome.fa, data/samples/A.fastq
output: mapped_reads/A.bam
jobid: 4
wildcards: sample=A
resources: tmpdir=/tmp
bwa mem -R '@RG\tID:A\tSM:A' -t 1 data/genome.fa data/samples/A.fastq | samtools view -Sb - > mapped_reads/A.bam
[Fri Aug 13 10:51:01 2021]
rule samtools_sort:
input: mapped_reads/A.bam
output: sorted_reads/A.bam
jobid: 3
wildcards: sample=A
resources: tmpdir=/tmp
samtools sort -T sorted_reads/A -O bam mapped_reads/A.bam > sorted_reads/A.bam
[Fri Aug 13 10:51:01 2021]
rule samtools_index:
input: sorted_reads/A.bam
output: sorted_reads/A.bam.bai
jobid: 5
wildcards: sample=A
resources: tmpdir=/tmp
samtools index sorted_reads/A.bam
[Fri Aug 13 10:51:01 2021]
rule bcftools_call:
input: data/genome.fa, sorted_reads/A.bam, sorted_reads/A.bam.bai
output: calls/all.vcf
jobid: 2
resources: tmpdir=/tmp
{'_names': {}, '_allowed_overrides': ['index', 'sort'], 'index': functools.partial(<function Namedlist._used_attribute at 0x7f91b1a58f70>, _name='index'), 'sort': functools.partial(<function Namedlist._used_attribute at 0x7f91b1a58f70>, _name='sort')} 2 1628877061.927639
InputFunctionException in line 46 of /mnt/c/Users/Matt/Desktop/snakemake-tutorial/Snakefile:
Error:
AttributeError: 'Wildcards' object has no attribute 'sample'
Wildcards:
Traceback:
File "/mnt/c/Users/Matt/Desktop/snakemake-tutorial/Snakefile", line 14, in get_bcftools_call_priors
If anyone knows what is going wrong, I would really appreciate an explanation. Also, if there is a better way of getting information out of the config.yaml into the different directives, I would gladly take those tips.
Edit:
I have searched around the internet quite a bit, but have yet to understand this issue.
Wildcards for each rule are based on that rule's output file(s). The rule bcftools_call has one output file (calls/all.vcf), which has no wildcards. Because of this, when get_bcftools_call_priors is called, it throws an exception when it tries to access the unset wildcards.sample attribute.
You should probably set a global prior_mutation_rate in your config file and then access that in the bcftools_call rule:
rule bcftools_call:
    ...
    params:
        prior=config["prior_mutation_rate"],
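For completeness, a minimal sketch of the matching config.yaml change (the key name prior_mutation_rate follows the suggestion above; the value is the one already in the question's config):

samples:
    A: data/samples/A.fastq

prior_mutation_rate: 1.0e-4

Since this is a single scalar in the config rather than a per-sample mapping, the bcftools_call rule no longer needs an input function or any wildcard lookup at all.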
I have an ETL job where I load some data from S3 into a dynamic frame, relationalize it, and iterate through the dynamic frames returned. I want to query the result of this in Athena later, so I want to change the column names from containing '.' to '_' and lower-case them. When I do this transformation, I change the DynamicFrame into a Spark dataframe and have been doing it that way. I've also seen in another SO question that there is a reported problem with the AWS Glue rename field transform, so I've stayed away from that.
I've tried a couple of things, including adding a load limit size of 50 MB, repartitioning the dataframe, using both dataframe.schema.names and dataframe.columns, using reduce instead of loops, and using Spark SQL to change the names; nothing has worked. I'm fairly certain that it's this transformation that's failing, because a print statement I put right after the completion of this transformation never shows up. I used a UDF at one point, but that also failed. I've tried the actual transformation using both df.toDF(new_column_names) and df.withColumnRenamed(), but it never gets that far because I've never seen it get past retrieving the column names. Here's the code I've been using; I've been changing the actual name transformation as I said above, but the rest of it has stayed pretty much the same.
I've seen some people try to use the spark.executor.memory, spark.driver.memory, spark.executor.memoryOverhead and spark.driver.memoryOverhead settings. I've used those and set them to the maximum AWS Glue will allow, but to no avail.
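For reference, here is a minimal standalone PySpark sketch of the two rename approaches mentioned above; this is not the Glue job itself, and the dotted column names are made up for illustration. The full job follows below:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 2)], ["some.field", "another.Field"])

# one pass: compute every new name first, then rename them all at once with toDF
new_names = [c.replace('.', '_').lower() for c in df.columns]
df_renamed = df.toDF(*new_names)

# equivalent per-column loop with withColumnRenamed (what the job below does per frame)
for old_name in df.columns:
    df = df.withColumnRenamed(old_name, old_name.replace('.', '_').lower())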
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from pyspark.sql.functions import explode, col, lower, trim, regexp_replace
import copy
import json
import boto3
import botocore
import time
# ========================================================
# UTILITY FUNCTIONS
# ========================================================
def lower_and_pythonize(s=None):
    if s is not None:
        return s.replace('.', '_').lower()
    else:
        return None
# pyspark implementation of renaming
# exprs = [
# regexp_replace(lower(trim(col(c))),'\.' , '_').alias(c) if t == "string" else col(c)
# for (c, t) in data_frame.dtypes
# ]
# ========================================================
# END UTILITY FUNCTIONS
# ========================================================
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
#my params
bucket_name = '<my-s3-bucket>' # name of the bucket. do not include 's3://' thats added later
output_key = '<my-output-path>' # key where all of the output is saved
input_keys = ["<root directory I'm using>"] # highest level key that holds all of the desired data
s3_exclusions = "[\"*.orc\"]" # list of strings to exclude. Documentation: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect.html#aws-glue-programming-etl-connect-s3
s3_exclusions = s3_exclusions.replace('\n', '')
dfc_root_table_name = 'root' # name of the root table generated in the relationalize process
input_paths = ['s3://' + bucket_name + '/' + x for x in input_keys] # turn input keys into s3 paths
output_connection_opts = {"path": "s3://" + bucket_name + "/" + output_key} # dict of options. Documentation link found above the write_dynamic_frame.from_options line
s3_client = boto3.client('s3', 'us-east-1') # s3 client used for writing to s3
s3_resource = boto3.resource('s3', 'us-east-1') # s3 resource used for checking if key exists
group_mb = 50 # NOTE: 75 has proven to be too much when running on all of the april data
group_size = str(group_mb * 1024 * 1024)
input_connection_opts = {'paths': input_paths,
                         'groupFiles': 'inPartition',
                         'groupSize': group_size,
                         'recurse': True,
                         'exclusions': s3_exclusions} # dict of options. Documentation link found above the create_dynamic_frame_from_options line
print(sc._conf.get('spark.executor.cores'))
num_paritions = int(sc._conf.get('spark.executor.cores')) * 4
print('Loading all json files into DynamicFrame...')
loading_time = time.time()
df = glueContext.create_dynamic_frame_from_options(connection_type='s3', connection_options=input_connection_opts, format='json')
print('Done. Time to complete: {}s'.format(time.time() - loading_time))
# using the list of known null fields (at least on small sample size) remove them
#df = df.drop_fields(drop_paths)
# drop any remaining null fields. The above covers known problems that this step doesn't fix
print('Dropping null fields...')
dropping_time = time.time()
df_without_null = DropNullFields.apply(frame=df, transformation_ctx='df_without_null')
print('Done. Time to complete: {}s'.format(time.time() - dropping_time))
df = None
print('Relationalizing dynamic frame...')
relationalizing_time = time.time()
dfc = Relationalize.apply(frame=df_without_null, name=dfc_root_table_name, info="RELATIONALIZE", transformation_ctx='dfc', stageThreshold=3)
print('Done. Time to complete: {}s'.format(time.time() - relationalizing_time))
keys = dfc.keys()
keys.sort(key=lambda s: len(s))
print('Writting all dynamic frames to s3...')
writting_time = time.time()
for key in keys:
    good_key = lower_and_pythonize(s=key)
    data_frame = dfc.select(key).toDF()

    # lowercase all the names and remove '.'
    print('Removing . and _ from names for {} frame...'.format(key))
    df_fix_names_time = time.time()
    print('Repartitioning data frame...')
    data_frame.repartition(num_paritions)
    print('Done.')
    #
    print('Changing names...')
    for old_name in data_frame.schema.names:
        data_frame = data_frame.withColumnRenamed(old_name, old_name.replace('.','_').lower())
    print('Done.')
    #
    df_now = DynamicFrame.fromDF(dataframe=data_frame, glue_ctx=glueContext, name='df_now')
    print('Done. Time to complete: {}'.format(time.time() - df_fix_names_time))

    # if a conflict of types appears, make it 2 columns
    # https://docs.aws.amazon.com/glue/latest/dg/built-in-transforms.html
    print('Fixing any type conficts for {} frame...'.format(key))
    df_resolve_time = time.time()
    resolved = ResolveChoice.apply(frame = df_now, choice = 'make_cols', transformation_ctx = 'resolved')
    print('Done. Time to complete: {}'.format(time.time() - df_resolve_time))

    # check if key exists in s3. if not make one
    out_connect = copy.deepcopy(output_connection_opts)
    out_connect['path'] = out_connect['path'] + '/' + str(good_key)
    try:
        s3_resource.Object(bucket_name, output_key + '/' + good_key + '/').load()
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == '404' or 'NoSuchKey' in e.response['Error']['Code']:
            # object doesn't exist
            s3_client.put_object(Bucket=bucket_name, Key=output_key+'/'+good_key + '/')
        else:
            print(e)

    ## https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-glue-context.html
    print('Writing {} frame to S3...'.format(key))
    df_writing_time = time.time()
    datasink4 = glueContext.write_dynamic_frame.from_options(frame = df_now, connection_type = "s3", connection_options = out_connect, format = "orc", transformation_ctx = "datasink4")
    out_connect = None
    datasink4 = None
    print('Done. Time to complete: {}'.format(time.time() - df_writing_time))

print('Done. Time to complete: {}s'.format(time.time() - writting_time))
job.commit()
Here is the error I'm getting:
19/06/07 16:33:36 DEBUG Client:
client token: N/A
diagnostics: Application application_1559921043869_0001 failed 1 times due to AM Container for appattempt_1559921043869_0001_000001 exited with exitCode: -104
For more detailed output, check application tracking page:http://ip-172-32-9-38.ec2.internal:8088/cluster/app/application_1559921043869_0001Then, click on links to logs of each attempt.
Diagnostics: Container [pid=9630,containerID=container_1559921043869_0001_01_000001] is running beyond physical memory limits. Current usage: 5.6 GB of 5.5 GB physical memory used; 8.8 GB of 27.5 GB virtual memory used. Killing container.
Dump of the process-tree for container_1559921043869_0001_01_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 9630 9628 9630 9630 (bash) 0 0 115822592 675 /bin/bash -c LD_LIBRARY_PATH=/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native /usr/lib/jvm/java-openjdk/bin/java -server -Xmx5120m -Djava.io.tmpdir=/mnt/yarn/usercache/root/appcache/application_1559921043869_0001/container_1559921043869_0001_01_000001/tmp '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' '-Djavax.net.ssl.trustStore=ExternalAndAWSTrustStore.jks' '-Djavax.net.ssl.trustStoreType=JKS' '-Djavax.net.ssl.trustStorePassword=amazon' '-DRDS_ROOT_CERT_PATH=rds-combined-ca-bundle.pem' '-DREDSHIFT_ROOT_CERT_PATH=redshift-ssl-ca-cert.pem' '-DRDS_TRUSTSTORE_URL=file:RDSTrustStore.jks' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1559921043869_0001/container_1559921043869_0001_01_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.deploy.PythonRunner' --primary-py-file runscript.py --arg 'script_2019-06-07-15-29-50.py' --arg '--JOB_NAME' --arg 'tss-json-to-orc' --arg '--JOB_ID' --arg 'j_f9f7363e5d8afa20784bc83d7821493f481a78352641ad2165f8f68b88c8e5fe' --arg '--JOB_RUN_ID' --arg 'jr_a77087792dd74231be1f68c1eda2ed33200126b8952c5b1420cb6684759cf233' --arg '--job-bookmark-option' --arg 'job-bookmark-disable' --arg '--TempDir' --arg 's3://aws-glue-temporary-059866946490-us-east-1/zmcgrath' --properties-file /mnt/yarn/usercache/root/appcache/application_1559921043869_0001/container_1559921043869_0001_01_000001/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/containers/application_1559921043869_0001/container_1559921043869_0001_01_000001/stdout 2> /var/log/hadoop-yarn/containers/application_1559921043869_0001/container_1559921043869_0001_01_000001/stderr
|- 9677 9648 9630 9630 (python) 12352 2628 1418354688 261364 python runscript.py script_2019-06-07-15-29-50.py --JOB_NAME tss-json-to-orc --JOB_ID j_f9f7363e5d8afa20784bc83d7821493f481a78352641ad2165f8f68b88c8e5fe --JOB_RUN_ID jr_a77087792dd74231be1f68c1eda2ed33200126b8952c5b1420cb6684759cf233 --job-bookmark-option job-bookmark-disable --TempDir s3://aws-glue-temporary-059866946490-us-east-1/zmcgrath
|- 9648 9630 9630 9630 (java) 265906 3083 7916974080 1207439 /usr/lib/jvm/java-openjdk/bin/java -server -Xmx5120m -Djava.io.tmpdir=/mnt/yarn/usercache/root/appcache/application_1559921043869_0001/container_1559921043869_0001_01_000001/tmp -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p -Djavax.net.ssl.trustStore=ExternalAndAWSTrustStore.jks -Djavax.net.ssl.trustStoreType=JKS -Djavax.net.ssl.trustStorePassword=amazon -DRDS_ROOT_CERT_PATH=rds-combined-ca-bundle.pem -DREDSHIFT_ROOT_CERT_PATH=redshift-ssl-ca-cert.pem -DRDS_TRUSTSTORE_URL=file:RDSTrustStore.jks -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1559921043869_0001/container_1559921043869_0001_01_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class org.apache.spark.deploy.PythonRunner --primary-py-file runscript.py --arg script_2019-06-07-15-29-50.py --arg --JOB_NAME --arg tss-json-to-orc --arg --JOB_ID --arg j_f9f7363e5d8afa20784bc83d7821493f481a78352641ad2165f8f68b88c8e5fe --arg --JOB_RUN_ID --arg jr_a77087792dd74231be1f68c1eda2ed33200126b8952c5b1420cb6684759cf233 --arg --job-bookmark-option --arg job-bookmark-disable --arg --TempDir --arg s3://aws-glue-temporary-059866946490-us-east-1/zmcgrath --properties-file /mnt/yarn/usercache/root/appcache/application_1559921043869_0001/container_1559921043869_0001_01_000001/__spark_conf__/__spark_conf__.properties
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1559921462650
final status: FAILED
tracking URL: http://ip-172-32-9-38.ec2.internal:8088/cluster/app/application_1559921043869_0001
user: root
Here are the log contents from the job:
LogType:stdout
Log Upload Time:Fri Jun 07 16:33:36 +0000 2019
LogLength:487
Log Contents:
4
Loading all json files into DynamicFrame...
Done. Time to complete: 59.5056920052s
Dropping null fields...
null_fields [<some fields that were dropped>]
Done. Time to complete: 529.95293808s
Relationalizing dynamic frame...
Done. Time to complete: 2773.11689401s
Writting all dynamic frames to s3...
Removing . and _ from names for root frame...
Repartitioning data frame...
Done.
Changing names...
End of LogType:stdout
As I said earlier, the 'Done.' print after changing the names never appears in the logs. I've seen plenty of people getting the same error I'm seeing, and I've tried a fair number of their suggestions with no success. Any help you can provide would be much appreciated. Let me know if you need any more information. Thanks.
Edit
Prabhakar's comment reminded me that I have tried the memory worker type in AWS Glue and it still failed. As stated above, I have tried raising the memoryOverhead from 5 to 12, but to no avail. Neither of these made the job complete successfully.
Update
I put in the following code for the column name change, instead of the code above, for easier debugging:
print('Changing names...')
name_counter = 0
for old_name in data_frame.schema.names:
    print('Name number {}. name being changed: {}'.format(name_counter, old_name))
    data_frame = data_frame.withColumnRenamed(old_name, old_name.replace('.','_').lower())
    name_counter += 1
print('Done.')
And I got the following output:
Removing . and _ from names for root frame...
Repartitioning data frame...
Done.
Changing names...
End of LogType:stdout
So it must be a problem with the data_frame.schema.names part. Could it be the line where I loop through all of the DynamicFrames? Am I looping through the DynamicFrames from the relationalize transformation correctly?
Update 2
Glue recently added more verbose logs, and I found this:
ERROR YarnClusterScheduler: Lost executor 396 on ip-172-32-78-221.ec2.internal: Container killed by YARN for exceeding memory limits. 5.5 GB of 5.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
This happens for more than just this executor too; it looks like almost all of them.
I can try to increase the executor memory overhead, but I would like to know why getting the column names results in an OOM error. I wouldn't think that something that trivial would take up that much memory.
Update
I attempted to run the job with both spark.driver.memoryOverhead=7g and spark.yarn.executor.memoryOverhead=7g, and I again got an OOM error.
BusyBox v1.22.1 (2014-11-14 10:11:32 CST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

  _______                     ________        __
 |       |.-----.-----.-----.|  |  |  |.----.|  |_
 |   -   ||  _  |  -__|     ||  |  |  ||   _||   _|
 |_______||   __|_____|__|__||________||__|  |____|
          |__| W I R E L E S S   F R E E D O M
 -----------------------------------------------------
 CHAOS CALMER (Bleeding Edge, unknown)
 -----------------------------------------------------
  * 1 1/2 oz Gin            Shake with a glassful
  * 1/4 oz Triple Sec       of broken ice and pour
  * 3/4 oz Lime Juice       unstrained into a goblet.
  * 1 1/2 oz Orange Juice
  * 1 tsp. Grenadine Syrup
 -----------------------------------------------------
root@OpenWrt:~# wget https://downloads.openwrt.org/chaos_calmer/15.05.1/ramips/mt7620/openwrt-15.05.1-ramips-mt7620-ArcherC20i-squashfs-sysupgrade.bin
wget: not an http or ftp url: https://downloads.openwrt.org/chaos_calmer/15.05.1/ramips/mt7620/openwrt-15.05.1-ramips-mt7620-ArcherC20i-squashfs-sysupgrade.bin
root@OpenWrt:~#
Replace https in your link with http:
wget http://downloads.openwrt.org/chaos_calmer/15.05.1/ramips/mt7620/openwrt-15.05.1-ramips-mt7620-ArcherC20i-squashfs-sysupgrade.bin
Your installed version of wget doesn't support TLS (libopenssl is not installed).
Also, I would change to the /tmp dir first, so that the downloaded image is stored in RAM (you probably don't have enough space in flash):
cd /tmp
wget http://...
So I have simple shell commands to ping websites and retrieve data about them.
For example, one of my scripts, pinging.sh, looks like this:
ping -R -c 120 blar.org.cn >> pingdata.txt
ping -R -c 120 another.net >> pingdata.txt
But then my crontab looks like this:
7 * * * ./pinging.sh >> pingdata.log
The pingdata.log file never gets written. Is it better to do the redirection in the crontab or in the script? I assumed the crontab would be better, because it covers the entire script rather than having to write the redirection out multiple times.
You need to give the full path of your script in the cron job, together with the binary that runs it.
For example:
7 * * * * /bin/sh /home/you/pinging.sh >> /home/you/pingdata.log
Note also that you have given only 4 time fields in the cron job, whereas you need at least 5:
+---------------- minute (0 - 59)
|  +------------- hour (0 - 23)
|  |  +---------- day of month (1 - 31)
|  |  |  +------- month (1 - 12)
|  |  |  |  +---- day of week (0 - 6) (Sunday=0 or 7)
|  |  |  |  |
*  *  *  *  *  command to be executed
You can test your cron syntax with crontab guru (http://crontab.guru/).
First, the executable must be given with its full path in cron.
Example:
7 * * * * /bin/bash /path/to/pinging.sh
Second, create a wrapper script that runs pinging.sh >> pingdata.log, and put that wrapper in the crontab instead (see the sketch after this list).
Third, your crontab entry is wrong: there must be 5 time fields, whereas yours has 4 (maybe that's a typo?).
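A minimal sketch of such a wrapper (paths and file names are placeholders, matching the example in the previous answer):

#!/bin/sh
# /home/you/run_pinging.sh - keeps the redirection out of the crontab line
/bin/sh /home/you/pinging.sh >> /home/you/pingdata.log 2>&1

and the matching crontab entry:

7 * * * * /bin/sh /home/you/run_pinging.sh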