GStreamer with gst-omx on Raspberry Pi

I compiled GStreamer with gst-omx following this tutorial: http://www.onepitwopi.com/raspberry-pi/gstreamer-1-2-on-the-raspberry-pi/
Everything went fine, and in the end, when I ran gst-inspect-1.0 | grep omx, I got:
omx: omxmpeg2videodec: OpenMAX MPEG2 Video Decoder
omx: omxmpeg4videodec: OpenMAX MPEG4 Video Decoder
omx: omxh263dec: OpenMAX H.263 Video Decoder
omx: omxh264dec: OpenMAX H.264 Video Decoder
omx: omxtheoradec: OpenMAX Theora Video Decoder
omx: omxvp8dec: OpenMAX VP8 Video Decoder
omx: omxmjpegdec: OpenMAX MJPEG Video Decoder
omx: omxvc1dec: OpenMAX WMV Video Decoder
omx: omxh264enc: OpenMAX H.264 Video Encoder
omx: omxanalogaudiosink: OpenMAX Analog Audio Sink
omx: omxhdmiaudiosink: OpenMAX HDMI Audio Sink
Everything seems fine, but when I try to use gst-launch-1.0 with the omx decoder I get nothing.
This pipeline runs fine (but really slowly, so I stopped it partway through):
pi@raspberrypi ~ $ gst-launch-1.0 filesrc location=./h264_720p_hp_5.1_6mbps_ac3_planet.mp4 ! qtdemux ! h264parse ! avdec_h264 ! eglglessink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
WARNING: from element /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2791): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0:
There may be a timestamping problem, or this computer is too slow.
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:07.915424268
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Then, when I try the same pipeline with omx, I get this:
pi@raspberrypi ~ $ gst-launch-1.0 -v filesrc location=h264_720p_hp_5.1_6mbps_ac3_planet.mp4 ! qtdemux ! h264parse ! omxh264dec ! eglglessink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:sink: caps = video/x-h264, stream-format=(string)avc, alignment=(string)au, level=(string)5.1, profile=(string)high, codec_data=(buffer)01640033ffe1001867640033ac34e2805005ba10001974f004c4b408f18318a801008468eebce5531cc305d2628d13080214868783a1c0d04e12142c0ac0da02fe10042ad35e9e850b748c778a1410088b172105449ca3050e204448b20a4d8a081827090809848541dc4290a43164215a201900cae8340f81e86f03300b6017002ac05981d61a07802a8400a902087404700bc010506e036404b811805902e07203e0087ff85b, width=(int)1280, height=(int)720, framerate=(fraction)24000/1001, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)5.1, profile=(string)high, width=(int)1280, height=(int)720, framerate=(fraction)24000/1001, pixel-aspect-ratio=(fraction)1/1, parsed=(boolean)true
/GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0.GstPad:sink: caps = video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)5.1, profile=(string)high, width=(int)1280, height=(int)720, framerate=(fraction)24000/1001, pixel-aspect-ratio=(fraction)1/1, parsed=(boolean)true
/GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0.GstPad:src: caps = video/x-raw, format=(string)I420, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, colorimetry=(string)bt709, framerate=(fraction)24000/1001
/GstPipeline:pipeline0/GstEglGlesSink:eglglessink0.GstPad:sink: caps = video/x-raw, format=(string)I420, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, colorimetry=(string)bt709, framerate=(fraction)24000/1001
ERROR: from element /GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0: Could not configure supporting library.
Additional debug info:
gstomxvideodec.c(1505): gst_omx_video_dec_loop (): /GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0:
Unable to reconfigure output port
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
/GstPipeline:pipeline0/GstEglGlesSink:eglglessink0.GstPad:sink: caps = NULL
/GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0.GstPad:src: caps = NULL
/GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0.GstPad:sink: caps = NULL
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = NULL
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:sink: caps = NULL
/GstPipeline:pipeline0/GstQTDemux:qtdemux0.GstPad:video_0: caps = NULL
/GstPipeline:pipeline0/GstQTDemux:qtdemux0.GstPad:audio_0: caps = NULL
Freeing pipeline ...
I think this is the most important part of this error:
ERROR: from element /GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0: Could not configure supporting library.
but I couldn't find any reference to this error...
I tried to run make check on gst-omx, but it doesn't have any check routine.
Can anyone shed some light on this matter?
Thanks a lot!
=D
UPDATE:
Strangely, if I started my RPi without the HDMI cable and executed my pipeline via SSH, it worked (but I didn't see any image, because the HDMI cable was unplugged):
pi@raspberrypi ~ $ gst-launch-1.0 -v filesrc location=h264_720p_hp_5.1_6mbps_ac3_planet.mp4 ! qtdemux ! h264parse ! omxh264dec ! eglglessink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:sink: caps = video/x-h264, stream-format=(string)avc, alignment=(string)au, level=(string)5.1, profile=(string)high, codec_data=(buffer)01640033ffe1001867640033ac34e2805005ba10001974f004c4b408f18318a801008468eebce5531cc305d2628d13080214868783a1c0d04e12142c0ac0da02fe10042ad35e9e850b748c778a1410088b172105449ca3050e204448b20a4d8a081827090809848541dc4290a43164215a201900cae8340f81e86f03300b6017002ac05981d61a07802a8400a902087404700bc010506e036404b811805902e07203e0087ff85b, width=(int)1280, height=(int)720, framerate=(fraction)24000/1001, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)5.1, profile=(string)high, width=(int)1280, height=(int)720, framerate=(fraction)24000/1001, pixel-aspect-ratio=(fraction)1/1, parsed=(boolean)true
/GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0.GstPad:sink: caps = video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)5.1, profile=(string)high, width=(int)1280, height=(int)720, framerate=(fraction)24000/1001, pixel-aspect-ratio=(fraction)1/1, parsed=(boolean)true
/GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0.GstPad:src: caps = video/x-raw, format=(string)I420, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, colorimetry=(string)bt709, framerate=(fraction)24000/1001
/GstPipeline:pipeline0/GstEglGlesSink:eglglessink0.GstPad:sink: caps = video/x-raw, format=(string)I420, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, colorimetry=(string)bt709, framerate=(fraction)24000/1001
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:01:52.821428472
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Does this mean that the problem is in eglglessink?

Just found the solution:
I forgot to increase the GPU memory, so decoding my 720p video didn't have enough memory to run. The easy fix is just to add
gpu_mem=128
to /boot/config.txt and reboot the Raspberry Pi. It was, after all, related to eglglessink ;D
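For anyone scripting this, the same pipeline can also be driven from Python through GStreamer's GObject introspection bindings. This is just a minimal sketch, assuming PyGObject and the GStreamer 1.0 typelibs are installed on the Pi; the file name is the one from the question:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

# Build the same hardware-decoded pipeline the question uses.
Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=h264_720p_hp_5.1_6mbps_ac3_planet.mp4 "
    "! qtdemux ! h264parse ! omxh264dec ! eglglessink"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until end-of-stream or an error, then clean up.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)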

Related

Python Apache Beam SDK: ReadFromKafka can't consume the data (error)?

Running env:
OS: Ubuntu 20.04
Kafka version: 2.12-2.0.1
apache-beam library version: apache-beam==2.32.0
Procedure:
shell 1: run the code below
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.io.external.kafka import ReadFromKafka

pipeline_options = PipelineOptions(["--runner=DirectRunner"])

def run():
    with beam.Pipeline(options=pipeline_options) as p:
        _ = (
            p
            | 'ReadData' >> ReadFromKafka(
                consumer_config={"bootstrap.servers": "localhost:9092"},
                topics=["my-first-topic"],
            )
            | 'PrintData' >> beam.Map(print)
        )

if __name__ == "__main__":
    run()
output of shell 1:
WARNING:apache_beam.runners.interactive.interactive_environment:You have limited Interactive Beam features since your ipython kernel is not connected to any notebook frontend.
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.8 interpreter.
2.32.0: Pulling from apache/beam_java11_sdk
Digest: sha256:a45f89584071950d371966abf910869c456179ab54c7b5213e3f4e2a54bd2753
Status: Image is up to date for apache/beam_java11_sdk:2.32.0
docker.io/apache/beam_java11_sdk:2.32.0
shell 2:
$ cd kafka_2.12-2.0.1/bin && ./kafka-console-producer.sh --topic "my-first-topic" --broker-list localhost:9092
>2
>3
>4
output of shell 1:
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.8 interpreter.
2.32.0: Pulling from apache/beam_java11_sdk
Digest: sha256:a45f89584071950d371966abf910869c456179ab54c7b5213e3f4e2a54bd2753
Status: Image is up to date for apache/beam_java11_sdk:2.32.0
docker.io/apache/beam_java11_sdk:2.32.0
ERROR:root:severity: ERROR
timestamp {
seconds: 1630485467
nanos: 764000000
}
message: "Client failed to deque and process the value"
trace: "org.apache.beam.sdk.util.UserCodeException: java.lang.IllegalArgumentException: Unable to encode element \'org.apache.beam.sdk.io.kafka.KafkaRecord#4c9edf30\' with coder \'KafkaRecordCoder(ByteArrayCoder,ByteArrayCoder)\'.\n\tat org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:39)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.outputTo(FnApiDoFnRunner.java:1683)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.access$2500(FnApiDoFnRunner.java:139)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner$NonWindowObservingProcessBundleContext.outputWithTimestamp(FnApiDoFnRunner.java:2205)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner$ProcessBundleContextBase.output(FnApiDoFnRunner.java:2374)\n\tat org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.output(DoFnOutputReceivers.java:78)\n\tat org.apache.beam.sdk.transforms.MapElements$1.processElement(MapElements.java:142)\n\tat org.apache.beam.sdk.transforms.MapElements$1$DoFnInvoker.invokeProcessElement(Unknown Source)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForParDo(FnApiDoFnRunner.java:750)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:266)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.outputTo(FnApiDoFnRunner.java:1680)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.access$2500(FnApiDoFnRunner.java:139)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner$WindowObservingProcessBundleContext.outputWithTimestamp(FnApiDoFnRunner.java:2092)\n\tat org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.outputWithTimestamp(DoFnOutputReceivers.java:87)\n\tat org.apache.beam.sdk.io.kafka.ReadFromKafkaDoFn.processElement(ReadFromKafkaDoFn.java:378)\n\tat org.apache.beam.sdk.io.kafka.ReadFromKafkaDoFn$DoFnInvoker.invokeProcessElement(Unknown Source)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForWindowObservingSizedElementAndRestriction(FnApiDoFnRunner.java:1048)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.access$1000(FnApiDoFnRunner.java:139)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner$4.accept(FnApiDoFnRunner.java:637)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner$4.accept(FnApiDoFnRunner.java:632)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:266)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)\n\tat org.apache.beam.fn.harness.BeamFnDataReadRunner.forwardElementToConsumer(BeamFnDataReadRunner.java:221)\n\tat org.apache.beam.sdk.fn.data.DecodingFnDataReceiver.accept(DecodingFnDataReceiver.java:43)\n\tat org.apache.beam.sdk.fn.data.DecodingFnDataReceiver.accept(DecodingFnDataReceiver.java:25)\n\tat org.apache.beam.fn.harness.data.QueueingBeamFnDataClient$ConsumerAndData.accept(QueueingBeamFnDataClient.java:316)\n\tat org.apache.beam.fn.harness.data.QueueingBeamFnDataClient.drainAndBlock(QueueingBeamFnDataClient.java:219)\n\tat org.apache.beam.fn.harness.control.ProcessBundleHandler.processBundle(ProcessBundleHandler.java:329)\n\tat org.apache.beam.fn.harness.control.BeamFnControlClient.delegateOnInstructionRequestType(BeamFnControlClient.java:140)\n\tat 
org.apache.beam.fn.harness.control.BeamFnControlClient$InboundObserver.lambda$onNext$0(BeamFnControlClient.java:110)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\nCaused by: java.lang.IllegalArgumentException: Unable to encode element \'org.apache.beam.sdk.io.kafka.KafkaRecord#4c9edf30\' with coder \'KafkaRecordCoder(ByteArrayCoder,ByteArrayCoder)\'.\n\tat org.apache.beam.sdk.coders.Coder.getEncodedElementByteSize(Coder.java:300)\n\tat org.apache.beam.sdk.coders.Coder.registerByteSizeObserver(Coder.java:291)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$SampleByteSizeDistribution.tryUpdate(PCollectionConsumerRegistry.java:385)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:259)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)\nCaused by: org.apache.beam.sdk.coders.CoderException: cannot encode a null byte[]\n\tat org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:63)\n\tat org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:56)\n\tat org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:41)\n\tat org.apache.beam.sdk.coders.KvCoder.encode(KvCoder.java:72)\n\tat org.apache.beam.sdk.coders.KvCoder.encode(KvCoder.java:63)\n\tat org.apache.beam.sdk.io.kafka.KafkaRecordCoder.encode(KafkaRecordCoder.java:70)\n\tat org.apache.beam.sdk.io.kafka.KafkaRecordCoder.encode(KafkaRecordCoder.java:40)\n\tat org.apache.beam.sdk.coders.Coder.getEncodedElementByteSize(Coder.java:297)\n\tat org.apache.beam.sdk.coders.Coder.registerByteSizeObserver(Coder.java:291)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$SampleByteSizeDistribution.tryUpdate(PCollectionConsumerRegistry.java:385)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:259)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.outputTo(FnApiDoFnRunner.java:1680)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.access$2500(FnApiDoFnRunner.java:139)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner$NonWindowObservingProcessBundleContext.outputWithTimestamp(FnApiDoFnRunner.java:2205)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner$ProcessBundleContextBase.output(FnApiDoFnRunner.java:2374)\n\tat org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.output(DoFnOutputReceivers.java:78)\n\tat org.apache.beam.sdk.transforms.MapElements$1.processElement(MapElements.java:142)\n\tat org.apache.beam.sdk.transforms.MapElements$1$DoFnInvoker.invokeProcessElement(Unknown Source)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForParDo(FnApiDoFnRunner.java:750)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:266)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.outputTo(FnApiDoFnRunner.java:1680)\n\tat 
org.apache.beam.fn.harness.FnApiDoFnRunner.access$2500(FnApiDoFnRunner.java:139)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner$WindowObservingProcessBundleContext.outputWithTimestamp(FnApiDoFnRunner.java:2092)\n\tat org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.outputWithTimestamp(DoFnOutputReceivers.java:87)\n\tat org.apache.beam.sdk.io.kafka.ReadFromKafkaDoFn.processElement(ReadFromKafkaDoFn.java:378)\n\tat org.apache.beam.sdk.io.kafka.ReadFromKafkaDoFn$DoFnInvoker.invokeProcessElement(Unknown Source)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForWindowObservingSizedElementAndRestriction(FnApiDoFnRunner.java:1048)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner.access$1000(FnApiDoFnRunner.java:139)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner$4.accept(FnApiDoFnRunner.java:637)\n\tat org.apache.beam.fn.harness.FnApiDoFnRunner$4.accept(FnApiDoFnRunner.java:632)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:266)\n\tat org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)\n\tat org.apache.beam.fn.harness.BeamFnDataReadRunner.forwardElementToConsumer(BeamFnDataReadRunner.java:221)\n\tat org.apache.beam.sdk.fn.data.DecodingFnDataReceiver.accept(DecodingFnDataReceiver.java:43)\n\tat org.apache.beam.sdk.fn.data.DecodingFnDataReceiver.accept(DecodingFnDataReceiver.java:25)\n\tat org.apache.beam.fn.harness.data.QueueingBeamFnDataClient$ConsumerAndData.accept(QueueingBeamFnDataClient.java:316)\n\tat org.apache.beam.fn.harness.data.QueueingBeamFnDataClient.drainAndBlock(QueueingBeamFnDataClient.java:219)\n\tat org.apache.beam.fn.harness.control.ProcessBundleHandler.processBundle(ProcessBundleHandler.java:329)\n\tat org.apache.beam.fn.harness.control.BeamFnControlClient.delegateOnInstructionRequestType(BeamFnControlClient.java:140)\n\tat org.apache.beam.fn.harness.control.BeamFnControlClient$InboundObserver.lambda$onNext$0(BeamFnControlClient.java:110)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\n"
instruction_id: "bundle_116"
log_location: "org.apache.beam.fn.harness.data.QueueingBeamFnDataClient"
thread: "31"
ERROR:root:severity: ERROR
timestamp {
seconds: 1630485467
nanos: 770000000
}
message: "Exception while trying to handle InstructionRequest bundle_116"
trace: "org.apache.beam.sdk.util.UserCodeException: java.lang.IllegalArgumentException: Unable to encode element \'org.apache.beam.sdk.io.kafka.KafkaRecord#4c9edf30\' with coder \'KafkaRecordCoder(ByteArrayCoder,ByteArrayCoder)\'.
...
/home/newdisk/miniconda3/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py in _run_bundle(self, runner_execution_context, bundle_context_manager, data_input, data_output, input_timers, expected_timer_output, bundle_manager)
767 expected_timer_output)
768
--> 769 result, splits = bundle_manager.process_bundle(
770 data_input, data_output, input_timers, expected_timer_output)
771 # Now we collect all the deferred inputs remaining from bundle execution.
/home/newdisk/miniconda3/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py in process_bundle(self, inputs, expected_outputs, fired_timers, expected_output_timers, dry_run)
1118
1119 if result.error:
-> 1120 raise RuntimeError(result.error)
1121
1122 if result.process_bundle.requires_finalization:
RuntimeError: org.apache.beam.sdk.util.UserCodeException: java.lang.IllegalArgumentException: Unable to encode element 'org.apache.beam.sdk.io.kafka.KafkaRecord@4c9edf30' with coder 'KafkaRecordCoder(ByteArrayCoder,ByteArrayCoder)'.
at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:39)
at org.apache.beam.fn.harness.FnApiDoFnRunner.outputTo(FnApiDoFnRunner.java:1683)
at org.apache.beam.fn.harness.FnApiDoFnRunner.access$2500(FnApiDoFnRunner.java:139)
at org.apache.beam.fn.harness.FnApiDoFnRunner$NonWindowObservingProcessBundleContext.outputWithTimestamp(FnApiDoFnRunner.java:2205)
at org.apache.beam.fn.harness.FnApiDoFnRunner$ProcessBundleContextBase.output(FnApiDoFnRunner.java:2374)
at org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.output(DoFnOutputReceivers.java:78)
at org.apache.beam.sdk.transforms.MapElements$1.processElement(MapElements.java:142)
at org.apache.beam.sdk.transforms.MapElements$1$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForParDo(FnApiDoFnRunner.java:750)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:266)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)
at org.apache.beam.fn.harness.FnApiDoFnRunner.outputTo(FnApiDoFnRunner.java:1680)
at org.apache.beam.fn.harness.FnApiDoFnRunner.access$2500(FnApiDoFnRunner.java:139)
at org.apache.beam.fn.harness.FnApiDoFnRunner$WindowObservingProcessBundleContext.outputWithTimestamp(FnApiDoFnRunner.java:2092)
at org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.outputWithTimestamp(DoFnOutputReceivers.java:87)
at org.apache.beam.sdk.io.kafka.ReadFromKafkaDoFn.processElement(ReadFromKafkaDoFn.java:378)
at org.apache.beam.sdk.io.kafka.ReadFromKafkaDoFn$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForWindowObservingSizedElementAndRestriction(FnApiDoFnRunner.java:1048)
at org.apache.beam.fn.harness.FnApiDoFnRunner.access$1000(FnApiDoFnRunner.java:139)
at org.apache.beam.fn.harness.FnApiDoFnRunner$4.accept(FnApiDoFnRunner.java:637)
at org.apache.beam.fn.harness.FnApiDoFnRunner$4.accept(FnApiDoFnRunner.java:632)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:266)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)
at org.apache.beam.fn.harness.BeamFnDataReadRunner.forwardElementToConsumer(BeamFnDataReadRunner.java:221)
at org.apache.beam.sdk.fn.data.DecodingFnDataReceiver.accept(DecodingFnDataReceiver.java:43)
at org.apache.beam.sdk.fn.data.DecodingFnDataReceiver.accept(DecodingFnDataReceiver.java:25)
at org.apache.beam.fn.harness.data.QueueingBeamFnDataClient$ConsumerAndData.accept(QueueingBeamFnDataClient.java:316)
at org.apache.beam.fn.harness.data.QueueingBeamFnDataClient.drainAndBlock(QueueingBeamFnDataClient.java:219)
at org.apache.beam.fn.harness.control.ProcessBundleHandler.processBundle(ProcessBundleHandler.java:329)
at org.apache.beam.fn.harness.control.BeamFnControlClient.delegateOnInstructionRequestType(BeamFnControlClient.java:140)
at org.apache.beam.fn.harness.control.BeamFnControlClient$InboundObserver.lambda$onNext$0(BeamFnControlClient.java:110)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.IllegalArgumentException: Unable to encode element 'org.apache.beam.sdk.io.kafka.KafkaRecord@4c9edf30' with coder 'KafkaRecordCoder(ByteArrayCoder,ByteArrayCoder)'.
at org.apache.beam.sdk.coders.Coder.getEncodedElementByteSize(Coder.java:300)
at org.apache.beam.sdk.coders.Coder.registerByteSizeObserver(Coder.java:291)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$SampleByteSizeDistribution.tryUpdate(PCollectionConsumerRegistry.java:385)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:259)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)
Caused by: org.apache.beam.sdk.coders.CoderException: cannot encode a null byte[]
at org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:63)
at org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:56)
at org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:41)
at org.apache.beam.sdk.coders.KvCoder.encode(KvCoder.java:72)
at org.apache.beam.sdk.coders.KvCoder.encode(KvCoder.java:63)
at org.apache.beam.sdk.io.kafka.KafkaRecordCoder.encode(KafkaRecordCoder.java:70)
at org.apache.beam.sdk.io.kafka.KafkaRecordCoder.encode(KafkaRecordCoder.java:40)
at org.apache.beam.sdk.coders.Coder.getEncodedElementByteSize(Coder.java:297)
at org.apache.beam.sdk.coders.Coder.registerByteSizeObserver(Coder.java:291)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$SampleByteSizeDistribution.tryUpdate(PCollectionConsumerRegistry.java:385)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:259)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)
at org.apache.beam.fn.harness.FnApiDoFnRunner.outputTo(FnApiDoFnRunner.java:1680)
at org.apache.beam.fn.harness.FnApiDoFnRunner.access$2500(FnApiDoFnRunner.java:139)
at org.apache.beam.fn.harness.FnApiDoFnRunner$NonWindowObservingProcessBundleContext.outputWithTimestamp(FnApiDoFnRunner.java:2205)
at org.apache.beam.fn.harness.FnApiDoFnRunner$ProcessBundleContextBase.output(FnApiDoFnRunner.java:2374)
at org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.output(DoFnOutputReceivers.java:78)
at org.apache.beam.sdk.transforms.MapElements$1.processElement(MapElements.java:142)
at org.apache.beam.sdk.transforms.MapElements$1$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForParDo(FnApiDoFnRunner.java:750)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:266)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)
at org.apache.beam.fn.harness.FnApiDoFnRunner.outputTo(FnApiDoFnRunner.java:1680)
at org.apache.beam.fn.harness.FnApiDoFnRunner.access$2500(FnApiDoFnRunner.java:139)
at org.apache.beam.fn.harness.FnApiDoFnRunner$WindowObservingProcessBundleContext.outputWithTimestamp(FnApiDoFnRunner.java:2092)
at org.apache.beam.sdk.transforms.DoFnOutputReceivers$WindowedContextOutputReceiver.outputWithTimestamp(DoFnOutputReceivers.java:87)
at org.apache.beam.sdk.io.kafka.ReadFromKafkaDoFn.processElement(ReadFromKafkaDoFn.java:378)
at org.apache.beam.sdk.io.kafka.ReadFromKafkaDoFn$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.fn.harness.FnApiDoFnRunner.processElementForWindowObservingSizedElementAndRestriction(FnApiDoFnRunner.java:1048)
at org.apache.beam.fn.harness.FnApiDoFnRunner.access$1000(FnApiDoFnRunner.java:139)
at org.apache.beam.fn.harness.FnApiDoFnRunner$4.accept(FnApiDoFnRunner.java:637)
at org.apache.beam.fn.harness.FnApiDoFnRunner$4.accept(FnApiDoFnRunner.java:632)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:266)
at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:218)
at org.apache.beam.fn.harness.BeamFnDataReadRunner.forwardElementToConsumer(BeamFnDataReadRunner.java:221)
at org.apache.beam.sdk.fn.data.DecodingFnDataReceiver.accept(DecodingFnDataReceiver.java:43)
at org.apache.beam.sdk.fn.data.DecodingFnDataReceiver.accept(DecodingFnDataReceiver.java:25)
at org.apache.beam.fn.harness.data.QueueingBeamFnDataClient$ConsumerAndData.accept(QueueingBeamFnDataClient.java:316)
at org.apache.beam.fn.harness.data.QueueingBeamFnDataClient.drainAndBlock(QueueingBeamFnDataClient.java:219)
at org.apache.beam.fn.harness.control.ProcessBundleHandler.processBundle(ProcessBundleHandler.java:329)
at org.apache.beam.fn.harness.control.BeamFnControlClient.delegateOnInstructionRequestType(BeamFnControlClient.java:140)
at org.apache.beam.fn.harness.control.BeamFnControlClient$InboundObserver.lambda$onNext$0(BeamFnControlClient.java:110)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
It looks like this is related to the key_deserializer and value_deserializer args of ReadFromKafka, so I tried to change them:
# key_deserializer="org.apache.kafka.common.serialization.StringSerializer",
# value_deserializer="org.apache.kafka.common.serialization.StringSerializer",
But that raised another error:
RuntimeError: java.lang.RuntimeException: Failed to build transform beam:external:java:kafkaio:typedwithoutmetadata:v1 from spec urn: "beam:external:java:kafkaio:typedwithoutmetadata:v1"
payload: "\n\213\002\n\035\n\017consumer_config\032\n*\b\n\002\020\a\022\002\020\a\n\020\n\006topics\032\006\032\004\n\002\020\a\n\026\n\020key_deserializer\032\002\020\a\n\030\n\022value_deserializer\032\002\020\a\n\027\n\017start_read_time\032\004\b\001\020\004\n\027\n\017max_num_records\032\004\b\001\020\004\n\025\n\rmax_read_time\032\004\b\001\020\004\n\037\n\031commit_offset_in_finalize\032\002\020\b\n\026\n\020timestamp_policy\032\002\020\a\022$6a700b0b-2839-492d-8629-9b3268d90919\022\272\001\t\002p\000\000\000\000\001\021bootstrap.servers\016localhost:9092\000\000\000\001\016my-first-topic6org.apache.kafka.common.serialization.StringSerializer6org.apache.kafka.common.serialization.StringSerializer\000\016ProcessingTime"
What's wrong with them? Is there anything I'm missing?
I had this issue as well. It turned out the smoking gun was this line in the traceback:
Caused by: org.apache.beam.sdk.coders.CoderException: cannot encode a null byte[]
What it's saying is that there's a null value somewhere and it can't encode it. For me, it turned out I was sending key=None in my kafka-python producer.send call, and changing the key to a string fixed the issue. For example:
with open("../data/sample_data.json") as fp:
for line in fp.readlines():
producer.send(KAFKA_TOPIC, key=str.encode("foo"), value=str.encode(line))
You can also set the key_serializer and value_serializer in the KafkaProducer object.
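For example, a minimal sketch of that variant, assuming the kafka-python client and the broker/topic from the question:
from kafka import KafkaProducer

# Let the producer encode keys and values itself, so neither is a null byte[].
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=str.encode,
    value_serializer=str.encode,
)
producer.send("my-first-topic", key="foo", value="bar")
producer.flush()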
A heads up: even after fixing the null key, there are still problems with reading from Kafka in Python outside of the Dataflow runner. See: https://issues.apache.org/jira/browse/BEAM-11998
I had the same problem. As I figured out, ReadFromKafka wants to read the data in a key:value format. My solution was to start kafka-console-producer.sh with the options:
--property "parse.key=true" --property "key.separator=:"
and send the data in the format key:value (e.g. name:peter instead of peter).

Can I avoid this error? E/InputMethodManager: Failed to get fallback IMM with expected .flutter.plugins.webviewflutter.InputAwareWebView

I'm using the flutter_tex library:
flutter_tex: ^3.6.7+10
I want to release the app on the Play Store. My app is running fine, but I'm getting this message in the debug console. Can I avoid it?
TeXView(
  renderingEngine: const TeXViewRenderingEngine.katex(),
  child: TeXViewDocument('\$\$$text\$\$'),
),
Getting this error:
E/InputMethodManager(28630): b/117267690: Failed to get fallback IMM with expected displayId=137
actual IMM#displayId=0 view=io.flutter.plugins.webviewflutter.InputAwareWebView{86b4162 VFEDHVC..
........ 0,0-864,188}
D/EGL_emulation(28630): eglMakeCurrent: 0x963e0360: ver 2 0 (tinfo 0xebe0dce0)
E/InputMethodManager(28630): b/117267690: Failed to get fallback IMM with expected displayId=138
actual IMM#displayId=0 view=io.flutter.plugins.webviewflutter.InputAwareWebView{47b28d1 VFEDHVC..
........ 0,0-864,188}
D/HostConnection(28630): HostConnection::get() New Host Connection established 0x6bd973d0, tid 28933
D/HostConnection(28630): HostComposition ext ANDROID_EMU_CHECKSUM_HELPER_v1 ANDROID_EMU_dma_v1
ANDROID_EMU_direct_mem ANDROID_EMU_host_composition_v1 ANDROID_EMU_host_composition_v2
ANDROID_EMU_vulkan ANDROID_EMU_deferred_vulkan_commands ANDROID_EMU_vulkan_null_optional_strings
ANDROID_EMU_vulkan_create_resources_with_requirements ANDROID_EMU_YUV_Cache
ANDROID_EMU_async_unmap_buffer ANDROID_EMU_vulkan_ignored_handles ANDROID_EMU_vulkan_free_memory_sync
ANDROID_EMU_vulkan_shader_float16_int8 ANDROID_EMU_vulkan_async_queue_submit
GL_OES_vertex_array_object GL_KHR_texture_compression_astc_ldr ANDROID_EMU_host_side_tracing
ANDROID_EMU_gles_max_version_2
E/InputMethodManager(28630): b/117267690: Failed to get fallback IMM with expected displayId=139
actual IMM#displayId=0 view=io.flutter.plugins.webviewflutter.InputAwareWebView{d55bd6b VFEDHVC..
........ 0,0-864,188}
E/InputMethodManager(28630): b/117267690: Failed to get fallback IMM with expected displayId=140
actual IMM#displayId=0 view=io.flutter.plugins.webviewflutter.InputAwareWebView{24c90e0 VFEDHVC..
........ 0,0-864,188}
Is it safe to not care about this?
We have to enable hybrid composition: set WebView.platform = SurfaceAndroidWebView();
You can look at this GitHub comment: https://github.com/flutter/flutter/issues/40716#issuecomment-708076795. It helped me solve this error.
This problem is caused by tapping on this widget, and it will not be fixed by resetting or deleting the build files.
All you have to do is handle the tap on this widget, or make it read-only; that will solve the problem.
Have a bug-free day!
Had the same problem. A clean rebuild of the project fixed it.

pyspark foreach/foreachPartition send http request failed

I use urllib.request to send an HTTP request in foreach/foreachPartition. PySpark throws the following error:
objc[74094]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.
20/07/20 19:05:58 ERROR Executor: Exception in task 7.0 in stage 0.0 (TID 7)
org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:536)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:525)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:643)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:621)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:456)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1004)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
_
This happens when I call rdd.foreach(send_http), with rdd = sc.parallelize(["http://192.168.1.1:5000/index.html"]) and send_http defined as follows:
import urllib.request

def send_http(url):
    req = urllib.request.Request(url)
    resp = urllib.request.urlopen(req)

Can anyone tell me the problem? Thanks.
This is @tmylt's answer in the comment above, but I too can confirm that using http.client instead of requests.get does work. I'm sure there's a reason why this happens, but using Python's http.client is a quick fix.
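For reference, here is a minimal sketch of that workaround, assuming the same sc/RDD setup as in the question:
import http.client
from urllib.parse import urlparse

def send_http(url):
    # http.client sidesteps the macOS fork-safety crash hit by urllib/requests here.
    parsed = urlparse(url)
    conn = http.client.HTTPConnection(parsed.hostname, parsed.port or 80)
    conn.request("GET", parsed.path or "/")
    conn.getresponse().read()
    conn.close()

rdd = sc.parallelize(["http://192.168.1.1:5000/index.html"])
rdd.foreach(send_http)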

GitLab CI - How to verify that Unity3D Tests actually passed

Here is my gitlab-ci.yml:
variables:
  GIT_STRATEGY: fetch
  GIT_CLONE_PATH: $CI_BUILDS_DIR
  GIT_DEPTH: 10
  GIT_CLEAN_FLAGS: none

stages:
  - test
  - build

unit-test:
  script: "C:\\'Program Files'\\Unity\\Hub\\Editor\\'2019.4.3f1'\\Editor\\Unity.exe \
    -runTests \
    -batchmode \
    -projectPath . \
    -logFile ./log.txt \
    -testResults ./unit-tests.xml"
  stage: test
  tags:
    - unity

unity-build:
  stage: build
  script: echo 'Building...'
  tags:
    - unity
Here is the output of the pipeline:
Running with gitlab-runner 13.1.1 (6fbc7474)
on Desktop ryBW4ftU
Preparing the "shell" executor
00:00
Using Shell executor...
Preparing environment
00:01
Running on DESKTOP-G62KPLQ...
Getting source from Git repository
00:06
Fetching changes with git depth set to 10...
Reinitialized existing Git repository in X:/ScarsOFHonor_CI/.git/
Checking out 8ca0d362 as master...
Encountered 2 file(s) that should have been pointers, but weren't:
ProjectSettings/EditorSettings.asset
ProjectSettings/XRSettings.asset
git-lfs/2.4.2 (GitHub; windows amd64; go 1.8.3; git 6f4b2e98)
Skipping Git submodules setup
Executing "step_script" stage of the job script
01:18
$ C:\'Program Files'\Unity\Hub\Editor\'2019.4.3f1'\Editor\Unity.exe -runTests -batchmode -projectPath . -logFile ./log.txt -testResults ./unit-tests.xml
. is not a valid directory name. Please make sure there are no unallowed characters in the name.
(Filename: C:\buildslave\unity\build\Runtime/Utilities/FileVFS.cpp Line: 218)
. is not a valid directory name. Please make sure there are no unallowed characters in the name.
(Filename: C:\buildslave\unity\build\Runtime/Utilities/FileVFS.cpp Line: 218)
Job succeeded
As you can see, the pipeline passed successfully. However, that should not be the case: I intentionally created and pushed a failing test.
See the content of unit-tests.xml:
<?xml version="1.0" encoding="utf-8"?>
<test-run id="2" testcasecount="1" result="Failed(Child)" total="1" passed="0" failed="1" inconclusive="0" skipped="0" asserts="0" engine-version="3.5.0.0" clr-version="4.0.30319.42000" start-time="2020-07-17 11:40:32Z" end-time="2020-07-17 11:40:32Z" duration="0.105692">
<test-suite type="TestSuite" id="1007" name="ScarsOFHonor" fullname="ScarsOFHonor" runstate="Runnable" testcasecount="1" result="Failed" site="Child" start-time="2020-07-17 11:40:32Z" end-time="2020-07-17 11:40:32Z" duration="0.105692" total="1" passed="0" failed="1" inconclusive="0" skipped="0" asserts="0">
<properties />
<failure>
<message><![CDATA[One or more child tests had errors]]></message>
</failure>
<test-suite type="Assembly" id="1010" name="EditMode.dll" fullname="X:/ScarsOFHonor_CI/Library/ScriptAssemblies/EditMode.dll" runstate="Runnable" testcasecount="1" result="Failed" site="Child" start-time="2020-07-17 11:40:32Z" end-time="2020-07-17 11:40:32Z" duration="0.079181" total="1" passed="0" failed="1" inconclusive="0" skipped="0" asserts="0">
<properties>
<property name="_PID" value="27144" />
<property name="_APPDOMAIN" value="Unity Child Domain" />
<property name="platform" value="EditMode" />
</properties>
<failure>
<message><![CDATA[One or more child tests had errors]]></message>
</failure>
<test-suite type="TestSuite" id="1011" name="Tests" fullname="Tests" runstate="Runnable" testcasecount="1" result="Failed" site="Child" start-time="2020-07-17 11:40:32Z" end-time="2020-07-17 11:40:32Z" duration="0.076239" total="1" passed="0" failed="1" inconclusive="0" skipped="0" asserts="0">
<properties />
<failure>
<message><![CDATA[One or more child tests had errors]]></message>
</failure>
<test-suite type="TestFixture" id="1008" name="DummyTest" fullname="Tests.DummyTest" classname="Tests.DummyTest" runstate="Runnable" testcasecount="1" result="Failed" site="Child" start-time="2020-07-17 11:40:32Z" end-time="2020-07-17 11:40:32Z" duration="0.067615" total="1" passed="0" failed="1" inconclusive="0" skipped="0" asserts="0">
<properties />
<failure>
<message><![CDATA[One or more child tests had errors]]></message>
</failure>
<test-case id="1009" name="DummyTestSimplePasses" fullname="Tests.DummyTest.DummyTestSimplePasses" methodname="DummyTestSimplePasses" classname="Tests.DummyTest" runstate="Runnable" seed="785721540" result="Failed" start-time="2020-07-17 11:40:32Z" end-time="2020-07-17 11:40:32Z" duration="0.032807" asserts="0">
<properties />
<failure>
<message><![CDATA[ Expected: 1
But was: 2
]]></message>
<stack-trace><![CDATA[at Tests.DummyTest.DummyTestSimplePasses () [0x00001] in X:\ScarsOFHonor_CI\Assets\Tests\EditMode\DummyTest.cs:15
]]></stack-trace>
</failure>
</test-case>
</test-suite>
</test-suite>
</test-suite>
</test-suite>
</test-run>
As you can see, there is a failing test. Why, then, does my pipeline pass?
Is there any way to print the results in the pipeline itself and mark the pipeline as failed when there actually is a failing test?
Hm, a bit (actually extremely :D) dirty, but you could simply check whether the result file contains the word <failure>, using find, e.g.:
find "<failure>" ./unit-tests.xml > nul && exit 1 || exit 0
(> nul suppresses printing to the console)
So if the word <failure> was found, the job exits with failure (1); otherwise with success (0).
Not a YAML expert, but I think you could e.g. add it to:
script: "C:\\'Program Files'\\Unity\\Hub\\Editor\\'2019.4.3f1'\\Editor\\Unity.exe -runTests -batchmode -projectPath . -logFile ./log.txt -testResults ./unit-tests.xml && find '<failure>' ./unit-tests.xml > nul && exit 1 || exit 0"

Importing TensorFlow into Scala

I have installed TensorFlow for Scala on macOS, and although everything seemed fine, I get a NoClassDefFoundError when trying to run a simple example like this:
import org.platanios.tensorflow.api.Tensor
val tensor = Tensor( 1.2, 4.5)
which gives:
java.lang.NoClassDefFoundError: Could not initialize class org.platanios.tensorflow.api.package$
at #worksheet#.tensor$lzycompute(testone.sc:3)
at #worksheet#.tensor(testone.sc:3)
at #worksheet#.get$$instance$$tensor(testone.sc:3)
at A$A16$.main(testone.sc:17)
at A$A16.main(testone.sc)
at #worksheet#.#worksheet#(testone.sc)
I get a similar error in both a Jupyter notebook and an IntelliJ worksheet. My build.sbt:
scalaVersion := "2.12.4"
resolvers += Resolver.sonatypeRepo("snapshots")
libraryDependencies += "org.platanios" %% "tensorflow" % "0.1.2-SNAPSHOT"
libraryDependencies += "org.platanios" %% "tensorflow" % "0.1.2-SNAPSHOT" classifier "darwin-cpu-x86_64"
The problem could be due to a missing native dependency of the libtensorflow_jni.so bundled in the tensorflow_scala jar.
To find the missing library, run:
sbt console
then, from the Scala shell, import the TensorFlow API:
scala> import org.platanios.tensorflow.api._
scala> val tensor = Tensor.zeros(INT32, Shape(2, 5))
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/angelo/.ivy2/cache/org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/angelo/.ivy2/cache/ch.qos.logback/logback-classic/jars/logback-classic-1.2.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (TensorFlow Native).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
java.lang.UnsatisfiedLinkError: /tmp/tensorflow_scala_native_libraries3327494822622243889/libtensorflow_jni.so: libcusolver.so.9.0: cannot open shared object file: No such file or directory
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.platanios.tensorflow.jni.TensorFlow$$anonfun$load$3.apply(TensorFlow.scala:95)
at org.platanios.tensorflow.jni.TensorFlow$$anonfun$load$3.apply(TensorFlow.scala:93)
at scala.Option.foreach(Option.scala:257)
at org.platanios.tensorflow.jni.TensorFlow$.load(TensorFlow.scala:93)
at org.platanios.tensorflow.jni.TensorFlow$.<init>(TensorFlow.scala:155)
at org.platanios.tensorflow.jni.TensorFlow$.<clinit>(TensorFlow.scala)
at org.platanios.tensorflow.jni.Tensor$.<init>(Tensor.scala:24)
at org.platanios.tensorflow.jni.Tensor$.<clinit>(Tensor.scala)
at org.platanios.tensorflow.api.tensors.Context$.apply(Context.scala:50)
at org.platanios.tensorflow.api.package$.<init>(package.scala:89)
at org.platanios.tensorflow.api.package$.<clinit>(package.scala)
... 40 elided
You can check all the missing libraries using ldd in another terminal (ldd on Linux, or otool -L on macOS):
ldd /tmp/tensorflow_scala_native_libraries3327494822622243889/libtensorflow_jni.so
/tmp/tensorflow_scala_native_libraries3327494822622243889/libtensorflow_jni.so: /usr/lib/libcublas.so.9.0: version `libcublas.so.9.0' not found (required by /tmp/tensorflow_scala_native_libraries3327494822622243889/libtensorflow.so)
/tmp/tensorflow_scala_native_libraries3327494822622243889/libtensorflow_jni.so: /usr/lib/libcublas.so.9.0: version `libcublas.so.9.0' not found (required by /tmp/tensorflow_scala_native_libraries3327494822622243889/libtensorflow_framework.so)
linux-vdso.so.1 => (0x00007ffc00d85000)
libdlfaker.so => /usr/lib/x86_64-linux-gnu/libdlfaker.so (0x00007f0cd05d8000)
librrfaker.so => /usr/lib/x86_64-linux-gnu/librrfaker.so (0x00007f0cd033d000)
libtensorflow.so => /tmp/tensorflow_scala_native_libraries3327494822622243889/libtensorflow.so (0x00007f0cc8023000)
libtensorflow_framework.so => /tmp/tensorflow_scala_native_libraries3327494822622243889/libtensorflow_framework.so (0x00007f0cc714b000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f0cc6dc9000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f0cc6bb3000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0cc67e9000)
libGL.so.1 => /usr/lib/nvidia-390/libGL.so.1 (0x00007f0cc64ac000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f0cc62a8000)
libturbojpeg.so.0 => /usr/lib/x86_64-linux-gnu/libturbojpeg.so.0 (0x00007f0cc6047000)
libXv.so.1 => /usr/lib/x86_64-linux-gnu/libXv.so.1 (0x00007f0cc5e42000)
libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f0cc5b08000)
libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f0cc58f6000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f0cc56d9000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f0cc53d0000)
/lib64/ld-linux-x86-64.so.2 (0x00007f0cd0a66000)
libcublas.so.9.0 => /usr/lib/libcublas.so.9.0 (0x00007f0cc1caf000)
libcusolver.so.9.0 => not found
libcudart.so.9.0 => not found
libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007f0cc1a8d000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f0cc1885000)
libcuda.so.1 => /usr/lib/x86_64-linux-gnu/libcuda.so.1 (0x00007f0cc0ce5000)
libcudnn.so.7 => /usr/local/cuda-9.1/targets/x86_64-linux/lib/libcudnn.so.7 (0x00007f0cafd54000)
libcufft.so.9.0 => not found
libcurand.so.9.0 => not found
libcudart.so.9.0 => not found
libnvidia-tls.so.390.30 => /usr/lib/nvidia-390/tls/libnvidia-tls.so.390.30 (0x00007f0cafb50000)
libnvidia-glcore.so.390.30 => /usr/lib/nvidia-390/libnvidia-glcore.so.390.30 (0x00007f0cadd50000)
libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f0cadb2e000)
libnvidia-fatbinaryloader.so.390.30 => /usr/lib/nvidia-390/libnvidia-fatbinaryloader.so.390.30 (0x00007f0cad8e2000)
libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f0cad6de000)
libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f0cad4d8000)
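(As an aside, not from the original answer: you can also probe a single library through the same dynamic loader from Python, since ctypes asks dlopen directly.)
import ctypes

# Raises OSError if the loader cannot resolve the library,
# mirroring the "not found" entries in the ldd output above.
ctypes.CDLL("libcusolver.so.9.0")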
On my computer, the runtime linking process cannot resolve libcusolver.so.9.0, since I have CUDA 9.1 installed.
To make it work, I had to compile TensorFlow as follows:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure
bazel build --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" --config=opt //tensorflow:libtensorflow.so
Copy the libraries into a path which is in LD_LIBRARY_PATH:
sudo cp bazel-bin/tensorflow/libtensorflow.so /usr/local/lib
sudo cp bazel-bin/tensorflow/libtensorflow_framework.so /usr/local/lib
Finally, I was able to compile the tensorflow_scala project:
sbt compile
Now, from the tensorflow_scala project, I can run sbt console and it works:
scala> import org.platanios.tensorflow.api._
import org.platanios.tensorflow.api._
scala> val tensor = Tensor.zeros(INT32, Shape(2, 5))
2018-02-16 17:08:26.184 [run-main-0] INFO TensorFlow Native - Extracting the 'tensorflow_jni' native library to /tmp/tensorflow_scala_native_libraries8283851378265055495/libtensorflow_jni.so.
2018-02-16 17:08:26.188 [run-main-0] INFO TensorFlow Native - Copied 645872 bytes to /tmp/tensorflow_scala_native_libraries8283851378265055495/libtensorflow_jni.so.
2018-02-16 17:08:26.254 [run-main-0] INFO TensorFlow Native - Extracting the 'tensorflow_ops' native library to /tmp/tensorflow_scala_native_libraries8283851378265055495/libtensorflow_ops.so.
2018-02-16 17:08:26.254 [run-main-0] INFO TensorFlow Native - Copied 78232 bytes to /tmp/tensorflow_scala_native_libraries8283851378265055495/libtensorflow_ops.so.
2018-02-16 17:08:26.449239: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-02-16 17:08:26.449483: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1331] Found device 0 with properties:
name: GeForce GTX 1050 major: 6 minor: 1 memoryClockRate(GHz): 1.493
pciBusID: 0000:01:00.0
totalMemory: 3,95GiB freeMemory: 1,53GiB
2018-02-16 17:08:26.449498: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1410] Adding visible gpu devices: 0
2018-02-16 17:08:26.670568: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-02-16 17:08:26.670601: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-02-16 17:08:26.670610: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-02-16 17:08:26.670698: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1021] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1287 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-02-16 17:08:26.698355: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1410] Adding visible gpu devices: 0
2018-02-16 17:08:26.698399: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-02-16 17:08:26.698408: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-02-16 17:08:26.698414: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-02-16 17:08:26.698518: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1021] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 224 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
tensor: org.platanios.tensorflow.api.tensors.Tensor = INT32[2, 5]
To use the locally built tensorflow_scala:
sbt
sbt:TensorFlow for Scala> + publishLocal
The jar will be placed in ~/.ivy2/local/org.platanios; then you can add the jar to your sbt project, e.g.:
libraryDependencies ++= { Seq("org.platanios" %% "tensorflow" % "0.1.2-SNAPSHOT") }