GStreamer streaming stopped, reason error (-5) - encoding

Every time I run my pipeline, it runs for a while and then eventually crashes with "streaming stopped, reason error (-5)". I tried to debug the error, but got nothing specific that might be helpful.
My GStreamer pipeline:
'gst-launch-1.0',
'--gst-debug=*src*:5',
'-vvv',
'rtpbin', 'name=rtpbin', 'rtp-profile=avpf',
'fdsrc', 'timeout=1000000',
'!', 'queue',
'!', 'image/x-portable-pixmap,width=1280,height=720,framerate=30/1',
'!', 'pnmdec',
'!', 'videoconvert',
'!', 'x264enc', 'tune=zerolatency', 'speed-preset=1', 'dct8x8=true', 'quantizer=23', 'pass=qual',
'!', 'video/x-h264, profile=baseline',
'!', 'rtph264pay', 'ssrc=111110', 'pt=96',
'!', 'rtprtxqueue', 'max-size-time=2000', 'max-size-packets=0',
'!', 'rtpbin.send_rtp_sink_0',
'rtpbin.send_rtp_src_0',
'!', 'udpsink', 'bind-address=127.0.0.1', 'host=127.0.0.1', 'bind-port=5004', `port=${videoRtpPort}`,
'rtpbin.send_rtcp_src_0',
'!', 'udpsink', 'bind-address=127.0.0.1', 'host=127.0.0.1', `port=${videoRtcpPort}`, 'sync=false', 'async=false', 'udpsrc', 'port=5005',
'!', 'rtpbin.recv_rtcp_sink_0',
The error:
0:00:00.383847433 8025 0x55cb237fe940 DEBUG basesrc gstbasesrc.c:2460:gst_base_src_get_range:<fdsrc0> calling create offset 5821984 length 4096, time 0
0:00:00.401808796 8025 0x55cb237fe940 DEBUG basesrc gstbasesrc.c:2315:gst_base_src_do_sync:<fdsrc0> no sync needed
0:00:00.401819755 8025 0x55cb237fe940 DEBUG basesrc gstbasesrc.c:2524:gst_base_src_get_range:<fdsrc0> buffer ok
0:00:00.401824818 8025 0x55cb237fe940 INFO basesrc gstbasesrc.c:2860:gst_base_src_loop:<fdsrc0> pausing after gst_pad_push() = error
0:00:00.401828989 8025 0x55cb237fe940 DEBUG basesrc gstbasesrc.c:2903:gst_base_src_loop:<fdsrc0> pausing task, reason error
0:00:00.401834372 8025 0x55cb237fe940 WARN basesrc gstbasesrc.c:2950:gst_base_src_loop:<fdsrc0> error: Internal data stream error.
0:00:00.401837214 8025 0x55cb237fe940 WARN basesrc gstbasesrc.c:2950:gst_base_src_loop:<fdsrc0> error: streaming stopped, reason error (-5)
Interesting fact:
If I enable debug mode, it stops working right away and crashes with the error.
Update
I set debug output for the queue element and got the following:
0:00:00.947557704 4268 0x5607b98d5230 DEBUG queue_dataflow gstqueue.c:1219:gst_queue_chain_buffer_or_list:<queue0> queue is full, waiting for free space
0:00:00.967780458 4268 0x5607b98d5230 WARN queue gstqueue.c:989:gst_queue_handle_sink_event:<queue0> error: Internal data stream error.
0:00:00.967813978 4268 0x5607b98d5230 WARN queue gstqueue.c:989:gst_queue_handle_sink_event:<queue0> error: streaming stopped, reason error (-5)
ERROR: from element /GstPipeline:pipeline0/GstFdSrc:fdsrc0: Internal data stream error.
Additional debug info:
gstbasesrc.c(2950): gst_base_src_loop (): /GstPipeline:pipeline0/GstFdSrc:fdsrc0:
streaming stopped, reason error (-5)
It seems like the queue is full, so how can I solve the issue?
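(Note: error (-5) is GST_FLOW_ERROR, so the queue and fdsrc are most likely just reporting a failure that happened further downstream rather than being the cause themselves. To rule out simple back-pressure, one untested tweak is to relax the queue's limits so it never blocks upstream, in the same argument-array style as the pipeline above:
'!', 'queue', 'max-size-buffers=0', 'max-size-bytes=0', 'max-size-time=0',
This only helps if a downstream element is merely too slow; it does nothing for a genuine decode or encode error.)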
Update 2
It seems that the problem is caused by the pnmdec plugin. The following pipeline throws the same error:
'gst-launch-1.0',
'fdsrc', 'fd=0',
'!', 'queue',
'!', 'image/x-portable-pixmap,width=1280,height=720,framerate=20/1',
'!', 'pnmdec',
'!', 'fakesink', 'dump=true',
But the same pipeline without pnmdec works fine. Any thoughts on the issue?
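(One way to take fdsrc and the external PPM producer out of the equation is a round-trip test against a synthetic source. This is an untested sketch that assumes pnmenc/pnmdec from gst-plugins-bad are installed and that the caps are representative:
gst-launch-1.0 -v videotestsrc num-buffers=100 ! video/x-raw,width=1280,height=720,framerate=20/1 ! videoconvert ! pnmenc ! pnmdec ! fakesink
If this runs cleanly while the fd-based pipeline still fails, the PPM data being written into fdsrc (header, frame size, or truncated frames) becomes the prime suspect rather than pnmdec itself.)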
One more interesting thing:
I've noticed that the pipeline crashes after the following event:
GST_SCHEDULING gstpad.c:4329:gst_pad_chain_data_unchecked:<pnmdec0:sink> called chainfunction &gst_video_decoder_chain with buffer 0x7f97c8310d80, returned error
I tried to google it but found nothing.

Related

Task Control Option - Custom Condition - run task when previous failed or timed out

Is there an option to set the custom condition that will test if the previous task has failed OR timed out?
Currently, I'm using the Only when a previous task has failed which works when the task fails. If the task times out, then it is not considered an error and it is skipped.
I need a custom condition then, something like or(failed(), timedout()). Is it possible?
Context
We have an intermittent problem with the npm install task that we can't find a reason for, but it resolves itself on the next job run, so we were looking for retry functionality. A partial solution was to duplicate the npm install task and use the Control Option, but it wasn't working for all "failure" cases. The solution given by @Levi Lu-MSFT seems to cover all our needs (it does retry), but sadly it doesn't solve the problem: the second, repeated task also fails.
Sample errors:
20741 error stack: 'Error: EPERM: operation not permitted, unlink \'C:\\agent2\\_work\\4\\s\\node_modules\\.staging\\typescript-4440ace9\\lib\\tsc.js\'',
20741 error errno: -4048,
20741 error code: 'EPERM',
20741 error syscall: 'unlink',
20741 error path: 'C:\\agent2\\_work\\4\\s\\node_modules\\.staging\\typescript-4440ace9\\lib\\tsc.js',
20741 error parent: 's' }
20742 error The operation was rejected by your operating system.
20742 error It's possible that the file was already in use (by a text editor or antivirus),
20742 error or that you lack permissions to access it.
or
21518 verbose stack SyntaxError: Unexpected end of JSON input while parsing near '...ter/doc/TypeScript%20'
21518 verbose stack at JSON.parse (<anonymous>)
21518 verbose stack at parseJson (C:\agent2\_work\_tool\node\8.17.0\x64\node_modules\npm\node_modules\json-parse-better-errors\index.js:7:17)
21518 verbose stack at consumeBody.call.then.buffer (C:\agent2\_work\_tool\node\8.17.0\x64\node_modules\npm\node_modules\node-fetch-npm\src\body.js:96:50)
21518 verbose stack at <anonymous>
21518 verbose stack at process._tickCallback (internal/process/next_tick.js:189:7)
21519 verbose cwd C:\agent2\_work\7\s
21520 verbose Windows_NT 10.0.14393
21521 verbose argv "C:\\agent2\\_work\\_tool\\node\\8.17.0\\x64\\node.exe" "C:\\agent2\\_work\\_tool\\node\\8.17.0\\x64\\node_modules\\npm\\bin\\npm-cli.js" "install"
21522 verbose node v8.17.0
21523 verbose npm v6.13.4
21524 error Unexpected end of JSON input while parsing near '...ter/doc/TypeScript%20'
21525 verbose exit [ 1, true ]
Sometimes it also times out.
It is possible to add a custom condition. If you want the task to be executed when the previous task failed or was skipped, you can use the custom condition not(succeeded()).
However, there is a problem with the above custom condition: it does not work in a multiple-task scenario.
For example, suppose there are three tasks: A, B, and C. The expected behavior is that Task C gets executed only when Task B fails. But the actual behavior is that Task C will also get executed when Task A fails, even if Task B succeeded. See the screenshot below.
The workaround for the above problem is to add a script task that calls the Azure DevOps REST API to get the status of Task B and sets it into a variable using the logging command echo "##vso[task.setvariable variable=taskStatus]$($taskResult.result)".
For the example below, add a PowerShell task before Task C to run the inline script below. (You need to set the condition for this PowerShell task to "Even if a previous task has failed, even if the build was canceled" so that it always runs.)
$url = "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/build/builds/$(Build.BuildId)/timeline?api-version=5.1"
$result = Invoke-RestMethod -Uri $url -Headers @{authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"} -ContentType "application/json" -Method Get
# Get Task B's task result
$taskResult = $result.records | where {$_.name -eq "B"} | select result
# Set Task B's result into the variable taskStatus
echo "##vso[task.setvariable variable=taskStatus]$($taskResult.result)"
In order for the above script to access the OAuth token, you also need to click the agent job and check the option "Allow scripts to access the OAuth token". Refer to the screenshot below.
Finally, you can use the custom condition and(not(canceled()), ne(variables.taskStatus, 'succeeded')) for Task C. Task C will then be executed only when Task B did not succeed.
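If you are on YAML pipelines rather than the classic editor, the same idea can be sketched roughly like this (an untested sketch; the task name "B" and the variable taskStatus are just the ones used above, and the OAuth token is exposed via the env mapping instead of the checkbox):
- powershell: |
    $url = "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/build/builds/$(Build.BuildId)/timeline?api-version=5.1"
    $result = Invoke-RestMethod -Uri $url -Headers @{authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"} -Method Get
    # Read Task B's result from the build timeline and publish it as a pipeline variable
    $taskResult = ($result.records | Where-Object { $_.name -eq "B" }).result
    Write-Host "##vso[task.setvariable variable=taskStatus]$taskResult"
  displayName: 'Read status of B'
  condition: always()
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
- script: echo Running C because B did not succeed
  displayName: 'C'
  condition: and(not(canceled()), ne(variables.taskStatus, 'succeeded'))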
Although I failed to find a built-in function to detect whether a build step timed out, you can try to emulate this with the help of variables.
Consider the following YAML piece of pipeline declaration:
steps:
- script: |
    echo Hello from the first task!
    sleep 90
    echo "##vso[task.setvariable variable=timedOut]false"
  timeoutInMinutes: 1
  displayName: 'A'
  continueOnError: true
- script: echo Previous task has failed or timed out!
  displayName: 'B'
  condition: or(failed(), ne(variables.timedOut, 'false'))
The first task (A) is set to time out after 1 minute, but the script inside emulates a long-running task (sleep 90) for 1.5 minutes. As a result, the task times out and the timedOut variable is NOT set to false. Hence, the condition on task B evaluates to true and it executes. The same happens if you replace sleep 90 with exit 1 to emulate a failure of task A.
On the other hand, if task A succeeds, neither of the condition parts of task B evaluates to true, and the whole task B is skipped.
This is a very simplified example, but it demonstrates the idea which you can tweak further to satisfy the needs of your pipeline.

Azure Datafactory Pipeline execution status

It is kind of annoying that we cannot change the logical order (AND/OR) of the activity dependencies. However, I have another issue. I have "on failure" activities to log the error messages in a DB, and since the logging activity succeeds, the entire pipeline succeeds too! Is there any workaround so that if any activity fails, the entire pipeline (and the parent pipeline, if it is called from another pipeline) fails as well?
In my screenshot, I have selected the "on completion" dependency to log either the success or the error.
I see that you defined "On Success" of the copy activity to run "usp_postexecution". Please define an "On Failure" of the copy activity, add any activity (maybe a Set Variable for testing), and execute the pipeline. The pipeline will fail.
Just to give you more context on what I tried:
I have a variable named "test" of type boolean, and I am failing the activity deliberately (by assigning it the non-boolean value true1).
The pipeline will fail when I define both the success and failure scenarios.
The pipeline will succeed when you have only the "Failure" dependency defined.
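For reference, an "On Failure" dependency looks roughly like this in the pipeline JSON (a sketch only; the activity names "usp_posterror" and "Copy data1" are illustrative placeholders, not from your pipeline):
{
    "name": "usp_posterror",
    "type": "SqlServerStoredProcedure",
    "dependsOn": [
        { "activity": "Copy data1", "dependencyConditions": [ "Failed" ] }
    ]
}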

Argument does not begin with '--' error while executing Apache Beam WordCount example from Eclipse in java

I'm trying to execute the WordCount example (Java code) from Eclipse as specified in
https://cloud.google.com/dataflow/docs/quickstarts/quickstart-java-eclipse#run-the-wordcount-example-pipeline-on-the-cloud-dataflow-service
While executing it through a Run Configuration, I get the error below in the Eclipse console.
Exception in thread "main" java.lang.IllegalArgumentException: Argument '-output=gs://bucket-for-beam/stage-folder/output-file-prefix' does not begin with '--'
at org.apache.beam.repackaged.beam_sdks_java_core.com.google.common.base.Preconditions.checkArgument(Preconditions.java:191)
at org.apache.beam.sdk.options.PipelineOptionsFactory.parseCommandLine(PipelineOptionsFactory.java:1423)
at org.apache.beam.sdk.options.PipelineOptionsFactory.access$200(PipelineOptionsFactory.java:110)
at org.apache.beam.sdk.options.PipelineOptionsFactory$Builder.as(PipelineOptionsFactory.java:294)
at com.gcp.dataflow.examples.WordCount.main(WordCount.java:190)
I've created a bucket named 'bucket-for-beam' and a folder named 'stage-folder'.
- and -- are different. You have specified the first one on the Arguments tab; it expects the second one.
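For example, the Program arguments in the Run Configuration should look something like this (the project, staging location, and runner values are placeholders based on the quickstart, not your actual settings); every option must start with a double dash, including the output option from the error message:
--project=<your-gcp-project> --stagingLocation=gs://bucket-for-beam/stage-folder/ --output=gs://bucket-for-beam/stage-folder/output-file-prefix --runner=DataflowRunner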
For me it was due to a space between the different options of the command.

RDBMS to MongoDB using Talend 6.1.2 throwing error

I am trying to convert data from Oracle to MongoDB JSON format. But when trying to convert, I am getting the following error when writing to the JSON field. Can anyone help? I have attached the flow and the mapping.
Error message
Starting job ora2mongo at 03:14 17/08/2018.
[statistics] connecting to socket on port 4071
[statistics] connected
Exception in thread "main" java.lang.Error: Unresolved compilation problems:
Incorrect number of arguments for type Queue<E>; it cannot be parameterized with arguments <>
Syntax error on token "null", delete this token
Syntax error on token "null", delete this token
Syntax error on token "null", delete this token
queue cannot be resolved or is not a field
queue cannot be resolved to a variable
queue cannot be resolved or is not a field
queue cannot be resolved to a variable
Syntax error, insert "AssignmentOperator Expression" to complete Assignment
Syntax error, insert ";" to complete Statement
The left-hand side of an assignment must be a variable
Syntax error on token "null", invalid ClassType
Syntax error, insert "new ClassType ( )" to complete Expression
Syntax error, insert "AssignmentOperator Expression" to complete Assignment
Syntax error, insert ";" to complete Statement
Syntax error on token "null", invalid ClassType
Syntax error, insert "new ClassType ( )" to complete Expression
at rdbms_to_mongo.ora2mongo_0_1.ora2mongo.tOracleInput_1Process(ora2mongo.java:3156)
[statistics] disconnected
at rdbms_to_mongo.ora2mongo_0_1.ora2mongo.tOracleConnection_1Process(ora2mongo.java:550)
at rdbms_to_mongo.ora2mongo_0_1.ora2mongo.runJobInTOS(ora2mongo.java:5272)
at rdbms_to_mongo.ora2mongo_0_1.ora2mongo.main(ora2mongo.java:5129)
Job ora2mongo ended at 03:14 17/08/2018. [exit code=0]

Is there a way to add errors from build step in TeamCity to the email notification?

I have a PowerShell build step, and I'd like to add some messages from the script to the resulting email that is sent to people on the notification list. I see this happen for tests, where the number of failures and the error is added to the email. But how can I add my custom messages from the PowerShell build step to the resulting email?
Have you tried using service messages?
See here: http://confluence.jetbrains.com/display/TCD7/Build+Script+Interaction+with+TeamCity
You could use
write-host "##teamcity[message text='Its broken again' errorDetails='Your exception message' status='FAILURE']"
In order for the errors to be included in emails, I found I needed to add "compilationStarted" and "compilationFinished" tags, e.g.:
##teamcity[compilationStarted compiler='Solution.sln']
##teamcity[message text='1>File.cpp(1): error C2065: "stackoverflow" : undeclared identifier' status='ERROR']
##teamcity[compilationFinished compiler='Solution.sln']
I use a Python script to parse the output from devenv, looking for specific strings to add as errors and warnings. The email adds these under a "compilation errors" section.
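In case it helps, a stripped-down version of that idea looks roughly like this (a sketch, not the exact script; the "error C"/"warning C" patterns and the compiler name are assumptions to adapt to your own devenv output):
import re, sys

def tc_escape(s):
    # TeamCity service messages require |, ', [, ] and newlines to be escaped with |
    for a, b in (("|", "||"), ("'", "|'"), ("[", "|["), ("]", "|]"), ("\n", "|n"), ("\r", "|r")):
        s = s.replace(a, b)
    return s

print("##teamcity[compilationStarted compiler='Solution.sln']")
for line in sys.stdin:                      # pipe the devenv output into this script
    if re.search(r"error C\d+", line):      # report MSVC compile errors
        print("##teamcity[message text='%s' status='ERROR']" % tc_escape(line.strip()))
    elif re.search(r"warning C\d+", line):  # and compiler warnings
        print("##teamcity[message text='%s' status='WARNING']" % tc_escape(line.strip()))
print("##teamcity[compilationFinished compiler='Solution.sln']")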
If you mean to pipe the output of an error that occurred in the PowerShell script you are running, then try piping the error object to a TeamCity service message after it has been caught.
This is untested code, but it might work for you:
trap [SystemException]
{
write-host "##teamcity[message text='An error occurred' errorDetails='$_' status='ERROR']";exit 1
}
OR
try
{
# Something...
}
catch
{
write-host "##teamcity[message text='An error occurred' errorDetails='$_' status='ERROR']";exit 1
}