I am using MultiResourceItemWriter to generate files. Suppose each file has 15 lines and the chunk size is 5; then there are 3 chunks per file.
If an exception occurs in the second or third chunk of a file, the file is created and contains the data up to the last committed chunk. After a restart, the rest of the data is written to the file as expected.
But if an exception occurs in the first chunk of a file, the file is not generated at all. If I then restart the failed job, I get a "File is not writable: [filename]" error message.
Is there a way to restart the job when the first chunk of a file fails?
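For reference, here is a minimal sketch of the kind of setup described above; the names, output path, and delegate writer are illustrative assumptions, not taken from the question:

import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.MultiResourceItemWriter;
import org.springframework.batch.item.file.transform.PassThroughLineAggregator;
import org.springframework.core.io.FileSystemResource;

// Delegate that writes the actual lines; the MultiResourceItemWriter
// assigns the concrete resource, so none is set on the delegate.
FlatFileItemWriter<String> delegate = new FlatFileItemWriter<>();
delegate.setName("delegateWriter");
delegate.setLineAggregator(new PassThroughLineAggregator<>());

// Rolls over to a new file every 15 items; with a step chunk size of 5,
// that is three commits per file.
MultiResourceItemWriter<String> writer = new MultiResourceItemWriter<>();
writer.setResource(new FileSystemResource("output/data"));
writer.setDelegate(delegate);
writer.setItemCountLimitPerResource(15);
writer.setSaveState(true); // keep state in the execution context for restarts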
When I try to update a record with some data I'm getting this exception:
Caused by: com.orientechnologies.common.io.OIOException: Impossible to write a chunk of length:83644944 max allowed chunk length:16777216 see NETWORK_BINARY_MAX_CONTENT_LENGTH settings
at com.orientechnologies.orient.client.remote.OStorageRemote.handleIOException(OStorageRemote.java:321)
at com.orientechnologies.orient.client.remote.OStorageRemote.baseNetworkOperation(OStorageRemote.java:296)
at com.orientechnologies.orient.client.remote.OStorageRemote.asyncNetworkOperation(OStorageRemote.java:163)
at com.orientechnologies.orient.client.remote.OStorageRemote.createRecord(OStorageRemote.java:564)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeSaveRecord(ODatabaseDocumentTx.java:2202)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveNew(OTransactionNoTx.java:241)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveRecord(OTransactionNoTx.java:171)
... 56 more
Caused by: com.orientechnologies.common.io.OIOException: Impossible to write a chunk of length:83644944 max allowed chunk length:16777216 see NETWORK_BINARY_MAX_CONTENT_LENGTH settings
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinary.writeBytes(OChannelBinary.java:273)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinary.writeBytes(OChannelBinary.java:259)
at com.orientechnologies.orient.client.remote.OStorageRemote$5.execute(OStorageRemote.java:571)
at com.orientechnologies.orient.client.remote.OStorageRemote$1.execute(OStorageRemote.java:167)
at com.orientechnologies.orient.client.remote.OStorageRemote.baseNetworkOperation(OStorageRemote.java:252)
... 61 more
How and where do I need to increase this maximum content length setting?
My OrientDB version is: 2.2.34
(Attached: an image of the table structure.) I am trying to add BINARY data to the screenshot column.
You can change this setting in any of the following ways:
In the orientdb-server-config.xml file, add or edit the following entry:
<entry name="network.binary.maxLength" value="<a value in KB here>"/>
At startup, by specifying the following parameter on the command line:
-Dnetwork.binary.maxLength=<aValueInKb>
e.g.
-Dnetwork.binary.maxLength=32768
If you are running embedded, you can do the following before starting the server:
OGlobalConfiguration.NETWORK_BINARY_MAX_CONTENT_LENGTH.setValue(32768);
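For example, a minimal embedded-mode sketch (the bootstrap class and its name are hypothetical; the only relevant part is the setValue call, which must run before any OrientDB network component starts):

import com.orientechnologies.orient.core.config.OGlobalConfiguration;

public class EmbeddedBootstrap {
    public static void main(String[] args) {
        // Maximum binary chunk length, expressed in KB, set before the
        // embedded server / remote connections are initialized.
        OGlobalConfiguration.NETWORK_BINARY_MAX_CONTENT_LENGTH.setValue(32768);
        // ... start the embedded OServer and open databases after this point ...
    }
}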
I'm trying to transcode a stream with Liquidsoap and output it to Icecast2.
Below is my config, taken from the official website: http://savonet.sourceforge.net/doc-svn/cookbook.html
# Input the stream,
# from an Icecast server or any other source
url = "http://www.protonradio.com:8000/schedule.m3u"
input = mksafe(input.http(url))
# First transcoder: MP3 32 kbps
# We also degrade the samplerate, and encode in mono
# Accordingly, a mono conversion is performed on the input stream
output.icecast(
  %mp3(bitrate=32, samplerate=22050, stereo=false),
  mount="/your-stream-32.mp3",
  host="streaming.example.com", port=8000, password="xxx",
  mean(input))
When I try to run it with ./radio.liq
I get this error:
root@Ubuntu:/etc/liquidsoap# ./radio.liq
./radio.liq: line 4: url: command not found
./radio.liq: line 5: syntax error near unexpected token `('
./radio.liq: line 5: `input = mksafe(input.http(url))'
root@Ubuntu:/etc/liquidsoap#
Here's what happens when I run with this command:
root@Ubuntu:/etc/liquidsoap# liquidsoap radio2.liq
init: security exit, root euid (user).
root@Ubuntu:/etc/liquidsoap#
I also get buffer errors with this stream URL (http://46.21.106.168:80):
2016/09/30 15:57:17 [http_4756:3] Buffer overrun: Dropping 0.03s.
2016/09/30 15:57:20 [http_4756:3] Buffer overrun: Dropping 0.00s.
2016/09/30 15:57:26 [http_4756:3] Buffer overrun: Dropping 0.00s.
2016/09/30 15:57:37 [http_4756:3] Buffer overrun: Dropping 0.01s.
2016/09/30 15:57:44 [http_4756:3] Buffer overrun: Dropping 0.00s.
2016/09/30 15:58:11 [http_4756:3] Buffer overrun: Dropping 0.00s.
2016/09/30 15:58:47 [http_4756:3] Buffer overrun: Dropping 0.00s.
You should start the liquidsoap interpreter and feed your script to it, like this:
liquidsoap radio.liq
In your example you run the script directly from the command line, so it is interpreted by the shell (bash), not by liquidsoap.
To add on to Alexey's answer:
Your script did not run because you did not tell the OS which application should run it. On Windows, the file extension (.exe, .txt, .doc) determines which application opens a file. On Unix, the first line, known as the "shebang", tells the OS which application must run the file.
So, first check where your liquidsoap is installed:
which liquidsoap
Then add the resulting path as your script's first line, like so:
#!/usr/bin/liquidsoap
Unix will now know which application to open it with.
On your second observation (which is not an issue): the overruns happen because you are fetching a stream from an Icecast server over HTTP. The Icecast server is generally configured to "burst" some data to you on connection (normally so players can fill their buffers). The buffer used by the input.http command is too small for this sudden burst of data, so liquidsoap drops the excess and logs an overrun. To fix this, increase the buffer and its maximum:
input = mksafe(input.http(url,buffer=2.,max=120.))
I am trying to use ChronicleMap for my index structure. This seems to work fine on Linux, but when I run my JUnit test on Windows (which is my development environment), I keep getting an error: java.io.IOException: Unable to wait until the file is ready, likely the process which created the file crashed or hung for more than 1 minute.
Here's the code snippet that is problematic:
File file = new File(idxFullPath);
ChronicleMap<Integer, int[]> idx =
ChronicleMapBuilder.of(Integer.class, int[].class)
.averageValue(getSampleIdxList())
.entries(IDX_MAX_SIZE)
.createPersistedTo(file);
The following exception is thrown:
[2016-06-17 14:32:47.779] ERROR main com.mcm.op.persistence.Persistence ERR java.io.IOException: Unable to wait until the file is ready, likely the process which created the file crashed or hung for more than 1 minute
at net.openhft.chronicle.map.ChronicleMapBuilder.waitUntilReady(ChronicleMapBuilder.java:1520)
at net.openhft.chronicle.map.ChronicleMapBuilder.openWithExistingFile(ChronicleMapBuilder.java:1583)
at net.openhft.chronicle.map.ChronicleMapBuilder.createWithFile(ChronicleMapBuilder.java:1444)
at net.openhft.chronicle.map.ChronicleMapBuilder.createPersistedTo(ChronicleMapBuilder.java:1405)
at com.mcm.op.persistence.Persistence.initIdx(Persistence.java:131)
at com.mcm.op.persistence.Persistence.init(Persistence.java:177)
at com.mcm.op.persistence.PersistenceTest.initPersist(PersistenceTest.java:47)
at com.mcm.op.persistence.PersistenceTest.setUp(PersistenceTest.java:29)
Indeed, it is likely that the process which created the file crashed, or was terminated while debugging, or something like that.
If it's OK to have a fresh index from one unit-test run to the next, I recommend either deleting the file at idxFullPath before creating the Chronicle Map, or randomizing the mapping file via something like File.createTempFile(). In either case File.deleteOnExit() could be helpful.
If you want to keep the index between unit-test runs and always use the same file at idxFullPath for persistence, you could try builder.createOrRecoverPersistedTo() instead of the plain createPersistedTo() creation method. However, this might slow down map creation.
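As a rough sketch of the first option, reusing the builder settings from the question (getSampleIdxList() and IDX_MAX_SIZE are the asker's own names, assumed to exist):

// Fresh mapping file per test run, removed when the JVM exits normally.
File file = File.createTempFile("idx", ".dat");
file.deleteOnExit();

ChronicleMap<Integer, int[]> idx =
        ChronicleMapBuilder.of(Integer.class, int[].class)
                .averageValue(getSampleIdxList())
                .entries(IDX_MAX_SIZE)
                .createPersistedTo(file);

// Alternative, if the index must survive between runs: recover a file left
// behind by a crashed process instead of waiting on it.
// ChronicleMap<Integer, int[]> idx =
//         ChronicleMapBuilder.of(Integer.class, int[].class)
//                 .averageValue(getSampleIdxList())
//                 .entries(IDX_MAX_SIZE)
//                 .createOrRecoverPersistedTo(new File(idxFullPath));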
I am doing offline analysis of logs from previous days. The Elasticsearch and Logstash version used is 2.3.1.
My input configuration:
input {
  file {
    path => "/opt/logs/test/*"
    sincedb_path => "/opt/logs/.sincedb"
    sincedb_write_interval => 10
    start_position => "beginning"
  }
}
I see that the sincedb file is created only when the last log line is reached. Whenever Logstash is stopped partway through, parsing starts again from the beginning instead of from the position where it stopped, which causes duplicate entries in Kibana.
I assumed the sincedb would be written every 10 seconds (as specified in my input), and that if Logstash is stopped for any reason and restarted it would continue from the previous stopping point. Is there more configuration to add, or is the sincedb file really only written once the end of the file is reached? Please suggest how to avoid duplicate parsing.
An error has crashed my application server and I can't seem to figure out what could be causing the issue. My application is built with Meteor and hosted on modulus.io. Here are my application logs:
Error: no chunks found for file, possibly corrupt
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:817:20
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:594:7
at /mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:35
at Cursor.close (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:989:5)
at Cursor.nextObject (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:17)
at commandHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:727:14)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/db.js:1916:9
at Server.Base._callHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/connection/base.js:448:41)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/connection/server.js:481:18
at [object Object].MongoReply.parseBody (/mnt/data/2/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
[2015-03-29T22:05:57.573Z] Application CRASH detected. Exit code 8.
Most probably this is a mongo bug with GridFS (it has since been fixed):
Writing two or more different files concurrently from different node
processes using the GridStore.writeFile command results in some files
not being correctly written (ending up with a number of corrupt files
in the gridstore). Ending up with corrupt files even with all
writeFile calls being successful and no indication of error.
writeFile occasionally fails with error "chunks out of order", but
this happens very rarely (something like 1 failed writeFile for 100
corrupt files or more).
Based on the comments in that discussion, the problem should be fixed if you update mongo (the existing GridFS files should be removed, as they are corrupt).
Error: no chunks found for file, possibly corrupt
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:808:20
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:586:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/collection/query.js:164:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/cursor.js:778:35
I had a similar occurrence, but it turned out the file sought in a GridFS read stream had actually been deleted - so in my case it wasn't corrupt, but gone! Above is the log from when that happened.