I'm running Cassandra 1.1.9 on a two-node test cluster and trying to use cassandra-cli to create a new column family in an existing keyspace. The creation seems to succeed, but then describe claims that the column family does not exist:
[default@unknown] use test_kubrick_a;
Authenticated to keyspace: test_kubrick_a
[default@test_kubrick_a] create column family counters_bd603cd5_cf1f_45ff_8522_f871a8f0c327;
ce75766b-fca0-3d7a-9ddf-646a63149e04
Waiting for schema agreement...
... schemas agree across the cluster
[default@test_kubrick_a] describe counters_bd603cd5_cf1f_45ff_8522_f871a8f0c327;
Sorry, no Keyspace nor ColumnFamily was found with name: counters_bd603cd5_cf1f_45ff_8522_f871a8f0c327
The server logs also indicate that creation was successful:
tail -f /var/log/cassandra/system.log
INFO [RPC-Thread:82640] 2013-07-25 09:34:03,604 MigrationManager.java (line 177) Create new ColumnFamily: org.apache.cassandra.config.CFMetaData@6b4611b[cfId=16865,ksName=test_kubrick_a,cfName=counters_bd603cd5_cf1f_45ff_8522_f871a8f0c327,cfType=Standard,comparator=org.apache.cassandra.db.marshal.BytesType,subcolumncomparator=<null>,comment=,readRepairChance=0.1,dclocalReadRepairChance=0.0,replicateOnWrite=true,gcGraceSeconds=864000,defaultValidator=org.apache.cassandra.db.marshal.BytesType,keyValidator=org.apache.cassandra.db.marshal.BytesType,minCompactionThreshold=4,maxCompactionThreshold=32,keyAlias=<null>,columnAliases=[],valueAlias=<null>,column_metadata={},compactionStrategyClass=class org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,compactionStrategyOptions={},compressionOptions={sstable_compression=org.apache.cassandra.io.compress.SnappyCompressor},bloomFilterFpChance=<null>,caching=KEYS_ONLY]
INFO [MigrationStage:1] 2013-07-25 09:34:05,166 ColumnFamilyStore.java (line 679) Enqueuing flush of Memtable-schema_columnfamilies@1842686868(1958/5565 serialized/live bytes, 20 ops)
INFO [FlushWriter:115] 2013-07-25 09:34:05,167 Memtable.java (line 264) Writing Memtable-schema_columnfamilies@1842686868(1958/5565 serialized/live bytes, 20 ops)
INFO [FlushWriter:115] 2013-07-25 09:34:05,272 Memtable.java (line 305) Completed flushing /videoplaza/cassandra/datac/data/system/schema_columnfamilies/system-schema_columnfamilies-hf-21754-Data.db (2030 bytes) for commitlog position ReplayPosition(segmentId=1374476230336, position=20511454)
I see the same behaviour when trying to use Astyanax to create the column family, so this is not an issue with the client.
Related
I have installed Cassandra on Kubernetes (9 pods). All the pods are up and running except for one, which shows the error below.
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: Encountered bad header at position 47137 of commit log /var/lib/cassandra/commitlog/CommitLog-600-1630582314923.log, with bad position but valid CRC
at org.apache.cassandra.db.commitlog.CommitLogReplayer.shouldSkipSegmentOnError(CommitLogReplayer.java:438)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.handleUnrecoverableError(CommitLogReplayer.java:452)
at org.apache.cassandra.db.commitlog.CommitLogSegmentReader$SegmentIterator.computeNext(CommitLogSegmentReader.java:109)
at org.apache.cassandra.db.commitlog.CommitLogSegmentReader$SegmentIterator.computeNext(CommitLogSegmentReader.java:84)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:236)
at org.apache.cassandra.db.commitlog.CommitLogReader.readAllFiles(CommitLogReader.java:134)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:154)
at org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:213)
at org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:194)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:527)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:702)
at com.datastax.bdp.DseModule.main(DseModule.java:96)
Caused by: org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: Encountered bad header at position 47137 of commit log /var/lib/cassandra/commitlog/CommitLog-600-1630582314923.log, with bad position but valid CRC
at org.apache.cassandra.db.commitlog.CommitLogSegmentReader$SegmentIterator.computeNext(CommitLogSegmentReader.java:111)
... 12 more
ERROR [main] 2021-09-06 06:19:08,990 JVMStabilityInspector.java:251 - JVM state determined to be unstable. Exiting forcefully due to:
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: Encountered bad header at position 47137 of commit log /var/lib/cassandra/commitlog/CommitLog-600-1630582314923.log, with bad position but valid CRC
at org.apache.cassandra.db.commitlog.CommitLogReplayer.shouldSkipSegmentOnError(CommitLogReplayer.java:438)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.handleUnrecoverableError(CommitLogReplayer.java:452)
at org.apache.cassandra.db.commitlog.CommitLogSegmentReader$SegmentIterator.computeNext(CommitLogSegmentReader.java:109)
at org.apache.cassandra.db.commitlog.CommitLogSegmentReader$SegmentIterator.computeNext(CommitLogSegmentReader.java:84)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:236)
at org.apache.cassandra.db.commitlog.CommitLogReader.readAllFiles(CommitLogReader.java:134)
at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:154)
at org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:213)
at org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:194)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:527)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:702)
at com.datastax.bdp.DseModule.main(DseModule.java:96)
Can someone help me out, please?
For whatever reason, one of the commit log segments got corrupted on the node.
You can work around the issue by manually deleting this file on the pod (a sketch of the commands is shown below):
/var/lib/cassandra/commitlog/CommitLog-600-1630582314923.log
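For example, a minimal sketch of that workaround, assuming the pod is named cassandra-0 (adjust to your StatefulSet; if the container is crash-looping you may need a debug pod that mounts the same data volume):

# remove only the corrupted segment, not the whole commitlog directory (pod name and path are assumptions)
kubectl exec cassandra-0 -- rm /var/lib/cassandra/commitlog/CommitLog-600-1630582314923.log
# restart the pod so Cassandra replays the remaining segments on startup
kubectl delete pod cassandra-0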
Interestingly, that commit log segment was created on September 2 (1630582314923) but the log entry you posted was from September 6. This indicates something happened to the pod which resulted in the corrupted file.
You'll need to review the Cassandra logs on the pod (not the pod logs themselves) to determine the root cause and address it; see the sketch below for the difference between the two. Cheers!
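To make that distinction concrete, a minimal sketch (again assuming the pod is named cassandra-0; the log file path may differ in your image):

# the pod logs: whatever the process wrote to stdout, as captured by Kubernetes
kubectl logs cassandra-0
# the Cassandra logs: the log file inside the container/volume
kubectl exec cassandra-0 -- tail -n 200 /var/log/cassandra/system.log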
When I use DESCRIBE EXTENDED in ksqlDB on a stream I created, I get the following result:
Local runtime statistics:
consumer-messages-per-sec: 0 | consumer-total-bytes: 101768 | consumer-total-messages: 98 | last-message: 2021-08-18T15:27:28.978Z
consumer-failed-messages: 1 | consumer-failed-messages-per-sec: 0 | last-failed: 2021-08-18T15:27:29.405Z
The failed message (consumer-failed-messages: 1) is one that I intentionally sent to the topic; it failed because I sent a different data type than the schema defines for a particular field.
Now I want to view the failed message, i.e. get the actual JSON that was sent.
Is that possible?
The goal is to build a monitor of messages that failed to pass the processing stage.
Thanks,
ruth.
Deserialization errors are logged to the ksqlDB processing log: https://docs.ksqldb.io/en/latest/reference/processing-log/
If you want the actual messages as well, you can set ksql.logging.processing.rows.include to true to include them (base64-encoded) in the processing-log record for deserialization errors; a sketch follows below.
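For example, a minimal sketch of the server settings and how you could then read the failed record from the ksql CLI (the rows.include property is from the docs above; KSQL_PROCESSING_LOG is the default name of the auto-created stream, and the exact shape of its message field varies by version):

# ksqlDB server properties (file name/location depends on your install)
ksql.logging.processing.topic.auto.create=true
ksql.logging.processing.stream.auto.create=true
ksql.logging.processing.rows.include=true

-- then, from the ksql CLI: the deserialization error details and the base64-encoded record
SELECT message FROM KSQL_PROCESSING_LOG EMIT CHANGES;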
I have an OrientDB 2.1.4 cluster of 3 nodes with the basic configuration. The only change I made in hazelcast.xml was to replace multicast with an explicit tcp-ip host list.
After a heavy request to the DB (a select without joins, about 300k rows in the result set), OrientDB stops responding to network connection attempts from the application (OrientDB Studio still works), and the following exceptions continuously appear in the logs.
On the master node:
2016-02-24 10:02:17:647 INFO [10.10.10.124]:2434 [zertodb] [3.3.5] Remaining migration tasks in queue => 1 [InternalPartitionService]
[10.10.10.124]:2434 [zertodb] [3.3.5] Received data format is invalid. (An old version of Hazelcast may be running here.)
com.hazelcast.nio.serialization.HazelcastSerializationException: java.io.UTFDataFormatException: Length check failed, maybe broken bytestream or wrong stream position
at com.hazelcast.nio.serialization.SerializationServiceImpl.handleException(SerializationServiceImpl.java:354)
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:341)
at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:454)
at com.hazelcast.cluster.MulticastService.receive(MulticastService.java:155)
at com.hazelcast.cluster.MulticastService.run(MulticastService.java:113)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.UTFDataFormatException: Length check failed, maybe broken bytestream or wrong stream position
at com.hazelcast.nio.UTFEncoderDecoder.readUTF0(UTFEncoderDecoder.java:505)
at com.hazelcast.nio.UTFEncoderDecoder.readUTF(UTFEncoderDecoder.java:77)
at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readUTF(ByteArrayObjectDataInput.java:450)
at com.hazelcast.cluster.ConfigCheck.readData(ConfigCheck.java:219)
at com.hazelcast.cluster.JoinMessage.readData(JoinMessage.java:80)
at com.hazelcast.cluster.JoinRequest.readData(JoinRequest.java:64)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:111)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:335)
... 4 more
On the other nodes:
[10.10.10.194]:2434 [zertodb] [3.3.5] Received data format is invalid. (An old version of Hazelcast may be running here.)
com.hazelcast.nio.serialization.HazelcastSerializationException: java.io.StreamCorruptedException: invalid type code: 00
at com.hazelcast.nio.serialization.SerializationServiceImpl.handleException(SerializationServiceImpl.java:354)
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:341)
at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:454)
at com.hazelcast.cluster.ConfigCheck.readData(ConfigCheck.java:215)
at com.hazelcast.cluster.JoinMessage.readData(JoinMessage.java:80)
at com.hazelcast.cluster.JoinRequest.readData(JoinRequest.java:64)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:111)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:335)
at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:454)
at com.hazelcast.cluster.MulticastService.receive(MulticastService.java:155)
at com.hazelcast.cluster.MulticastService.run(MulticastService.java:113)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.StreamCorruptedException: invalid type code: 00
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1379)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at com.hazelcast.nio.serialization.DefaultSerializers$ObjectSerializer.read(DefaultSerializers.java:196)
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:335)
... 12 more
The same query with a smaller result set works fine.
I have found this issue about your problem with Hazelcast: https://github.com/hazelcast/hazelcast/issues/4327
Hope it helps.
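Since both of your stack traces go through com.hazelcast.cluster.MulticastService, it may also be worth double-checking that multicast is actually disabled on every node. A minimal sketch of the join section of hazelcast.xml, using only the member addresses visible in your logs (this is an assumption about what your tcp-ip configuration looks like, not your actual file):

<network>
  <join>
    <!-- explicitly disable multicast so stray join datagrams from other clusters are ignored -->
    <multicast enabled="false"/>
    <tcp-ip enabled="true">
      <member>10.10.10.124</member>
      <member>10.10.10.194</member>
      <!-- add the third node's address here -->
    </tcp-ip>
  </join>
</network>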
I am using Hadoop on Windows Server 2008 (Hortonworks distribution).
We are using Pig and trying to write data into MongoDB. I am not able to read from or write to MongoDB, and I am not sure what the issue is; we get error 2116, which states that the MongoDB schema is empty.
The commands used to read and write:
register 'D:\hdp\pig-0.12.1.2.1.1.0-1621\lib\mongo-hadoop-core-1.2.0.jar'
register 'D:\mongo-hadoop-2.2-1.2.0\mongo-hadoop-2.2-1.2.0\mongo-hadoop-1.2.0.jar'
register 'D:\mongo-hadoop-2.2-1.2.0\mongo-hadoop-2.2-1.2.0\mongo-hadoop-pig-1.2.0.jar'
register 'D:\hdp\hadoop-2.4.0.2.1.1.0-1621\lib\mongo-2.6.1.jar'
set mapred.map.tasks.speculative.execution false;
set mapred.reduce.tasks.speculative.execution false;
SET mapreduce.fileoutputcommitter.marksuccessfuljobs false;
SalesLoading = load 'mongodb://localhost/benvenuedb.SalesData' using com.mongodb.hadoop.pig.MongoLoader();
store SalesLoading into 'mongodb://localhost:27017/benvenuedb.SalesData1' using com.mongodb.hadoop.pig.MongoStorage();
Error Messages
Pig Stack Trace
---------------
ERROR 2116:
<line 5, column 0> Output Location Validation Failed for: 'mongodb://127.0.0.1:27017/benvenuedb.SalesData More info to follow:
The value of property mongo.pig.output.schema must not be null
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1002: Unable to store alias salesLoading
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1637)
at org.apache.pig.PigServer.registerQuery(PigServer.java:577)
at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:1093)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:501)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
at org.apache.pig.Main.run(Main.java:541)
at org.apache.pig.Main.main(Main.java:156)
Caused by: org.apache.pig.impl.plan.VisitorException: ERROR 2116:
<line 5, column 0> Output Location Validation Failed for: 'mongodb://127.0.0.1:27017/benvenuedb.SalesData More info to follow:
The value of property mongo.pig.output.schema must not be null
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:75)
at org.apache.pig.newplan.logical.relational.LOStore.accept(LOStore.java:66)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:64)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:66)
at org.apache.pig.newplan.DepthFirstWalker.walk(DepthFirstWalker.java:53)
at org.apache.pig.newplan.PlanVisitor.visit(PlanVisitor.java:52)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator.validate(InputOutputFileValidator.java:45)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.compile(HExecutionEngine.java:303)
at org.apache.pig.PigServer.compilePp(PigServer.java:1382)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
at org.apache.pig.PigServer.execute(PigServer.java:1299)
at org.apache.pig.PigServer.access$400(PigServer.java:124)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1632)
... 8 more
Caused by: java.lang.IllegalArgumentException: The value of property mongo.pig.output.schema must not be null
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:971)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:953)
at com.mongodb.hadoop.pig.MongoStorage.setStoreLocation(MongoStorage.java:249)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:68)
... 20 more
I have issued netstat -an to see the open ports.
The local address is 10.69.148.89; I do not see port 27017 open on this IP, but 127.0.0.1 does have 27017 open. There is something simple we are overlooking.
We need some help; we have spent over two days on this with no resolution.
Have you tried setting the property it says is missing?
The value of property mongo.pig.output.schema must not be null
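For example, that property is populated from the schema of the relation you store, so the error typically goes away once the load has an explicit schema instead of the schema-less MongoLoader(). A rough sketch (the field names here are made up; use the fields that actually exist in your SalesData documents):

-- load with an explicit Pig schema so MongoStorage can derive mongo.pig.output.schema
SalesLoading = load 'mongodb://localhost:27017/benvenuedb.SalesData'
    using com.mongodb.hadoop.pig.MongoLoader('id:chararray, product:chararray, amount:double');
store SalesLoading into 'mongodb://localhost:27017/benvenuedb.SalesData1'
    using com.mongodb.hadoop.pig.MongoStorage();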
There are some issues with writing to MongoDB from Pig, especially when you use the Hortonworks Windows distribution. I have broken this into three steps (sketched below):
Write to the HDFS filesystem as a JSON file using JsonStorage();
Move the HDFS file to the Windows filesystem;
Load the JSON file into MongoDB.
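A rough sketch of those three steps (the output path, part-file name, and database/collection names are assumptions; adjust them to your environment). In Pig:

-- 1) write the relation to HDFS as JSON using Pig's built-in JsonStorage
store SalesLoading into '/tmp/salesdata_json' using JsonStorage();

Then from a Windows command prompt:

REM 2) copy the HDFS output to the local Windows filesystem
hadoop fs -copyToLocal /tmp/salesdata_json/part-m-00000 D:\exports\salesdata.json
REM 3) import the JSON (one document per line) into MongoDB
mongoimport --host localhost --port 27017 --db benvenuedb --collection SalesData1 --file D:\exports\salesdata.json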
I am open to other approaches if anyone has attempted this in a different way.
I have three ZooKeeper nodes. All ports are open and the IP addresses are correct. Below is my config file. All nodes were booted by Chef, and all have the same install and config file.
# The number of milliseconds of each tick
tickTime=3000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper
# Place the dataLogDir to a separate physical disc for better performance
# dataLogDir=/disk2/zookeeper
# the port at which the clients will connect
clientPort=2181
server.1=111.111.111:2888:3888
server.2=111.111.112:2888:3888
server.3=111.111.113:2888:3888
Here is the error for one of the nodes. So... I am rather confused about how I could get an error, since the config is rather vanilla. All three nodes are doing the same thing.
2012-07-16 05:16:57,558 - INFO [main:QuorumPeerConfig@90] - Reading configuration from: /etc/zookeeper/conf/zoo.cfg
2012-07-16 05:16:57,567 - INFO [main:QuorumPeerConfig@310] - Defaulting to majority quorums
2012-07-16 05:16:57,572 - FATAL [main:QuorumPeerMain@83] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /etc/zookeeper/conf/zoo.cfg
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:110)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:99)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:76)
Caused by: java.lang.IllegalArgumentException: serverid replace this text with the cluster-unique zookeeper's instance id (1-255) is not a number
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:333)
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:106)
... 2 more
You need to create a file named myid and put it into the ZooKeeper data directory, one per server. It consists of a single line containing only the text of that machine's id, so the myid of server 1 would contain the text "1" and nothing else. The id must be unique within the ensemble and should have a value between 1 and 255.
see more at http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup
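For example, a minimal sketch matching the config above (run the corresponding command on each host; dataDir is /var/lib/zookeeper in your zoo.cfg):

# on 111.111.111 (server.1)
echo 1 > /var/lib/zookeeper/myid
# on 111.111.112 (server.2)
echo 2 > /var/lib/zookeeper/myid
# on 111.111.113 (server.3)
echo 3 > /var/lib/zookeeper/myid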
Your servers and IPs are:
server.1=111.111.111:2888:3888
server.2=111.111.112:2888:3888
server.3=111.111.113:2888:3888
Now create a myid file on each of the nodes under the directory from dataDir=/var/lib/zookeeper, with the value 1 on 111.111.111, 2 on 111.111.112, and 3 on 111.111.113.
Note that if you put the value in double quotes (e.g. "1") in the myid file you will get a NumberFormatException, and you will get "Invalid config, exiting abnormally" if the myid file is created with any extension.
Therefore, create the myid file without any extension and place the plain integer values 1, 2, and 3 on the corresponding servers, without double quotes.