Kafka - performance test

Kafka version: 0.9
Command:
kafka-run-class org.apache.kafka.tools.ProducerPerformance –-topic testY --num-records 10000 --record-size 5000 --producer-props bootstrap.servers=servers --throughput 10
Error:
usage: producer-performance [-h] --topic TOPIC --num-records
NUM-RECORDS --record-size RECORD-SIZE --throughput THROUGHPUT
--producer-props PROP-NAME=PROP-VALUE [PROP-NAME=PROP-VALUE ...] producer-performance: error: unrecognized arguments: '–-topic testY
--num-records 10000 --record-size 5000 --producer-props bootstrap.servers=servers --throughput 10'
What is wrong with the command?

You have an en dash (–) instead of a double hyphen in front of topic: –-topic should be --topic. The corrected command:
kafka-run-class org.apache.kafka.tools.ProducerPerformance --topic testY --num-records 10000 --record-size 5000 --producer-props bootstrap.servers=servers --throughput 10
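If you are unsure which dash you have, piping the argument through od makes it visible; this is just a quick sketch, and the en dash shows up as the three-byte UTF-8 sequence 342 200 223 while a plain hyphen is a single '-' byte:
# Compare the bytes of the bad and the good option prefix.
echo '–-topic' | od -c
echo '--topic' | od -c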

Related

Kafka Producer - Not able to download/refresh metadata after brokers were restarted in the cluster

We have a Kafka cluster with 5 nodes and 3 ZooKeepers, and all topics have a replication factor of 3. We are currently using Kafka and kafka-clients 2.2.0 and ZooKeeper version 5.2.1.
When a couple of brokers are down, the producers fail to send messages with the error below.
org.apache.kafka.common.errors.TimeoutException: Topic testTopic not present in metadata after 120000ms.
The client seems to skip the metadata update after comparing the latest leader epoch.
Cluster config :
--override num.network.threads=3 --override num.io.threads=8 --override default.replication.factor=3 --override auto.create.topics.enable=true --override delete.topic.enable=true --override socket.send.buffer.bytes=102400 --override socket.receive.buffer.bytes=102400 --override socket.request.max.bytes=104857600 --override num.partitions=30 --override num.recovery.threads.per.data.dir=1 --override offsets.topic.replication.factor=3 --override transaction.state.log.replication.factor=3 --override transaction.state.log.min.isr=1 --override log.retention.hours=48 --override log.segment.bytes=1073741824 --override log.retention.check.interval.ms=300000 --override zookeeper.connection.timeout.ms=6000 --override confluent.support.metrics.enable=true --override group.initial.rebalance.delay.ms=0 --override confluent.support.customer.id=anonymous
Producer config:
acks = 1
batch.size = 8192
bootstrap.servers = []
buffer.memory = 33554432
client.dns.lookup = default
client.id = C02Z93MPLVCH
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 120000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 20000
reconnect.backoff.ms = 20000
request.timeout.ms = 300000
retries = 2
retry.backoff.ms = 500
Has anyone faced the same issue? We would normally expect the Kafka clients to re-fetch the metadata after some retries when brokers are down; instead, after waiting a couple of hours, we have to restart the server to initialize the connections again.
Is this the expected behaviour?
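One thing worth checking while the brokers are down is whether the affected partitions still have a live leader; this is only a diagnostic sketch, the broker host is a placeholder, and on clusters older than 2.3 the kafka-topics tool takes --zookeeper instead of --bootstrap-server:
# Show leader and ISR for each partition of the topic; a leader of "none"
# (or -1) means the partition cannot accept writes and the producer will
# time out waiting for metadata.
kafka-topics --bootstrap-server <broker>:9092 --describe --topic testTopic
# List any partitions that are currently under-replicated.
kafka-topics --bootstrap-server <broker>:9092 --describe --under-replicated-partitions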

Trouble connecting to MSK over SSL using Kafka-Connect

I'm having trouble using the AWS MSK TLS endpoints in the Confluent Kafka Connect image: it times out creating/reading the topics. It works fine when I pass the plaintext endpoints.
I tried referencing the JKS store path available on that Docker image, but it still doesn't work, and I'm not sure whether I'm missing other configs. From what I read in the AWS docs, Amazon MSK brokers use public AWS Certificate Manager certificates, so any truststore that trusts Amazon Trust Services also trusts the certificates of Amazon MSK brokers.
Error:
org.apache.kafka.connect.errors.ConnectException: Timed out while checking for or creating topic(s) '_confluent-command'. This could indicate a connectivity issue, unavailable topic partitions, or if this is your first use of the topic it may have taken too long to create.
Attaching the Kafka Connect config I'm using; any help would be great :)
INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
bootstrap.servers = [**.us-east-1.amazonaws.com:9094,*.us-east-1.amazonaws.com:9094]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = JKSStorePath
ssl.truststore.password = ***
ssl.truststore.type = JKS
I used the Java cacerts in the Docker image at /usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts as the truststore. If you look at the certs with keytool:
keytool -list -v -keystore /usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts | grep Amazon
It will list out the Amazon CAs.
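As an extra sanity check (just a sketch; the broker name is a placeholder), you can confirm that the MSK TLS listener presents a certificate issued by Amazon, which is what that truststore will accept:
# Print the subject and issuer of the certificate the broker presents on
# the TLS listener; for MSK the issuer should chain to Amazon.
openssl s_client -connect <msk_broker1>:9094 -servername <msk_broker1> </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer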
I then started the container using:
docker run -d \
--name=kafka-connect-avro-ssl \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS=<msk_broker1>:9094,<msk_broker2>:9094,<msk_broker3>:9094 \
-e CONNECT_REST_PORT=28083 \
-e CONNECT_GROUP_ID="quickstart-avro" \
-e CONNECT_CONFIG_STORAGE_TOPIC="avro-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="avro-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="avro-status" \
-e CONNECT_KEY_CONVERTER="io.confluent.connect.avro.AvroConverter" \
-e CONNECT_VALUE_CONVERTER="io.confluent.connect.avro.AvroConverter" \
-e CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL="<hostname of EC2 instance>:8081" \
-e CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL="http://<hostname of EC2 instance>:8081" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="<hostname of EC2 instance>" \
-e CONNECT_LOG4J_ROOT_LOGLEVEL=DEBUG \
-e CONNECT_SECURITY_PROTOCOL=SSL \
-e CONNECT_SSL_TRUSTSTORE_LOCATION=/usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts \
-e CONNECT_SSL_TRUSTSTORE_PASSWORD=changeit \
confluentinc/cp-kafka-connect:latest
With that, it started successfully. I was also able to connect to the container, create topics, and produce and consume from within the container. If you're unable to create topics, it could be a network connectivity issue, possibly the security group attached to the MSK cluster blocking port 2181 and the TLS port 9094.
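If the timeouts persist, a hedged first check (host names are placeholders) is plain TCP reachability from the host running Kafka Connect:
# Verify the Connect host can reach the MSK TLS listener and ZooKeeper.
nc -vz <msk_broker1> 9094
nc -vz <zookeeper_host> 2181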

Read pipe separated values in ksql

I am working on a POC in which I have to read a pipe-separated-value file and insert those records into MS SQL Server.
I am using Confluent 5.4.1 so that I can use the VALUE_DELIMITER property when creating the stream, but it throws the exception: Delimeter only supported with DELIMITED format
1. Start Confluent (version: 5.4.1):
[Dev root # myip ~]
# confluent local start
The local commands are intended for a single-node development environment
only, NOT for production usage. https://docs.confluent.io/current/cli/index.html
Using CONFLUENT_CURRENT: /tmp/confluent.vHhSRAnj
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
connect is [UP]
Starting ksql-server
ksql-server is [UP]
Starting control-center
control-center is [UP]
[Dev root # myip ~]
# jps
49923 KafkaRestMain
50099 ConnectDistributed
49301 QuorumPeerMain
50805 KsqlServerMain
49414 SupportedKafka
52103 Jps
51020 ControlCenter
1741
49646 SchemaRegistryMain
[Dev root # myip ~]
#
2. Create Topic:
[Dev root # myip ~]
# kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic SampleData
Created topic SampleData.
3. Provide pipe-separated data to the SampleData topic
[Dev root # myip ~]
# kafka-console-producer --broker-list localhost:9092 --topic SampleData <<EOF
> this is col1|and now col2|and col 3 :)
> EOF
>>[Dev root # myip ~]
#
4. Start KSQL:
[Dev root # myip ~]
# ksql
===========================================
= _ __ _____ ____ _ =
= | |/ // ____|/ __ \| | =
= | ' /| (___ | | | | | =
= | < \___ \| | | | | =
= | . \ ____) | |__| | |____ =
= |_|\_\_____/ \___\_\______| =
= =
= Streaming SQL Engine for Apache Kafka® =
===========================================
Copyright 2017-2019 Confluent Inc.
CLI v5.4.1, Server v5.4.1 located at http://localhost:8088
Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
5. Declare a schema for the existing topic: SampleData
ksql> CREATE STREAM sample_delimited (
> column1 varchar(1000),
> column2 varchar(1000),
> column3 varchar(1000))
> WITH (KAFKA_TOPIC='SampleData', VALUE_FORMAT='DELIMITED', VALUE_DELIMITER='|');
Message
----------------
Stream created
----------------
6. Verify the data in the KSQL stream
ksql> SET 'auto.offset.reset' = 'earliest';
Successfully changed local property 'auto.offset.reset' to 'earliest'. Use the UNSET command to revert your change.
ksql> SELECT * FROM sample_delimited emit changes limit 1;
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
|ROWTIME |ROWKEY |COLUMN1 |COLUMN2 |COLUMN3 |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
|1584339233947 |null |this is col1 |and now col2 |and col 3 :) |
Limit Reached
Query terminated
7. Write a new Kafka topic, SampleDataAvro, that serializes all the data from the sample_delimited stream to Avro format
ksql> CREATE STREAM sample_avro WITH (KAFKA_TOPIC='SampleDataAvro', VALUE_FORMAT='AVRO') AS SELECT * FROM sample_delimited;
Delimeter only supported with DELIMITED format
ksql>
8. The line above gives the exception:
Delimeter only supported with DELIMITED format
9. Load the MS SQL Kafka Connect configuration
confluent local load test-sink -- -d ./etc/kafka-connect-jdbc/sink-quickstart-mssql.properties
The only time you need to specify the delimiter is when you define the stream that is reading from the source topic.
Here's my worked example:
Populate a topic with pipe-delimited data:
$ kafkacat -b localhost:9092 -t SampleData -P<<EOF
this is col1|and now col2|and col 3 :)
EOF
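If you want to double-check what actually landed on the topic before declaring the stream, you can read it straight back (just a sketch using the same kafkacat binary):
# Consume everything currently on the topic and exit (-e) once caught up.
kafkacat -b localhost:9092 -t SampleData -C -e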
Declare a stream over it
CREATE STREAM sample_delimited (
column1 varchar(1000),
column2 varchar(1000),
column3 varchar(1000))
WITH (KAFKA_TOPIC='SampleData', VALUE_FORMAT='DELIMITED', VALUE_DELIMITER='|');
Query the stream to make sure it works
ksql> SET 'auto.offset.reset' = 'earliest';
Successfully changed local property 'auto.offset.reset' to 'earliest'. Use the UNSET command to revert your change.
ksql> SELECT * FROM sample_delimited emit changes limit 1;
+----------------+--------+---------------+--------------+--------------+
|ROWTIME |ROWKEY |COLUMN1 |COLUMN2 |COLUMN3 |
+----------------+--------+---------------+--------------+--------------+
|1583933584658 |null |this is col1 |and now col2 |and col 3 :) |
Limit Reached
Query terminated
Reserialise the data to Avro:
CREATE STREAM sample_avro WITH (KAFKA_TOPIC='SampleDataAvro', VALUE_FORMAT='AVRO') AS SELECT * FROM sample_delimited;
Dump the contents of the topic - note that it is now Avro:
ksql> print SampleDataAvro;
Key format: UNDEFINED
Value format: AVRO
rowtime: 3/11/20 1:33:04 PM UTC, key: <null>, value: {"COLUMN1": "this is col1", "COLUMN2": "and now col2", "COLUMN3": "and col 3 :)"}
The error that you're hitting is a result of bug #4200. You can wait for the next release of Confluent Platform, or use standalone ksqlDB, in which the issue is already fixed.
Here's using ksqlDB 0.7.1 streaming the data to MS SQL:
CREATE SINK CONNECTOR SINK_MSSQL WITH (
'connector.class' = 'io.confluent.connect.jdbc.JdbcSinkConnector',
'connection.url' = 'jdbc:sqlserver://mssql:1433',
'connection.user' = 'sa',
'connection.password' = 'Admin123',
'topics' = 'SampleDataAvro',
'key.converter' = 'org.apache.kafka.connect.storage.StringConverter',
'auto.create' = 'true',
'insert.mode' = 'insert'
);
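If rows don't show up in MS SQL, the connector's state can be inspected through the Kafka Connect REST API (a sketch; it assumes the Connect worker that ksqlDB uses listens on localhost:8083):
# Show whether the connector and its tasks are RUNNING or FAILED, including
# the stack trace of any failed task.
curl -s http://localhost:8083/connectors/SINK_MSSQL/status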
Now query the data in MS SQL
1> Select @@version
2> go
---------------------------------------------------------------------
Microsoft SQL Server 2017 (RTM-CU17) (KB4515579) - 14.0.3238.1 (X64)
Sep 13 2019 15:49:57
Copyright (C) 2017 Microsoft Corporation
Developer Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)
(1 rows affected)
1> SELECT * FROM SampleDataAvro;
2> GO
COLUMN3 COLUMN2 COLUMN1
-------------- --------------- ------------------
and col 3 :) and now col2 this is col1
(1 rows affected)

Offset commit failed on partition

My Kafka consumer continuously prints error messages.
I built a cluster (Kafka version 2.3.0) of 5 machines, with a topic that has one partition (partition 0) and 3 replicas. When I consume it with the kafka-clients API, it keeps printing this exception:
Offset commit failed on partition test-0 at offset 1: The request timed out.
However, reading and writing messages on this topic works fine.
Consumer configuration:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [qs-kfk-01:9092, qs-kfk-02:9092, qs-kfk-03:9092, qs-kfk-04:9092, qs-kfk-05:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = erp-sales
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
Java code:
ConsumerRecords<K, V> consumerRecords = _kafkaConsumer.poll(50L);
for (ConsumerRecord<K, V> record : consumerRecords.records(topic)) {
    kafkaConsumer.receive(topic, record.key(), record.value(), record.partition(), record.offset());
}
I have tried the following:
Increasing the request timeout to 5 minutes (did not work).
Switching to another group-id (this works): as long as I use this particular group-id, the problem occurs.
Killing the machine where the group coordinator is located; after the coordinator switches to another machine, the error remains.
I get continuous error output in the console:
2019-11-03 16:21:11.687 DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=erp-sales] Sending asynchronous auto-commit of offsets {test-0=OffsetAndMetadata{offset=1, metadata=''}}
2019-11-03 16:21:11.704 ERROR org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=erp-sales] Offset commit failed on partition test-0 at offset 1: The request timed out.
2019-11-03 16:21:11.704 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=erp-sales] Group coordinator qs-kfk-04:9092 (id: 2147483643 rack: null) is unavailable or invalid, will attempt rediscovery
2019-11-03 16:21:11.705 DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=erp-sales] Asynchronous auto-commit of offsets {test-0=OffsetAndMetadata{offset=1, metadata=''}} failed due to retriable error: {}
org.apache.kafka.clients.consumer.RetriableCommitFailedException: Offset commit failed with a retriable exception. You should retry committing the latest consumed offsets.
Caused by: org.apache.kafka.common.errors.TimeoutException: The request timed out.
2019-11-03 16:21:11.708 DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=erp-sales] Manually disconnected from 2147483643. Removed requests: OFFSET_COMMIT.
2019-11-03 16:21:11.708 DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=consumer-1, groupId=erp-sales] Cancelled request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=4, clientId=consumer-1, correlationId=42) due to node 2147483643 being disconnected
2019-11-03 16:21:11.708 DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=erp-sales] Asynchronous auto-commit of offsets {test-0=OffsetAndMetadata{offset=1, metadata=''}} failed due to retriable error: {}
org.apache.kafka.clients.consumer.RetriableCommitFailedException: Offset commit failed with a retriable exception. You should retry committing the latest consumed offsets.
Caused by: org.apache.kafka.common.errors.DisconnectException: null
Why does the offset commit fail?
Why is the offset information in the Kafka cluster still correct even though the commit fails?
Thank you for your help.
Group coordinator unavailability is the main cause of this issue.
Group coordinator is unavailable: this issue has already been raised in the Kafka community (KAFKA-7017).
You can fix it by deleting the internal offsets topic (__consumer_offsets) and restarting the cluster.
You can go through the following post for a few more details:
https://www.stuckintheloop.com/posts/my_first_kafka_outage/
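To see which broker is currently acting as the group coordinator and whether the group's offsets are advancing, the consumer-groups tool can also help (a sketch against the brokers named in the question):
# Show current offsets, lag and the members of the group.
kafka-consumer-groups.sh --bootstrap-server qs-kfk-01:9092 --describe --group erp-sales
# Show the group state and the coordinator broker.
kafka-consumer-groups.sh --bootstrap-server qs-kfk-01:9092 --describe --group erp-sales --state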
@Rohit Yadav, thanks for the answer. I have switched to another consumer group for the time being and it is working very well. But now the client has another problem, with continuous output:
2019-11-05 14:11:14.892 INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=fs-sales-be] Error sending fetch request (sessionId=2035993355, epoch=1) to node 4: org.apache.kafka.common.errors.DisconnectException.
2019-11-05 14:11:14.892 INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=fs-sales-be] Error sending fetch request (sessionId=181175071, epoch=INITIAL) to node 5: org.apache.kafka.common.errors.DisconnectException.
What is causing this? The status of nodes 4 and 5 is OK.

Failing to run Kafka consumer performance test

The command:
./bin/kafka-consumer-perf-test.sh --topic test_topic --messages 500000 --zookeeper zookeepernode --show-detailed-stats --consumer.config ./conf/connect-distributed.properties
leads to
{metadata.broker.list=kafkanode1:9093,kafkanode2:9093,av3l338p.kafkanode3:9093, request.timeout.ms=30000, client.id=perf-consumer-68473, security.protocol=SASL_SSL}
WARN Fetching topic metadata with correlation id 0 for topics [Set(test_topic)] from broker [BrokerEndPoint(1003,kafkanode3,9093)] failed (kafka.client.ClientUtils$)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99)
[...]
WARN Fetching topic metadata with correlation id 0 for topics [Set(test_topic)] from broker [BrokerEndPoint(1001,kafkanode1,9093)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
[...]
WARN Fetching topic metadata with correlation id 0 for topics [Set(test_topic)] from broker [BrokerEndPoint(1002,kafkanode2,9093)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:122)
Try to add the flag "--new-consumer":
./bin/kafka-consumer-perf-test.sh --new-consumer --broker-list <broker-host>:9093 --topic <topic-read-from> --messages <number-of-messages> --show-detailed-stats --consumer.config consumer.properties --zookeeper <zookeeper-host>:2181
The consumer.properties file looks like this (including the SASL_SSL and Kerberos configuration):
zookeeper.connect=<zookeeper-host>:2181
# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
#consumer group id
group.id=<consumer-group>
security.protocol=SASL_SSL
ssl.keystore.location=<path-to-keystore-file>
ssl.keystore.password=<password>
ssl.truststore.location=<path-to-truststore-file>
ssl.truststore.password=<password>
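Before re-running the perf test, it can help to confirm that this consumer.properties actually authenticates against the SASL_SSL listener, for example with the console consumer (a sketch; host and topic are placeholders, and Kerberos still needs the usual JAAS/krb5 settings passed to the JVM):
# Read a few records with the exact same security settings; if this fails,
# the problem is the SASL_SSL/Kerberos configuration, not the perf tool.
./bin/kafka-console-consumer.sh --bootstrap-server <broker-host>:9093 --topic <topic-read-from> --consumer.config consumer.properties --from-beginning --max-messages 5
Once that succeeds, the perf test itself should run.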
The output should look like:
time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec
2017-06-01 11:29:15:194, 0, 1,6726, 4,1816, 5000, 12500,0000
2017-06-01 11:29:15:207, 0, 3,3916, 859,4761, 10000, 2500000,0000
2017-06-01 11:29:15:208, 0, 5,1009, 1709,3201, 15000, 5000000,0000
2017-06-01 11:29:15:209, 0, 6,7850, 1684,1164, 20000, 5000000,0000
2017-06-01 11:29:15:291, 0, 8,5101, 21,2977, 25000, 61728,3951
2017-06-01 11:29:15:460, 0, 10,2138, 10,0808, 30000, 29585,7988
2017-06-01 11:29:15:462, 0, 11,9258, 1711,9637, 35000, 5000000,0000
2017-06-01 11:29:15:463, 0, 13,6348, 1709,0416, 40000, 5000000,0000
2017-06-01 11:29:15:673, 0, 15,3255, 8,0511, 45000, 23809,5238
2017-06-01 11:29:15:962, 0, 17,0417, 5,9384, 50000, 17301,0381
2017-06-01 11:29:15:963, 0, 18,7520, 1710,2280, 55000, 5000000,0000
2017-06-01 11:29:15:963, 0, 20,4568, Infinity, 60000, Infinity
2017-06-01 11:29:16:090, 0, 22,1223, 13,1138, 65000, 39370,0787
2017-06-01 11:29:16:282, 0, 23,8131, 8,8062, 70000, 26041,6667
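Note that on newer Kafka releases the old-consumer options have been removed from this tool (there is no --zookeeper or --new-consumer any more), so the equivalent invocation is roughly the following sketch; depending on the exact version, the broker flag is --bootstrap-server or the older --broker-list:
# Kafka 2.x and later: the perf test talks to the brokers directly.
./bin/kafka-consumer-perf-test.sh --bootstrap-server <broker-host>:9093 --topic <topic-read-from> --messages 500000 --show-detailed-stats --consumer.config consumer.properties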