I've run into a problem accessing data through the HBase v1.0 client API.
The following are my HBase and custom ZooKeeper settings:
Hosts file on the HMaster:
172.17.0.2 master
127.0.0.1 localhost
192.168.1.32 master2
hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hduser/zookeeper/data</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
</configuration>
zoo.cfg:
dataDir=/home/hduser/zookeeper/data
clientPort=2181
server.1=master:2888:3888
After I run my client code to connect to ZooKeeper, the connection appears to be successful:
2015-04-09 08:51:37,901 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.1.30:56295
2015-04-09 08:51:37,906 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /192.168.1.30:56295
2015-04-09 08:51:37,939 [myid:] - INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x14c9d46501d000b with negotiated timeout 40000 for client /192.168.1.30:56295
2015-04-09 08:52:04,000 [myid:] - INFO [SessionTracker:ZooKeeperServer@347] - Expiring session 0x14c9d46501d000a, timeout of 40000ms exceeded
2015-04-09 08:52:04,000 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14c9d46501d000a
However, when I try to put data into a table, nothing happens, and I observe no activity in zookeeper.out.
What could the problem be under these circumstances?
The following is the Scala code I use to connect to HBase.
main.scala:
import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.util.Bytes

object Main {
  def main(args: Array[String]): Unit = {
    val con = new HBaseUtils().getConnection
    try {
      val table = con.getTable(TableName.valueOf("TimeIndexTable"))
      val put = new Put(Bytes.toBytes("r2")).addColumn(Bytes.toBytes("post_info"), Bytes.toBytes("abc"), Bytes.toBytes("value"))
      table.put(put)
      println("Success")
    }
    catch {
      case e: Exception => e.printStackTrace()
    }
    finally {
      con.close()
    }
  }
}
HBaseUtils.scala:
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory}

class HBaseUtils {
  private val hbaseURL = "192.168.1.31"
  def getConnection: Connection = ConnectionFactory.createConnection(setConf)
  def setConf: Configuration = {
    val config = HBaseConfiguration.create()
    config.set("hbase.zookeeper.quorum", hbaseURL)
    config
  }
}
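As an aside, one way to make a misconfiguration visible while debugging (a sketch using standard HBase client settings; the values are illustrative, not from the question) is to lower the client's retry counts in setConf, so a bad connection fails fast with an exception instead of appearing to hang:

// Debugging aid (illustrative values): fail fast rather than retrying for minutes.
config.set("hbase.client.retries.number", "3")
config.set("zookeeper.recovery.retry", "1")
config.set("hbase.client.operation.timeout", "10000") // milliseconds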
Related
In my Spring Batch application, I'm using Atomikos (4.0.4) and JTA (1.1). Some of the jobs hung in PROD and acquired all the connections from the DB, which in turn blocked the other jobs triggered in parallel. All of them failed with the errors below.
Error Log 1:
Could not get JDBC Connection; nested exception is com.atomikos.jdbc.AtomikosSQLException: Connection pool exhausted - try increasing 'maxPoolSize' and/or 'borrowConnectionTimeout' on the DataSourceBean.
Error Log 2:
Failed to grow connection pool.
For almost 13 of the jobs, no instance was created in the DB, and in Control-M the logs showed "Null Exception Message intercepted".
Can anyone please advise on this issue? I even tried upgrading the Atomikos version to 5.0.0, but the same issue still occurs.
AtomikosDataSourceBean ads = new AtomikosDataSourceBean();
if (mDevModeDriverClassName.toLowerCase().contains("oracle")) {
    if (!mDevModeDriverClassName.equals("oracle.jdbc.xa.client.OracleXADataSource")) {
        log.warn("DataSource property 'devModeDriverClassName' should be set "
                + "to 'oracle.jdbc.xa.client.OracleXADataSource' when using Oracle! "
                + "Current value is: " + mDevModeDriverClassName);
    }
}
String vUniqueResourceName = "DS-" + UUID.randomUUID();
log.debug("Creating Oracle XA DataSource. uniqueResourceName={}", vUniqueResourceName);
ads.setUniqueResourceName(vUniqueResourceName);
ads.setXaDataSourceClassName(mDevModeDriverClassName); // "oracle.jdbc.xa.client.OracleXADataSource"
ads.setMaxPoolSize((mDevModeMaxSize > 0) ? mDevModeMaxSize : 1); // mDevModeMaxSize = 10
ads.setTestQuery("SELECT 1 FROM DUAL");
Properties xaProps = new Properties();
xaProps.setProperty("user", mDevModeUsername);
xaProps.setProperty("password", mDevModePassword);
xaProps.setProperty("URL", mDevModeJdbcUrl);
ads.setXaProperties(xaProps);
OracleXADataSource xaDataSource = new OracleXADataSource();
xaDataSource.setUser(mDevModeUsername);
xaDataSource.setPassword(mDevModePassword);
xaDataSource.setURL(mDevModeJdbcUrl);
ads.setXaDataSource(xaDataSource);
<bean id="rsDataSource" class="com.sample.CustomDataSource" scope="singleton" destroy-method="close">
<property name="devModeDriverClassName" value="${spring.datasource.driver-class-name}" />
<property name="devModeJdbcUrl" value="${spring.datasource.rs.url}" />
<property name="devModeUsername" value="${spring.datasource.rs.username}" />
<property name="devModePassword" value="${spring.datasource.rs.password}" />
<property name="devModeMaxSize" value="10" />
</bean>
<bean id="transactionManager"
class="org.springframework.transaction.jta.JtaTransactionManager" >
<property name="nestedTransactionAllowed" value="true"/>
<property name="allowCustomIsolationLevels" value="true"/>
<property name="defaultTimeout" value="-1"/>
<property name="transactionManager" ref="txManager"></property>
</bean>
<bean id="txManager" class="com.atomikos.icatch.jta.UserTransactionManager" destroy-method="close">
<property name="forceShutdown" value="true"/>
<property name="transactionTimeout" value="60"></property>
</bean>
You either have a connection leak or your pool size is not configured correctly. Looking at your config, it is most likely the latter:
And almost for 13 jobs no instance got created
ads.setMaxPoolSize((mDevModeMaxSize > 0) ? mDevModeMaxSize : 1); //mDevModeMaxSize =10
<property name="devModeMaxSize" value="10" />
Your connection pool is set to serve at most 10 connections, but you are launching 13 jobs, so it should not be surprising to see the error:
Connection pool exhausted - try increasing 'maxPoolSize'
You need to increase the maxPoolSize accordingly.
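For example, assuming up to 13 jobs genuinely need connections at the same time, the pool needs headroom above that concurrency (the value below is illustrative):

<property name="devModeMaxSize" value="20" />

If jobs legitimately hold connections for long stretches, raising borrowConnectionTimeout on the AtomikosDataSourceBean (a value in seconds) also gives waiting jobs more time before the "pool exhausted" error is thrown, as the error message itself suggests.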
I am coding an application with Akka v2.5.23. The application involves the actors below:
A router actor class named CalculatorRouter
A routee actor class named Calculator
I've configured a PinnedDispatcher when creating the Calculator actors and put a log.info call in the actor class's receive method. I expected the thread-name field in the log file to contain pinned. However, the thread-name field shows default-dispatcher; searching the log file, every line produced by this log.info has default-dispatcher as the thread name. Is there something wrong with my code?
Log file snippet:
09:49:25.116 [server-akka.actor.default-dispatcher-14] INFO handler.Calculator $anonfun$applyOrElse$3 92 - akka://server/user/device/$a/$a Total calc received
The code snippets follow:
import akka.actor.{Actor, ActorLogging, Props, Terminated}
import akka.routing.{ActorRefRoutee, Router, SmallestMailboxRoutingLogic}
import Calculator.Calc

class CalculatorRouter extends Actor with ActorLogging {
  var router = {
    val routees = Vector.fill(5) {
      val r = context.actorOf(Props[Calculator].withDispatcher("calc.my-pinned-dispatcher"))
      context.watch(r)
      ActorRefRoutee(r)
    }
    Router(SmallestMailboxRoutingLogic(), routees)
  }
  def receive = {
    case w: Calc => router.route(w, sender())
    case Terminated(a) =>
      // removeRoutee returns a new Router, so its result must be kept
      router = router.removeRoutee(a)
      val r = context.actorOf(Props[Calculator].withDispatcher("calc.my-pinned-dispatcher"))
      context.watch(r)
      router = router.addRoutee(r)
  }
}
The calc.my-pinned-dispatcher is configured as follows:
calc.my-pinned-dispatcher {
  executor = "thread-pool-executor"
  type = PinnedDispatcher
}
The source code of the Calculator class is as follows:
import akka.actor.{Actor, ActorLogging}
import scala.util.{Failure, Success, Try}
import Calculator.TotalCalc

// UdanRemoteCalculateTotalBalanceTime and Calculated are defined elsewhere in the project.
class Calculator extends Actor with ActorLogging {
  val w = new UdanRemoteCalculateTotalBalanceTime
  def receive = {
    case TotalCalc(fn, ocvFilepath, ratedCapacity, battCount) ⇒
      log.info(s"${self.path} Total calc received")
      Try {
        w.CalculateTotalBalanceTime(1, fn, ocvFilepath, ratedCapacity)
      } match {
        case Success(t) ⇒
          val v = t.getIntData
          sender.!(Calculated(v))(context.parent)
        case Failure(e) ⇒ log.error(e.getMessage)
      }
  }
}

object Calculator {
  sealed trait Calc
  final case class TotalCalc(filename: String, ocvFilepath: String, ratedCapacity: String, batteryCount: Int) extends Calc
}
logback.xml
<configuration debug="true">
  <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
    <!-- reset all previous level configurations of all j.u.l. loggers -->
    <resetJUL>true</resetJUL>
  </contextListener>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/var/log/app.log</file>
    <append>true</append>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- daily rollover -->
      <fileNamePattern>/var/log/app.%d{yyyy-MM-dd}.log</fileNamePattern>
      <!-- keep 100 days' worth of history capped at roughly 30GB total size -->
      <maxHistory>100</maxHistory>
      <totalSizeCap>30000MB</totalSizeCap>
    </rollingPolicy>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} %M %L - %msg%n</pattern>
    </encoder>
  </appender>
  <appender name="ASYNCFILE" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE" />
    <queueSize>500</queueSize>
    <includeCallerData>true</includeCallerData>
  </appender>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} %M %L - %msg%n</pattern>
    </encoder>
  </appender>
  <logger name="application" level="DEBUG"/>
  <root level="INFO">
    <appender-ref ref="ASYNCFILE"/>
  </root>
</configuration>
Update (Mar 4, '20):
Thanks @anand-sai. After I put akka.loggers-dispatcher = "calc.my-pinned-dispatcher" in the conf file, I got my-pinned-dispatcher-xx as the thread name in every line of the log file. I had thought the thread name should indicate the thread on which the Calculator actor's receive method executes, in this case something like 'pinned-dispatcher-xx', since that thread comes from a pinned dispatcher per my configuration. It turns out it indicates the thread used by the logger's dispatcher instead. If that is the case, how do I log the thread name of an actor's message handler?
I think the solution is to add akka.loggers-dispatcher in your application.conf:
calc.my-pinned-dispatcher {
  executor = "thread-pool-executor"
  type = PinnedDispatcher
}
akka.loggers-dispatcher = "calc.my-pinned-dispatcher"
If you search for loggers-dispatcher in the default configuration of Akka, you will find its value to be "akka.actor.default-dispatcher", and we need to override this config as shown above.
EDIT
ActorLogging is asynchronous: when you log using ActorLogging, it sends a message to the logging actor, which by default runs on the default dispatcher. Logback logs the thread that called it, which will be the logging actor's thread, not your actor's thread. To recover the original context, Akka captures a Mapped Diagnostic Context (MDC) with the Akka source (the path of the actor in which the logging was performed), the source thread (the thread in which the logging was performed), and more.
As given in the documentation:
Since the logging is done asynchronously the thread in which the
logging was performed is captured in MDC with attribute name
sourceThread.
The path of the actor in which the logging was performed is available
in the MDC with attribute name akkaSource.
The actor system name in which the logging was performed is available
in the MDC with attribute name sourceActorSystem, but that is
typically also included in the akkaSource attribute.
The address of the actor system, containing host and port if the
system is using cluster, is available through akkaAddress.
For typed actors the log event timestamp is taken when the log call
was made but for Akka’s internal logging as well as the classic actor
logging is asynchronous which means that the timestamp of a log entry
is taken from when the underlying logger implementation is called,
which can be surprising at first. If you want to more accurately
output the timestamp for such loggers, use the MDC attribute
akkaTimestamp. Note that the MDC key will not have any value for a
typed actor.
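For reference, a sketch of how the FILE encoder pattern from the logback.xml above could surface the originating thread, assuming akka-slf4j's Slf4jLogger is enabled so the MDC attributes get populated (%X reads an MDC attribute):

<encoder>
  <pattern>%d{HH:mm:ss.SSS} %X{sourceThread} %X{akkaSource} %-5level %logger{36} - %msg%n</pattern>
</encoder>

With this pattern, the line logged from Calculator's receive should show the pinned dispatcher's thread, whereas [%thread] would keep showing the logging actor's default-dispatcher thread.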
Let me know if it helps!!
I have a problem when running a unit test using specs2 with scalikejdbc 2.4.1 and scalikejdbc-config 2.4.1.
Here is my code:
object PostDAOImplSpec extends Specification {
  sequential

  DBs.setupAll
  implicit val session = AutoSession

  "resolveAll shouldn't have any syntax error" in new AutoRollback {
    val postIds = DB readOnly { implicit session =>
      sql"select post_id from posts".map(_.long(1)).list.apply()
    }
  }

  DBs.closeAll()
}
Here are the logs:
09:11:16.931 [main] DEBUG scalikejdbc.ConnectionPool$ - Registered connection pool : ConnectionPool(url:jdbc:mysql://localhost/bbs, user:root) using factory : <default>
09:11:17.130 [main] DEBUG scalikejdbc.ConnectionPool$ - Registered connection pool : ConnectionPool(url:jdbc:mysql://localhost/bbs, user:root) using factory : <default>
java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'default)
java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'default)
As you can see from the first two lines, scalikejdbc found the database configuration, but it can't initialize the connection pool.
Do you have any idea? Thanks.
The DBs.closeAll() closes your connection pools before your tests actually run: the body of the specification object executes when it is constructed, while the examples run later, so by the time an example needs a connection the pool has already been shut down.
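A minimal sketch of the usual fix, assuming specs2 3.x and scalikejdbc-test (the AutoRollback import below is an assumption; adjust to your setup): tie the pool lifecycle to the specification with BeforeAfterAll, so pools are set up before any example runs and closed only after all examples finish.

import org.specs2.mutable.Specification
import org.specs2.specification.BeforeAfterAll
import scalikejdbc._
import scalikejdbc.config._
import scalikejdbc.specs2.mutable.AutoRollback

object PostDAOImplSpec extends Specification with BeforeAfterAll {
  sequential

  // Runs once before the first example.
  def beforeAll(): Unit = DBs.setupAll()
  // Runs once after the last example, instead of during object construction.
  def afterAll(): Unit = DBs.closeAll()

  "resolveAll shouldn't have any syntax error" in new AutoRollback {
    val postIds = DB readOnly { implicit session =>
      sql"select post_id from posts".map(_.long(1)).list.apply()
    }
    postIds must not(beNull) // illustrative assertion
  }
}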
I am trying to configure my custom ActiveMQ producer to use XA transactions. Unfortunately it doesn't work as expected, because messages are sent to the queue immediately instead of waiting for the transaction to commit.
Here is the producer:
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.ObjectMessage;
import javax.jms.Session;
import org.apache.activemq.command.ActiveMQObjectMessage;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;
import org.springframework.transaction.annotation.Transactional;

public class MyProducer {
    @Autowired
    @Qualifier("myTemplate")
    private JmsTemplate template;

    @Transactional
    public void sendMessage(final Order order) {
        template.send(new MessageCreator() {
            public Message createMessage(Session session) throws JMSException {
                ObjectMessage message = new ActiveMQObjectMessage();
                message.setObject(order);
                return message;
            }
        });
    }
}
And this is the template and connection factory configuration:
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="java:/activemq/ConnectionFactory" />
</bean>
<bean id="myTemplate" class="org.springframework.jms.core.JmsTemplate"
p:connectionFactory-ref="jmsConnectionFactory"
p:defaultDestination-ref="myDestination"
p:sessionTransacted="true"
p:sessionAcknowledgeModeName="SESSION_TRANSACTED" />
As you can see, I am using a ConnectionFactory obtained via JNDI. It is configured on JBoss EAP 6.3:
<subsystem xmlns="urn:jboss:domain:resource-adapters:1.1">
  <resource-adapters>
    <resource-adapter id="activemq-rar.rar">
      <module slot="main" id="org.apache.activemq.ra"/>
      <transaction-support>XATransaction</transaction-support>
      <config-property name="ServerUrl">tcp://localhost:61616</config-property>
      <connection-definitions>
        <connection-definition class-name="org.apache.activemq.ra.ActiveMQManagedConnectionFactory" jndi-name="java:/activemq/ConnectionFactory" enabled="true" use-java-context="true" pool-name="ActiveMQConnectionFactoryPool" use-ccm="true">
          <xa-pool>
            <min-pool-size>1</min-pool-size>
            <max-pool-size>20</max-pool-size>
          </xa-pool>
        </connection-definition>
      </connection-definitions>
    </resource-adapter>
  </resource-adapters>
</subsystem>
When I debug I can see that JmsTemplate is configured properly:
it has a reference to a valid connection factory: org.apache.activemq.ra.ActiveMQConnectionFactory
the connection factory has a reference to a valid transaction manager: org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl
session transacted is set to true
session acknowledge mode is set to SESSION_TRANSACTED (0)
Do you have any idea why these messages are pushed to the queue immediately, and why they are not removed when the transaction is rolled back (e.g. when I throw an exception at the end of the sendMessage method)?
You need to show the rest of your configuration (transaction manager, etc.).
It looks like you don't have transactions enabled in the application context so the template is committing the transaction itself.
Do you have <tx:annotation-driven/> in the context?
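If it is missing, a minimal sketch of the fix (assuming the tx namespace is declared on the beans element and the JTA transaction manager bean is named transactionManager, as in a typical setup):

<tx:annotation-driven transaction-manager="transactionManager"/>

Without this declaration, @Transactional on sendMessage is never intercepted, so each template.send() runs in its own locally committed JMS session and the message reaches the queue immediately.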
I'm using hadoop distcp -update to copy a directory from one HDFS cluster to a different one.
Sometimes (pretty often) I get this kind of exception:
13/07/03 00:20:03 INFO tools.DistCp: srcPaths=[hdfs://HDFS1:51175/directory_X]
13/07/03 00:20:03 INFO tools.DistCp: destPath=hdfs://HDFS2:51175/directory_X
13/07/03 00:25:27 WARN hdfs.DFSClient: src=directory_X, datanodes[0].getName()=***.***.***.***:8550
java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/***.***.***.***:35872 remote=/***.***.***.***:8550]
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:116)
at java.io.DataInputStream.readShort(DataInputStream.java:295)
at org.apache.hadoop.hdfs.DFSClient.getFileChecksum(DFSClient.java:885)
at org.apache.hadoop.hdfs.DFSClient.getFileChecksum(DFSClient.java:822)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:541)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:53)
at org.apache.hadoop.tools.DistCp.sameFile(DistCp.java:1230)
at org.apache.hadoop.tools.DistCp.setup(DistCp.java:1110)
at org.apache.hadoop.tools.DistCp.copy(DistCp.java:666)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
13/07/03 00:26:40 INFO tools.DistCp: sourcePathsCount=8542
13/07/03 00:26:40 INFO tools.DistCp: filesToCopyCount=0
13/07/03 00:26:40 INFO tools.DistCp: bytesToCopyCount=0.0
Does anyone have any idea what it could be?
Using Hadoop 0.20.205.0
I suggest increasing both timeouts: dfs.socket.timeout for the read timeout, and dfs.datanode.socket.write.timeout for the write timeout.
Defaults:
// Timeouts for communicating with DataNode for streaming writes/reads
public static int READ_TIMEOUT = 60 * 1000; // here, 69000 millis > 60000
public static int WRITE_TIMEOUT = 8 * 60 * 1000;
Add the following to your hadoop-site.xml or hdfs-site.xml:
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>3000000</value>
</property>
<property>
  <name>dfs.socket.timeout</name>
  <value>3000000</value>
</property>
Hope that helps.
I think you also want to set dfs.client.socket-timeout.
Here is why: the old property name is deprecated in favor of a new one.
Deprecated property name -> New property name
dfs.socket.timeout -> dfs.client.socket-timeout
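Under the new name, the equivalent setting would look like this in hdfs-site.xml (same illustrative value as above):

<property>
  <name>dfs.client.socket-timeout</name>
  <value>3000000</value>
</property>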