I am trying to run Apache Atlas in a standalone fashion on Ubuntu - meaning without having to set up Solr and/or HBase.
What I did (according to the documentation: http://atlas.apache.org/0.8.1/InstallationSteps.html) was clone the Git repository and build the Maven project with embedded HBase and Solr:
mvn clean package -Pdist,embedded-hbase-solr
I unpacked the resulting tar.gz file and executed bin/atlas_start.py - without having changed any configuration. To my understanding of the documentation, that should actually start up HBase along with Atlas - right?
This is what I find in logs/application.log:
2017-11-30 17:14:24,093 INFO - [main:] ~ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (Atlas:216)
2017-11-30 17:14:24,093 INFO - [main:] ~ Server starting with TLS ? false on port 21000 (Atlas:217)
2017-11-30 17:14:24,093 INFO - [main:] ~ <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< (Atlas:218)
2017-11-30 17:14:27,684 INFO - [main:] ~ No authentication method configured. Defaulting to simple authentication (LoginProcessor:102)
2017-11-30 17:14:28,527 INFO - [main:] ~ Logged in user daniel (auth:SIMPLE) (LoginProcessor:77)
2017-11-30 17:14:31,777 INFO - [main:] ~ Not running setup per configuration atlas.server.run.setup.on.start. (SetupSteps$SetupRequired:189)
2017-11-30 17:14:39,456 WARN - [main-SendThread(localhost:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:110$
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-11-30 17:14:39,594 WARN - [main:] ~ Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = Connecti$
2017-11-30 17:14:40,593 WARN - [main-SendThread(localhost:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:110$
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
...
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-11-30 17:14:56,185 WARN - [main:] ~ Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = Connecti$
2017-11-30 17:14:56,186 ERROR - [main:] ~ ZooKeeper exists failed after 4 attempts (RecoverableZooKeeper:277)
2017-11-30 17:14:56,186 WARN - [main:] ~ hconnection-0x1dba4e060x0, quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid) (ZKUtil:544)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:541)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
To me it reads as if HBase (and ZooKeeper) are not started by the script at all.
Am I missing something?
Thanks for your hints!
OK, meanwhile I figured out the issue. The start script obviously does not execute the script conf/atlas-env.sh, which sets some environment variables. Among these are MANAGE_LOCAL_HBASE and MANAGE_LOCAL_SOLR. So if you set those two env vars to true (and set JAVA_HOME properly, which is needed for the embedded HBase), then Atlas automatically starts HBase and Solr - and we get a locally running instance of Atlas!
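For example, a minimal way to set this before starting (a sketch; either export the variables in your shell or uncomment them in conf/atlas-env.sh - the JAVA_HOME path is only an example and has to match your own JDK):
export MANAGE_LOCAL_HBASE=true    # let the start script bring up the embedded HBase
export MANAGE_LOCAL_SOLR=true     # let the start script bring up the embedded Solr
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64    # example path, adjust to your JDK
bin/atlas_start.py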
Maybe this helps someone who comes across the same issue in the future!
Update March 2021
There are two ways of running Apache Atlas:
A) Building it from scratch:
git clone https://github.com/apache/atlas
mvn clean install -DskipTests
mvn clean package -Pdist -DskipTests
Running atlas_start.py:
python <atlas-directory>/conf/atlas_start.py
B) Using the Docker image:
docker-compose.yml
version: "3.3"
services:
atlas:
image: sburn/apache-atlas
container_name: atlas
ports:
- "21000:21000"
volumes:
- "./bash_script:/app"
command: bash -exc "/opt/apache-atlas-2.1.0/bin/atlas_start.py"
docker-compose up
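Once the container is up, a quick way to check that Atlas is responding (a sketch; admin/admin is the default Atlas login, adjust it if you changed the credentials):
curl -u admin:admin http://localhost:21000/api/atlas/admin/version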
Related
Recently I've been experimenting with deploying stateful applications onto Kubernetes. For my dev environment everything is on-premise, either on my local machine or on remote VMs. I deployed OpenSearch through its Helm chart, got it and Dashboards up and running, and everything was going well. I am now trying to set up data-prepper running as a Docker container on my local machine (the Kubernetes cluster is on remote VMs, not sure if this matters). I have the kube service that defines access to OpenSearch port-forwarded to my machine and am able to access it using "curl -u : https://localhost:9200 -k". Since my only interest is seeing it up and running, I don't care (yet) that it is insecure. When I set up my data-prepper pipeline to hit OpenSearch in the exact same way, it refuses the connection and I'm at a loss as to why.
pipelines.yaml:
simple-sample-pipeline:
  workers: 2
  delay: "5000"
  source:
    random:
  sink:
    - opensearch:
        hosts: [ "https://localhost:9200" ]
        insecure: true
        username: <user>
        password: <admin>
        index: test
data-prepper-config.yaml
ssl: false
Docker command to run container:
docker run --name data-prepper \
-v C:/users/<profile>/documents/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml \
-v C:/users/<profile>/documents/data-prepper.yaml:/usr/share/data-prepper/data-prepper-config.yaml \
opensearchproject/data-prepper:latest
Logs excerpt:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-06-07T19:39:50,959 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperAppConfiguration - Command line args: /usr/share/data-prepper/pipelines.yaml,/usr/share/data-prepper/data-prepper-config.yaml
2022-06-07T19:39:50,960 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperArgs - Using /usr/share/data-prepper/pipelines.yaml configuration file
2022-06-07T19:39:54,599 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building pipeline [simple-sample-pipeline] from provided configuration
2022-06-07T19:39:54,600 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building [random] as source component for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,624 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building buffer for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,634 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building processors for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,635 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building sinks for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,635 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building [opensearch] as sink component
2022-06-07T19:39:54,643 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.OpenSearchSink - Initializing OpenSearch sink
2022-06-07T19:39:54,649 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the username provided in the config.
2022-06-07T19:39:54,789 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the trust all strategy
2022-06-07T19:39:54,881 [main] ERROR com.amazon.dataprepper.plugin.PluginCreator - Encountered exception while instantiating the plugin OpenSearchSink
java.lang.reflect.InvocationTargetException: null
-----
Caused by: java.net.ConnectException: Connection refused
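One way to narrow this down is to repeat the same curl check from inside the container, since localhost there refers to the container itself rather than the Docker host (a sketch; it assumes curl is available in the data-prepper image and reuses the container name from the docker run command above):
# on the host: this should succeed because the port-forward listens there
curl -k -u <user>:<password> https://localhost:9200
# inside the container: if this is refused, the opensearch sink cannot reach
# the port-forward via localhost either
docker exec -it data-prepper curl -k -u <user>:<password> https://localhost:9200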
I'm trying to run a simple test to check that my application is running properly without any issues. My issue is that Faust needs a connection to Kafka on initialization, so I'm trying to run Kafka with ZooKeeper as services, but I'm not able to connect them properly.
Error:
2021-12-16T13:53:51.385341793Z [2021-12-16 13:53:51,385] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
2021-12-16T13:53:51.391012666Z [2021-12-16 13:53:51,390] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
2021-12-16T13:53:51.395158219Z [2021-12-16 13:53:51,395] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
2021-12-16T13:53:51.399485772Z [2021-12-16 13:53:51,397] ERROR Unable to resolve address: zookeeper:2181 (org.apache.zookeeper.client.StaticHostProvider)
2021-12-16T13:53:51.399499707Z java.net.UnknownHostException: zookeeper: Name or service not known
2021-12-16T13:53:51.399503169Z at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
2021-12-16T13:53:51.399506400Z at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)
2021-12-16T13:53:51.399509510Z at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1519)
2021-12-16T13:53:51.399512353Z at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)
2021-12-16T13:53:51.399531020Z at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
2021-12-16T13:53:51.399534098Z at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)
2021-12-16T13:53:51.399537044Z at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)
2021-12-16T13:53:51.399540881Z at org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:88)
2021-12-16T13:53:51.399544771Z at org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:141)
2021-12-16T13:53:51.399548877Z at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:368)
2021-12-16T13:53:51.399553025Z at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1207)
2021-12-16T13:53:51.406655054Z [2021-12-16 13:53:51,406] WARN Session 0x0 for sever zookeeper:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
2021-12-16T13:53:51.406696302Z java.lang.IllegalArgumentException: Unable to canonicalize address zookeeper:2181 because it's not resolvable
2021-12-16T13:53:51.406703099Z at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:78)
2021-12-16T13:53:51.406707676Z at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:41)
2021-12-16T13:53:51.406711700Z at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1161)
2021-12-16T13:53:51.406715631Z at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1210)
2021-12-16T13:53:52.508636206Z [2021-12-16 13:53:52,508] ERROR Unable to resolve address: zookeeper:2181 (org.apache.zookeeper.client.StaticHostProvider)
2021-12-16T13:53:52.508665462Z java.net.UnknownHostException: zookeeper
.gitlab-ci.yml:
.zoo_service: &zoo_service
name: zookeeper:latest
alias: zookeeper
.kafka_service: &kafka_service
name: bitnami/kafka:latest
alias: kafka
faust:
variables:
ALLOW_ANONYMOUS_LOGIN: "yes"
KAFKA_BROKER_ID: 1
KAFKA_CFG_LISTENERS: "PLAINTEXT://:9092"
KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://127.0.0.1:9092"
KAFKA_CFG_ZOOKEEPER_CONNECT: "zookeeper:2181"
ALLOW_PLAINTEXT_LISTENER: "yes"
stage: test
<<: *python_image
services:
- *zoo_service
- *kafka_service
before_script:
- *setup_venv_script
script:
- faust -A runner worker -l info & sleep 15; kill -HUP $!
<<: *load_env
except:
- schedules
I was hoping I'm doing it the right way - sadly there are not many resources I can read about this issue. I understand the issue is between Kafka and ZooKeeper, but I'm not sure how to fix it (I thought this was the correct way). Can two services even communicate with each other in CI?
Thanks!
Glancing over the GitLab CI docs about connecting to different services, they mention a feature flag to allow cross-service communication, so try
faust:
  variables:
    FF_NETWORK_PER_BUILD: 1
    ...
  services:
    ...
Also, for Kafka communication, it needs to advertise its service alias rather than localhost, so change
KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
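Putting both changes together, the relevant parts of the job would look roughly like this (a sketch that keeps the anchors and the other variables from the original file):
faust:
  variables:
    FF_NETWORK_PER_BUILD: 1                                    # one network shared by the job and its services
    ALLOW_ANONYMOUS_LOGIN: "yes"
    KAFKA_BROKER_ID: 1
    KAFKA_CFG_LISTENERS: "PLAINTEXT://:9092"
    KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"   # advertise the service alias, not localhost
    KAFKA_CFG_ZOOKEEPER_CONNECT: "zookeeper:2181"
    ALLOW_PLAINTEXT_LISTENER: "yes"
  services:
    - *zoo_service
    - *kafka_service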
I'm trying to run a Micronaut service that uses PostgreSQL via a docker-compose file, but I'm having the following issue:
21:12:32.741 [main] INFO com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Starting...
21:12:33.744 [main] ERROR com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization.
org.postgresql.util.PSQLException: The connection attempt failed.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:315)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:225)
at org.postgresql.Driver.makeConnection(Driver.java:465)
at org.postgresql.Driver.connect(Driver.java:264)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:81)
at io.micronaut.configuration.jdbc.hikari.HikariUrlDataSource.<init>(HikariUrlDataSource.java:35)
at io.micronaut.configuration.jdbc.hikari.DatasourceFactory.dataSource(DatasourceFactory.java:66)
at io.micronaut.configuration.jdbc.hikari.$DatasourceFactory$DataSource0Definition.build(Unknown Source)
at io.micronaut.context.BeanDefinitionDelegate.build(BeanDefinitionDelegate.java:153)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1979)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2768)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2754)
at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:2425)
at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:2399)
at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:1264)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:1014)
at io.micronaut.configuration.hibernate.jpa.$EntityManagerFactoryBean$HibernateStandardServiceRegistry0Definition.doBuild(Unknown Source)
at io.micronaut.context.AbstractParametrizedBeanDefinition.build(AbstractParametrizedBeanDefinition.java:118)
at io.micronaut.context.BeanDefinitionDelegate.build(BeanDefinitionDelegate.java:149)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1979)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2768)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2754)
at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:2425)
at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:2399)
at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:1264)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:1014)
at io.micronaut.configuration.hibernate.jpa.$EntityManagerFactoryBean$HibernateMetadataSources1Definition.doBuild(Unknown Source)
at io.micronaut.context.AbstractParametrizedBeanDefinition.build(AbstractParametrizedBeanDefinition.java:118)
at io.micronaut.context.BeanDefinitionDelegate.build(BeanDefinitionDelegate.java:149)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1979)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2768)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2754)
at io.micronaut.context.DefaultBeanContext.loadContextScopeBean(DefaultBeanContext.java:2292)
at io.micronaut.context.DefaultBeanContext.initializeContext(DefaultBeanContext.java:1562)
at io.micronaut.context.DefaultApplicationContext.initializeContext(DefaultApplicationContext.java:234)
at io.micronaut.context.DefaultBeanContext.readAllBeanDefinitionClasses(DefaultBeanContext.java:2905)
at io.micronaut.context.DefaultBeanContext.start(DefaultBeanContext.java:231)
at io.micronaut.context.DefaultApplicationContext.start(DefaultApplicationContext.java:180)
at io.micronaut.runtime.Micronaut.start(Micronaut.java:71)
at org.wcode.author.service.ApplicationKt.main(Application.kt:10)
Caused by: java.net.UnknownHostException: st-database
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:220)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:609)
at org.postgresql.core.PGStream.createSocket(PGStream.java:231)
at org.postgresql.core.PGStream.<init>(PGStream.java:95)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
... 46 common frames omitted
Files definition
Following are the definitions of the files I'm using in my current project.
docker-compose.yml
version: '3.8'
services:
  st-author-service:
    image: author-service:latest
    environment:
      - JDBC_URL=jdbc:postgresql://st-database:5432/sentency_db
      - JDBC_USER=<USERNAME>
      - JDBC_PASSWORD=<PASSWORD>
      - JDBC_DRIVER=org.postgresql.Driver
    depends_on:
      - st-database
    networks:
      - database-network
      - internal
  st-database:
    image: postgres:13.3-alpine
    restart: always
    env_file:
      - database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
    networks:
      - database-network
volumes:
  database-data:
networks:
  database-network:
    driver: bridge
    external: false
  internal:
application.yml
micronaut:
  application:
    name: authorService
  server:
    port: 7000
datasources:
  default:
    url: ${JDBC_URL:`jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE`}
    username: ${JDBC_USER:sa}
    password: ${JDBC_PASSWORD:""}
    driverClassName: ${JDBC_DRIVER:org.h2.Driver}
    schema-generate: CREATE_DROP
jpa:
  default:
    entity-scan:
      packages: 'org.wcode.author.service.domain.entities'
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true
Tests
I tested the application.yml pointing to a PostgreSQL database deployed on Docker and it worked; the problem started when I tried to use docker-compose to structure the deployment. To check whether the network definition was working, I deployed pgAdmin together with that configuration and I was able to connect to the database.
After that specific line:
Caused by: java.net.UnknownHostException: st-database
I came to the conclusion that the issue is caused by the host not being found.
I have already done a lot of research and haven't found a way to solve the problem. I can't see what I'm missing.
Thanks in advance for all the help.
Update
I was able to get the connection log from the PostgreSQL container, and only the pgAdmin connection appears; the service is not even reaching the container to connect. The network being used is the same. Here is the log:
2021-08-10 15:43:13.465 GMT [100] LOG: connection received: host=sentency-deploy_pgAdmin_1.sentency-deploy_database-network port=40848
2021-08-10 15:43:13.466 GMT [100] LOG: connection authorized: user=user database=postgres
2021-08-10 15:43:13.478 GMT [101] LOG: connection received: host=sentency-deploy_pgAdmin_1.sentency-deploy_database-network port=40850
2021-08-10 15:43:13.479 GMT [101] LOG: connection authorized: user=user database=sentency_db
2021-08-10 15:43:13.491 GMT [102] LOG: connection received: host=sentency-deploy_pgAdmin_1.sentency-deploy_database-network port=40852
2021-08-10 15:43:13.493 GMT [102] LOG: connection authorized: user=user database=postgres
2021-08-10 15:43:17.750 GMT [103] LOG: connection received: host=sentency-deploy_pgAdmin_1.sentency-deploy_database-network port=40858
Now I think the problem could be the way I'm building the image. My current Dockerfile is:
FROM ghcr.io/graalvm/graalvm-ce:ol7-java11-21.2.0 as build
WORKDIR /author-service
COPY . /author-service
RUN yum install -y -q xz
RUN curl -sL -o - https://github.com/upx/upx/releases/download/v3.96/upx-3.96-amd64_linux.tar.xz | tar xJ
RUN gu install native-image
RUN ./gradlew assemble
RUN native-image --no-fallback --static -jar /author-service/build/libs/author-service-*-all.jar service
RUN ./upx-3.96-amd64_linux/upx -7 /author-service/service
FROM scratch
EXPOSE 7000:7000
COPY --from=build /author-service/service /service
ENTRYPOINT ["./service"]
That's giving me an image of 27.67MB.
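Since the final image is built FROM scratch and has no shell, one way to check name resolution on the compose network is to run a throwaway container attached to it (a sketch; the network name follows Compose's default <project>_<network> naming, which shows up as sentency-deploy_database-network in the log above):
docker run --rm --network sentency-deploy_database-network busybox nslookup st-database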
From the docker compose documentation: depends_on does not wait for [st-database] to be “ready”. So maybe this is caused by the Postgres database not being ready to accept connections at the time your app tries to connect. The UnknownHostException is a bit strange in that case, but that may depend on when Docker binds the port. To fix the problem you would need to write a script that waits for readiness, as in the second example here.
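A minimal sketch of that idea using a Compose healthcheck instead of a separate wait script (it assumes a Compose version that supports depends_on with condition, that database.env sets POSTGRES_USER, and relies on pg_isready, which ships with the official postgres images):
services:
  st-database:
    image: postgres:13.3-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER"]
      interval: 5s
      timeout: 5s
      retries: 10
  st-author-service:
    depends_on:
      st-database:
        condition: service_healthy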
I am now using Alibaba Canal to sync MySQL from datacenter A to datacenter B (Canal is deployed in Kubernetes). After I start the canal-server, it shows an error like this:
[root@canal-server-stable-0 bin]# tail -f /home/canal/logs/canal/canal.log
2021-05-26 11:47:32.329 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2021-05-26 11:47:32.366 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2021-05-26 11:47:32.849 [main] ERROR com.alibaba.otter.canal.deployer.CanalLauncher - ## Something goes wrong when starting up the canal Server:
com.alibaba.otter.canal.common.CanalException: load manager config failed.
Caused by: com.alibaba.otter.canal.common.CanalException: requestGet for canal config error: auth :admin is failed
2021-05-26 11:52:50.402 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2021-05-26 11:52:50.432 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2021-05-26 11:52:50.836 [main] ERROR com.alibaba.otter.canal.deployer.CanalLauncher - ## Something goes wrong when starting up the canal Server:
com.alibaba.otter.canal.common.CanalException: load manager config failed.
Caused by: com.alibaba.otter.canal.common.CanalException: requestGet for canal config error: auth :admin is failed
This is my canal server config:
[root@canal-server-stable-0 bin]# cat ../conf/canal.properties
# register ip
# canal.register.ip = canal-server-stable-0.canal-server-discovery-svc-stable.hades-pro.svc.cluster.local
canal.register.ip = 10.244.5.5
# canal admin config
canal.admin.manager = 10.105.49.36:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 6bb4837eb74329105ee4568dda7dc67ed2ca2ad9
# admin auto register
canal.admin.register.auto = true
canal.admin.register.cluster = online
The hashed password was encrypted from 123456. I am sure the password is right. I tried to find the password in the database, and it matched my config.
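For reference, that value can be reproduced on the command line if it is (as it appears to be) the MySQL-style double-SHA1 digest of the plaintext password (a sketch, assuming openssl and sha1sum are available):
echo -n "123456" | openssl dgst -sha1 -binary | sha1sum
# 6bb4837eb74329105ee4568dda7dc67ed2ca2ad9  -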
I also used Arthas to trace the online canal-admin app:
watch com.alibaba.otter.canal.admin.controller.PollingConfigController auth "{params,returnObj}" -x 3 -b
It shows that the password I pass is 6bb4837eb74329105ee4568dda7dc67ed2ca2ad9. I do not know where it is going wrong now; what should I do to fix it?
You can check the canal admin conf/application.yaml file:
canal:
  adminUser: admin
  adminPasswd: 123456
If you modified the canal.adminPasswd attribute, correct it there accordingly. Hope this helps.
I am trying to get Drill v1.18 running and am facing the error below.
The drill-override.conf points to the ZooKeeper instance, which runs on port 12181. On starting in distributed mode, it fails with the following log output, but embedded mode has no issues.
It looks like a permission issue, but ZooKeeper, Drill, and the ZooKeeper data dir are all running under the same user.
2020-05-10 16:23:01,160 [main] DEBUG o.apache.drill.exec.server.Drillbit - Construction started.
2020-05-10 16:23:01,448 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Connect localhost:12181, zkRoot drill, clusterId: drillbits1
2020-05-10 16:23:01,531 [main] INFO o.a.d.e.s.s.PersistentStoreRegistry - Using the configured PStoreProvider class: 'org.apache.drill.exec.store.sys.store.provider.ZookeeperPersistentStoreProvider'.
2020-05-10 16:23:01,718 [main] DEBUG o.a.drill.exec.ssl.SSLConfigServer - Using Hadoop configuration for SSL
2020-05-10 16:23:01,718 [main] DEBUG o.a.drill.exec.ssl.SSLConfigServer - Hadoop SSL configuration file: ssl-server.xml
2020-05-10 16:23:01,731 [main] DEBUG org.apache.drill.exec.ssl.SSLConfig - Initialized SSL context.
2020-05-10 16:23:01,731 [main] INFO o.a.drill.exec.rpc.user.UserServer - Rpc server configured to use TLS protocol 'TLSv1.2'
2020-05-10 16:23:01,738 [main] INFO o.apache.drill.exec.server.Drillbit - Construction completed (577 ms).
2020-05-10 16:23:01,738 [main] DEBUG o.apache.drill.exec.server.Drillbit - Startup begun.
2020-05-10 16:23:01,738 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Starting ZKClusterCoordination.
2020-05-10 16:23:03,775 [main] ERROR o.apache.drill.exec.server.Drillbit - Failure during initial startup of Drillbit.
org.apache.zookeeper.KeeperException$UnimplementedException: KeeperErrorCode = Unimplemented for /drill
at org.apache.zookeeper.KeeperException.create(KeeperException.java:106)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1538)
at org.apache.curator.utils.ZKPaths.mkdirs(ZKPaths.java:351)
at org.apache.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:230)
at org.apache.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:224)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:67)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:81)
at org.apache.curator.framework.imps.ExistsBuilderImpl.pathInForeground(ExistsBuilderImpl.java:221)
at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:206)
at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:35)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.createContainers(CuratorFrameworkImpl.java:265)
at org.apache.curator.framework.EnsureContainers.internalEnsure(EnsureContainers.java:69)
at org.apache.curator.framework.EnsureContainers.ensure(EnsureContainers.java:53)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.ensurePath(PathChildrenCache.java:596)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.rebuild(PathChildrenCache.java:327)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.start(PathChildrenCache.java:304)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.start(PathChildrenCache.java:252)
at org.apache.curator.x.discovery.details.ServiceCacheImpl.start(ServiceCacheImpl.java:99)
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:145)
at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:220)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:584)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:554)
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:550)
Version 1.17 has no issues starting in distributed mode.
The issue here is with the ZooKeeper version. Perhaps you use a 3.4.x version, but the current version of Drill requires 3.5.x. As a workaround, you may replace the bundled jars/ext/zookeeper-3.5.7.jar and jars/ext/zookeeper-jute-3.5.7.jar with the jars that correspond to your ZooKeeper version.
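A rough outline of that workaround (paths and the 3.4.x version are illustrative; adjust them to your installation and restart the Drillbits afterwards):
cd $DRILL_HOME/jars/ext
# move the bundled 3.5.7 client jars out of the way
mv zookeeper-3.5.7.jar zookeeper-3.5.7.jar.bak
mv zookeeper-jute-3.5.7.jar zookeeper-jute-3.5.7.jar.bak
# drop in the client jar matching your 3.4.x ZooKeeper ensemble
cp /path/to/zookeeper-3.4.14.jar .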
In addition to the answer of Vova Vysotskyi, you may find more information about this issue in the Drill documentation:
https://drill.apache.org/docs/distributed-mode-prerequisites/
Starting in Drill 1.18 the bundled ZooKeeper libraries are upgraded to version 3.5.7, preventing connections to older (< 3.5) ZooKeeper clusters. In order to connect to a ZooKeeper < 3.5 cluster, replace the ZooKeeper library JARs in ${DRILL_HOME}/jars/ext with zookeeper-3.4.x.jar then restart the cluster.