CodeDeploy reports "Deployment Failed: No hosts succeeded" while deploying an S3 .zip revision to an EC2 instance

I'm trying to build an automated CI workflow from Bitbucket to an AWS EC2 instance using Jenkins, which is hosted on a separate EC2 instance.
I created and configured everything needed (IAM roles, AWS CLI, and the CodeDeploy agent) as the following article describes:
https://pranavpshah.wordpress.com/configure-aws-codedeploy/
All the instances are Ubuntu-based and run inside a private VPC, and I'm deploying a Node.js application.
The workflow successfully creates a .zip build in the S3 bucket every time I push to the Bitbucket repo. But in the CodeDeploy dashboard I get the error "Deployment Failed: No hosts succeeded."
The status stays "In progress" for more than 5 minutes every time I start the process.
When the deployment finished with status Failed, I checked the /var/log/aws/codedeploy-agent/codedeploy-agent.log file, and here is what I got:
2015-12-04 17:17:36 INFO [codedeploy-agent(28199)]: Stopping master 27971
2015-12-04 17:17:36 INFO [codedeploy-agent(27971)]: master 27971: Received TERM - stopping children and shutting down
2015-12-04 17:17:36 INFO [codedeploy-agent(27975)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller of master 27971: Received TERM - setting internal shutting down flag and possibly finishing last run
2015-12-04 17:17:55 INFO [codedeploy-agent(27975)]: [Aws::CodeDeployCommand::Client 200 60.113784 0 retries] poll_host_command(host_identifier:"arn:aws:ec2:us-west-2:219450671821:instance/i-348913ed")
2015-12-04 17:17:56 INFO [codedeploy-agent(27975)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller of master 27971: shutting down
2015-12-04 17:17:57 INFO [codedeploy-agent(28219)]: master 28219: Spawned child 1/1
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: Registering Plugins: ["codedeploy"].
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: Loading plugin codedeploy from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/register_plugin
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: Registered Plugins: #<Set: {InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller}>.
2015-12-04 17:17:57 INFO [codedeploy-agent(28223)]: On Premises config file does not exist or not readable
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Configuring deploy control client: Region = "us-west-2"
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Deploy control endpoint override = nil
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Initializing Host Agent: Host Identifier = arn:aws:ec2:us-west-2:219450671821:instance/i-348913ed
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Validating CodeDeploy Plugin Configuration
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: InstanceAgent::Plugins::CodeDeployPlugin::CodeDeployControlCertVerifier: Actual certificate subject is '/C=US/ST=Washington/L=Seattle/O=Amazon.com, Inc./CN=codedeploy-commands.us-west-2.amazonaws.com'
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: InstanceAgent::Plugins::CodeDeployPlugin::CodeDeployControlCertVerifier: Actual certificate subject is '/C=US/ST=Washington/L=Seattle/O=Amazon.com, Inc./CN=codedeploy-commands.us-west-2.amazonaws.com'
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: InstanceAgent::Plugins::CodeDeployPlugin::CodeDeployControlCertVerifier: Actual certificate subject is '/C=US/ST=Washington/L=Seattle/O=Amazon.com, Inc./CN=codedeploy-commands.us-west-2.amazonaws.com'
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: CodeDeploy Plugin Configuration is valid
2015-12-04 17:17:57 DEBUG [codedeploy-agent(28223)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Calling PollHostCommand:
2015-12-04 17:17:58 INFO [codedeploy-agent(28219)]: Started master 28219 with 1 children
2015-12-04 17:18:58 INFO [codedeploy-agent(28223)]: [Aws::CodeDeployCommand::Client 200 60.534255 0 retries] poll_host_command(host_identifier:"arn:aws:ec2:us-west-2:219450671821:instance/i-348913ed")
Am I missing something in the configuration? Any help would be appreciated.

I just checked the detailed information for deployment ID "d-FEPBDKJMC": the instance there is "arn:aws:ec2:us-west-2:219450671821:instance/i-d796060e", not the "arn:aws:ec2:us-west-2:219450671821:instance/i-348913ed" that appears in the host agent log you pasted. So you should probably check the log on the right instance.
All the lifecycle events are skipped for this deployment, and I suspect the host agent is not polling for commands at all. Since you mentioned that the instance is inside a VPC, please make sure the CodeDeploy and S3 endpoints are reachable (the agent needs to connect to these endpoints to perform deployments). There is also a doc about CodeDeploy working with VPCs; see the Security section at https://aws.amazon.com/codedeploy/faqs/.
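For example, one common way to give instances in a private VPC access to S3 is a gateway VPC endpoint. A minimal sketch with the AWS CLI, using placeholder VPC and route-table IDs (at the time, CodeDeploy itself had no VPC endpoint, so the agent also needs a NAT or proxy route to reach codedeploy-commands.us-west-2.amazonaws.com):

# Placeholder IDs; substitute your own VPC and route table.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-west-2.s3 \
    --route-table-ids rtb-0123456789abcdef0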

Are you sure the instance has an IAM role that allows it to access S3? I missed that step myself and did not have the role attached to my instance.
See http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-iam-instance-profile.html#getting-started-create-ec2-role-console
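As a sketch, assuming the instance profile from that guide already exists (the profile name here is a placeholder), it can be attached to a running instance with:

# The role behind the profile needs at least s3:Get* and s3:List* on the revision bucket.
aws ec2 associate-iam-instance-profile \
    --instance-id i-348913ed \
    --iam-instance-profile Name=CodeDeployDemo-EC2-Instance-Profile
# Restart the agent so it picks up the new credentials.
sudo service codedeploy-agent restart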

Related

On-Premise Data-Prepper Connection Refused

Recently I've been experimenting with deploying stateful applications onto Kubernetes. For my dev environment everything is on-premises, either on my local machine or on remote VMs. I deployed OpenSearch through its Helm chart, got it and Dashboards up and running, and everything was going well.
I am now trying to set up Data Prepper running as a Docker container on my local machine (the Kubernetes cluster is on remote VMs; not sure if this matters). I have the kube service that exposes OpenSearch port-forwarded to my machine and am able to access it using "curl -u <user>:<password> https://localhost:9200 -k". Since my only interest is seeing it up and running, I don't care (yet) that it is insecure. When I set up my Data Prepper pipeline to hit OpenSearch in the exact same way, the connection is refused and I'm at a loss as to why.
pipelines.yaml:
simple-sample-pipeline:
  workers: 2
  delay: "5000"
  source:
    random:
  sink:
    - opensearch:
        hosts: [ "https://localhost:9200" ]
        insecure: true
        username: <user>
        password: <admin>
        index: test
data-prepper-config.yaml:
ssl: false
Docker command to run the container:
docker run --name data-prepper \
    -v C:/users/<profile>/documents/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml \
    -v C:/users/<profile>/documents/data-prepper.yaml:/usr/share/data-prepper/data-prepper-config.yaml \
    opensearchproject/data-prepper:latest
Log excerpt:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-06-07T19:39:50,959 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperAppConfiguration - Command line args: /usr/share/data-prepper/pipelines.yaml,/usr/share/data-prepper/data-prepper-config.yaml
2022-06-07T19:39:50,960 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperArgs - Using /usr/share/data-prepper/pipelines.yaml configuration file
2022-06-07T19:39:54,599 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building pipeline [simple-sample-pipeline] from provided configuration
2022-06-07T19:39:54,600 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building [random] as source component for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,624 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building buffer for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,634 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building processors for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,635 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building sinks for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,635 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building [opensearch] as sink component
2022-06-07T19:39:54,643 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.OpenSearchSink - Initializing OpenSearch sink
2022-06-07T19:39:54,649 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the username provided in the config.
2022-06-07T19:39:54,789 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the trust all strategy
2022-06-07T19:39:54,881 [main] ERROR com.amazon.dataprepper.plugin.PluginCreator - Encountered exception while instantiating the plugin OpenSearchSink
java.lang.reflect.InvocationTargetException: null
-----
Caused by: java.net.ConnectException: Connection refused
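Worth noting for this setup: inside the container, localhost refers to the container itself, not the Windows host running the port-forward. A quick hedged check, assuming Docker Desktop (which exposes the host as host.docker.internal) and the placeholder credentials from the pipeline:

# Run a throwaway curl container against the host alias instead of localhost.
docker run --rm curlimages/curl -k -u <user>:<password> https://host.docker.internal:9200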

Unable to start Drill in distributed mode

I am trying to get Drill v1.18 running and am facing the error below.
drill-override.conf points to the ZooKeeper instance, which runs on port 12181. Starting in distributed mode fails with the following log output, but embedded mode has no issues.
It looks like a permission issue, but ZooKeeper, Drill, and the ZooKeeper data dir all run under the same user.
2020-05-10 16:23:01,160 [main] DEBUG o.apache.drill.exec.server.Drillbit - Construction started.
2020-05-10 16:23:01,448 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Connect localhost:12181, zkRoot drill, clusterId: drillbits1
2020-05-10 16:23:01,531 [main] INFO o.a.d.e.s.s.PersistentStoreRegistry - Using the configured PStoreProvider class: 'org.apache.drill.exec.store.sys.store.provider.ZookeeperPersistentStoreProvider'.
2020-05-10 16:23:01,718 [main] DEBUG o.a.drill.exec.ssl.SSLConfigServer - Using Hadoop configuration for SSL
2020-05-10 16:23:01,718 [main] DEBUG o.a.drill.exec.ssl.SSLConfigServer - Hadoop SSL configuration file: ssl-server.xml
2020-05-10 16:23:01,731 [main] DEBUG org.apache.drill.exec.ssl.SSLConfig - Initialized SSL context.
2020-05-10 16:23:01,731 [main] INFO o.a.drill.exec.rpc.user.UserServer - Rpc server configured to use TLS protocol 'TLSv1.2'
2020-05-10 16:23:01,738 [main] INFO o.apache.drill.exec.server.Drillbit - Construction completed (577 ms).
2020-05-10 16:23:01,738 [main] DEBUG o.apache.drill.exec.server.Drillbit - Startup begun.
2020-05-10 16:23:01,738 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Starting ZKClusterCoordination.
2020-05-10 16:23:03,775 [main] ERROR o.apache.drill.exec.server.Drillbit - Failure during initial startup of Drillbit.
org.apache.zookeeper.KeeperException$UnimplementedException: KeeperErrorCode = Unimplemented for /drill
at org.apache.zookeeper.KeeperException.create(KeeperException.java:106)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1538)
at org.apache.curator.utils.ZKPaths.mkdirs(ZKPaths.java:351)
at org.apache.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:230)
at org.apache.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:224)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:67)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:81)
at org.apache.curator.framework.imps.ExistsBuilderImpl.pathInForeground(ExistsBuilderImpl.java:221)
at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:206)
at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:35)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.createContainers(CuratorFrameworkImpl.java:265)
at org.apache.curator.framework.EnsureContainers.internalEnsure(EnsureContainers.java:69)
at org.apache.curator.framework.EnsureContainers.ensure(EnsureContainers.java:53)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.ensurePath(PathChildrenCache.java:596)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.rebuild(PathChildrenCache.java:327)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.start(PathChildrenCache.java:304)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.start(PathChildrenCache.java:252)
at org.apache.curator.x.discovery.details.ServiceCacheImpl.start(ServiceCacheImpl.java:99)
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:145)
at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:220)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:584)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:554)
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:550)
Version 1.17 has no issues starting in distributed mode.
The issue here is the ZooKeeper version: you are probably using a 3.4.x ZooKeeper, but the current version of Drill requires 3.5.x. As a workaround, you may replace jars/ext/zookeeper-3.5.7.jar and jars/ext/zookeeper-jute-3.5.7.jar with the jars that correspond to your ZooKeeper version.
In addition to Vova Vysotskyi's answer, you can find more information about this issue in the Drill documentation:
https://drill.apache.org/docs/distributed-mode-prerequisites/
Starting in Drill 1.18 the bundled ZooKeeper libraries are upgraded to version 3.5.7, preventing connections to older (< 3.5) ZooKeeper clusters. In order to connect to a ZooKeeper < 3.5 cluster, replace the ZooKeeper library JARs in ${DRILL_HOME}/jars/ext with zookeeper-3.4.x.jar then restart the cluster.
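A sketch of that jar swap, assuming ZooKeeper 3.4.14 and a default Drill layout (adjust versions and paths to your installation):

# ZooKeeper 3.4.x ships a single jar; the separate jute jar only exists from 3.5 on.
cd $DRILL_HOME/jars/ext
rm zookeeper-3.5.7.jar zookeeper-jute-3.5.7.jar
cp /path/to/zookeeper-3.4.14/zookeeper-3.4.14.jar .
# Restart the drillbit so it picks up the replaced client library.
$DRILL_HOME/bin/drillbit.sh restart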

Error restoring Rancher: This cluster is currently Unavailable; areas that interact directly with it will not be available until the API is ready

I am trying to back up and restore a Rancher server (single node install), as described here.
After the backup, I turned off the Rancher server node, ran a new Rancher container on a new node (in the same network, but with another IP address), and then restored from the backup file.
After restoring, I logged in to the Rancher UI and it showed the error quoted in the title.
So I checked the logs of the Rancher server, which showed the following:
2019-10-05 16:41:32.197641 I | http: TLS handshake error from 127.0.0.1:38388: EOF
2019-10-05 16:41:32.202442 I | http: TLS handshake error from 127.0.0.1:38380: EOF
2019-10-05 16:41:32.210378 I | http: TLS handshake error from 127.0.0.1:38376: EOF
2019-10-05 16:41:32.211106 I | http: TLS handshake error from 127.0.0.1:38386: EOF
2019/10/05 16:42:26 [ERROR] ClusterController c-4pgjl [user-controllers-controller] failed with : failed to start user controllers for cluster c-4pgjl: failed to contact server: Get https://192.168.94.154:6443/api/v1/namespaces/kube-system?timeout=30s: waiting for cluster agent to connect
2019/10/05 16:44:34 [ERROR] ClusterController c-4pgjl [user-controllers-controller] failed with : failed to start user controllers for cluster c-4pgjl: failed to contact server: Get https://192.168.94.154:6443/api/v1/namespaces/kube-system?timeout=30s: waiting for cluster agent to connect
2019/10/05 16:48:50 [ERROR] ClusterController c-4pgjl [user-controllers-controller] failed with : failed to start user controllers for cluster c-4pgjl: failed to contact server: Get https://192.168.94.154:6443/api/v1/namespaces/kube-system?timeout=30s: waiting for cluster agent to connect
2019-10-05 16:50:19.114475 I | mvcc: store.index: compact 75951
2019-10-05 16:50:19.137825 I | mvcc: finished scheduled compaction at 75951 (took 22.527694ms)
2019-10-05 16:55:19.120803 I | mvcc: store.index: compact 76282
2019-10-05 16:55:19.124813 I | mvcc: finished scheduled compaction at 76282 (took 2.746382ms)
After that, I checked the logs on the master nodes and found that the Rancher agent still tries to connect to the old Rancher server (the old IP address) rather than the new one, which makes the cluster unavailable.
How can I fix this?
You need to re-register the node in Rancher using the following steps.
Update the server-url setting in Rancher by going to Global -> Settings -> server-url.
This should be the full URL, including https://.
Then use this script to re-register the node in Rancher: https://github.com/mattmattox/cluster-agent-tool
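For reference, a hedged sketch of what that re-registration boils down to on a Rancher 2.x downstream cluster: the cluster agent reads the server URL from its CATTLE_SERVER environment variable, so it can also be re-pointed manually (the URL is a placeholder):

# Updating the env var changes the pod template, which redeploys the agent.
kubectl -n cattle-system set env deployment/cattle-cluster-agent CATTLE_SERVER=https://<new-rancher-url>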

com.hazelcast.client.AuthenticationException: Invalid credentials! Principal :null

I have configured my multi-cluster Hazelcast server on Kubernetes via the Kubernetes API discovery strategy (please see Two separate hazelcast clusters in kubernetes), and the members of each cluster successfully discover each other.
My client project runs on the same k8s cluster as my Hazelcast server.
I have added the following dependency to my client project pom:
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-kubernetes</artifactId>
    <version>1.3.1</version>
</dependency>
I have configured my Hazelcast client as given in the official documentation:
clientConfig.getNetworkConfig().getKubernetesConfig()
        .setEnabled(true)
        .setProperty("namespace", "default")
        .setProperty("service-name", "xyz");
(I have a namespace called "default" and k8s service object named "xyz")
These are the logs on client startup. Although it recognized the Hazelcast server pod, it threw an AuthenticationException (expanded below). I also want to point out that it did not seem to try to connect to the correct port.
2019-09-18 12:59:36,699 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.client.HazelcastClient (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] A non-empty group password is configured for the Hazelcast client. Starting with Hazelcast version 3.11, clients with the same group name, but with different group passwords (that do not use authentication) will be accepted to a cluster. The group password configuration will be removed completely in a future release.
2019-09-18 12:59:36,709 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.core.LifecycleService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] HazelcastClient 3.11.1 (20181218 - d294f31) is STARTING
2019-09-18 12:59:36,977 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.spi.discovery.integration.DiscoveryService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Kubernetes Discovery properties: { service-dns: null, service-dns-timeout: 5, service-name: xyz, service-port: 0, service-label: null, service-label-value: true, namespace: default, resolve-not-ready-addresses: false, kubernetes-master: https://kubernetes.default.svc}
2019-09-18 12:59:36,980 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.spi.discovery.integration.DiscoveryService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Kubernetes Discovery activated resolver: ServiceEndpointResolver
2019-09-18 12:59:36,999 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.client.spi.ClientInvocationService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Running with 2 response threads
2019-09-18 12:59:37,060 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.core.LifecycleService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] HazelcastClient 3.11.1 (20181218 - d294f31) is STARTED
2019-09-18 12:59:37,390 [instance=local-service_01.devciny-dock] [local-service_01.devciny-dock.cluster-] INFO com.hazelcast.client.connection.ClientConnectionManager (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Trying to connect to [10.42.1.111]:5701 as owner member
2019-09-18 12:59:37,432 [instance=local-service_01.devciny-dock] [local-service_01.devciny-dock.internal-3] WARN com.hazelcast.client.connection.nio.ClientConnection (Slf4jFactory.java:67) - local-service_01.devciny-dock [instance_identifier] [3.11.1] ClientConnection{alive=false, connectionId=1, channel=NioChannel{/10.42.1.121:39003->/10.42.1.111:5701}, remoteEndpoint=null, lastReadTime=2019-09-18 12:59:37.426, lastWriteTime=2019-09-18 12:59:37.425, closedTime=2019-09-18 12:59:37.431, connected server version=null} closed. Reason: com.hazelcast.client.AuthenticationException[Invalid credentials! Principal: null]
com.hazelcast.client.AuthenticationException: Invalid credentials! Principal: null
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl$AuthCallback.onResponse(ClientConnectionManagerImpl.java:747)
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl$AuthCallback.onResponse(ClientConnectionManagerImpl.java:702)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:130)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:118)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:130)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:118)
at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:255)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80)
Your Hazelcast client tries to connect to 10.42.1.111:5701, and the port is correct: it does find a Hazelcast server there.
What happens next is that it cannot authenticate with the server, which probably means that you didn't specify the cluster group name/password in your Hazelcast client configuration. You can read more on how to do it in this StackOverflow question.
You didn't share the most important part of the configuration related to client authentication: the group config of the clusters and the group config of your client. I suspect the problem is rooted there.
The default authentication compares the group name on the member with the username coming from the client; the username is filled in with the client's group name (by default).
Check the AuthenticationBaseMessageTask code:
private AuthenticationStatus authenticate(UsernamePasswordCredentials credentials) {
    GroupConfig groupConfig = nodeEngine.getConfig().getGroupConfig();
    String nodeGroupName = groupConfig.getName();
    boolean usernameMatch = nodeGroupName.equals(credentials.getUsername());
    return usernameMatch ? AuthenticationStatus.AUTHENTICATED : AuthenticationStatus.CREDENTIALS_FAILED;
}
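As a minimal sketch of aligning the two sides in Hazelcast 3.x (the group name "dev" is a placeholder; it must be the same on members and clients):

// On the member side: the group name the cluster authenticates against.
Config memberConfig = new Config();
memberConfig.getGroupConfig().setName("dev");

// On the client side: the same group name, which becomes the username checked above.
ClientConfig clientConfig = new ClientConfig();
clientConfig.getGroupConfig().setName("dev");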

Hyperledger Fabric in Kubernetes: Not able to instantiate chaincode

Hello everyone, I am working on setting up the default Fabric first-network in Kubernetes, but when I instantiate the chaincode it gives me an error. Below are my peer logs:
2019-07-22 07:25:02.134 UTC [endorser] SimulateProposal -> ERRO 066 [mychannel][c4b4e2ae] failed to invoke chaincode name:"lscc" , error: container exited with 0
github.com/hyperledger/fabric/core/chaincode.(*RuntimeLauncher).Launch.func1
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/runtime_launcher.go:63
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:1333
chaincode registration failed
I am getting this error on the CLI:
2019-07-22 07:24:58.263 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
2019-07-22 07:24:58.264 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
Error: could not assemble transaction, err proposal response was not successful, error code 500, msg chaincode registration failed: container exited with 0
First, check whether all your Docker containers are up and running. If you are simply running the sample network without making any changes to the smart contract or the Docker files, you can stop the network and start it again from scratch (that worked in my case).
I checked my configuration files; in my case it was due to a wrong CORE_PEER_CHAINCODELISTENADDRESS environment variable value for the peer.
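For reference, a hedged sketch of how these variables commonly look for a first-network peer (the hostname and port follow the sample's defaults and may differ in your Kubernetes manifests):

# Listen on all interfaces for chaincode connections; advertise the peer's own address.
CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052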