How to run your app on Kubernetes through a Jenkins build/post-build step - plugins

I tried looking for a Jenkins plugin (like AWS CodeDeploy) so that I could deploy my application to a Kubernetes cluster. So far, I have been successful at pushing the image to a Docker registry and deploying to Kubernetes through a few command-line build steps.
Looking at the CloudBees announcement, this seems possible.
Installing the Kubernetes plugin gave me errors; I can attach a screenshot if that helps.
Also, it seems this plugin lets you run slaves in Docker containers, not deploy your own app.
After looking at this video, it seemed I could accomplish this using the withKubernetes workflow stage.
However, adding that line to my workflow script gives me the following error:
java.lang.NoSuchMethodError: No such DSL method withKubernetes found among [archive, bat, build, catchError, checkout, dir, dockerFingerprintFrom, dockerFingerprintRun, echo, error, fileExists, git, input, load, mail, node, parallel, publishHTML, pwd, readFile, retry, sh, sleep, stage, stash, step, svn, timeout, tool, unarchive, unstash, waitUntil, withDockerContainer, withDockerRegistry, withDockerServer, withEnv, wrap, writeFile, ws]
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:107)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:112)
at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:75)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:123)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:15)
at WorkflowScript.run(WorkflowScript:17)
at Unknown.Unknown(Unknown)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:69)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:106)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:79)
at sun.reflect.GeneratedMethodAccessor290.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:40)
at com.cloudbees.groovy.cps.Next.step(Next.java:58)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:145)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:106)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:164)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:271)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$000(CpsThreadGroup.java:71)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:180)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:178)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:47)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

The Jenkins Kubernetes plugin is (so far) only for running slaves dynamically in a Kubernetes cluster; it does not deploy your application.
There is not much written about deploying from Jenkins to Kubernetes; maybe the post Continuous Delivery Pipelines with Fabric8 and Jenkins on OpenShift helps.
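In the meantime, scripting the deployment yourself from the workflow works. Here is a minimal sketch, assuming a reasonably recent kubectl is installed on the build node with a kubeconfig that already targets your cluster; the registry, image, and deployment names are all placeholders:
node {
    // Build and push the image; registry and app names are placeholders
    sh "docker build -t registry.example.com/myapp:${env.BUILD_NUMBER} ."
    sh "docker push registry.example.com/myapp:${env.BUILD_NUMBER}"
    // Point the existing deployment at the freshly pushed image; assumes
    // kubectl on this node can already authenticate against the cluster
    sh "kubectl set image deployment/myapp myapp=registry.example.com/myapp:${env.BUILD_NUMBER}"
}
The same commands also work as plain shell build steps in a freestyle job.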

Related

Exception deploying SCDF App with Docker Compose

I am running SCDF with Docker Compose on Ubuntu 20.04 under WSL. I can create and deploy streams using the starter sources/sinks without issue, but when I try to deploy a custom Spring Cloud Stream app, the following happens and I am not sure how to resolve it:
dataflow:>app register --name order-source --type source --uri docker://*******/****:order-producer
Successfully registered application 'source:order-source'
dataflow:>stream create ordering-stream --definition "order-source | log"
Created new stream 'ordering-stream'
dataflow:>stream deploy ordering-stream
Command failed org.springframework.cloud.dataflow.rest.client.DataFlowClientException: SkipperException
SkipperException
org.springframework.cloud.dataflow.rest.client.DataFlowClientException: SkipperException
at org.springframework.cloud.dataflow.rest.client.VndErrorResponseErrorHandler.handleError(VndErrorResponseErrorHandler.java:65)
at org.springframework.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:63)
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:780)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:738)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:672)
at org.springframework.web.client.RestTemplate.postForObject(RestTemplate.java:416)
at org.springframework.cloud.dataflow.rest.client.StreamTemplate.deploy(StreamTemplate.java:141)
at org.springframework.cloud.dataflow.shell.command.StreamCommands.deployStream(StreamCommands.java:167)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:282)
at org.springframework.shell.core.SimpleExecutionStrategy.invoke(SimpleExecutionStrategy.java:68)
at org.springframework.shell.core.SimpleExecutionStrategy.execute(SimpleExecutionStrategy.java:59)
at org.springframework.shell.core.AbstractShell.executeCommand(AbstractShell.java:134)
at org.springframework.shell.core.JLineShell.promptLoop(JLineShell.java:533)
at org.springframework.shell.core.JLineShell.run(JLineShell.java:179)
at java.lang.Thread.run(Thread.java:748)
The skipper server exception includes this:
Caused by: java.io.IOException: Cannot run program "docker" (in directory "/tmp/1627185411996/ordering-stream.order-source-v1"): error=2, No such file or directory
I can include the verbose server output if that would be helpful.
I was able to resolve the problem by referencing the application's jar file locally with the URI file:///root/scdf/order-service-0.0.1-SNAPSHOT.jar. The docker-compose.yml file also needs to be in the same directory as the jar file (~/scdf in this case). I'm still unsure why the Docker Hub URI reference fails, since Docker itself works for SCDF, for all the other services within Docker Compose, and elsewhere.
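For reference, the working registration looks like this in the dataflow shell; note that the Caused by above shows the skipper server trying to execute the docker binary and not finding it, which is exactly what the local file URI sidesteps (the path must be visible to the skipper server, e.g. mounted into its container):
dataflow:>app register --name order-source --type source --uri file:///root/scdf/order-service-0.0.1-SNAPSHOT.jar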

WSO2 Integration Studio v6.5.0's built-in Kafka template throws NoClassDefFoundError

I've installed WSO2 Integration Studio version 6.5.0 on my Windows workstation and created a project using the built-in Kafka Consumer and Producer template.
Then I configured the project with my own Kafka server settings (topic name "myTopic").
I then right-clicked the composite application and chose Export Project Artifacts and Run.
The Console window displayed the following messages at the very top:
[2019-06-25 09:23:45,499] [micro-integrator] INFO - LibraryArtifactDeployer Synapse Library named '{org.wso2.carbon.connector}kafkaTransport' has been deployed from file : C:\IntegrationStudio\runtime\microesb\tmp\carbonapps\-1234\1561465425230TestCompositeApplication_1.0.0.car\kafkaTransport-connector_2.0.6\kafkaTransport-connector-2.0.6.zip
[2019-06-25 09:23:45,517] [micro-integrator] INFO - SynapseImportFactory Successfully created Synapse Import: kafkaTransport
[2019-06-25 09:23:45,533] [micro-integrator] ERROR - ClassMediatorFactory
Error in instantiating class :
org.wso2.carbon.connector.KafkaProduceConnector
java.lang.NoClassDefFoundError: org/apache/kafka/common/header/Headers
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
[snipped rest for clarity]
I've tried uninstalling Integration Studio and running it with elevated rights, to no avail.
I expected the project to deploy normally.
EDIT: after copying:
kafka_2.11-2.2.1.jar
metrics-core-2.2.0.jar
zkclient-0.11.jar
kafka-clients-2.2.1.jar
scala-library-2.11.12.jar
zookeeper-3.4.13.jar
to the EI_HOME/lib directory, the exception changed to:
org.apache.axis2.deployment.DeploymentException: kafka/consumer/ConsumerTimeoutException
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:219)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifactType(SynapseAppDeployer.java:1099)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:114)
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(ApplicationManager.java:272)
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(CappAxis2Deployer.java:72)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
[snipped for clarity]
Caused by: org.apache.axis2.deployment.DeploymentException: kafka/consumer/ConsumerTimeoutException
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:207)
... 87 more
Caused by: java.lang.NoClassDefFoundError: kafka/consumer/ConsumerTimeoutException
at org.wso2.carbon.inbound.endpoint.protocol.kafka.KAFKAPollingConsumer.startsMessageListener(KAFKAPollingConsumer.java:90)
at org.wso2.carbon.inbound.endpoint.protocol.kafka.KAFKAProcessor.init(KAFKAProcessor.java:96)
at org.apache.synapse.inbound.InboundEndpoint.init(InboundEndpoint.java:79)
at org.apache.synapse.deployers.InboundEndpointDeployer.deploySynapseArtifact(InboundEndpointDeployer.java:57)
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:197)
... 87 more
Caused by: java.lang.ClassNotFoundException: kafka.consumer.ConsumerTimeoutException cannot be found by synapse-core_2.1.7.wso2v111
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:475)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 92 more
Have you copied the required jars from the KAFKA_HOME/libs folder to EI_HOME/lib? If yes, please share your code so the issue can be looked at in more detail.
According to this documentation, https://docs.wso2.com/display/EI650/Kafka+Inbound+Protocol, the recommended Kafka version is kafka_2.9.2-0.8.1.1. You can download it from http://kafka.apache.org/downloads.html. Please use those jars and copy them to EI_HOME/lib. There is a GitHub issue for this as well: https://github.com/wso2/product-ei/issues/2239
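As a sketch, the copy step is just this, where KAFKA_HOME points at the extracted kafka_2.9.2-0.8.1.1 distribution and EI_HOME at your Enterprise Integrator installation (both placeholders); restart the server afterwards so the jars are picked up:
# Copy the Kafka jars shipped with the recommended version into the EI runtime
cp "$KAFKA_HOME"/libs/*.jar "$EI_HOME"/lib/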
I may be a bit too late, but we have been using a custom inbound endpoint for Kafka. We faced exactly the same issue, and that was the only way to fix it.
You can use https://github.com/wso2-extensions/esb-inbound-kafka/blob/master/docs/config.md to configure it.

Eclipse UI tests fail with gtk_init_check() on a Red Hat Jenkins server

When running Tycho UI tests for Eclipse on a Red Hat server (6.7) via Jenkins, the exception below occurs. I know that a graphical subsystem must be installed and running, but something seems to be amiss with my setup. I have already installed GTK via "yum groupinstall Desktop".
org.eclipse.swt.SWTError: No more handles [gtk_init_check() failed]
at org.eclipse.swt.SWT.error(SWT.java:4517)
at org.eclipse.swt.widgets.Display.createDisplay(Display.java:908)
at org.eclipse.swt.widgets.Display.create(Display.java:892)
at org.eclipse.swt.graphics.Device.<init>(Device.java:156)
at org.eclipse.swt.widgets.Display.<init>(Display.java:512)
at org.eclipse.swt.widgets.Display.<init>(Display.java:503)
at org.eclipse.ui.internal.Workbench.createDisplay(Workbench.java:790)
at org.eclipse.ui.PlatformUI.createDisplay(PlatformUI.java:162)
at org.eclipse.ui.internal.ide.application.IDEApplication.createDisplay(IDEApplication.java:169)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:111)
at org.eclipse.tycho.surefire.osgibooter.UITestApplication.runApplication(UITestApplication.java:31)
at org.eclipse.tycho.surefire.osgibooter.AbstractUITestApplication.run(AbstractUITestApplication.java:115)
at org.eclipse.tycho.surefire.osgibooter.UITestApplication.start(UITestApplication.java:37)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:380)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:669)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:608)
at org.eclipse.equinox.launcher.Main.run(Main.java:1515)
at org.eclipse.equinox.launcher.Main.main(Main.java:1488)
A virtual display must be running. One can use Xvfb for this, as described at http://www-01.ibm.com/support/docview.wss?uid=swg21421214. On Red Hat one can use:
yum install xorg-x11-server-Xvfb
Xvfb :5 -screen 0 1280x1024x8 -fbdir /tmp &
export DISPLAY=:5
The display number can be chosen arbitrarily. In the Jenkins job the DISPLAY environment variable must be set as well (via the set-env plugin).
To enable this for all jobs, a script as described at https://superuser.com/questions/319040/proper-way-to-start-xvfb-on-startup-on-centos should be set up with chkconfig. Finally, the DISPLAY environment variable should be made available by adding a shell script to /etc/profile.d/ which contains the sole line:
export DISPLAY=:5
While experimenting I also set the default runlevel to 5 in /etc/inittab, but I did not investigate whether this is strictly necessary.
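A minimal sketch of the two pieces, with display :5 as above (the file names are arbitrary):
# /etc/profile.d/xvfb-display.sh -- make the display visible to all logins
export DISPLAY=:5
# /etc/rc.local -- start Xvfb at boot, as a simpler alternative to a
# full chkconfig-managed init script
/usr/bin/Xvfb :5 -screen 0 1280x1024x8 -fbdir /tmp &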

Deleted Google Storage directory appears to "already exist" when calling Spark DataFrame.saveAsParquetFile()

After I deleted a Google Cloud Storage directory through the Google Cloud Console (the directory had been generated by an earlier Spark (ver 1.3.1) job), re-running the job always fails; the directory still seems to exist as far as the job is concerned, yet I cannot find it with gsutil.
Is this a bug, or is there anything I missed? Thanks!
The error I got:
java.lang.RuntimeException: path gs://<my_bucket>/job_dir1/output_1.parquet already exists.
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.parquet.DefaultSource.createRelation(newParquet.scala:112)
at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:240)
at org.apache.spark.sql.DataFrame.save(DataFrame.scala:1196)
at org.apache.spark.sql.DataFrame.saveAsParquetFile(DataFrame.scala:995)
at com.xxx.Job1$.execute(Job1.scala:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
It appears you might be running into a known bug with the NFS list-consistency cache: https://github.com/GoogleCloudPlatform/bigdata-interop/issues/5
It was fixed in the latest release; if you upgrade by deploying a new cluster with bdutil-1.3.1 (announced here: https://groups.google.com/forum/#!topic/gcp-hadoop-announce/vstNuV0LpDc), the problem should go away. If you need to upgrade in place, you can download the latest gcs-connector-1.4.1 jarfile onto your master and worker nodes under /home/hadoop/hadoop-install/lib/gcs-connector-*.jar and then restart the Spark daemons:
sudo sudo -u hadoop /home/hadoop/spark-install/sbin/stop-all.sh
sudo sudo -u hadoop /home/hadoop/spark-install/sbin/start-all.sh
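A sketch of the in-place jar swap on each node; the download URL is an assumption, so take the jar from wherever the 1.4.1 release is actually published:
# Run on the master and every worker node; keep the old jar as a backup
cd /home/hadoop/hadoop-install/lib
sudo mv gcs-connector-*.jar /tmp/
# Illustrative URL -- verify against the release announcement
sudo wget https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-1.4.1-hadoop1.jar
Then run the stop-all.sh/start-all.sh commands above.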

Issue with MyEclipse Proxy Connection

I am unable to get MyEclipse to connect to the Marketplace. I am aware of the proxy setup. These are the steps I followed, both within a proxy environment and with a direct connection.
A. Within the company network (browsers use an automatic configuration script):
1. Chose the Native option. Does not work.
2. Chose the Manual option. Set the domain and username. Opened the proxy script to figure out the available proxy servers and verified independently that these proxy servers work. Does not work.
3. Modified the vmargs to provide the http host, user, password and port properties (see the sketch after this list). Does not work.
4. Repeated steps 1-3 with restarts of Eclipse.
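For reference, the vmargs from step 3 were of this form, added after the -vmargs line in the .ini file; every value below is a placeholder:
-Dhttp.proxyHost=proxy.example.com
-Dhttp.proxyPort=8080
-Dhttp.proxyUser=MYDOMAIN\myuser
-Dhttp.proxyPassword=mypassword
-Dhttps.proxyHost=proxy.example.com
-Dhttps.proxyPort=8080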
B. Within the home environment (direct connection to the internet):
1. Tried the Direct option. Does not work.
2. Tried the Native option. Does not work.
The error message that I constantly see (in the error logs) is this:
java.lang.reflect.InvocationTargetException
at org.eclipse.epp.internal.mpc.ui.commands.MarketplaceWizardCommand$3.run(MarketplaceWizardCommand.java:203)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
Caused by: org.eclipse.core.runtime.CoreException: HTTP Server Unknown HTTP Response Code (-1):http://marketplace.eclipse.org/catalogs/api/p
at org.eclipse.equinox.internal.p2.transport.ecf.RepositoryTransport.stream(RepositoryTransport.java:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.eclipse.epp.internal.mpc.core.util.AbstractP2TransportFactory.invokeStream(AbstractP2TransportFactory.java:35)
at org.eclipse.epp.internal.mpc.core.util.TransportFactory$1.stream(TransportFactory.java:69)
at org.eclipse.epp.internal.mpc.core.service.RemoteMarketplaceService.processRequest(RemoteMarketplaceService.java:141)
at org.eclipse.epp.internal.mpc.core.service.RemoteMarketplaceService.processRequest(RemoteMarketplaceService.java:80)
at org.eclipse.epp.internal.mpc.core.service.DefaultCatalogService.listCatalogs(DefaultCatalogService.java:36)
at org.eclipse.epp.internal.mpc.ui.commands.MarketplaceWizardCommand$3.run(MarketplaceWizardCommand.java:200)
... 1 more
Caused by: org.eclipse.ecf.filetransfer.BrowseFileTransferException: Could not connect to http://marketplace.eclipse.org/catalogs/api/p
at com.genuitec.pulse2.common.http.ecf.PulseRetrieveFileTransfer.openStreams(Unknown Source)
at org.eclipse.ecf.provider.filetransfer.retrieve.AbstractRetrieveFileTransfer.sendRetrieveRequest(AbstractRetrieveFileTransfer.java:889)
at org.eclipse.ecf.provider.filetransfer.retrieve.AbstractRetrieveFileTransfer.sendRetrieveRequest(AbstractRetrieveFileTransfer.java:576)
at org.eclipse.ecf.provider.filetransfer.retrieve.MultiProtocolRetrieveAdapter.sendRetrieveRequest(MultiProtocolRetrieveAdapter.java:106)
at org.eclipse.equinox.internal.p2.transport.ecf.FileReader.sendRetrieveRequest(FileReader.java:349)
at org.eclipse.equinox.internal.p2.transport.ecf.FileReader.read(FileReader.java:213)
at org.eclipse.equinox.internal.p2.transport.ecf.RepositoryTransport.stream(RepositoryTransport.java:153)
... 11 more
Is there any alternative, or any other step I can take to resolve this problem? I know I could fall back to a manual update by downloading the plugin and so on, but I really want to solve this issue.
MyEclipse Version Information:
MyEclipse Blue Edition
Version: 10.7.1 Blue
Build id: 10.7.1-Blue-20130201
Apparently it could be a bug.
I just read the whole bug report here.
I tried adding the VM arguments to ensure that the HttpClient workaround could be achieved via a configuration change. That did not work.
However, I was able to remove the HttpClient libraries from the plugins folder, courtesy of comments #27 and #29 on the bug report.
Now I'm able to connect both over the proxy and directly.
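A sketch of that removal, with a backup so it is easy to undo; the exact bundle file names vary by installation, so list them before moving anything:
# List the HttpClient bundles, then move them out of the way
cd /path/to/MyEclipse/plugins
ls | grep -i httpclient
mkdir -p ../plugins-disabled
mv *httpclient*.jar ../plugins-disabled/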