I'm developing a custom topic-name mapping, and a JAR file has been produced from it.
Since I'm using MirrorMaker v1, I have also added the KAFKA_MIRRORMAKER_MESSAGE_HANDLER and KAFKA_MIRRORMAKER_MESSAGE_HANDLER_ARGS variables to the KafkaMirrorMaker YAML file.
But I don't know how to physically add this custom JAR file to the KafkaMirrorMaker pod. I have checked the KafkaMirrorMaker CRD but haven't found any clue yet.
So, is there a way to let KafkaMirrorMaker download some file(s)/artifact(s) and add the JAR file(s) to the classpath so that the custom MessageHandler can be found?
The helm install command is used to deploy MirrorMaker. The apiVersion of the KafkaMirrorMaker resource I'm currently using is kafka.strimzi.io/v1beta2.
Based on the strimzi tag, I assume you use Strimzi's Mirror Maker v1? To add your own JAR, you would need to build a custom container image.
You could modify the Strimzi project sources and build everything from scratch (you can add your JAR as a dependency to the 3rd-party libs in `docker-images/kafka/...`). But that is rather complicated, as you have to build the whole project.
The easiest way is to just write your own Dockerfile and use the existing Strimzi image as a base image. For example:
FROM quay.io/strimzi/kafka:0.26.0-kafka-3.0.0
# switch to root so the JAR can be copied into /opt/kafka/libs
USER root:root
COPY ./my-jar.jar /opt/kafka/libs/my-jar.jar
# switch back to the unprivileged user the Strimzi image normally runs as
USER 1001
You can build this Dockerfile and push it into your own Docker registry (Docker Hub, Quay, whatever you use). You should make sure the FROM line uses the right image for the Strimzi and Kafka versions you use.
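As a rough sketch, building and pushing it might look like this (the registry and image name are placeholders):
docker build -t my-registry.example.com/my-org/kafka-mirror-maker:0.26.0-custom .
docker push my-registry.example.com/my-org/kafka-mirror-maker:0.26.0-custom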
And once you have it, you have to tell Strimzi to use this image. You can do that either by setting the .spec.image option in the KafkaMirrorMaker custom resource, or by changing the STRIMZI_KAFKA_MIRROR_MAKER_IMAGES environment variable in the Strimzi Cluster Operator deployment and updating the images which should be used there.
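A minimal sketch of the first option (the names, bootstrap addresses, and consumer group below are placeholders; keep the rest of your existing spec, including your message handler env vars, as it is):
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker                 # hypothetical name
spec:
  image: my-registry.example.com/my-org/kafka-mirror-maker:0.26.0-custom   # the custom image built above
  replicas: 1
  consumer:
    bootstrapServers: source-cluster-kafka-bootstrap:9092   # placeholder
    groupId: my-mirror-maker-group                          # placeholder
  producer:
    bootstrapServers: target-cluster-kafka-bootstrap:9092   # placeholder
  include: ".*"
  # ... rest of your existing configuration ...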
I'm attempting to deploy a python server to Google App Engine.
I'm trying to use the gcloud sdk to do so.
It appears the command I need to use is gcloud app deploy.
I get the following error:
me@mymachine:~/development/some-app/backend$ gcloud app deploy
ERROR: (gcloud.app.deploy) Error Response: [3] The directory [~/.config/google-chrome/Default/Cache] has too many files (greater than 1000).
I had to add ~/.config to my .gcloudignore to get past this error.
Why was it looking there at all?
The full repo of my project is public but I believe I've included the relevant portion.
I looked at your linked repo and there aren't any yaml files. As far as I know, a GAE project needs an app.yaml file because that file tells GAE what your runtime is so that GAE knows how to deploy/run your code. In fact, according to the gcloud app deploy documentation, if you don't specify any yaml files to be deployed, it will default to app.yaml in the current directory. If it can't find any in the current directory, it will try to build one.
Your repo also shows you have a Dockerfile. GAE documentation for custom runtimes says ...Custom runtimes let you build apps that run in an environment defined by a Dockerfile... In the app.yaml file for custom runtimes, you will have the following entry
runtime: custom
env: flex
Since you don't have an app.yaml file, and you have a Dockerfile in which you are downloading and installing Chrome, it seems to me that gcloud app deploy is trying to infer your runtime, and this has led to it executing some or all of the contents of the Dockerfile before it attempts to push it to production. That is what made it take a peek at the cache directory on your local machine until you explicitly told it to ignore it. To be clear, I'm not 100% sure of this; I'm just trying to see if I can draw a logical conclusion.
My suggestion would be to create an app.yaml file and specify a custom runtime, or just use the Python runtime with env: flex.
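For example, a minimal app.yaml for the Python flexible environment might look roughly like this (the entrypoint and module name are placeholders for your app):
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app   # placeholder: a WSGI app object named "app" in main.py
runtime_config:
  python_version: 3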
When building an upgrade package for configuration only (let's say MyConfig from the canonical sample), I cannot find details of which manifest files it should include. Since there are two manifest files (ApplicationManifest.xml and ServiceManifest.xml), what should go into the config-only upgrade package? A pointer to a sample would be great.
You'd provide the same layout structure as the full package but you'd remove everything that's not relevant to what you're updating. So, using the example you linked to, you'd only include these files:
ApplicationManifest.xml
MyServiceManifest\ServiceManifest.xml
MyServiceManifest\MyConfig\Settings.xml
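Once the diff package is laid out like that, you copy, register and start the upgrade the same way as with a full package. A rough PowerShell sketch, with the package path, image store name, application name and version as placeholders:
Connect-ServiceFabricCluster
# copy the config-only (diff) package to the image store
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath .\MyConfigOnlyUpgrade -ImageStoreConnectionString fabric:ImageStore -ApplicationPackagePathInImageStore MyAppV2
Register-ServiceFabricApplicationType -ApplicationPathInImageStore MyAppV2
Start-ServiceFabricApplicationUpgrade -ApplicationName fabric:/MyApp -ApplicationTypeVersion "2.0.0" -Monitored -FailureAction Rollback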
My application is built on Play Framework (2.5.3) using Scala. In order to deploy the app, I create a Docker image using the sbt docker:publishLocal command. I am trying to figure out where the base Docker image file is in the Play Framework folder structure. I do see a Dockerfile in the target/docker folder. I don't know how Play Framework creates this Dockerfile, or where/how Play Framework tells Docker to layer the application on the base image. I am a Scala/Play/Docker n00b. Please help.
Here are some definitions in order to get an idea of what I'm talking about.
Docker containers are stored applications or services that have been configured and packaged so that Docker can run them on any machine Docker is installed on.
Docker images are saved states of Docker container instances, which are used to run new containers.
Dockerfiles are files that contain build instructions for how a Docker image should be built. They offload some of the many additional parameters one would otherwise have to attach in order to run a container just how they want to.
@michaJlS I am looking for details on the base Docker image that Play Framework uses. Regarding your question on why I need it: 1. I'd like to know where it is. 2. I'd like to know how I can add another layer to it if need be.
1.) This depends on what you want to look for. If you are looking for the ready-to-use Docker image, simply run docker images and find the image corresponding to what you are using. If you are looking for the raw data, you can find that in /var/lib/docker/graph. However, that is pointless, as you need the image ID, which can only be found via the docker images command I gave above. If the image was not built correctly, it will not appear.
2.) If you wish to modify the image or attach an additional layer (by adding layers, I assume you mean adding new files and modules to your application), you need to run the Docker image (via the docker run command). Again, this is moot if the image cannot be located by the Docker daemon.
While this should answer your question, I do not believe it was what you were asking. People who are concerned with Docker want to put applications into containers that can be run on any platform, avoiding dependency issues while maintaining relative convenience. If you are trying to use Docker to package your newly created application, you are going to want to build from a Dockerfile.
In short, Dockerfiles are build instructions that you would normally have to specify either before running a container or when inside one. If the sbt docker command created a Dockerfile, you're going to need to look for that file and use the docker build command in order to create an instance of your image (info should be provided in that second link). Your best course of action to get your application running in a container is to either build it inside a container that has the environment of the running app, or simply build it from a Dockerfile.
The first option would be simply docker run -ti imagename, with the image being the environment of your machine that has the running app. You can find some images on the Docker Hub, and one that may be of interest to you is this play-framework image someone else created. That command will put you into an interactive session with the container, so you can work as if you are trying to create the app within your own environment. Just don't forget to use docker commit when you are done building!
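A rough sketch of that workflow (the image and container names are placeholders):
docker run -ti --name my-play-build some-play-image /bin/bash
# ... build/copy your app inside the container, then exit ...
docker commit my-play-build my-play-app:latest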
The faster method of building your image would be to do it from a Dockerfile. If you know all the commands and you have all the dependencies needed to create and run your application, simply put those into a file named "Dockerfile" (following the instructions and guidance of link 2) and run docker build -t NewImageName /path/to/dockerfile-directory (the path is the build context directory that contains the Dockerfile). This will create an image which you can use to create containers from, or distribute however you see fit.
There is no pre-specified Dockerfile; it's generated by the plugin with predefined defaults that you can override:
http://www.scala-sbt.org/sbt-native-packager/formats/docker.html
You can take a look at docker.DockerPlugin to get a better idea of the defaults used.
You can also look at DockerKeys for additional tasks, like
docker-generate-config
The documentation here (http://www.scala-sbt.org/sbt-native-packager/formats/docker.html) indicates that the base image is dockerfile/java, which doesn't seem to be on Docker Hub, but details for the image are on GitHub: https://github.com/dockerfile/java
The documentation also indicates that you can specify your own base image using the dockerBaseImage sbt setting, or by creating a custom Dockerfile: http://www.scala-sbt.org/sbt-native-packager/formats/docker.html#custom-dockerfile
It's also indicated what the requirements are when using your own base image:
The image to use as a base for running the application. It should include binaries on the path for chown, mkdir, have a discoverable java binary, and include the user configured by daemonUser (daemon, by default).
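If you go the sbt route, a minimal build.sbt sketch for overriding the base image might look like this (the image name and port are assumptions; Play's sbt plugin already wires in the native packager's Docker support):
// build.sbt
dockerBaseImage := "openjdk:8-jre"   // replace dockerfile/java with whatever base image you want
dockerExposedPorts := Seq(9000)      // Play's default HTTP port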
I am attempting to install the piechart plugin on my Grafana v2.5 environment, and no matter what I do the panel does not show as an option in the UI. I cloned the repository to /var/lib/grafana/plugins as documented and restarted the grafana-server service, and that did not work. I also tried putting the plugin in a separate directory and referencing it as:
[plugin.piechart]
path = /home/usr/share/grafana/panel-plugin-piechart
I made sure that the grafana service has ownership of the plugin directory, and checked the grafana logs but it did not have useful information.
https://github.com/grafana/panel-plugin-piechart
You will need Grafana master based on the release date of the plugin.
Confirmed here - https://groups.io/g/grafana/message/1181
You definitely need to upgrade your Grafana. This is a very seamless operation - just install a new package on top of the old one. For safety, you can back up by copying /var/lib/grafana/grafana.db before doing that.
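For example, a quick backup before upgrading (paths assume a default package install):
sudo cp /var/lib/grafana/grafana.db /var/lib/grafana/grafana.db.backup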
Check the permissions of the files in the plugins directory:
All of a plugin's files should be inside that plugin's directory, i.e. every plugin should be contained in its own directory.
If the plugins directory has any package.json or webpack.config.js file lying outside a plugin directory, your plugins will also fail to load.
The above-mentioned files are part of every panel plugin and should only exist in their respective directories.
Execute chown and set the owner to grafana:grafana (user:group), as shown below.
(By default, root is the owner of the files and directories.)
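For example (assuming a default package install where the plugins live under /var/lib/grafana/plugins):
sudo chown -R grafana:grafana /var/lib/grafana/plugins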
Are you running Grafana as a standalone service or in a docker container?
If running as a service directly you can visit the Grafana community page and find the plugin installation instructions from there.
https://grafana.com/grafana/plugins/grafana-piechart-panel
(Verified on Grafana version 6.x.x & 7)
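For a standalone install, the plugin page's instructions boil down to roughly this (assuming grafana-cli is on your path):
grafana-cli plugins install grafana-piechart-panel
# then restart Grafana, e.g.:
sudo systemctl restart grafana-server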
If running within a dockerized service, you need to copy the plugin into your workspace and specify its directory within the Docker image so Grafana can locate the plugin there. You can do this by using environment variables and mentioning them in a docker-compose file:
GF_PATHS_PLUGINS /var/lib/grafana/plugins
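A minimal docker-compose sketch along those lines (the image tag and host path are placeholders):
version: "3"
services:
  grafana:
    image: grafana/grafana:7.5.0            # placeholder tag
    environment:
      - GF_PATHS_PLUGINS=/var/lib/grafana/plugins
    volumes:
      - ./plugins:/var/lib/grafana/plugins   # host directory containing the piechart plugin
    ports:
      - "3000:3000"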
https://grafana.com/docs/grafana/latest/installation/configure-docker/
I have been able to work with both of these options.
I'm trying to create my first pod and, as per the recommendation on the website, am doing so at the command line with pod lib create <mylib>. The trouble is that lib create assumes I want to create an iOS library, when in fact I'm developing for OS X. I've grep'ed my way through the CocoaPods files on my computer looking for the template on which the generated project is based, but have come up empty-handed. Does anyone know how I might fiddle with these settings, wherever they are, to get the configuration I'm after?
If you already have your library created and just want to create a sample podspec, you should use the following instead:
pod spec create
You can also pass this a URL to set that as the source automatically. See
pod spec create --help
For more info.
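For example (the library name and repository URL are placeholders; pod spec create accepts either a name or a GitHub URL):
pod spec create MyLib
# or, to pre-fill the source from an existing repository:
pod spec create https://github.com/username/MyLib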