I've set up the Docker Deployment plugin in PyCharm; however, on hitting Play, the Deploy log shows a failure:
Could not open requirements file: [Errno 2] No such file or directory: 'environments/dev/requirements.txt'
I assume that this has something to do with the docker build context that is part of the Docker Deployment plugin. I've confirmed that there is in fact a requirements.txt file in the environments/dev/ directory inside the root of my PyCharm project. Does anyone know how to specify the docker build context/path in PyCharm or the Docker Deployment plugin?
UPDATE:
By adding RUN ls -l to the Dockerfile, I was able to deduce that the plugin runs the build from the same directory as the Dockerfile. I'm still looking for a way to specify the build path, if that's at all possible.
My current solution to this problem is to move the Dockerfile to the root directory of my PyCharm project, so that when ADD . /var/app is run, it copies over the correct files. As I mentioned in the "UPDATE", the PyCharm plugin uses the directory containing the Dockerfile as the build context/path.
Would still like to be able to specify the build path, but this may be the best solution given the limitations of the plugin.
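For reference, a minimal sketch of what the root-level Dockerfile ends up looking like (the base image and pip invocation are assumptions, not necessarily what your project needs):

FROM python:2.7
# Build context is now the project root, so relative paths resolve as expected
ADD . /var/app
WORKDIR /var/app
RUN pip install -r environments/dev/requirements.txt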
I am trying to run the Cal.com app, so I installed PostgreSQL and then cloned the Cal.com repository, and I ran this:
yarn workspace @calcom/prisma db-deploy
Then I got an error.
At first, I didn't have a .env file in any of my project folders; then I added one with the database URL, but it's still not working.
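For reference, Prisma reads the connection string from DATABASE_URL in the .env file at the repository root; a minimal sketch of the entry (user, password, and database name are placeholders):

DATABASE_URL="postgresql://postgres:secret@localhost:5432/calcom"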
I'm attempting to deploy a python server to Google App Engine.
I'm trying to use the gcloud sdk to do so.
It appears the command I need to use is gcloud app deploy.
I get the following error:
me@mymachine:~/development/some-app/backend$ gcloud app deploy
ERROR: (gcloud.app.deploy) Error Response: [3] The directory [~/.config/google-chrome/Default/Cache] has too many files (greater than 1000).
I had to add ~/.config to my .gcloudignore to get past this error.
Why was it looking there at all?
The full repo of my project is public but I believe I've included the relevant portion.
I looked at your linked repo and there aren't any yaml files. As far as I know, a GAE project needs an app.yaml file because that file tells GAE what your runtime is so that GAE knows how to deploy/run your code. In fact, according to the gcloud app deploy documentation, if you don't specify any yaml files to be deployed, it will default to app.yaml in the current directory. If it can't find any in the current directory, it will try to build one.
Your repo also shows you have a Dockerfile. The GAE documentation for custom runtimes says "Custom runtimes let you build apps that run in an environment defined by a Dockerfile". In the app.yaml file for custom runtimes, you will have the following entry:
runtime: custom
env: flex
Since you don't have an app.yaml file, and you do have a Dockerfile in which you download and install Chrome, it seems to me that gcloud app deploy is trying to infer your runtime, and this has led it to execute some or all of the contents of the Dockerfile before it attempts to push to production. That is what makes it take a peek at the cache directory on your local machine until you explicitly tell it to ignore it. To be clear, I'm not 100% sure of this; I'm just trying to see if I can draw a logical conclusion.
My suggestion would be to create an app.yaml file and specify a custom runtime, or just use the python runtime with flex.
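For instance, a minimal app.yaml for the Python flex runtime might look like this (the gunicorn entrypoint and the main:app module are assumptions about your code):

runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app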
I want to integrate New Relic into my Flink project. I have downloaded the newrelic.yml file from my account and changed only the app name, and I have created a folder named newrelic in my project root folder and placed the newrelic.yml file in it.
I have also added the following dependency to my build.sbt file:
"com.newrelic.agent.java" % "newrelic-api" % "3.0.0"
I am using the following command to run my jar:
flink run -m yarn-cluster -yn 2 -c Main /home/hadoop/test-assembly-0.2.jar
I guess my code is not able to read the newrelic.yml file, because I can't see my app name in New Relic. Do I need to initialize the New Relic agent somewhere (if yes, how)? Please help me with this integration.
You should only need the newrelic.jar and newrelic.yml files to be accessible, and to have -javaagent:path/to/newrelic.jar passed to the JVM as an argument. You could try putting both newrelic.jar and newrelic.yml into your lib/ directory so they get copied to the job and task managers, then adding this to your conf/flink-conf.yaml:
env.java.opts: -javaagent:lib/newrelic.jar
Both New Relic files should be in the same directory and you ought to be able to remove the New Relic line from your build.sbt file. Also double check that your license key is in the newrelic.yml file.
I haven't tested this, but the main goal is for the .yml and .jar to be accessible in the same directory (the .yml can go in a different directory, but additional JVM arguments will then be needed to reference it) and for -javaagent:path/to/newrelic.jar to be passed as a JVM argument. If you run into issues, try checking for New Relic logs in the logs folder of the directory where the .jar is located.
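For example, if you do keep the .yml somewhere other than next to the .jar, the agent's newrelic.config.file system property can point at it; a sketch of the conf/flink-conf.yaml entry (the paths are assumptions about your layout):

env.java.opts: -javaagent:/opt/newrelic/newrelic.jar -Dnewrelic.config.file=/opt/newrelic/newrelic.yml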
I am attempting to install the piechart plugin on my Grafana v2.5 environment, and no matter what I do the panel does not show as an option in the UI. I cloned the repository to /var/lib/grafana/plugins as documented and restarted the grafana-server service, and that did not work. I also tried putting the plugin in a separate directory and referencing it as:
[plugin.piechart]
path = /home/usr/share/grafana/panel-plugin-piechart
I made sure that the grafana service has ownership of the plugin directory, and checked the grafana logs but it did not have useful information.
https://github.com/grafana/panel-plugin-piechart
You will need a Grafana build from master, judging by the release date of the plugin.
Confirmed here - https://groups.io/g/grafana/message/1181
You definitely need to upgrade your Grafana. This is a very seamless operation: just install the new package on top of the old one. For safety, you can back up by copying /var/lib/grafana/grafana.db before doing so.
Check the permissions of the files in the plugins directory.
All of a plugin's files should live in its own directory, i.e. every plugin should be contained in its own directory.
If a package.json or webpack.config.js file sits in the plugins directory itself, outside any plugin's directory, your plugins will also fail to load; those files are part of every panel plugin and should only exist inside their respective directories.
Execute chown and set the owner to grafana:grafana (user:group); by default, root is the owner of the files and directories.
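For example, assuming the default plugins directory, something along these lines:

chown -R grafana:grafana /var/lib/grafana/plugins
systemctl restart grafana-server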
Are you running Grafana as a standalone service or in a docker container?
If running as a service directly, you can visit the Grafana community page and find the plugin installation instructions there.
https://grafana.com/grafana/plugins/grafana-piechart-panel
(Verified on Grafana version 6.x.x & 7)
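The install there boils down to something like this (assuming grafana-cli is on your path; the plugin ID is taken from the page above):

grafana-cli plugins install grafana-piechart-panel
sudo systemctl restart grafana-server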
If running as a dockerized service, you need to copy the plugin into your workspace and specify the plugin directory within the Docker image so Grafana can locate it. You can do this by using environment variables and setting them in a docker-compose file, as sketched below:
GF_PATHS_PLUGINS /var/lib/grafana/plugins
https://grafana.com/docs/grafana/latest/installation/configure-docker/
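A sketch of such a docker-compose service (the image tag and the host plugin path are assumptions):

services:
  grafana:
    image: grafana/grafana:7.0.0
    ports:
      - "3000:3000"
    environment:
      - GF_PATHS_PLUGINS=/var/lib/grafana/plugins
    volumes:
      - ./plugins:/var/lib/grafana/plugins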
I have been able to make both of these options work.
I am using docker for continuous integration of a Scala project. Inside the container I am building the project and creating a distribution with "sbt dist".
This takes ages pulling down all the dependencies and I would like to use a docker data volume as mentioned here: http://docs.docker.io/en/latest/use/working_with_volumes/
However, I don't understand how I could get SBT to put the jar files in the volume, or how SBT would know how to read them from that volume.
SBT uses Ivy to resolve project dependencies. Ivy caches downloaded artifacts locally, and every time it is asked to pull something, it first checks that cache and only downloads from a remote repository if nothing is found. By default the cache is located in ~/.ivy2, but its location is configurable. So just mount a volume, point Ivy at it (or mount it so that it sits at the default location), and enjoy the cache.
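For example, with a named Docker volume (the image name and build command are placeholders for your setup):

docker volume create ivy-cache
docker run -v ivy-cache:/root/.ivy2 my-sbt-image sbt dist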
Not sure if this makes sense on an integration server, but when developing on localhost, I'm mapping my host's .ivy2/ and .sbt/ directories to volumes in the container, like so:
docker run ... -v ~/.ivy2:/root/.ivy2 -v ~/.sbt:/root/.sbt ...
(Apparently, inside the container, .ivy2/ and .sbt/ are placed in /root/, since we're logging in to the container as the root user.)