I have a build.sbt file, but I am not able to figure out the role of this import:
import docker.{addDockerPackage}
Is this an open-source import? I am not able to find any information about it. Further down in the script it calls a method addDockerPackage().
I wonder whether that method is defined in that package, or whether all of this is something proprietary. If it is a standard import, where do I read about it?
You can use sbt-native-packager; it has a Docker plugin:
http://www.scala-sbt.org/sbt-native-packager/formats/docker.html
Here are 4 different good examples that you can run to see how it works:
https://github.com/marcuslonnberg/sbt-docker
As for:
import docker.{addDockerPackage}
I don't think that is a package. It looks more like a helper to define something like this:
packageName in Docker := packageName.value
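For comparison, here is a minimal sketch of how Docker packaging is typically wired up with sbt-native-packager; the base image and port below are assumptions, and your build's addDockerPackage helper may just wrap settings like these:
// build.sbt -- sketch assuming sbt-native-packager's DockerPlugin
enablePlugins(JavaAppPackaging, DockerPlugin)

packageName in Docker := packageName.value   // name of the Docker image
version in Docker     := version.value       // tag of the Docker image
dockerBaseImage       := "openjdk:8-jre"     // base image (assumption)
dockerExposedPorts    := Seq(9000)           // port the app listens on (assumption)
With those settings, sbt docker:publishLocal builds the image locally.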
What causes this issue
Ansible supports a user-defined module_utils folder; we can add the following line to ansible.cfg:
module_utils = /xxx/lib/module_utils
Then, when the playbook runs, Ansible combines /usr/local/lib/python3.6/dist-packages/ansible/module_utils and /xxx/lib/module_utils together.
So we can import module utilities in a user-defined Ansible module, like:
import ansible.module_utils.my_utils
But pylint doesn't read the ansible.cfg file and doesn't combine the user-defined utility folder with the system one. So it can't find my_utils in /usr/local/lib/python3.6/dist-packages/ansible/module_utils, which causes this issue.
My question
Is there any way to make pylint 'see' the modules in the user-defined folder?
BTW, adding an additional search path to the pylint configuration, like the one below, won't fix this issue:
init-hook='import sys; sys.path.append("/xxx/lib/module_utils")'
because in the Ansible module we use the ansible.module_utils namespace:
import ansible.module_utils.my_utils
not
import my_utils
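One possible direction (untested; whether pylint's import resolution honours it is an assumption) is to extend the search path of the ansible.module_utils package itself from the init-hook, instead of sys.path, so that submodules in the user-defined folder resolve under the same namespace:
init-hook='import ansible.module_utils; ansible.module_utils.__path__.append("/xxx/lib/module_utils")'
Treat this only as a sketch; depending on the pylint/astroid version it may still fall back to static path resolution.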
I'm using ammonite (http://ammonite.io/) to write Scala scripts. It allows you to fetch remote dependencies with an import like this:
import $ivy.`org.scalaz::scalaz-core:7.2.7`, scalaz._, Scalaz._
How do you use a local maven repo (sitting in e.g. ~/.m2), though?
This changed in v 1.7.1.
Now the correct way to do it is:
import coursierapi.MavenRepository
interp.repositories.update(
interp.repositories() ::: List(MavenRepository.of("https://some_repo"))
)
If you wish to link up a local repository, you can replace https://some_repo with a file:// path to the local repo, for example:
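A minimal sketch, assuming the repository lives in the conventional ~/.m2/repository location (the path is an assumption; adjust it for your machine):
import coursierapi.MavenRepository

// register the local ~/.m2 repository via a file:// URI (path is an assumption)
val localM2 = s"file://${sys.props("user.home")}/.m2/repository"
interp.repositories.update(
  interp.repositories() ::: List(MavenRepository.of(localM2))
)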
Thanks to @danslapman on GitHub - here's the reference discussion: https://github.com/lihaoyi/Ammonite/issues/1003
With many thanks to @sake92 on https://gitter.im/lihaoyi/Ammonite:
#!/usr/bin/env amm
interp.repositories() ++= Seq(coursier.Cache.Dangerous.maven2Local)
@
import $ivy.`com.foo:artifact:1.3.0`
The @ forces the script to be compiled in two parts. Without it, the extra repo will simply be ignored.
There was an issue some time ago, with a follow-up PR, which concluded that the local Maven repository quite often contains broken things, so it is not there by default.
However, the ability to add your own resolvers was added later; probably something like this:
import coursier.MavenRepository
interp.repositories() ++= Seq(MavenRepository(
"~/.m2/local"
))
should work.
I'm having trouble importing magellan-1.0.4-s_2.11 in spark notebook. I've downloaded the jar from https://spark-packages.org/package/harsha2010/magellan and have tried placing SPARK_HOME/bin/spark-shell --packages harsha2010:magellan:1.0.4-s_2.11 in the Start of Customized Settings section of the spark-notebook file of the bin folder.
Here are my imports
import magellan.{Point, Polygon, PolyLine}
import magellan.coord.NAD83
import org.apache.spark.sql.magellan.MagellanContext
import org.apache.spark.sql.magellan.dsl.expressions._
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
And my errors...
<console>:71: error: object Point is not a member of package org.apache.spark.sql.magellan
import magellan.{Point, Polygon, PolyLine}
^
<console>:72: error: object coord is not a member of package org.apache.spark.sql.magellan
import magellan.coord.NAD83
^
<console>:73: error: object MagellanContext is not a member of package org.apache.spark.sql.magellan
import org.apache.spark.sql.magellan.MagellanContext
I then tried to import the new library like any other library by placing it into the main script like so:
$lib_dir/magellan-1.0.4-s_2.11.jar"
This didn't work and I'm left scratching my head wondering what I've done wrong. How do I import libraries such as magellan into spark notebook?
Try evaluating something like
:dp "harsha2010" % "magellan" % "1.0.4-s_2.11"
It will load the library into Spark, allowing it to be imported - assuming it can be obtained through the Maven repo. In my case it failed with a message:
failed to load 'harsha2010:magellan:jar:1.0.4-s_2.11 (runtime)' from ["Maven2 local (file:/home/dev/.m2/repository/, releases+snapshots) without authentication", "maven-central (http://repo1.maven.org/maven2/, releases+snapshots) without authentication", "spark-packages (http://dl.bintray.com/spark-packages/maven/, releases+snapshots) without authentication", "oss-sonatype (https://oss.sonatype.org/content/repositories/releases/, releases+snapshots) without authentication"] into /tmp/spark-notebook/aether/b2c7d8c5-1f56-4460-ad39-24c4e93a9786
I think the file was too big and the connection was interrupted before the whole file could be downloaded.
Workaround
So I downloaded the JAR manually from:
http://dl.bintray.com/spark-packages/maven/harsha2010/magellan/1.0.4-s_2.11/
and copied it into the:
/tmp/spark-notebook/aether/b2c7d8c5-1f56-4460-ad39-24c4e93a9786/harsha2010/magellan/1.0.4-s_2.11
And then the :dp command worked. Try calling it first, and if it fails, copy the JAR into the right path to make things work.
Better solution
I should investigate why the download failed so I can fix it properly... or put that library in my local M2 repo. But this should get you going.
I would suggest to check this:
https://github.com/spark-notebook/spark-notebook/blob/master/docs/metadata.md#import-download-dependencies
and
https://github.com/spark-notebook/spark-notebook/blob/master/docs/metadata.md#add-spark-packages
I think the :dp magic command is deprecated; instead you should add your custom dependencies in the notebook metadata. Go to the menu Edit > Edit notebook metadata and add something like:
"customDeps": [
"harsha2010 % magellan % 1.0.4-s_2.11"
]
Once done, you will need to restart the kernel, you can check in the browser console if the package is being downloaded properly.
The easy way: set or add the EXTRA_CLASSPATH environment variable so it points to the .jar file you downloaded:
export EXTRA_CLASSPATH=</link/to/your.jar>
or, on Windows:
set EXTRA_CLASSPATH=</link/to/your.jar>
Here you can find the detailed solution.
I am trying to build a Debian package using sbt-native-packager as described in this build.sbt.
I set appName using
val appName = "megamgateway"
in project/Build.scala.
All works well. It is just that the contents are stored in /usr/share/megamgateway.
I'd like to have the contents under /usr/share/megam/gateway.
The only way I could find is to use
linuxPackageMapping
as shown here.
Before following along, I'd like to know about other approaches.
You could try adding the following to your project settings:
name := "gateway"
defaultLinuxInstallLocation := "/usr/share/megam/"
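Put together, a minimal sketch (assuming sbt-native-packager's Debian packaging plugins, as in the linked build.sbt) might look like this:
// build.sbt -- sketch; the enabled plugins are assumptions based on sbt-native-packager
enablePlugins(JavaAppPackaging, DebianPlugin)

name := "gateway"
// contents then land under <defaultLinuxInstallLocation>/<name>, i.e. /usr/share/megam/gateway
defaultLinuxInstallLocation := "/usr/share/megam/"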
The following modules appear to be missing
email.Generator
email.Iterators
email.Utils
win32api
win32con
win32pipe
wx
My setup file looks like this:
from distutils.core import setup
import py2exe
setup(console=['fwsm_migration.py'])
I'm using Python 2.5.4 and py2exe 0.6.8.
I've looked here and elsewhere for a solution to this particular problem but have not found one.
I've read about using "options", but being new to Python itself I don't know where to put them.
Please HELP!
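For reference, the options argument mentioned above goes directly into the setup() call. This is only a sketch of where it belongs; the package and module names are taken from the warning list above, and whether they actually need to be included or excluded is an assumption:
# setup.py -- sketch of where py2exe's "options" argument goes
from distutils.core import setup
import py2exe

setup(
    console=['fwsm_migration.py'],
    options={
        'py2exe': {
            'packages': ['email'],                                    # bundle the email package explicitly (assumption)
            'excludes': ['win32api', 'win32con', 'win32pipe', 'wx'],  # skip optional imports (assumption)
        }
    },
)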
Try cx_freeze. After having a hell of a time with py2exe, cx_freeze compiled my script without any configuration. In the same environment, py2exe claimed I was missing nine packages.
For simple scripts you only need to do:
cxfreeze hello.py --target-dir dist