When I import the image library
import 'package:image/image.dart' as img;
import 'package:image/image.dart';
I get an error on this import.
This happens when the same name is exported by two different packages; the analyzer can't tell which one you mean and reports the conflict.
This is how you solve it:
Use **img.Image** instead of Image; the prefix tells the analyzer which package the type comes from. This generally resolves the issue.
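As a minimal sketch of the prefixed-import approach (the helper name buildFrom is just for illustration):

import 'package:flutter/material.dart';   // defines the Image *widget*
import 'package:image/image.dart' as img; // defines the Image class for pixel data

// With the prefix there is no collision: `Image` means Flutter's widget,
// `img.Image` means the decoded-image type from package:image.
Widget buildFrom(img.Image picture) {
  return Text('decoded ${picture.width} x ${picture.height}');
}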
I have a problem that I can't resolve; I did a lot of research but found nothing.
I need to use the flutter_countdown_timer library, but it seems that CurrentRemainingTime isn't recognized. Has anyone already used this library?
The error is: 'CurrentRemainingTime' isn't a type.
It seems like you're missing the import statement for CurrentRemainingTime. There are two import statements: one for CountdownTimer and the other for CurrentRemainingTime.
Add this import statement:
import 'package:flutter_countdown_timer/current_remaining_time.dart';
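For context, a minimal sketch of how the two imports fit together; the endTime and widgetBuilder parameter names reflect the package's README as I recall it, so double-check them against the version you're using:

import 'package:flutter/material.dart';
import 'package:flutter_countdown_timer/flutter_countdown_timer.dart';
// The builder callback's parameter type lives in its own file, hence the extra import:
import 'package:flutter_countdown_timer/current_remaining_time.dart';

class CountdownExample extends StatelessWidget {
  const CountdownExample({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    // End time in milliseconds since epoch: one hour from now.
    final endTime = DateTime.now().millisecondsSinceEpoch + 60 * 60 * 1000;
    return CountdownTimer(
      endTime: endTime,
      widgetBuilder: (_, CurrentRemainingTime? time) {
        if (time == null) return const Text('Time is up');
        return Text('${time.hours ?? 0}h ${time.min ?? 0}m ${time.sec ?? 0}s');
      },
    );
  }
}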
I'm trying to use the DeBerta model, and as a first step I'm implementing some of the code I found here (an example of how to use the model), just to make sure I have the right dependencies and know which imports I need. I'm working with Spark 2.4.0 (Scala 2.11) in Zeppelin 0.8.1. I've added "com.johnsnowlabs.nlp:spark-nlp-spark24_2.11:3.4.4" as a dependency, which I believe should give me spark-nlp 3.4.4. So far most of the imports from the sample code are working:
import com.johnsnowlabs.nlp.embeddings
import com.johnsnowlabs.nlp.annotator._
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.training.CoNLL
import com.johnsnowlabs.nlp.util.io.ResourceHelper
import com.johnsnowlabs.util.Benchmark
import org.apache.spark.ml.Pipeline
import org.apache.spark.sql.functions.{col, explode, size}
Two of the imports give errors, but I'm not sure I need those to load the model. When I try to load
val embeddings = DeBertaEmbeddings.pretrained("deberta_v3_base", "en")
.setInputCols("sentence", "token")
.setOutputCol("embeddings")
.setMaxSentenceLength(512)
I get an error saying
java.lang.IllegalArgumentException: requirement failed: Can not find deberta_v3_base inside public/models to download. Please make sure the name and location are correct!
at scala.Predef$.require(Predef.scala:224)
at com.johnsnowlabs.nlp.pretrained.ResourceDownloader$.downloadResource(ResourceDownloader.scala:441)
at com.johnsnowlabs.nlp.pretrained.ResourceDownloader$.downloadModel(ResourceDownloader.scala:499)
at com.johnsnowlabs.nlp.pretrained.ResourceDownloader$.downloadModel(ResourceDownloader.scala:492)
at com.johnsnowlabs.nlp.HasPretrained$class.pretrained(HasPretrained.scala:44)
at com.johnsnowlabs.nlp.annotator.package$DeBertaEmbeddings$.com$johnsnowlabs$nlp$embeddings$ReadablePretrainedDeBertaModel$$super$pretrained(annotator.scala:532)
at com.johnsnowlabs.nlp.embeddings.ReadablePretrainedDeBertaModel$class.pretrained(DeBertaEmbeddings.scala:329)
at com.johnsnowlabs.nlp.annotator.package$DeBertaEmbeddings$.pretrained(annotator.scala:532)
at com.johnsnowlabs.nlp.annotator.package$DeBertaEmbeddings$.pretrained(annotator.scala:532)
at com.johnsnowlabs.nlp.HasPretrained$class.pretrained(HasPretrained.scala:47)
at com.johnsnowlabs.nlp.annotator.package$DeBertaEmbeddings$.com$johnsnowlabs$nlp$embeddings$ReadablePretrainedDeBertaModel$$super$pretrained(annotator.scala:532)
at com.johnsnowlabs.nlp.embeddings.ReadablePretrainedDeBertaModel$class.pretrained(DeBertaEmbeddings.scala:326)
at com.johnsnowlabs.nlp.annotator.package$DeBertaEmbeddings$.pretrained(annotator.scala:532)
... 51 elided
I've tried to go through the documentation on the DeBerta model, and from what I've seen I have compatible versions of Scala and spark-nlp according to the John Snow Labs Models Hub. I saw on this Stack Overflow thread that I might need to append "_noncontrib" to the model name; I tried that even though it says it shouldn't be needed from spark-nlp 2.4.0 onwards. So now I'm pretty sure "deberta_v3_base" is the right name, but I'm not sure what the error means by the location being incorrect. Do I need to specify a location for the pretrained model via the config helper? Does anyone know what's going on here? I apologize if I've left anything out; I'm new to Zeppelin and spark-nlp. Thanks!
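If the downloader keeps failing, one hedged workaround is to fetch the model archive manually from the John Snow Labs Models Hub, unzip it, and load it from disk with the annotator's standard load method instead of pretrained() (the local path below is a placeholder):

import com.johnsnowlabs.nlp.embeddings.DeBertaEmbeddings

// Offline alternative to pretrained(): point load() at the unzipped model folder.
val embeddings = DeBertaEmbeddings.load("/path/to/deberta_v3_base_en")
  .setInputCols("sentence", "token")
  .setOutputCol("embeddings")
  .setMaxSentenceLength(512)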
In my Flutter project I have, besides main.dart, a second Dart file (helpers.dart) in which I am trying to use the debugPrint() function:
debugPrint(someString);
I get the following message:
The function 'debugPrint' isn't defined.
Try importing the library that defines 'debugPrint', correcting the name to the name of an existing function, or defining a function named 'debugPrint'.
The official Flutter documentation states that the debugPrint function is part of Flutter's foundation library, but import 'package:flutter/foundation.dart'; did not solve the problem. So which library/package do I have to import?
Importing the material package solves the problem:
import 'package:flutter/material.dart';
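For completeness, a minimal helpers.dart sketch (the helper name logMessage is made up):

// helpers.dart
import 'package:flutter/material.dart'; // re-exports foundation, which defines debugPrint

void logMessage(String someString) {
  // debugPrint also throttles output so long logs aren't dropped on Android.
  debugPrint(someString);
}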
I have found code on GitHub in Go and want to use it as a library in my program. Unfortunately, the whole code is in the "main" package. Is there any way to import the code as a library without changing that code?
No. Fork the repo, and fix it to work as a library, or if it's simple enough, copy the files directly into your main package.
No, you can't.
I agree with @JimB: fork the repo, change 'package main' to something like 'package somelib', and then import it in your code like this:
package main

import L "somelib"

func main() {
	L.SomeFunc()
}
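For completeness, this is roughly what the library side of the fork looks like after the rename (file name and function are placeholders matching the snippet above):

// somelib/somelib.go
// The original `package main` clause is changed so the code can be imported;
// if the original file also had a func main(), remove or rename it.
package somelib

import "fmt"

// SomeFunc is the function called as L.SomeFunc() above.
func SomeFunc() {
	fmt.Println("called from the imported fork")
}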
You can import it as a separate package, something like:
import sth "path/to/your/package"
I have a class that uses the following imports; it works fine in a Google App Engine project:
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.IdentityType;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;
But when I included this class in another project, it caused an error:
package javax.jdo.annotations does not exist
What should I do to find javax.jdo.*?
Add the JDO jar file to the class path.
The star notation for imports isn't working the way you think it does.
It's not recursive: it only applies to the classes directly in javax.jdo, not to the child packages.
If you want all the classes in javax.jdo.annotations, you'll need to import javax.jdo.annotations.*, too.
I'd recommend not using the star notation. Better to type out the imports for every class individually. Use an IDE to help you. It's clearer for you and other programmers who come after you where those classes came from.
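A small Java illustration of the star-import point (the class is made up; the annotations are the ones from the question):

// `import javax.jdo.*;` only covers classes directly in javax.jdo (e.g.
// PersistenceManager); it does NOT pull in javax.jdo.annotations.
// Here each class is imported explicitly, as recommended above.
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

@PersistenceCapable
public class Note {
    @PrimaryKey
    private Long id;

    @Persistent
    private String text;
}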