I am getting an error if I import readline, as follows:
import scala.io.StdIn.{readline, readInt}
error: value readline is not a member of object scala.io.StdIn
import scala.io.StdIn.{readline, readInt}
Scala code runner version 2.12.1
If I don't import this, I get a deprecated message:
warning: there was one deprecation warning (since 2.11.0); re-run with -deprecation for details
one warning found
I get no errors if I use the full path to the function:
var x = scala.io.StdIn.readLine.toInt
Let me know if you can help me resolve the import. Thanks.
A very tiny oversight:
import scala.io.StdIn.{readLine, readInt}
readLine has an uppercase L.
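With the corrected capitalization the import works as expected. A minimal sketch (the helper name readIntLine is just for illustration):

```scala
import scala.io.StdIn.readLine  // note the capital L in readLine

// Hypothetical helper for illustration: read one line from
// standard input and parse it as an Int.
def readIntLine(): Int = readLine().toInt

// Usage (reads from standard input):
// val x = readIntLine()
```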
I am in a situation where I can successfully run the snippet below in Azure Databricks from a separate cmd (cell).
%run ./HSCModule
But I run into issues when including that piece of code together with other Scala code that imports the packages below; I get the following error.
import java.io.{File, FileInputStream}
import java.text.SimpleDateFormat
import java.util.{Calendar, Properties}
import org.apache.spark.SparkException
import org.apache.spark.sql.SparkSession
import scala.collection.JavaConverters._
import scala.util._
ERROR:
:168: error: ';' expected but '.' found.
%run
./HSCModule
FYI - I have also tried dbutils.notebook.run and am still facing the same issue.
You can't mix magic commands such as %run, %pip, etc. with Scala/Python code in the same cell. The documentation says:
%run must be in a cell by itself, because it runs the entire notebook inline.
So you need to put this magic command into a separate cell.
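For example, the notebook could be laid out like this (a sketch; the cell contents are assumed from the question):

```scala
// Cmd 1 — the magic command alone, in its own cell:
// %run ./HSCModule

// Cmd 2 — the Scala code, in a separate cell:
import java.io.{File, FileInputStream}
import org.apache.spark.sql.SparkSession
// ...rest of the imports and logic from the question
```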
Is it possible to use PrimOps in GHCi? The following does not work:
$ ghci -XMagicHash
GHCi, version 8.10.1: https://www.haskell.org/ghc/ :? for help
Loaded GHCi configuration from /home/stefan/.ghci
λ> let x = 42.0# :: Float#
<interactive>:1:18: error:
Not in scope: type constructor or class ‘Float#’
Perhaps you meant ‘Float’ (imported from Prelude)
Importing Prelude manually does not solve this error.
Update: After importing GHC.Exts the following error shows up:
$ ghci -XMagicHash
GHCi, version 8.10.1: https://www.haskell.org/ghc/ :? for help
Loaded GHCi configuration from /home/stefan/.ghci
λ> import GHC.Exts
λ> let x = 42.0# :: Float#
<interactive>:1:1: error:
GHCi can't bind a variable of unlifted type: x :: Float#
The MagicHash extension just makes it legal to use # in names. If you want to refer to types like Float#, you still need to import them: do import GHC.Exts and try again.
Also, GHCi can't bind values of unlifted types with let. You need to use them immediately and rewrap them in a boxed type.
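Putting both points together, a GHCi session along these lines should work (continuing the session from the question, with MagicHash already enabled):

```
λ> import GHC.Exts
λ> F# 42.0#
42.0
λ> F# (plusFloat# 2.0# 3.0#)
5.0
```

F# is the boxed Float constructor re-exported by GHC.Exts, and plusFloat# is the primitive addition on Float# from GHC.Prim.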
I get the error:
error: not found: value col
when I issue the following command in a Databricks notebook; I don't get the error when running it from spark-shell:
altitiduDF.select("height", "terrain").filter(col("height") >= 11000...
^
I tried adding the following imports before my query, but it did not help:
import org.apache.spark.sql.Column
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.Row
Where can I find what I need to import to use the col function?
I found that I need to import:
import org.apache.spark.sql.functions.{col}
to use the col function in Databricks.
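So the minimal fix looks like this (a sketch assuming the altitiduDF DataFrame from the question; it needs a live Spark session, so it only runs in the notebook or spark-shell):

```scala
import org.apache.spark.sql.functions.col
// or, to bring in all the SQL helper functions at once:
// import org.apache.spark.sql.functions._

altitiduDF.select("height", "terrain").filter(col("height") >= 11000)
```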
I am a beginner in Apache Spark, so please excuse me if this is quite trivial.
Basically, I was running the following import in spark-shell:
import org.apache.spark.sql.{DataFrame, Row, SQLContext, DataFrameReader}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql._
import org.apache.hadoop.hive.ql.io.orc.{OrcInputFormat,OrcStruct};
import org.apache.hadoop.io.NullWritable;
...
val rdd = sc.hadoopFile(path,
classOf[org.apache.hadoop.hive.ql.io.orc.OrcInputFormat],
classOf[NullWritable],
classOf[OrcStruct],
1)
The import statements up to OrcInputFormat work fine, with the exception of this error:
error: object apache is not a member of package org
import org.apache.hadoop.io.NullWritable;
This does not make sense, since the preceding import statements go through without any issue.
In addition, when referencing OrcInputFormat, I was told:
error: type OrcInputFormat is not a member of package org.apache.hadoop.hive.ql.io.orc
It seems strange for the OrcInputFormat import to work (I assume it does, since no error is thrown) and then have the above error message turn up. Basically, I am trying to read ORC files from S3.
I am also trying to understand what I have done wrong and why this happens.
What I have done:
I have tried running spark-shell with the --jars option, adding hadoop-common-2.6.0.jar (my current version of Spark is 1.6.1, compiled with Hadoop 2.6).
val df = sqlContext.read.format("orc").load(PathToS3), as suggested in Read ORC files directly from Spark shell. I have tried variations of s3, s3n, and s3a, without any success.
You have 2 non-printing characters between org.ape and che in the last import, most certainly due to a copy-paste:
import org.apache.hadoop.io.NullWritable;
Just rewrite the last import statement and it will work. Also, you don't need the semicolons.
You have the same problem with OrcInputFormat:
error: type OrcInputFormat is not a member of package org.apache.hadoop.hive.ql.io.orc
That's funny: in the mobile version of Stack Overflow we can clearly see those non-printing characters:
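If you suspect invisible characters in pasted code, one way to expose them is to dump each character's Unicode code point. A standalone sketch (the string below is a hypothetical stand-in for the corrupted import fragment, using a zero-width space):

```scala
// Simulated paste: "org.apa" + a zero-width space (U+200B) + "che".
val pasted = "org.apa\u200Bche"

// Print every character with its Unicode code point; invisible
// characters show up plainly as their U+XXXX value.
pasted.foreach(c => println(f"'$c' -> U+${c.toInt}%04X"))

// Keep only plain printable ASCII to strip the invisible characters.
val cleaned = pasted.filter(c => c >= ' ' && c <= '~')
println(cleaned)  // prints "org.apache"
```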
I am running the following code and getting "error: not found: value Order"
I am not able to figure out a reason. What am I doing wrong?
Version: Flink 0.9.1 (Hadoop 1); not using Hadoop; local execution; shell: Scala shell.
Scala-Flink> val data_avg = data_split.map{x=> ((x._1), (x._2._2/x._2._1))}.sortPartition(1, Order.ASCENDING).setParallelism(1)
<console>:16: error: not found: value Order
val data_avg = data_split.map{x=> ((x._1), (x._2._2/x._2._1))}.sortPartition(0, Order.ASCENDING).setParallelism(1)
The problem is that the enum Order is not automatically imported by Flink's Scala shell. Therefore, you have to add the following import manually.
import org.apache.flink.api.common.operators.Order
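With that import in place, the snippet from the question should compile (a sketch; data_split is the dataset defined earlier in the questioner's shell session):

```scala
import org.apache.flink.api.common.operators.Order

val data_avg = data_split
  .map { x => (x._1, x._2._2 / x._2._1) }  // per-key average: sum / count
  .sortPartition(0, Order.ASCENDING)
  .setParallelism(1)
```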