Scala Play passing variable to view not working

This code works fine:
In the controller:
Ok(views.html.payment(message,test,x_card_num,x_exp_date,exp_year,exp_month,x_card_code,x_first_name,x_last_name,x_address,x_city,x_state,x_zip,save_account,product_array,x_amount,products_json,auth_net_customer_profile_id,auth_net_payment_profile_id,customer_id))
In the view:
@(message: String, test: String, x_card_num: String, x_exp_date: String, exp_year: String, exp_month: String, x_card_code: String, x_first_name: String, x_last_name: String, x_address: String, x_city: String, x_state: String, x_zip: String, save_account: String, product_array: Map[String,Map[String,Any]], x_amount: String, products_json: String, auth_net_customer_profile_id: String, auth_net_payment_profile_id: String, customer_id: String)
But when I try to add one more variable to the controller and view like this:
Ok(views.html.payment(message,test,x_card_num,x_exp_date,exp_year,exp_month,x_card_code,x_first_name,x_last_name,x_address,x_city,x_state,x_zip,save_account,product_array,x_amount,products_json,auth_net_customer_profile_id,auth_net_payment_profile_id,customer_id,saved_payments_xml))
@(message: String, test: String, x_card_num: String, x_exp_date: String, exp_year: String, exp_month: String, x_card_code: String, x_first_name: String, x_last_name: String, x_address: String, x_city: String, x_state: String, x_zip: String, save_account: String, product_array: Map[String,Map[String,Any]], x_amount: String, products_json: String, auth_net_customer_profile_id: String, auth_net_payment_profile_id: String, customer_id: String, saved_payments_xml: String)
It gives me this error:
missing parameter type
What am I doing wrong?

There's a limit to the number of parameters you can pass to a template, and you exceed it when you add another one.
It's an undocumented and fairly arbitrary limit that falls out of how code generation for templates works. It is arguably a bug, but not one I would fix, since nobody needs this many parameters, and having this many makes the code much less readable.
Your best resolution here is to refactor, for example by creating case classes such as Card and Address in your model and passing those in instead.
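For instance, a hedged sketch of that refactoring (the groupings and field names are mine, derived from the parameter list above):

case class Card(num: String, expYear: String, expMonth: String, code: String)

case class Address(firstName: String, lastName: String, street: String, city: String, state: String, zip: String)

// The template header then shrinks to something like:
// @(message: String, test: String, card: Card, address: Address, saveAccount: String,
//   productArray: Map[String, Map[String, Any]], amount: String, productsJson: String,
//   authNetCustomerProfileId: String, authNetPaymentProfileId: String, customerId: String,
//   savedPaymentsXml: String)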

Related

Error while creating external Hive table in IBM Analytics Engine

I am creating an external Hive table from a CSV file located on IBM Cloud Object Storage. I am using the Beeline client while SSH'd into the cluster as the clsadmin user, and I was able to make the JDBC connection. I get the error below while creating the table.
The CSV file is located in the bucket bucket-name-masked, and I have named the fs.cos parameter set 'hivetest'.
0: jdbc:hive2://***hostname-masked***> CREATE EXTERNAL TABLE NYC311Complaints (UniqueKey string, CreatedDate string, ClosedDate string, Agency string, AgencyName string, ComplaintType string, Descriptor string, LocationType string, IncidentZip string, IncidentAddress string, StreetName string, CrossStreet1 string, CrossStreet2 string, IntersectionStreet1 string, IntersectionStreet2 string, AddressType string, City string, Landmark string, FacilityType string, Status string, DueDate string, ResolutionDescription string, ResolutionActionUpdatedDate string, CommunityBoard string, Borough string, XCoordinateStatePlane string, YCoordinateStatePlane string, ParkFacilityName string, ParkBorough string, SchoolName string, SchoolNumber string, SchoolRegion string, SchoolCode string, SchoolPhoneNumber string, SchoolAddress string, SchoolCity string, SchoolState string, SchoolZip string, SchoolNotFound string, SchoolorCitywideComplaint string, VehicleType string, TaxiCompanyBorough string, TaxiPickUpLocation string, BridgeHighwayName string, BridgeHighwayDirection string, RoadRamp string, BridgeHighwaySegment string, GarageLotName string, FerryDirection string, FerryTerminalName string, Latitude string, Longitude string, Location string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION 'cos://*bucket-name-masked*.hivetest/IAE_examples_data_311NYC.csv';
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:cos://bucket-name-masked.hivetest/IAE_examples_data_311NYC.csv is not a directory or unable to create one) (state=08S01,code=1)
0: jdbc:hive2://hostname-masked>
This looks like a permission issue, but I have provided all credentials for the relevant user IDs in both HDFS and COS.
The issue was with the COS URL: the filename must not be included. Only the bucket should be named, and the objects in it will be read. With a filename appended, the whole path gets read as the bucket name, and Hive looks for objects in there.
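For reference, the corrected DDL points LOCATION at the bucket rather than at the object itself; a hedged reconstruction using the masked names from above:

CREATE EXTERNAL TABLE NYC311Complaints (UniqueKey string, CreatedDate string, ... , Location string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 'cos://*bucket-name-masked*.hivetest/';  -- bucket only, no filename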

scala named parameter in function-as-params?

I'm quite used to TypeScript's functions that take named-parameter functions as parameters:
function doHello(
  helloFunc: (lastName: string, firstName: string) => void,
  ...
) { ... }
Here, helloFunc describes a function-as-param with 'named' params (lastName, firstName), but in Scala I could only find examples without param names, such as:
case class HelloWoot(
  helloFunc: (String, String) => Unit,
  ...
)
which omits some info about helloFunc's signature.
So, how do I get the following code to compile in Scala?
case class HelloWoot(
  helloFunc: (lastName: String, firstName: String) => Unit, // Error
)
It's not possible to provide named parameters in a function type. If you are worried about people mixing up the String parameters, you could introduce a new type like this:
// New class grouping firstName and lastName
case class NamesObject(firstName: String, lastName: String)
// The higher-order function now takes the new type as input
case class HelloWoot(helloFunc: NamesObject => Unit)
// You can now safely access the correct field without relying on its position in a Tuple2
HelloWoot(a => println(a.firstName, a.lastName))
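Putting it together, a minimal runnable sketch (the Demo object and greeting text are mine):

case class NamesObject(firstName: String, lastName: String)
case class HelloWoot(helloFunc: NamesObject => Unit)

object Demo extends App {
  val woot = HelloWoot(n => println(s"Hello, ${n.firstName} ${n.lastName}"))
  woot.helloFunc(NamesObject("Ada", "Lovelace")) // prints: Hello, Ada Lovelace
}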

Syntax for accepting tuple in a function in Scala

I would like a function to consume a tuple of seven elements, but the compiler won't let me, giving the message shown below. I failed to find the proper way to do this. Is it even possible without explicitly writing out all the type parameters, like Tuple7[String,String,...,String], and is it even a good idea to use Scala like this?
def store(record:Tuple7): Unit = {
}
Error:(25, 20) class Tuple7 takes type parameters
def store(record: Tuple7): Unit = {
^
As stated by Luis, you have to define which type goes in each position of the tuple.
I'd like to add some approaches for expressing the same behaviour in different ways:
Tuple Syntax
You have two choices of syntax for this:
Tuple3[String, Int, Double]
(String, Int, Double)
Approach using Case Classes for better readability
Long tuples are hard to handle, especially when types are repeated. Scala offers a different approach: instead of a Tuple7, you can use a case class with seven fields. The gain is that you can attach meaningful names to each field, the type of each position makes more sense with a name attached to it, and the chance of putting values in the wrong position is reduced:
(String, Int, String, Int)
// vs
case class Person(name: String, age: Int, taxNumber: String, numberOfChildren: Int)
Using Seq with pattern matching
If your intention was to have a sequence of data, a Seq in combination with pattern matching could also be a nice fit:
List("name", 24, "", 5 ) match {
case name:String :: age:Int ::_ :: _ :: Nil => doSomething(name, age)
}
This only works nicely in a fairly narrow scope; normally you would lose a lot of type information, as the List here is of type List[Any].
You could do the following:
def store(record: (String, String, String, String, String, String, String)): Unit = {
}
which is equivalent to:
def store(record: Tuple7[String, String, String, String, String, String, String]): Unit = {
}
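For completeness, a hedged usage sketch (the body and sample values are mine, since the question leaves store empty):

def store(record: (String, String, String, String, String, String, String)): Unit =
  println(record._1) // tuple fields are accessed positionally via _1 .. _7

store(("a", "b", "c", "d", "e", "f", "g"))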
You can read more about it in Programming in Scala, 2nd Edition, chapter "Next Steps in Scala", sub-chapter "Step 9. Use tuples".

Map table with more than 22 columns to Scala case class by Slick 2.1.0

I'm using Scala 2.11, Slick 2.1.0-M2, PlayFramework 2.3.1.
I need to map a 25-column table to a Scala case class.
For example I have this case class:
case class Test(f1: Long, f2: String, f3: String, f4: String, f5: String,
f6: String, f7: String, f8: String, f9: String, f10: String,
f11: String, f12: String, f13: String, f14: String, f15: String,
f16: String, f17: String, f18: String, f19: String, f20: String,
f21: String, f22: String, f23: Float, f24: Float, f25: String)
I read that it is possible to write a custom Shape (proof), but all my attempts to implement it have failed.
Please help me map this case class to a table.
There is no good solution to this for the current Slick version. You can pack some of the fields into nested case classes.
Please refer to this test case.
https://github.com/slick/slick/blob/2.1.0-RC1/slick-testkit/src/main/scala/com/typesafe/slick/testkit/tests/JdbcMapperTest.scala#L99
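A hedged sketch of that nested-tuple pattern, condensed to a few columns (the H2 driver, table, and field groupings are mine, modeled on the linked test case):

import scala.slick.driver.H2Driver.simple._

case class Card(num: String, code: String)
case class Test(f1: Long, card: Card, f23: Float)

class Tests(tag: Tag) extends Table[Test](tag, "tests") {
  def f1 = column[Long]("f1", O.PrimaryKey)
  def cardNum = column[String]("card_num")
  def cardCode = column[String]("card_code")
  def f23 = column[Float]("f23")

  // Nested tuples keep each group well under the 22-element limit
  def * = (f1, (cardNum, cardCode), f23).shaped <> (
    { case (f1, card, f23) => Test(f1, Card.tupled(card), f23) },
    { t: Test => Some((t.f1, Card.unapply(t.card).get, t.f23)) }
  )
}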
Actually, this can be done via an HList, like this:
def * = (col1 :: col2 :: .. :: HNil).shaped <> (
  { case x => new YYY(x(0), x(1), ..) },
  { x: YYY => Option(x.col1 :: x.col2 :: .. :: HNil) }
)
I've written a macro to do the mapping; you may take a look at it here:
https://github.com/jilen/slickext
I have a HListCaseClassShape that works exactly like the CaseClassShape but without the 22-column limit: here. You could then map it to your table like this: def * = MyHListCaseClassShape(f1, f2, f3...) (see the Pair * example here). Not sure if it'll work with Slick 2, but it might be worth a shot.

Routes overloading doesn't work

I want to be able to have this:
POST /items controllers.Application.update()
POST /items/:itemType controllers.Application.update(itemType: String)
POST /items/:itemType/:id controllers.Application.update(itemType: String, id: Int)
but that doesn't compile, failing with the error method update is defined twice. Then I changed it, and it didn't compile either:
POST /items controllers.Application.update(itemType: Option[String] = None, id: Option[Int] = None)
POST /items/:itemType controllers.Application.update(itemType: String, id: Option[Int] = None)
POST /items/:itemType/:id controllers.Application.update(itemType: String, id: Int)
The errors are:
the previous one
and: type mismatch; found: Option[String]; required: String
What do I do about that? I wouldn't like to do something like this:
POST /items controllers.Application.updateAll()
POST /items/:itemType controllers.Application.updateByType(itemType: String)
POST /items/:itemType/:id controllers.Application.updateByTypeAndId(itemType: String, id: Int)
and this doesn't look good either, since I'd like to use Option instead of sentinel values like the empty string:
POST /items controllers.Application.update(itemType: String = "", id: Int = -1)
POST /items/:itemType controllers.Application.update(itemType: String, id: Int = -1)
POST /items/:itemType/:id controllers.Application.update(itemType: String, id: Int)
Unfortunately, it seems support for Option was removed in v2 (see here for example), so you may be stuck with either coding your own PathBindable to handle Options (as mentioned in the above link) or one of the other unsavoury choices you've noted.
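If you do go the custom-binder route, a hedged sketch of what such a PathBindable might look like (the object name is mine, and making the implicit visible to the routes compiler, e.g. via routesImport in build.sbt, varies by Play version):

import play.api.mvc.PathBindable

object Binders {
  // Wraps the built-in Int binder so a matched path segment becomes Some(value)
  implicit def optionIntPathBindable(implicit intBinder: PathBindable[Int]): PathBindable[Option[Int]] =
    new PathBindable[Option[Int]] {
      def bind(key: String, value: String): Either[String, Option[Int]] =
        intBinder.bind(key, value).right.map(Some(_))
      def unbind(key: String, value: Option[Int]): String =
        value.map(intBinder.unbind(key, _)).getOrElse("")
    }
}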
If you're able to change your URL format, you can use Option.
Route: POST /items controllers.Application.update(itemType: Option[String], id: Option[Int])
URL: http://localhost:9000/items?itemType=someItem&id=123
With this format, you are able to omit itemType, id, or both when making the web service call.
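A hedged sketch of the matching controller action (Play 2.x style; the response body is mine):

import play.api.mvc._

object Application extends Controller {
  def update(itemType: Option[String], id: Option[Int]) = Action {
    // Both parameters arrive as None when omitted from the query string
    Ok(s"itemType=$itemType, id=$id")
  }
}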