H2 database content is not persisting on insert and update - PostgreSQL

I am using an H2 database to test my Postgres Slick functionality.
I created the H2DBComponent below:
trait H2DBComponent extends DbComponent {
  val driver = slick.jdbc.H2Profile
  import driver.api._
  val h2Url = "jdbc:h2:mem:test;MODE=PostgreSQL;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=false;INIT=runscript from './test/resources/schema.sql'\\;runscript from './test/resources/schemadata.sql'"
  val logger = LoggerFactory.getLogger(this.getClass)
  val db: Database = {
    logger.info("Creating test connection ..................................")
    Database.forURL(url = h2Url, driver = "org.h2.Driver")
  }
}
In the above snippet I am creating my tables using schema.sql and inserting a single row (record) with schemadata.sql.
Then I am trying to insert a record into the table using my test case, as below:
class RequestRepoTest extends FunSuite with RequestRepo with H2DBComponent {
  test("Add new Request") {
    val response = insertRequest(Request("XYZ", "tk", "DM", "RUNNING", "0.1", "l1", "file1",
      Timestamp.valueOf("2016-06-22 19:10:25"), Some(Timestamp.valueOf("2016-06-22 19:10:25")), Some("scienceType")))
    val actualResult = Await.result(response, 10 seconds)
    assert(actualResult === 1)
    val response2 = getAllRequest()
    assert(Await.result(response2, 5 seconds).size === 2)
  }
}
The insert assert above passes, indicating that the record was inserted. But the getAllRequest() assert fails: the output still contains only the single row inserted by schemadata.sql, which means the insertRequest change is not persisted. Yet the statements below report that the record was inserted, since the insert returned 1 (one record inserted):
val response = insertRequest(Request("CMP_XYZ","tesco_uk", "DM", "RUNNING", "0.1", "l1", "file1",
Timestamp.valueOf("2016-06-22 19:10:25"), Some(Timestamp.valueOf("2016-06-22 19:10:25")),
Some("scienceType")))
val actualResult=Await.result(response,10 seconds)
Below is my definition of insertRequest:
def insertRequest(request: Request): Future[Int] = {
  db.run { requestTableQuery += request }
}
I am unable to figure out how I can see the inserted record. Is there any property/config I need to add?

But the getAllRequest() assert fails as the output still contains the single row(as inserted by schemadata.sql) => which means the insertRequest change is not persisted
I would double-check that the assert(Await.result(response2, 5 seconds).size === 2) line is failing because of a size difference. Could it be failing for some other, more general reason?
For example, as INIT is run on each connection, it could be that you are re-creating the database for each connection. Unless you're careful with the SQL, that could produce an error such as "table already exists". Adding TRACE_LEVEL_SYSTEM_OUT=2; to your H2 URL can be helpful in tracking what H2 is doing.
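With tracing enabled, the URL from the question would look like this (only the TRACE_LEVEL_SYSTEM_OUT setting is new):
val h2Url = "jdbc:h2:mem:test;MODE=PostgreSQL;DB_CLOSE_DELAY=-1;TRACE_LEVEL_SYSTEM_OUT=2;DATABASE_TO_UPPER=false;INIT=runscript from './test/resources/schema.sql'\\;runscript from './test/resources/schemadata.sql'"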
A couple of suggestions.
First, you could ensure your SQL only runs as needed. For example, your schema.sql could add checks to avoid trying to create the table twice:
CREATE TABLE IF NOT EXISTS my_table( my_column VARCHAR NULL );
And likewise for your schemadata.sql:
MERGE INTO my_table KEY(my_column) VALUES ('a') ;
Alternatively, you could establish the schema and test data around your tests (e.g., possibly in Scala code, using Slick), as in the sketch below. Your test framework probably has a way to ensure something is run before and after a test or test suite.
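For instance, a minimal sketch using ScalaTest's BeforeAndAfterAll, assuming the requestTableQuery from the question and a URL without the INIT scripts (names are illustrative):
import scala.concurrent.Await
import scala.concurrent.duration._
import org.scalatest.{BeforeAndAfterAll, FunSuite}

class RequestRepoTest extends FunSuite with RequestRepo with H2DBComponent with BeforeAndAfterAll {
  import driver.api._

  // Create the schema once, before any test in this suite runs.
  override def beforeAll(): Unit =
    Await.result(db.run(requestTableQuery.schema.create), 10.seconds)

  // Drop it afterwards so reruns start from a clean database.
  override def afterAll(): Unit =
    Await.result(db.run(requestTableQuery.schema.drop), 10.seconds)
}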


Store generated dynamic unique ID and pass to next test case

I have a Groovy keyword which allows me to generate a dynamic unique ID for test data purposes.
package kw

import java.text.SimpleDateFormat
import com.kms.katalon.core.annotation.Keyword

class dynamicId {
    // Timestamp
    String timeStamp() {
        return new SimpleDateFormat('ddMMyyyyhhmmss').format(new Date())
    }

    // Generate a random number
    Integer getRandomNumber(int min, int max) {
        return ((Math.floor(Math.random() * ((max - min) + 1))) as int) + min
    }

    /**
     * Generate a unique key and return the value to the service
     */
    @Keyword
    String getUniqueId() {
        String prodName = (Integer.toString(getRandomNumber(1, 99))) + timeStamp()
        return prodName
    }
}
Then I have a couple of API test cases, as below.
Test case 1:
POST test data by calling the keyword. This test case works well; the dynamic unique ID is posted and stored in the database.
Partial test case:
// Test data using the dynamic ID
NewId = CustomKeywords.'kw.dynamicId.getUniqueId'()
println('....DO' + NewId)
GlobalVariable.DynamicId = NewId
// Test data to simulate the Ingest Service sending a Dispense Order to the Dispense Order Service.
def incomingDOInfo = '{"Operation":"Add","Msg":{"id":"' + GlobalVariable.DynamicId + '"}}'
Now, test case 2 serves as a verification test case, where I need to verify that the dynamic unique ID can be retrieved by the GET API (GET back the data by ID; this ID should match the one that was POSTed).
How do I store the generated dynamic unique ID once it is generated in test case 1?
I have the "println('....DO' + NewId)" in test case 1, but I have no idea how to use it and put it in test case 2.
Which method should I use to get back the generated dynamic unique ID?
Updated Test Case 2 with the suggestion; it works well.
def dispenseOrderId = GlobalVariable.DynamicId
'Check data'
getDispenseOrder(dispenseOrderId)

def getDispenseOrder(def dispenseOrderId) {
    def response = WS.sendRequestAndVerify(findTestObject('Object Repository/Web Service Request/ApiDispenseorderByDispenseOrderIdGet',
        [('dispenseOrderId') : dispenseOrderId, ('SiteHostName') : GlobalVariable.SiteHostName, ('SitePort') : GlobalVariable.SitePort]))
    println(response.statusCode)
    println(response.responseText)
    WS.verifyResponseStatusCode(response, 200)
    println(response.responseText)
    // Convert to JSON format and verify the result
    def dojson = new JsonSlurper().parseText(new String(response.responseText))
    println('response text: \n' + JsonOutput.prettyPrint(JsonOutput.toJson(dojson)))
    assertThat(dojson.dispenseOrderId).isEqualTo(dispenseOrderId)
    assertThat(dojson.state).isEqualTo("NEW")
}
====================
Updated post to try suggestion #2; it works.
TC2:
// Retrieve the dynamic ID generated in the previous test case
def file = new File("C:/DynamicId.txt")
// Modify this to match test data at test case "IncomingDOFromIngest"
def dispenseOrderId = file.text
'Check posted DO data from DO service'
getDispenseOrder(dispenseOrderId)

def getDispenseOrder(def dispenseOrderId) {
    def response = WS.sendRequestAndVerify(findTestObject('Object Repository/Web Service Request/ApiDispenseorderByDispenseOrderIdGet',
        [('dispenseOrderId') : dispenseOrderId, ('SiteHostName') : GlobalVariable.SiteHostName, ('SitePort') : GlobalVariable.SitePort]))
    println(response.statusCode)
    println(response.responseText)
    WS.verifyResponseStatusCode(response, 200)
    println(response.responseText)
}
There are multiple ways of doing this that I can think of.
1. Store the value of the dynamic ID in a GlobalVariable
If you are running Test Case 1 (TC1) and TC2 in a test suite, you can use the global variable for inter-test storage.
You are already doing this in the TC1:
GlobalVariable.DynamicId = NewId
Now, this will only work if TC1 and TC2 are running as a part of the same test suite. That is because GlobalVariables are reset to default on the teardown of the test suite or the teardown of a test case when a single test case is run.
Let us say you retrieved the GET response and put it in a response variable.
assert response.equals(GlobalVariable.DynamicId)
2. Store the value of the dynamic ID on the filesystem
This method will work even if you run the test cases separately (i.e. not in a test suite).
You can use the file system to store the ID value permanently in a file. There are various Groovy methods to help you with that.
Here's an example of how to store the ID in a text file c:/path-to/variable.txt:
def file = new File("c:/path-to/variable.txt")
file.newWriter().withWriter { it << NewId }
println file.text
TC2 then needs this assertion (adjust according to your needs):
def file = new File("c:/path-to/variable.txt")
assert response.equals(file.text)
Make sure you defined file in TC2, as well.
3. Return the ID value at the end of TC1 and use it as an input to TC2
This also presupposes TC1 and TC2 are in the same test suite. You return the value of the ID with
return NewId
and then use it as an input parameter for TC2.
4. Use test listeners
This is the same idea as the first solution; you just use test listeners to create a temporary holding variable that will be active during the test suite run.

Create table in Phoenix from Spark

Hi, I need to create a table in Phoenix from a Spark job. I have tried the two ways below, but neither of them works; it seems this is still not supported.
1) DataFrame.write still requires that the table already exists:
df.write.format("org.apache.phoenix.spark").mode("overwrite").option("table", schemaName.toUpperCase + "." + tableName.toUpperCase ).option("zkUrl", hbaseQuorum).save()
2) If we connect to Phoenix through JDBC and try to execute the CREATE statement, we get a parsing error (the same CREATE works in Phoenix):
var ddlCode = "create table test (mykey integer not null primary key, mycolumn varchar) "
val driver = "org.apache.phoenix.jdbc.PhoenixDriver"
val jdbcConnProps = new Properties()
jdbcConnProps.setProperty("driver", driver)
val jdbcConnString = "jdbc:phoenix:hostname:2181/hbase-unsecure"
sqlContext.read.jdbc(jdbcConnString, ddlCode, jdbcConnProps)
error:
org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "create" at line 1, column 15.
Anyone with similar challenges that managed to do it differently?
I have finally worked out a solution for this. Basically, I think I was wrong to try to use the SQLContext read method for this; that method is designed just to "read" data sources, and it wraps the table string you pass into a schema-inference query (SELECT * FROM ...), which would explain why Phoenix encounters "create" at column 15, right after "SELECT * FROM ". The way to work around it is basically to open a standard JDBC connection against Phoenix:
import java.sql.{Connection, DriverManager}

val ddlCode = "create table test (mykey integer not null primary key, mycolumn varchar) "
val jdbcConnString = "jdbc:phoenix:hostname:2181/hbase-unsecure"
val user = "USER"
val pass = "PASS"
val driver = "org.apache.phoenix.jdbc.PhoenixDriver"
var connection: Connection = null
Class.forName(driver)
connection = DriverManager.getConnection(jdbcConnString, user, pass)
val statement = connection.createStatement()
statement.executeUpdate(ddlCode)
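Once the table exists, the phoenix-spark write from the question should then succeed. A minimal sketch of the combined flow, reusing the question's own names (df, schemaName, tableName, hbaseQuorum) and assuming the table schema matches the DataFrame:
// Create the table over plain JDBC first (Phoenix parses the DDL itself) ...
statement.executeUpdate(ddlCode)
statement.close()
connection.close()

// ... then save the DataFrame into the now-existing table.
df.write
  .format("org.apache.phoenix.spark")
  .mode("overwrite")
  .option("table", schemaName.toUpperCase + "." + tableName.toUpperCase)
  .option("zkUrl", hbaseQuorum)
  .save()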

Anorm String Interpolation not replacing variables

We are using Scala Play, and I am trying to ensure that all SQL queries use Anorm's string interpolation. It works with some queries, but many are not actually replacing the variables before the query executes.
import anorm.SQL
import anorm.SqlStringInterpolation

object SecureFile {
  val table = "secure_file"
  val pk = "secure_file_idx"
  ...

  // This method works exactly as I would hope
  def insert(secureFile: SecureFile): Option[Long] = {
    DBExec { implicit connection =>
      SQL"""
        INSERT INTO secure_file (
          subscriber_idx,
          mime_type,
          file_size_bytes,
          portal_msg_idx
        ) VALUES (
          ${secureFile.subscriberIdx},
          ${secureFile.mimeType},
          ${secureFile.fileSizeBytes},
          ${secureFile.portalMsgIdx}
        )
      """.executeInsert()
    }
  }

  def delete(secureFileIdx: Long): Int = {
    DBExec { implicit connection =>
      // Prints correct values
      println(s"table: ${table} pk: ${pk} secureFileIdx: ${secureFileIdx} ")
      // Does not work
      SQL"""
        DELETE FROM $table WHERE ${pk} = ${secureFileIdx}
      """.executeUpdate()
      // Works, but unsafe
      val query = s"DELETE FROM ${table} WHERE ${pk} = ${secureFileIdx}"
      SQL(query).executeUpdate()
    }
  }

  ....
}
Over in the PostgreSQL logs, it's clear that the delete statement has not acquired the correct values:
2015-01-09 17:23:03 MST ERROR: syntax error at or near "$1" at character 23
2015-01-09 17:23:03 MST STATEMENT: DELETE FROM $1 WHERE $2 = $3
2015-01-09 17:23:03 MST LOG: execute S_1: ROLLBACK
I've tried many variations of execute, executeUpdate, and executeQuery with similar results. For the moment, we are using basic string replacement but of course this is bad because it's not using PreparedStatements.
For anyone else sitting on this page scratching their head and wondering what they might be missing...
SQL("select * from mytable where id = $id")
is NOT the same as
SQL"select * from mytable where id = $id"
The former does not do String interpolation whereas the latter does.
This is easily overlooked in the aforementioned docs, as all the samples provided just happen to have a (non-related) closing parenthesis on them (like this sentence does).
Anorm string interpolation was introduced to pass parameters (e.g. SQL"Select * From Test Where id = $x"), with interpolation arguments (e.g. $x) set on the underlying PreparedStatement with the proper type conversion (see the use cases at https://www.playframework.com/documentation/2.3.x/ScalaAnorm).
The next Anorm release will also have the #$foo syntax to mix interpolation for parameters with standard string interpolation. This will allow you to write DELETE FROM #$table WHERE #${pk} = ${secureFileIdx} and have it executed as DELETE FROM foo WHERE bar = ? (if the literal table is "foo" and pk is "bar"), with the secureFileIdx value passed as a parameter. See the related pull request.
Until the next revision is released, you can build Anorm from its master sources to benefit from this change.
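Applied to the delete method from the question, a minimal sketch of that syntax (assuming a build that includes the #$ change) would be:
def delete(secureFileIdx: Long): Int = {
  DBExec { implicit connection =>
    // #$ splices table and pk in as literal SQL; secureFileIdx remains a bound parameter
    SQL"""DELETE FROM #$table WHERE #$pk = $secureFileIdx""".executeUpdate()
  }
}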

Cassandra-Hector-Scala: How can I get all row composite keys in a column family?

My data storage format is:
Family name: Test
Rowkey: comkey1:comkey2
=>(name=name,value='xyz',timestamp=1554515485)
-------------------------------------------------------
Rowkey: comkey1:comkey3
=>(name=name,value='abc',timestamp=1554515485)
-------------------------------------------------------
Rowkey: comkey1:comkey4
=>(name=name,value='pqr',timestamp=1554515485)
-------------------------------------------------------
Now I want to fetch all composite keys from the "test" family, and I am trying:
def test = Action {
  val cluster = HFactory.getOrCreateCluster("Test Cluster", "127.0.0.1:9160")
  val keyspace = HFactory.createKeyspace("winoriatest", cluster)
  val startKey = new Composite()
  val endKey = new Composite()
  startKey.addComponent("comkey1", StringSerializer.get())
  startKey.addComponent("comkey2", StringSerializer.get())
  endKey.addComponent("comkey1", StringSerializer.get())
  endKey.addComponent("comkey4", StringSerializer.get())
  val rangeSlicesQuery = HFactory.createRangeSlicesQuery(keyspace, CompositeSerializer.get(), StringSerializer.get(), StringSerializer.get())
  rangeSlicesQuery.setColumnFamily("test")
  // CompositeSerializer.get() is not working.
  rangeSlicesQuery.setKeys(startKey, endKey)
  rangeSlicesQuery.setRange(null, null, false, Integer.MAX_VALUE)
  rangeSlicesQuery.setReturnKeysOnly()
  val result = rangeSlicesQuery.execute()
  val orderedRows = result.get()
  import scala.collection.JavaConversions._
  for (sc <- orderedRows) {
    println(sc.getKey())
  }
  Ok(views.html.index("Your new application is ready."))
}
Error: [NullPointerException: null] on the line
val result = rangeSlicesQuery.execute()
Cassandra 2.0, Scala 2.10.2.
Thank you in advance for your help in resolving this.
It gives me a NullPointerException, yet the same code works in Java. My Java code is:
Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "127.0.0.1:9160");
Keyspace keyspace = HFactory.createKeyspace("winoriatest", cluster);
Serializer<String> se = StringSerializer.get();
Serializer<Long> le = LongSerializer.get();
Serializer<Integer> ie = IntegerSerializer.get();
CompositeSerializer ce = new CompositeSerializer();
RangeSlicesQuery<Composite, String, byte[]> rangeSliceQuery = HFactory.createRangeSlicesQuery(keyspace, ce, se, BytesArraySerializer.get());
rangeSliceQuery.setColumnFamily("test");
rangeSliceQuery.setRange(null, null, false, Integer.MAX_VALUE);
QueryResult<OrderedRows<Composite, String, byte[]>> result = rangeSliceQuery.execute();
OrderedRows<Composite, String, byte[]> orderedRows = result.get();
for (Row<Composite, String, byte[]> r : orderedRows) {
    System.out.println("Compositekey=" + r.getKey().get(0, se) + ":" + r.getKey().get(1, se));
}
I'm not quite sure what "I want to fetch all composite keys from the test family" means. If you mean you want to get just the partition [row] key components, then you can do this in CQL as simply as:
SELECT DISTINCT a, b FROM test
(Assigning a and b to be the column names.)
This is a good example of how much simpler CQL makes Cassandra development, which is why we're pushing people to use the native CQL driver over legacy clients like Hector.
For more on how CQL makes sense of a Thrift data model like this, see http://www.datastax.com/dev/blog/cql3-for-cassandra-experts.
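As an illustration of that advice, reading the distinct partition keys from Scala with the DataStax Java driver might look like the sketch below; the keyspace and the a/b column names are assumptions carried over from the CQL example:
import com.datastax.driver.core.Cluster
import scala.collection.JavaConversions._

val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
val session = cluster.connect("winoriatest")

// DISTINCT over the partition-key columns yields each row key exactly once
for (row <- session.execute("SELECT DISTINCT a, b FROM test")) {
  println("Compositekey=" + row.getString("a") + ":" + row.getString("b"))
}

cluster.close()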

How to save design document / view in CouchDB database from Scala program?

I have written a Scala program for creating a new database and adding documents/views to it.
object CouchDBTest extends App {
  val dbSession = new Session("localhost", 5984)
  val db = dbSession.createDatabase("couchschooltest")

  val newC1 = new Document
  newC1.put("Type", "Class")
  newC1.put("ClassId", "C1")
  newC1.put("ClassName", "C-2A")
  newC1.put("ClassTeacher", "T1")
  newC1.accumulate("Students", "S1")
  newC1.accumulate("Students", "S2")
  newC1.accumulate("Students", "S3")
  db.saveDocument(newC1)

  val viewDocClass = new Document
  viewDocClass.addView("Class", "function(doc) {if(doc.Type == 'Class') { emit([doc.ClassId, doc.ClassName, doc.ClassTeacher, doc.Students], doc);}}")
  db.saveDocument(viewDocClass)
}
When I run this code, it creates the new database in CouchDB and adds the class document to this database. However, it doesn't add the view. It gives a runtime error while adding viewDocClass:
Error adding document - null null
For this I used the couchdb4j API.
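One detail worth checking: in CouchDB, views must live in a design document whose _id starts with _design/. A minimal sketch of that idea, assuming couchdb4j's Document exposes a setId method (an assumption, not verified against the library):
val viewDocClass = new Document
// Assumption: design documents need an _id of the form _design/<name>
viewDocClass.setId("_design/classes")
viewDocClass.addView("Class", "function(doc) {if(doc.Type == 'Class') { emit([doc.ClassId, doc.ClassName, doc.ClassTeacher, doc.Students], doc);}}")
db.saveDocument(viewDocClass)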