Is it necessary to close the Elasticsearch connection? - scala

I am creating an API in Scala with the Spray.IO library. The API searches Elasticsearch. My question is also related to this question.
var klt: TransportClient = EsClient_08012017.klien1
var arg = Array(JsObject(Map("id" -> JsString("-1"), "item" -> JsString("-1"), "score" -> JsString("-1"))))
if (cariIndex(namaIndexCari) && cariIndex(namaIndexCari + "_2")) {
  if (hitungJumlahIndex(namaIndexCari) > hitungJumlahIndex(namaIndexCari + "_2")) {
    val ar = ambilRekomendasi(idPenggunaCari, namaTipeCari, namaIndexCari, jumlah, false)
    val atd = acakTanpaDuplikat(ar)
    arg = parsingJsObject(atd)
  } else {
    val ar = ambilRekomendasi(idPenggunaCari, namaTipeCari, namaIndexCari + "_2", jumlah, false)
    val atd = acakTanpaDuplikat(ar)
    arg = parsingJsObject(atd)
  }
} else {
  val ar = ambilRekomendasi(idPenggunaCari, namaTipeCari, namaIndexCari, jumlah, false)
  val atd = acakTanpaDuplikat(ar)
  arg = parsingJsObject(atd)
}
klt.close()
arg
The first API hit works fine, but on the second hit I get this error:
None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9300}]
What I want is for each API hit to close the connection to ES and then open it again. But the reference link says "it's okay without closing connections". Thanks for any help, links, or references!

Never close it unless you are shutting down your application. The TransportClient is thread-safe and meant to be created once and shared by all requests; the klt.close() at the end of your handler is exactly why the second hit fails with "None of the configured nodes are available".
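In practice that means the client should live for the whole application, not per request. A minimal sketch of that lifecycle, using a stand-in class because the Elasticsearch TransportClient itself is not available here (only the create-once/close-on-shutdown pattern matters):

```scala
// Stand-in for org.elasticsearch.client.transport.TransportClient:
// a resource that must not be closed between requests.
class SearchClient extends AutoCloseable {
  @volatile var closed = false
  def search(q: String): String = {
    require(!closed, "client already closed")
    s"results for $q"
  }
  override def close(): Unit = closed = true
}

object EsClient {
  // Initialized once on first use and shared by every API hit.
  lazy val client: SearchClient = new SearchClient

  // Call this from an application shutdown hook, never from a request handler.
  def shutdown(): Unit = client.close()
}
```

Each request handler then uses EsClient.client.search(...) and never calls close() itself; the connection stays open across hits.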

Related

How to provide the output result of a function as a PUT request in scala?

I have Scala code that converts a layer to a GeoTIFF file. Now I want to pass this GeoTIFF file in a PUT request to a REST service. How can I do that?
Here is a section of the code:
val labeled_layerstack = {
  // Labeled layer stack
  // val layers_input = Array(layer_dop) ++ layers_sat
  val layers_labeled_input = Array(layer_label) ++ Array(output_layerstack) // ++ layers_input
  ManyLayersToMultibandLayer(layers_labeled_input, output_labeled_layerstack)
  output_labeled_layerstack
}
if (useCleanup) {
  DeleteLayer(layer_label)
  if (useDOP)
    DeleteLayer(layer_dop)
  for (layer_x <- layers_sat)
    DeleteLayer(layer_x)
}
labeled_layerstack
}
else output_labeled_layerstack // if reusing an existing layer stack (processing steps without "layerstack")

if (processingSteps.isEmpty || processingSteps.get.steps.exists(step => step == "classification")) {
  if (useRandomForest) {
    ClusterTestRandomForest(labeled_layerstack, fileNameClassifier, layerResult, Some(output_layerstack))
    if (useExportResult) {
      LayerToGeotiff(layerResult, fileNameResult, useStitching = useExportStitching)
    }
  } else if (useSVM) {
    ClusterTestSVM(labeled_layerstack, fileNameClassifier, layerResult, Some(output_layerstack))
    if (useExportResult) {
      LayerToGeotiff(layerResult, fileNameResult, useStitching = useExportStitching)
    }
  }
}
The original code is quite long and not shareable, so I am sharing the part relevant to the problem. The output of LayerToGeotiff should be sent in a PUT request. How can I create such a request?
I suggest using the Play framework to send the PUT request.
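If you prefer to avoid a framework dependency, the upload can also be sketched with the JDK's own HttpURLConnection. The endpoint URL, path, and Content-Type below are illustrative assumptions, not part of the original code:

```scala
import java.io.File
import java.net.{HttpURLConnection, URL}
import java.nio.file.Files

// PUT the bytes of a file to a REST endpoint and return the HTTP status code.
def putFile(urlStr: String, file: File): Int = {
  val conn = new URL(urlStr).openConnection().asInstanceOf[HttpURLConnection]
  conn.setRequestMethod("PUT")
  conn.setDoOutput(true)
  conn.setRequestProperty("Content-Type", "image/tiff")
  val out = conn.getOutputStream
  try out.write(Files.readAllBytes(file.toPath))
  finally out.close()
  conn.getResponseCode // e.g. 200 or 201 on success, depending on the server
}
```

After LayerToGeotiff(layerResult, fileNameResult, ...) has written the file, something like putFile("http://example.org/layers/result", new File(fileNameResult)) would ship it; with Play's WS client the equivalent is ws.url(endpoint).put(...).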

Download all the files from an S3 bucket using Scala

The code below downloads one file successfully, but I am unable to download the whole list of files:
client.getObject(
  new GetObjectRequest(bucketName, "TestFolder/TestSubfolder/Psalm/P.txt"),
  new File("test.txt"))
Thanks in advance.
Update
I tried the code below, but I am getting a list of directories; I want a list of files instead:
val listObjectsRequest = new ListObjectsRequest().
  withBucketName("tivo-hadoop-dev").
  withPrefix("prefix").
  withDelimiter("/")
client.listObjects(listObjectsRequest).getCommonPrefixes
It's a simple thing, but I struggled quite a bit before arriving at the answer below.
I found Java code, converted it to Scala, and it worked:
import scala.collection.JavaConverters._

val client = new AmazonS3Client(credentials)
val listObjectsRequest = new ListObjectsRequest().
  withBucketName("bucket-name").
  withPrefix("path/of/dir").
  withDelimiter("/")
var objects = client.listObjects(listObjectsRequest)
do {
  for (objectSummary <- objects.getObjectSummaries.asScala) {
    val key = objectSummary.getKey
    println(key)
    val arr = key.split("/")
    val fileName = arr(arr.length - 1)
    client.getObject(
      new GetObjectRequest("bucket-name", key), // same bucket as in the listing request
      new File("some/path/" + fileName))
  }
  // keep paging until the listing is no longer truncated
  objects = client.listNextBatchOfObjects(objects)
} while (objects.isTruncated)
The code below is fast and useful, especially when you want to download all objects into a specific local directory. It keeps the files under the exact same S3 prefix hierarchy:
val xferMgrForAws: TransferManager = TransferManagerBuilder.standard().withS3Client(awsS3Client).build()
val objectListing: ObjectListing = awsS3Client.listObjects(awsBucketName, prefix)
val summaries: java.util.List[S3ObjectSummary] = objectListing.getObjectSummaries
if (summaries.size() > 0) {
  val xfer: MultipleFileDownload = xferMgrForAws.downloadDirectory(awsBucketName, prefix, new File(localDirPath))
  xfer.waitForCompletion()
  println("All files downloaded successfully!")
} else {
  println("No objects present in the bucket!")
}

Read password in Scala in a console-agnostic way

I have an easy task to accomplish: read a password from a command-line prompt without exposing it. I know there is java.io.Console.readPassword; however, there are times when you cannot access the console, for example when running your app from an IDE (such as IntelliJ).
I stumbled upon the Password Masking in the Java Programming Language tutorial, which looks nice, but I am failing to implement it in Scala. So far my solution is:
class EraserThread() extends Runnable {
  private var stop = false

  override def run(): Unit = {
    stop = true
    while (stop) {
      System.out.print("\010*") // "\010" is the octal escape for backspace
      try
        Thread.sleep(1)
      catch {
        case ie: InterruptedException =>
          ie.printStackTrace()
      }
    }
  }

  def stopMasking(): Unit = {
    this.stop = false
  }
}

val et = new EraserThread()
val mask = new Thread(et)
mask.start()
val password = StdIn.readLine("Password: ")
et.stopMasking()
When I run this snippet I get a continuous stream of asterisks printed on new lines, e.g.:
*
*
*
*
Is there anything specific to Scala that makes this not work? Or is there a better way to do this in Scala in general?
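This is likely not Scala-specific: the EraserThread trick relies on the console honoring the backspace character "\010", which many IDE consoles do not. One common workaround (a hedged sketch, not from the original tutorial; the injectable `console` parameter is my addition for testability) is to use java.io.Console.readPassword when a real terminal is attached and fall back to an unmasked read otherwise:

```scala
// Use the terminal's own masking when available; otherwise (IDE, piped
// stdin) masking is impossible, so read the line in the clear.
def readPassword(prompt: String,
                 console: java.io.Console = System.console()): String = {
  if (console != null)
    new String(console.readPassword(prompt)) // terminal suppresses echo itself
  else {
    print(prompt) // no terminal attached: plain, unmasked read
    scala.io.StdIn.readLine()
  }
}
```

From a real terminal this masks the input; inside IntelliJ it degrades gracefully instead of spewing asterisks.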

neo4j 3.0 embedded - no nodes

There's something I must be missing about embedded neo4j 3.0. After creating a node, setting some properties, and marking the transaction as successful, I re-open the DB, but there are no nodes in it! What am I missing here? The neo4j documentation is pretty poor.
val graph1 = {
  val graphDb = new GraphDatabaseFactory()
    .newEmbeddedDatabase(new File("/opt/neo4j/deviceGraphTest"))
  val tx = graphDb.beginTx()
  val node = graphDb.createNode()
  node.setProperty("name", "kitchen island")
  node.setProperty("bulbType", "incandescent")
  tx.success()
  graphDb.shutdown()
}

val graph2 = {
  val graphDb2 = new GraphDatabaseFactory()
    .newEmbeddedDatabase(new File("/opt/neo4j/deviceGraphTest"))
  val tx2 = graphDb2.beginTx()
  val allNodes = graphDb2.getAllNodes.iterator().toList
  allNodes.foreach(node => printNode(node))
}
The transaction you have opened has to be closed with tx.close() after you mark it as successful; tx.success() only flags the transaction, and the commit happens on close. I do not know the exact Scala syntax, but it would be good to put the whole block into a try/catch and close the transaction in the finally block.
Here is the documentation for Java: https://neo4j.com/docs/java-reference/current/javadocs/org/neo4j/graphdb/Transaction.html
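In Scala the try/finally shape the answer describes can be sketched like this. The org.neo4j classes are not available here, so a minimal stand-in trait carries the same success-then-close lifecycle; with the real API, `beginTx` would be `graphDb.beginTx _`:

```scala
// Stand-in for org.neo4j.graphdb.Transaction: success() flags the commit,
// close() actually performs it (or rolls back if success() was never called).
trait Tx { def success(): Unit; def close(): Unit }

// Run `work` in a transaction; close() is guaranteed even if `work` throws,
// mirroring Java's try-with-resources from the linked documentation.
def inTransaction[A](beginTx: () => Tx)(work: Tx => A): A = {
  val tx = beginTx()
  try {
    val result = work(tx)
    tx.success() // without this the close() below rolls the write back
    result
  } finally {
    tx.close() // the missing call in graph1: no close, no committed nodes
  }
}
```

Applied to the question, the node creation in graph1 would go inside `work`, and graphDb.shutdown() would follow only after the transaction is closed.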

Difference between RoundRobinRouter and RoundRobinRoutingLogic

I was reading a tutorial about Akka and came across http://manuel.bernhardt.io/2014/04/23/a-handful-akka-techniques/, which I think explains things pretty well. However, I picked up Scala only recently and am having difficulties with the tutorial above.
What is the difference between RoundRobinRouter and the current RoundRobinRoutingLogic? Obviously the implementations are quite different.
Previously, RoundRobinRouter was used like this:
val workers = context.actorOf(Props[ItemProcessingWorker].withRouter(RoundRobinRouter(100)))
with this processBatch:
def processBatch(batch: List[BatchItem]) = {
  if (batch.isEmpty) {
    log.info(s"Done migrating all items for data set $dataSetId. $totalItems processed items, we had ${allProcessingErrors.size} errors in total")
  } else {
    // reset processing state for the current batch
    currentBatchSize = batch.size
    allProcessedItemsCount = currentProcessedItemsCount + allProcessedItemsCount
    currentProcessedItemsCount = 0
    allProcessingErrors = currentProcessingErrors ::: allProcessingErrors
    currentProcessingErrors = List.empty
    // distribute the work
    batch foreach { item =>
      workers ! item
    }
  }
}
Here's my implementation using RoundRobinRoutingLogic:
var mappings: Option[ActorRef] = None
var router = {
  val routees = Vector.fill(100) {
    mappings = Some(context.actorOf(Props[Application3]))
    context watch mappings.get
    ActorRefRoutee(mappings.get)
  }
  Router(RoundRobinRoutingLogic(), routees)
}
and adapted processBatch accordingly:
def processBatch(batch: List[BatchItem]) = {
  if (batch.isEmpty) {
    println(s"Done migrating all items for data set $dataSetId. $totalItems processed items, we had ${allProcessingErrors.size} errors in total")
  } else {
    // reset processing state for the current batch
    currentBatchSize = batch.size
    allProcessedItemsCount = currentProcessedItemsCount + allProcessedItemsCount
    currentProcessedItemsCount = 0
    allProcessingErrors = currentProcessingErrors ::: allProcessingErrors
    currentProcessingErrors = List.empty
    // distribute the work
    batch foreach { item =>
      // println(item.id)
      mappings.get ! item
    }
  }
}
I somehow cannot get this tutorial to run; it gets stuck at the point where it iterates over the batch list. I wonder what I did wrong. Thanks.
First, you have to distinguish between the two: RoundRobinRouter is a router that uses round-robin to select a connection, while RoundRobinRoutingLogic uses round-robin to select a routee.
You can also provide your own RoutingLogic (doing so helped me understand how Akka works under the hood):
class RedundancyRoutingLogic(nbrCopies: Int) extends RoutingLogic {
  val roundRobin = RoundRobinRoutingLogic()
  def select(message: Any, routees: immutable.IndexedSeq[Routee]): Routee = {
    val targets = (1 to nbrCopies).map(_ => roundRobin.select(message, routees))
    SeveralRoutees(targets)
  }
}
Documentation link: http://doc.akka.io/docs/akka/2.3.3/scala/routing.html
P.S. This doc is very clear, and it has helped me the most.
Actually, I had misunderstood the method; the solution was to use RoundRobinPool, as stated in http://doc.akka.io/docs/akka/2.3-M2/project/migration-guide-2.2.x-2.3.x.html:
For example RoundRobinRouter has been renamed to RoundRobinPool or
RoundRobinGroup depending on which type you are actually using.
from
val workers = context.actorOf(Props[ItemProcessingWorker].withRouter(RoundRobinRouter(100)))
to
val workers = context.actorOf(RoundRobinPool(100).props(Props[ItemProcessingWorker]), "router2")