Docker Akka-Http application endpoint not reachable - scala

I have a very basic Akka HTTP app, not much more than a hello-world setup: I have defined an endpoint and simply bound it to "localhost" on port 8080:
object Main extends App with Routes {
  private implicit val system = ActorSystem()
  protected implicit val executor: ExecutionContext = system.dispatcher
  protected implicit val materializer: ActorMaterializer = ActorMaterializer()
  protected val log: LoggingAdapter = Logging(system, getClass)

  log.info("starting server")
  Http().bindAndHandle(logRequestResult("log", Logging.InfoLevel)(allRoutes), "localhost", 8080)
  log.info("server started, awaiting requests..")
}
(allRoutes is mixed in via Routes, but is just a dummy endpoint that serialises a simple case class to a JSON response)
If I start it up using sbt then the endpoint works fine (http://localhost:8080/colour/red for example).
I am now trying to package it into a Docker container to run it - I have read things like http://yeghishe.github.io/2015/06/24/running-akka-applications.html and have added the sbt-native-packager plugin (http://www.scala-sbt.org/sbt-native-packager/formats/docker.html#customize).
Now I run sbt docker:publishLocal
And I can see that the docker image has been created:
REPOSITORY TAG IMAGE ID CREATED SIZE
sample-rest-api 0.0.1 3c6ee44985b4 9 hours ago 714.4 MB
If I now start my image, mapping the 8080 port as follows:
docker run -p 8080:8080 sample-rest-api:0.0.1
I see the log output I normally see on startup, so it looks like it has started OK. However, if I then attempt to access the same URL as before, I now get the response
"Problem loading the page: The connection was reset"
If I check docker ps I see that the container is running, with the port mapped as expected:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
27848729a425 sample-rest-api:0.0.1 "bin/sample-rest-api" About a minute ago Up About a minute 0.0.0.0:8080->8080/tcp furious_heisenberg
I am running on Ubuntu 16.04 - anyone have any ideas?

Try changing "localhost" to "0.0.0.0" in the Http().bindAndHandle call. Binding to localhost inside the container means the server only accepts connections originating from inside the container itself, so the published port mapping can never reach it.
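For example, a minimal sketch of the changed binding, keeping the same route wrapper as in the question (allRoutes and the implicit system/materializer are assumed to be set up as above):

// Bind on all interfaces so requests forwarded by `docker run -p 8080:8080`
// can reach the server from outside the container.
Http().bindAndHandle(logRequestResult("log", Logging.InfoLevel)(allRoutes), "0.0.0.0", 8080)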


Allow all requests to the AKKA-HTTP service

Comrades!
I have a small service based on the Akka HTTP example.
import ch.megard.akka.http.cors.scaladsl.CorsDirectives._

object Server extends App {
  val route = cors() {
    path("hello") {
      get {
        complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, "<h1>Привет ёпта</h1>"))
      }
    }
  }

  val routes = cors() {
    concat(route, getUser, createUser, addMessage, getQueue, test, test2)
  }

  val bindingFuture = Http().newServerAt("localhost", 8080).bind(routes)
}
For CORS I created the file resources/application.conf:
akka-http-cors {
  allowed-origins = "*"
  allowed-methods = ["GET", "POST", "PUT", "DELETE", "HEAD", "OPTIONS"]
}
When I run the project in IntelliJ IDEA, the route works fine.
But if I run the project in Docker, the route doesn't work: both Chrome and Postman show errors (screenshots omitted, as are the errors shown when the project isn't running anywhere).
How do I properly configure the application.conf file so that the application accepts cross-origin requests? Or is the error hidden in something else?
Please tell me! I've been stuck on this for two days.
UPD: Dockerfile:
FROM openjdk:8-jre-alpine
WORKDIR /opt/docker
ADD --chown=daemon:daemon opt /opt
USER daemon
ENTRYPOINT ["/opt/docker/bin/servertelesupp"]
CMD []
Project on GitHub: https://github.com/MinorityMeaning/ServerTeleSupp
Change
Http().newServerAt("localhost", 8080).bind(routes)
to
Http().newServerAt("0.0.0.0", 8080).bind(routes)
By binding to localhost inside Docker, you will not be able to route traffic to it from outside the container.
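As a minimal sketch of the full binding line (assuming the same cors()-wrapped routes value as in the question):

// Bind on all interfaces so the port published with `docker run -p 8080:8080`
// is reachable from outside the container.
val bindingFuture = Http().newServerAt("0.0.0.0", 8080).bind(routes)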
What value are you using for the -p flag? It should be 8080:8080 if the server listens on port 8080 inside the container, or 8080:80 if it listens on port 80 (and so on). Also please verify that the port is actually free inside the container.
PS: I have low reputation and can't add a comment.

Proxy authentication issue using JsoupBrowser in a container

I'm having an issue trying to authenticate my proxy inside a Docker container.
Here's what I did:
Authenticator.setDefault(new Authenticator() {
  override def getPasswordAuthentication = new PasswordAuthentication(<USERNAME>, <PASSWORD>.toCharArray)
})

val browser = new JsoupBrowser(ua, proxy) {
  override def requestSettings(conn: Connection) = conn.timeout(5000)
}

// Step 1: We scrape the current page.
val doc = browser.get(baseUrl)
It works locally, but when I deploy it on my server I get an Error 407
java.io.IOException: Unable to tunnel through proxy. Proxy returns "HTTP/1.1 407 Proxy Authentication Required"
I also tried upgrading the configuration to container level but it didn't work.
I found a solution.
The problem only occurred on deployment, so I concluded that it came from the Docker build.
I added this JVM parameter, -Djdk.http.auth.tunneling.disabledSchemes=, to the CMD in my Dockerfile:
CMD java -Dhttp.port=${port} -Djdk.http.auth.tunneling.disabledSchemes= -Dplay.crypto.secret=secret $* -jar ./app-assembly.jar
It works now.

How to run a custom Docker image with testcontainers

I have gone through multiple blogs and the official documentation but couldn't resolve my issue. I am using testcontainers-scala version 0.38.1 and Scala version 2.11.
I am trying to create a simple test using testcontainers-scala, as below:
class MyServiceITSpec extends AnyFlatSpec with ForAllTestContainer {
  override val container = GenericContainer(dockerImage = "my-service",
    exposedPorts = Seq(8080),
    env = HashMap[String, String]("PARAM1" -> "value1", "PARAM2" -> "value2", "PARAM3" -> "value3"),
    waitStrategy = Wait.forHttp("/")
  )

  "GenericContainer" should "start my service and say Hello! Wassupp" in {
    assert(Source.fromInputStream(
      new URL(s"http://${container.containerIpAddress}:${container.mappedPort(8080)}/").openConnection().getInputStream
    ).mkString.contains("Hello! Wassupp"))
  }
}
On the basis of the above snippet, my understanding is this (please correct me if I'm wrong):
Port 8080 is exposed by the Docker container, and a random host port is assigned against it.
We can get that assigned port as container.mappedPort.
Here I am trying to assert that http://localhost:mappedPort/ returns Hello! Wassupp.
But, I get the below error:
Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:498)
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:325)
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
... 18 more
Caused by: org.testcontainers.containers.ContainerLaunchException: Timed out waiting for URL to be accessible (http://localhost:32869/ should return HTTP 200)
at org.testcontainers.containers.wait.strategy.HttpWaitStrategy.waitUntilReady(HttpWaitStrategy.java:214)
at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:35)
at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:890)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:441)
... 20 more
The same image runs just fine with:
docker run -p 8081:8080 -e PARAM1=value1 -e PARAM2=value2 -e PARAM3=VALUE3 my-service
So after juggling with the errors, I found my issue: the required request headers were missing from the request. I am adding the reference code for anyone who runs into a similar issue.
import com.dimafeng.testcontainers.{ForAllTestContainer, GenericContainer}
import org.scalatest.flatspec.AnyFlatSpec
import org.testcontainers.containers.wait.strategy.Wait

import scala.collection.immutable.HashMap
import scalaj.http.Http

class MyServiceITSpec extends AnyFlatSpec with ForAllTestContainer {
  override val container = GenericContainer(dockerImage = "my-service-img:tag12345",
    exposedPorts = Seq(8080),
    env = HashMap[String, String]("PARAM1" -> "value1", "PARAM2" -> "value2"),
    waitStrategy = Wait.forHttp("/") // or "/health", based on your implementation
  )

  "My Service" should "successfully fetch the msg" in {
    assert(Http(s"http://${container.containerIpAddress}:${container.mappedPort(8080)}/products/product1")
      .header("HEADER1", "value1")
      .header("HEADER2", "value2")
      .asString.code == 200)
  }
}
Some explanations that I found after a lot of reading:
You give the port number that your docker application exposes as exposedPorts.
Testcontainers then maps this port to a random host port (this is by design, to avoid port number conflicts). If you were to run this Docker image directly on your machine you would write:
docker run -p 8081:8080 -e PARAM1=value1 -e PARAM2=value2 my-service-img:tag12345
Here, your exposed port is 8080 and the mapped port is 8081.
Testcontainers runs the Docker image by exposing port 8080 and then mapping it to a random host port. The mapped port can be obtained with container.mappedPort(8080).
Another important thing to notice is the wait strategy. This tells the code to wait until the / endpoint is up. This is kind of a health check that your application exposes; you can use a better endpoint for this, like /health. By default it waits 60 seconds for this endpoint to become active; after that it runs the tests anyway, and if the application has not started by then, it causes an error. I was not sure how to override the default timeout, but it looks like the wait strategies support it (see the sketch below).
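A minimal sketch, assuming the same GenericContainer helper and image name as in the snippets above (the 2-minute value is arbitrary), using the underlying Testcontainers withStartupTimeout method:

import java.time.Duration
import org.testcontainers.containers.wait.strategy.Wait

// Wait for "/" to return HTTP 200, allowing up to 2 minutes instead of the default 60 seconds.
val patientWait = Wait.forHttp("/").withStartupTimeout(Duration.ofMinutes(2))

override val container = GenericContainer(dockerImage = "my-service-img:tag12345",
  exposedPorts = Seq(8080),
  waitStrategy = patientWait
)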
Lastly, I am using scalaj.http.Http to make an HTTP request (it is a pretty easy one to use).

mitmproxy script doesn't seem to be running?

I am trying to run a simple mitmproxy script by issuing ./mitmproxy --mode transparent -s pyscript.py. The proxy works fine and there's no error info in the mitmproxy console, but it seems the script didn't even run: the log.txt file is empty even though the proxy successfully proxied client requests:
import mitmproxy.addonmanager
import mitmproxy.http

class Events:
    def response(self, f: mitmproxy.http.HTTPFlow):
        try:
            with open("/home/me/mitmproxy/log.txt", "a+") as log:
                log.write("rrr")
        except:
            with open("/home/me/mitmproxy/log.txt", "a+") as log:
                log.write("sss")

    def load(self, entry: mitmproxy.addonmanager.Loader):
        with open("/home/me/mitmproxy/log.txt", "a+") as log:
            log.write("xxx")
You have created an add-on class, but you forgot to create an instance of the class and register it with mitmproxy.
To do so, add the following entry at the end of your script:
addons = [
    Events()
]
See also sample Events script for mitmproxy: https://docs.mitmproxy.org/stable/addons-events/

Spark Scala on a Windows machine

I am learning from a class. I have run the code as shown in the class and I get the errors below. Any idea what I should do?
I have Spark 1.6.1 and Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_74).
val datadir = "C:/Personal/V2Maestros/Courses/Big Data Analytics with Spark/Scala"

//............................................................................
//// Building and saving the model
//............................................................................
val tweetData = sc.textFile(datadir + "/movietweets.csv")
tweetData.collect()

def convertToRDD(inStr: String): (Double, String) = {
  val attList = inStr.split(",")
  val sentiment = attList(0).contains("positive") match {
    case true  => 0.0
    case false => 1.0
  }
  return (sentiment, attList(1))
}

val tweetText = tweetData.map(convertToRDD)
tweetText.collect()

//val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
var ttDF = sqlContext.createDataFrame(tweetText).toDF("label", "text")
ttDF.show()
The error is:
scala> ttDF.show()
[Stage 2:> (0 + 2) / 2]16/03/30 11:40:25 ERROR ExecutorClassLoader: Failed to check existence of class org.apache.spark.sql.catalyst.expressio[...] REPL class server at http://192.168.56.1:54595
java.net.ConnectException: Connection timed out: connect
    at java.net.TwoStacksPlainSocketImpl.socketConnect(Native Method)
I'm no expert, but the connection IP in the error message looks like a private node or even your router/modem local address.
As stated in the comment, it could be that you're running the context with a wrong configuration that tries to spread the work to a cluster that isn't there, instead of running it in your local JVM process.
For further information you can read here and experiment with something like:
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(master = "local[4]", appName = "tweetsClass", conf = new SparkConf)
Update
Since you're using the interactive shell and the SparkContext it provides, I guess you should pass the equivalent parameters to the shell command, as in:
<your-spark-path>/bin/spark-shell --master local[4]
This instructs the driver to use a master on the local machine, running on 4 threads.
I think the problem is connectivity, not the code itself.
Check whether you can actually connect to this address and port (54595).
Probably your Spark master is not accessible at the specified port. Use local[*] to validate with a smaller dataset and a local master. Then check whether the port is accessible, or change it based on the Spark port configuration (http://spark.apache.org/docs/latest/configuration.html).
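As a rough illustration (assuming the Spark 1.6 API and reusing the app name from the answer above), running entirely on a local master looks like this:

import org.apache.spark.{SparkConf, SparkContext}

// Build a context bound to a local master using all available cores,
// so no network connection to a remote or standalone master is attempted.
val conf = new SparkConf().setMaster("local[*]").setAppName("tweetsClass")
val sc = new SparkContext(conf)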