Unknown host when using localstack with Spring Cloud AWS 2.3

"ResourceLoader" with AWS S3 works fine with these properties:
cloud:
  aws:
    s3:
      endpoint: s3.amazonaws.com # <-- custom endpoint added in Spring Cloud AWS 2.3
    credentials:
      accessKey: XXXXXX
      secretKey: XXXXXX
    region:
      static: us-east-1
    stack:
      auto: false
However, when I bring up a localstack container locally and try to use it with these properties (as per this release doc: https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available):
cloud:
  aws:
    s3:
      endpoint: http://localhost:4566
    credentials:
      accessKey: test
      secretKey: test
    region:
      static: us-east-1
    stack:
      auto: false
I get this exception:
17:12:12.130 [reactor-http-nio-2] ERROR org.springframework.boot.autoconfigure.web.reactive.error.AbstractErrorWebExceptionHandler - [23efd000-1] 500 Server Error for HTTP GET "/getresource/test"
com.amazonaws.SdkClientException: Unable to execute HTTP request: mybucket.localhost
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
|_ checkpoint ⇢ org.springframework.boot.actuate.metrics.web.reactive.server.MetricsWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ HTTP GET "/getresource/test" [ExceptionHandlingWebHandler]
Stack trace:
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]
Caused by: java.net.UnknownHostException: mybucket.localhost
at java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) ~[?:?]
Otherwise, I can view my localstack bucket files fine in an S3 browser.
Here is the docker compose config for my localstack:
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - EDGE_PORT=4566
      - SERVICES=lambda,s3
    ports:
      - '4566-4583:4566-4583'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Here is how I am reading a text file:
public class ResourceTransferManager {

    @Autowired
    ResourceLoader resourceLoader;

    public void resourceLoadingMethod() throws IOException {
        Resource resource = resourceLoader.getResource("s3://mybucket/index.txt");
        InputStream inputStream = resource.getInputStream();
        System.out.println("File content: " + IOUtils.toString(inputStream, StandardCharsets.UTF_8));
    }
}

By default, the S3 client builds a virtual-hosted-style URL with the bucket name as a subdomain, and this is what causes the issue. There are a couple of ways to address it:
In the case of localstack, do not use the endpoint http://localhost:4566; use the standard-format endpoint http://s3.localhost.localstack.cloud:4566 instead. This hostname actually resolves through DNS to the localhost IP internally, so it works fine. (The only caveat is that it resolves using public DNS, so it either needs an internet connection or you will need hosts-file entries prefixed with the bucket name, for example 127.0.0.1 <yourExpectedBucketName>.s3.localhost.localstack.cloud.)
Alternatively, if you are using Docker, then instead of making hosts-file entries you can create a network alias for your localstack container, such as <yourExpectedBucketName>.s3.localhost.localstack.cloud.
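A minimal compose sketch of that alias (assuming a bucket named mybucket, and that your application also runs as a container attached to the same network; both names are placeholders):
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    networks:
      ls:
        aliases:
          - mybucket.s3.localhost.localstack.cloud # one alias per expected bucket
networks:
  ls: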
Another, better way is an extension of the first approach: instead of creating an alias for each of your buckets (which may not always be feasible), you can spin up a local DNS container and use a wildcard DNS config there. See the simplified sample at https://gist.github.com/paraspatidar/c29e4adb172a5afc92852a57e621323d (original reference: https://gist.github.com/NAR8789/92da076d0c35b434107fb4f4f198fd12).
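For background, the failing hostname mybucket.localhost comes from virtual-hosted-style addressing. As a sketch with the plain AWS SDK v1 (the SDK Spring Cloud AWS 2.3 builds on), a hand-built client can opt into path-style addressing instead; whether the auto-configured Spring Cloud AWS client exposes this toggle is not covered by this answer, so treat the class below as illustration only:
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class PathStyleS3Client {
    public static AmazonS3 build() {
        return AmazonS3ClientBuilder.standard()
                // point the client at the localstack edge port
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration("http://localhost:4566", "us-east-1"))
                // request http://localhost:4566/mybucket/... instead of http://mybucket.localhost:4566/...
                .withPathStyleAccessEnabled(true)
                .build();
    }
}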

Related

Docker compose to AWS ECS fails at the end

I'm publishing a project via docker compose to AWS ECS, but it fails on the last couple of steps. It's based on the new "docker compose" integration with an AWS context.
The error I receive is:
MicroservicedocumentGeneratorService TaskFailedToStart: ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post https://api.ecr....
The image is in an ECR private repository along with the others from the compose file.
I have authenticated with:
aws ecr get-login-password
The docker compose is:
microservice_documentGenerator:
  image: xxx.dkr.ecr.xxx.amazonaws.com/microservice_documentgenerator:latest
  networks:
    - publicnet
The original Dockerfile is:
FROM openjdk:11-jdk-slim
COPY /Microservice.DocumentGenerator/Microservice.DocumentGenerator.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
The output before the error was:
[+] Running 54/54
- projext DeleteComplete 355.3s
- PublicnetNetwork DeleteComplete 310.5s
- LogGroup DeleteComplete 306.1s
- MicroservicedocumentGeneratorTaskExecutionRole DeleteComplete 272.2s
- MicroservicedocumentGeneratorTaskDefinition Del... 251.2s
- MicroservicedocumentGeneratorServiceDiscoveryEntry DeleteComplete 220.1s
- MicroservicedocumentGeneratorService DeleteComp... 211.9s
Try authenticating with:
aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
Also, can you mention where you are making the call from, and whether the server has permission to make the call to ECR?

Docker Compose Config Server unreachable by Spring Cloud Data Flow Microservices

The config server is reachable from localhost:8888 but when I deploy my applications on SCDF the following error occurs:
Fetching config from server at : http://localhost:8888
2021-07-30 14:58:53.535 INFO 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Connect Timeout Exception on Url - http://localhost:8888. Will be trying the next url if available
2021-07-30 14:58:53.535 WARN 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Could not locate PropertySource ([ConfigServerConfigDataResource#3de88f64 uris = array<String>['http://localhost:8888'], optional = true, profiles = list['default']]): I/O error on GET request for "http://localhost:8888/backend-service/default": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
The application(s) deploy successfully on SCDF apart from the config server connection. The only property I specify in SCDF is the docker network. I'm using spring.config.import and am not using any bootstraps. This all works correctly when deployed locally but the microservices can't connect to the config server when deployed on SCDF.
Spring Boot Version: 2.5.1
app properties
spring.application.name=backend-service
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.config.import=optional:configserver:http://localhost:8888
config server properties
spring.cloud.config.server.git.uri=...
management.endpoints.web.exposure.include=*
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.cloud.bus.id=my-config-server
spring.cloud.stream.rabbit.bindings.springCloudBus.consumer.declareExchange=false
spring.rabbitmq.host=127.0.0.1
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.cloud.bus.enabled=true
spring.cloud.bus.refresh.enabled=true
spring.cloud.bus.env.enabled=true
server.port=8888
docker-compose.yml
version: '3.1'
services:
  h2:
    ...
  rabbitmq-container:
    image: rabbitmq:3.7.14-management
    hostname: dataflow-rabbitmq
    expose:
      - '5672'
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - scdfnet
  dataflow-server:
    ...
    networks:
      - scdfnet
  app-import:
    ...
    networks:
      - scdfnet
  skipper-server:
    ...
    networks:
      - scdfnet
  configserver-container:
    image: ...
    ports:
      - "8888:8888"
    expose:
      - '8888'
    environment:
      - spring_rabbitmq_host=rabbitmq-container
      - spring_rabbitmq_port=5672
      - spring_rabbitmq_username=guest
      - spring_rabbitmq_password=guest
    depends_on:
      - rabbitmq-container
    networks:
      - scdfnet
networks:
  scdfnet:
    external:
      name: scdfnet
volumes:
  h2-data:
For anyone else having this problem: I have found two ways of solving it. The problem is that once the Spring Boot application is containerized, the localhost referred to in the properties file resolves inside the application container's virtual network, not on your local machine.
There are numerous Stack Overflow answers for this same error, but they all center on corrections to bootstrap properties. However, bootstrap context initialization has been deprecated since Spring Boot 2.4.
The first solution is to use your IPv4 address instead of localhost.
spring.config.import=configserver:http://<insert IPv4 address>:8888
For example:
spring.config.import=configserver:http://10.6.39.148:8888
A much better solution than hard-coding addresses is to reference the config server container running in Docker Compose:
spring.config.import=optional:configserver:http://configserver-container:8888
Make sure that all of the Docker Compose services are running on the same network (scdfnet in my case), and note that this address will only work when running under Docker Compose. So if you are building the Maven project in Eclipse, you may need to remove or disable your tests to build successfully; that might be unnecessary, though, and it could just be that some property I failed to copy to my local application.properties file is causing the context tests to fail. According to the documentation, the optional label should allow the config client to run even if contact cannot be established with the config server.
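As an illustration only, here is a compose sketch of such a client (backend-service is a hypothetical containerized app, not part of the original setup; the key points are joining the same network and using the compose service name in the URL, with SPRING_CONFIG_IMPORT relying on Spring Boot's relaxed binding of environment variables):
backend-service:
  image: ...
  environment:
    - SPRING_CONFIG_IMPORT=optional:configserver:http://configserver-container:8888
  depends_on:
    - configserver-container
  networks:
    - scdfnet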

Unable to connect to Dapr (self-hosted) using gRPC client

I am working on an application that connects an app running on Dapr (self-hosted) with a gRPC client. I use the common.proto and runtime.proto. I can get the connection working when using the Dapr CLI.
But when using the self-hosted Dapr instance, I get the following error when using the gRPC port:
Grpc.Core.RpcException: 'Status(StatusCode="Internal", Detail="invoke API is not ready", DebugException="Grpc.Core.Internal.CoreErrorDetailException: {"created":"#1621435606.713000000","description":"Error received from peer ipv4:127.0.0.1:54000","file":"..\..\..\src\core\lib\surface\call.cc","file_line":1068,"grpc_message":"invoke API is not ready","grpc_status":13}")'
and this when I am using the Dapr HTTP port:
Grpc.Core.Internal.CoreErrorDetailException: {"created":"#1621435606.713000000","description":"Error received from peer ipv4:127.0.0.1:54000","file":"..\..\..\src\core\lib\surface\call.cc","file_line":1068,"grpc_message":"invoke API is not ready","grpc_status":13}")'
Below is the compose file for DAPR server
version: '3.4'
services:
  daprserver:
    image: ${DOCKER_REGISTRY-}daprserver
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:80
    networks:
      - daprserver-network
  daprserver-dapr:
    image: "daprio/dapr:latest"
    command: [ "./daprd",
               "-app-id", "daprserver",
               "-app-port", "80",
               "-dapr-http-port", "53001",
               "-dapr-grpc-port", "53000" ]
    ports:
      - 54000:53000 # grpc external:internal
      - 54001:53001 # http external:internal
    depends_on:
      - daprserver
    networks:
      - daprserver-network
networks:
  daprserver-network:
and the client code is below:
var channel = new Channel("127.0.0.1:54000", ChannelCredentials.Insecure);
var daprClient = new Dapr.Client.Autogen.Grpc.v1.Dapr.DaprClient(channel);

var request = new InvokeServiceRequest
{
    Id = "daprserver",
    Message = new InvokeRequest
    {
        Method = "weatherforecast",
        HttpExtension = new HTTPExtension
        {
            Verb = HTTPExtension.Types.Verb.Get,
        }
    }
};

var invokeResponse = daprClient.InvokeServiceAsync(request).GetAwaiter().GetResult();
var json = invokeResponse.Data.Value.ToStringUtf8();
Am I missing a setting or is there any issue with my configuration?
You need to run the app using the command below, from a command prompt in the project directory. It runs both Dapr and your application; you will see two processes in Task Manager, one for dapr.exe and a second for dotnet.exe.
dapr run --app-id yourappid dotnet run
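If you want the ports to line up with the client code above, a sketch with explicit ports (the port values are assumptions carried over from the compose file; the flags themselves are standard Dapr CLI options):
dapr run --app-id daprserver --app-port 80 --dapr-http-port 53001 --dapr-grpc-port 53000 -- dotnet run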

Spinnaker "Create Application" menu doesn't load

I'm quite new to Spinnaker and have to ask for some help, I guess. Does anyone know why I can't create any Application and just keep seeing this screen?
My installation is through Halyard 1.5.0 on Ubuntu 14.04.
We don't use any cloud provider, but I did configure the Docker and Kubernetes parts.
And here is the error I see in the /var/log/spinnaker/echo/echo.log:
2017-11-16 13:52:29.901 INFO 13877 --- [ofit-/pipelines] c.n.s.echo.services.Front50Service : java.net.SocketTimeoutException: timeout
at okio.Okio$3.newTimeoutException(Okio.java:207)
at okio.AsyncTimeout.exit(AsyncTimeout.java:261)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:215)
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:306)
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:300)
at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:196)
at com.squareup.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:186)
at com.squareup.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:127)
at com.squareup.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:739)
at com.squareup.okhttp.internal.http.HttpEngine.access$200(HttpEngine.java:87)
at com.squareup.okhttp.internal.http.HttpEngine$NetworkInterceptorChain.proceed(HttpEngine.java:724)
at com.squareup.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:578)
at com.squareup.okhttp.Call.getResponse(Call.java:287)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at retrofit.client.OkClient.execute(OkClient.java:53)
at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:326)
at retrofit.RestAdapter$RestHandler.access$100(RestAdapter.java:220)
at retrofit.RestAdapter$RestHandler$1.invoke(RestAdapter.java:265)
at retrofit.RxSupport$2.run(RxSupport.java:55)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at retrofit.Platform$Base$2$1.run(Platform.java:94)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.read(SocketInputStream.java:204)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at okio.Okio$2.read(Okio.java:139)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:211)
... 24 more
2017-11-16 13:52:29.901 INFO 13877 --- [ofit-/pipelines] c.n.s.echo.services.Front50Service : ---- END ERROR
@grizzthedj, thanks again for the recommendations. However, it doesn't seem to have solved the issue. I wonder if it has something to do with my Docker Registry or Kubernetes.
Here is what I have in my .hal/config:
dockerRegistry:
  enabled: true
  accounts:
    - name: <hidden-name>
      requiredGroupMembership: []
      address: https://docker-registry.<hidden-name>.net/
      cacheIntervalSeconds: 30
      repositories:
        - hellopod
        - demoapp
  primaryAccount: <hidden-name>
kubernetes:
  enabled: true
  accounts:
    - name: <username>
      requiredGroupMembership: []
      dockerRegistries:
        - accountName: <hidden-name>
          namespaces: []
      context: sre-os1-dev
      namespaces:
        - spinnaker
      omitNamespaces: []
      kubeconfigFile: /home/<username>/.kube/config
I suspect you may be using Redis as the persistent storage type (I ran into the same issue).
If this is the case, persistent storage using Redis doesn't seem to work properly out of the box, and it is not supported. I would try using an S3 target, if available.
More info here on support for Redis.
To configure S3 using Halyard, use the following commands:
echo <SECRET_ACCESS_KEY> | hal config storage s3 edit --access-key-id <ACCESS_KEY_ID> --endpoint <S3_ENDPOINT> --bucket <BUCKET_NAME> --root-folder spinnaker --secret-access-key
hal config storage edit --type s3
hal deploy apply
@grizzthedj,
Here is what I've found inside front50.log (I wiped out the IDs, of course, for security reasons).
You may be right.
2017-11-20 12:40:29.151 INFO 682 --- [0.0-8080-exec-1] com.amazonaws.latency : ServiceName=[Amazon S3], AWSErrorCode=[NoSuchKey], StatusCode=[404], ServiceEndpoint=[https://s3-us-west-2.amazonaws.com], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: ...; S3 Extended Request ID: ...), S3 Extended Request ID: ...], RequestType=[GetObjectRequest], AWSRequestID=[...], HttpClientPoolPendingCount=0, RetryCapacityConsumed=0, HttpClientPoolAvailableCount=1, RequestCount=1, Exception=1, HttpClientPoolLeasedCount=0, ClientExecuteTime=[39.634], HttpClientSendRequestTime=[0.072], HttpRequestTime=[39.213], RequestSigningTime=[0.067], CredentialsRequestTime=[0.001, 0.0], HttpClientReceiveResponseTime=[39.059],
I had a similar issue on Kubernetes/AWS. When I opened the Chrome dev console, I saw lots of 404 errors trying to connect to localhost:8084, and I had to reconfigure the deck and gate base URLs. This is what I did using Halyard:
hal config security ui edit --override-base-url http://<deck-loadbalancer-dns-entry>:9000
hal config security api edit --override-base-url http://<gate-loadbalancer-dns-entry>:8084
I did hal deploy apply, and when it came back I noticed the developer console was throwing CORS errors, so I had to do the following:
echo "host: 0.0.0.0" | tee \
  ~/.hal/default/service-settings/gate.yml \
  ~/.hal/default/service-settings/deck.yml
You may note the lack of TLS and CORS config; this is a test system, so make better choices in production :)

AWS-ECS - Communication between containers - Unknown host error

I have two Docker containers.
TestWeb (Expose: 80)
TestAPI (Expose: 80)
The TestWeb container calls the TestAPI container. The host can communicate with the TestWeb container on port 8080 and with TestAPI on port 8081.
I can get TestWeb to call TestAPI on my dev box (Windows 10), but when I deploy the code to AWS (ECS) I get an "unknown host" exception. Both containers work just fine and I can call them individually. But when I call a method that internally makes a REST call using HttpClient to a method in Container2, it gives the error:
An error occurred while sending the request. ---> System.Net.Http.CurlException: Couldn't resolve host name.
Code:
using (var client = new HttpClient())
{
    try
    {
        string url = "http://testapi/api/Tenant/?i=" + id;
        var response = client.GetAsync(url).Result;
        if (response.IsSuccessStatusCode)
        {
            var responseContent = response.Content;
            string responseString = responseContent.ReadAsStringAsync().Result;
            return responseString;
        }
        return response.StatusCode.ToString();
    }
    catch (HttpRequestException httpRequestException)
    {
        return httpRequestException.Message;
    }
}
The following are the things I have tried:
The two containers (TestWeb, TestAPI) are in the same Task Definition in AWS ECS. When I inspect the containers, I get the IP address of each. I can ping container2 from container1 by IP address, but I can't ping using container2's name; that gives an "unknown host" error.
It appears ECS doesn't use actual docker-compose under the hood; however, their implementation does support the Compose v2 "links" feature.
Here is a portion of my compose file that I just ran on ECS, which needed this same functionality AND had the same "could not resolve host" error you were getting. The "links" I added fixed my hostname resolution issue on Elastic Container Service!
version: '3'
services:
  appserver:
    links:
      - database:database
      - socks-proxy:socks-proxy
This allowed my appserver to communicate with the database and socks-proxy hostnames. The format is "SERVICE:ALIAS", and it is fine to keep both the same as a default practice.
In your example it would be:
version: '3'
services:
  testapi:
    links:
      - testweb:testweb
  testweb:
    links:
      - testapi:testapi
AWS does not use Docker Compose but provides an interface to add Task Definitions.
Containers that need to communicate together can be put in the same Task Definition. You can then also specify, in the Links section, the containers that will be called from the current container. Each container can be given its container name in the "Host" section of the Task Definition. Once I added the container name to the "Host" field, Container1 (TestWeb) was able to communicate with Container2 (TestAPI).
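For reference, a minimal sketch of the corresponding containerDefinitions fragment in the task definition JSON (container names taken from the question; a real task definition needs more fields, and "links" applies when the task uses the bridge network mode):
{
  "containerDefinitions": [
    {
      "name": "testweb",
      "links": [ "testapi" ]
    },
    {
      "name": "testapi"
    }
  ]
}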