I have a Java service built with Spring Boot + spring-cloud-aws-messaging that uploads files to S3.
It fails when it tries to upload a file to the S3 bucket (only when I run it in docker-compose).
Here is my code:
pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-aws-messaging</artifactId>
    <version>2.2.6.RELEASE</version>
</dependency>
S3 client
public AmazonS3 amazonS3Client() {
    return AmazonS3ClientBuilder
            .standard()
            .withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration(appConfig.getAwsS3Endpoint(), appConfig.getRegion()))
            .withCredentials(credentialsProvider)
            .build();
}
docker-compose.yml
version: '3.7'
services:
  dumb-service:
    build: ../dumb-service/.
    image: dumb-service:latest
    hostname: dumb-service
    ports:
      - '8080:8080'
      - '5006:5006'
    environment:
      JAVA_OPTS: '-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5006'
      AWS_S3_ENDPOINT: 'http://localstack:4566/'
      AWS_S3_BUCKET: 'dumb-bucket'
      AWS_SQS_ENDPOINT: 'http://localstack:4566'
      PDF_REQUEST_QUEUE_URL: 'http://localstack:4566/000000000000/dumb-inbound.fifo'
      PDF_REQUEST_QUEUE_NAME: 'dumb-inbound.fifo'
      AWS_REGION: "${AWS_REGION}"
      AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}"
      AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}"
    depends_on:
      - "localstack"
  # localstack home: https://github.com/localstack/localstack
  localstack:
    hostname: localstack
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    image: localstack/localstack
    ports:
      - '4566:4566'
      - '4571:4571'
    environment:
      - SERVICES=${SERVICES-}
      - DEBUG=${DEBUG-}
      - DATA_DIR=${DATA_DIR-}
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-}
      - HOST_TMP_FOLDER=${TMPDIR:-/tmp/}localstack
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Error is:
Unable to execute HTTP request: dumb-bucket.localstack
(This code works fine against real AWS services, and also when running the code natively + LocalStack.)
Any thoughts on how to connect to LocalStack S3?
If you are trying to access an S3 bucket in LocalStack via the AWS API, you need the path-style-access flag turned on (withPathStyleAccessEnabled(true)). Without it, the SDK uses virtual-hosted-style addressing and prepends the bucket name to the endpoint host, producing a hostname like dumb-bucket.localstack that is not resolvable inside the compose network; with path-style access the bucket stays in the URL path.
For example:
// createAwsBeanWithEndpointConfig is a local helper that builds an
// EndpointConfiguration from the URL/region and hands it to the factory
// lambda; the essential part is withPathStyleAccessEnabled(true).
AmazonS3 createLocalStackS3WithEndpointConfig(String endpointUrl, String awsRegion) {
    return createAwsBeanWithEndpointConfig(
        endpointUrl, awsRegion,
        epConf -> AmazonS3ClientBuilder.standard()
            .withEndpointConfiguration(epConf)
            .withPathStyleAccessEnabled(true)
            .build(),
        AmazonS3ClientBuilder::defaultClient
    );
}
Reference: the Troubleshooting section of https://pythonrepo.com/repo/localstack-localstack
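Applied to the amazonS3Client() bean from the question, the fix is a one-line addition (a sketch, assuming the same appConfig and credentialsProvider fields as in the question):

public AmazonS3 amazonS3Client() {
    return AmazonS3ClientBuilder
            .standard()
            .withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration(appConfig.getAwsS3Endpoint(), appConfig.getRegion()))
            // Path-style keeps the bucket in the path:
            // http://localstack:4566/dumb-bucket/... instead of http://dumb-bucket.localstack:4566/...
            .withPathStyleAccessEnabled(true)
            .withCredentials(credentialsProvider)
            .build();
}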
Related
I have an ASP.NET Core application with a docker-compose.yml configuration and a Docker section in launchSettings.json.
The connection string is configured in docker-compose.yml as shown below:
PersistenceAccess__ConnectionString= Server=db;Port=5432;Database=DemoDatabase;User Id=postgres;Password=postgres;
version: '3.4'
services:
  Demo.api:
    image: ${DOCKER_REGISTRY-}demoapi
    build:
      context: .
      dockerfile: Sources/Code/Demo.Api/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - PersistenceAccess__ConnectionString= Server=db;Port=5432;Database=DemoDatabase;User Id=postgres;Password=postgres;
    ports:
      - '8081:80'
    depends_on:
      - db
  db:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    logging:
      options:
        max-size: 10m
        max-file: "3"
    ports:
      - '5438:5432'
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      # copy the sql script to create tables
      - ./sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
      # copy the sql script to fill tables
      - ./sql/fill_tables.sql:/docker-entrypoint-initdb.d/fill_tables.sql
The same connection string is configured in launchSettings.json as shown below:
"PersistenceAccess__ConnectionString": "Server=host.docker.internal;Port=5432;Database=DemoDatabaseNew;User Id=postgres;Password=postgres;"
{
  "iisSettings": {
    ..
  },
  "profiles": {
    "IIS Express": {
      ...
    },
    "Docker": {
      "commandName": "Docker",
      "launchBrowser": true,
      "launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}",
      "environmentVariables": {
        "PersistenceAccess__ConnectionString": "Server=host.docker.internal;Port=5432;Database=DemoDatabaseNew;User Id=postgres;Password=postgres;"
      },
      "DockerfileRunArguments": "--add-host host.docker.internal:host-gateway",
      "publishAllPorts": true,
      "useSSL": false
    }
  }
}
Which configuration will be used when I run the application using
docker-compose -f docker-compose.yml up
Does the above command create a database? If so, when will it create the database and what will the database be named? Also, when will it create the tables and seed the data?
Please suggest.
I have the following Flink program:
object StreamToHive {
  def main(args: Array[String]) {
    val builder = KafkaSource.builder[MyEvent]
    builder.setBootstrapServers("localhost:29092")
    builder.setProperty("partition.discovery.interval.ms", "10000")
    builder.setTopics("myevent")
    builder.setBounded(OffsetsInitializer.latest)
    builder.setStartingOffsets(OffsetsInitializer.earliest)
    builder.setDeserializer(KafkaRecordDeserializationSchema.of(new MyEventSchema))
    val source = builder.build()

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val streamSource = env
      .fromSource[MyEvent](source, WatermarkStrategy.noWatermarks[MyEvent](), "Kafka Source")

    val sink: StreamingFileSink[MyEvent] = StreamingFileSink
      .forBulkFormat(new Path("hdfs://localhost:50070/mydata"),
        AvroParquetWriters.forReflectRecord[MyEvent](classOf[MyEvent])
      )
      .build()

    streamSource.addSink(sink)
    env.execute()
  }
}
But executing this fails with apache flink java.lang.IllegalArgumentException: Invalid lambda deserialization. I assume I have something completely wrong, but what? What must I do to be able to write POJO instances to an HDFS instance?
Reading from Kafka works fine.
With the class MyEvent defined like this:
class MyEvent() extends Serializable {
  @JsonProperty("id")
  var id: String = null
  @JsonProperty("timestamp")
  var timestamp: Date = null
}
The namenode is running with the following docker-compose services:
namenode:
  image: bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8
  container_name: namenode
  volumes:
    - ./hdfs/namenode:/hadoop/dfs/name
  environment:
    - CLUSTER_NAME=hive
  env_file:
    - ./hive/hadoop-hive.env
  ports:
    - "50070:50070"
  networks:
    - shared
datanode:
  image: bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8
  container_name: datanode
  volumes:
    - ./hdfs/datanode:/hadoop/dfs/data
  env_file:
    - ./hive/hadoop-hive.env
  environment:
    SERVICE_PRECONDITION: "namenode:50070"
  depends_on:
    - namenode
  ports:
    - "50075:50075"
  networks:
    - shared
hive-server:
  image: bde2020/hive:2.3.2-postgresql-metastore
  container_name: hive-server
  env_file:
    - ./hive/hadoop-hive.env
  environment:
    HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://hive-metastore/metastore"
    SERVICE_PRECONDITION: "hive-metastore:9083"
  depends_on:
    - hive-metastore
  ports:
    - "10000:10000"
  networks:
    - shared
hive-metastore:
  image: bde2020/hive:2.3.2-postgresql-metastore
  container_name: hive-metastore
  env_file:
    - ./hive/hadoop-hive.env
  command: /opt/hive/bin/hive --service metastore
  environment:
    SERVICE_PRECONDITION: "namenode:50070 datanode:50075 hive-metastore-postgresql:5432"
  depends_on:
    - hive-metastore-postgresql
  ports:
    - "9083:9083"
  networks:
    - shared
hive-metastore-postgresql:
  image: bde2020/hive-metastore-postgresql:2.3.0
  container_name: hive-metastore-postgresql
  volumes:
    - ./metastore-postgresql/postgresql/data:/var/lib/postgresql/data
  depends_on:
    - datanode
  networks:
    - shared
I am using Redis with NestJS, and I see the following error. I have gone through several articles on this, and it looks like I am following the same steps, but I still get the error.
Steps:
I ran the docker compose up command.
I made sure that the host in redis.module.ts is the same as the service name in docker-compose.yml, which is redis.
What am I missing here?
Error:
Error: getaddrinfo ENOTFOUND redis
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)
Code:
redis.module.ts
import { CacheModule, Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { RedisService } from './redis.service';
import * as redisStore from 'cache-manager-redis-store';
import { envVariables } from '../env.variables';

@Module({
  imports: [
    CacheModule.registerAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: async (configService: ConfigService) => ({
        store: redisStore,
        host: process.env.REDIS_HOST,
        port: configService.get('REDIS_PORT'),
        ttl: configService.get('CACHE_TTL'),
        max: configService.get('MAX_ITEM_IN_CACHE'),
      }),
    }),
  ],
  providers: [RedisService],
  exports: [RedisService],
})
export class RedisModule {}
.env
#REDIS
REDIS_HOST=redis
docker-compose.yml
version: "3.8"
services:
partnersusers:
image: partnersusers
build:
context: .
dockerfile: ./Dockerfile
environment:
- RUN_ENV=dev
- NODE_ENV=development
ports:
- "4000:4000"
networks:
- default
redis:
image: 'redis:alpine'
ports:
- "6379:4000"
networks:
default:
driver: bridge
I'm not an expert, but I noticed a couple of things in your docker-compose.yml file.
First, your redis service is missing the network assignment:
networks:
  - default
Without this, the other containers won't be able to find it, since it's not on the same network.
Second, redis runs on port 6379 by default; if you want it to run on port 4000, I believe you will need to set an env var for it.
Or maybe you just reversed the order of the port mapping, which should have been 4000:6379 (host_port:container_port).
This is my working docker-compose.yml for reference:
---
version: '3.8'
services:
  ...
  redis:
    image: redis
    container_name: redis
    hostname: redis
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    networks:
      - my-net
  redis-commander:
    depends_on:
      - redis
    container_name: redis-commander
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
      - REDIS_HOSTS=local:redis:6379 # note: this has to be the port the redis container exposes.
    ports:
      - "8081:8081"
    networks:
      - my-net
networks:
  my-net:
Hope this helps :)
I'm starting a Spring Boot app and DynamoDB Local in Docker containers via docker-compose.
Both containers come up successfully.
When I use the container name in the AMAZON_AWS_DYNAMODB_ENDPOINT value, I get the following error:
[https-jsse-nio-8443-exec-6] [2019-04-15 08:03:42,239] INFO com.amazonaws.protocol.json.JsonContent [] - Unable to parse HTTP response content
com.fasterxml.jackson.core.JsonParseException: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: (byte[])"<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved here.</p>
</body></html>
Further down I get the following error:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: null (Service: AmazonDynamoDBv2; Status Code: 301; Error Code: null; Request ID: null)
If I replace the AMAZON_AWS_DYNAMODB_ENDPOINT value with the IP address of my Windows computer (which runs the containers), it works.
Any suggestions on how to get the container name working?
Here's my docker-compose:
version: '3'
services:
  dynamodb:
    image: amazon/dynamodb-local
    ports:
      - "8000:8000"
    volumes:
      - dynamodata:/data
    command: "-jar DynamoDBLocal.jar -sharedDb -dbPath ."
  app:
    build: .
    ports:
      - "8443:8443"
    environment:
      - SERVER_PORT=8443
      - SERVER_SSL_KEY_STORE=/etc/ssl/key
      - SERVER_SSL_KEY_STORE_TYPE=PKCS12
      - SERVER_SSL_KEY_ALIAS=tomcat
      - SERVER_SSL_KEY_STORE_PASSWORD=xxxxxx
      - SPRING_PROFILES_ACTIVE=aws,local
      - DATAPOWER_ENABLED=true
      # - AMAZON_AWS_DYNAMODB_ENDPOINT=${DYNAMODB_ENDPOINT:-http://dynamodb:8000} <--- does not work
      # - AMAZON_AWS_DYNAMODB_ENDPOINT=${DYNAMODB_ENDPOINT:-http://xx.xxx.xxx.xxx:8000} <--- works
      - AMAZON_AWS_DYNAMODB_REGION=${DYNAMODB_REGION:-us-east-1}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID:-local}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY:-xxxxxxxxxx}
      - ENV=dev
      - AWS_REGION=us-east-1
volumes:
  dynamodata:
Thanks
Try adding networks, something like this:
version: '3'
services:
  dynamodb:
    image: amazon/dynamodb-local
    ports:
      - "8000:8000"
    volumes:
      - dynamodata:/data
    command: "-jar DynamoDBLocal.jar -sharedDb -dbPath ."
    networks:
      - my-network
  app:
    build: .
    ports:
      - "8443:8443"
    environment:
      - SERVER_PORT=8443
      - SERVER_SSL_KEY_STORE=/etc/ssl/key
      - SERVER_SSL_KEY_STORE_TYPE=PKCS12
      - SERVER_SSL_KEY_ALIAS=tomcat
      - SERVER_SSL_KEY_STORE_PASSWORD=xxxxxx
      - SPRING_PROFILES_ACTIVE=aws,local
      - DATAPOWER_ENABLED=true
      # - AMAZON_AWS_DYNAMODB_ENDPOINT=${DYNAMODB_ENDPOINT:-http://dynamodb:8000} <--- does not work
      # - AMAZON_AWS_DYNAMODB_ENDPOINT=${DYNAMODB_ENDPOINT:-http://xx.xxx.xxx.xxx:8000} <--- works
      - AMAZON_AWS_DYNAMODB_REGION=${DYNAMODB_REGION:-us-east-1}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID:-local}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY:-xxxxxxxxxx}
      - ENV=dev
      - AWS_REGION=us-east-1
    networks:
      - my-network
volumes:
  dynamodata:
networks:
  my-network:
    driver: bridge
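Once both services are on the same network, the service name resolves inside the compose network, so http://dynamodb:8000 should work. For reference, a minimal sketch of how that endpoint value would typically be consumed on the Spring side, assuming AWS SDK v1 and that the app reads the AMAZON_AWS_DYNAMODB_* variables directly (the question's app may wire these through Spring properties instead):
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class DynamoClientFactory {
    public static AmazonDynamoDB create() {
        // e.g. AMAZON_AWS_DYNAMODB_ENDPOINT=http://dynamodb:8000 (the compose service name)
        String endpoint = System.getenv("AMAZON_AWS_DYNAMODB_ENDPOINT");
        String region = System.getenv("AMAZON_AWS_DYNAMODB_REGION");
        return AmazonDynamoDBClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, region))
                .build();
    }
}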
I'm trying to use Keyrock to offer single sign-on on different platforms. Specifically, I want to offer that service in Grafana. I've applied the configuration changes Grafana requires, and my docker-compose looks like this:
version: "3.1"
services:
grafana:
image: grafana/grafana:5.1.0
ports:
- 3000:3000
networks:
default:
ipv4_address: 172.18.1.4
environment:
- GF_AUTH_GENERIC_OAUTH_CLIENT_ID=90be8de5-69dc-4b9a-9cc3-962cca534410
- GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=9e98964b-5043-4086-9657-51f1d8c11fe0
- GF_AUTH_GENERIC_OAUTH_ENABLED=true
- GF_AUTH_GENERIC_OAUTH_AUTH_URL=http://172.18.1.5:3005/oauth2/authorize
- GF_AUTH_GENERIC_OAUTH_TOKEN_URL=http://172.18.1.5:3005/oauth2/token
- GF_AUTH_GENERIC_OAUTH_API_URL=http://172.18.1.5:3005/v1/users
- GF_AUTH_GENERIC_OAUTH_ALLOW_SIGN_UP = true
- GF_Server_DOMAIN=172.18.1.4
- GF_Server_ROOT_URL=http://172.18.1.4:3000
keyrock:
image: fiware/idm:7.5.1
container_name: fiware-keyrock
hostname: keyrock
networks:
default:
ipv4_address: 172.18.1.5
depends_on:
- mysql-db
ports:
- "3005:3005"
- "3443:3443"
environment:
- DEBUG=idm:*
- DATABASE_HOST=mysql-db
- IDM_DB_PASS_FILE=/run/secrets/my_secret_data
- IDM_DB_USER=root
- IDM_HOST=http://localhost:3005
- IDM_PORT=3005
- IDM_HTTPS_ENABLED=false
- IDM_HTTPS_PORT=3443
- IDM_ADMIN_USER=admin
- IDM_ADMIN_EMAIL=admin#test.com
- IDM_ADMIN_PASS=test
secrets:
- my_secret_data
healthcheck:
test: curl --fail -s http://localhost:3005/version || exit 1
mysql-db:
restart: always
image: mysql:5.7
hostname: mysql-db
container_name: db-mysql
expose:
- "3306"
ports:
- "3306:3306"
networks:
default:
ipv4_address: 172.18.1.6
environment:
- "MYSQL_ROOT_PASSWORD_FILE=/run/secrets/my_secret_data"
- "MYSQL_ROOT_HOST=172.18.1.5"
volumes:
- mysql-db-sso:/var/lib/mysql
- ./mysql-data:/docker-entrypoint-initdb.d/:ro
secrets:
- my_secret_data
networks:
default:
ipam:
config:
- subnet: 172.18.1.0/24
volumes:
mysql-db-sso:
secrets:
my_secret_data:
file: ./secrets.txt
I have the Grafana application registered in Keyrock, with http://172.18.1.4:3000/login as its callback. When I try to sign in to Grafana through OAuth, it redirects me to the Keyrock sign-in page, but after I enter the credentials it returns an invalid client_id error, even though the client_id is the same one Keyrock returns when I fetch the application information.
Am I missing some configuration, or should this be done another way?
Here is the working configuration for Keyrock 7.5.1 and Grafana 6.0.0
Grafana:
[auth.generic_oauth]
enabled = true
allow_sign_up = true
client_id = ${CLIENT_ID}
client_secret = ${CLIENT_SECRET}
scopes = permanent
auth_url = ${KEYROCK_URL}/oauth2/authorize
token_url = ${KEYROCK_URL}/oauth2/token
api_url = ${KEYROCK_URL}/user
App in Keyrock:
url - ${GRAFANA_ROOT_URL}
callback_url - ${GRAFANA_ROOT_URL}/login/generic_oauth
Token types - Permanent
So you need to fix the env variable
GF_AUTH_GENERIC_OAUTH_API_URL
to
http://172.18.1.5:3005/user
change the callback URL from
http://172.18.1.4:3000/login
to
http://172.18.1.4:3000/login/generic_oauth
and add the OAuth2 scopes (scopes = permanent, i.e. GF_AUTH_GENERIC_OAUTH_SCOPES=permanent as an env variable).