Unable to run .net core app Azure Durable Functions v3 in docker - docker-compose

I am trying to implement a docker-compose.yml file to build a container for a .NET Core Azure Durable Functions v3 app. The following snippet is from the environment file (.env):
AzureWebJobsStorage=MyConnectionString
AzureWebJobsDashboard=MyConnectionString
AzureWebJobsStorageQueue=MyAnotherConnectionString
This is what part of the docker-compose file looks like:
local.mydurablefunction:
  image: ${DOCKER_REGISTRY-}myfunction
  build:
    context: .
    dockerfile: src/MyFunction/Dockerfile
  ports:
    - 34080:34080
  environment:
    - AzureWebJobsStorageQueue
    - AzureWebJobsDashboard
    - AzureWebJobsStorageQueue
When running docker-compose up I get the following error message:
fail: Host.Startup[515] A host error has occurred during startup operation 'd8e39085-bed2-4f30-b80b-37d2fe1b286d'.
System.InvalidOperationException: Unable to find an Azure Storage connection string to use for this binding.
   at Microsoft.Azure.WebJobs.Extensions.DurableTask.AzureStorageDurabilityProviderFactory.GetAzureStorageOrchestrationServiceSettings(String connectionName, String taskHub
This is what the function looks like:
[FunctionName("MyTrigger")]
public async Task RunAsync(
[QueueTrigger("queuename", Connection = "")] string metadataPayload,
[DurableClient] IDurableOrchestrationClient starter,
ILogger log,
CancellationToken cancellationToken)
{
}
Somewhere in the function's body, we call a durable task, which looks like the following snippet:
[FunctionName("Orchestrator")]
public async Task RunOrchestratorAsync(
[OrchestrationTrigger] IDurableOrchestrationContext context,
[DurableClient] IDurableOrchestrationClient orchestrationClient,
ILogger log)
{
}
And this is the service dependency definition:
{
  "dependencies": {
    "storage1": {
      "type": "storage",
      "connectionId": "AzureWebJobsStorageQueue"
    }
  }
}
What is the solution to this problem, or what may be missing in this configuration? Could it be that the environment variables are not being copied into the container?
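One detail worth flagging in the snippets above: the environment list forwards AzureWebJobsStorageQueue twice and never forwards AzureWebJobsStorage, which is the connection the Durable Task extension falls back to when a binding's Connection is left empty, as in the QueueTrigger above. A sketch of an environment block that forwards all three variables defined in .env (whether this alone resolves the error is an assumption based on the error message):

environment:
  - AzureWebJobsStorage
  - AzureWebJobsDashboard
  - AzureWebJobsStorageQueue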

Related

Sending parameter by docker-compose does not work on appsettings in .NET6

I have a docker-compose file with environment variables for the connection string:
services:
  nameservice:
    ...
    environment:
      ConnectionStrings__mysqlDatabase: "Server=db;Uid=root;Pwd=password;"
On the .NET side, in a console program, I have an appsettings file (.NET 6):
{
  "ConnectionStrings": {
    "mysqlDatabase": "Server=localhost;Uid=root;Pwd=password;"
  }
}
and my Program.cs:
IConfiguration Config = new ConfigurationBuilder().AddJsonFile("appsettings.json").Build();
ConnectionStringConstant.mysqlDatabase = Config.GetConnectionString("mysqlDatabase");
My problem is that in Docker, after creating a container, the app keeps the value from appsettings.json (which is not good for production) and does not pick up the value passed by the docker-compose file.
Can you help me?
When building configuration manually (without using hosting), support for environment variables must be set up explicitly:
var configurationRoot = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .AddEnvironmentVariables()
    .Build();
Read more:
Configuration in .NET
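As a short sketch of why this works (names taken from the question): configuration providers registered later override earlier ones, and environment variables use the double underscore __ as the section separator, so the value from docker-compose wins over appsettings.json:

// ConnectionStrings__mysqlDatabase maps to ConnectionStrings:mysqlDatabase.
// Because AddEnvironmentVariables() is added after AddJsonFile(), the
// variable set in the container overrides the value in appsettings.json.
IConfiguration config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .AddEnvironmentVariables()
    .Build();
ConnectionStringConstant.mysqlDatabase = config.GetConnectionString("mysqlDatabase");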

Unknown host when using localstack with Spring Cloud AWS 2.3

"ResourceLoader" with AWS S3 works fine with these properties:
cloud:
  aws:
    s3:
      endpoint: s3.amazonaws.com  # <-- custom endpoint added in spring cloud aws 2.3
    credentials:
      accessKey: XXXXXX
      secretKey: XXXXXX
    region:
      static: us-east-1
    stack:
      auto: false
However, when I bring up a localstack container locally and try to use it with these properties (as per this release doc: https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available):
cloud:
  aws:
    s3:
      endpoint: http://localhost:4566
    credentials:
      accessKey: test
      secretKey: test
    region:
      static: us-east-1
    stack:
      auto: false
I get this exception:
17:12:12.130 [reactor-http-nio-2] ERROR org.springframework.boot.autoconfigure.web.reactive.error.AbstractErrorWebExceptionHandler - [23efd000-1] 500 Server Error for HTTP GET "/getresource/test"
com.amazonaws.SdkClientException: Unable to execute HTTP request: mybucket.localhost
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
|_ checkpoint ⇢ org.springframework.boot.actuate.metrics.web.reactive.server.MetricsWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ HTTP GET "/getresource/test" [ExceptionHandlingWebHandler]
Stack trace:
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]
Caused by: java.net.UnknownHostException: mybucket.localhost
at java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) ~[?:?]
I can view my localstack bucket files otherwise fine in an S3 browser.
Here is the docker compose config for my localstack:
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - EDGE_PORT=4566
      - SERVICES=lambda,s3
    ports:
      - '4566-4583:4566-4583'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Here is how I am reading a text file:
public class ResourceTransferManager {

    @Autowired
    ResourceLoader resourceLoader;

    public void resourceLoadingMethod() throws IOException {
        Resource resource = resourceLoader.getResource("s3://mybucket/index.txt");
        InputStream inputStream = resource.getInputStream();
        System.out.println("File content: " + IOUtils.toString(inputStream, StandardCharsets.UTF_8));
    }
}
By default the S3 client creates a URL with the bucket name as a subdomain, and this causes the issue. There are a couple of ways to address it:
1. In the case of localstack, do not use the endpoint http://localhost:4566; use the standard-format endpoint http://s3.localhost.localstack.cloud:4566 instead. This actually goes out to DNS and resolves to the localhost IP internally, so it works fine. (The only caveat: it resolves via public DNS, so it either needs an internet connection or you will need to add hosts-file entries prefixed with the bucket name, for example 127.0.0.1 <yourexpectedbucketName>.s3.localhost.localstack.cloud.)
2. Or, if you are using Docker, instead of making hosts-file entries you can create a network alias for your localstack container, such as <yourexpectedbucketName>.s3.localhost.localstack.cloud (see the sketch after this list).
3. A better variant of the first approach: instead of creating aliases for each of your buckets (which may not always be feasible), you can spin up a local DNS container and use a wildcard DNS config there. See the simplified sample at https://gist.github.com/paraspatidar/c29e4adb172a5afc92852a57e621323d (original reference: https://gist.github.com/NAR8789/92da076d0c35b434107fb4f4f198fd12).
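As a sketch of the alias approach from item 2 (the bucket name mybucket and the default network are assumptions), the localstack service could declare a network alias so the virtual-host-style URL resolves inside the compose network:

services:
  localstack:
    image: localstack/localstack:latest
    networks:
      default:
        aliases:
          # one alias per bucket you need to reach virtual-host style
          - mybucket.s3.localhost.localstack.cloud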

Deployed IBM cloud function (nodejs) using manifest yaml with dependencies execution fails

I've deployed a Node.js-based IBM Cloud Function using a manifest file. I'll have a few other functions that may share some common code. Here is the folder structure:
manifest.yml
actions/
- myFunction1/
-- index.js
-- package.json
- myFunction2/
-- index.js
-- package.json
- common/
-- utils.js
Here is my manifest.yml:
packages:
  myfunctions:
    version: 1.0
    license: Apache-2.0
    actions:
      myFunction1:
        function: actions/myFunction1
        runtime: nodejs:10
        include:
          - ["actions/common/*.js", "./common/"]
      myFunction2:
        function: actions/myFunction2/index.js
        runtime: nodejs:10
I deployed the functions using the following command from cmd:
ibmcloud fn deploy --manifest manifest.yml
The deployment went through successfully and both functions were created. The second function (myFunction2) executes properly, but the first function throws an error when I try to execute it. Here is the error message:
{
  "error": "Initialization has failed due to: There was an error uncompressing the action archive."
}
I even tried including the dependencies in the manifest and the code, but it throws the same error. I was following this article:
https://medium.com/openwhisk/whisk-deploy-zip-actions-with-include-exclude-30ba6d96ad8b
Still struggling, appreciate any help.
Thanks
Musa

Serverless: Service files not changed. Skipping deployment

After some successful projects, I deleted the functions in AWS Lambda, the logs in CloudWatch, and the IAM roles.
I also deleted the my-service folder from my Documents.
Then I followed the steps in this tutorial on Serverless.
Now when I run:
serverless deploy --aws-profile testUser_atWork
where testUser_atWork is one of my AWS profiles.
I get the following output:
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Service files not changed. Skipping deployment...
Service Information
service: my-service
stage: dev
region: us-east-1
stack: my-service-dev
api keys:
  None
endpoints:
  None
functions:
  hello: my-service-dev-hello
# serverless.yml
service: my-service
provider:
  name: aws
  runtime: nodejs6.10
functions:
  hello:
    handler: handler.hello
And this is my handler.js:
'use strict';

module.exports.hello = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  callback(null, response);

  // Use this code if you don't use the http event with the LAMBDA-PROXY integration
  // callback(null, { message: 'Go Serverless v1.0! Your function executed successfully!', event });
};
I don't understand why it is skipping deployment.
Have you tried serverless deploy --aws-profile testUser_atWork --force to force it to update the stack?
Otherwise, try deleting the stack in CloudFormation, or removing it with the serverless remove command.
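If you go the removal route, the sequence would look something like this (a sketch reusing the same profile flag from the question):

serverless remove --aws-profile testUser_atWork
serverless deploy --aws-profile testUser_atWork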

AWS-ECS - Communication between containers - Unknown host error

I have two Docker containers.
TestWeb (Expose: 80)
TestAPI (Expose: 80)
The TestWeb container calls the TestApi container. The host can communicate with TestWeb on port 8080 and with TestApi on port 8081.
I can get TestWeb to call TestApi on my dev box (Windows 10), but when I deploy the code to AWS (ECS) I get an "unknown host" exception. Both containers work just fine and I can call them individually. But when I call a method that internally makes a REST call using HttpClient to a method in Container2, it gives the error:
An error occurred while sending the request. ---> System.Net.Http.CurlException: Couldn't resolve host name.
Code:
using (var client = new HttpClient())
{
    try
    {
        string url = "http://testapi/api/Tenant/?i=" + id;
        var response = client.GetAsync(url).Result;
        if (response.IsSuccessStatusCode)
        {
            var responseContent = response.Content;
            string responseString = responseContent.ReadAsStringAsync().Result;
            return responseString;
        }
        return response.StatusCode.ToString();
    }
    catch (HttpRequestException httpRequestException)
    {
        return httpRequestException.Message;
    }
}
The following are the things I have tried:
The two containers (TestWeb, TestAPI) are in the same task definition in AWS ECS. When I inspect the containers, I get the IP address of each. I can ping container2 from container1 using its IP address, but I can't ping using container2's name; it gives me an "unknown host" error.
It appears ECS doesn't use legit docker-compose under the hood; however, their implementation does support the Compose V2 "links" feature.
Here is a portion of a compose file I just ran on ECS that needed this same functionality and hit the same "could not resolve host" error you were getting. The "links" I added fixed my hostname resolution issue on Elastic Container Service!
version: '3'
services:
  appserver:
    links:
      - database:database
      - socks-proxy:socks-proxy
This allowed my appserver to communicate TO the database and socks-proxy hostnames. The format is "SERVICE:ALIAS" and it is fine to keep them both the same as a default practice.
In your example it would be:
version: '3'
services:
  testapi:
    links:
      - testweb:testweb
  testweb:
    links:
      - testapi:testapi
AWS does not use Docker Compose but provides an interface to add task definitions.
Containers that need to communicate with each other can be put in the same task definition. We can then specify, in the Links section, the containers that will be called from the current container. Each container can be given its container name in the "Host" section of the task definition. Once I added the container name to the "Host" field, Container1 (TestWeb) was able to communicate with Container2 (TestAPI).
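For reference, a hedged sketch of what that looks like in the task definition JSON (image names and memory values are placeholders; links require the bridge network mode):

{
  "family": "test-task",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "testapi",
      "image": "myrepo/testapi:latest",
      "memory": 512
    },
    {
      "name": "testweb",
      "image": "myrepo/testweb:latest",
      "memory": 512,
      "links": ["testapi"]
    }
  ]
}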