Alpakka Scala S3 connector hangs when trying to put data - scala

I'm trying to perform an AWS S3 put into a bucket with a simple string. I couldn't do this with Alpakka (Scala), but the same request works with the AWS Java SDK.
With Alpakka, my thread just hangs without processing anything; Future.onComplete never triggers.
I've tried specifying the Alpakka conf file like this ('*' masks sensitive data):
alpakka.s3 {
  aws {
    credentials {
      provider = static
      access-key-id = "********"
      secret-access-key = "********"
    }
    region {
      provider = static
      default-region = "*****"
    }
  }
}
I do have a correct ~/.aws/credentials file on my machine; I can connect with both the AWS SDK and the AWS CLI.
As I understand it, ideally I shouldn't need to specify any alpakka.s3 credentials at all, just as with the AWS Java SDK.
I've already checked this article: https://discuss.lightbend.com/t/alpakka-s3-connection-issue/6551/2 ; nothing worked.
My example is straightforward Scala code from the docs:
val file: Source[ByteString, NotUsed] =
  Source.single(ByteString(body))
val s3Sink: Sink[ByteString, Future[MultipartUploadResult]] =
  S3.multipartUpload(bucket, bucketKey)
val result: Future[MultipartUploadResult] =
  file.runWith(s3Sink)
but actually I also need my source to be an InputStream:
val source: Source[ByteString, Future[IOResult]] = StreamConverters.fromInputStream(() => is, 4096)
PS: I don't actually get why I need to specify a host like this:
endpoint-url = "http://localhost:9000"

If you leave alpakka.s3.aws empty, it will use the default AWS configuration methods, as the CLI does (e.g. you can use the AWS_REGION environment variable to set the region, and the standard AWS credentials file). You can also leave alpakka.s3.aws.credentials empty to use the default AWS credential methods and set the AWS region via:
alpakka.s3 {
  aws {
    region {
      provider = static
      default-region = "us-east-1"
    }
  }
}
endpoint-url is only for use with alternative (non-AWS) implementations of the S3 API (e.g. minio). If you're setting it, you will not be able to connect to AWS S3.
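For completeness, here is a minimal sketch of the InputStream variant from the question, assuming Akka 2.6+ and a recent Alpakka; the bucket, key, and stream contents are placeholders:
import akka.actor.ActorSystem
import akka.stream.IOResult
import akka.stream.alpakka.s3.MultipartUploadResult
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.{Sink, Source, StreamConverters}
import akka.util.ByteString
import scala.concurrent.Future
import scala.util.{Failure, Success}

object S3UploadSketch extends App {
  implicit val system: ActorSystem = ActorSystem("s3-upload")
  import system.dispatcher

  val bucket = "my-bucket" // placeholder
  val bucketKey = "my-key" // placeholder
  val is: java.io.InputStream =
    new java.io.ByteArrayInputStream("hello".getBytes("UTF-8"))

  // Wrap the InputStream in a Source, exactly as in the question
  val source: Source[ByteString, Future[IOResult]] =
    StreamConverters.fromInputStream(() => is, chunkSize = 4096)
  val s3Sink: Sink[ByteString, Future[MultipartUploadResult]] =
    S3.multipartUpload(bucket, bucketKey)

  // runWith keeps the sink's materialized value: the upload result
  val result: Future[MultipartUploadResult] = source.runWith(s3Sink)
  result.onComplete {
    case Success(r)  => println(s"Uploaded to ${r.location}"); system.terminate()
    case Failure(ex) => ex.printStackTrace(); system.terminate()
  }
}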

Related

Assigning a launch configuration to an Auto-Scaling Group using CDK

Note: the code here is Go, but I'm happy to see answers in any CDK language.
In AWS CDK, you can create Launch Configurations:
// Create the launch configuration
lc := awsautoscaling.NewCfnLaunchConfiguration(
    stack,
    jsii.String("asg-lc"),
    &awsautoscaling.CfnLaunchConfigurationProps{
        ...
    },
)
But there is no obvious parameter or function in the Auto-Scaling Group props to attach it.
I have set the update policy:
UpdatePolicy: awsautoscaling.UpdatePolicy_RollingUpdate,
What I want to do is be able to call an auto-refresh in the CI system when an AMI configuration has changed:
aws autoscaling start-instance-refresh --cli-input-json file://asg-refresh.json
The problem is that the launch configuration appears to have been created automatically when the stack was first created; it doesn't change on update and has incorrect values (the AMI ID is outdated).
Is there a way to define/refresh the launch config using CDK to update the AMI ID? It's a simple change in the UI.
If you use the L2 AutoScalingGroup construct, you can run cdk deploy after updating the AMI, and it should launch a new one for you. With this construct, the Launch Configuration is also created for you; you don't really need to worry about it.
IMachineImage image = MachineImage.Lookup(new LookupMachineImageProps()
{
    Name = "MY-AMI", // this can be updated on subsequent deploys
});
AutoScalingGroup asg = new AutoScalingGroup(this, $"MY-ASG", new AutoScalingGroupProps()
{
    AllowAllOutbound = false,
    AssociatePublicIpAddress = false,
    AutoScalingGroupName = $"MY-ASG",
    Vpc = network.Vpc,
    VpcSubnets = new SubnetSelection() { Subnets = network.Vpc.PrivateSubnets },
    MinCapacity = 1,
    MaxCapacity = 2,
    MachineImage = image,
    InstanceType = new InstanceType("m5.xlarge"),
    SecurityGroup = sg,
    UpdatePolicy = UpdatePolicy.RollingUpdate(new RollingUpdateOptions()
    {
    }),
});

yaml language server and nvim configuration

I would like to use nvim lsp to validate an OpenAPI file.
Here are the steps I've been following:
I installed yaml-language-server, and made sure it was available in the PATH
I downloaded the OpenAPI schema from here, and stored it in my filesystem.
I modified my existing init.vim to include the following:
lspconfig.yamlls.setup {
  on_attach = on_attach,
  flags = {
    debounce_text_changes = 150,
  },
  settings = {
    yaml = {
      schemas = {
        {
          fileMatch = { ".openapi.yaml" },
          url = "file:///[...]/openapi.schema.yaml"
        }
      },
      format = {
        enable = true,
        singleQuote = false,
        bracketSpacing = true
      },
      validate = true,
      completion = true
    }
  }
}
I wrote a simple OpenAPI spec file, and opened it with nvim.
It seems that my nvim correctly hits the yaml-language-server to validate the yaml syntax, but it does not seem to validate against the schema.
One of the problems I have is that I don't have access to the logs of nvim or yaml-language-server, so I have little insight into what's going on.
It's been a while, but I figured I'd give this a shot in case it helps anyone.
First, about your lack of access to the nvim log: you don't explain why that is, but if your nvim is running (and it seems that it is), you should be able to open the log by running this command:
:lua vim.cmd('e'..vim.lsp.get_log_path())
I think the way you have the settings.yaml.schemas node is not correct.
It might also be a problem that you are using the YAML format. I highly recommend using JSON for OpenAPI schema validation. If you want to try this, just substitute json for yaml everywhere (s/yaml/json) and replace your local YAML file with the JSON one.
See if the example below works for you.
-- configure yamlls ls:
require('lspconfig')['yamlls'].setup {
  on_attach = on_attach,
  capabilities = capabilities,
  settings = {
    yaml = {
      schemas = {
        ["https://raw.githubusercontent.com/OAI/OpenAPI-Specification/main/schemas/v3.0/schema.yaml"] = "/*"
      }
    }
  }
}
If you really need to load a local file, either one of these formats should work:
["file:///home/$user/openapi.schema.yaml"] = "/*"
["../relative/path/openapi.schema.yaml"] = "/*"

Is it possible to create a new gcloud SQL instance from code?

Is it possible to create a new gcloud SQL instance from code?
For an R&D project, I need to write a tool that can spin up and delete Postgres databases hosted with gcloud. I see this can be done for Compute instances using Node. I would preferably like to do it using Node or Python, but I'm not tied down to any particular language.
Is this possible and do you have any suggestions?
Yes, the Cloud SQL instances.insert API call can be used to create instances. However, there is no nice Node.js package like @google-cloud/compute. Instead you must use the generic, alpha googleapis library. This looks something like:
const {google} = require('googleapis');
const sql = google.sql({version: 'v1beta4'});

async function main () {
  const auth = new google.auth.GoogleAuth({scopes: ['https://www.googleapis.com/auth/sqlservice.admin']});
  const authClient = await auth.getClient();
  const project = "your-project-id-123";
  const dbinstance = {
    // see https://cloud.google.com/sql/docs/postgres/admin-api/rest/v1beta4/instances#DatabaseInstance
    // for parameters
  };
  const res = await sql.instances.insert({project: project, requestBody: dbinstance, auth: authClient});
  // ...
}

How to upload to Google Cloud Storage, with metadata, via the Java client?

It looks like the PHP client allows this: https://github.com/GoogleCloudPlatform/google-cloud-php/issues/626
With the Java client (version 1.35.0), I haven't found a way to upload content and set its custom metadata in one go.
There are two ways to write to a bucket: via the bucket object, and (bypassing it) via the storage object.
The bucket API does not seem to allow providing metadata (blobs are identified simply by their name, as a string).
val sto: Storage = StorageOptions.getDefaultInstance.getService()
val bucket: Bucket = sto.get(bucketName)
bucket.create("abc", "ABC!".getBytes(UTF_8))
The storage API, in contrast, allows passing custom metadata:
val sto: Storage = StorageOptions.getDefaultInstance.getService()
val meta: Map[String,String] = Map("a" -> "42", "b" -> "25")
val bId: BlobId = BlobId.of(bucketName, "abc2")
val bInfo: BlobInfo = BlobInfo.newBuilder(bId)
  .setMetadata(meta.asJava)
  .build()
sto.create(bInfo, "ABC!".getBytes(UTF_8))
I wish this were described more clearly somewhere in the documentation, but maybe an entry on Stack Overflow will suffice. :)
Here's how it works in Java since at least version 1.35.0:
public Blob uploadWithMetadata(Storage storage, Map<String, String> metadata, byte[] objectData) {
    BlobInfo blobInfo = BlobInfo
        .newBuilder("bucketName", "objectName")
        .setMetadata(metadata)
        .build();
    return storage.create(blobInfo, objectData);
}
As of client version 1.109.1, the bucket API still does not allow setting custom metadata.
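To read the custom metadata back, the storage API works as well; a minimal sketch in Scala (assuming Scala 2.13 for scala.jdk.CollectionConverters, plus the bucketName and "abc2" blob from the example above):
import com.google.cloud.storage.{Blob, BlobId, Storage, StorageOptions}
import scala.jdk.CollectionConverters._

val sto: Storage = StorageOptions.getDefaultInstance.getService()
val blob: Blob = sto.get(BlobId.of(bucketName, "abc2"))
// getMetadata returns the custom metadata set at upload time (may be null if none was set)
val meta: Map[String, String] = Option(blob.getMetadata)
  .map(_.asScala.toMap)
  .getOrElse(Map.empty) // here: Map("a" -> "42", "b" -> "25")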

Serving static /public/ file from Play 2 Scala controller

What is the preferred method to serve a static file from a Play Framework 2 Scala controller?
The file is bundled with my application, so it's not possible to hardcode an absolute filesystem /path/to/the/file, because its location depends on where the Play app happens to be installed.
The file is placed in the public/ dir, but not in app/assets/, because I don't want Play to compile it.
(The reason I don't simply add a route to that file is that one needs to log in before accessing it; otherwise it's of no use.)
Here is what I've done so far, but this breaks on my production server.
object Application ...
  def viewAdminPage = Action ... {
    ... authorization ...
    val adminPageFile = Play.getFile("/public/admin/index.html")
    Ok.sendFile(adminPageFile, inline = true)
  }
And in my routes file, I have this line:
GET /-/admin/ controllers.Application.viewAdminPage
The problem is that on my production server, this error happens:
FileNotFoundException: app1/public/admin/index.html
Is there some other method, rather than Play.getFile and Ok.sendFile, to specify which file to serve, one that never breaks in production?
(My app is installed in /some-dir/app1/ and I start it from /some-dir/ (without app1/); perhaps everything would work if I instead started the app from /some-dir/app1/. But I'd like to know how one "should" serve a static file from inside a controller, so that everything always works on the production servers too, regardless of where I happen to start the application.)
Check the Streaming HTTP responses doc:
def index = Action {
  Ok.sendFile(
    content = new java.io.File("/tmp/fileToServe.pdf"),
    fileName = _ => "termsOfService.pdf"
  )
}
You can add a random string to the fileName (individual for each logged-in user) to prevent sharing of download links between authenticated and non-authenticated users, and also to collect advanced download stats.
I did this: (but see the Update below!)
val fileUrl: java.net.URL = this.getClass().getResource("/public/admin/file.html")
val file = new java.io.File(fileUrl.toURI())
Ok.sendFile(file, inline = true)
(this is the controller, which is (and must be) located in the same package as the file that's being served.)
Here is a related question: open resource with relative path in java
Update
Accessing the file via a URI causes an error (IllegalArgumentException: URI is not hierarchical) if the file is located inside a JAR, which is the case if you run Play via play stage and then target/start.
So instead I read the file as a stream, converted it to a String, and sent that string as HTML:
val adminPageFileString: String = {
  // In prod builds, the file is embedded in a JAR, and accessing it via
  // a URI causes an IllegalArgumentException: "URI is not hierarchical".
  // So use a stream instead.
  val adminPageStream: java.io.InputStream =
    this.getClass().getResourceAsStream("/public/admin/index.html")
  io.Source.fromInputStream(adminPageStream).mkString("")
}
...
Ok(adminPageFileString).as(HTML)
Play has a built-in method for this:
Ok.sendResource("public/admin/file.html", classLoader)
You can obtain a classloader from an injected Environment with environment.classLoader or from this.getClass.getClassLoader.
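Put together, a full controller action using sendResource might look like this; a sketch assuming Play 2.6+ with dependency injection (the controller name and the authorization step are hypothetical):
import javax.inject.Inject
import play.api.Environment
import play.api.mvc._

class AdminController @Inject() (env: Environment, cc: ControllerComponents)
    extends AbstractController(cc) {

  def viewAdminPage: Action[AnyContent] = Action { implicit request =>
    // ... authorization check, as in the question ...
    // sendResource resolves the file from the classpath, so it also works
    // when the app runs from a packaged JAR
    Ok.sendResource("public/admin/index.html", env.classLoader)
  }
}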
The manual approach for this is the following:
val url = Play.resource(file)
url.map { url =>
  val stream = url.openStream()
  val length = stream.available
  val resourceData = Enumerator.fromStream(stream)
  val headers = Map(
    CONTENT_LENGTH -> length.toString,
    CONTENT_TYPE -> MimeTypes.forFileName(file).getOrElse(BINARY),
    CONTENT_DISPOSITION -> s"""attachment; filename="$name"""")
  SimpleResult(
    header = ResponseHeader(OK, headers),
    body = resourceData)
}
The equivalent using the assets controller is this:
val name = "someName.ext"
val response = Assets.at("/public", name)(request)
response
  .withHeaders(CONTENT_DISPOSITION -> s"""attachment; filename="$name"""")
Another variant, without using a String, but by streaming the file content:
def myStaticResource() = Action { implicit request =>
  val contentStream = this.getClass.getResourceAsStream("/public/content.html")
  Ok.chunked(Enumerator.fromStream(contentStream)).as(HTML)
}
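On newer Play versions, where Enumerator (iteratees) is deprecated in favour of Akka Streams, the same idea can be sketched with StreamConverters (assuming Play 2.6+; the action name is hypothetical):
import akka.stream.IOResult
import akka.stream.scaladsl.{Source, StreamConverters}
import akka.util.ByteString
import scala.concurrent.Future

def myStaticResourceStreamed() = Action { implicit request =>
  // Wrap the classpath resource in an Akka Streams Source instead of an Enumerator
  val source: Source[ByteString, Future[IOResult]] =
    StreamConverters.fromInputStream(() => this.getClass.getResourceAsStream("/public/content.html"))
  Ok.chunked(source).as(HTML)
}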