How to customise logging in tapir? - scala

I'm sure this is a ridiculous question... I'm trying to enable tapir's logging as described here:
https://tapir.softwaremill.com/en/v0.19.0-m4/server/debugging.html
... but even having looked at the ServerOptions docs, I'm not getting anywhere. My naive assumption from the docs was that it should be something like:
import sttp.tapir.server.akkahttp.AkkaHttpServerOptions
import sttp.tapir.server.interceptor.log.DefaultServerLog
val customServerOptions: AkkaHttpServerOptions = AkkaHttpServerOptions.customInterceptors(
  serverLog = DefaultServerLog(logWhenHandled = true, logAllDecodeFailures = true)
)
and then
AkkaHttpServerInterpreter(customServerOptions).toRoute(...)
However, this is clearly way off: DefaultServerLog is the wrong type, and it would need masses of configuration.
The docs imply this can just be done by setting a few flags, what am I missing?

DefaultServerLog is the default implementation of the ServerLog trait, though it has a couple of fields without default values. These fields specify how to actually perform the logging: DefaultServerLog.doLogWhenHandled etc.
What you need is an instance of DefaultServerLog that is already configured to integrate with akka-http. This is available as AkkaHttpServerOptions.Log.defaultServerLog.
Unfortunately, in 0.19 this was typed too narrowly, not allowing proper customisation. Hence, an ugly cast is needed (this is fixed in the current, stable 1.2 version):
val customServerOptions: AkkaHttpServerOptions =
  AkkaHttpServerOptions.customInterceptors(
    serverLog = Some(
      AkkaHttpServerOptions.Log.defaultServerLog
        .asInstanceOf[DefaultServerLog[LoggingAdapter => Future[Unit]]]
        .copy(logWhenHandled = true, logAllDecodeFailures = true)
    )
  )

AkkaHttpServerInterpreter(customServerOptions).toRoute(???)
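For reference, in tapir 1.x the server log is no longer tied to akka's LoggingAdapter, so the cast goes away. A minimal sketch, assuming the 1.x builder names (customiseInterceptors, defaultServerLog); verify against your exact version:
import sttp.tapir.server.akkahttp.AkkaHttpServerOptions

val serverOptions: AkkaHttpServerOptions =
  AkkaHttpServerOptions.customiseInterceptors
    .serverLog(
      AkkaHttpServerOptions.defaultServerLog // properly typed in 1.x, no cast needed
        .copy(logWhenHandled = true, logAllDecodeFailures = true)
    )
    .options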
As for the docs, you are right that they might be misleading (they haven't changed much since 0.19); I'll fix that shortly.

Related

How to use delta trigger in flink?

I want to use the DeltaTrigger in Apache Flink (Flink 1.3) but I have some trouble with this code:
.trigger(DeltaTrigger.of(100, new DeltaFunction[uniqStruct] {
  override def getDelta(oldFp: uniqStruct, newFp: uniqStruct): Double = newFp.time - oldFp.time
}, TypeInformation[uniqStruct]))
And I have this error:
error: object org.apache.flink.api.common.typeinfo.TypeInformation is not a value [ERROR] }, TypeInformation[uniqStruct]))
I don't understand why DeltaTrigger needs a TypeSerializer[T], and I don't know what to do to remove this error.
Thanks a lot everyone.
I would read into this a bit: https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/types_serialization.html. It sounds like you can create a serializer using typeInfo.createSerializer(config) on your type info. Note that what you're passing in currently is the type itself and NOT the type info, which is why you're getting the error you are.
You would need to do something more like
val uniqStructTypeInfo: TypeInformation[uniqStruct] = createTypeInformation[uniqStruct]
val uniqStructTypeSerializer = uniqStructTypeInfo.createSerializer(config)
To quote the page above regarding the config param you need to pass to create serializer
The config parameter is of type ExecutionConfig and holds the information about the program's registered custom serializers. Wherever possible, try to pass the program's proper ExecutionConfig. You can usually obtain it from DataStream or DataSet via calling getExecutionConfig(). Inside functions (like MapFunction), you can get it by making the function a Rich Function and calling getRuntimeContext().getExecutionConfig().
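Putting the pieces together, a minimal sketch of the corrected trigger call (UniqStruct is a hypothetical stand-in for the question's uniqStruct, and the surrounding pipeline is made up for illustration):
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.functions.windowing.delta.DeltaFunction
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows
import org.apache.flink.streaming.api.windowing.triggers.DeltaTrigger
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow

case class UniqStruct(time: Double) // hypothetical event type

val env = StreamExecutionEnvironment.getExecutionEnvironment
val typeInfo = createTypeInformation[UniqStruct]
val serializer = typeInfo.createSerializer(env.getConfig) // ExecutionConfig from the environment

env.fromElements(UniqStruct(0.0), UniqStruct(150.0))
  .keyBy(_ => 0)
  .window(GlobalWindows.create())
  .trigger(DeltaTrigger.of[UniqStruct, GlobalWindow](100.0, new DeltaFunction[UniqStruct] {
    override def getDelta(oldFp: UniqStruct, newFp: UniqStruct): Double =
      newFp.time - oldFp.time
  }, serializer))
  // ... then .apply(<window function>) as usual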
DeltaTrigger needs a TypeSerializer because it uses Flink's managed state mechanism to store each element for later comparison with the next one (it just keeps one element, the last one, which is updated as new elements arrive).
You will find an example (in Java) here.
But if all you need is a window that triggers every 100msec, then it'll be easier to just use a TimeWindow, such as
input
  .keyBy(<key selector>)
  .timeWindow(Time.milliseconds(100))
  .apply(<window function>)
Updated:
To have hour-long windows that trigger every 100msec, you could use sliding windows. However, you would have 10 * 60 * 60 windows, and every event would be placed into each of these 36000 windows. So that's not a great idea.
If you use a GlobalWindow with a DeltaTrigger, then the window will be triggered only when events are more than 100msec apart, which isn't what you've said you want.
I suggest you look at ProcessFunction. It should be straightforward to get what you want that way.
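For example, here's a rough sketch of the ProcessFunction approach using processing-time timers, reusing the hypothetical UniqStruct from the sketch above (the transient timer flag is not checkpointed; illustration only):
import org.apache.flink.api.common.state.{ListState, ListStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.ProcessFunction
import org.apache.flink.util.Collector
import scala.collection.JavaConverters._

// Buffers elements per key and emits the buffer every 100 ms of processing time.
class Emit100Ms extends ProcessFunction[UniqStruct, List[UniqStruct]] {
  @transient private var buffer: ListState[UniqStruct] = _
  @transient private var timerSet = false // not fault-tolerant; illustration only

  override def open(parameters: Configuration): Unit =
    buffer = getRuntimeContext.getListState(
      new ListStateDescriptor[UniqStruct]("buffer", classOf[UniqStruct]))

  override def processElement(
      value: UniqStruct,
      ctx: ProcessFunction[UniqStruct, List[UniqStruct]]#Context,
      out: Collector[List[UniqStruct]]): Unit = {
    buffer.add(value)
    if (!timerSet) {
      ctx.timerService().registerProcessingTimeTimer(
        ctx.timerService().currentProcessingTime() + 100)
      timerSet = true
    }
  }

  override def onTimer(
      timestamp: Long,
      ctx: ProcessFunction[UniqStruct, List[UniqStruct]]#OnTimerContext,
      out: Collector[List[UniqStruct]]): Unit = {
    out.collect(buffer.get().asScala.toList) // emit and reset the buffer
    buffer.clear()
    timerSet = false
  }
}
Applied with stream.keyBy(...).process(new Emit100Ms), each key emits its buffered elements at most once every 100 msec.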

How do I get Swashbuckle to have Swagger UI to group by Version?

I am playing with the new Azure API Apps (template in Visual Studio 2013 w/ the new SDK bits from 3/24/15) and I'd like to have my Swagger UI group my calls by version number. In my case, I'm currently versioning by URI (I realize REST purists will tell me not to do this; please don't try to "correct my error" here). For instance, I may have these calls:
http://example.com/api/Contacts <-- "latest"
http://example.com/api/1/Contacts
http://example.com/api/2/Contacts
http://example.com/api/Contacts/{id} <-- "latest"
http://example.com/api/1/Contacts/{id}
http://example.com/api/2/Contacts/{id}
Functionally, this works great! (Yes, I know some of you will cringe. Sorry this hurts your feelings.) However, my problem is w/ Swagger UI organization. By default, Swagger UI groups these by the Controller Name (Contacts in this case). I see in the SwaggerConfig.cs file that I can change this:
// Each operation can be assigned one or more tags which are then used by consumers for various reasons.
// For example, the swagger-ui groups operations according to the first tag of each operation.
// By default, this will be the controller name but you can use the "GroupActionsBy" option to
// override with any value.
//
//c.GroupActionsBy(apiDesc => apiDesc.HttpMethod.ToString());
What I don't understand is how I can tweak this to group all of the "latest" together and then all of v1 together and then all of v2 together, etc.
How can I do this? If it absolutely requires that I add the word "latest" (or equivalent) into the path in place of the version number, then I can do that, but I'd prefer not to have to.
I believe what you're looking to do is to uncomment a line a few lines below that one in SwaggerConfig.cs
c.OrderActionGroupsBy(new DescendingAlphabeticComparer());
except you'd change the name of the class to something like ApiVersionComparer() and then implement it as a new class:
public class ApiVersionComparer : IComparer<string>
{
    public int Compare(string x, string y)
    {
        // Write whatever comparer you'd like to here.
        // Yours would likely involve parsing the strings and having
        // more complex logic than this....
        return -(string.Compare(x, y));
    }
}
If you've gotten far enough to ask this question, I'm sure I can leave the sort implementation for you. :-)

Why does sbt.Extracted's append method remove a previously defined TaskKey?

There is a suitable method in sbt.Extracted to add a TaskKey to the current state. Assume I have inState: State:
val key1 = TaskKey[String]("key1")
Project.extract(inState).append(Seq(key1 := "key1 value"), inState)
I ran into strange behavior when doing it twice. I get an exception in the following example:
val key1 = TaskKey[String]("key1")
val key2 = TaskKey[String]("key2")
val st1: State = Project.extract(inState).append(Seq(key1 := "key1 value"), inState)
val st2: State = Project.extract(st1).append(Seq(key2 := "key2 value"), st1)
Project.extract(st2).runTask(key1, st2)
leads to:
java.lang.RuntimeException: */*:key1 is undefined.
The question is: why does it work like this? Is it possible to add several TaskKeys while executing a particular task, via several calls to sbt.Extracted.append?
The example sbt project is sbt.Extracted append-example; to reproduce the issue, just run sbt fooCmd.
Josh Suereth posted the answer to sbt-dev mail list. Quote:
The append function is pretty dirty/low-level. This is probably a bug in its implementation (or the lack of documentation), but it blows away any other appended setting when used.
What you want to do (I think) is append into the current "Session" so things will stick around and the user can remove what you've done via the "session clear" command.
Additionally, the settings you're passing are in "raw" or "fully qualified" form. If you'd like the setting you write to work exactly the same as it would from a build.sbt file, you need to transform it first, so the Scopes match the current project, etc.
We provide a utility in sbt-server that makes it a bit easier to append settings into the current session:
https://github.com/sbt/sbt-remote-control/blob/master/server/src/main/scala/sbt/server/SettingUtil.scala#L11-L29
I have tested the proposed solution and that works like a charm.
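For what it's worth, newer sbt versions expose the session-based append directly on Extracted. A minimal sketch, assuming sbt 1.x's appendWithSession (which preserves previously appended settings, unlike the raw append above):
val st1: State = Project.extract(inState).appendWithSession(Seq(key1 := "key1 value"), inState)
val st2: State = Project.extract(st1).appendWithSession(Seq(key2 := "key2 value"), st1)
Project.extract(st2).runTask(key1, st2) // key1 remains defined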

HATEOAS link to method with optional request params

I want to link to a method that has the following signature:
public SomeResponse getSomeObjects(@RequestParam(value = "foo", defaultValue = "bar") Foo fooValue)
Now I want the link to look like this:
http://myhost/api/someobjects
I tried using methodOn from Spring HATEOAS's ControllerLinkBuilder as seen below:
discoverResponse.add(linkTo(methodOn(SomeController.class).getSomeObjects(null)).withRel("someobjects"))
But it doesn't lead to the desired link because a ?foo is added at its end. How can I achieve the above objective?
Since backward compatibility is such an issue for you, you could always manually construct your Link objects like so:
discoverResponse.add(new Link(baseUri() + "/someobjects", "someobjects"));
The other option would be to fork Spring HATEOAS on GitHub, build the project yourself, and change the way defaults are handled in ControllerLinkBuilder. I don't really know how you'd expect an out-of-context Link builder to decide whether it should advertise an optional parameter. In the HATEOAS world, if the parameter isn't included, the client doesn't know about it. So why even have the optional parameter?
I know it's been 7 years now, but I had a similar problem today which led me here. In Spring HATEOAS 1.1.0 the behavior is slightly different: it will generate URI templates by default for non-required @RequestParams:
http://myhost/api/someobjects{?foo}
If you don't want that in your link, you can just expand it:
Map<String, Object> parameters = new HashMap<>();
parameters.put("foo", null);
link = link.expand(parameters);
It will result in the desired URL
http://myhost/api/someobjects

Redmine REST API called from Ruby is ignoring updates to some fields

I have some code which was working at one point but no longer works, which strongly suggests that the redmine configuration is involved somehow (I'm not the redmine admin), but the lack of any error messages makes it hard to determine what is wrong. Here is the code:
#!/usr/bin/env ruby
require "rubygems"
gem "activeresource", "2.3.14"
require "active_resource"
class Issue < ActiveResource::Base
  self.site = "https://redmine.mydomain.com/"
end
Issue.user = "myname"
Issue.password = "mypassword" # Don't hard-code real passwords :-)
issue = Issue.find 19342 # Created manually to avoid messing up real tickets.
field = issue.custom_fields.select { |x| x.name == "Release Number" }.first
issue.notes = "Testing at #{Time.now}"
issue.custom_field_values = { field.id => "Release-1.2.3" }
success = issue.save
puts "field.id: #{field.id}"
puts "success: #{success}"
puts "errors: #{issue.errors.full_messages}"
When this runs, the output is:
field.id: 40
success: true
errors: []
So far so good, except that when I go back to the GUI and look at this ticket, the "notes" part is updated correctly but the custom field is unchanged. I did put some tracing in the ActiveResource code, and it appears to be sending out my desired updates, so I suspect the problem is on the server side.
BTW, if you know of any good collections of examples of accessing Redmine from Ruby using the REST API, that would be really helpful too. I may just be looking in the wrong places, but all I've found are a few trivial ones that are just enough to whet one's appetite for more, and the docs I've seen on the Redmine site don't even list all the available fields. (Ideally, it would be nice if the examples also specified which version of Redmine they work with.)