There is a suitable method in sbt.Extracted to add a TaskKey to the current state. Assume I have inState: State:
val key1 = TaskKey[String]("key1")
Project.extract(inState).append(Seq(key1 := "key1 value"), inState)
I ran into strange behavior when I do it twice. I get an exception in the following example:
val key1 = TaskKey[String]("key1")
val key2 = TaskKey[String]("key2")
val st1: State = Project.extract(inState).append(Seq(key1 := "key1 value"), inState)
val st2: State = Project.extract(st1).append(Seq(key2 := "key2 value"), st1)
Project.extract(st2).runTask(key1, st2)
leads to:
java.lang.RuntimeException: */*:key1 is undefined.
The question is: why does it work like this? Is it possible to add several TaskKeys while executing a particular task by making several calls to sbt.Extracted.append?
An example sbt project is sbt.Extracted append-example; to reproduce the issue just run sbt fooCmd.
Josh Suereth posted the answer to the sbt-dev mailing list. Quote:
The append function is pretty dirty/low-level. This is probably a bug in its implementation (or the lack of documentation), but it blows away any other appended setting when used.
What you want to do (I think) is append into the current "Session" so things will stick around and the user can remove what you've done via the "session clear" command.
Additionally, the settings you're passing are in "raw" or "fully qualified" form. If you'd like the setting you write to work exactly the same as it would from a build.sbt file, you need to transform it first, so the Scopes match the current project, etc.
We provide a utility in sbt-server that makes it a bit easier to append settings into the current session:
https://github.com/sbt/sbt-remote-control/blob/master/server/src/main/scala/sbt/server/SettingUtil.scala#L11-L29
I have tested the proposed solution and that works like a charm.
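For reference, here is a minimal sketch of the session-based variant, assuming a newer sbt where Extracted exposes appendWithSession (append was later deprecated in favour of appendWithSession/appendWithoutSession); the settings are recorded in the session, so repeated calls don't wipe each other out and the user can undo them with session clear:
val key1 = TaskKey[String]("key1")
val key2 = TaskKey[String]("key2")

// appendWithSession keeps previously appended session settings around,
// unlike the low-level append used above
val st1: State = Project.extract(inState).appendWithSession(Seq(key1 := "key1 value"), inState)
val st2: State = Project.extract(st1).appendWithSession(Seq(key2 := "key2 value"), st1)

Project.extract(st2).runTask(key1, st2) // key1 should still be defined here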
I'm sure this is a ridiculous question... I'm trying to enable tapir's logging as described here:
https://tapir.softwaremill.com/en/v0.19.0-m4/server/debugging.html
... but even having looked at the ServerOptions docs, I'm not getting anywhere. My naive assumption from the docs was that it should be something like:
import sttp.tapir.server.akkahttp.AkkaHttpServerOptions
import sttp.tapir.server.interceptor.log.DefaultServerLog
val customServerOptions: AkkaHttpServerOptions = AkkaHttpServerOptions.customInterceptors(
  serverLog = DefaultServerLog(logWhenHandled = true, logAllDecodeFailures = true)
)
and then
AkkaHttpServerInterpreter(customServerOptions).toRoute(...)
However, this is clearly way off - DefaultServerLog is the wrong type, and would need masses of configuration.
The docs imply this can just be done by setting a few flags, what am I missing?
DefaultServerLog is the default implementation of the ServerLog trait, though it has a couple of fields without default values. These fields specify how to actually perform the logging: DefaultServerLog.doLogWhenHandled etc.
What you need is an instance of DefaultServerLog that is configured to integrate with akka-http. This is available as AkkaHttpServerOptions.Log.defaultServerLog.
Unfortunately, in 0.19 this was typed too narrowly, not allowing proper customisation. Hence, an ugly cast is needed (this is fixed in the current, stable 1.2 version):
import akka.event.LoggingAdapter
import scala.concurrent.Future

val customServerOptions: AkkaHttpServerOptions =
  AkkaHttpServerOptions.customInterceptors(
    serverLog = Some(
      AkkaHttpServerOptions.Log.defaultServerLog
        .asInstanceOf[DefaultServerLog[LoggingAdapter => Future[Unit]]]
        .copy(logWhenHandled = true, logAllDecodeFailures = true)
    )
  )

AkkaHttpServerInterpreter(customServerOptions).toRoute(???)
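For completeness, a hedged usage sketch showing how the resulting route could be bound with akka-http; myServerEndpoint, the actor system name and the port are placeholders, not part of the original answer:
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import sttp.tapir.server.akkahttp.AkkaHttpServerInterpreter

implicit val system: ActorSystem = ActorSystem("tapir-example") // placeholder name

// interpret the endpoint with the customised options and start the server
val route = AkkaHttpServerInterpreter(customServerOptions).toRoute(myServerEndpoint) // myServerEndpoint is a placeholder
Http().newServerAt("localhost", 8080).bind(route)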
As for the docs, you are right that they might be misleading (they haven't changed much since 0.19), I'll fix that shortly.
I am attempting to improve a Scala Companion Object plus Case Class code generating template in IntelliJ 2021.3 (Build #IU-213.5744.223, built on November 27, 2021).
I would like to allow the user to leave some of the input parameters unspecified (empty) and have the template generate sane defaults, since the generated code can then be refactored by the client into something more pertinent later.
My question is this: is there another way to achieve my goal than the approach I show below using a Velocity #macro (which I got from here, and which doesn't appear to work anyway)?
With the current approach using the #macro, when I invoke it...
With:
NAME = "Longitude"
PROPERTY_1_VARIABLE = "value"
PROPERTY_1_TYPE - "Double"
The Scala code target:
final case class Longitude(value: Double)
With:
NAME = "Longitude"
PROPERTY_1_VARIABLE = ""
PROPERTY_1_TYPE - ""
The Scala code target using defaults for 2 and 3:
final case class Longitude(property1: Property1Type)
And here's my attempt to make it work with the IntelliJ (Velocity) Code Template:
#macro(condOp $check, $default)
#if ($check == "")
$default
#else
$check
#end
#end
#set (${PROPERTY_1_VARIABLE} = "#condOp(${PROPERTY_1_VARIABLE}, 'property1')")
#set (${PROPERTY_1_TYPE} = "#condOp(${PROPERTY_1_TYPE}, 'Property1Type')")
#if ((${PACKAGE_NAME} && ${PACKAGE_NAME} != ""))package ${PACKAGE_NAME} #end
final case class ${NAME} (${PROPERTY_1_VARIABLE}: ${PROPERTY_1_TYPE})
First, the IntelliJ popup box for entering the parameters correctly shows "Name:", and then incorrectly shows "default:" and "check". I have submitted a ticket with JetBrains for this IntelliJ issue. Please vote for it if you get a chance, so it increases in priority.
And then when the macro finishes executing (I provide the Name of "LongLat", but leave the default and check parameters empty), I don't understand why I get the following gibberish.
package org.public_domain
${PROPERTY_1_TYPE})
Any guidance on fixing my current Velocity Code Template script, or suggestions for another way to effectively approach the problem, would be deeply appreciated.
I love testing-library, have used it a lot in a React project, and I'm trying to use it in an Angular project now - but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this not usually helpful (I couldn't find an element, here's the HTML where it isn't); but it gets truncated, often before the interesting line if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution would be to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
configure({
getElementError: (message: string, container) => {
const error = new Error(message);
error.name = 'TestingLibraryElementError';
error.stack = null;
return error;
},
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off as it's there to help us, but everyone has their own way, so if you have your reasons, then fair enough.
1. If you want to disable errors for a specific test, you can mock the console.error.
it('disable error example', () => {
const errorObject = console.error; //store the state of the object
console.error = jest.fn(); // mock the object
// code
//assertion (expect)
console.error = errorObject; // assign it back so you can use it in the next test
});
2. If you want to silence it for all the tests, you could use the jest --silent CLI option. Check the docs.
The above might even disable the DOM printing that is done by RTL; I am not sure as I haven't tried it, but if you look at the docs I linked, it says:
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM output. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it a console.log? Is it a console.warn? Once you know that, just mock it out like option 1 above.
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM();
While prettyDOM() can't be disabled, you can limit the number of printed lines to 0, which would just give you the error message and three dots (...) below it.
Here is an example printout I messed around with:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is to pass in an environment variable before executing your test suite, so for example with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's FR on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
const error = new Error(
[message, prettyDOM(container)].filter(Boolean).join('\n\n'),
)
error.name = 'TestingLibraryElementError'
return error
},
The call stack can also be removed.
You can change how the message is built by setting the DOM testing library's message-building function with configure. In my Angular project I added this to test.js:
configure({
getElementError: (message: string, container) => {
const error = new Error(message);
error.name = 'TestingLibraryElementError';
error.stack = null;
return error;
},
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.
Noob to Gatling/Scala here.
This might be a bit of a silly question but I haven't been able to find an example of what I am trying to do.
I want to pass in things such as the base URL, username, and passwords for some of my calls. These change from environment to environment, so I want to be able to change the values between environments but still run the same tests in each.
I know we can feed in values, but that appears to be more for iterating over datasets and not so much for passing in config values like these.
Ideally I would like to house this information in a JSON file and not pass it in on the command line, but maybe that's not doable?
Any guidance on this would be awesome.
I have a similar setup, and you can use pure Scala here. In this scenario you can create an object called, for example, Configuration:
object Configuration { var INPUT_PROFILE_FILE_NAME = ""; }
This object can also read a file; I have the below code inside the above object:
import java.io.FileInputStream
import java.util.Properties

val file = getClass.getResource("data/config.properties").getFile()
val prop = new Properties()
prop.load(new FileInputStream(file))
INPUT_PROFILE_FILE_NAME = prop.getProperty("inputProfileFileName")
Now you can import this object in the Gatling simulation file:
val profileName = Configuration.INPUT_PROFILE_FILE_NAME
https://docs.scala-lang.org/tutorials/tour/singleton-objects.html
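Building on the same idea, a hedged sketch of how this could cover the per-environment base URL and credentials from the question; the env system property, the file layout and the property keys are illustrative assumptions, not an established Gatling convention:
import java.util.Properties

object EnvConfig {
  // illustrative: pick the environment via -Denv=staging, defaulting to "dev"
  private val env = sys.props.getOrElse("env", "dev")

  // illustrative layout: src/test/resources/dev.properties, staging.properties, ...
  private val prop = new Properties()
  prop.load(getClass.getResourceAsStream(s"/$env.properties"))

  val baseUrl: String = prop.getProperty("baseUrl")
  val username: String = prop.getProperty("username")
  val password: String = prop.getProperty("password")
}
A simulation could then use these values directly, e.g. http.baseUrl(EnvConfig.baseUrl) in recent Gatling versions.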
I want to use DeltaTrigger in Apache Flink (Flink 1.3), but I have some trouble with this code:
.trigger(DeltaTrigger.of(100, new DeltaFunction[uniqStruct] {
override def getDelta(oldFp: uniqStruct, newFp: uniqStruct): Double = newFp.time - oldFp.time
}, TypeInformation[uniqStruct]))
And I have this error:
error: object org.apache.flink.api.common.typeinfo.TypeInformation is not a value [ERROR] }, TypeInformation[uniqStruct]))
I don't understand why DeltaTrigger needs a TypeSerializer[T], and I don't know what to do to remove this error.
Thanks a lot everyone.
I would read into this a bit: https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/types_serialization.html. It sounds like you can create a serializer using typeInfo.createSerializer(config) on your type info. Note that what you're passing in currently is the type itself and NOT the type info, which is why you're getting the error you are.
You would need to do something more like
val uniqStructTypeInfo: TypeInformation[uniqStruct] = createTypeInformation[uniqStruct]
val uniqStructTypeSerializer = uniqStructTypeInfo.createSerializer(config)
To quote the page above regarding the config param you need to pass to createSerializer:
The config parameter is of type ExecutionConfig and holds the information about the program's registered custom serializers. Wherever possible, try to pass the program's proper ExecutionConfig. You can usually obtain it from DataStream or DataSet via calling getExecutionConfig(). Inside functions (like MapFunction), you can get it by making the function a Rich Function and calling getRuntimeContext().getExecutionConfig().
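Putting those pieces together, a hedged sketch of how the trigger call from the question could look once the serializer is created; uniqStruct, <key selector> and <window function> are placeholders carried over from the question, and env is assumed to be the StreamExecutionEnvironment:
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.functions.windowing.delta.DeltaFunction
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows
import org.apache.flink.streaming.api.windowing.triggers.DeltaTrigger

// build the type info and derive a serializer from the program's ExecutionConfig
val uniqStructTypeInfo: TypeInformation[uniqStruct] = createTypeInformation[uniqStruct]
val uniqStructSerializer = uniqStructTypeInfo.createSerializer(env.getConfig)

input
  .keyBy(<key selector>)
  .window(GlobalWindows.create())
  .trigger(DeltaTrigger.of(100.0, new DeltaFunction[uniqStruct] {
    override def getDelta(oldFp: uniqStruct, newFp: uniqStruct): Double = newFp.time - oldFp.time
  }, uniqStructSerializer))
  .apply(<window function>)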
DeltaTrigger needs a TypeSerializer because it uses Flink's managed state mechanism to store each element for later comparison with the next one (it just keeps one element, the last one, which is updated as new elements arrive).
You will find an example (in Java) here.
But if all you need is a window that triggers every 100msec, then it'll be easier to just use a TimeWindow, such as
input
  .keyBy(<key selector>)
  .timeWindow(Time.milliseconds(100))
  .apply(<window function>)
Updated:
To have hour-long windows that trigger every 100msec, you could use sliding windows. However, you would have 10 * 60 * 60 windows, and every event would be placed into each of these 36000 windows. So that's not a great idea.
If you use a GlobalWindow with a DeltaTrigger, then the window will be triggered only when events are more than 100msec apart, which isn't what you've said you want.
I suggest you look at ProcessFunction. It should be straightforward to get what you want that way.