I have an existing integration with PayPal using the Java SDK. We're planning a production release, but we can't let it go to prod with the SDK's current log level. It seems to be set to DEBUG and logs every request going to and coming from PayPal. I assume there must be a parameter to add to the paypal_sdk_config.properties file, but I can't seem to guess it, and documentation on the matter is nonexistent.
Has anyone done this before?
We just discovered, by going through the source code of the com.paypal.core.LoggingManager class, that the logger creation is not intuitive.
The logger name always carries a literal "class " prefix (apparently the Class object's toString()), for example: class com.paypal.core.APIService
This means you cannot set the log level with this call:
getLogger("com.paypal.core.APIService").setLevel(Level.WARNING);
To set the log level, you need to do this instead:
getLogger("class com.paypal.core.APIService").setLevel(Level.WARNING);
I just opened an issue on the PayPal GitHub repo for this strange behavior: https://github.com/paypal/sdk-core-java/issues/13
I need some help. I'm starting out with this automation stuff; I like it, but I'm still learning. Recently I created a test case that basically goes to a certain page and clicks a button to upgrade the account on a specific sale. I did that, but when I got my PR reviewed, the devops asked if I could add an assertion.
The thing is, this code is in the spec file, not in the page objects file. Does the devops mean I have to create the code in the page object file and then call it from the spec file? Any tip would be great, thanks!
You can do it either way: write the assertion in the spec file, or write it in the page objects file and call it from the spec file. If the latter is your framework's code convention, you may want to do it that way for consistency, but either way should work.
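For example, a rough sketch in TypeScript with Playwright (every name here - UpgradePage, the selectors, the confirmation banner - is hypothetical; adapt it to your own framework and app):

// pages/upgrade.page.ts - hypothetical page object
import { expect, Page } from '@playwright/test';

export class UpgradePage {
  constructor(private readonly page: Page) {}

  async upgradeSale(saleId: string): Promise<void> {
    await this.page.goto(`/sales/${saleId}`); // assumes baseURL is configured
    await this.page.click('#upgrade-account');
  }

  // The assertion can live here in the page object...
  async expectUpgradeConfirmed(): Promise<void> {
    await expect(this.page.locator('.upgrade-confirmed')).toBeVisible();
  }
}

// upgrade.spec.ts - the spec drives the page object
import { test, expect } from '@playwright/test';
import { UpgradePage } from './pages/upgrade.page';

test('upgrades the account on a sale', async ({ page }) => {
  const upgradePage = new UpgradePage(page);
  await upgradePage.upgradeSale('12345');

  // ...or directly in the spec - either placement works:
  await expect(page.locator('.upgrade-confirmed')).toBeVisible();
});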
GitHub's Actions feature recently started letting users generate badges to showcase the status of their tests. For example, if I have a set of tests that run on my repo's dev branch from a file named .github/workflows/test_dev.yml, I can access that build's status by adding /badge.svg to the end of the workflow's URL.
https://github.com/<username>/<repo_name>/actions/workflows/test_dev.yml/badge.svg
That's great for keeping your project README up to date with the status of the project, but the next logical step would be to also add a link to the badge that points to the latest test run.
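In README terms that would mean something like this, where the link target (the second URL) is the part in question - right now it can only point at the workflow's full run list, and I'd want it to point at the latest run:

[![tests](https://github.com/<username>/<repo_name>/actions/workflows/test_dev.yml/badge.svg)](https://github.com/<username>/<repo_name>/actions/workflows/test_dev.yml)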
Unfortunately, even though you can access all the runs of a particular workflow as follows:
https://github.com/<username>/<repo_name>/actions/workflows/test_dev.yml
The test runs themselves sit behind a unique ID under actions/runs/:
https://github.com/<username>/<repo_name>/actions/runs/1234567890
Is there any way to construct a URL that just points to the latest test? Something like:
https://github.com/<username>/<repo_name>/actions/workflows/test_dev.yml?result=latest
I poked through GitHub's documentation, but even though there's some documentation about generating those badge SVGs, I couldn't find anything about linking directly to the run that actually generated a given SVG.
You can use this to get the run's ID in a YAML workflow file:
https://github.com/<username>/<repo_name>/actions/runs/${{ github.run_id }}
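For instance, a hedged sketch of a step inside .github/workflows/test_dev.yml that prints the current run's URL (github.repository and github.run_id are built-in context values; the job and step names are made up):

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Print this run's URL
        run: echo "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"

Note that this expression only resolves while the workflow is running, so it's useful for notifications sent from the run itself rather than for a static link in the README.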
I want to be able to talk with Google Assistant, but connect the Actions project directly to an NLP service I already have running on my server - in other words, NOT use Dialogflow.
All the following examples show how to do this.
With Rasa
https://blog.rasa.com/going-beyond-hey-google-building-a-rasa-powered-google-assistant/
With LUIS
https://www.grokkingandroid.com/using-the-actions-sdk/
https://dzone.com/articles/using-the-actions-sdk-for-google-assistant-develop
With Watson
https://www.youtube.com/watch?v=no0R0bSkHXc
They use actions.intent.MAIN as the invocation intent and actions.intent.TEXT for all other utterances from the talker.
This is what I need. I don't want to create a load of intents with utterance phrases inside the Action, because I just want all the phrases spoken by the talker to be passed to my server, so my NLP service can deal with them.
So I set up a new Actions project, installed the Actions CLI, and then spent 3 days trying all possible combinations without success, because all these examples use gactions CLI 2.1.3 and Google has since moved on to gactions CLI 3.1.0.
Not only have the commands changed, but so have the file formats and project structure.
There is also a new Google Actions Console, and actions.intent.TEXT is no longer available.
My Action is connected to my server via webhook, but I cannot figure out how to get actions.intent.TEXT included and working.
Everything I find, even here:
Publishing Actions on google without Dialogflow
is from before the version change and follows the same pattern.
Can anyone point to an up-to-date (v3.1.0) discussion, tutorial, or example of how to send all talker phrases through to an NLP that isn't Dialogflow, or has Google closed that avenue?
Is it possible to somehow go back to the 2.1 CLI, either with the new Console or by reverting the console? (I have both CLI versions; I can see how different their commands are.)
Is it possible to go back and use 2.1?
There is no way to go back to AoG 2. You probably don't want to anyway - newer features aren't available with v2, only with v3.
Can I use my own NLP with v3?
Yes, although it isn't as obvious, and there are some changes in semantics.
As an overview, what you'll need to do is:
Create a Type that can accept "Free form text". I usually call this type "Any".
Create a Custom Intent that has a single parameter of this Any Type and at least one training phrase that captures everything for this parameter. (So you should add one training phrase, highlight the entire phrase, and set it as the parameter. Sometimes I also add additional phrases that include words I don't want to capture.) I usually call the Intent "matchAny" and the parameter "any".
Finally, you'll have a Scene that you transition to from the Main invocation. When it matches the "matchAny" Intent, it should call your webhook with a handler name. Your webhook will then be called with the "any" parameter set to the user's utterance. (Note that the JSON format has also changed.)
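With that wired up, the JSON your webhook receives looks roughly like this - a trimmed sketch from memory of the v3 request format, with "Conversation" as a hypothetical scene name, so check the real payload in your logs:

{
  "handler": { "name": "matchAny" },
  "intent": {
    "name": "matchAny",
    "params": {
      "any": {
        "original": "what the talker actually said",
        "resolved": "what the talker actually said"
      }
    }
  },
  "scene": { "name": "Conversation" },
  "session": { "id": "..." }
}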
That seems like a lot of work. Isn't there just some way to do all that from the command line?
Yes. You can do all of that in the configuration files that the CLI accesses and then upload it. (You can then also use the console to review the configuration, if necessary, to make sure they're configured as you expect. You can shift back and forth between them as appropriate.)
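As a hedged sketch of what those files can look like in a v3 SDK project (the names "any", "matchAny", and "Conversation" are the ones from the steps above; verify the exact schema against a project downloaded with gactions pull):

custom/types/any.yaml - the free-form text type:

freeTextType: {}

custom/intents/matchAny.yaml - one training phrase, fully annotated as the "any" parameter:

parameters:
- name: any
  type:
    name: any
trainingPhrases:
- ($any 'hello world' auto=true)

custom/scenes/Conversation.yaml - route the matched intent to your webhook handler:

intentEvents:
- intent: matchAny
  handler:
    webhookHandler: matchAny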
Google also has a GitHub repository that contains most of the files pre-configured for this sort of setup.
You will need to update the configuration from the repository to handle the webhook correctly (it includes code to illustrate what is happening using the inline code editor) and to add your project ID.
The app I am trying to test makes use of feature toggles to enable/disable certain parts of the app. However, the tests I've written cover all the features. When a user logs in, the app fetches the feature toggles from a REST service (using a class built on the generated OpenAPI client) so it knows what to show and what not to show.
Now I want to include those feature toggles in my tests, so that the corresponding tests are skipped rather than simply failing when some parts aren't enabled. However, when I try to import the class that makes the call, I get errors about dart:ui in the console and the test no longer runs. When I (recursively) check the imports of those service classes, some of them import widgets.dart, so I guess that's the problem. I tried removing most of it, but since we're using localized strings for error messages and the like, stripping all of that out of those files is getting to be a very cumbersome job.
So before I continue down that path, I was wondering: is there an easy way to call a REST service from an integration test?
I checked the flutter drive documentation and searched for similar questions online, but haven't really found anything comparable.
I am making a Firefox Extension and I want to log the errors/messages/exceptions produced by the extension code using Sentry.
I tried the JavaScript Raven client, but I guess it's not really made to live inside the "content" context.
The error I get is: message = "debug" is read-only. But my actual question is: how do I go about integrating Sentry into a Firefox add-on?
PS: No, this won't go into general distribution; my API keys are safe.
What I did was simply omit calling .install() and just use the error/message reporting directly.
There is no automatic exception catching and no source context, but it works for my purposes.
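A minimal sketch of that approach (assuming raven.js is bundled with the extension; the DSN is a placeholder and riskyExtensionWork is a hypothetical function - config, captureException, and captureMessage are standard raven-js calls):

// Configure Raven but skip .install(), so it doesn't try to patch
// globals that are read-only in the extension's content context.
Raven.config('https://<your-public-key>@sentry.io/<project-id>');

try {
  riskyExtensionWork();
} catch (e) {
  Raven.captureException(e);   // manual exception reporting
}

Raven.captureMessage('extension started');   // manual message reporting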