I cannot seem to log debug statements to the "Debug Console" in VS Code using .NET Core. I have this in my Startup.cs:
public void Configure(IApplicationBuilder app, IWebHostEnvironment env, ILogger<Startup> logger)
{
    _logger = logger;
    ...
}
When I log with this statement:
_logger.LogInformation("Testing LogInformation");
I get output in the Debug Console, but when I run this statement:
_logger.LogDebug("Testing LogDebug");
nothing shows. I would like to add a lot of debug statements that don't carry over to production unless I explicitly configure debug logging, so it seems like this should be a configuration setting. I read this post, but it does not make sense to me. It seems very intuitive that LogDebug would show a statement in the Debug Console, and if the challenge, as a link in the post states, is that "If the default was Debug or Verbose then a typical request would have hundreds of log messages flowing through, and that would probably make the log output unusable in most cases.", then that challenge should be dealt with in a different way, such as being able to filter which components show debug statements.
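For example, I would expect something along these lines to enable debug output for just my own code. A minimal sketch of an appsettings.Development.json, assuming the app's root namespace is MyApp (a hypothetical name):

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "MyApp": "Debug"
    }
  }
}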
Am I missing a configuration option? Are others using LogInformation?
In my workflows and activities, I’d like to log some messages for debugging purposes.
I saw the cadence.GetLogger(ctx).Info() function, but don’t know where to find the logs.
Go Client:
You can use the following in the workflow code:
cadence.GetLogger(ctx).Info(...)
In the activity code, you should use the following:
cadence.GetActivityLogger(ctx).Info(...)
By default, the logger writes to the console, which may be sufficient for development purposes. However, you should log to a file if you need the logs in production, too. Here is how to set up your Cadence worker to do it:
workerOptions := cadence.WorkerOptions{
    Logger: myLogger,
}
worker := cadence.NewWorker(service, domain, taskList, workerOptions)
The Cadence client uses zap as the logging framework. You can create the zap logger and specify the log file path per your needs. Check out the zap documentation to learn more about configuring the logs.
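For example, here is a minimal sketch that builds a zap logger writing to both stdout and a file (the file path is hypothetical):

import "go.uber.org/zap"

// Build a production zap config and add a file to its outputs.
cfg := zap.NewProductionConfig()
cfg.OutputPaths = []string{"stdout", "/var/log/cadence-worker.log"} // hypothetical path

myLogger, err := cfg.Build()
if err != nil {
    panic(err)
}

workerOptions := cadence.WorkerOptions{
    Logger: myLogger,
}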
Java Client:
The Java client uses slf4j for logging. You can get the logger instance by calling Workflow.getLogger() and configure it in logback.xml as usual.
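For example, a minimal logback.xml sketch that enables debug logging for your workflow code (the package name is hypothetical):

<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- hypothetical package containing your workflow implementations -->
  <logger name="com.example.workflows" level="DEBUG"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>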
I am doing some initial one-off setup using the [BeforeTestRun] hook for my SpecFlow tests. It checks some users to make sure they exist, and creates them with specific roles and permissions if they don't, so the automated tests can use them. The function prints a lot of useful information via Console.WriteLine.
When I run the tests on my local system, I can see the output from this hook function on the main feature file, and the output of each scenario under each of them. But when I run the tests via an Azure DevOps pipeline, I am not sure where to find the output for [BeforeTestRun], because it is not bound to a particular test scenario. The console of the Run Tests task has no information about this.
Can someone please help me surface this output somewhere so I can act on it accordingly?
I tried System.Diagnostics.Debug.Print, System.Diagnostics.Debug.WriteLine, and System.Diagnostics.Trace.WriteLine, but nothing seems to show up in the pipeline console.
[BeforeTestRun]
public static void BeforeRun()
{
    Console.WriteLine(
        "Before Test run analyzing the users and their needed properties for performing automation run");
}
I want my output to be visible somewhere so I can act based on that information if needed.
It's not possible to get the console logs to show up there.
The product currently does not support printing console logs for passing tests and we do not currently have plans to support this in the near future.
(Source: https://developercommunity.visualstudio.com/content/problem/631082/printing-the-console-output-in-the-azure-devops-te.html)
However, there's another way:
Your build will have an attachment with the file extension .trx. This is an XML file and contains an Output element for each test (see also https://stackoverflow.com/a/55452011):
<TestRun id="[omitted]" name="[omitted] 2020-01-10 17:59:35" runUser="[omitted]" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Times creation="2020-01-10T17:59:35.8919298+01:00" queuing="2020-01-10T17:59:35.8919298+01:00" start="2020-01-10T17:59:26.5626373+01:00" finish="2020-01-10T17:59:35.9209479+01:00" />
  <Results>
    <UnitTestResult testName="TestMethod1">
      <Output>
        <StdOut>Test</StdOut>
      </Output>
    </UnitTestResult>
  </Results>
</TestRun>
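If you want to act on that information automatically, here is a small sketch that pulls the StdOut of every test result out of the .trx attachment (the file path is hypothetical):

using System;
using System.Xml.Linq;

class TrxOutputDump
{
    static void Main()
    {
        // Load the .trx attachment downloaded from the build (hypothetical path)
        var doc = XDocument.Load("TestResults/results.trx");
        XNamespace ns = "http://microsoft.com/schemas/VisualStudio/TeamTest/2010";

        foreach (var result in doc.Descendants(ns + "UnitTestResult"))
        {
            var stdOut = result.Element(ns + "Output")?.Element(ns + "StdOut")?.Value;
            if (stdOut != null)
                Console.WriteLine("{0}: {1}", result.Attribute("testName")?.Value, stdOut);
        }
    }
}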
Does anyone know the command to enable logging (showing all the requests, and even JavaScript errors) when running Capybara?
I used to use a one-liner to enable it for troubleshooting...
In your spec_helper.rb, add this
Capybara.default_driver = :webkit_debug
and run your specs.
Is it possible to integrate a headless browser with Locust? I need my load tests to process the client-side scripts that trigger additional requests on each page load.
That's an old question, but I came across it now, and found a solution in "realbrowserlocusts" (https://github.com/nickboucart/realbrowserlocusts) - it adds "Real Browser support for Locust.io load testing" using selenium.
If you use one of its classes (FirefoxLocust, ChromeLocust, PhantomJSLocust, or HeadlessChromeLocust) instead of HttpLocust for your Locust user class
class WebsiteUser(HeadlessChromeLocust):
then in your TaskSet self.client becomes an instance of selenium WebDriver.
One drawback for me was that the WebDriver (unlike the built-in client in HttpLocust) doesn't know about "host", which forces you to use absolute URLs in the TaskSet instead of relative ones, and it's really convenient to use relative URLs when working with different environments (local, dev, staging, prod, etc.).
But there is an easy fix for this: inherit from one of realbrowserlocusts' locusts and pass "host" to the WebDriver instance:
from locust import TaskSet, task, between
from realbrowserlocusts import HeadlessChromeLocust

class UserBehaviour(TaskSet):
    @task(1)
    def some_action(self):
        # self.client is a selenium WebDriver instance
        self.client.get(self.client.base_host + "/relative/url")
        # and then, for instance, using selenium methods:
        self.client.find_element_by_name("form-name").send_keys("your text")
        # etc.

class ChromeLocustWithHost(HeadlessChromeLocust):
    def __init__(self):
        super(ChromeLocustWithHost, self).__init__()
        self.client.base_host = self.host

class WebsiteUser(ChromeLocustWithHost):
    screen_width = 1200
    screen_height = 1200
    task_set = UserBehaviour
    wait_time = between(5, 9)
============
UPDATE from September 5, 2020:
I posted this solution in March 2020, when locust was on major version 0. Since then, in May 2020, they released version 1.0.0, in which some backward-incompatible changes were made (one of which was renaming StopLocust to StopUser). realbrowserlocusts has not been updated for a while and does not yet work with locust >= 1.
There is a workaround, though. When locust v1.0.0 was released, the previous versions were re-released under a new name, locustio, with the last version being 0.14.6. So if you install "locustio==0.14.6" (or "locustio<1"), the solution with realbrowserlocusts still works (I checked just now); see https://github.com/nickboucart/realbrowserlocusts/issues/13.
You have to pin the version of locustio, as it refuses to install without one:
pip install locustio
...
ERROR: Command errored out with exit status 1:
...
**** Locust package has moved from 'locustio' to 'locust'.
Please update your reference
(or pin your version to 0.14.6 if you dont want to update to 1.0)
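So the install has to be pinned, as the message suggests:

pip install "locustio==0.14.6"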
In theory you could make a headless browser a Locust slave/worker. But the problem is that a browser is much more expensive in terms of CPU and memory, which would make it difficult to scale.
That is why Locust uses small greenlets to simulate users, since they are much cheaper to construct and run.
I would recommend breaking down your page's requests and encoding them as requests inside Locust. The Network tab in Chrome's DevTools is probably a good start. I've also heard of people capturing these by going through a proxy that logs all requests for you.
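As a sketch of that approach, in the pre-1.0 locust style used elsewhere in this thread (the endpoints are hypothetical):

from locust import HttpLocust, TaskSet, task, between

class PageFlow(TaskSet):
    @task
    def load_orders_page(self):
        # the HTML document itself
        self.client.get("/orders")
        # the requests the page's JavaScript would fire after load,
        # as captured from the browser's Network tab (hypothetical endpoints)
        self.client.get("/api/orders/summary")
        self.client.get("/api/notifications")

class WebsiteUser(HttpLocust):
    task_set = PageFlow
    wait_time = between(1, 3)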
You could use something like browserless to take care of hosting Chrome (https://browserless.io). Depending on how brutal your load tests are, there are varying degrees of concurrency. Full disclaimer: I'm the maker of the browserless service.
I think Locust is not designed for that purpose; it is for creating concurrent users that make HTTP requests, so I haven't seen any integration between Locust and a browser. However, you can simulate a browser by sending extra information in the headers; that way, requests made by client-side scripts can also be reproduced.
r = self.client.get("/orders", headers = {"Cookie": self.get_user_cookie(user[0]), 'User-Agent': self.user_agent})
The Locust way of solving this is to add more requests to your test that mimic the requests that the JavaScript code will make.
I structure my locust tests to parse the JSON response from an early request in the app's workflow. I then randomly pick some interesting piece of data from that JSON and issue more requests that mimic what would happen in the browser if the user had clicked on that piece of data.
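A minimal sketch of that pattern, again in the pre-1.0 locust style used above (the endpoints and JSON shape are hypothetical):

import random

from locust import HttpLocust, TaskSet, task, between

class BrowseFlow(TaskSet):
    @task
    def browse(self):
        # early request in the workflow; assume it returns a JSON list of products
        products = self.client.get("/api/products").json()
        if products:
            # randomly pick an interesting piece of data from the JSON
            chosen = random.choice(products)
            # mimic what the browser would request if the user clicked that item
            self.client.get("/api/products/%s" % chosen["id"])
            self.client.get("/api/products/%s/reviews" % chosen["id"])

class WebsiteUser(HttpLocust):
    task_set = BrowseFlow
    wait_time = between(1, 5)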