How to print python logger info level - python-3.7

I want to run a python3.7 script on the command line and have logger.info() messages show up on stdout the same way logger.warning() does.
Here's my code:
import logging
logger = logging.getLogger(__name__)
print(logger.isEnabledFor(logging.INFO))
logger.setLevel(logging.INFO)
print(logger.isEnabledFor(logging.INFO))
logger.info('my info message')
logger.warn('my warning message')
Expected output:
False
True
my info message
my warning message
Actual output:
False
True
my warning message
example: https://repl.it/repls/UnknownWideeyedPhase

In the example above you need to define a StreamHandler to tell the logger where to send the logs. If you want to send them to stdout (console output), just add
import sys
logging.basicConfig(stream=sys.stdout)
For further details on logging configuration, I suggest reading the documentation.
P.S.: you should use logger.warning, as logger.warn is deprecated.
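Putting the two pieces together, a minimal sketch of the corrected script (note that the default root handler formats each line as LEVEL:name:message):
import logging
import sys

# attach a StreamHandler on stdout to the root logger
logging.basicConfig(stream=sys.stdout)

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.info('my info message')        # now propagates to the root handler
logger.warning('my warning message')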

This problem was resolved in Logging setLevel is being ignored. Following that resolution:
import logging
logger = logging.getLogger(__name__)
print(logger.isEnabledFor(logging.INFO))
logging.basicConfig(level=logging.INFO, format='%(message)s')
print(logger.isEnabledFor(logging.INFO))
logger.info('my info message')
logger.warning('my warning message')
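Run on the command line, this should now print:
False
True
my info message
my warning message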

Related

Redirecting Logs to a File in Scala

I am new to Scala and I am struggling to find out how to redirect my logs to a file in Scala. This is a simple task in Python, but I can't find the relevant documentation for Scala. I am trying to use log4j, but I don't mind using other packages either. All references I find discuss how to do this through a configuration file, but I would like to do it programmatically.
This is what I have found so far and it works, but I do not know how to add a file. I think FileAppender should solve my problem, but I can't find an example of how to add it to my logger:
import org.apache.log4j.Logger
val logger = Logger.getLogger("My Logger")
logger.info("I am a log message")
What I wish to achieve (with some extra details) can be written in Python as follows:
import logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
handler = logging.FileHandler('output.log')
handler.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("I am a log message")
Link from comment translated to Scala:
import org.apache.log4j.PatternLayout
import org.apache.log4j.{Level, Logger, FileAppender}
val fa = new FileAppender
fa.setName("FileLogger")
fa.setFile("mylog.log")
fa.setLayout(new PatternLayout("%d %-5p [%c{1}] %m%n"))
fa.setThreshold(Level.DEBUG)
fa.setAppend(true)
fa.activateOptions()
//add appender to any Logger (here is root)
Logger.getRootLogger.addAppender(fa)
// usage
val logger = Logger.getLogger("My Logger")
logger.info("I am a log message")
If " import org.apache.log4j.{Level, Logger, FileAppender}" is not worked, means, log4j libraries absent in classpath.

How do I write messages to the output log on AWS Glue?

AWS Glue jobs log output and errors to two different CloudWatch logs, /aws-glue/jobs/error and /aws-glue/jobs/output by default. When I include print() statements in my scripts for debugging, they get written to the error log (/aws-glue/jobs/error).
I have tried using:
log4jLogger = sparkContext._jvm.org.apache.log4j
log = log4jLogger.LogManager.getLogger(__name__)
log.warn("Hello World!")
but "Hello World!" doesn't show up in either of the logs for the test job I ran.
Does anyone know how to go about writing debug log statements to the output log (/aws-glue/jobs/output)?
TIA!
EDIT:
It turns out the above actually does work. What was happening was that I was running the job in the AWS Glue script editor window, which captures Command-F key combinations and only searches within the current script. So when I tried to search within the page for the logging output, it seemed as if it hadn't been logged.
NOTE: I did discover through testing the first responder's suggestion that AWS Glue scripts don't seem to output any log message with a level less than WARN!
Try the built-in Python logger from the logging module; by default it writes messages to the standard error stream.
import logging
MSG_FORMAT = '%(asctime)s %(levelname)s %(name)s: %(message)s'
DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'
logging.basicConfig(format=MSG_FORMAT, datefmt=DATETIME_FORMAT)
logger = logging.getLogger("<logger-name-here>")
logger.setLevel(logging.INFO)
...
logger.info("Test log message")
I know this question is not new, but maybe it will be helpful for someone:
For me, logging in Glue works with the following lines of code:
from awsglue.context import GlueContext
from pyspark.context import SparkContext

# create the Glue context
sc = SparkContext()
glueContext = GlueContext(sc)
# get Glue's logger
logger = glueContext.get_logger()
...
# write into the log with:
logger.info("s3_key:" + your_value)
I noticed the above answers are written in Python. For Scala you could do the following:
import com.amazonaws.services.glue.log.GlueLogger
object GlueApp {
  def main(sysArgs: Array[String]) {
    val logger = new GlueLogger
    logger.info("info message")
    logger.warn("warn message")
    logger.error("error message")
  }
}
You can find both the Python and Scala solutions in the official documentation here.
Just in case this helps: this works to change the log level.
sc = SparkContext()
sc.setLogLevel('DEBUG')
glueContext = GlueContext(sc)
logger = glueContext.get_logger()
logger.info('Hello Glue')
This worked for INFO level in a Glue Python job:
import logging
import sys
root = logging.getLogger()
root.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
root.addHandler(handler)
root.info("check")
I faced the same problem. I resolved it by adding
logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))
Before that there were no prints at all, not even at ERROR level.
The idea was taken from here
https://medium.com/tieto-developers/how-to-do-application-logging-in-aws-745114ac6eb7
Another option would be to log to stdout and point AWS Glue's logging at stdout (using stdout is actually one of the best practices in cloud logging).
Update: it only works with setLevel("WARNING"), and only ERROR and WARNING messages show up. I didn't find how to make it work for the INFO level :(
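For context, here is that one-liner with the imports it assumes (a minimal sketch):
import logging
import sys

# attach a stdout handler to the root logger so log records go to stdout
logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))

logging.warning("this message goes to stdout")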
If you're just debugging, print() (Python) or println() (Scala) works just fine.

What is considered defaults for celery's app.conf.humanize(with_defaults=False)?

I'm trying to print out celery's configuration using app.conf.humanize(with_defaults=False), following the example in the user guide. But I always get an empty string when using with_defaults=False; I know the configuration changes are in effect because I can see them using .humanize(with_defaults=True) instead.
I'm guessing that adding configuration with app.config_from_object('myconfig') is loading the configuration settings as "defaults", so is there a way to load the config from module myconfig in a way that is not treated as defaults?
This is my source code:
#myconfig.py
worker_redirect_stdouts_level='INFO'
imports = ('tasks',)
and
#tasks.py
from celery import Celery
app = Celery()
app.config_from_object('myconfig')
print "config: %s" % app.conf.humanize(with_defaults=False)
@app.task
def debug(*args, **kwargs):
print "debug task args : %r" % (args,)
print "debug task kwargs: %r" % (kwargs,)
I start celery using env PYTHONPATH=. celery worker --loglevel=INFO and it prints config: (if I change to with_defaults=True I get the expected full output).
The configuration loaded with config_from_object() or config_from_envvar() is not considered defaults.
The behaviour observed was due to a bug, fixed by this commit in response to my bug report, so future versions of celery will work as expected:
from celery import Celery
app = Celery()
app.config_from_object('myconfig')
app.conf.humanize() # returns only settings directly set by 'myconfig' omitting defaults
where myconfig is a python module in the PYTHONPATH:
#myconfig.py
worker_redirect_stdouts_level='DEBUG'

How do I suppress the bloat of useless information when using the DUMP command while using grunt via 'pig -x local'?

I'm working with Pig Latin, using grunt, and every time I dump stuff, my console gets clobbered with blah, blah, blah non-info. Is there a way to suppress all that?
grunt> A = LOAD 'testingData' USING PigStorage(':'); dump A;
2013-05-06 19:42:04,146 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2013-05-06 19:42:04,147 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
...
...
--- another like 50 lines of useless context clobbering junk here... till ---
...
...
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
now the like 4 lines of info I'm actually looking for:
(daemon,*,1,1,System Services,/var/root,/usr/bin/false)
(uucp,*,,,/var/spool/uucp,/usr/sbin/uucico)
(taskgated,*,13,13,Task Gate Daemon,/var/empty,/usr/bin/false)
(networkd,*,24,24,Network Services,/var/empty,/usr/bin/false)
(installassistant,*,25,25,/usr/bin/false)
grunt>
---> obviously if it errors, fine, lots of info is helpful, but not when it basically works great.
You need to set the log4j properties.
For example:
In $PIG_HOME/conf/pig.properties, enable (uncomment) this line:
log4jconf=./conf/log4j.properties
Then rename log4j.properties.template to log4j.properties.
In log4j.properties, set the Pig logger from info to error:
log4j.logger.org.apache.pig=error, A
You may also set the Hadoop related logging level as well:
log4j.logger.org.apache.hadoop = error, A
An easy way to do this seems to be to redirect standard error as below.
But it will suppress all errors.
pig -x local 2> /dev/null
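If you would rather keep the errors than discard them, redirect standard error to a file instead:
pig -x local 2> pig_errors.log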
I also found that if you remove or rename your Hadoop install directory, basically making it inaccessible to Pig, then all those INFO messages go away. Changing logging levels in Hadoop didn't help, just so you know.
When you start pig, pass it a log4j.properties file with pig -4 <filename>.
In my case there was a log4j.properties file in the conf directory, and setting the level of the logger named org.apache.pig to ERROR was sufficient to make the logger less verbose:
log4j.logger.org.apache.pig=ERROR, A
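For reference, a minimal log4j.properties along these lines might look like the following (a sketch; the appender named A and its console target are assumptions carried over from the examples above):
# define appender A as a console appender
log4j.appender.A=org.apache.log4j.ConsoleAppender
log4j.appender.A.layout=org.apache.log4j.PatternLayout
log4j.appender.A.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
# only show ERROR and above from Pig
log4j.logger.org.apache.pig=ERROR, A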
Pig has a debug log level; you need to set it in the pig.properties file:
# Logging level. debug=OFF|ERROR|WARN|INFO|DEBUG (default: INFO)
#
# debug=INFO
That default of INFO is the reason you get large logs on the console; change it to ERROR, e.g. debug=ERROR.

Log IPython output?

Is there any way to make IPython's logging capability include output as well as input?
This is what a log file looks like currently:
#!/usr/bin/env python
# 2012-08-06.py
# IPython automatic logging file
# 12:02
# =================================
print "test"
I'd like to have one more line show up:
#!/usr/bin/env python
# 2012-08-06.py
# IPython automatic logging file
# 12:02
# =================================
print "test"
# test
(the # is because I assume that is needed to prevent breaking IPython's logplay feature)
I suppose this is possible using IPython notebooks, but on at least one machine I need this for, I'm limited to ipython 0.10.2.
EDIT: I'd like to know how to set this up automatically, i.e. within the configuration file. Right now my config looks like
from time import strftime
import os
logfilename = strftime('ipython_log_%Y-%m-%d')+".py"
logfilepath = "%s/%s" % (os.getcwd(),logfilename)
file_handle = open(logfilepath,'a')
file_handle.write('########################################################\n')
out_str = '# Started Logging At: '+ strftime('%Y-%m-%d %H:%M:%S\n')
file_handle.write(out_str)
file_handle.write('########################################################\n')
file_handle.close()
c.TerminalInteractiveShell.logappend = logfilepath
c.TerminalInteractiveShell.logstart = True
but specifying c.TerminalInteractiveShell.log_output = True seems to have no effect.
There's the -o option for %logstart:
-o: log also IPython's output. In this mode, all commands which
generate an Out[NN] prompt are recorded to the logfile, right after
their corresponding input line. The output lines are always
prepended with a '#[Out]# ' marker, so that the log remains valid
Python code.
ADDENDUM: If you are in an interactive ipython session for which logging has already been started, you must first stop logging and then restart:
In [1]: %logstop
In [2]: %logstart -o
Activating auto-logging. Current session state plus future input saved.
Filename : ./ipython.py
Mode : backup
Output logging : True
Raw input log : False
Timestamping : False
State : active
Observe that, after the restart, "Output Logging" is now "True".
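As for doing this from the configuration file (the EDIT above): in newer IPython versions one workaround is to run the magic itself at startup via exec_lines. A sketch, not tested on 0.10.2, whose config system is different:
# in ipython_config.py: start output logging automatically
from time import strftime
logfilename = strftime('ipython_log_%Y-%m-%d') + '.py'
c.InteractiveShellApp.exec_lines = ['%%logstart -o %s append' % logfilename]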