Just found out that Tornado was originally open-sourced by Facebook.
A brief introduction to logging and how to use it
Python's built-in logging module is used to record logs with formatted output. The logger obtained via getLogger() is a singleton and thread-safe (though not process-safe). Below is a short introduction to logging's common methods, a quick look at the logging source code, and Tornado's logging module.
basicConfig() applies basic configuration to the root logger. Besides the level and filename parameters, it also accepts filemode (default 'a'), format (the output format), and handlers (handlers to attach to the root logger).
import logging
logging.basicConfig(level=logging.DEBUG, filename='main.log')
logging.debug('bug log')
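As a minimal sketch of the other basicConfig() parameters mentioned above (assuming Python 3.3+ for the handlers argument; the format string and file name are only examples):
import logging

# Attach two handlers to the root logger and give both the same format.
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(name)s %(levelname)s %(message)s',
    handlers=[logging.StreamHandler(), logging.FileHandler('main.log')],
)
logging.debug('bug log')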
logging defines the following log levels:
CRITICAL = 50
FATAL = CRITICAL
ERROR = 40
WARNING = 30
WARN = WARNING
INFO = 20
DEBUG = 10
NOTSET = 0
Module-level functions such as logging.info() actually call the root logger's info() method.
A logger's corresponding info, debug, warning, etc. methods emit log records, and a record is only output when the method's level is >= the configured level (e.g. with level=logging.INFO, logging.error('error') produces output while logging.debug('debug') does not).
It is also worth mentioning logging.exception(), which records traceback information with minimal effort (it is essentially equivalent to logging.error(msg, exc_info=True)). All of these output methods ultimately call the logger's _log() method.
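For example, a minimal sketch of logging.exception() inside an except block (the division error is only for illustration):
import logging

try:
    1 / 0
except ZeroDivisionError:
    # Logged at ERROR level; the current traceback is appended automatically.
    logging.exception('division failed')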
getLogger()
import logging
root_logger = logging.getLogger()
mine_logger = logging.getLogger('mine')
When the name argument is omitted, getLogger() returns root_logger, the ultimate parent of all loggers. Without any configuration, root_logger's level is WARNING.
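A quick demonstration of that default (a sketch that assumes a fresh process with no prior logging configuration):
import logging

root_logger = logging.getLogger()
root_logger.info('not shown')   # below the default WARNING level, so no output
root_logger.warning('shown')    # written to stderr by the handler of last resort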
Reading the getLogger() source reveals:
root = RootLogger(WARNING)
Logger.root = root
Logger.manager = Manager(Logger.root)
def getLogger(name=None):
    """
    Return a logger with the specified name, creating it if necessary.
    If no name is specified, return the root logger.
    """
    if name:
        return Logger.manager.getLogger(name)
    else:
        return root
So calling logging.getLogger('mine') means Logger.manager.getLogger('mine'), i.e. Manager(Logger.root).getLogger('mine'). Now let's look at Manager's getLogger().
def getLogger(self, name):
    """
    Get a logger with the specified name (channel name), creating it
    if it doesn't yet exist. This name is a dot-separated hierarchical
    name, such as "a", "a.b", "a.b.c" or similar.
    If a PlaceHolder existed for the specified name [i.e. the logger
    didn't exist but a child of it did], replace it with the created
    logger and fix up the parent/child references which pointed to the
    placeholder to now point to the logger.
    """
    rv = None
    if not isinstance(name, str):
        raise TypeError('A logger name must be a string')
    _acquireLock()
    try:
        if name in self.loggerDict:
            rv = self.loggerDict[name]
            if isinstance(rv, PlaceHolder):
                ph = rv
                rv = (self.loggerClass or _loggerClass)(name)
                rv.manager = self
                self.loggerDict[name] = rv
                self._fixupChildren(ph, rv)
                self._fixupParents(rv)
        else:
            rv = (self.loggerClass or _loggerClass)(name)
            rv.manager = self
            self.loggerDict[name] = rv
            self._fixupParents(rv)
    finally:
        _releaseLock()
    return rv
getLogger() first looks the name up in the Manager's loggerDict and only creates a new logger when it isn't found there. Through loggerDict and PlaceHolder, dotted names like a, a.b, a.b.c form a hierarchy in which child loggers inherit level, handlers, etc. from their parent loggers, ultimately falling back to the root logger.
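The hierarchy is easy to observe directly; a small sketch (the names 'a' and 'a.b' are arbitrary):
import logging

parent = logging.getLogger('a')
child = logging.getLogger('a.b')
parent.setLevel(logging.ERROR)

print(child.parent is parent)                  # True: 'a.b' hangs under 'a'
print(parent.parent is logging.getLogger())    # True: 'a' hangs under the root logger
print(child.getEffectiveLevel())               # 40: 'a.b' has no level of its own, so it inherits ERROR from 'a'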
Handlers, Formatter
Each logger can have multiple handlers (the default is a StreamHandler), and handlers can be added and removed with addHandler() and removeHandler().
import logging
mine_logger = logging.getLogger('mine')
mine_logger.setLevel(logging.INFO)
console_handler = logging.StreamHandler()
file_handler = logging.FileHandler('./logs/test.log')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)
mine_logger.addHandler(console_handler)
mine_logger.addHandler(file_handler)
mine_logger.info('info message')
Here console_handler and file_handler are created and attached to mine_logger with addHandler(), so mine_logger.info('info message') is both written to the file and printed to the console.
PS: logging.Formatter can be used to format the output, but each handler can only have one formatter.
Advanced handlers
The logging.handlers package provides many powerful ready-made handlers.
RotatingFileHandler: rotates logs by size.
TimedRotatingFileHandler: rotates logs by time.
There are also MemoryHandler, SocketHandler, and so on.
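A sketch of TimedRotatingFileHandler (the path and rotation settings are placeholders, and the logs directory must already exist):
import logging
from logging.handlers import TimedRotatingFileHandler

rotating_logger = logging.getLogger('rotating')
rotating_logger.setLevel(logging.INFO)

# Roll the file over at midnight and keep the last 7 files.
handler = TimedRotatingFileHandler('./logs/app.log', when='midnight', interval=1, backupCount=7)
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
rotating_logger.addHandler(handler)

rotating_logger.info('rotated log message')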
config
A configuration file makes it easy to set up logging quickly.
[loggers]
keys=root, common
[logger_root]
level=INFO
handlers=root_handler
[logger_common]
level=DEBUG
handlers=common_handler
qualname=common
[handlers]
keys=root_handler,common_handler
[handler_root_handler]
class=StreamHandler
level=INFO
formatter=form01
args=(sys.stdout,)
[handler_common_handler]
class=FileHandler
level=DEBUG
formatter=form01
args=('main.log',"a")
[formatters]
keys=form01
[formatter_form01]
format=[%(asctime)s %(levelname)s] file: %(filename)s, line:%(lineno)d %(process)d %(message)s
datefmt=%Y-%m-%d %H:%M:%S
The config file defines loggers, handlers, formatters, and so on. In the configuration above, the handlers all refer to handlers shipped with the logging module, but they can also refer to custom handlers.
Use logging.config to load the configuration file.
import logging
import logging.config
logging_config_path = 'logging.conf'
logging.config.fileConfig(logging_config_path)
root_logger = logging.getLogger('root')
root_logger.info('test message')
common_logger = logging.getLogger('common')
common_logger.info('test common message')
A brief look at logging in Tornado
Tornado's log.py module initializes three loggers, all of which descend from the logging module's root logger.
'''Tornado uses three logger streams:
* ``tornado.access``: Per-request logging for Tornado's HTTP servers (and
potentially other servers in the future)
* ``tornado.application``: Logging of errors from application code (i.e.
uncaught exceptions from callbacks)
* ``tornado.general``: General-purpose logging, including any errors
or warnings from Tornado itself.
'''
# Logger objects for internal tornado use
access_log = logging.getLogger("tornado.access")
app_log = logging.getLogger("tornado.application")
gen_log = logging.getLogger("tornado.general")
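Application code can use these loggers directly; a minimal sketch:
from tornado.log import app_log, gen_log

app_log.error('error from application code')
gen_log.warning('general warning')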
Before starting Tornado, call tornado.options.parse_command_line() to initialize some configuration.
By default, tornado.options.parse_command_line() calls enable_pretty_logging(), which formats the log output, sets up rotation, and writes it to log files. The relevant parameters are defined with tornado.options.define() in tornado/log.py and can be set on the command line or directly on tornado.options.options.
def enable_pretty_logging(options=None, logger=None):
    """Turns on formatted logging output as configured.
    This is called automatically by `tornado.options.parse_command_line`
    and `tornado.options.parse_config_file`.
    """
    if options is None:
        import tornado.options
        options = tornado.options.options
    if options.logging is None or options.logging.lower() == 'none':
        return
    if logger is None:
        logger = logging.getLogger()
    logger.setLevel(getattr(logging, options.logging.upper()))
    if options.log_file_prefix:
        rotate_mode = options.log_rotate_mode
        if rotate_mode == 'size':
            channel = logging.handlers.RotatingFileHandler(
                filename=options.log_file_prefix,
                maxBytes=options.log_file_max_size,
                backupCount=options.log_file_num_backups)
        elif rotate_mode == 'time':
            channel = logging.handlers.TimedRotatingFileHandler(
                filename=options.log_file_prefix,
                when=options.log_rotate_when,
                interval=options.log_rotate_interval,
                backupCount=options.log_file_num_backups)
        else:
            error_message = 'The value of log_rotate_mode option should be ' +\
                '"size" or "time", not "%s".' % rotate_mode
            raise ValueError(error_message)
        channel.setFormatter(LogFormatter(color=False))
        logger.addHandler(channel)
    if (options.log_to_stderr or
            (options.log_to_stderr is None and not logger.handlers)):
        # Set up color if we are in a tty and curses is installed
        channel = logging.StreamHandler()
        channel.setFormatter(LogFormatter())
        logger.addHandler(channel)
For example, to rotate Tornado's logs once a day and write them to a file (these log options are already defined by tornado.log, so we set their values instead of calling tornado.options.define() again):
from tornado.options import options, parse_command_line

options.log_file_prefix = './logs/test.log'
options.log_rotate_mode = 'time'
options.log_rotate_when = 'D'
options.log_rotate_interval = 1
parse_command_line()
More parameters can be found in the source:
options.define("logging", default="info",
               help=("Set the Python log level. If 'none', tornado won't touch the "
                     "logging configuration."),
               metavar="debug|info|warning|error|none")
options.define("log_to_stderr", type=bool, default=None,
               help=("Send log output to stderr (colorized if possible). "
                     "By default use stderr if --log_file_prefix is not set and "
                     "no other logging is configured."))
options.define("log_file_prefix", type=str, default=None, metavar="PATH",
               help=("Path prefix for log files. "
                     "Note that if you are running multiple tornado processes, "
                     "log_file_prefix must be different for each of them (e.g. "
                     "include the port number)"))
options.define("log_file_max_size", type=int, default=100 * 1000 * 1000,
               help="max size of log files before rollover")
options.define("log_file_num_backups", type=int, default=10,
               help="number of log files to keep")
options.define("log_rotate_when", type=str, default='midnight',
               help=("specify the type of TimedRotatingFileHandler interval "
                     "other options:('S', 'M', 'H', 'D', 'W0'-'W6')"))
options.define("log_rotate_interval", type=int, default=1,
               help="The interval value of timed rotating")
options.define("log_rotate_mode", type=str, default='size',
               help="The mode of rotating files(time or size)")
As for access_log, once initialized Tornado records information about every request, and that logging happens in the handler.
RequestHandler's finish() calls the _log() method, which in turn calls log_request() on the Application the handler belongs to:
def log_request(self, handler):
    """Writes a completed HTTP request to the logs.
    By default writes to the python root logger. To change
    this behavior either subclass Application and override this method,
    or pass a function in the application settings dictionary as
    ``log_function``.
    """
    if "log_function" in self.settings:
        self.settings["log_function"](handler)
        return
    if handler.get_status() < 400:
        log_method = access_log.info
    elif handler.get_status() < 500:
        log_method = access_log.warning
    else:
        log_method = access_log.error
    request_time = 1000.0 * handler.request.request_time()
    log_method("%d %s %.2fms", handler.get_status(),
               handler._request_summary(), request_time)
It is easy to see that the log level varies with the response status code. Besides the status code, the request time, IP, URI, and HTTP method are also recorded. To make a simple change to the output format, subclass RequestHandler and override the _request_summary() method.
def _request_summary(self):
    return "%s %s (%s)" % (self.request.method, self.request.uri,
                           self.request.remote_ip)
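For example, a sketch of such a subclass (MyHandler is a hypothetical name) that appends the User-Agent header to the summary:
import tornado.web

class MyHandler(tornado.web.RequestHandler):
    def _request_summary(self):
        # Same fields as the default summary, plus the User-Agent header.
        return "%s %s (%s) %s" % (self.request.method, self.request.uri,
                                  self.request.remote_ip,
                                  self.request.headers.get('User-Agent', '-'))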
PS: Tornado's logging can also be configured through a config file.
The logging module is thread-safe, but problems can arise with multiple processes; for multi-process approaches, see: