calliope-project / calliope

A multi-scale energy systems modelling framework

Home Page: https://www.callio.pe

Examples for customizing logging destinations would help

jnnr opened this issue

Problem description

When running a big model, one may want to direct different logging levels to different destinations, e.g.:

  • write info-level messages to the terminal,
  • write debug-level messages to a file,
  • optionally, write debug output from different modules to different files.

This can be done in run scripts. Some examples of how to do this could help.

I prepared some examples (some bits of them are below). Where does it make the most sense to put them?

# main.py
import logging

from module import function

logger = logging.getLogger()  # Using the root logger here
logger.setLevel(logging.DEBUG)  # Set the logger's level to the lowest, otherwise messages are filtered out before the handlers see them.

# Define a formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# Add a ConsoleHandler
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.ERROR)
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)

# Add a FileHandler
LOGPATH = "main.log"
file_handler = logging.FileHandler(LOGPATH)
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)


if __name__ == "__main__":
    logger.error("Application error")
    function()

# module.py
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

def function():
    logger.debug("debug")
    logger.info("info")
    logger.warning("warning")
    logger.error("error")
    logger.critical("critical")

Thanks for the examples @jnnr. The bits you provided are quite generalised and not worth putting in Calliope docs by themselves (one could equally link to various logging tutorials). Are the examples you have put together more specific to the Calliope loggers?

Looking into what we have in logging.py, our logging probably needs a bit of a rethink, but generally you can set the root logger (and its children) to a specific stdout level, with some string formatting, using calliope.set_log_verbosity. We then have various loggers around the place in Calliope which a user could access directly and attach handlers to for their needs, e.g. the logger at "calliope.backend.pyomo.model" used to deal with the output from the solver (that will now happen in "calliope.backend.backends", and we probably need a logger per backend model).

So my sense is that we only need a basic example of adding a file/stream handler to a module-level logger, and then a list of the calliope module-level loggers, noting which part of the modelling process each produces messages for.
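For instance, something like this (just a sketch: the set_log_verbosity keyword arguments and the logger name "calliope.backend.pyomo.model" are taken from the discussion above and may differ between Calliope versions):

import logging

import calliope

# Set the stdout verbosity for the calliope loggers.
calliope.set_log_verbosity("info", include_solver_output=True)

# Attach a file handler to one specific calliope module-level logger,
# so that its debug output ends up in a dedicated file.
backend_logger = logging.getLogger("calliope.backend.pyomo.model")
backend_logger.setLevel(logging.DEBUG)

file_handler = logging.FileHandler("solver.log")
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(
    logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)
backend_logger.addHandler(file_handler)

# The model is then built and solved as usual, as in the examples above.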

As to where to put it in the docs, I would say in troubleshooting.

Thanks for your comments! This is an updated version which I propose to add to the docs under troubleshooting.

I had a look at the logging in the modules and in calliope.core.util.logging but did not come to a final conclusion about what to do with it. If you like, let's talk about that again.

import calliope

import logging

logger = logging.getLogger()  # Using the root logger here
# Set the logger's level to lowest to include all messages.
# If we do not do this, messages with lower levels will not be collected.
# The handlers introduced further below will have their own levels.
logger.setLevel(logging.DEBUG)

# Define a formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# Add a ConsoleHandler
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)  # In this example, we want to see messages of level INFO and above in the console
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)

# Add a FileHandler
LOGPATH = "example_national_scale.log"
file_handler = logging.FileHandler(LOGPATH)
file_handler.setLevel(logging.DEBUG)  # We want to include all messages in the log written to file.
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

logger.info("Loading the national-scale example model")  # You can use logging in your scripts
m = calliope.examples.national_scale()

m.build()

m.solve()

This is a second example that sends logs from different modules to different files. The number of files produced is close to 100, many of them empty.

import calliope

import logging

logger = logging.getLogger()  # Using the root logger here
# Set the logger's level to lowest to include all messages.
# If we do not do this, messages with lower levels will not be collected.
# The handlers introduced further below will have their own levels.
logger.setLevel(logging.DEBUG)

# Define a formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# Add a ConsoleHandler
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)  # In this example, we want to see messages of level INFO and above in the console
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)

# Add FileHandlers per logger
loggers = [logging.getLogger(name) for name in logging.root.manager.loggerDict]
for module_logger in loggers:  # Use a separate loop variable so the root `logger` defined above is not overwritten
    LOGPATH = f"{module_logger.name}.log"
    file_handler = logging.FileHandler(LOGPATH)
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(formatter)
    module_logger.addHandler(file_handler)

logger.info("Loading the national-scale example model")  # You can use logging in your scripts
m = calliope.examples.national_scale()

m.build()

m.solve()

I still think this example could be more calliope-specific. Rather than producing ~100 files (albeit many of them empty), one could target a specific calliope logger and dump it to file at a given log level. We can then document what each calliope logger captures (e.g., preprocessing, math parsing, model solving, ...). I'll work on an update to your example to demonstrate this.
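Something roughly along these lines (only a sketch: the mapping of steps to logger names below, e.g. "calliope.preprocess" and "calliope.backend", is an assumption for illustration and would need to match the actual calliope module-level loggers we document):

import logging

import calliope

formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

# Hypothetical mapping of modelling steps to calliope logger names;
# each step gets its own log file instead of one file per module.
step_loggers = {
    "preprocessing": "calliope.preprocess",
    "backend": "calliope.backend",
}

for step, logger_name in step_loggers.items():
    step_logger = logging.getLogger(logger_name)
    step_logger.setLevel(logging.DEBUG)
    file_handler = logging.FileHandler(f"{step}.log")
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(formatter)
    step_logger.addHandler(file_handler)

m = calliope.examples.national_scale()
m.build()
m.solve()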

If I understand you correctly, you want different loggers not for each module but for each processing step. Sounds useful!

Closed as fixed in #492. @jnnr, when you have the chance, could you check that it provides sufficient guidance (mainly the new Jupyter notebook)?