aio-libs / aiohttp-devtools

dev tools for aiohttp

reload not working on non-standard project structure

idwaker opened this issue · comments

  • aiohttp-devtools version:
    0.10.3

  • aiohttp version:
    3.4.4

  • python version:
    3.6

  • Platform:
    Solus Linux

Issue Summary

I am using a DDD structure for my current project, with the application and the web API entry point in separate directories. devtools doesn't use the project root as the watched folder; it uses app_path, so in this case it can't watch changes to files inside the application directory.

Steps to reproduce

  1. Create the app entry point in a directory separate from the application directory:
  • webapp/
    • webapi/
      • main.py [ application entry point ]
    • application/
      • ... [ actual application code ]
  2. Run the dev server:

    $ export AIO_APP_PATH=webapi/
    $ adev runserver

  3. Make any change to a file inside webapi/: reload works as expected

  4. Make any change to a file inside application/: reload doesn't happen

Edit: Currently I am using runserver by directly editing runserver/config.py.

Just ran across this too; if --root is provided, it should be used as the base directory to watch.

commented

I have an application that monitors a system and runs aiohttp at the same time.
The project structure is like this:

root
 |- __main__.py
 |- service.py
 |- container.py
 |- worker.py
 |- http
    |- app.py
    |- action.py
    |- asset
      |- *.js
      |- *.css
    |- view
      |- *.j2

__main__.py will run everything registered on the event loop (not only aiohttp).

app.py

from aiohttp import web
from aiohttp_devtools import runserver

app = web.Application()
config = runserver.config.Config(
    app_path="root/http",
    root_path="root",
    livereload=True,
)
runserver.serve.modify_main_app(app, config)

worker.py

from aiohttp import web

from app import app  # the aiohttp app configured in http/app.py above

async def httpserver():
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, host, port)  # host and port are defined elsewhere
    await site.start()

async def other_long_task():
    await long_task()  # long_task() is defined elsewhere

__main__.py

import asyncio

from worker import httpserver, other_long_task

loop = asyncio.get_event_loop()
asyncio.ensure_future(httpserver())
asyncio.ensure_future(other_long_task())
loop.run_forever()

With this approach, all modified CSS, JS, or templates will be updated on the next reload,
but if I change a Python file such as action.py or service.py it will not update.
Is there a way to include Python files?

from aiohttp_devtools import runserver
runserver.serve.modify_main_app(app, config)

I don't think devtools was ever designed to be run like this. I think you should really just be using the commands (i.e. adev runserver) to run the app as per the README.

The aiohttp app should then be the single point of entry and manage everything else. For example, I have a 2nd (non-aiohttp) server in my project and for local development I run that in another process with code like:

async def run_server(_app):
    # path points at the command/script for the second server (defined elsewhere)
    proc = await asyncio.create_subprocess_exec(path)

    yield

    if proc.returncode is None:
        proc.terminate()
    await proc.wait()

app.cleanup_ctx.append(run_server)

Structuring your code this way will make it a lot more robust and will work much better with tools like aiohttp-devtools.
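
To connect this to the adev workflow, here is a minimal factory sketch (the create_app name and layout are illustrative, not from this thread) showing where the app object that registers run_server would live:

from aiohttp import web


def create_app() -> web.Application:
    app = web.Application()
    # run_server is the cleanup context from the snippet above: the second
    # server starts when this app starts and is stopped again on cleanup.
    app.cleanup_ctx.append(run_server)
    return app

As noted further down the thread, devtools can load a create_app factory like this for local development.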

commented

OK, got it. It actually works; I just needed to tweak some of the code.
By the way, my logging does not appear in the console; only logs from devtools appear. How do I set up the logger?

I have the logging setup in __main__.py:

import logging
import sys

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s : %(message)s",
    handlers=[
        logging.StreamHandler(sys.stdout)
    ]
)

log = logging.getLogger("main")
# this doesn't appear even when using verbose
log.info("test")

commented

Never mind, it already works. I needed to define it inside the app.

Yep, devtools will be loading the app from app.py. __main__.py will never be run.

Personally, I use __main__.py as the entry point for production (run with python -m project) and app.py:create_app for devtools, which sets up several extra things for local dev.
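
To illustrate that split with the logging question above (the module and function names here are just examples, following what the commenter found to work): configuring logging in the module that devtools actually loads makes it show up in the adev console.

# app.py -- loaded by adev runserver, so configuration here takes effect
import logging
import sys

from aiohttp import web

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s : %(message)s",
    handlers=[logging.StreamHandler(sys.stdout)],
)

log = logging.getLogger("main")


def create_app() -> web.Application:
    app = web.Application()
    log.info("creating app")  # now visible alongside the devtools output
    return app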

commented

Working example of combining aiohttp with other asyncio operations:

project
  |- project
     |- app.py
     |- __main__.py
     |- __init__.py
     |- service.py
     |- container.py
     |- worker.py
     |- http
        |- app.py
        |- action.py
        |- asset
           |- *.js
           |- *.css
        |- view
           |- *.j2

http/app.py

from aiohttp import web

app = web.Application()
# any aiohttp setup here, e.g. routes, middleware, etc.

worker.py

import asyncio

from aiohttp import web

from http.app import app  # the aiohttp app defined in http/app.py

async def httpserver():
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, host, port)  # host and port are defined elsewhere
    await site.start()

async def other_long_task():
    while True:
        await long_task()  # long_task() is defined elsewhere
        await asyncio.sleep(1)

__main__.py
run using "python -m project"

import asyncio

from worker import httpserver, other_long_task

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    asyncio.ensure_future(httpserver())
    asyncio.ensure_future(other_long_task())
    loop.run_forever()

app.py
run using "adev runserver project"

import asyncio

from http.app import app as web_app
from worker import other_long_task

def app():
    # Factory loaded by adev: schedule the extra task, then return the aiohttp app.
    asyncio.ensure_future(other_long_task())
    return web_app

That's still less robust than the approach I described. We also generally discourage using globals like that (if you look at the demos/tutorials for aiohttp, the app object is always local to a function, never a global).

My suggestion would look roughly like this:

__main__.py:

from aiohttp import web

from my_app.app import init_app

def main() -> None:
    """Run the server in production environment."""
    app = init_app()
    web.run_app(app)


if __name__ == "__main__":
    main()

app.py:

from aiohttp import web

# setup_routes(), setup_middlewares() and run_other_task() are defined elsewhere in the project.

def init_app() -> web.Application:
    """Initialise the web app."""
    app = web.Application()

    setup_routes(app)
    setup_middlewares(app)
    app.cleanup_ctx.append(run_other_task)

    return app

async def create_app() -> web.Application:
    """Create the app for local use.

    Called when running locally through aiohttp-devtools.
    """
    from my_app import _localdev as dev

    app = init_app()
    dev.configure_app(app)

    return app

With run_other_task() being defined something like:

async def run_other_task(_app):
    task = asyncio.create_task(other_long_task())

    yield

    task.cancel()
    try:
        await task  # Ensure any exceptions etc. are raised.
    except asyncio.CancelledError:
        pass  # Expected: we just cancelled the task.

Your current implementation will be more brittle as the lifetime of the other task is not being controlled by the app. Using a cleanup_ctx like this ensures that the task is created and torn down along with the app. As mentioned before, the app should be handling the overall running of the program (using aiohttp.web.run_app() rather than messing with low-level asyncio details).
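
To make the cleanup_ctx mechanics concrete, here is a minimal self-contained sketch (names are illustrative): everything before the yield runs when the app starts, everything after it runs when the app shuts down, and web.run_app() owns the event loop, signal handling, and shutdown instead of a hand-written run_forever() loop.

import asyncio

from aiohttp import web


async def background_job(app: web.Application):
    # Startup: create the task alongside the app.
    task = asyncio.create_task(asyncio.sleep(3600))  # stand-in for real work

    yield  # the app serves requests while the task runs

    # Cleanup: tear the task down together with the app.
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass


def init() -> web.Application:
    app = web.Application()
    app.cleanup_ctx.append(background_job)
    return app


if __name__ == "__main__":
    web.run_app(init())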

commented

I separate it because I also have a case where aiohttp is optional (disabled web interface).

e.g. we sometimes use a --web option to enable the web GUI

So the web is not actually a first-class citizen here; the console app is.
I have a class that registers all the tasks and runs all the services on start;
on shutdown it waits for all the services to shut down.

Each service has its own start and shutdown mechanism. It feels more modular this way.
But I'll check the cleanup_ctx part.

I actually extracted the code from web.run_app(app) and put it in a separate module. It gives more flexibility for this scenario:

import asyncio
import logging
import sys

logger = logging.getLogger(__name__)

class Service:
    def __init__(self) -> None:
        self._services = []

    def start(self) -> None:
        loop = asyncio.get_event_loop()
        try:
            for d in self._services:
                d.start()
            loop.run_forever()
        except asyncio.CancelledError:
            logger.info("Received CancelledError")
        except KeyboardInterrupt:
            logger.info("Received KeyboardInterrupt")
        except Exception:
            logger.info(sys.exc_info())

        # Wait for every registered service to shut down before closing the loop.
        loop.run_until_complete(asyncio.wait([d.shutdown() for d in self._services]))
        loop.close()

I think, as mentioned above, the solution to the original issue is to use --root. Please reopen if that is not the case.

commented

I haven't tried the --root option; currently it works using the approach you mentioned, with a separate file.

I think there are 2 slightly different cases. The case for --root is, for example, when you essentially have 2 applications, so the entry point is in one directory, and the local code then spawns the other application in a new process. So, one project I have looks like:

root/
-- main_app/
-- auxiliary_app/

devtools runs main_app, and my local code spawns auxiliary_app in another process (they would be separate servers in production). By default, devtools would only watch for changes in main_app/, so in this case --root can be used to move the watched directory to root/, causing the reload to happen when either app is modified.
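
To make that concrete, here is a rough sketch of how the local code might spawn auxiliary_app in another process, following the run_server example earlier in the thread (the paths and names are illustrative); it would be registered on the main app with app.cleanup_ctx.append(run_auxiliary_app) in the dev-only setup:

import asyncio
import sys

from aiohttp import web


async def run_auxiliary_app(_app: web.Application):
    # Local development only: start the second server in its own process.
    proc = await asyncio.create_subprocess_exec(sys.executable, "auxiliary_app/main.py")

    yield  # main_app keeps serving while the subprocess runs

    # Stop the auxiliary server when the main app shuts down.
    if proc.returncode is None:
        proc.terminate()
    await proc.wait()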