mplanchard / pydecor

Easy peasy Python decorators


Followup to repair @export

Paebbels opened this issue

After I pushed my initial source in #9 for an export decorator, a portion of the code was changed in a way that creates several problems.

Original code for adding a declared entity to __all__:

    if hasattr(module, '__all__'):
        if entity.__name__ not in module.__all__:
            module.__all__.append(entity.__name__)
    else:
        module.__all__ = [entity.__name__]

After modification:

    if hasattr(module, "__all__"):
        if entity.__name__ not in module.__all__:  # type: ignore
            module.__all__ = module.__all__.__class__(  # type: ignore
                (*module.__all__, entity.__name__)  # type: ignore
            )
    else:
        module.__all__ = [entity.__name__]  # type: ignore

The differences are:

  • The original implementation works in place, appending items to an existing list or creating one if none exists.
  • The final implementation replaces the list with a new list on every call of the decorator.
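This behavioral difference shows up in object identity — a small illustrative snippet, independent of pydecor:

```python
# In-place append mutates the existing list object; the name keeps
# pointing at the same object.
all_inplace = ["a"]
ident = id(all_inplace)
all_inplace.append("b")
assert id(all_inplace) == ident

# Rebuilding creates a brand-new list and merely rebinds the name;
# any other name bound to the old list still sees the old contents.
all_rebuilt = ["a"]
ident = id(all_rebuilt)
all_rebuilt = list((*all_rebuilt, "b"))
assert id(all_rebuilt) != ident
```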

Drawback of the final implementation:

  1. When a reference to __all__ is taken, e.g. into an __api__ variable for documentation purposes, that reference goes stale, because:
    • the __api__ name still points at the old list object; the newly built list is bound only to __all__.
    • export never updates __api__.
  2. Replacing the list is very inefficient!
    • An in-place append inserts the item into preallocated slack space inside the list object and only triggers a resize occasionally.
    • The final implementation, on every call:
      1. Unpacks the existing list of n elements as arguments to a tuple constructor
      2. Calls the tuple constructor with n+1 arguments, allocating a new tuple object in memory
      3. Calls the list constructor with that tuple, which unpacks it again and allocates a new list
    • At least 2 new objects are created, and the n elements are copied at least twice.
    • This consumes a lot of memory until the garbage collector reclaims the intermediates.

Here is a comparison of the performance impact of the original vs. the final implementation:

    from timeit import timeit

    def original():
        l = []
        for i in range(1000):
            l.append(i)

    def final():
        l = []
        for i in range(1000):
            l = list((*l, i))

timeit(original)
44.279120899998816

timeit(final)
2816.1221756999985
Number of Elements →   10      100      500       1000      Time Complexity
timeit(original)       0.564   4.666    21.994    44.279    O(n)
timeit(final)          1.540   82.155   859.434   2816.122  O(n²)

Assuming O(n²): [chart: extrapolated runtime growth]
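The O(n) amortized cost of append comes from CPython's list over-allocation, which can be made visible with sys.getsizeof (this is a CPython implementation detail, not a language guarantee):

```python
import sys

# Track the allocated size of a growing list: it jumps occasionally and
# stays flat in between, because append() fills preallocated slack
# space most of the time.
sizes = []
l = []
for i in range(32):
    l.append(i)
    sizes.append(sys.getsizeof(l))

# Fewer distinct sizes than appends: most appends did not reallocate.
assert len(set(sizes)) < len(sizes)
```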

Other references: Python concatenation vs append speed on lists

@mplanchard any updates on this issue?

Hey @Paebbels, sorry for the radio silence on this. I've switched to writing Rust professionally, and I'm generally using non-python languages for my hobby projects these days, making it a bit harder to find the time to maintain projects like this. Would you be interested in taking this one over?

Hello @mplanchard, I think taking over the whole pydecor repository would be too much. I'm not familiar with all the details of the repo, nor with the techniques it uses, like tox, or how you set up the tests. Anyhow, I appreciate the offer.

I could imagine co-maintaining the repo and maybe getting more into the details in the future.