Calling "destroyResource" multiple times on the same resource causes "inUse" count to be incorrect, which allows exceeding the maxResources limit
mgsloan opened this issue
Michael Sloan commented
The definition of destroyResource is:

    destroyResource :: Pool a -> LocalPool a -> a -> IO ()
    destroyResource Pool{..} LocalPool{..} resource = do
        destroy resource `E.catch` \(_ :: SomeException) -> return ()
        atomically (modifyTVar_ inUse (subtract 1))
inUse is always decremented, regardless of whether destroyResource has already been called for the same resource. Here is a demonstration of the issue:
    #!/usr/bin/env stack
    -- stack script --resolver lts-11.4 --package resource-pool --package stm
    import Control.Concurrent.STM
    import Control.Concurrent.STM.TVar
    import Control.Monad
    import Data.Pool

    main :: IO ()
    main = do
      counter <- newTVarIO 0
      let acquire = do
            k <- atomically $ do
              k <- readTVar counter
              writeTVar counter (k + 1)
              return k
            putStrLn $ "acquire " ++ show k
            return k
          release k = putStrLn $ "release " ++ show k
      pool <- createPool acquire release 1 60 1
      (k, localPool) <- takeResource pool
      destroyResource pool localPool k
      destroyResource pool localPool k
      void $ takeResource pool
      void $ takeResource pool
      putStrLn "Bug: acquired two resources despite the pool having a limit of 1. Next resource acquire will block."
      void $ takeResource pool
Output:
acquire 0
release 0
release 0
acquire 1
acquire 2
Bug: acquired two resources despite the pool having a limit of 1. Next resource acquire will block.
Cody Goodman commented
Isn't the only way to solve this to give LocalPools a unique id and have Pool store a Map of ids that have already been decremented?
Actually... I think that wouldn't work. The fact that no one else has chimed in makes me wonder whether the burden is on the user of the library here, and this isn't a pattern Pool aims to prevent. That seems wrong to me, though.
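Until the library guards against this itself, a caller can make destruction idempotent per resource by pairing each resource with a "destroyed" flag and only running the real destroy action on the first call. This is a minimal sketch, not the library's API: the names Guarded, mkGuarded, and destroyOnce are hypothetical, and in practice the real destroy action would be a call to destroyResource.

```haskell
import Control.Concurrent.STM
import Control.Monad (when)

-- Hypothetical wrapper: a resource plus a flag recording whether it
-- has already been destroyed.
data Guarded a = Guarded a (TVar Bool)

mkGuarded :: a -> IO (Guarded a)
mkGuarded x = Guarded x <$> newTVarIO False

-- Run the real destroy action at most once for this resource.
-- The flag is read and set in a single STM transaction, so concurrent
-- callers cannot both observe it as unset.
destroyOnce :: (a -> IO ()) -> Guarded a -> IO ()
destroyOnce realDestroy (Guarded x destroyedVar) = do
  firstCall <- atomically $ do
    done <- readTVar destroyedVar
    writeTVar destroyedVar True
    return (not done)
  when firstCall (realDestroy x)

main :: IO ()
main = do
  g <- mkGuarded (42 :: Int)
  destroyOnce print g  -- runs the destroy action: prints 42
  destroyOnce print g  -- no-op: the flag is already set
```

With destroyOnce in place, calling it twice decrements the pool's inUse counter only once, since the second call never reaches destroyResource.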