Description
Bug report
Bug description:
Maybe I didn't quite understand what this feature did, but I think there's a bug when using the locked() method with a multiprocessing.[R]Lock.
Here is an example:
import multiprocessing as mp

def acq(lock, event):
    lock.acquire()
    print(f'Acq: {lock = }')
    print(f'Acq: {lock.locked() = }')
    event.set()

def main():
    lock = mp.Lock()
    event = mp.Event()
    p = mp.Process(target=acq, args=(lock, event))
    p.start()
    event.wait()
    print(f'Main: {lock = }')
    print(f'Main: {lock.locked() = }')

if __name__ == "__main__":
    mp.freeze_support()
    main()
The output is:
Acq: lock = <Lock(owner=Process-1)>
Acq: lock.locked() = True
Main: lock = <Lock(owner=SomeOtherProcess)>
Main: lock.locked() = False
In the locked() method, the call to self._semlock._count() != 0 is not appropriate. The internal count attribute is really only used with multiprocessing.RLock, to count the number of reentrant calls to acquire() made by the current thread.
With multiprocessing.Lock, this count is set to 1 when the lock is acquired (it can only be acquired once).
In any case, only other threads in the same process can observe this value, not other processes sharing the [R]Lock.
IMO the test should be replaced with self._semlock._is_zero(), and the example above should also be added as a unit test (see the sketch below).
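A minimal sketch of what this looks like in practice (it relies on the private _semlock attribute and its _count() / _is_zero() methods, so it is only an illustration of the behaviour described above, not a supported API):

import multiprocessing as mp

def hold(lock, acquired, release):
    # Acquire the lock in a child process and keep it held
    # until the parent has finished inspecting it.
    lock.acquire()
    acquired.set()
    release.wait()
    lock.release()

def main():
    lock = mp.Lock()
    acquired = mp.Event()
    release = mp.Event()
    p = mp.Process(target=hold, args=(lock, acquired, release))
    p.start()
    acquired.wait()

    # Observed from a process that does NOT own the lock:
    print(f'{lock._semlock._count() = }')    # 0    -> locked() currently returns False
    print(f'{lock._semlock._is_zero() = }')  # True -> the lock really is held

    release.set()
    p.join()

if __name__ == "__main__":
    mp.freeze_support()
    main()

The suggested change in Lib/multiprocessing/synchronize.py would then be roughly:

    def locked(self):
        return self._semlock._is_zero()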
Linked issue/PR
- AcquirerProxy object has no attribute locked #115942
- gh-115942: Add locked to several multiprocessing locks #115944
CPython versions tested on:
CPython main branch
Operating systems tested on:
macOS