FencedLock
class FencedLock(context, group_id, service_name, proxy_name, object_name)

Bases: hazelcast.proxy.cp.SessionAwareCPProxy
A linearizable, distributed lock.
FencedLock is CP with respect to the CAP principle. It works on top of the Raft consensus algorithm. It offers linearizability during crash-stop failures and network partitions. If a network partition occurs, it remains available on at most one side of the partition.
FencedLock works on top of CP sessions. Please refer to the CP Session section of the IMDG documentation for more information.
By default, FencedLock is reentrant. Once a caller acquires the lock, it can acquire the lock reentrantly as many times as it wants in a linearizable manner. You can configure the reentrancy behaviour on the member side. For instance, reentrancy can be disabled so that FencedLock works as a non-reentrant mutex. One can also set a custom reentrancy limit. When the reentrancy limit is reached, FencedLock does not block a lock call. Instead, it fails with LockAcquireLimitReachedError or a specified return value. Please check the locking methods for details about this behaviour.

It is advised to use this proxy in blocking mode. Although it is possible, non-blocking usage requires extra care. FencedLock uses the id of the thread that makes the request to distinguish lock owners. When used in non-blocking mode, added callbacks or continuations are generally not executed in the thread that made the request. That causes the code below to fail most of the time, since the lock is acquired on the main thread but the unlock request is made from another thread.
lock = client.cp_subsystem.get_lock("lock")

def cb(_):
    lock.unlock()

lock.lock().add_done_callback(cb)
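For contrast, a minimal sketch of the advised blocking usage, which keeps the lock and unlock calls on the same thread (assuming a started client instance named client):

lock = client.cp_subsystem.get_lock("lock").blocking()
fence = lock.lock()
try:
    # critical section protected by the lock
    ...
finally:
    # released on the same thread that acquired it
    lock.unlock()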
INVALID_FENCE = 0
lock()

Acquires the lock and returns the fencing token assigned to the current thread for this lock acquire.
If the lock is acquired reentrantly, the same fencing token is returned, or the lock() call can fail with LockAcquireLimitReachedError if the lock acquire limit is already reached.

If the lock is not available, then the current thread becomes disabled for thread scheduling purposes and lies dormant until the lock has been acquired.
Fencing tokens are monotonic numbers that are incremented each time the lock switches from the free state to the acquired state. They are simply used for ordering lock holders. A lock holder can pass its fencing token to the shared resource to fence off previous lock holders. When this resource receives an operation, it can validate the fencing token in the operation.
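For illustration, here is a minimal sketch of how a protected resource might validate fencing tokens; the FencedResource class and its names are hypothetical and not part of the client API:

class FencedResource:
    """Hypothetical shared resource guarded by fencing tokens."""

    def __init__(self):
        self._highest_fence = 0  # highest fencing token observed so far

    def apply(self, fence, operation):
        # A token smaller than one already observed belongs to a previous
        # (possibly expired) lock holder, so the operation is rejected.
        if fence < self._highest_fence:
            raise ValueError("stale fencing token: %d" % fence)
        self._highest_fence = fence
        operation()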
Consider the following scenario where the lock is free initially:
lock = client.cp_subsystem.get_lock("lock").blocking()
fence1 = lock.lock()  # (1)
fence2 = lock.lock()  # (2)
assert fence1 == fence2
lock.unlock()
lock.unlock()
fence3 = lock.lock()  # (3)
assert fence3 > fence1
In this scenario, the lock is acquired by a thread in the cluster. Then, the same thread reentrantly acquires the lock again. The fencing token returned from the second acquire is equal to the one returned from the first acquire, because of reentrancy. After the second acquire, the lock is released twice and hence becomes free. The third lock acquire then returns a new fencing token. Because this last acquire is not reentrant, its fencing token is guaranteed to be larger than the previous tokens, independent of the thread that acquires the lock.
- Returns
The fencing token.
- Return type
hazelcast.future.Future[int]
- Raises
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock.
LockAcquireLimitReachedError – If the lock call is reentrant and the configured lock acquire limit is already reached.
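As a rough sketch of handling these errors around an acquire on the blocking proxy, assuming the error classes are importable from hazelcast.errors as in recent client versions:

from hazelcast.errors import LockOwnershipLostError, LockAcquireLimitReachedError

try:
    fence = lock.lock()
except LockOwnershipLostError:
    # The CP session backing our ownership was closed; any assumption
    # about holding the lock is no longer valid.
    ...
except LockAcquireLimitReachedError:
    # The configured reentrancy limit was hit on this reentrant acquire.
    ...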
try_lock(timeout=0)

Acquires the lock if it is free within the given waiting time, or if it is already held by the current thread at the time of invocation and the acquire limit is not exceeded, and returns the fencing token assigned to the current thread for this lock acquire.
If the lock is acquired reentrantly, the same fencing token is returned. If the lock acquire limit is exceeded, then this method immediately returns INVALID_FENCE, which represents a failed lock attempt.

If the lock is not available, then the current thread becomes disabled for thread scheduling purposes and lies dormant until the lock is acquired by the current thread or the specified waiting time elapses.

If the specified waiting time elapses, then INVALID_FENCE is returned. If the time is less than or equal to zero, the method does not wait at all. By default, the timeout is set to zero.

A typical usage idiom for this method would be
lock = client.cp_subsystem.get_lock("lock").blocking()
fence = lock.try_lock()
if fence != lock.INVALID_FENCE:
    try:
        # manipulate the protected state
        ...
    finally:
        lock.unlock()
else:
    # perform another action
    ...
This usage ensures that the lock is unlocked if it was acquired, and doesn’t try to unlock if the lock was not acquired.
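A bounded wait works the same way; this is just a sketch that passes an explicit timeout to the call above (the 10-second value is an arbitrary illustration):

# Wait up to 10 seconds for the lock before giving up.
fence = lock.try_lock(timeout=10)
if fence != lock.INVALID_FENCE:
    try:
        # manipulate the protected state
        ...
    finally:
        lock.unlock()
else:
    # the lock could not be acquired within 10 seconds
    ...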
See also the lock() function for more information about fences.

- Parameters
timeout (int) – The maximum time to wait for the lock in seconds.
- Returns
The fencing token if the lock was acquired and INVALID_FENCE otherwise.
- Return type
hazelcast.future.Future[int]
- Raises
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
unlock()

Releases the lock if the lock is currently held by the current thread.
- Return type
hazelcast.future.Future[None]
- Raises
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
IllegalMonitorStateError – If the lock is not held by the current thread
is_locked()

Returns whether this lock is locked or not.
- Returns
True if this lock is locked by any thread in the cluster, False otherwise.
- Return type
hazelcast.future.Future[bool]
- Raises
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
is_locked_by_current_thread()

Returns whether the lock is held by the current thread or not.
- Returns
True if the lock is held by the current thread, False otherwise.
- Return type
hazelcast.future.Future[bool]
- Raises
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
get_lock_count()

Returns the reentrant lock count if the lock is held by any thread in the cluster.
- Returns
The reentrant lock count if the lock is held by any thread in the cluster.
- Return type
hazelcast.future.Future[int]
- Raises
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
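As a rough sketch of how these inspection methods relate to reentrant acquires on the blocking proxy (names match the examples above):

lock = client.cp_subsystem.get_lock("lock").blocking()

lock.lock()
lock.lock()  # reentrant acquire by the same thread

assert lock.is_locked()
assert lock.is_locked_by_current_thread()
assert lock.get_lock_count() == 2  # two nested acquires are outstanding

lock.unlock()
lock.unlock()  # count drops back to zero; the lock is free again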
destroy()

Destroys this proxy.