Explicit locking
Explicit locking in YugabyteDB gives you granular control over concurrency and data integrity within your transactions. While YugabyteDB automatically handles the ACID properties through its underlying distributed transaction protocol, explicit locking allows you to prevent conflicts that might otherwise occur, or to implement custom concurrency patterns that extend beyond the default isolation levels.
By using explicit locks, you can guarantee exclusive or shared access to specific data, ensuring that subsequent transactions are appropriately blocked or permitted. This is particularly useful for complex business logic, long-running transactions, or scenarios where you require custom serializability guarantees.
Row-level locks
YugabyteDB's YSQL supports explicit row-level locking, similar to PostgreSQL. Explicit row locks ensure that two transactions can never hold conflicting locks on the same row. When two transactions try to acquire conflicting lock modes, the semantics are dictated by YugabyteDB's concurrency control policies.
The following types of row locks are supported:
- FOR UPDATE
- FOR NO KEY UPDATE
- FOR SHARE
- FOR KEY SHARE
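For quick reference, the following sketch shows where each lock clause appears in a SELECT statement, using the sample table t that is created in the walkthrough below. The conflict notes in the comments reflect standard PostgreSQL lock-mode semantics; each lock is held until the enclosing transaction commits or rolls back.
yugabyte=# SELECT * FROM t WHERE k = 'k1' FOR UPDATE;        -- exclusive; conflicts with UPDATE, DELETE, and every other row lock
yugabyte=# SELECT * FROM t WHERE k = 'k1' FOR NO KEY UPDATE; -- like FOR UPDATE, but does not conflict with FOR KEY SHARE
yugabyte=# SELECT * FROM t WHERE k = 'k1' FOR SHARE;         -- shared; conflicts with UPDATE, DELETE, FOR UPDATE, and FOR NO KEY UPDATE
yugabyte=# SELECT * FROM t WHERE k = 'k1' FOR KEY SHARE;     -- weakest; conflicts only with DELETE, FOR UPDATE, and key-modifying UPDATEs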
The following example uses the FOR UPDATE row lock with the fail-on-conflict concurrency control policy. First, a row is selected for update, thereby locking it, and subsequently updated. A concurrent transaction should not be able to abort this transaction by updating the value of that row after the row is locked.
Before you start
The examples will run on any YugabyteDB universe.
To create and connect to a universe, see Set up YugabyteDB universe.
Create a sample table and populate it with sample data, as follows:
yugabyte=# CREATE TABLE t (k VARCHAR, v VARCHAR);
yugabyte=# INSERT INTO t VALUES ('k1', 'v1');
Next, connect to the universe using two independent ysqlsh instances. You can connect both instances to the same server or to different servers.
Begin a transaction in the first session and perform a SELECT FOR UPDATE on the row in the table t. This locks the row for update as part of a transaction that has a very high priority (that is, in the high-priority bucket, as explained in Transaction priorities):
yugabyte=# BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN
yugabyte=# SELECT * from t WHERE k='k1' FOR UPDATE;
k | v
----+----
k1 | v1
(1 row)
Before completing the transaction, try to update the same key in your other session using a basic update statement, as follows:
yugabyte=# UPDATE t SET v='v1.1' WHERE k='k1';
ERROR: All transparent retries exhausted. Operation failed. Try again: bb3aace4-5de2-41f9-981e-d9ca06671419 Conflicts with higher priority transaction: d4dadbf8-ca81-4bbd-b68c-067023f8ee6b
This operation fails because it conflicts with the row-level lock, and per the Fail-on-Conflict concurrency control policy, the second transaction aborts itself because it has lower priority.
Note that the error message appears after all best-effort statement retries have been exhausted.
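If the conflicting update had instead been issued inside an explicit transaction block, the same error would leave that transaction in an aborted state, and it would have to be rolled back before the session can run further statements. A minimal sketch (the error shown is a placeholder for the conflict error above):
yugabyte=# BEGIN;
BEGIN
yugabyte=# UPDATE t SET v='v1.1' WHERE k='k1';
ERROR:  (conflict error, as above)
yugabyte=# ROLLBACK;
ROLLBACK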
Finally, in the first session, update the row and commit the transaction, as follows:
yugabyte=# UPDATE t SET v='v1.2' WHERE k='k1';
UPDATE 1
yugabyte=# COMMIT;
COMMIT
This should succeed.
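The row lock is released when the first transaction commits, so the update that failed in the other session can now simply be retried. A minimal sketch of the retry in the second session (the value is illustrative):
yugabyte=# UPDATE t SET v='v1.1' WHERE k='k1';
UPDATE 1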
Advisory locks
Advisory locks are available in Tech Preview (TP) in v2.25.1.0 and later.
YSQL also supports advisory locks, where the application manages concurrent access to resources through a cooperative locking mechanism. Advisory locks can be less resource-intensive than table or row locks for certain use cases because they don't involve scanning tables or indexes for lock conflicts. They are session-specific and managed by the client application.
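For illustration, a minimal sketch of the cooperative pattern using the standard PostgreSQL advisory lock functions; the key 10 is an arbitrary, application-chosen identifier:
yugabyte=# SELECT pg_advisory_lock(10);       -- block until the session-level lock on key 10 is acquired
yugabyte=# SELECT pg_advisory_unlock(10);     -- release the session-level lock
yugabyte=# SELECT pg_try_advisory_lock(10);   -- non-blocking variant; returns true only if the lock was acquired
yugabyte=# SELECT pg_advisory_xact_lock(10);  -- transaction-level variant; released automatically at COMMIT or ROLLBACK
Because the lock is purely cooperative, every part of the application that touches the shared resource must agree to acquire the same key before doing so.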
In PostgreSQL, an advisory lock acquired in one session is visible to all other sessions. Similarly, in YugabyteDB, an advisory lock acquired in one session is visible to all sessions, regardless of the node each session is connected to. This is achieved via the pg_advisory_locks system table, which is dedicated to hosting advisory locks; all advisory lock requests are stored in this system table.
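In PostgreSQL, advisory locks held by any session appear in the pg_locks view with locktype = 'advisory'. Assuming the same view reports them in YSQL, a sketch to check which advisory locks are currently held:
yugabyte=# SELECT locktype, objid, mode, granted FROM pg_locks WHERE locktype = 'advisory';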
Advisory locks are disabled by default. To enable and configure advisory locks, use the Advisory lock flags.
For more information on using the locks, refer to Advisory locks.