Introduce S3-native state locking #35661
Conversation
Hi @bschaatsbergen, I have a few suggestions in the code.
A few other items:
- The DynamoDB lock stores MD5 hashes to prevent stale content. We should consider an equivalent mechanism for the native S3 lock
- We should add force-unlock tests that set both locks, but delete one or both out-of-band, and then try unlocking
- We should add tests that validate that the correct lock file is created, even with `workspace_key_prefix`
- We should test that if a backend is locked with only DynamoDB, a second backend locked with both will not acquire a lock and will clean up its lock file
- Consider applying encryption to the lock file in the same way we now handle encryption for the state file. (Added by @bschaatsbergen)
Signed-off-by: Bruno Schaatsbergen <[email protected]>
Buckets with object lock enabled cannot be deleted by the acceptance test helpers, so those tests are now gated behind the `TF_S3_OBJECT_LOCK_TEST` environment variable.
Previously, buckets with versioning enabled would be orphaned because the `deleteS3Bucket` test helper did not account for older object versions or delete markers. The helper now uses the `ListObjectVersions` API, which allows buckets to be fully emptied regardless of whether versioning is enabled.

```console
% TF_ACC=1 go test ./...
ok      github.com/hashicorp/terraform/internal/backend/remote-state/s3 183.551s
```
This test now uses a versioned bucket and inspects delete markers to confirm that the lockfile was present at the expected path.

```console
% TF_ACC=1 go test -count=1 ./... -run=TestBackendLockFileWithPrefix
ok      github.com/hashicorp/terraform/internal/backend/remote-state/s3 3.921s
```
When server-side encryption is configured, both the state and lock files will be encrypted with the same mechanism.

```console
% TF_S3_TEST_KMS_KEY_ID=<redacted> TF_ACC=1 go test -v -count=1 ./... -run=TestBackend_KmsKeyId
=== RUN   TestBackend_KmsKeyId
<redacted>
    backend_test.go:2190: creating S3 bucket terraform-remote-s3-test-66fffb52 in us-west-2
    testing.go:303: TestBackend: testing state locking for *s3.Backend
    testing.go:303: TestBackend: testing state locking for *s3.Backend
--- PASS: TestBackend_KmsKeyId (5.88s)
PASS
ok      github.com/hashicorp/terraform/internal/backend/remote-state/s3 6.412s
```
When the `acl` argument is configured, both the state and lock files will have the same canned ACL applied.

```console
% TF_ACC=1 go test -count=1 -v ./... -run=TestBackend_ACL
=== RUN   TestBackend_ACL
    backend_test.go:2205: TestBackendConfig on *s3.Backend with configtesting.synthBody{Filename:"<TestWrapConfig>", Values:map[string]cty.Value{"acl":cty.StringVal("bucket-owner-full-control"), "bucket":cty.StringVal("terraform-remote-s3-test-670007ed"), "encrypt":cty.True, "key":cty.StringVal("test/state"), "region":cty.StringVal("us-west-2"), "use_lockfile":cty.True}}
    backend_test.go:2214: TestBackendConfig on *s3.Backend with configtesting.synthBody{Filename:"<TestWrapConfig>", Values:map[string]cty.Value{"acl":cty.StringVal("bucket-owner-full-control"), "bucket":cty.StringVal("terraform-remote-s3-test-670007ed"), "encrypt":cty.True, "key":cty.StringVal("test/state"), "region":cty.StringVal("us-west-2"), "use_lockfile":cty.True}}
    backend_test.go:2223: creating S3 bucket terraform-remote-s3-test-670007ed in us-west-2
    testing.go:303: TestBackend: testing state locking for *s3.Backend
    testing.go:303: TestBackend: testing state locking for *s3.Backend
--- PASS: TestBackend_ACL (5.48s)
PASS
ok      github.com/hashicorp/terraform/internal/backend/remote-state/s3 6.020s
```
Since it is now possible to acquire locks in two locations (S3 and DynamoDB), the `Unlock` method needs to gracefully handle scenarios where one of the locks is deleted out of band. This change re-orders the unlock mechanism so that DynamoDB is released first (the inverse of lock acquisition) and adds logic that attempts to release the S3 lock when releasing the DynamoDB lock fails. Also adds an acceptance test to cover this scenario.

```console
% TF_ACC=1 go test -count=1 -v ./... -run=TestBackend_LockDeletedOutOfBand
=== RUN   TestBackend_LockDeletedOutOfBand
    backend_test.go:2169: TestBackendConfig on *s3.Backend with configtesting.synthBody{Filename:"<TestWrapConfig>", Values:map[string]cty.Value{"bucket":cty.StringVal("terraform-remote-s3-test-670052f3"), "dynamodb_table":cty.StringVal("terraform-remote-s3-test-670052f3"), "encrypt":cty.True, "key":cty.StringVal("test/state"), "region":cty.StringVal("us-west-2"), "use_lockfile":cty.True}}
    backend_test.go:2178: creating S3 bucket terraform-remote-s3-test-670052f3 in us-west-2
    backend_test.go:3448: creating DynamoDB table terraform-remote-s3-test-670052f3
    backend_test.go:2183: testing deletion of a dynamoDB state lock out of band
--- PASS: TestBackend_LockDeletedOutOfBand (10.87s)
PASS
ok      github.com/hashicorp/terraform/internal/backend/remote-state/s3 11.739s
```
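For illustration only, the ordering described here might look roughly like the sketch below; the type, fields, and function values are hypothetical stand-ins, not the PR's actual code.

```go
package s3lock

import (
	"context"
	"errors"
)

// unlocker sketches just the release side of the dual-lock scheme described
// in the commit message above; all names are illustrative.
type unlocker struct {
	useLockfile   bool
	dynamoDBTable string

	releaseDynamo func(ctx context.Context, id string) error
	releaseS3     func(ctx context.Context, id string) error
}

// Unlock releases the DynamoDB lock first (the inverse of acquisition) and
// still attempts to delete the S3 lock file even if the DynamoDB release
// fails, so a lock deleted out of band does not strand the other one.
func (u *unlocker) Unlock(ctx context.Context, id string) error {
	var dynamoErr error
	if u.dynamoDBTable != "" {
		dynamoErr = u.releaseDynamo(ctx, id)
	}
	if u.useLockfile {
		if s3Err := u.releaseS3(ctx, id); s3Err != nil {
			return errors.Join(dynamoErr, s3Err)
		}
	}
	return dynamoErr
}
```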
Double-triple check my changes but otherwise LGTM 🎉
- Switched logic in `Lock` and `Unlock` to return early, Go style
- Added Info-level log messages
- Tightened up error messages
- Added error handling for deferred closing

Note: it seems the design decision is that double locking is acceptable, but if either lock fails, they both fail. This seems like a good choice; I just wanted to spell it out to make sure we're on the same page.
LGTM 🎉
```console
% TF_ACC=1 go test -count=1 ./...
ok      github.com/hashicorp/terraform/internal/backend/remote-state/s3 198.502s
```
Looks good! 🚀
I added this a very long time ago because S3 was not consistent after write and you would often read back stale data for a while. This isn't for any consistency problem within Terraform, so if the new consistency rules guarantee read-after-write, the DynamoDB checksum shouldn't be needed.
Thanks for all of your effort on this, @bschaatsbergen! 🎉
Reminder for the merging maintainer: if this is a user-visible change, please update the changelog on the appropriate release branch.
This draft PR prototypes state locking for Terraform's `s3` backend using a `.tflock` file. It uses Amazon S3's recently introduced conditional writes feature to implement a locking mechanism. When a lock is acquired, other Terraform clients attempting to lock the same Terraform state file will encounter an error until the lock is released.

Context
The `internal/states/statemgr` package provides interfaces and functionality for state managers, including the `s3` backend. These state managers are responsible for writing and retrieving state from persistent storage.

This PR focuses on the methods implemented by `s3`: `Lock` and `Unlock`.

Currently, the `s3` backend implements state locking using Amazon DynamoDB, by writing a `LockID` and a `digest`. Terraform uses these values in the `Lock` and `Unlock` methods of the `s3` backend to manage state locking and unlocking. While DynamoDB has long been used for state locking, leveraging Amazon S3's newly released conditional writes feature offers an S3-native approach. By using S3 directly, we eliminate the need for an additional AWS component, simplifying the backend architecture.
Implementation

A `.tflock` file is used to represent the lock on the state file. This file contains lock information, including a unique lock ID and other metadata.

Acquiring a Lock
To acquire a lock, a `.tflock` file is uploaded to the S3 bucket to establish a lock on the state file. If the lock file does not already exist, the upload succeeds, thereby acquiring the lock. If the file already exists, the upload fails due to the conditional write, indicating that the lock is already held by another Terraform client.
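As a rough, illustrative sketch of that step (assuming the `IfNoneMatch` conditional-write field available on `PutObjectInput` in recent aws-sdk-go-v2 releases; the package, bucket, key, and error handling are placeholders, not the PR's actual code):

```go
package s3lock

import (
	"bytes"
	"context"
	"errors"
	"fmt"
	"net/http"

	"github.com/aws/aws-sdk-go-v2/aws"
	awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// acquireLock uploads the .tflock object only if no such object exists yet.
// lockBody would hold the serialized lock metadata (lock ID, who, when, ...).
func acquireLock(ctx context.Context, client *s3.Client, bucket, key string, lockBody []byte) error {
	_, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key), // e.g. a path ending in ".tflock" next to the state key (placeholder)
		Body:   bytes.NewReader(lockBody),
		// The conditional write: If-None-Match: "*" asks S3 to reject the
		// upload when a current version of the object already exists.
		IfNoneMatch: aws.String("*"),
	})
	if err != nil {
		var respErr *awshttp.ResponseError
		if errors.As(err, &respErr) && respErr.HTTPStatusCode() == http.StatusPreconditionFailed {
			// 412 Precondition Failed: another client already holds the lock.
			return fmt.Errorf("state is already locked by another Terraform client: %w", err)
		}
		return err
	}
	return nil
}
```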
Releasing a Lock

To release a lock, the corresponding lock file is deleted from the S3 bucket, thereby releasing the lock and making it available for other Terraform clients.
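Continuing the same hypothetical sketch (same package and imports as above), the release side is simply a `DeleteObject` on that key:

```go
// releaseLock deletes the .tflock object, making the lock available to
// other Terraform clients. Names are illustrative, not the PR's code.
func releaseLock(ctx context.Context, client *s3.Client, bucket, key string) error {
	_, err := client.DeleteObject(ctx, &s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return fmt.Errorf("failed to release state lock: %w", err)
	}
	return nil
}
```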
Conditional Writes
Conditional writes in Amazon S3 allow clients to perform write operations only if certain conditions about the existing object are met. Specifically:
For buckets with versioning enabled, S3 checks for the presence of a current object version. If no current version exists or if the version is a delete marker, the write operation succeeds; otherwise, it fails.
In scenarios where multiple Terraform processes attempt to acquire a lock (i.e., multiple concurrent conditional writes to the same S3 object), only the first process to successfully acquire the lock will succeed. Subsequent processes will receive a 412 Precondition Failed response, indicating that the lock is already held by another Terraform process.
According to Amazon S3 documentation, a 409 Conflict response may occur if a delete request for the object completes before a conditional write operation finishes. In such cases, uploads might need to be retried. However, given that delete operations should generally occur after a lock is successfully acquired, encountering this situation in the context of state locking for Terraform is unlikely (?)
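A hedged sketch of distinguishing those two responses in Go (continuing the earlier hypothetical package; the helper name and return shape are made up for illustration, and the status codes follow the S3 documentation cited above):

```go
// classifyConditionalWriteError is a hypothetical helper that maps the HTTP
// status of a failed conditional PutObject to the two cases discussed above.
func classifyConditionalWriteError(err error) (alreadyLocked, retryable bool) {
	var respErr *awshttp.ResponseError
	if !errors.As(err, &respErr) {
		return false, false
	}
	switch respErr.HTTPStatusCode() {
	case http.StatusPreconditionFailed:
		// 412: another Terraform client already holds the lock.
		return true, false
	case http.StatusConflict:
		// 409: a concurrent delete completed before this conditional write
		// finished; the upload may simply be retried.
		return false, true
	}
	return false, false
}
```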
This locking mechanism ensures that only one Terraform process can hold the lock at any given time, preventing concurrent modifications and ensuring consistent state management using the `s3` backend and Amazon S3's conditional writes.

An opt-in feature
DynamoDB has long been the standard for Terraform state locking in the `s3` backend, and it's fair to assume that many users depend on this mechanism. To provide a practitioner-friendly way to introduce S3-native state locking via a lock file, this draft PR deprecates the DynamoDB-related attributes in the `s3` backend. It introduces a new `use_lockfile` boolean attribute to enable the S3-native locking mechanism.
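Reading the review discussion in this thread (both locks are taken when both `use_lockfile` and a DynamoDB table are configured, S3 first, and the lock file is cleaned up if the DynamoDB acquisition fails), the combined lock path might look roughly like the sketch below; the type and all names are illustrative stand-ins, not the PR's actual code.

```go
package s3lock

import "context"

// dualLocker sketches the combined acquisition path when both use_lockfile
// and dynamodb_table are configured; all names are illustrative.
type dualLocker struct {
	useLockfile   bool
	dynamoDBTable string

	acquireS3     func(ctx context.Context, id string) error
	releaseS3     func(ctx context.Context, id string) error
	acquireDynamo func(ctx context.Context, id string) error
}

// Lock acquires the S3 lock first and then the DynamoDB lock. If the
// DynamoDB acquisition fails, the freshly written S3 lock file is removed
// (best effort) so it is not left orphaned.
func (l *dualLocker) Lock(ctx context.Context, id string) error {
	if l.useLockfile {
		if err := l.acquireS3(ctx, id); err != nil {
			return err
		}
	}
	if l.dynamoDBTable != "" {
		if err := l.acquireDynamo(ctx, id); err != nil {
			if l.useLockfile {
				_ = l.releaseS3(ctx, id) // best-effort cleanup
			}
			return err
		}
	}
	return nil
}
```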
Switching to the new S3-native locking mechanism

To be worked out.
Draft
I’m still working on this. The foundations of S3-native state locking are already in place. This is still a prototype, and I'm testing it to make sure it fits well with the current state locking experience in the S3 backend. Your feedback, comments, and suggestions are welcome and appreciated.