author	Long Li <leo.lilong@huawei.com>	2024-12-09 19:42:39 +0800
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2025-01-17 13:40:34 +0100
commit	7adf7df4bbc0d256804be4411db16f9af2ca6f4a (patch)
tree	6c193b62babad97de8264e524d7ef57781fb25d0 /include/linux
parent	f40881bde8f64d5c237424ea571e6ddd92209ab2 (diff)
iomap: pass byte granular end position to iomap_add_to_ioend
[ Upstream commit b44679c63e4d3ac820998b6bd59fba89a72ad3e7 ]

This is a preparatory patch for fixing zero padding issues in concurrent append write scenarios. In the following patches, we need to obtain a byte-granular writeback end position for io_size trimming after EOF handling. Due to concurrent writeback and truncate operations, the inode size may shrink. Resampling the inode size would force the writeback code to handle the newly appeared post-EOF blocks, which is undesirable.

As Dave explained in [1]:

"Really, the issue is that writeback mappings have to be able to handle the range being mapped suddenly appearing to be beyond EOF. This behaviour is a longstanding writeback constraint, and is what iomap_writepage_handle_eof() is attempting to handle. We handle this by only sampling i_size_read() whilst we have the folio locked and can determine the action we should take with that folio (i.e. nothing, partial zeroing, or skip altogether). Once we've made the decision that the folio is within EOF and taken action on it (i.e. moved the folio to writeback state), we cannot then resample the inode size because a truncate may have started and changed the inode size."

To avoid resampling the inode size after EOF handling, we convert end_pos to a byte-granular writeback position and return it from the EOF handling function. Since iomap_set_range_dirty() can handle unaligned lengths, this conversion has no impact on it. However, iomap_find_dirty_range() requires an aligned start and end range to find dirty blocks within the given range, so the end position needs to be rounded up when passed to it.
[1] https://lore.kernel.org/linux-xfs/Z1Gg0pAa54MoeYME@localhost.localdomain/

Signed-off-by: Long Li <leo.lilong@huawei.com>
Link: https://lore.kernel.org/r/20241209114241.3725722-2-leo.lilong@huawei.com
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Christian Brauner <brauner@kernel.org>
Stable-dep-of: 51d20d1dacbe ("iomap: fix zero padding data issue in concurrent append writes")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'include/linux')
0 files changed, 0 insertions, 0 deletions