| author | Aharon Landau <aharonl@nvidia.com> | 2022-04-12 10:24:02 +0300 |
|---|---|---|
| committer | Jason Gunthorpe <jgg@nvidia.com> | 2022-04-25 11:53:00 -0300 |
| commit | 33e8aa8e049811de87cd1c16a2ead85e0c9f9606 | |
| tree | f7ed24a24365e312dfb418b3f6b0956d4df45d03 /drivers/infiniband/hw/mlx5/umr.c | |
| parent | 6f0689fdf19ed3aca3ee3910223ad27216640693 | |
RDMA/mlx5: Use mlx5_umr_post_send_wait() to revoke MRs
Move the revoke_mr logic to umr.c, and use mlx5_umr_post_send_wait()
instead of mlx5_ib_post_send_wait().
In the new implementation, do not zero out the access flags. Before
reusing the MR, we will update it to the required access.
Link: https://lore.kernel.org/r/63717dfdaf6007f81b3e6dbf598f5bf3875ce86f.1649747695.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Diffstat (limited to 'drivers/infiniband/hw/mlx5/umr.c')
| -rw-r--r-- | drivers/infiniband/hw/mlx5/umr.c | 29 |
1 file changed, 29 insertions, 0 deletions
```diff
diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
index f17f64cb1925..2f14f6ccf9da 100644
--- a/drivers/infiniband/hw/mlx5/umr.c
+++ b/drivers/infiniband/hw/mlx5/umr.c
@@ -320,3 +320,32 @@ static int mlx5r_umr_post_send_wait(struct mlx5_ib_dev *dev, u32 mkey,
 	up(&umrc->sem);
 	return err;
 }
+
+/**
+ * mlx5r_umr_revoke_mr - Fence all DMA on the MR
+ * @mr: The MR to fence
+ *
+ * Upon return the NIC will not be doing any DMA to the pages under the MR,
+ * and any DMA in progress will be completed. Failure of this function
+ * indicates the HW has failed catastrophically.
+ */
+int mlx5r_umr_revoke_mr(struct mlx5_ib_mr *mr)
+{
+	struct mlx5_ib_dev *dev = mr_to_mdev(mr);
+	struct mlx5r_umr_wqe wqe = {};
+
+	if (dev->mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
+		return 0;
+
+	wqe.ctrl_seg.mkey_mask |= get_umr_update_pd_mask();
+	wqe.ctrl_seg.mkey_mask |= get_umr_disable_mr_mask();
+	wqe.ctrl_seg.flags |= MLX5_UMR_INLINE;
+
+	MLX5_SET(mkc, &wqe.mkey_seg, free, 1);
+	MLX5_SET(mkc, &wqe.mkey_seg, pd, to_mpd(dev->umrc.pd)->pdn);
+	MLX5_SET(mkc, &wqe.mkey_seg, qpn, 0xffffff);
+	MLX5_SET(mkc, &wqe.mkey_seg, mkey_7_0,
+		 mlx5_mkey_variant(mr->mmkey.key));
+
+	return mlx5r_umr_post_send_wait(dev, mr->mmkey.key, &wqe, false);
+}
```
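For context, below is a minimal sketch of how a caller in mr.c might use the new helper when tearing down or parking an MR for reuse. The wrapper name example_park_mr() and its surrounding logic are illustrative assumptions; only mlx5r_umr_revoke_mr() comes from this patch. It also illustrates the point in the commit message: the access flags are left untouched because the next UMR that re-registers the mkey updates them anyway.

```c
/*
 * Illustrative sketch only (not part of this patch).  example_park_mr()
 * is a hypothetical caller; only mlx5r_umr_revoke_mr() is introduced by
 * the change above.
 */
#include "mlx5_ib.h"
#include "umr.h"

static int example_park_mr(struct mlx5_ib_mr *mr)
{
	int err;

	/* Fence all DMA so the pages backing the MR can be released or reused. */
	err = mlx5r_umr_revoke_mr(mr);
	if (err)
		return err;	/* HW failure; the mkey must not be reused */

	/*
	 * The access flags are intentionally not zeroed here: when the mkey
	 * is handed out again, a follow-up UMR updates it with the access
	 * the new consumer requires, so clearing it now would be redundant.
	 */
	return 0;
}
```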
