| author | Lu Baolu <baolu.lu@linux.intel.com> | 2024-02-12 09:22:26 +0800 |
|---|---|---|
| committer | Joerg Roedel <jroedel@suse.de> | 2024-02-16 15:19:36 +0100 |
| commit | b554e396e51ce3d378a560666f85c6836a8323fd (patch) | |
| tree | c3fb5ce903c7ece43461c3b898803af3b6569554 /drivers/iommu/intel/iommu.h | |
| parent | 19911232713573a2ebea84a25bd4d71d024ed86b (diff) | |
iommu: Make iopf_group_response() return void
iopf_group_response() should return void, as callers can do nothing useful
with a failure. This implies that ops->page_response() must also return
void; this is consistent with what the drivers do. The failure paths,
which are all integrity validations of the fault, should be WARN_ON'd
rather than reported as return codes.
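As a rough illustration of that driver-side pattern, the sketch below shows a page_response callback that returns void and turns a fault integrity check into a WARN_ON(). This is a hypothetical example, not the body of intel_svm_page_response(); only the callback signature and the IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE flag come from the kernel headers, everything prefixed example_ is illustrative.

	#include <linux/device.h>
	#include <linux/iommu.h>

	/*
	 * Illustrative sketch only: the callback returns void, and a
	 * malformed fault is treated as a caller bug (WARN_ON) rather
	 * than a runtime error to be returned.
	 */
	static void example_page_response(struct device *dev,
					  struct iopf_fault *evt,
					  struct iommu_page_response *msg)
	{
		struct iommu_fault_page_request *prm = &evt->fault.prm;

		/* Integrity validation of the fault: warn, don't return an errno. */
		if (WARN_ON(!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)))
			return;

		/* ... build and post the page-request response to the hardware ... */
	}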
If the iommu core fails to enqueue the fault, it should respond to the fault
directly by calling ops->page_response() instead of returning an error
number and relying on the iommu drivers to do so. Consolidate the fault
error handling code in the core.
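The sketch below outlines that core-side consolidation under stated assumptions: when a fault cannot be enqueued, the core answers it directly through ops->page_response() with an "invalid" response instead of propagating an error code. example_abort_fault() is a hypothetical helper, not the real core API; dev_iommu_ops(), struct iommu_page_response, and IOMMU_PAGE_RESP_INVALID are taken from the kernel headers.

	#include <linux/device.h>
	#include <linux/iommu.h>

	/*
	 * Hedged sketch: respond to a fault the core could not enqueue,
	 * so drivers never see (or have to handle) an enqueue failure.
	 */
	static void example_abort_fault(struct device *dev, struct iopf_fault *evt)
	{
		const struct iommu_ops *ops = dev_iommu_ops(dev);
		struct iommu_page_response resp = {
			.pasid	= evt->fault.prm.pasid,
			.grpid	= evt->fault.prm.grpid,
			.code	= IOMMU_PAGE_RESP_INVALID,
		};

		/* Only page requests that expect a response need to be answered. */
		if (evt->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)
			ops->page_response(dev, evt, &resp);
	}

Pushing this fallback into the core is what lets the driver callbacks become void: the drivers no longer need their own "enqueue failed, respond anyway" paths.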
Co-developed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20240212012227.119381-16-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Diffstat (limited to 'drivers/iommu/intel/iommu.h')
| -rw-r--r-- | drivers/iommu/intel/iommu.h | 4 |
1 file changed, 2 insertions, 2 deletions
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index 696d95293a69..cf9a28c7fab8 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -1079,8 +1079,8 @@ struct iommu_domain *intel_nested_domain_alloc(struct iommu_domain *parent,
 void intel_svm_check(struct intel_iommu *iommu);
 int intel_svm_enable_prq(struct intel_iommu *iommu);
 int intel_svm_finish_prq(struct intel_iommu *iommu);
-int intel_svm_page_response(struct device *dev, struct iopf_fault *evt,
-			    struct iommu_page_response *msg);
+void intel_svm_page_response(struct device *dev, struct iopf_fault *evt,
+			     struct iommu_page_response *msg);
 struct iommu_domain *intel_svm_domain_alloc(void);
 void intel_svm_remove_dev_pasid(struct device *dev, ioasid_t pasid);
 void intel_drain_pasid_prq(struct device *dev, u32 pasid);
