path: root/drivers/gpu/drm
Age | Commit message | Author | Files | Lines
2023-12-21 | drm/xe/uapi: Kill tile_mask | Rodrigo Vivi | 2 | -33/+9
It is currently unused, so by the rules it cannot go upstream. There was also a desire to convert it to align with the engine_class_instance selection, but the consensus there was to stay with the global gt_id. So we keep the gt_id, do not convert to a generic sched_group, and kill this tile_mask, relying only on the default behavior of 0, which is to create a mapping / page-table entry on every tile, similar to what i915 does. Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Francois Dugast <francois.dugast@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
2023-12-21 | drm/xe/uapi: Split xe_sync types from flags | Rodrigo Vivi | 2 | -16/+8
Let's continue the uapi clean-up by splitting things into their own exclusive fields instead of reusing one field for several purposes. Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Francois Dugast <francois.dugast@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
2023-12-21 | drm/xe/uapi: Align on a common way to return arrays (engines) | Francois Dugast | 1 | -13/+18
The uAPI provides queries which return arrays of elements. As of now the format used in the struct is different depending on which element is queried. Fix this for engines by applying the pattern below: struct drm_xe_query_Xs { __u32 num_Xs; struct drm_xe_X Xs[]; ... } Instead of directly returning an array of struct drm_xe_query_engine_info, a new struct drm_xe_query_engines is introduced. It contains itself an array of struct drm_xe_engine which holds the information about each engine. v2: Use plural for struct drm_xe_query_engines as multiple engines are returned (José Roberto de Souza) Signed-off-by: Francois Dugast <francois.dugast@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
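As an illustration of the pattern, here is a sketch of the resulting layout and of a UMD-side walk of the returned array; the pad member and the consume_engine() callee are assumptions for illustration, not the verbatim uapi header.

    struct drm_xe_query_engines {
            __u32 num_engines;
            __u32 pad;
            struct drm_xe_engine engines[];     /* num_engines entries follow */
    };

    /* UMD-side sketch: walk the blob returned by DRM_XE_DEVICE_QUERY_ENGINES */
    static void walk_engines(const struct drm_xe_query_engines *q)
    {
            __u32 i;

            for (i = 0; i < q->num_engines; i++)
                    consume_engine(&q->engines[i]);
    }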
2023-12-21 | drm/xe/uapi: Align on a common way to return arrays (gt) | Francois Dugast | 1 | -1/+1
The uAPI provides queries which return arrays of elements. As of now the format used in the struct is different depending on which element is queried. However, aligning on the new common pattern: struct drm_xe_query_Xs { __u32 num_Xs; struct drm_xe_X Xs[]; ... } ... would mean bringing back the name "gts" which is avoided per commit fca54ba12470 ("drm/xe/uapi: Rename gts to gt_list") so make an exception for gt and leave gt_list. Also, this change removes "query" in the name of struct drm_xe_query_gt as it is not returned from the query IOCTL. There is no functional change. v2: Leave gt_list (Matt Roper) Signed-off-by: Francois Dugast <francois.dugast@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/uapi: Align on a common way to return arrays (memory regions) | Francois Dugast | 1 | -22/+24
The uAPI provides queries which return arrays of elements. As of now the format used in the struct is different depending on which element is queried. Fix this for memory regions by applying the pattern below: struct drm_xe_query_Xs { __u32 num_Xs; struct drm_xe_X Xs[]; ... } This removes "query" in the name of struct drm_xe_query_mem_region as it is not returned from the query IOCTL. There is no functional change. v2: Only rename drm_xe_query_mem_region to drm_xe_mem_region (José Roberto de Souza) v3: Rename usage to mem_regions in xe_query.c (José Roberto de Souza) Signed-off-by: Francois Dugast <francois.dugast@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/uapi: Reject bo creation of unaligned size | Mauro Carvalho Chehab | 2 | -11/+25
For xe bo creation we require that the passed size matches the system or vram minimum page alignment. This ensures userspace is aware of region constraints; unaligned allocations are rejected with -EINVAL. v2: - Rebase, Update uAPI documentation. (Thomas) v3: - Adjust the dma-buf kunit test accordingly. (Thomas) v4: - Fixed rebase conflicts and updated commit message. (Francois) Signed-off-by: Mauro Carvalho Chehab <mauro.chehab@linux.intel.com> Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Francois Dugast <francois.dugast@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
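A minimal sketch of the check this implies on the creation path; the helper name and the min_page_size parameter are illustrative, not the literal driver code.

    /* Reject sizes that are not a multiple of the placement's minimum page
     * size, e.g. SZ_4K for system memory or SZ_64K for VRAM on some parts. */
    static int xe_bo_validate_size(u64 size, u64 min_page_size)
    {
            if (!size || !IS_ALIGNED(size, min_page_size))
                    return -EINVAL;
            return 0;
    }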
2023-12-21 | drm/xe: Make DRM_XE_DEVICE_QUERY_ENGINES future proof | José Roberto de Souza | 1 | -7/+8
We have at least two future features (OA and future media engine capabilities) that will require Xe to provide more information about engines to UMDs. But this information should not simply be added to drm_xe_engine_class_instance, for a couple of reasons:
- drm_xe_engine_class_instance is used as input to other structs/uAPIs, and those uAPIs don't care about any of these future new engine fields
- those new fields are useless information after initialization for some UMDs, so they should not need to carry them around
So the proposal here is to make DRM_XE_DEVICE_QUERY_ENGINES return an array of drm_xe_query_engine_info, each containing a drm_xe_engine_class_instance plus 3 u64s to be used for future features, as sketched below. Reference OA: https://patchwork.freedesktop.org/patch/558362/?series=121084&rev=6 v2: Reduce reserved[] to 3 u64 (Matthew Brost) Cc: Francois Dugast <francois.dugast@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> [Rodrigo Rebased] Signed-off-by: Francois Dugast <francois.dugast@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
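A sketch of the proposed query element; only drm_xe_engine_class_instance and the 3 reserved u64s come from the description above, the comments are illustrative.

    struct drm_xe_query_engine_info {
            /* still usable as direct input to other engine-taking uAPIs */
            struct drm_xe_engine_class_instance instance;
            /* room for OA / media capability data without a uAPI break */
            __u64 reserved[3];
    };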
2023-12-21 | drm/xe/uapi: Separate bo_create placement from flags | Rodrigo Vivi | 1 | -7/+7
Although the flags are about the creation, the memory placement of the BO deserves a proper dedicated field in the uapi. Besides getting more clear, it also allows to remove the 'magic' shifts from the flags that was a concern during the uapi reviews. Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Francois Dugast <francois.dugast@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
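A sketch of the resulting ioctl argument shape; member names besides placement and flags are assumptions, the point being that placement becomes its own field instead of shifted bits inside flags.

    struct drm_xe_gem_create {
            __u64 size;
            __u32 placement;        /* mask of memory regions for this BO */
            __u32 flags;            /* creation flags only, no placement bits */
            /* ... */
    };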
2023-12-21 | drm/xe: add some debug info for d3cold | Matthew Auld | 2 | -0/+6
From the CI logs we want to easily know if the machine is capable and allowed to enter d3cold, and can therefore potentially trigger the d3cold RPM suspend and resume path. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Anshuman Gupta <anshuman.gupta@intel.com> Cc: Riana Tauro <riana.tauro@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/guc: Split GuC params used for "hwconfig" and "post-hwconfig" | Michał Winiarski | 1 | -0/+22
Move params that are not used for initial "hwconfig" load to "post-hwconfig" phase. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/uc: Extract xe_uc_sanitize_reset | Michał Winiarski | 3 | -7/+11
Earlier GuC load will require more fine-grained control over reset. Extract it outside of xe_uc_init_hw. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/uc: Store firmware binary in system-memory backed BO | Michał Winiarski | 1 | -3/+1
The firmware loading for GuC is about to be moved, and will happen much earlier in the probe process, when local-memory is not yet available. While this has the potential to make the firmware loading process slower, it only happens during probe and full device reset. Since neither is a hot path, store all UC-like firmware in system memory. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/uc: Split xe_uc_fw_init | Michał Winiarski | 1 | -19/+61
The function does a driver specific "request firmware" step that includes validating the input, followed by wrapping the firmware binary into a buffer object. Split it into smaller parts. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Add a helper for DRM device-lifetime BO create | Michał Winiarski | 10 | -74/+63
A helper for managed BO allocations makes it possible to remove specific "fini" actions and will simplify the following patches adding ability to execute a release action for specific BO directly. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
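A sketch of such a helper built on drm_managed.h's drmm_add_action_or_reset(); the xe-specific calls and names here are assumptions for illustration, not the exact code added by this patch.

    static void managed_bo_release(struct drm_device *drm, void *arg)
    {
            /* runs automatically at DRM device teardown */
            xe_bo_unpin_map_no_vm((struct xe_bo *)arg);
    }

    struct xe_bo *xe_managed_bo_create_pin_map(struct xe_device *xe, struct xe_tile *tile,
                                               size_t size, u32 flags)
    {
            struct xe_bo *bo;
            int ret;

            bo = xe_bo_create_pin_map(xe, tile, NULL, size, ttm_bo_type_kernel, flags);
            if (IS_ERR(bo))
                    return bo;

            ret = drmm_add_action_or_reset(&xe->drm, managed_bo_release, bo);
            if (ret)
                    return ERR_PTR(ret);

            return bo;
    }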
2023-12-21 | drm/xe: Reorder GGTT init to earlier point in probe | Michał Winiarski | 4 | -9/+26
GuC will need to be loaded earlier during probe. Having functional GGTT is one of the prerequisites. Also rename xe_ggtt_init_noalloc to xe_ggtt_init_early to match the new call site. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Move force_wake init to earlier point in probe | Michał Winiarski | 2 | -2/+3
GuC will need to be loaded earlier during probe. And in order to load GuC, being able to take the forcewake is going to be needed. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Move system memory management init to earlier point in probe | Michał Winiarski | 1 | -2/+2
GuC will need to be loaded earlier during probe. And in order to load GuC, we will need the ability to create system memory allocations. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21drm/xe: Don't "peek" into GMD_IDMichał Winiarski1-18/+16
Now that MMIO init got moved to device early, we can use regular xe_mmio_read helpers to get to GMD_ID register. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/device: Introduce xe_device_probe_early | Michał Winiarski | 5 | -15/+55
SR-IOV VF doesn't have access to MMIO registers used to determine graphics/media ID. It can however communicate with GuC. Introduce xe_device_probe_early, which initializes enough HW to use MMIO GuC communication. This will allow both VF and PF/native driver to have unified probe ordering. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Map the entire BAR0 and hold onto the initial mapping | Michał Winiarski | 1 | -8/+4
Both MMIO registers and the GGTT for the root tile will need to be used earlier during probe. Don't rely on tile count to compute the mapping size. Furthermore, there's no need to remap after figuring out the real resource size. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Introduce xe_tile_init_early and use at earlier point in probe | Michał Winiarski | 4 | -13/+37
It also merges the GT (which is part of tile) initialization happening at xe_info_init with allocating other per-tile data structures into a common helper function. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Split xe_info_init | Michał Winiarski | 2 | -33/+48
Parts of xe_info_init are only dealing with processing driver_data. Extract it into xe_info_init_early to be able to use it earlier during probe. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/dsb: DSB implementation for xe | Animesh Manna | 2 | -0/+72
Add xe specific DSB buffer handling methods. v1: Initial version. v2: Add null check after dynamic memory allocation of vma. [Uma] Reviewed-by: Uma Shankar <uma.shankar@intel.com> Signed-off-by: Animesh Manna <animesh.manna@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/xe2: Add workaround 16020292621 | Tejas Upadhyay | 3 | -0/+22
Workaround applies to Graphics 20.04 as part of ring submission.
V4(MattR): - Rule for engine in oob WA not supported, add explicitly
V3(MattR): - Pass hwe and rename API name to hint end of ring work - Use existing RING_NOPID API
V2: - Marking this WA for 20.04 instead of 20.00
Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/xe2: Respond to TRTT faults as unsuccessful page fault | Brian Welty | 1 | -0/+6
SW is not expected to handle TRTT faults and should report these as unsuccessful page fault in the reply, such that HW can respond by raising a CAT error. Signed-off-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Support device page faults on integrated platforms | Brian Welty | 2 | -3/+6
Update xe_migrate_prepare_vm() to use the usm batch buffer even for servicing device page faults on integrated platforms. And as there is no VRAM on integrated platforms, the device pagefault handler should not attempt to migrate into VRAM. LNL is the first integrated platform to support device pagefaults. Signed-off-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Move xe_mmio_probe_tiles outside of MMIO setup | Michał Winiarski | 3 | -3/+4
MMIO is going to be setup earlier during probe. Move xe_mmio_probe_tiles outside of MMIO setup. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://lore.kernel.org/r/20231129214509.1174116-6-michal.winiarski@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Move xe_set_dma_info outside of MMIO setup | Michał Winiarski | 2 | -26/+26
MMIO is going to be setup earlier during probe. Move xe_set_dma_info outside of MMIO setup. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://lore.kernel.org/r/20231129214509.1174116-5-michal.winiarski@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/irq: Don't call pci_free_irq_vectors | Michał Winiarski | 1 | -4/+1
For devres managed devices, pci_alloc_irq_vectors is also managed (see pci_setup_msi_context for reference). PCI device used by Xe is devres managed (it was enabled with pcim_enable_device), which means that calls to pci_free_irq_vectors are redundant and can be safely removed. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://lore.kernel.org/r/20231129214509.1174116-4-michal.winiarski@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
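For context, a sketch of the resulting setup: once the PCI device has been enabled with pcim_enable_device(), the vectors returned by pci_alloc_irq_vectors() are released by devres on driver detach, so no explicit pci_free_irq_vectors() call is needed. The surrounding function is illustrative.

    static int xe_irq_msi_setup(struct pci_dev *pdev)
    {
            int nvec;

            /* devres-managed because the device was enabled via pcim_enable_device() */
            nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_MSIX);
            if (nvec < 0)
                    return nvec;

            return 0;
    }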
2023-12-21 | drm/xe: Use managed pci_enable_device | Michał Winiarski | 1 | -12/+8
Xe uses devres for most of its driver-lifetime resources, use it for pci device as well. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://lore.kernel.org/r/20231129214509.1174116-3-michal.winiarski@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Skip calling drm_dev_put on probe error | Michał Winiarski | 2 | -11/+6
DRM device used by Xe is managed, which means that final ref will be dropped on driver detach. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://lore.kernel.org/r/20231129214509.1174116-2-michal.winiarski@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Fix header guard warning | Michał Winiarski | 1 | -1/+1
Additional underscore in the header guard causes the build to fail with: drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.h:6:9: error: '_XE_ENGINE_CLASS_SYSFS_H_' is used as a header guard here, followed by #define of a different macro [-Werror,-Wheader-guard] Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://lore.kernel.org/r/20231129214509.1174116-1-michal.winiarski@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
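The fix is the usual header-guard pattern where the #ifndef and #define tokens match exactly; shown here with the macro name from the error message.

    /* xe_hw_engine_class_sysfs.h */
    #ifndef _XE_ENGINE_CLASS_SYSFS_H_
    #define _XE_ENGINE_CLASS_SYSFS_H_      /* must match the #ifndef exactly */

    /* declarations ... */

    #endif /* _XE_ENGINE_CLASS_SYSFS_H_ */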
2023-12-21 | drm/xe: rename bypass_mtcfg to skip_mtcfg | Koby Elbaz | 3 | -5/+5
Per device, set this flag to either access the MTCFG register or skip it. This is done to standardise the Xe driver naming for cases where access to some piece of HW should be avoided. Signed-off-by: Koby Elbaz <kelbaz@habana.ai> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: add skip_pcode flag | Koby Elbaz | 3 | -0/+13
Per device, set this flag to enable access to the PCODE uC or to skip it. Signed-off-by: Koby Elbaz <kelbaz@habana.ai> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
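A sketch of how such a flag is typically consumed at the top of the pcode helpers; where the flag lives (xe->info.skip_pcode) and the function shown are assumptions.

    int xe_pcode_init(struct xe_gt *gt)
    {
            if (gt_to_xe(gt)->info.skip_pcode)
                    return 0;       /* no PCODE uC on this device, skip access */

            /* ... regular PCODE mailbox initialization ... */
            return 0;
    }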
2023-12-21 | drm/xe/mocs: update MOCS table for xe2 | Matthew Auld | 1 | -5/+5
Looks like there were some changes at some point here for preferring L4 uncached for some of the indexes. Triple checked the PAT settings also, but that looks all correct as per current BSpec. BSpec: 71582 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Manually setup C6 when skip_guc_pc is set | Vinay Belgaumkar | 4 | -3/+64
Skip the init/start/stop GuC PC functions and toggle C6 using register writes instead. Also request max possible frequency as dynamic freq management is disabled. v2: Fix compile warning Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Add skip_guc_pc flag | Vinay Belgaumkar | 2 | -0/+4
This flag can be used to disable GuC based power management. This could be used for debug or comparison to host based C6. v2: Fix missing definition Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: Rename xe_gt_idle_sysfs to xe_gt_idle | Vinay Belgaumkar | 6 | -9/+9
Prep this file to contain C6 toggling as well instead of just sysfs related stuff. Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/guc: Include only required GuC ABI headers | Michal Wajdeczko | 8 | -6/+11
On i915 we were adding new GuC ABI headers directly to guc_fwif.h file since we were replacing old definitions from that file. On xe driver we could do more and better by including ABI headers only in files that need those definitions. Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/741 Cc: Jani Nikula <jani.nikula@intel.com> Acked-by: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://lore.kernel.org/r/20231128203203.1147-3-michal.wajdeczko@intel.com Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/guc: Remove obsolete GuC CTB documentation | Michal Wajdeczko | 1 | -43/+2
Refer to already described CTB Descriptor and CTB HXG Message. Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://lore.kernel.org/r/20231128203203.1147-2-michal.wajdeczko@intel.com Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/guc: Drop ancient GuC CTB definitions | Michal Wajdeczko | 1 | -21/+0
Those definitions were applicable for old GuC firmwares only. Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/741 Link: https://lore.kernel.org/r/20231128203203.1147-1-michal.wajdeczko@intel.com Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: explicitly set GGTT access for GuC DMA | Fei Yang | 2 | -1/+3
Confirmed with hardware that setting GGTT memory access for GuC firmware loading is correct for all platforms and required for new platforms going forward. Signed-off-by: Fei Yang <fei.yang@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://lore.kernel.org/r/20231122204501.1353325-2-fei.yang@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/uapi: support pat_index selection with vm_bind | Matthew Auld | 3 | -17/+68
Allow userspace to directly control the pat_index for a given vm binding. This should allow directly controlling the coherency, caching behaviour, compression and potentially other stuff in the future for the ppGTT binding. The exact meaning behind the pat_index is very platform specific (see BSpec or PRMs) but effectively maps to some predefined memory attributes. From the KMD pov we only care about the coherency that is provided by the pat_index, which falls into either NONE, 1WAY or 2WAY. The vm_bind coherency mode for the given pat_index needs to be at least 1way coherent when using cpu_caching with DRM_XE_GEM_CPU_CACHING_WB. For platforms that lack the explicit coherency mode attribute, we treat UC/WT/WC as NONE and WB as AT_LEAST_1WAY. For userptr mappings we lack a corresponding gem object, so the expected coherency mode is instead implicit and must fall into either 1WAY or 2WAY. Trying to use NONE will be rejected by the kernel. For imported dma-buf (from a different device) the coherency mode is also implicit and must also be either 1WAY or 2WAY. v2: - Undefined coh_mode(pat_index) can now be treated as programmer error. (Matt Roper) - We now allow gem_create.coh_mode <= coh_mode(pat_index), rather than having to match exactly. This ensures imported dma-buf can always just use 1way (or even 2way), now that we also bundle 1way/2way into at_least_1way. We still require 1way/2way for external dma-buf, but the policy can now be the same for self-import, if desired. - Use u16 for pat_index in uapi. u32 is massive overkill. (José) - Move as much of the pat_index validation as we can into vm_bind_ioctl_check_args. (José) v3 (Matt Roper): - Split the pte_encode() refactoring into separate patch. v4: - Rebase v5: - Check for and reject !coh_mode which would indicate hw reserved pat_index on xe2. v6: - Rebase on removal of coh_mode from uapi. We just need to reject cpu_caching=wb + pat_index with coh_none. Testcase: igt@xe_pat Bspec: 45101, 44235 #xe Bspec: 70552, 71582, 59400 #xe2 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Pallavi Mishra <pallavi.mishra@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: José Roberto de Souza <jose.souza@intel.com> Cc: Filip Hazubski <filip.hazubski@intel.com> Cc: Carl Zhang <carl.zhang@intel.com> Cc: Effie Yu <effie.yu@intel.com> Cc: Zhengguo Xu <zhengguo.xu@intel.com> Cc: Francois Dugast <francois.dugast@intel.com> Tested-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Acked-by: Zhengguo Xu <zhengguo.xu@intel.com> Acked-by: Bartosz Dunajski <bartosz.dunajski@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
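A sketch of the coherency rule above as applied during vm_bind argument checking; xe_pat_index_get_coh_mode() and the XE_COH_* values are placeholder names, not the literal implementation.

    static int check_pat_vs_cpu_caching(struct xe_device *xe, struct xe_bo *bo,
                                        u16 pat_index)
    {
            u16 coh_mode = xe_pat_index_get_coh_mode(xe, pat_index);

            if (!coh_mode)
                    return -EINVAL; /* hw-reserved pat_index, e.g. on Xe2 */

            /* WB CPU caching requires at least 1-way coherent GPU access */
            if (bo && bo->cpu_caching == DRM_XE_GEM_CPU_CACHING_WB &&
                coh_mode == XE_COH_NONE)
                    return -EINVAL;

            return 0;
    }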
2023-12-21 | drm/xe/pat: annotate pat_index with coherency mode | Matthew Auld | 3 | -42/+89
Future uapi needs to give userspace the ability to select the pat_index for a given vm_bind. However we need to be able to extract the coherency mode from the provided pat_index to ensure it's compatible with the cpu_caching mode set at object creation. There are various security reasons for why this matters. However the pat_index itself is very platform specific, so seems reasonable to annotate each platform definition of the pat table. On some older platforms there is no explicit coherency mode, so we just pick whatever makes sense. v2: - Simplify with COH_AT_LEAST_1_WAY - Add some kernel-doc v3 (Matt Roper): - Some small tweaks v4: - Rebase v5: - Rebase on Xe2 PAT additions v6: - Rebase on removal of coh_mode from uapi Bspec: 45101, 44235 #xe Bspec: 70552, 71582, 59400 #xe2 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Pallavi Mishra <pallavi.mishra@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: José Roberto de Souza <jose.souza@intel.com> Cc: Filip Hazubski <filip.hazubski@intel.com> Cc: Carl Zhang <carl.zhang@intel.com> Cc: Effie Yu <effie.yu@intel.com> Cc: Zhengguo Xu <zhengguo.xu@intel.com> Cc: Francois Dugast <francois.dugast@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Pallavi Mishra <pallavi.mishra@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
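A sketch of what a per-platform annotation can look like; the struct layout and macro names are illustrative, following the classification described above (WB treated as at-least-1-way on platforms without an explicit coherency attribute, UC/WT/WC as none).

    struct xe_pat_table_entry {
            u32 value;      /* raw PAT register value programmed for this index */
            u32 coh_mode;   /* XE_COH_NONE or XE_COH_AT_LEAST_1_WAY */
    };

    static const struct xe_pat_table_entry example_pat_table[] = {
            [0] = { .value = XE_PAT_WB, .coh_mode = XE_COH_AT_LEAST_1_WAY },
            [1] = { .value = XE_PAT_WC, .coh_mode = XE_COH_NONE },
            [2] = { .value = XE_PAT_UC, .coh_mode = XE_COH_NONE },
    };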
2023-12-21 | drm/xe/uapi: Add support for CPU caching mode | Pallavi Mishra | 6 | -38/+104
Allow userspace to specify the CPU caching mode at object creation. Modify gem create handler and introduce xe_bo_create_user to replace xe_bo_create. In a later patch we will support setting the pat_index as part of vm_bind, where expectation is that the coherency mode extracted from the pat_index must be least 1way coherent if using cpu_caching=wb. v2 - s/smem_caching/smem_cpu_caching/ and s/XE_GEM_CACHING/XE_GEM_CPU_CACHING/. (Matt Roper) - Drop COH_2WAY and just use COH_NONE + COH_AT_LEAST_1WAY; KMD mostly just cares that zeroing/swap-in can't be bypassed with the given smem_caching mode. (Matt Roper) - Fix broken range check for coh_mode and smem_cpu_caching and also don't use constant value, but the already defined macros. (José) - Prefer switch statement for smem_cpu_caching -> ttm_caching. (José) - Add note in kernel-doc for dgpu and coherency modes for system memory. (José) v3 (José): - Make sure to reject coh_mode == 0 for VRAM-only. - Also make sure to actually pass along the (start, end) for __xe_bo_create_locked. v4 - Drop UC caching mode. Can be added back if we need it. (Matt Roper) - s/smem_cpu_caching/cpu_caching. Idea is that VRAM is always WC, but that is currently implicit and KMD controlled. Make it explicit in the uapi with the limitation that it currently must be WC. For VRAM + SYS objects userspace must now select WC. (José) - Make sure to initialize bo_flags. (José) v5 - Make to align with the other uapi and prefix uapi constants with DRM_ (José) v6: - Make it clear that zero cpu_caching is only allowed for kernel objects. (José) v7: (Oak) - With all the changes from the original design, it looks we can further simplify here and drop the explicit coh_mode. We can just infer the coh_mode from the cpu_caching. i.e reject cpu_caching=wb + coh_none. It's one less thing for userspace to maintain so seems worth it. v8: - Make sure to also update the kselftests. Testcase: igt@xe_mmap@cpu-caching Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com> Co-developed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: José Roberto de Souza <jose.souza@intel.com> Cc: Filip Hazubski <filip.hazubski@intel.com> Cc: Carl Zhang <carl.zhang@intel.com> Cc: Effie Yu <effie.yu@intel.com> Cc: Zhengguo Xu <zhengguo.xu@intel.com> Cc: Francois Dugast <francois.dugast@intel.com> Cc: Oak Zeng <oak.zeng@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Acked-by: Zhengguo Xu <zhengguo.xu@intel.com> Acked-by: Bartosz Dunajski <bartosz.dunajski@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
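A sketch of the cpu_caching to TTM caching translation mentioned in the v2 notes; the exact mapping shown is an assumption for illustration.

    static enum ttm_caching cpu_caching_to_ttm(u16 cpu_caching)
    {
            switch (cpu_caching) {
            case DRM_XE_GEM_CPU_CACHING_WB:
                    return ttm_cached;
            case DRM_XE_GEM_CPU_CACHING_WC:
            default:
                    return ttm_write_combined;  /* VRAM / kernel objects stay WC */
            }
    }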
2023-12-21 | drm/xe/kunit: Return number of iterated devices | Michal Wajdeczko | 1 | -3/+3
In xe_call_for_each_device() we are already counting the number of iterated devices. Let's make that available to the caller too. We will use that functionality in upcoming patches. Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://lore.kernel.org/r/20231115115816.1993-1-michal.wajdeczko@intel.com Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe: fix mem_access for early lrc generation | Matthew Auld | 1 | -7/+7
We spawn some hw queues during device probe to generate the default LRC for every engine type, however the queue destruction step is typically async. Queue destruction needs to do stuff like GuC context deregister which requires GuC CT, which in turn requires an active mem_access ref. The caller during probe is meant to hold the mem_access token, however due to the async destruction it might have already been dropped if we are unlucky. Similar to how we already handle migrate VMs for which there is no mem_access ref, fix this by keeping the callers token alive, releasing it only when destroying the queue. We can treat a NULL vm as indication that we need to grab our own extra ref. Fixes the following splat sometimes seen during load: [ 1682.899930] WARNING: CPU: 1 PID: 8642 at drivers/gpu/drm/xe/xe_device.c:537 xe_device_assert_mem_access+0x27/0x30 [xe] [ 1682.900209] CPU: 1 PID: 8642 Comm: kworker/u24:97 Tainted: G U W E N 6.6.0-rc3+ #6 [ 1682.900214] Workqueue: submit_wq xe_sched_process_msg_work [xe] [ 1682.900303] RIP: 0010:xe_device_assert_mem_access+0x27/0x30 [xe] [ 1682.900388] Code: 90 90 90 66 0f 1f 00 0f 1f 44 00 00 53 48 89 fb e8 1e 6c 03 00 48 85 c0 74 06 5b c3 cc cc cc cc 8b 83 28 23 00 00 85 c0 75 f0 <0f> 0b 5b c3 cc cc cc cc 90 90 90 90 90 90 90 90 90 90 90 90 90 90 [ 1682.900390] RSP: 0018:ffffc900021cfb68 EFLAGS: 00010246 [ 1682.900394] RAX: 0000000000000000 RBX: ffff8886a96d8000 RCX: 0000000000000000 [ 1682.900396] RDX: 0000000000000001 RSI: ffff8886a6311a00 RDI: ffff8886a96d8000 [ 1682.900398] RBP: ffffc900021cfcc0 R08: 0000000000000001 R09: 0000000000000000 [ 1682.900400] R10: ffffc900021cfcd0 R11: 0000000000000002 R12: 0000000000000004 [ 1682.900402] R13: 0000000000000000 R14: ffff8886a6311990 R15: ffffc900021cfd74 [ 1682.900405] FS: 0000000000000000(0000) GS:ffff888829880000(0000) knlGS:0000000000000000 [ 1682.900407] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 1682.900409] CR2: 000055f70bad3fb0 CR3: 000000025243a004 CR4: 00000000003706e0 [ 1682.900412] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 1682.900413] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 1682.900415] Call Trace: [ 1682.900418] <TASK> [ 1682.900420] ? xe_device_assert_mem_access+0x27/0x30 [xe] [ 1682.900504] ? __warn+0x85/0x170 [ 1682.900510] ? xe_device_assert_mem_access+0x27/0x30 [xe] [ 1682.900596] ? report_bug+0x171/0x1a0 [ 1682.900604] ? handle_bug+0x3c/0x80 [ 1682.900608] ? exc_invalid_op+0x17/0x70 [ 1682.900612] ? asm_exc_invalid_op+0x1a/0x20 [ 1682.900621] ? xe_device_assert_mem_access+0x27/0x30 [xe] [ 1682.900706] ? xe_device_assert_mem_access+0x12/0x30 [xe] [ 1682.900790] guc_ct_send_locked+0xb9/0x1550 [xe] [ 1682.900882] ? lock_acquire+0xca/0x2b0 [ 1682.900885] ? guc_ct_send+0x3c/0x1a0 [xe] [ 1682.900977] ? lock_is_held_type+0x9b/0x110 [ 1682.900984] ? __mutex_lock+0xc0/0xb90 [ 1682.900989] ? __pfx___drm_printfn_info+0x10/0x10 [ 1682.900999] guc_ct_send+0x53/0x1a0 [xe] [ 1682.901090] ? __lock_acquire+0xf22/0x21b0 [ 1682.901097] ? process_one_work+0x1a0/0x500 [ 1682.901109] xe_guc_ct_send+0x19/0x50 [xe] [ 1682.901202] set_min_preemption_timeout+0x75/0xa0 [xe] [ 1682.901294] disable_scheduling_deregister+0x55/0x250 [xe] [ 1682.901383] ? xe_sched_process_msg_work+0x76/0xd0 [xe] [ 1682.901467] ? 
lock_release+0xc9/0x260 [ 1682.901474] xe_sched_process_msg_work+0x82/0xd0 [xe] [ 1682.901559] process_one_work+0x20a/0x500 v2: Add the splat Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
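The shape of the fix, as a sketch: when the queue has no VM whose mem_access reference it can rely on, it takes its own reference at creation and drops it only when the queue is finally destroyed. The helper names around xe_device_mem_access_get()/put() are illustrative.

    static void exec_queue_take_ref(struct xe_device *xe, struct xe_exec_queue *q)
    {
            /* no VM to borrow a ref from: keep the device awake ourselves so the
             * async GuC deregister at destroy time can still use GuC CT */
            if (!q->vm)
                    xe_device_mem_access_get(xe);
    }

    static void exec_queue_drop_ref(struct xe_device *xe, struct xe_exec_queue *q)
    {
            if (!q->vm)
                    xe_device_mem_access_put(xe);
    }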
2023-12-21 | drm/xe/gsc: Define GSC FW for MTL | Daniele Ceraolo Spurio | 3 | -8/+17
We track GSC FW based on its compatibility version, which is what determines the interface it supports. Also add a modparam override like the ones for GuC and HuC. v2: fix module param description (John) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/gsc: Define GSCCS for MTL | Daniele Ceraolo Spurio | 3 | -7/+24
Add the GSCCS to the media_xelpmp engine list. Note that since the GSCCS is only used with the GSC FW, we can consider it disabled if we don't have the FW available. v2: mark GSCCS as allowed on the media IP in kunit tests Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2023-12-21 | drm/xe/gsc: Query GSC compatibility version | Daniele Ceraolo Spurio | 9 | -6/+397
The version is obtained via a dedicated MKHI GSC HECI command. The compatibility version is what we want to match against for the GSC, so we need to call the FW version checker after obtaining the version. Since this is the first time we send a GSC HECI command via the GSCCS, this patch also introduces common infrastructure to send such commands to the GSC. Communication with the GSC FW is done via input/output buffers, whose addresses are provided via a GSCCS command. The buffers contain a generic header and a client-specific packet (e.g. PXP, HDCP); the clients don't care about the header format and/or the GSCCS command in the batch, they only care about their client-specific header. This patch therefore introduces helpers that allow the callers to automatically fill in the input header, submit the GSCCS job and decode the output header, to make it so that the caller only needs to worry about their client-specific input and output messages. v3: squash of 2 separate patches ahead of merge, so that the common functions and their first user are added at the same time Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Alan Previn <alan.previn.teres.alexis@intel.com> Cc: Suraj Kandpal <suraj.kandpal@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.Com> #v1 Reviewed-by: Suraj Kandpal <suraj.kandpal@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
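A sketch of the client-facing flow these helpers provide; all names below are hypothetical placeholders for illustration, not the actual xe API.

    int send_gsc_heci_cmd(struct xe_gsc *gsc, const void *client_in, u32 in_size,
                          void *client_out, u32 out_size)
    {
            u32 wr_off;
            int err;

            /* 1) helper fills the generic GSC input header for the client */
            wr_off = emit_gsc_input_header(gsc, in_size);

            /* 2) client-specific packet (PXP, HDCP, ...) goes right after it */
            copy_to_gsc_input(gsc, wr_off, client_in, in_size);

            /* 3) submit the GSCCS job pointing HW at the in/out buffers */
            err = submit_gsc_job(gsc, in_size, out_size);
            if (err)
                    return err;

            /* 4) helper validates the generic output header; the client only
             *    parses its own reply that follows it */
            return read_gsc_output(gsc, client_out, out_size);
    }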