| author | Boris Brezillon <boris.brezillon@collabora.com> | 2024-02-29 17:22:24 +0100 |
|---|---|---|
| committer | Boris Brezillon <boris.brezillon@collabora.com> | 2024-03-01 10:04:17 +0100 |
| commit | de85488138247d034eb3241840424a54d660926b | |
| tree | b7fdbe6ede1507aaebef511af80f1a6555597355 | |
| parent | 9cca48fa4f8933a2dadf2f011d461329ca0a8337 | |
drm/panthor: Add the scheduler logical block
This is the piece of software interacting with the FW scheduler, and
taking care of some scheduling aspects when the FW runs short of
scheduling slots. Indeed, the FW only exposes a few slots, and the kernel
has to give all submission contexts a chance to execute their jobs.
The kernel-side scheduler is timeslice-based, with a round-robin queue
per priority level.
Job submission is handled with a 1:1 drm_sched_entity:drm_gpu_scheduler
mapping, allowing us to delegate the dependency tracking to the core.
All the gory details should be documented inline.
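
Purely for illustration (not part of this patch): a minimal sketch, in
kernel-style C, of the priority + round-robin selection described above for
the case where there are more runnable groups than FW slots. The function
name, the PRIO_COUNT macro, the stand-in struct group and the
runnable[]/slots[] parameters are assumptions made for the sketch; only the
run_node list node mirrors the real panthor_group field, and the driver's
actual tick logic is more involved.

```c
#include <linux/list.h>
#include <linux/types.h>

#define PRIO_COUNT 4	/* mirrors PANTHOR_CSG_PRIORITY_COUNT */

/* Stand-in for struct panthor_group: only the run_node field matters here. */
struct group {
	struct list_head run_node;
};

/* Pick up to @num_slots groups, highest priority first, round-robin inside
 * each priority level.
 */
static void pick_groups_for_tick(struct list_head runnable[PRIO_COUNT],
				 struct group **slots, u32 num_slots)
{
	u32 used = 0;

	for (int prio = PRIO_COUNT - 1; prio >= 0 && used < num_slots; prio--) {
		LIST_HEAD(picked);
		struct group *grp, *tmp;

		list_for_each_entry_safe(grp, tmp, &runnable[prio], run_node) {
			if (used == num_slots)
				break;

			slots[used++] = grp;

			/* Collect picked groups on a temporary list so the
			 * iteration doesn't revisit them.
			 */
			list_move_tail(&grp->run_node, &picked);
		}

		/* Re-insert picked groups at the tail: the remaining groups
		 * of this priority are considered first on the next tick.
		 */
		list_splice_tail(&picked, &runnable[prio]);
	}
}
```

Rotating just-scheduled groups to the tail of their priority list is what
gives every submission context a chance to run over successive ticks.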
v6:
- Add Maxime's and Heiko's acks
- Make sure the scheduler is initialized before queueing the tick work
in the MMU fault handler
- Keep header inclusion alphabetically ordered
v5:
- Fix typos
- Call panthor_kernel_bo_destroy(group->syncobjs) unconditionally
- Don't move the group to the waiting list tail when it was already
waiting for a different syncobj
- Fix fatal_queues flagging in the tiler OOM path
- Don't warn when more than one job times out on a group
- Add a warning message when we fail to allocate a heap chunk
- Add Steve's R-b
v4:
- Check drmm_mutex_init() return code
- s/drm_gem_vmap_unlocked/drm_gem_vunmap_unlocked/ in
panthor_queue_put_syncwait_obj()
- Drop unneeded WARN_ON() in cs_slot_sync_queue_state_locked()
- Use atomic_xchg() instead of atomic_fetch_and(0)
- Fix typos
- Let panthor_kernel_bo_destroy() check for IS_ERR_OR_NULL() BOs
- Defer TILER_OOM event handling to a separate workqueue to prevent
deadlocks when the heap chunk allocation is blocked on mem-reclaim.
This is just a temporary solution, until we add support for
non-blocking/failable allocations
- Pass the scheduler workqueue to drm_sched instead of instantiating
a separate one (no longer needed now that heap chunk allocation
happens on a dedicated wq)
- Set WQ_MEM_RECLAIM on the scheduler workqueue, so we can handle
job timeouts when the system is under mem pressure, and hopefully
free up some memory retained by these jobs (see the workqueue sketch
after this list)
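
As a hedged illustration of the two workqueue-related items above (not code
from the patch; the helper name, workqueue names and exact flags are
assumptions): the scheduler work and the heap-chunk allocation work live on
separate queues, and only the scheduler queue is marked WQ_MEM_RECLAIM.

```c
#include <linux/errno.h>
#include <linux/workqueue.h>

/* Hypothetical helper, not taken from the patch. */
static int sched_workqueues_init(struct workqueue_struct **sched_wq,
				 struct workqueue_struct **heap_alloc_wq)
{
	/* The scheduler workqueue is also handed to drm_sched, and is marked
	 * WQ_MEM_RECLAIM so job-timeout handling can still make progress
	 * (and hopefully release memory held by timed-out jobs) when the
	 * system is under memory pressure.
	 */
	*sched_wq = alloc_workqueue("panthor-csf-sched",
				    WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
	if (!*sched_wq)
		return -ENOMEM;

	/* Heap-chunk allocation (tiler OOM) work gets its own queue, so a
	 * blocking allocation that enters reclaim cannot stall the tick and
	 * FW-event processing running on the scheduler workqueue.
	 */
	*heap_alloc_wq = alloc_workqueue("panthor-heap-alloc", WQ_UNBOUND, 0);
	if (!*heap_alloc_wq) {
		destroy_workqueue(*sched_wq);
		return -ENOMEM;
	}

	return 0;
}
```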
v3:
- Rework the FW event handling logic to avoid races
- Make sure MMU faults kill the group immediately
- Use the panthor_kernel_bo abstraction for group/queue buffers
- Make in_progress an atomic_t, so we can check it without the reset lock
held
- Don't limit the number of groups per context to the FW scheduler
capacity. Fix the limit to 128 for now.
- Add a panthor_job_vm() helper
- Account for panthor_vm changes
- Add our job fence as DMA_RESV_USAGE_WRITE to all external objects
(was previously DMA_RESV_USAGE_BOOKKEEP). I don't get why, given
we're supposed to be fully-explicit, but other drivers do that, so
there must be a good reason (see the snippet after this list)
- Account for drm_sched changes
- Provide a panthor_queue_put_syncwait_obj()
- Unconditionally return groups to their idle list in
panthor_sched_suspend()
- Fixed the condition in sched_queue_{,delayed_}work() so work is only
queued when a reset isn't pending or in progress.
- Several typos in comments fixed.
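
To make the DMA_RESV_USAGE_WRITE item above concrete, here is a hedged
snippet (the helper name and calling context are assumptions, not the
patch's submit path) showing a job fence being attached to an external BO's
reservation with WRITE usage rather than BOOKKEEP.

```c
#include <drm/drm_gem.h>
#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

/* Hypothetical helper, not the patch's submit path. */
static void job_attach_fence_to_external_bo(struct drm_gem_object *obj,
					    struct dma_fence *fence)
{
	/* The caller is expected to hold obj->resv (e.g. via drm_exec) and
	 * to have reserved a fence slot with dma_resv_reserve_fences().
	 */
	dma_resv_add_fence(obj->resv, fence, DMA_RESV_USAGE_WRITE);
}
```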
Co-developed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Maxime Ripard <mripard@kernel.org>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20240229162230.2634044-11-boris.brezillon@collabora.com
| -rw-r--r-- | drivers/gpu/drm/panthor/panthor_sched.c | 3502 |
| -rw-r--r-- | drivers/gpu/drm/panthor/panthor_sched.h | 50 |
2 files changed, 3552 insertions, 0 deletions
diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c new file mode 100644 index 000000000000..5f7803b6fc48 --- /dev/null +++ b/drivers/gpu/drm/panthor/panthor_sched.c @@ -0,0 +1,3502 @@ +// SPDX-License-Identifier: GPL-2.0 or MIT +/* Copyright 2023 Collabora ltd. */ + +#include <drm/drm_drv.h> +#include <drm/drm_exec.h> +#include <drm/drm_gem_shmem_helper.h> +#include <drm/drm_managed.h> +#include <drm/gpu_scheduler.h> +#include <drm/panthor_drm.h> + +#include <linux/build_bug.h> +#include <linux/clk.h> +#include <linux/delay.h> +#include <linux/dma-mapping.h> +#include <linux/dma-resv.h> +#include <linux/firmware.h> +#include <linux/interrupt.h> +#include <linux/io.h> +#include <linux/iopoll.h> +#include <linux/iosys-map.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/pm_runtime.h> + +#include "panthor_devfreq.h" +#include "panthor_device.h" +#include "panthor_fw.h" +#include "panthor_gem.h" +#include "panthor_gpu.h" +#include "panthor_heap.h" +#include "panthor_mmu.h" +#include "panthor_regs.h" +#include "panthor_sched.h" + +/** + * DOC: Scheduler + * + * Mali CSF hardware adopts a firmware-assisted scheduling model, where + * the firmware takes care of scheduling aspects, to some extent. + * + * The scheduling happens at the scheduling group level, each group + * contains 1 to N queues (N is FW/hardware dependent, and exposed + * through the firmware interface). Each queue is assigned a command + * stream ring buffer, which serves as a way to get jobs submitted to + * the GPU, among other things. + * + * The firmware can schedule a maximum of M groups (M is FW/hardware + * dependent, and exposed through the firmware interface). Passed + * this maximum number of groups, the kernel must take care of + * rotating the groups passed to the firmware so every group gets + * a chance to have his queues scheduled for execution. + * + * The current implementation only supports with kernel-mode queues. + * In other terms, userspace doesn't have access to the ring-buffer. + * Instead, userspace passes indirect command stream buffers that are + * called from the queue ring-buffer by the kernel using a pre-defined + * sequence of command stream instructions to ensure the userspace driver + * always gets consistent results (cache maintenance, + * synchronization, ...). + * + * We rely on the drm_gpu_scheduler framework to deal with job + * dependencies and submission. As any other driver dealing with a + * FW-scheduler, we use the 1:1 entity:scheduler mode, such that each + * entity has its own job scheduler. When a job is ready to be executed + * (all its dependencies are met), it is pushed to the appropriate + * queue ring-buffer, and the group is scheduled for execution if it + * wasn't already active. + * + * Kernel-side group scheduling is timeslice-based. When we have less + * groups than there are slots, the periodic tick is disabled and we + * just let the FW schedule the active groups. When there are more + * groups than slots, we let each group a chance to execute stuff for + * a given amount of time, and then re-evaluate and pick new groups + * to schedule. The group selection algorithm is based on + * priority+round-robin. + * + * Even though user-mode queues is out of the scope right now, the + * current design takes them into account by avoiding any guess on the + * group/queue state that would be based on information we wouldn't have + * if userspace was in charge of the ring-buffer. 
That's also one of the + * reason we don't do 'cooperative' scheduling (encoding FW group slot + * reservation as dma_fence that would be returned from the + * drm_gpu_scheduler::prepare_job() hook, and treating group rotation as + * a queue of waiters, ordered by job submission order). This approach + * would work for kernel-mode queues, but would make user-mode queues a + * lot more complicated to retrofit. + */ + +#define JOB_TIMEOUT_MS 5000 + +#define MIN_CS_PER_CSG 8 + +#define MIN_CSGS 3 +#define MAX_CSG_PRIO 0xf + +struct panthor_group; + +/** + * struct panthor_csg_slot - Command stream group slot + * + * This represents a FW slot for a scheduling group. + */ +struct panthor_csg_slot { + /** @group: Scheduling group bound to this slot. */ + struct panthor_group *group; + + /** @priority: Group priority. */ + u8 priority; + + /** + * @idle: True if the group bound to this slot is idle. + * + * A group is idle when it has nothing waiting for execution on + * all its queues, or when queues are blocked waiting for something + * to happen (synchronization object). + */ + bool idle; +}; + +/** + * enum panthor_csg_priority - Group priority + */ +enum panthor_csg_priority { + /** @PANTHOR_CSG_PRIORITY_LOW: Low priority group. */ + PANTHOR_CSG_PRIORITY_LOW = 0, + + /** @PANTHOR_CSG_PRIORITY_MEDIUM: Medium priority group. */ + PANTHOR_CSG_PRIORITY_MEDIUM, + + /** @PANTHOR_CSG_PRIORITY_HIGH: High priority group. */ + PANTHOR_CSG_PRIORITY_HIGH, + + /** + * @PANTHOR_CSG_PRIORITY_RT: Real-time priority group. + * + * Real-time priority allows one to preempt scheduling of other + * non-real-time groups. When such a group becomes executable, + * it will evict the group with the lowest non-rt priority if + * there's no free group slot available. + * + * Currently not exposed to userspace. + */ + PANTHOR_CSG_PRIORITY_RT, + + /** @PANTHOR_CSG_PRIORITY_COUNT: Number of priority levels. */ + PANTHOR_CSG_PRIORITY_COUNT, +}; + +/** + * struct panthor_scheduler - Object used to manage the scheduler + */ +struct panthor_scheduler { + /** @ptdev: Device. */ + struct panthor_device *ptdev; + + /** + * @wq: Workqueue used by our internal scheduler logic and + * drm_gpu_scheduler. + * + * Used for the scheduler tick, group update or other kind of FW + * event processing that can't be handled in the threaded interrupt + * path. Also passed to the drm_gpu_scheduler instances embedded + * in panthor_queue. + */ + struct workqueue_struct *wq; + + /** + * @heap_alloc_wq: Workqueue used to schedule tiler_oom works. + * + * We have a queue dedicated to heap chunk allocation works to avoid + * blocking the rest of the scheduler if the allocation tries to + * reclaim memory. + */ + struct workqueue_struct *heap_alloc_wq; + + /** @tick_work: Work executed on a scheduling tick. */ + struct delayed_work tick_work; + + /** + * @sync_upd_work: Work used to process synchronization object updates. + * + * We use this work to unblock queues/groups that were waiting on a + * synchronization object. + */ + struct work_struct sync_upd_work; + + /** + * @fw_events_work: Work used to process FW events outside the interrupt path. + * + * Even if the interrupt is threaded, we need any event processing + * that require taking the panthor_scheduler::lock to be processed + * outside the interrupt path so we don't block the tick logic when + * it calls panthor_fw_{csg,wait}_wait_acks(). Since most of the + * event processing requires taking this lock, we just delegate all + * FW event processing to the scheduler workqueue. 
+ */ + struct work_struct fw_events_work; + + /** + * @fw_events: Bitmask encoding pending FW events. + */ + atomic_t fw_events; + + /** + * @resched_target: When the next tick should occur. + * + * Expressed in jiffies. + */ + u64 resched_target; + + /** + * @last_tick: When the last tick occurred. + * + * Expressed in jiffies. + */ + u64 last_tick; + + /** @tick_period: Tick period in jiffies. */ + u64 tick_period; + + /** + * @lock: Lock protecting access to all the scheduler fields. + * + * Should be taken in the tick work, the irq handler, and anywhere the @groups + * fields are touched. + */ + struct mutex lock; + + /** @groups: Various lists used to classify groups. */ + struct { + /** + * @runnable: Runnable group lists. + * + * When a group has queues that want to execute something, + * its panthor_group::run_node should be inserted here. + * + * One list per-priority. + */ + struct list_head runnable[PANTHOR_CSG_PRIORITY_COUNT]; + + /** + * @idle: Idle group lists. + * + * When all queues of a group are idle (either because they + * have nothing to execute, or because they are blocked), the + * panthor_group::run_node field should be inserted here. + * + * One list per-priority. + */ + struct list_head idle[PANTHOR_CSG_PRIORITY_COUNT]; + + /** + * @waiting: List of groups whose queues are blocked on a + * synchronization object. + * + * Insert panthor_group::wait_node here when a group is waiting + * for synchronization objects to be signaled. + * + * This list is evaluated in the @sync_upd_work work. + */ + struct list_head waiting; + } groups; + + /** + * @csg_slots: FW command stream group slots. + */ + struct panthor_csg_slot csg_slots[MAX_CSGS]; + + /** @csg_slot_count: Number of command stream group slots exposed by the FW. */ + u32 csg_slot_count; + + /** @cs_slot_count: Number of command stream slot per group slot exposed by the FW. */ + u32 cs_slot_count; + + /** @as_slot_count: Number of address space slots supported by the MMU. */ + u32 as_slot_count; + + /** @used_csg_slot_count: Number of command stream group slot currently used. */ + u32 used_csg_slot_count; + + /** @sb_slot_count: Number of scoreboard slots. */ + u32 sb_slot_count; + + /** + * @might_have_idle_groups: True if an active group might have become idle. + * + * This will force a tick, so other runnable groups can be scheduled if one + * or more active groups became idle. + */ + bool might_have_idle_groups; + + /** @pm: Power management related fields. */ + struct { + /** @has_ref: True if the scheduler owns a runtime PM reference. */ + bool has_ref; + } pm; + + /** @reset: Reset related fields. */ + struct { + /** @lock: Lock protecting the other reset fields. */ + struct mutex lock; + + /** + * @in_progress: True if a reset is in progress. + * + * Set to true in panthor_sched_pre_reset() and back to false in + * panthor_sched_post_reset(). + */ + atomic_t in_progress; + + /** + * @stopped_groups: List containing all groups that were stopped + * before a reset. + * + * Insert panthor_group::run_node in the pre_reset path. + */ + struct list_head stopped_groups; + } reset; +}; + +/** + * struct panthor_syncobj_32b - 32-bit FW synchronization object + */ +struct panthor_syncobj_32b { + /** @seqno: Sequence number. */ + u32 seqno; + + /** + * @status: Status. + * + * Not zero on failure. + */ + u32 status; +}; + +/** + * struct panthor_syncobj_64b - 64-bit FW synchronization object + */ +struct panthor_syncobj_64b { + /** @seqno: Sequence number. */ + u64 seqno; + + /** + * @status: Status. 
+ * + * Not zero on failure. + */ + u32 status; + + /** @pad: MBZ. */ + u32 pad; +}; + +/** + * struct panthor_queue - Execution queue + */ +struct panthor_queue { + /** @scheduler: DRM scheduler used for this queue. */ + struct drm_gpu_scheduler scheduler; + + /** @entity: DRM scheduling entity used for this queue. */ + struct drm_sched_entity entity; + + /** + * @remaining_time: Time remaining before the job timeout expires. + * + * The job timeout is suspended when the queue is not scheduled by the + * FW. Every time we suspend the timer, we need to save the remaining + * time so we can restore it later on. + */ + unsigned long remaining_time; + + /** @timeout_suspended: True if the job timeout was suspended. */ + bool timeout_suspended; + + /** + * @doorbell_id: Doorbell assigned to this queue. + * + * Right now, all groups share the same doorbell, and the doorbell ID + * is assigned to group_slot + 1 when the group is assigned a slot. But + * we might decide to provide fine grained doorbell assignment at some + * point, so don't have to wake up all queues in a group every time one + * of them is updated. + */ + u8 doorbell_id; + + /** + * @priority: Priority of the queue inside the group. + * + * Must be less than 16 (Only 4 bits available). + */ + u8 priority; +#define CSF_MAX_QUEUE_PRIO GENMASK(3, 0) + + /** @ringbuf: Command stream ring-buffer. */ + struct panthor_kernel_bo *ringbuf; + + /** @iface: Firmware interface. */ + struct { + /** @mem: FW memory allocated for this interface. */ + struct panthor_kernel_bo *mem; + + /** @input: Input interface. */ + struct panthor_fw_ringbuf_input_iface *input; + + /** @output: Output interface. */ + const struct panthor_fw_ringbuf_output_iface *output; + + /** @input_fw_va: FW virtual address of the input interface buffer. */ + u32 input_fw_va; + + /** @output_fw_va: FW virtual address of the output interface buffer. */ + u32 output_fw_va; + } iface; + + /** + * @syncwait: Stores information about the synchronization object this + * queue is waiting on. + */ + struct { + /** @gpu_va: GPU address of the synchronization object. */ + u64 gpu_va; + + /** @ref: Reference value to compare against. */ + u64 ref; + + /** @gt: True if this is a greater-than test. */ + bool gt; + + /** @sync64: True if this is a 64-bit sync object. */ + bool sync64; + + /** @bo: Buffer object holding the synchronization object. */ + struct drm_gem_object *obj; + + /** @offset: Offset of the synchronization object inside @bo. */ + u64 offset; + + /** + * @kmap: Kernel mapping of the buffer object holding the + * synchronization object. + */ + void *kmap; + } syncwait; + + /** @fence_ctx: Fence context fields. */ + struct { + /** @lock: Used to protect access to all fences allocated by this context. */ + spinlock_t lock; + + /** + * @id: Fence context ID. + * + * Allocated with dma_fence_context_alloc(). + */ + u64 id; + + /** @seqno: Sequence number of the last initialized fence. */ + atomic64_t seqno; + + /** + * @in_flight_jobs: List containing all in-flight jobs. + * + * Used to keep track and signal panthor_job::done_fence when the + * synchronization object attached to the queue is signaled. + */ + struct list_head in_flight_jobs; + } fence_ctx; +}; + +/** + * enum panthor_group_state - Scheduling group state. + */ +enum panthor_group_state { + /** @PANTHOR_CS_GROUP_CREATED: Group was created, but not scheduled yet. */ + PANTHOR_CS_GROUP_CREATED, + + /** @PANTHOR_CS_GROUP_ACTIVE: Group is currently scheduled. 
*/ + PANTHOR_CS_GROUP_ACTIVE, + + /** + * @PANTHOR_CS_GROUP_SUSPENDED: Group was scheduled at least once, but is + * inactive/suspended right now. + */ + PANTHOR_CS_GROUP_SUSPENDED, + + /** + * @PANTHOR_CS_GROUP_TERMINATED: Group was terminated. + * + * Can no longer be scheduled. The only allowed action is a destruction. + */ + PANTHOR_CS_GROUP_TERMINATED, +}; + +/** + * struct panthor_group - Scheduling group object + */ +struct panthor_group { + /** @refcount: Reference count */ + struct kref refcount; + + /** @ptdev: Device. */ + struct panthor_device *ptdev; + + /** @vm: VM bound to the group. */ + struct panthor_vm *vm; + + /** @compute_core_mask: Mask of shader cores that can be used for compute jobs. */ + u64 compute_core_mask; + + /** @fragment_core_mask: Mask of shader cores that can be used for fragment jobs. */ + u64 fragment_core_mask; + + /** @tiler_core_mask: Mask of tiler cores that can be used for tiler jobs. */ + u64 tiler_core_mask; + + /** @max_compute_cores: Maximum number of shader cores used for compute jobs. */ + u8 max_compute_cores; + + /** @max_compute_cores: Maximum number of shader cores used for fragment jobs. */ + u8 max_fragment_cores; + + /** @max_tiler_cores: Maximum number of tiler cores used for tiler jobs. */ + u8 max_tiler_cores; + + /** @priority: Group priority (check panthor_csg_priority). */ + u8 priority; + + /** @blocked_queues: Bitmask reflecting the blocked queues. */ + u32 blocked_queues; + + /** @idle_queues: Bitmask reflecting the idle queues. */ + u32 idle_queues; + + /** @fatal_lock: Lock used to protect access to fatal fields. */ + spinlock_t fatal_lock; + + /** @fatal_queues: Bitmask reflecting the queues that hit a fatal exception. */ + u32 fatal_queues; + + /** @tiler_oom: Mask of queues that have a tiler OOM event to process. */ + atomic_t tiler_oom; + + /** @queue_count: Number of queues in this group. */ + u32 queue_count; + + /** @queues: Queues owned by this group. */ + struct panthor_queue *queues[MAX_CS_PER_CSG]; + + /** + * @csg_id: ID of the FW group slot. + * + * -1 when the group is not scheduled/active. + */ + int csg_id; + + /** + * @destroyed: True when the group has been destroyed. + * + * If a group is destroyed it becomes useless: no further jobs can be submitted + * to its queues. We simply wait for all references to be dropped so we can + * release the group object. + */ + bool destroyed; + + /** + * @timedout: True when a timeout occurred on any of the queues owned by + * this group. + * + * Timeouts can be reported by drm_sched or by the FW. In any case, any + * timeout situation is unrecoverable, and the group becomes useless. + * We simply wait for all references to be dropped so we can release the + * group object. + */ + bool timedout; + + /** + * @syncobjs: Pool of per-queue synchronization objects. + * + * One sync object per queue. The position of the sync object is + * determined by the queue index. + */ + struct panthor_kernel_bo *syncobjs; + + /** @state: Group state. */ + enum panthor_group_state state; + + /** + * @suspend_buf: Suspend buffer. + * + * Stores the state of the group and its queues when a group is suspended. + * Used at resume time to restore the group in its previous state. + * + * The size of the suspend buffer is exposed through the FW interface. + */ + struct panthor_kernel_bo *suspend_buf; + + /** + * @protm_suspend_buf: Protection mode suspend buffer. + * + * Stores the state of the group and its queues when a group that's in + * protection mode is suspended. 
+ * + * Used at resume time to restore the group in its previous state. + * + * The size of the protection mode suspend buffer is exposed through the + * FW interface. + */ + struct panthor_kernel_bo *protm_suspend_buf; + + /** @sync_upd_work: Work used to check/signal job fences. */ + struct work_struct sync_upd_work; + + /** @tiler_oom_work: Work used to process tiler OOM events happening on this group. */ + struct work_struct tiler_oom_work; + + /** @term_work: Work used to finish the group termination procedure. */ + struct work_struct term_work; + + /** + * @release_work: Work used to release group resources. + * + * We need to postpone the group release to avoid a deadlock when + * the last ref is released in the tick work. + */ + struct work_struct release_work; + + /** + * @run_node: Node used to insert the group in the + * panthor_group::groups::{runnable,idle} and + * panthor_group::reset.stopped_groups lists. + */ + struct list_head run_node; + + /** + * @wait_node: Node used to insert the group in the + * panthor_group::groups::waiting list. + */ + struct list_head wait_node; +}; + +/** + * group_queue_work() - Queue a group work + * @group: Group to queue the work for. + * @wname: Work name. + * + * Grabs a ref and queue a work item to the scheduler workqueue. If + * the work was already queued, we release the reference we grabbed. + * + * Work callbacks must release the reference we grabbed here. + */ +#define group_queue_work(group, wname) \ + do { \ + group_get(group); \ + if (!queue_work((group)->ptdev->scheduler->wq, &(group)->wname ## _work)) \ + group_put(group); \ + } while (0) + +/** + * sched_queue_work() - Queue a scheduler work. + * @sched: Scheduler object. + * @wname: Work name. + * + * Conditionally queues a scheduler work if no reset is pending/in-progress. + */ +#define sched_queue_work(sched, wname) \ + do { \ + if (!atomic_read(&(sched)->reset.in_progress) && \ + !panthor_device_reset_is_pending((sched)->ptdev)) \ + queue_work((sched)->wq, &(sched)->wname ## _work); \ + } while (0) + +/** + * sched_queue_delayed_work() - Queue a scheduler delayed work. + * @sched: Scheduler object. + * @wname: Work name. + * @delay: Work delay in jiffies. + * + * Conditionally queues a scheduler delayed work if no reset is + * pending/in-progress. + */ +#define sched_queue_delayed_work(sched, wname, delay) \ + do { \ + if (!atomic_read(&sched->reset.in_progress) && \ + !panthor_device_reset_is_pending((sched)->ptdev)) \ + mod_delayed_work((sched)->wq, &(sched)->wname ## _work, delay); \ + } while (0) + +/* + * We currently set the maximum of groups per file to an arbitrary low value. + * But this can be updated if we need more. + */ +#define MAX_GROUPS_PER_POOL 128 + +/** + * struct panthor_group_pool - Group pool + * + * Each file get assigned a group pool. + */ +struct panthor_group_pool { + /** @xa: Xarray used to manage group handles. */ + struct xarray xa; +}; + +/** + * struct panthor_job - Used to manage GPU job + */ +struct panthor_job { + /** @base: Inherit from drm_sched_job. */ + struct drm_sched_job base; + + /** @refcount: Reference count. */ + struct kref refcount; + + /** @group: Group of the queue this job will be pushed to. */ + struct panthor_group *group; + + /** @queue_idx: Index of the queue inside @group. */ + u32 queue_idx; + + /** @call_info: Information about the userspace command stream call. */ + struct { + /** @start: GPU address of the userspace command stream. */ + u64 start; + + /** @size: Size of the userspace command stream. 
*/ + u32 size; + + /** + * @latest_flush: Flush ID at the time the userspace command + * stream was built. + * + * Needed for the flush reduction mechanism. + */ + u32 latest_flush; + } call_info; + + /** @ringbuf: Position of this job is in the ring buffer. */ + struct { + /** @start: Start offset. */ + u64 start; + + /** @end: End offset. */ + u64 end; + } ringbuf; + + /** + * @node: Used to insert the job in the panthor_queue::fence_ctx::in_flight_jobs + * list. + */ + struct list_head node; + + /** @done_fence: Fence signaled when the job is finished or cancelled. */ + struct dma_fence *done_fence; +}; + +static void +panthor_queue_put_syncwait_obj(struct panthor_queue *queue) +{ + if (queue->syncwait.kmap) { + struct iosys_map map = IOSYS_MAP_INIT_VADDR(queue->syncwait.kmap); + + drm_gem_vunmap_unlocked(queue->syncwait.obj, &map); + queue->syncwait.kmap = NULL; + } + + drm_gem_object_put(queue->syncwait.obj); + queue->syncwait.obj = NULL; +} + +static void * +panthor_queue_get_syncwait_obj(struct panthor_group *group, struct panthor_queue *queue) +{ + struct panthor_device *ptdev = group->ptdev; + struct panthor_gem_object *bo; + struct iosys_map map; + int ret; + + if (queue->syncwait.kmap) + return queue->syncwait.kmap + queue->syncwait.offset; + + bo = panthor_vm_get_bo_for_va(group->vm, + queue->syncwait.gpu_va, + &queue->syncwait.offset); + if (drm_WARN_ON(&ptdev->base, IS_ERR_OR_NULL(bo))) + goto err_put_syncwait_obj; + + queue->syncwait.obj = &bo->base.base; + ret = drm_gem_vmap_unlocked(queue->syncwait.obj, &map); + if (drm_WARN_ON(&ptdev->base, ret)) + goto err_put_syncwait_obj; + + queue->syncwait.kmap = map.vaddr; + if (drm_WARN_ON(&ptdev->base, !queue->syncwait.kmap)) + goto err_put_syncwait_obj; + + return queue->syncwait.kmap + queue->syncwait.offset; + +err_put_syncwait_obj: + panthor_queue_put_syncwait_obj(queue); + return NULL; +} + +static void group_free_queue(struct panthor_group *group, struct panthor_queue *queue) +{ + if (IS_ERR_OR_NULL(queue)) + return; + + if (queue->entity.fence_context) + drm_sched_entity_destroy(&queue->entity); + + if (queue->scheduler.ops) + drm_sched_fini(&queue->scheduler); + + panthor_queue_put_syncwait_obj(queue); + + panthor_kernel_bo_destroy(group->vm, queue->ringbuf); + panthor_kernel_bo_destroy(panthor_fw_vm(group->ptdev), queue->iface.mem); + + kfree(queue); +} + +static void group_release_work(struct work_struct *work) +{ + struct panthor_group *group = container_of(work, + struct panthor_group, + release_work); + struct panthor_device *ptdev = group->ptdev; + u32 i; + + for (i = 0; i < group->queue_count; i++) + group_free_queue(group, group->queues[i]); + + panthor_kernel_bo_destroy(panthor_fw_vm(ptdev), group->suspend_buf); + panthor_kernel_bo_destroy(panthor_fw_vm(ptdev), group->protm_suspend_buf); + panthor_kernel_bo_destroy(group->vm, group->syncobjs); + + panthor_vm_put(group->vm); + kfree(group); +} + +static void group_release(struct kref *kref) +{ + struct panthor_group *group = container_of(kref, + struct panthor_group, + refcount); + struct panthor_device *ptdev = group->ptdev; + + drm_WARN_ON(&ptdev->base, group->csg_id >= 0); + drm_WARN_ON(&ptdev->base, !list_empty(&group->run_node)); + drm_WARN_ON(&ptdev->base, !list_empty(&group->wait_node)); + + queue_work(panthor_cleanup_wq, &group->release_work); +} + +static void group_put(struct panthor_group *group) +{ + if (group) + kref_put(&group->refcount, group_release); +} + +static struct panthor_group * +group_get(struct panthor_group *group) +{ + if (group) + 
kref_get(&group->refcount); + + return group; +} + +/** + * group_bind_locked() - Bind a group to a group slot + * @group: Group. + * @csg_id: Slot. + * + * Return: 0 on success, a negative error code otherwise. + */ +static int +group_bind_locked(struct panthor_group *group, u32 csg_id) +{ + struct panthor_device *ptdev = group->ptdev; + struct panthor_csg_slot *csg_slot; + int ret; + + lockdep_assert_held(&ptdev->scheduler->lock); + + if (drm_WARN_ON(&ptdev->base, group->csg_id != -1 || csg_id >= MAX_CSGS || + ptdev->scheduler->csg_slots[csg_id].group)) + return -EINVAL; + + ret = panthor_vm_active(group->vm); + if (ret) + return ret; + + csg_slot = &ptdev->scheduler->csg_slots[csg_id]; + group_get(group); + group->csg_id = csg_id; + + /* Dummy doorbell allocation: doorbell is assigned to the group and + * all queues use the same doorbell. + * + * TODO: Implement LRU-based doorbell assignment, so the most often + * updated queues get their own doorbell, thus avoiding useless checks + * on queues belonging to the same group that are rarely updated. + */ + for (u32 i = 0; i < group->queue_count; i++) + group->queues[i]->doorbell_id = csg_id + 1; + + csg_slot->group = group; + + return 0; +} + +/** + * group_unbind_locked() - Unbind a group from a slot. + * @group: Group to unbind. + * + * Return: 0 on success, a negative error code otherwise. + */ +static int +group_unbind_locked(struct panthor_group *group) +{ + struct panthor_device *ptdev = group->ptdev; + struct panthor_csg_slot *slot; + + lockdep_assert_held(&ptdev->scheduler->lock); + + if (drm_WARN_ON(&ptdev->base, group->csg_id < 0 || group->csg_id >= MAX_CSGS)) + return -EINVAL; + + if (drm_WARN_ON(&ptdev->base, group->state == PANTHOR_CS_GROUP_ACTIVE)) + return -EINVAL; + + slot = &ptdev->scheduler->csg_slots[group->csg_id]; + panthor_vm_idle(group->vm); + group->csg_id = -1; + + /* Tiler OOM events will be re-issued next time the group is scheduled. */ + atomic_set(&group->tiler_oom, 0); + cancel_work(&group->tiler_oom_work); + + for (u32 i = 0; i < group->queue_count; i++) + group->queues[i]->doorbell_id = -1; + + slot->group = NULL; + + group_put(group); + return 0; +} + +/** + * cs_slot_prog_locked() - Program a queue slot + * @ptdev: Device. + * @csg_id: Group slot ID. + * @cs_id: Queue slot ID. + * + * Program a queue slot with the queue information so things can start being + * executed on this queue. + * + * The group slot must have a group bound to it already (group_bind_locked()). 
+ */ +static void +cs_slot_prog_locked(struct panthor_device *ptdev, u32 csg_id, u32 cs_id) +{ + struct panthor_queue *queue = ptdev->scheduler->csg_slots[csg_id].group->queues[cs_id]; + struct panthor_fw_cs_iface *cs_iface = panthor_fw_get_cs_iface(ptdev, csg_id, cs_id); + + lockdep_assert_held(&ptdev->scheduler->lock); + + queue->iface.input->extract = queue->iface.output->extract; + drm_WARN_ON(&ptdev->base, queue->iface.input->insert < queue->iface.input->extract); + + cs_iface->input->ringbuf_base = panthor_kernel_bo_gpuva(queue->ringbuf); + cs_iface->input->ringbuf_size = panthor_kernel_bo_size(queue->ringbuf); + cs_iface->input->ringbuf_input = queue->iface.input_fw_va; + cs_iface->input->ringbuf_output = queue->iface.output_fw_va; + cs_iface->input->config = CS_CONFIG_PRIORITY(queue->priority) | + CS_CONFIG_DOORBELL(queue->doorbell_id); + cs_iface->input->ack_irq_mask = ~0; + panthor_fw_update_reqs(cs_iface, req, + CS_IDLE_SYNC_WAIT | + CS_IDLE_EMPTY | + CS_STATE_START | + CS_EXTRACT_EVENT, + CS_IDLE_SYNC_WAIT | + CS_IDLE_EMPTY | + CS_STATE_MASK | + CS_EXTRACT_EVENT); + if (queue->iface.input->insert != queue->iface.input->extract && queue->timeout_suspended) { + drm_sched_resume_timeout(&queue->scheduler, queue->remaining_time); + queue->timeout_suspended = false; + } +} + +/** + * @cs_slot_reset_locked() - Reset a queue slot + * @ptdev: Device. + * @csg_id: Group slot. + * @cs_id: Queue slot. + * + * Change the queue slot state to STOP and suspend the queue timeout if + * the queue is not blocked. + * + * The group slot must have a group bound to it (group_bind_locked()). + */ +static int +cs_slot_reset_locked(struct panthor_device *ptdev, u32 csg_id, u32 cs_id) +{ + struct panthor_fw_cs_iface *cs_iface = panthor_fw_get_cs_iface(ptdev, csg_id, cs_id); + struct panthor_group *group = ptdev->scheduler->csg_slots[csg_id].group; + struct panthor_queue *queue = group->queues[cs_id]; + + lockdep_assert_held(&ptdev->scheduler->lock); + + panthor_fw_update_reqs(cs_iface, req, + CS_STATE_STOP, + CS_STATE_MASK); + + /* If the queue is blocked, we want to keep the timeout running, so + * we can detect unbounded waits and kill the group when that happens. + */ + if (!(group->blocked_queues & BIT(cs_id)) && !queue->timeout_suspended) { + queue->remaining_time = drm_sched_suspend_timeout(&queue->scheduler); + queue->timeout_suspended = true; + WARN_ON(queue->remaining_time > msecs_to_jiffies(JOB_TIMEOUT_MS)); + } + + return 0; +} + +/** + * csg_slot_sync_priority_locked() - Synchronize the group slot priority + * @ptdev: Device. + * @csg_id: Group slot ID. + * + * Group slot priority update happens asynchronously. When we receive a |
