| author | Linus Torvalds <torvalds@linux-foundation.org> | 2022-12-12 13:55:31 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2022-12-12 13:55:31 -0800 |
| commit | c1f0fcd85d3d66f002fc1a4986363840fcca766d (patch) | |
| tree | 414ad8ea3b38a33585de78686528d4c99e89598e | |
| parent | 691806e977a3a64895bd891878ed726cdbd282c0 (diff) | |
| parent | f04facfb993de47e2133b2b842d72b97b1c50162 (diff) | |
Merge tag 'cxl-for-6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl
Pull cxl updates from Dan Williams:
"Compute Express Link (CXL) updates for 6.2.
While it may seem backwards, the CXL update this time around includes
some focus on CXL 1.x enabling where the work to date had been with
CXL 2.0 (VH topologies) in mind.
First generation CXL can mostly be supported via BIOS, similar to DDR;
however, it became clear there are use cases for OS-native CXL error
handling, and some CXL 3.0 endpoint features can be deployed on CXL 1.x
hosts (Restricted CXL Host (RCH) topologies). So, this update brings
RCH topologies into the Linux CXL device model.
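The enumeration flow that enables this lives in the drivers/cxl/acpi.c hunks below. As a rough sketch (simplified from that code, not verbatim): a CEDT.CHBS entry that reports CXL 1.1 publishes an 8K Root Complex Register Block (RCRB) rather than a component register block, so the component registers have to be derived from the RCRB before the host bridge can be added as a dport:

```c
/*
 * Simplified sketch of the RCH handling added to drivers/cxl/acpi.c
 * (see the diff below); error handling elided.
 */
static void parse_chbs_entry(struct cxl_chbs_context *ctx,
                             struct acpi_cedt_chbs *chbs)
{
        if (chbs->cxl_version != ACPI_CEDT_CHBS_VERSION_CXL11) {
                /* CXL 2.0+: the CHBS base is the component register block */
                ctx->chbcr = chbs->base;
                return;
        }

        /* CXL 1.1 (RCH): an 8K RCRB from which the registers are derived */
        ctx->rcrb = chbs->base;
        ctx->chbcr = cxl_rcrb_to_component(ctx->dev, chbs->base,
                                           CXL_RCRB_DOWNSTREAM);
}
```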
In support of the ongoing CXL 2.0+ enabling, two new core kernel
facilities are added.
One is the ability for the kernel to flag collisions between userspace
access to PCI configuration registers and kernel accesses. This is
brought on by the PCIe Data-Object-Exchange (DOE) facility, a hardware
mailbox over config-cycles.
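For a driver this surfaces as claiming the affected config-space range so that userspace access to it is flagged. A minimal sketch, assuming the pci_request_config_region_exclusive() helper added alongside this work; the 0x18-byte length (DOE capability header through the read-data mailbox register) is an illustrative assumption:

```c
/*
 * Sketch: claim a DOE mailbox's config registers so that userspace
 * access (sysfs, setpci) is flagged instead of silently racing the
 * kernel. The length value is an assumption for illustration.
 */
static int claim_doe_registers(struct pci_dev *pdev, u16 doe_cap_offset)
{
        return pci_request_config_region_exclusive(pdev, doe_cap_offset,
                                                   0x18, "doe");
}
```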
The other is a cpu_cache_invalidate_memregion() API that maps to
wbinvd_on_all_cpus() on x86. To prevent abuse, it is disabled in guest
VMs and on architectures that do not yet support it. The CXL paths that
need it, dynamic memory region creation and security commands (erase /
unlock), are disabled when it is not present.
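The x86 implementation is in the arch/x86 hunks below; callers follow a check-then-invalidate pattern. A minimal sketch of that pattern (IORES_DESC_CXL is the descriptor the CXL region driver passes; other users would pass their own resource descriptor):

```c
#include <linux/memregion.h>

/*
 * Sketch of the guarded-usage pattern: bail out (e.g. when running in
 * a guest VM) rather than proceed with potentially stale CPU caches.
 */
static int invalidate_after_unlock(void)
{
        if (!cpu_cache_has_invalidate_memregion())
                return -ENXIO;
        return cpu_cache_invalidate_memregion(IORES_DESC_CXL);
}
```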
On the CXL 2.0+ front, the subsystem this cycle gains support for
Persistent Memory Security commands, error handling in response to
PCIe AER notifications, and the "XOR" host bridge interleave
algorithm.
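The XOR algorithm selects the target host bridge by taking, for each CEDT.CXIMS xormap, the parity of the host physical address masked by that map, with each parity result contributing one bit of the target index. A standalone sketch of the power-of-2-ways case, mirroring the xormap loop of cxl_xor_calc_n() in the drivers/cxl/acpi.c hunk below:

```c
/*
 * Parity-of-masked-HPA target selection for power-of-2 interleave
 * ways; one target-index bit per xormap.
 */
static unsigned int xor_hb_target(u64 hpa, const u64 *xormaps, int nr_maps)
{
        unsigned int n = 0;

        for (int i = 0; i < nr_maps; i++)
                n |= (hweight64(hpa & xormaps[i]) & 1) << i;

        return n; /* index into the host bridge target list */
}
```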
Summary:
- Add the cpu_cache_invalidate_memregion() API for cache flushing in
response to physical memory reconfiguration, or memory-side data
invalidation from operations like secure erase or memory-device
unlock.
- Add a facility for the kernel to warn about collisions between
kernel and userspace access to PCI configuration registers
- Add support for Restricted CXL Host (RCH) topologies (formerly CXL
1.1)
- Add handling and reporting of CXL errors reported via the PCIe AER
mechanism (a sketch of the new driver-side callback follows this list)
- Add support for CXL Persistent Memory Security commands
- Add support for the "XOR" algorithm for CXL host bridge interleave
- Rework / simplify CXL to NVDIMM interactions
- Miscellaneous cleanups and fixes"
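As part of the AER work summarized above, the PCI core gains an optional cor_error_detected() callback (documented in the pci-error-recovery.rst hunk below) that drivers/cxl/pci.c uses to log corrected errors. A minimal sketch of a driver opting in; the handler and struct names here are hypothetical:

```c
/*
 * Sketch: wiring the new optional correctable-error callback into a
 * driver's struct pci_error_handlers.
 */
static void my_cor_error_detected(struct pci_dev *pdev)
{
        /* corrected by hardware; nothing to recover, just log it */
        pci_dbg(pdev, "correctable error reported\n");
}

static const struct pci_error_handlers my_error_handlers = {
        .cor_error_detected = my_cor_error_detected,
};
```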
* tag 'cxl-for-6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl: (71 commits)
cxl/region: Fix memdev reuse check
cxl/pci: Remove endian confusion
cxl/pci: Add some type-safety to the AER trace points
cxl/security: Drop security command ioctl uapi
cxl/mbox: Add variable output size validation for internal commands
cxl/mbox: Enable cxl_mbox_send_cmd() users to validate output size
cxl/security: Fix Get Security State output payload endian handling
cxl: update names for interleave ways conversion macros
cxl: update names for interleave granularity conversion macros
cxl/acpi: Warn about an invalid CHBCR in an existing CHBS entry
tools/testing/cxl: Require cache invalidation bypass
cxl/acpi: Fail decoder add if CXIMS for HBIG is missing
cxl/region: Fix spelling mistake "memergion" -> "memregion"
cxl/regs: Fix sparse warning
cxl/acpi: Set ACPI's CXL _OSC to indicate RCD mode support
tools/testing/cxl: Add an RCH topology
cxl/port: Add RCD endpoint port enumeration
cxl/mem: Move devm_cxl_add_endpoint() from cxl_core to cxl_mem
tools/testing/cxl: Add XOR Math support to cxl_test
cxl/acpi: Support CXL XOR Interleave Math (CXIMS)
...
49 files changed, 2649 insertions, 836 deletions
```diff
diff --git a/Documentation/ABI/testing/sysfs-bus-nvdimm b/Documentation/ABI/testing/sysfs-bus-nvdimm
index 1c1f5acbf53d..de8c5a59c77f 100644
--- a/Documentation/ABI/testing/sysfs-bus-nvdimm
+++ b/Documentation/ABI/testing/sysfs-bus-nvdimm
@@ -41,3 +41,17 @@ KernelVersion:	5.18
 Contact:	Kajol Jain <kjain@linux.ibm.com>
 Description:	(RO) This sysfs file exposes the cpumask which is designated to
 		to retrieve nvdimm pmu event counter data.
+
+What:		/sys/bus/nd/devices/nmemX/cxl/id
+Date:		November 2022
+KernelVersion:	6.2
+Contact:	Dave Jiang <dave.jiang@intel.com>
+Description:	(RO) Show the id (serial) of the device. This is CXL specific.
+
+What:		/sys/bus/nd/devices/nmemX/cxl/provider
+Date:		November 2022
+KernelVersion:	6.2
+Contact:	Dave Jiang <dave.jiang@intel.com>
+Description:	(RO) Shows the CXL bridge device that ties to a CXL memory device
+		to this NVDIMM device. I.e. the parent of the device returned is
+		a /sys/bus/cxl/devices/memX instance.
diff --git a/Documentation/PCI/pci-error-recovery.rst b/Documentation/PCI/pci-error-recovery.rst
index 187f43a03200..bdafeb4b66dc 100644
--- a/Documentation/PCI/pci-error-recovery.rst
+++ b/Documentation/PCI/pci-error-recovery.rst
@@ -83,6 +83,7 @@ This structure has the form::
 	int (*mmio_enabled)(struct pci_dev *dev);
 	int (*slot_reset)(struct pci_dev *dev);
 	void (*resume)(struct pci_dev *dev);
+	void (*cor_error_detected)(struct pci_dev *dev);
 };
 
 The possible channel states are::
@@ -422,5 +423,11 @@ That is, the recovery API only requires that:
   - drivers/net/cxgb3
   - drivers/net/s2io.c
 
+The cor_error_detected() callback is invoked in handle_error_source() when
+the error severity is "correctable". The callback is optional and allows
+additional logging to be done if desired. See example:
+
+  - drivers/cxl/pci.c
+
 The End
 -------
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 68a2a04c02e5..276b3baf1c50 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -69,6 +69,7 @@ config X86
 	select ARCH_ENABLE_THP_MIGRATION if X86_64 && TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE	if !X86_PAE
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 2e5a045731de..ef34ba21aa92 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -20,6 +20,7 @@
 #include <linux/kernel.h>
 #include <linux/cc_platform.h>
 #include <linux/set_memory.h>
+#include <linux/memregion.h>
 
 #include <asm/e820/api.h>
 #include <asm/processor.h>
@@ -330,6 +331,23 @@ void arch_invalidate_pmem(void *addr, size_t size)
 EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
 #endif
 
+#ifdef CONFIG_ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
+bool cpu_cache_has_invalidate_memregion(void)
+{
+	return !cpu_feature_enabled(X86_FEATURE_HYPERVISOR);
+}
+EXPORT_SYMBOL_NS_GPL(cpu_cache_has_invalidate_memregion, DEVMEM);
+
+int cpu_cache_invalidate_memregion(int res_desc)
+{
+	if (WARN_ON_ONCE(!cpu_cache_has_invalidate_memregion()))
+		return -ENXIO;
+	wbinvd_on_all_cpus();
+	return 0;
+}
+EXPORT_SYMBOL_NS_GPL(cpu_cache_invalidate_memregion, DEVMEM);
+#endif
+
 static void __cpa_flush_all(void *arg)
 {
 	unsigned long cache = (unsigned long)arg;
diff --git a/drivers/acpi/nfit/intel.c b/drivers/acpi/nfit/intel.c
index 8dd792a55730..3902759abcba 100644
--- a/drivers/acpi/nfit/intel.c
+++ b/drivers/acpi/nfit/intel.c
@@ -3,6 +3,7 @@
 #include <linux/libnvdimm.h>
 #include <linux/ndctl.h>
 #include <linux/acpi.h>
+#include <linux/memregion.h>
 #include <asm/smp.h>
 #include "intel.h"
 #include "nfit.h"
@@ -190,8 +191,6 @@ static int intel_security_change_key(struct nvdimm *nvdimm,
 	}
 }
 
-static void nvdimm_invalidate_cache(void);
-
 static int __maybe_unused intel_security_unlock(struct nvdimm *nvdimm,
 		const struct nvdimm_key_data *key_data)
 {
@@ -227,9 +226,6 @@ static int __maybe_unused intel_security_unlock(struct nvdimm *nvdimm,
 		return -EIO;
 	}
 
-	/* DIMM unlocked, invalidate all CPU caches before we read it */
-	nvdimm_invalidate_cache();
-
 	return 0;
 }
 
@@ -297,8 +293,6 @@ static int __maybe_unused intel_security_erase(struct nvdimm *nvdimm,
 	if (!test_bit(cmd, &nfit_mem->dsm_mask))
 		return -ENOTTY;
 
-	/* flush all cache before we erase DIMM */
-	nvdimm_invalidate_cache();
 	memcpy(nd_cmd.cmd.passphrase, key->data,
 			sizeof(nd_cmd.cmd.passphrase));
 	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
@@ -317,8 +311,6 @@ static int __maybe_unused intel_security_erase(struct nvdimm *nvdimm,
 		return -ENXIO;
 	}
 
-	/* DIMM erased, invalidate all CPU caches before we read it */
-	nvdimm_invalidate_cache();
 	return 0;
 }
 
@@ -354,8 +346,6 @@ static int __maybe_unused intel_security_query_overwrite(struct nvdimm *nvdimm)
 		return -ENXIO;
 	}
 
-	/* flush all cache before we make the nvdimms available */
-	nvdimm_invalidate_cache();
 	return 0;
 }
 
@@ -380,8 +370,6 @@ static int __maybe_unused intel_security_overwrite(struct nvdimm *nvdimm,
 	if (!test_bit(NVDIMM_INTEL_OVERWRITE, &nfit_mem->dsm_mask))
 		return -ENOTTY;
 
-	/* flush all cache before we erase DIMM */
-	nvdimm_invalidate_cache();
 	memcpy(nd_cmd.cmd.passphrase, nkey->data,
 			sizeof(nd_cmd.cmd.passphrase));
 	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
@@ -401,22 +389,6 @@ static int __maybe_unused intel_security_overwrite(struct nvdimm *nvdimm,
 	}
 }
 
-/*
- * TODO: define a cross arch wbinvd equivalent when/if
- * NVDIMM_FAMILY_INTEL command support arrives on another arch.
- */
-#ifdef CONFIG_X86
-static void nvdimm_invalidate_cache(void)
-{
-	wbinvd_on_all_cpus();
-}
-#else
-static void nvdimm_invalidate_cache(void)
-{
-	WARN_ON_ONCE("cache invalidation required after unlock\n");
-}
-#endif
-
 static const struct nvdimm_security_ops __intel_security_ops = {
 	.get_flags = intel_security_flags,
 	.freeze = intel_security_freeze,
diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c
index 4e3db20e9cbb..b3c202d2a433 100644
--- a/drivers/acpi/pci_root.c
+++ b/drivers/acpi/pci_root.c
@@ -493,6 +493,7 @@ static u32 calculate_cxl_support(void)
 	u32 support;
 
 	support = OSC_CXL_2_0_PORT_DEV_REG_ACCESS_SUPPORT;
+	support |= OSC_CXL_1_1_PORT_REG_ACCESS_SUPPORT;
 	if (pci_aer_available())
 		support |= OSC_CXL_PROTOCOL_ERR_REPORTING_SUPPORT;
 	if (IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index 768ced3d6fe8..0ac53c422c31 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -111,4 +111,22 @@ config CXL_REGION
 	select MEMREGION
 	select GET_FREE_REGION
 
+config CXL_REGION_INVALIDATION_TEST
+	bool "CXL: Region Cache Management Bypass (TEST)"
+	depends on CXL_REGION
+	help
+	  CXL Region management and security operations potentially invalidate
+	  the content of CPU caches without notifiying those caches to
+	  invalidate the affected cachelines. The CXL Region driver attempts
+	  to invalidate caches when those events occur. If that invalidation
+	  fails the region will fail to enable. Reasons for cache
+	  invalidation failure are due to the CPU not providing a cache
+	  invalidation mechanism. For example usage of wbinvd is restricted to
+	  bare metal x86. However, for testing purposes toggling this option
+	  can disable that data integrity safety and proceed with enabling
+	  regions when there might be conflicting contents in the CPU cache.
+
+	  If unsure, or if this kernel is meant for production environments,
+	  say N.
+
 endif
diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
index a78270794150..db321f48ba52 100644
--- a/drivers/cxl/Makefile
+++ b/drivers/cxl/Makefile
@@ -9,5 +9,5 @@ obj-$(CONFIG_CXL_PORT) += cxl_port.o
 cxl_mem-y := mem.o
 cxl_pci-y := pci.o
 cxl_acpi-y := acpi.o
-cxl_pmem-y := pmem.o
+cxl_pmem-y := pmem.o security.o
 cxl_port-y := port.o
diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
index fb649683dd3a..ad0849af42d7 100644
--- a/drivers/cxl/acpi.c
+++ b/drivers/cxl/acpi.c
@@ -6,9 +6,120 @@
 #include <linux/kernel.h>
 #include <linux/acpi.h>
 #include <linux/pci.h>
+#include <asm/div64.h>
 #include "cxlpci.h"
 #include "cxl.h"
 
+#define CXL_RCRB_SIZE	SZ_8K
+
+struct cxl_cxims_data {
+	int nr_maps;
+	u64 xormaps[];
+};
+
+/*
+ * Find a targets entry (n) in the host bridge interleave list.
+ * CXL Specfication 3.0 Table 9-22
+ */
+static int cxl_xor_calc_n(u64 hpa, struct cxl_cxims_data *cximsd, int iw,
+			  int ig)
+{
+	int i = 0, n = 0;
+	u8 eiw;
+
+	/* IW: 2,4,6,8,12,16 begin building 'n' using xormaps */
+	if (iw != 3) {
+		for (i = 0; i < cximsd->nr_maps; i++)
+			n |= (hweight64(hpa & cximsd->xormaps[i]) & 1) << i;
+	}
+	/* IW: 3,6,12 add a modulo calculation to 'n' */
+	if (!is_power_of_2(iw)) {
+		if (ways_to_eiw(iw, &eiw))
+			return -1;
+		hpa &= GENMASK_ULL(51, eiw + ig);
+		n |= do_div(hpa, 3) << i;
+	}
+	return n;
+}
+
+static struct cxl_dport *cxl_hb_xor(struct cxl_root_decoder *cxlrd, int pos)
+{
+	struct cxl_cxims_data *cximsd = cxlrd->platform_data;
+	struct cxl_switch_decoder *cxlsd = &cxlrd->cxlsd;
+	struct cxl_decoder *cxld = &cxlsd->cxld;
+	int ig = cxld->interleave_granularity;
+	int iw = cxld->interleave_ways;
+	int n = 0;
+	u64 hpa;
+
+	if (dev_WARN_ONCE(&cxld->dev,
+			  cxld->interleave_ways != cxlsd->nr_targets,
+			  "misconfigured root decoder\n"))
+		return NULL;
+
+	hpa = cxlrd->res->start + pos * ig;
+
+	/* Entry (n) is 0 for no interleave (iw == 1) */
+	if (iw != 1)
+		n = cxl_xor_calc_n(hpa, cximsd, iw, ig);
+
+	if (n < 0)
+		return NULL;
+
+	return cxlrd->cxlsd.target[n];
+}
+
+struct cxl_cxims_context {
+	struct device *dev;
+	struct cxl_root_decoder *cxlrd;
+};
+
+static int cxl_parse_cxims(union acpi_subtable_headers *header, void *arg,
+			   const unsigned long end)
+{
+	struct acpi_cedt_cxims *cxims = (struct acpi_cedt_cxims *)header;
+	struct cxl_cxims_context *ctx = arg;
+	struct cxl_root_decoder *cxlrd = ctx->cxlrd;
+	struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
+	struct device *dev = ctx->dev;
+	struct cxl_cxims_data *cximsd;
+	unsigned int hbig, nr_maps;
+	int rc;
+
+	rc = eig_to_granularity(cxims->hbig, &hbig);
+	if (rc)
+		return rc;
+
+	/* Does this CXIMS entry apply to the given CXL Window? */
+	if (hbig != cxld->interleave_granularity)
+		return 0;
+
+	/* IW 1,3 do not use xormaps and skip this parsing entirely */
+	if (is_power_of_2(cxld->interleave_ways))
+		/* 2, 4, 8, 16 way */
+		nr_maps = ilog2(cxld->interleave_ways);
+	else
+		/* 6, 12 way */
+		nr_maps = ilog2(cxld->interleave_ways / 3);
+
+	if (cxims->nr_xormaps < nr_maps) {
+		dev_dbg(dev, "CXIMS nr_xormaps[%d] expected[%d]\n",
+			cxims->nr_xormaps, nr_maps);
+		return -ENXIO;
+	}
+
+	cximsd = devm_kzalloc(dev, struct_size(cximsd, xormaps, nr_maps),
+			      GFP_KERNEL);
+	if (!cximsd)
+		return -ENOMEM;
+	memcpy(cximsd->xormaps, cxims->xormap_list,
+	       nr_maps * sizeof(*cximsd->xormaps));
+	cximsd->nr_maps = nr_maps;
+	cxlrd->platform_data = cximsd;
+
+	return 0;
+}
+
 static unsigned long cfmws_to_decoder_flags(int restrictions)
 {
 	unsigned long flags = CXL_DECODER_F_ENABLE;
@@ -33,8 +144,10 @@ static int cxl_acpi_cfmws_verify(struct device *dev,
 	int rc, expected_len;
 	unsigned int ways;
 
-	if (cfmws->interleave_arithmetic != ACPI_CEDT_CFMWS_ARITHMETIC_MODULO) {
-		dev_err(dev, "CFMWS Unsupported Interleave Arithmetic\n");
+	if (cfmws->interleave_arithmetic != ACPI_CEDT_CFMWS_ARITHMETIC_MODULO &&
+	    cfmws->interleave_arithmetic != ACPI_CEDT_CFMWS_ARITHMETIC_XOR) {
+		dev_err(dev, "CFMWS Unknown Interleave Arithmetic: %d\n",
+			cfmws->interleave_arithmetic);
 		return -EINVAL;
 	}
 
@@ -48,7 +161,7 @@ static int cxl_acpi_cfmws_verify(struct device *dev,
 		return -EINVAL;
 	}
 
-	rc = cxl_to_ways(cfmws->interleave_ways, &ways);
+	rc = eiw_to_ways(cfmws->interleave_ways, &ways);
 	if (rc) {
 		dev_err(dev, "CFMWS Interleave Ways (%d) invalid\n",
 			cfmws->interleave_ways);
@@ -70,6 +183,10 @@ static int cxl_acpi_cfmws_verify(struct device *dev,
 	return 0;
 }
 
+/*
+ * Note, @dev must be the first member, see 'struct cxl_chbs_context'
+ * and mock_acpi_table_parse_cedt()
+ */
 struct cxl_cfmws_context {
 	struct device *dev;
 	struct cxl_port *root_port;
@@ -84,9 +201,11 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
 	struct cxl_cfmws_context *ctx = arg;
 	struct cxl_port *root_port = ctx->root_port;
 	struct resource *cxl_res = ctx->cxl_res;
+	struct cxl_cxims_context cxims_ctx;
 	struct cxl_root_decoder *cxlrd;
 	struct device *dev = ctx->dev;
 	struct acpi_cedt_cfmws *cfmws;
+	cxl_calc_hb_fn cxl_calc_hb;
 	struct cxl_decoder *cxld;
 	unsigned int ways, i, ig;
 	struct resource *res;
@@ -102,10 +221,10 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
 		return 0;
 	}
 
-	rc = cxl_to_ways(cfmws->interleave_ways, &ways);
+	rc = eiw_to_ways(cfmws->interleave_ways, &ways);
 	if (rc)
 		return rc;
-	rc = cxl_to_granularity(cfmws->granularity, &ig);
+	rc = eig_to_granularity(cfmws->granularity, &ig);
 	if (rc)
 		return rc;
 	for (i = 0; i < ways; i++)
@@ -128,7 +247,12 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
 	if (rc)
 		goto err_insert;
 
-	cxlrd = cxl_root_decoder_alloc(root_port, ways);
+	if (cfmws->interleave_arithmetic == ACPI_CEDT_CFMWS_ARITHMETIC_MODULO)
+		cxl_calc_hb = cxl_hb_modulo;
+	else
+		cxl_calc_hb = cxl_hb_xor;
+
+	cxlrd = cxl_root_decoder_alloc(root_port, ways, cxl_calc_hb);
 	if (IS_ERR(cxlrd))
 		return 0;
 
@@ -148,7 +272,25 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
 			ig = CXL_DECODER_MIN_GRANULARITY;
 		cxld->interleave_granularity = ig;
 
+	if (cfmws->interleave_arithmetic == ACPI_CEDT_CFMWS_ARITHMETIC_XOR) {
+		if (ways != 1 && ways != 3) {
+			cxims_ctx = (struct cxl_cxims_context) {
+				.dev = dev,
+				.cxlrd = cxlrd,
+			};
+			rc = acpi_table_parse_cedt(ACPI_CEDT_TYPE_CXIMS,
+						   cxl_parse_cxims, &cxims_ctx);
+			if (rc < 0)
+				goto err_xormap;
+			if (!cxlrd->platform_data) {
+				dev_err(dev, "No CXIMS for HBIG %u\n", ig);
+				rc = -EINVAL;
+				goto err_xormap;
+			}
+		}
+	}
 	rc = cxl_decoder_add(cxld, target_map);
+err_xormap:
 	if (rc)
 		put_device(&cxld->dev);
 	else
@@ -193,34 +335,39 @@ static int add_host_bridge_uport(struct device *match, void *arg)
 {
 	struct cxl_port *root_port = arg;
 	struct device *host = root_port->dev.parent;
-	struct acpi_device *bridge = to_cxl_host_bridge(host, match);
+	struct acpi_device *hb = to_cxl_host_bridge(host, match);
 	struct acpi_pci_root *pci_root;
 	struct cxl_dport *dport;
 	struct cxl_port *port;
+	struct device *bridge;
 	int rc;
 
-	if (!bridge)
+	if (!hb)
 		return 0;
 
-	dport = cxl_find_dport_by_dev(root_port, match);
+	pci_root = acpi_pci_find_root(hb->handle);
+	bridge = pci_root->bus->bridge;
+	dport = cxl_find_dport_by_dev(root_port, bridge);
 	if (!dport) {
 		dev_dbg(host, "host bridge expected and not found\n");
 		return 0;
 	}
 
-	/*
-	 * Note that this lookup already succeeded in
-	 * to_cxl_host_bridge(), so no need to check for failure here
-	 */
-	pci_root = acpi_pci_find_root(bridge->handle);
-	rc = devm_cxl_register_pci_bus(host, match, pci_root->bus);
+	if (dport->rch) {
+		dev_info(bridge, "host supports CXL (restricted)\n");
+		return 0;
+	}
+
+	rc = devm_cxl_register_pci_bus(host, bridge, pci_root->bus);
 	if (rc)
 		return rc;
 
-	port = devm_cxl_add_port(host, match, dport->component_reg_phys, dport);
+	port = devm_cxl_add_port(host, bridge, dport->component_reg_phys,
+				 dport);
 	if (IS_ERR(port))
 		return PTR_ERR(port);
-	dev_dbg(host, "%s: add: %s\n", dev_name(match), dev_name(&port->dev));
+
+	dev_info(bridge, "host supports CXL\n");
 	return 0;
 }
 
@@ -228,7 +375,9 @@ static int add_host_bridge_uport(struct device *match, void *arg)
 struct cxl_chbs_context {
 	struct device *dev;
 	unsigned long long uid;
+	resource_size_t rcrb;
 	resource_size_t chbcr;
+	u32 cxl_version;
 };
 
 static int cxl_get_chbcr(union acpi_subtable_headers *header, void *arg,
@@ -244,51 +393,86 @@ static int cxl_get_chbcr(union acpi_subtable_headers *header, void *arg,
 	if (ctx->uid != chbs->uid)
 		return 0;
-	ctx->chbcr = chbs->base;
+
+	ctx->cxl_version = chbs->cxl_version;
+	ctx->rcrb = CXL_RESOURCE_NONE;
+	ctx->chbcr = CXL_RESOURCE_NONE;
+
+	if (!chbs->base)
+		return 0;
+
+	if (chbs->cxl_version != ACPI_CEDT_CHBS_VERSION_CXL11) {
+		ctx->chbcr = chbs->base;
+		return 0;
+	}
+
+	if (chbs->length != CXL_RCRB_SIZE)
+		return 0;
+
+	ctx->rcrb = chbs->base;
+	ctx->chbcr = cxl_rcrb_to_component(ctx->dev, chbs->base,
+					   CXL_RCRB_DOWNSTREAM);
 
 	return 0;
 }
 
 static int add_host_bridge_dport(struct device *match, void *arg)
 {
-	acpi_status status;
+	acpi_status rc;
+	struct device *bridge;
 	unsigned long long uid;
 	struct cxl_dport *dport;
 	struct cxl_chbs_context ctx;
+	struct acpi_pci_root *pci_root;
 	struct cxl_port *root_port = arg;
 	struct device *host = root_port->dev.parent;
-	struct acpi_device *bridge = to_cxl_host_bridge(host, match);
+	struct acpi_device *hb = to_cxl_host_bridge(host, match);
 
-	if (!bridge)
+	if (!hb)
 		return 0;
 
-	status = acpi_evaluate_integer(bridge->handle, METHOD_NAME__UID, NULL,
-				       &uid);
-	if (status != AE_OK) {
-		dev_err(host, "unable to retrieve _UID of %s\n",
-			dev_name(match));
+	rc = acpi_evaluate_integer(hb->handle, METHOD_NAME__UID, NULL, &uid);
+	if (rc != AE_OK) {
+		dev_err(match, "unable to retrieve _UID\n");
 		return -ENODEV;
 	}
 
+	dev_dbg(match, "UID found: %lld\n", uid);
+
 	ctx = (struct cxl_chbs_context) {
-		.dev = host,
+		.dev = match,
 		.uid = uid,
 	};
 	acpi_table_parse_cedt(ACPI_CEDT_TYPE_CHBS, cxl_get_chbcr, &ctx);
 
-	if (ctx.chbcr == 0) {
-		dev_warn(host, "No CHBS found for Host Bridge: %s\n",
-			 dev_name(match));
+	if (!ctx.chbcr) {
+		dev_warn(match, "No CHBS found for Host Bridge (UID %lld)\n",
+			 uid);
 		return 0;
 	}
 
-	dport = devm_cxl_add_dport(root_port, match, uid, ctx.chbcr);
-	if (IS_ERR(dport)) {
-		dev_err(host, "failed to add downstream port: %s\n",
-			dev_name(match));
-		return PTR_ERR(dport);
+	if (ctx.rcrb != CXL_RESOURCE_NONE)
+		dev_dbg(match, "RCRB found for UID %lld: %pa\n", uid, &ctx.rcrb);
+
+	if (ctx.chbcr == CXL_RESOURCE_NONE) {
+		dev_warn(match, "CHBCR invalid for Host Bridge (UID %lld)\n",
+			 uid);
+		return 0;
 	}
-	dev_dbg(host, "add dport%llu: %s\n", uid, dev_name(match));
+
+	dev_dbg(match, "CHBCR found: %pa\n", &ctx.chbcr);
+
+	pci_root = acpi_pci_find_root(hb->handle);
+	bridge = pci_root->bus->bridge;
+	if (ctx.cxl_version == ACPI_CEDT_CHBS_VERSION_CXL11)
+		dport = devm_cxl_add_rch_dport(root_port, bridge, uid,
+					       ctx.chbcr, ctx.rcrb);
+	else
+		dport = devm_cxl_add_dport(root_port, bridge, uid,
+					   ctx.chbcr);
+	if (IS_ERR(dport))
+		return PTR_ERR(dport);
```