commit a031fe8d1d32898582e36ccbffa9847d16f67aa2
tree bb097e00fcf06c92efffe95e48163e5658d527fc /rust/alloc
parent f2586d921cea4feeddd1cc5ee3495700540dba8f
parent 4af84c6a85c63bec24611e46bb3de2c0a6602a51
author Linus Torvalds <torvalds@linux-foundation.org> 2023-08-29 08:19:46 -0700
committer Linus Torvalds <torvalds@linux-foundation.org> 2023-08-29 08:19:46 -0700
Merge tag 'rust-6.6' of https://github.com/Rust-for-Linux/linux
Pull rust updates from Miguel Ojeda:
"In terms of lines, most changes this time are on the pinned-init API
and infrastructure. While we have a Rust version upgrade, and thus a
bunch of changes from the vendored 'alloc' crate as usual, this time
those do not account for many lines.
Toolchain and infrastructure:
- Upgrade to Rust 1.71.1. This is the second such upgrade, which is a
smaller jump compared to the last time.
This version allows us to remove the '__rust_*' allocator functions
-- the compiler now generates them as expected, so our
'KernelAllocator' is now used.
It also introduces the 'offset_of!' macro in the standard library
(as an unstable feature), which we will need soon. So far, we had
been using a declarative macro as a prerequisite in some
not-yet-landed patch series; that macro did not support sub-fields
(i.e. nested structs):
        #[repr(C)]
        struct S {
            a: u16,
            b: (u8, u8),
        }

        assert_eq!(offset_of!(S, b.1), 3);
- Upgrade to bindgen 0.65.1. This is the first time we upgrade its
version.
Given it is a fairly big jump, it comes with a fair number of
improvements/changes that affect us, such as a fix needed to
support LLVM 16 as well as proper support for '__noreturn' C
functions, which are now mapped to return the '!' type in Rust:
        void __noreturn f(void); // C
        pub fn f() -> !;         // Rust
- 'scripts/rust_is_available.sh' improvements and fixes.
This series takes care of all the issues known so far and adds a
few new checks to cover even more cases, plus some more help
texts. All this together will hopefully make problematic setups
easier for users building the kernel to identify and solve.
In addition, it adds a test suite which covers all branches of the
shell script, as well as tests for the issues found so far.
- Support rust-analyzer for out-of-tree modules too.
- Give 'cfg's to rust-analyzer for the 'core' and 'alloc' crates.
- Drop 'scripts/is_rust_module.sh' since it is not needed anymore.
Macros crate:
- New 'paste!' proc macro.
This macro is a more flexible version of 'concat_idents!': it
allows the resulting identifier to be used to declare new items and
it allows transforming the identifiers before concatenating them,
e.g.
        let x_1 = 42;
        paste!(let [<x _2>] = [<x _1>];);
        assert!(x_1 == x_2);
The macro is then used for several of the pinned-init API changes
in this pull.
Pinned-init API:
- Make '#[pin_data]' compatible with conditional compilation of
fields, allowing code like:
        #[pin_data]
        pub struct Foo {
            #[cfg(CONFIG_BAR)]
            a: Bar,
            #[cfg(not(CONFIG_BAR))]
            a: Baz,
        }
- New '#[derive(Zeroable)]' proc macro for the 'Zeroable' trait,
which allows 'unsafe' implementations for structs where every field
implements the 'Zeroable' trait, e.g.:
        #[derive(Zeroable)]
        pub struct DriverData {
            id: i64,
            buf_ptr: *mut u8,
            len: usize,
        }
- Add '..Zeroable::zeroed()' syntax to the 'pin_init!' macro for
zeroing all other fields, e.g.:
        pin_init!(Buf {
            buf: [1; 64],
            ..Zeroable::zeroed()
        });
- New '{,pin_}init_array_from_fn()' functions to create array
initializers given a generator function, e.g.:
        let b: Box<[usize; 1_000]> = Box::init::<Error>(
            init_array_from_fn(|i| i)
        ).unwrap();

        assert_eq!(b.len(), 1_000);
        assert_eq!(b[123], 123);
- New '{,pin_}chain' methods for '{,Pin}Init<T, E>' that allow
executing a closure on the value directly after initialization,
e.g.:
        let foo = init!(Foo {
            buf <- init::zeroed()
        }).chain(|foo| {
            foo.setup();
            Ok(())
        });
- Support arbitrary paths in init macros, instead of just identifiers
and generic types.
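For instance, something like the following should now be accepted
(a hypothetical sketch; the 'driver::config' path and its field are
invented for illustration):

        let c = init!(driver::config::Config {
            threshold: 128,
        });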
- Implement the 'Zeroable' trait for the 'UnsafeCell<T>' and
'Opaque<T>' types.
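A sketch of the shape these impls take (illustrative, not the
verbatim code in this pull):

        // SAFETY (sketch): all-zero bytes are a valid `T`, hence a
        // valid `UnsafeCell<T>`; `Opaque<T>` wraps a `MaybeUninit<T>`,
        // for which any byte pattern is allowed.
        unsafe impl<T: Zeroable> Zeroable for UnsafeCell<T> {}
        unsafe impl<T> Zeroable for Opaque<T> {}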
- Make initializer values inaccessible after initialization.
- Make guards in the init macros hygienic.
'allocator' module:
- Use 'krealloc_aligned()' in 'KernelAllocator::alloc', preventing
misaligned allocations when the Rust 1.71.1 upgrade is applied
later in this pull.
The equivalent fix for the previous compiler version (where
'KernelAllocator' is not yet used) was merged into 6.5 already,
which added the 'krealloc_aligned()' function used here.
- Implement 'KernelAllocator::{realloc, alloc_zeroed}' for
performance, using 'krealloc_aligned()' too, which forwards the
call to the C API.
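The forwarding pattern described above looks roughly like this (a
simplified sketch, not the verbatim kernel source; 'krealloc_aligned()'
and 'bindings::kfree()' are the kernel-side helpers mentioned):

        unsafe impl GlobalAlloc for KernelAllocator {
            unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
                // A null pointer makes `krealloc()` behave like
                // `kmalloc()`; `krealloc_aligned()` pads the requested
                // size so the result honors `layout.align()`.
                krealloc_aligned(ptr::null_mut(), layout, bindings::GFP_KERNEL)
            }

            unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
                unsafe { bindings::kfree(ptr as *const core::ffi::c_void) }
            }

            // `realloc()` and `alloc_zeroed()` have default
            // implementations, but overriding them to call
            // `krealloc_aligned()` directly avoids an extra
            // allocate-copy-free round trip.
        }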
'types' module:
- Make 'Opaque' be '!Unpin', removing the need to add a
'PhantomPinned' field to Rust structs that contain C structs which
must not be moved.
- Make 'Opaque' use 'UnsafeCell' as the outer type, rather than
inner.
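After these two changes, the type looks roughly like this (a sketch
of the shape, not the verbatim source):

        pub struct Opaque<T> {
            // `UnsafeCell` is now the outer wrapper, with the
            // possibly-uninitialized value inside.
            value: UnsafeCell<MaybeUninit<T>>,
            // `PhantomPinned` makes `Opaque<T>` be `!Unpin`, so structs
            // wrapping C data no longer need their own pinning marker.
            _pin: PhantomPinned,
        }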
Documentation:
- Suggest obtaining the source code of Rust's 'core' library using
the tarball instead of the repository.
MAINTAINERS:
- Andreas and Alice, from Samsung and Google respectively, are
joining as reviewers of the "RUST" entry.
As well as a few other minor changes and cleanups"
* tag 'rust-6.6' of https://github.com/Rust-for-Linux/linux: (42 commits)
rust: init: update expanded macro explanation
rust: init: add `{pin_}chain` functions to `{Pin}Init<T, E>`
rust: init: make `PinInit<T, E>` a supertrait of `Init<T, E>`
rust: init: implement `Zeroable` for `UnsafeCell<T>` and `Opaque<T>`
rust: init: add support for arbitrary paths in init macros
rust: init: add functions to create array initializers
rust: init: add `..Zeroable::zeroed()` syntax for zeroing all missing fields
rust: init: make initializer values inaccessible after initializing
rust: init: wrap type checking struct initializers in a closure
rust: init: make guards in the init macros hygienic
rust: add derive macro for `Zeroable`
rust: init: make `#[pin_data]` compatible with conditional compilation of fields
rust: init: consolidate init macros
docs: rust: clarify what 'rustup override' does
docs: rust: update instructions for obtaining 'core' source
docs: rust: add command line to rust-analyzer section
scripts: generate_rust_analyzer: provide `cfg`s for `core` and `alloc`
rust: bindgen: upgrade to 0.65.1
rust: enable `no_mangle_with_rust_abi` Clippy lint
rust: upgrade to Rust 1.71.1
...
Diffstat (limited to 'rust/alloc')
 rust/alloc/alloc.rs            |  20
 rust/alloc/boxed.rs            | 131
 rust/alloc/lib.rs              |  48
 rust/alloc/raw_vec.rs          |  18
 rust/alloc/slice.rs            |  43
 rust/alloc/vec/drain.rs        |   8
 rust/alloc/vec/drain_filter.rs |   8
 rust/alloc/vec/into_iter.rs    |  35
 rust/alloc/vec/mod.rs          |  84
 9 files changed, 195 insertions(+), 200 deletions(-)
diff --git a/rust/alloc/alloc.rs b/rust/alloc/alloc.rs
index acf22d45e6f2..0b6bf5b6da43 100644
--- a/rust/alloc/alloc.rs
+++ b/rust/alloc/alloc.rs
@@ -16,8 +16,6 @@ use core::ptr::{self, NonNull};
 #[doc(inline)]
 pub use core::alloc::*;

-use core::marker::Destruct;
-
 #[cfg(test)]
 mod tests;

@@ -41,6 +39,9 @@ extern "Rust" {
     #[rustc_allocator_zeroed]
     #[rustc_nounwind]
     fn __rust_alloc_zeroed(size: usize, align: usize) -> *mut u8;
+
+    #[cfg(not(bootstrap))]
+    static __rust_no_alloc_shim_is_unstable: u8;
 }

 /// The global memory allocator.
@@ -94,7 +95,14 @@ pub use std::alloc::Global;
 #[must_use = "losing the pointer will leak memory"]
 #[inline]
 pub unsafe fn alloc(layout: Layout) -> *mut u8 {
-    unsafe { __rust_alloc(layout.size(), layout.align()) }
+    unsafe {
+        // Make sure we don't accidentally allow omitting the allocator shim in
+        // stable code until it is actually stabilized.
+        #[cfg(not(bootstrap))]
+        core::ptr::read_volatile(&__rust_no_alloc_shim_is_unstable);
+
+        __rust_alloc(layout.size(), layout.align())
+    }
 }

 /// Deallocate memory with the global allocator.
@@ -333,16 +341,12 @@ unsafe fn exchange_malloc(size: usize, align: usize) -> *mut u8 {

 #[cfg_attr(not(test), lang = "box_free")]
 #[inline]
-#[rustc_const_unstable(feature = "const_box", issue = "92521")]
 // This signature has to be the same as `Box`, otherwise an ICE will happen.
 // When an additional parameter to `Box` is added (like `A: Allocator`), this has to be added here as
 // well.
 // For example if `Box` is changed to `struct Box<T: ?Sized, A: Allocator>(Unique<T>, A)`,
 // this function has to be changed to `fn box_free<T: ?Sized, A: Allocator>(Unique<T>, A)` as well.
-pub(crate) const unsafe fn box_free<T: ?Sized, A: ~const Allocator + ~const Destruct>(
-    ptr: Unique<T>,
-    alloc: A,
-) {
+pub(crate) unsafe fn box_free<T: ?Sized, A: Allocator>(ptr: Unique<T>, alloc: A) {
     unsafe {
         let size = size_of_val(ptr.as_ref());
         let align = min_align_of_val(ptr.as_ref());
diff --git a/rust/alloc/boxed.rs b/rust/alloc/boxed.rs
index 14af9860c36c..c8173cea8317 100644
--- a/rust/alloc/boxed.rs
+++ b/rust/alloc/boxed.rs
@@ -152,16 +152,13 @@ use core::any::Any;
 use core::async_iter::AsyncIterator;
 use core::borrow;
 use core::cmp::Ordering;
-use core::convert::{From, TryFrom};
 use core::error::Error;
 use core::fmt;
 use core::future::Future;
 use core::hash::{Hash, Hasher};
-#[cfg(not(no_global_oom_handling))]
-use core::iter::FromIterator;
-use core::iter::{FusedIterator, Iterator};
+use core::iter::FusedIterator;
 use core::marker::Tuple;
-use core::marker::{Destruct, Unpin, Unsize};
+use core::marker::Unsize;
 use core::mem;
 use core::ops::{
     CoerceUnsized, Deref, DerefMut, DispatchFromDyn, Generator, GeneratorState, Receiver,
@@ -218,6 +215,7 @@ impl<T> Box<T> {
     #[inline(always)]
     #[stable(feature = "rust1", since = "1.0.0")]
     #[must_use]
+    #[rustc_diagnostic_item = "box_new"]
     pub fn new(x: T) -> Self {
         #[rustc_box]
         Box::new(x)
@@ -287,9 +285,7 @@ impl<T> Box<T> {
     #[must_use]
     #[inline(always)]
     pub fn pin(x: T) -> Pin<Box<T>> {
-        (#[rustc_box]
-        Box::new(x))
-        .into()
+        Box::new(x).into()
     }

     /// Allocates memory on the heap then places `x` into it,
@@ -381,12 +377,11 @@ impl<T, A: Allocator> Box<T, A> {
     /// ```
     #[cfg(not(no_global_oom_handling))]
     #[unstable(feature = "allocator_api", issue = "32838")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
     #[must_use]
     #[inline]
-    pub const fn new_in(x: T, alloc: A) -> Self
+    pub fn new_in(x: T, alloc: A) -> Self
     where
-        A: ~const Allocator + ~const Destruct,
+        A: Allocator,
     {
         let mut boxed = Self::new_uninit_in(alloc);
         unsafe {
@@ -411,12 +406,10 @@ impl<T, A: Allocator> Box<T, A> {
     /// # Ok::<(), std::alloc::AllocError>(())
     /// ```
     #[unstable(feature = "allocator_api", issue = "32838")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
     #[inline]
-    pub const fn try_new_in(x: T, alloc: A) -> Result<Self, AllocError>
+    pub fn try_new_in(x: T, alloc: A) -> Result<Self, AllocError>
     where
-        T: ~const Destruct,
-        A: ~const Allocator + ~const Destruct,
+        A: Allocator,
     {
         let mut boxed = Self::try_new_uninit_in(alloc)?;
         unsafe {
@@ -446,13 +439,12 @@ impl<T, A: Allocator> Box<T, A> {
     /// assert_eq!(*five, 5)
     /// ```
     #[unstable(feature = "allocator_api", issue = "32838")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
     #[cfg(not(no_global_oom_handling))]
     #[must_use]
     // #[unstable(feature = "new_uninit", issue = "63291")]
-    pub const fn new_uninit_in(alloc: A) -> Box<mem::MaybeUninit<T>, A>
+    pub fn new_uninit_in(alloc: A) -> Box<mem::MaybeUninit<T>, A>
     where
-        A: ~const Allocator + ~const Destruct,
+        A: Allocator,
     {
         let layout = Layout::new::<mem::MaybeUninit<T>>();
         // NOTE: Prefer match over unwrap_or_else since closure sometimes not inlineable.
@@ -487,10 +479,9 @@ impl<T, A: Allocator> Box<T, A> {
     /// ```
     #[unstable(feature = "allocator_api", issue = "32838")]
     // #[unstable(feature = "new_uninit", issue = "63291")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
-    pub const fn try_new_uninit_in(alloc: A) -> Result<Box<mem::MaybeUninit<T>, A>, AllocError>
+    pub fn try_new_uninit_in(alloc: A) -> Result<Box<mem::MaybeUninit<T>, A>, AllocError>
     where
-        A: ~const Allocator + ~const Destruct,
+        A: Allocator,
     {
         let layout = Layout::new::<mem::MaybeUninit<T>>();
         let ptr = alloc.allocate(layout)?.cast();
@@ -518,13 +509,12 @@ impl<T, A: Allocator> Box<T, A> {
     ///
     /// [zeroed]: mem::MaybeUninit::zeroed
     #[unstable(feature = "allocator_api", issue = "32838")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
     #[cfg(not(no_global_oom_handling))]
     // #[unstable(feature = "new_uninit", issue = "63291")]
     #[must_use]
-    pub const fn new_zeroed_in(alloc: A) -> Box<mem::MaybeUninit<T>, A>
+    pub fn new_zeroed_in(alloc: A) -> Box<mem::MaybeUninit<T>, A>
     where
-        A: ~const Allocator + ~const Destruct,
+        A: Allocator,
     {
         let layout = Layout::new::<mem::MaybeUninit<T>>();
         // NOTE: Prefer match over unwrap_or_else since closure sometimes not inlineable.
@@ -559,10 +549,9 @@ impl<T, A: Allocator> Box<T, A> {
     /// [zeroed]: mem::MaybeUninit::zeroed
     #[unstable(feature = "allocator_api", issue = "32838")]
     // #[unstable(feature = "new_uninit", issue = "63291")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
-    pub const fn try_new_zeroed_in(alloc: A) -> Result<Box<mem::MaybeUninit<T>, A>, AllocError>
+    pub fn try_new_zeroed_in(alloc: A) -> Result<Box<mem::MaybeUninit<T>, A>, AllocError>
     where
-        A: ~const Allocator + ~const Destruct,
+        A: Allocator,
     {
         let layout = Layout::new::<mem::MaybeUninit<T>>();
         let ptr = alloc.allocate_zeroed(layout)?.cast();
@@ -578,12 +567,11 @@ impl<T, A: Allocator> Box<T, A> {
     /// construct a (pinned) `Box` in a different way than with [`Box::new_in`].
     #[cfg(not(no_global_oom_handling))]
     #[unstable(feature = "allocator_api", issue = "32838")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
     #[must_use]
     #[inline(always)]
-    pub const fn pin_in(x: T, alloc: A) -> Pin<Self>
+    pub fn pin_in(x: T, alloc: A) -> Pin<Self>
     where
-        A: 'static + ~const Allocator + ~const Destruct,
+        A: 'static + Allocator,
     {
         Self::into_pin(Self::new_in(x, alloc))
     }
@@ -592,8 +580,7 @@ impl<T, A: Allocator> Box<T, A> {
     ///
     /// This conversion does not allocate on the heap and happens in place.
     #[unstable(feature = "box_into_boxed_slice", issue = "71582")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
-    pub const fn into_boxed_slice(boxed: Self) -> Box<[T], A> {
+    pub fn into_boxed_slice(boxed: Self) -> Box<[T], A> {
         let (raw, alloc) = Box::into_raw_with_allocator(boxed);
         unsafe { Box::from_raw_in(raw as *mut [T; 1], alloc) }
     }
@@ -610,12 +597,8 @@ impl<T, A: Allocator> Box<T, A> {
     /// assert_eq!(Box::into_inner(c), 5);
     /// ```
     #[unstable(feature = "box_into_inner", issue = "80437")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
     #[inline]
-    pub const fn into_inner(boxed: Self) -> T
-    where
-        Self: ~const Destruct,
-    {
+    pub fn into_inner(boxed: Self) -> T {
         *boxed
     }
 }
@@ -829,9 +812,8 @@ impl<T, A: Allocator> Box<mem::MaybeUninit<T>, A> {
     /// assert_eq!(*five, 5)
     /// ```
     #[unstable(feature = "new_uninit", issue = "63291")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
     #[inline]
-    pub const unsafe fn assume_init(self) -> Box<T, A> {
+    pub unsafe fn assume_init(self) -> Box<T, A> {
         let (raw, alloc) = Box::into_raw_with_allocator(self);
         unsafe { Box::from_raw_in(raw as *mut T, alloc) }
     }
@@ -864,9 +846,8 @@ impl<T, A: Allocator> Box<mem::MaybeUninit<T>, A> {
     /// }
     /// ```
     #[unstable(feature = "new_uninit", issue = "63291")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
     #[inline]
-    pub const fn write(mut boxed: Self, value: T) -> Box<T, A> {
+    pub fn write(mut boxed: Self, value: T) -> Box<T, A> {
         unsafe {
             (*boxed).write(value);
             boxed.assume_init()
@@ -1110,9 +1091,8 @@ impl<T: ?Sized, A: Allocator> Box<T, A> {
     ///
     /// [memory layout]: self#memory-layout
     #[unstable(feature = "allocator_api", issue = "32838")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
     #[inline]
-    pub const fn into_raw_with_allocator(b: Self) -> (*mut T, A) {
+    pub fn into_raw_with_allocator(b: Self) -> (*mut T, A) {
         let (leaked, alloc) = Box::into_unique(b);
         (leaked.as_ptr(), alloc)
     }
@@ -1122,10 +1102,9 @@ impl<T: ?Sized, A: Allocator> Box<T, A> {
         issue = "none",
         reason = "use `Box::leak(b).into()` or `Unique::from(Box::leak(b))` instead"
     )]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
     #[inline]
     #[doc(hidden)]
-    pub const fn into_unique(b: Self) -> (Unique<T>, A) {
+    pub fn into_unique(b: Self) -> (Unique<T>, A) {
         // Box is recognized as a "unique pointer" by Stacked Borrows, but internally it is a
         // raw pointer for the type system. Turning it directly into a raw pointer would not be
         // recognized as "releasing" the unique pointer to permit aliased raw accesses,
@@ -1183,9 +1162,8 @@ impl<T: ?Sized, A: Allocator> Box<T, A> {
     /// assert_eq!(*static_ref, [4, 2, 3]);
     /// ```
     #[stable(feature = "box_leak", since = "1.26.0")]
-    #[rustc_const_unstable(feature = "const_box", issue = "92521")]
     #[inline]
-    pub const fn leak<'a>(b: Self) -> &'a mut T
+    pub fn leak<'a>(b: Self) -> &'a mut T
     where
         A: 'a,
     {
@@ -1246,16 +1224,16 @@ unsafe impl<#[may_dangle] T: ?Sized, A: Allocator> Drop for Box<T, A> {
 #[stable(feature = "rust1", since = "1.0.0")]
 impl<T: Default> Default for Box<T> {
     /// Creates a `Box<T>`, with the `Default` value for T.
+    #[inline]
     fn default() -> Self {
-        #[rustc_box]
         Box::new(T::default())
     }
 }

 #[cfg(not(no_global_oom_handling))]
 #[stable(feature = "rust1", since = "1.0.0")]
-#[rustc_const_unstable(feature = "const_default_impls", issue = "87864")]
-impl<T> const Default for Box<[T]> {
+impl<T> Default for Box<[T]> {
+    #[inline]
     fn default() -> Self {
         let ptr: Unique<[T]> = Unique::<[T; 0]>::dangling();
         Box(ptr, Global)
@@ -1264,8 +1242,8 @@ impl<T> const Default for Box<[T]> {

 #[cfg(not(no_global_oom_handling))]
 #[stable(feature = "default_box_extra", since = "1.17.0")]
-#[rustc_const_unstable(feature = "const_default_impls", issue = "87864")]
-impl const Default for Box<str> {
+impl Default for Box<str> {
+    #[inline]
     fn default() -> Self {
         // SAFETY: This is the same as `Unique::cast<U>` but with an unsized `U = str`.
         let ptr: Unique<str> = unsafe {
@@ -1461,8 +1439,7 @@ impl<T> From<T> for Box<T> {
 }

 #[stable(feature = "pin", since = "1.33.0")]
-#[rustc_const_unstable(feature = "const_box", issue = "92521")]
-impl<T: ?Sized, A: Allocator> const From<Box<T, A>> for Pin<Box<T, A>>
+impl<T: ?Sized, A: Allocator> From<Box<T, A>> for Pin<Box<T, A>>
 where
     A: 'static,
 {
@@ -1482,9 +1459,36 @@ where
     }
 }

+/// Specialization trait used for `From<&[T]>`.
+#[cfg(not(no_global_oom_handling))]
+trait BoxFromSlice<T> {
+    fn from_slice(slice: &[T]) -> Self;
+}
+
+#[cfg(not(no_global_oom_handling))]
+impl<T: Clone> BoxFromSlice<T> for Box<[T]> {
+    #[inline]
+    default fn from_slice(slice: &[T]) -> Self {
+        slice.to_vec().into_boxed_slice()
+    }
+}
+
+#[cfg(not(no_global_oom_handling))]
+impl<T: Copy> BoxFromSlice<T> for Box<[T]> {
+    #[inline]
+    fn from_slice(slice: &[T]) -> Self {
+        let len = slice.len();
+        let buf = RawVec::with_capacity(len);
+        unsafe {
+            ptr::copy_nonoverlapping(slice.as_ptr(), buf.ptr(), len);
+            buf.into_box(slice.len()).assume_init()
+        }
+    }
+}
+
 #[cfg(not(no_global_oom_handling))]
 #[stable(feature = "box_from_slice", since = "1.17.0")]
-impl<T: Copy> From<&[T]> for Box<[T]> {
+impl<T: Clone> From<&[T]> for Box<[T]> {
     /// Converts a `&[T]` into a `Box<[T]>`
     ///
     /// This conversion allocates on the heap
@@ -1498,19 +1502,15 @@ impl<T: Copy> From<&[T]> for Box<[T]> {
     ///
     /// println!("{boxed_slice:?}");
     /// ```
+    #[inline]
     fn from(slice: &[T]) -> Box<[T]> {
-        let len = slice.len();
-        let buf = RawVec::with_capacity(len);
-        unsafe {
-            ptr::copy_nonoverlapping(slice.as_ptr(), buf.ptr(), len);
-            buf.into_box(slice.len()).assume_init()
-        }
+        <Self as BoxFromSlice<T>>::from_slice(slice)
     }
 }

 #[cfg(not(no_global_oom_handling))]
 #[stable(feature = "box_from_cow", since = "1.45.0")]
-impl<T: Copy> From<Cow<'_, [T]>> for Box<[T]> {
+impl<T: Clone> From<Cow<'_, [T]>> for Box<[T]> {
     /// Converts a `Cow<'_, [T]>` into a `Box<[T]>`
     ///
     /// When `cow` is the `Cow::Borrowed` variant, this
@@ -1620,7 +1620,6 @@ impl<T, const N: usize> From<[T; N]> for Box<[T]> {
     /// println!("{boxed:?}");
     /// ```
     fn from(array: [T; N]) -> Box<[T]> {
-        #[rustc_box]
         Box::new(array)
     }
 }
@@ -1899,8 +1898,7 @@ impl<T: ?Sized, A: Allocator> fmt::Pointer for Box<T, A> {
 }

 #[stable(feature = "rust1", since = "1.0.0")]
-#[rustc_const_unstable(feature = "const_box", issue = "92521")]
-impl<T: ?Sized, A: Allocator> const Deref for Box<T, A> {
+impl<T: ?Sized, A: Allocator> Deref for Box<T, A> {
     type Target = T;

     fn deref(&self) -> &T {
@@ -1909,8 +1907,7 @@ impl<T: ?Sized, A: Allocator> const Deref for Box<T, A> {
 }

 #[stable(feature = "rust1", since = "1.0.0")]
-#[rustc_const_unstable(feature = "const_box", issue = "92521")]
-impl<T: ?Sized, A: Allocator> const DerefMut for Box<T, A> {
+impl<T: ?Sized, A: Allocator> DerefMut for Box<T, A> {
     fn deref_mut(&mut self) -> &mut T {
         &mut **self
     }
diff --git a/rust/alloc/lib.rs b/rust/alloc/lib.rs
index 5f374378b0d4..85e91356ecb3 100644
--- a/rust/alloc/lib.rs
+++ b/rust/alloc/lib.rs
@@ -89,35 +89,37 @@
 #![warn(missing_debug_implementations)]
 #![warn(missing_docs)]
 #![allow(explicit_outlives_requirements)]
+#![warn(multiple_supertrait_upcastable)]
 //
 // Library features:
+// tidy-alphabetical-start
+#![cfg_attr(not(no_global_oom_handling), feature(const_alloc_error))]
+#![cfg_attr(not(no_global_oom_handling), feature(const_btree_len))]
+#![cfg_attr(test, feature(is_sorted))]
+#![cfg_attr(test, feature(new_uninit))]
 #![feature(alloc_layout_extra)]
 #![feature(allocator_api)]
 #![feature(array_chunks)]
 #![feature(array_into_iter_constructors)]
 #![feature(array_methods)]
 #![feature(array_windows)]
+#![feature(ascii_char)]
 #![feature(assert_matches)]
 #![feature(async_iterator)]
 #![feature(coerce_unsized)]
-#![cfg_attr(not(no_global_oom_handling), feature(const_alloc_error))]
+#![feature(const_align_of_val)]
 #![feature(const_box)]
-#![cfg_attr(not(no_global_oom_handling), feature(const_btree_len))]
 #![cfg_attr(not(no_borrow), feature(const_cow_is_borrowed))]
-#![feature(const_convert)]
-#![feature(const_size_of_val)]
-#![feature(const_align_of_val)]
-#![feature(const_ptr_read)]
-#![feature(const_maybe_uninit_zeroed)]
-#![feature(const_maybe_uninit_write)]
+#![feature(const_eval_select)]
 #![feature(const_maybe_uninit_as_mut_ptr)]
+#![feature(const_maybe_uninit_write)]
+#![feature(const_maybe_uninit_zeroed)]
+#![feature(const_pin)]
 #![feature(const_refs_to_cell)]
+#![feature(const_size_of_val)]
+#![feature(const_waker)]
 #![feature(core_intrinsics)]
 #![feature(core_panic)]
-#![feature(const_eval_select)]
-#![feature(const_pin)]
-#![feature(const_waker)]
-#![feature(cstr_from_bytes_until_nul)]
 #![feature(dispatch_from_dyn)]
 #![feature(error_generic_member_access)]
 #![feature(error_in_core)]
@@ -128,7 +130,6 @@
 #![feature(hasher_prefixfree_extras)]
 #![feature(inline_const)]
 #![feature(inplace_iteration)]
-#![cfg_attr(test, feature(is_sorted))]
 #![feature(iter_advance_by)]
 #![feature(iter_next_chunk)]
 #![feature(iter_repeat_n)]
@@ -136,8 +137,6 @@
 #![feature(maybe_uninit_slice)]
 #![feature(maybe_uninit_uninit_array)]
 #![feature(maybe_uninit_uninit_array_transpose)]
-#![cfg_attr(test, feature(new_uninit))]
-#![feature(nonnull_slice_from_raw_parts)]
 #![feature(pattern)]
 #![feature(pointer_byte_offsets)]
 #![feature(provide_any)]
@@ -153,6 +152,7 @@
 #![feature(slice_ptr_get)]
 #![feature(slice_ptr_len)]
 #![feature(slice_range)]
+#![feature(std_internals)]
 #![feature(str_internals)]
 #![feature(strict_provenance)]
 #![feature(trusted_len)]
@@ -163,40 +163,42 @@
 #![feature(unicode_internals)]
 #![feature(unsize)]
 #![feature(utf8_chunks)]
-#![feature(std_internals)]
+// tidy-alphabetical-end
 //
 // Language features:
+// tidy-alphabetical-start
+#![cfg_attr(not(test), feature(generator_trait))]
+#![cfg_attr(test, feature(panic_update_hook))]
+#![cfg_attr(test, feature(test))]
 #![feature(allocator_internals)]
 #![feature(allow_internal_unstable)]
 #![feature(associated_type_bounds)]
+#![feature(c_unwind)]
 #![feature(cfg_sanitize)]
-#![feature(const_deref)]
 #![feature(const_mut_refs)]
-#![feature(const_ptr_write)]
 #![feature(const_precise_live_drops)]
+#![feature(const_ptr_write)]
 #![feature(const_trait_impl)]
 #![feature(const_try)]
 #![feature(dropck_eyepatch)]
 #![feature(exclusive_range_pattern)]
 #![feature(fundamental)]
-#![cfg_attr(not(test), feature(generator_trait))]
 #![feature(hashmap_internals)]
 #![feature(lang_items)]
 #![feature(min_specialization)]
+#![feature(multiple_supertrait_upcastable)]
 #![feature(negative_impls)]
 #![feature(never_type)]
+#![feature(pointer_is_aligned)]
 #![feature(rustc_allow_const_fn_unstable)]
 #![feature(rustc_attrs)]
-#![feature(pointer_is_aligned)]
 #![feature(slice_internals)]
 #![feature(staged_api)]
 #![feature(stmt_expr_attributes)]
-#![cfg_attr(test, feature(test))]
 #![feature(unboxed_closures)]
 #![feature(unsized_fn_params)]
-#![feature(c_unwind)]
 #![feature(with_negative_coherence)]
-#![cfg_attr(test, feature(panic_update_hook))]
+// tidy-alphabetical-end
 //
 // Rustdoc features:
 #![feature(doc_cfg)]
diff --git a/rust/alloc/raw_vec.rs b/rust/alloc/raw_vec.rs
index 5db87eac53b7..65d5ce15828e 100644
--- a/rust/alloc/raw_vec.rs
+++ b/rust/alloc/raw_vec.rs
@@ -6,7 +6,6 @@ use core::alloc::LayoutError;
 use core::cmp;
 use core::intrinsics;
 use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties};
-use core::ops::Drop;
 use core::ptr::{self, NonNull, Unique};
 use core::slice;

@@ -274,10 +273,15 @@ impl<T, A: Allocator> RawVec<T, A> {
         if T::IS_ZST || self.cap == 0 {
             None
         } else {
-            // We have an allocated chunk of memory, so we can bypass runtime
-            // checks to get our current layout.
+            // We could use Layout::array here which ensures the absence of isize and usize overflows
+            // and could hypothetically handle differences between stride and size, but this memory
+            // has already been allocated so we know it can't overflow and currently rust does not
+            // support such types. So we can do better by skipping some checks and avoid an unwrap.
+            let _: () = const { assert!(mem::size_of::<T>() % mem::align_of::<T>() == 0) };
             unsafe {
-                let layout = Layout::array::<T>(self.cap).unwrap_unchecked();
+                let align = mem::align_of::<T>();
+                let size = mem::size_of::<T>().unchecked_mul(self.cap);
+                let layout = Layout::from_size_align_unchecked(size, align);
                 Some((self.ptr.cast().into(), layout))
             }
         }
@@ -465,11 +469,13 @@ impl<T, A: Allocator> RawVec<T, A> {
         assert!(cap <= self.capacity(), "Tried to shrink to a larger capacity");

         let (ptr, layout) = if let Some(mem) = self.current_memory() { mem } else { return Ok(()) };
-
+        // See current_memory() why this assert is here
+        let _: () = const { assert!(mem::size_of::<T>() % mem::align_of::<T>() == 0) };
         let ptr = unsafe {
             // `Layout::array` cannot overflow here because it would have
             // overflowed earlier when capacity was larger.
-            let new_layout = Layout::array::<T>(cap).unwrap_unchecked();
+            let new_size = mem::size_of::<T>().unchecked_mul(cap);
+            let new_layout = Layout::from_size_align_unchecked(new_size, layout.align());
             self.alloc
                 .shrink(ptr, layout, new_layout)
                 .map_err(|_| AllocError { layout: new_layout, non_exhaustive: () })?
diff --git a/rust/alloc/slice.rs b/rust/alloc/slice.rs
index 245e01590df7..6ac463bd3edc 100644
--- a/rust/alloc/slice.rs
+++ b/rust/alloc/slice.rs
@@ -784,6 +784,38 @@ impl<T, A: Allocator> BorrowMut<[T]> for Vec<T, A> {
     }
 }

+// Specializable trait for implementing ToOwned::clone_into. This is
+// public in the crate and has the Allocator parameter so that
+// vec::clone_from use it too.
+#[cfg(not(no_global_oom_handling))]
+pub(crate) trait SpecCloneIntoVec<T, A: Allocator> {
+    fn clone_into(&self, target: &mut Vec<T, A>);
+}
+
+#[cfg(not(no_global_oom_handling))]
+impl<T: Clone, A: Allocator> SpecCloneIntoVec<T, A> for [T] {
+    default fn clone_into(&self, target: &mut Vec<T, A>) {
+        // drop anything in target that will not be overwritten
+        target.truncate(self.len());
+
+        // target.len <= self.len due to the truncate above, so the
+        // slices here are always in-bounds.
+        let (init, tail) = self.split_at(target.len());
+
+        // reuse the contained values' allocations/resources.
+        target.clone_from_slice(init);
+        target.extend_from_slice(tail);
+    }
+}
+
+#[cfg(not(no_global_oom_handling))]
+impl<T: Copy, A: Allocator> SpecCloneIntoVec<T, A> for [T] {
+    fn clone_into(&self, target: &mut Vec<T, A>) {
+        target.clear();
+        target.extend_from_slice(self);
+    }
+}
+
 #[cfg(not(no_global_oom_handling))]
 #[stable(feature = "rust1", since = "1.0.0")]
 impl<T: Clone> ToOwned for [T] {
@@ -799,16 +831,7 @@ impl<T: Clone> ToOwned for [T] {
     }

     fn clone_into(&self, target: &mut Vec<T>) {
-        // drop anything in target that will not be overwritten
-        target.truncate(self.len());
-
-        // target.len <= self.len due to the truncate above, so the
-        // slices here are always in-bounds.
-        let (init, tail) = self.split_at(target.len());
-
-        // reuse the contained values' allocations/resources.
-        target.clone_from_slice(init);
-        target.extend_from_slice(tail);
+        SpecCloneIntoVec::clone_into(self, target);
     }
 }
diff --git a/rust/alloc/vec/drain.rs b/rust/alloc/vec/drain.rs
index d503d2f478ce..78177a9e2ad0 100644
--- a/rust/alloc/vec/drain.rs
+++ b/rust/alloc/vec/drain.rs
@@ -18,7 +18,7 @@ use super::Vec;
 ///
 /// ```
 /// let mut v = vec![0, 1, 2];
-/// let iter: std::vec::Drain<_> = v.drain(..);
+/// let iter: std::vec::Drain<'_, _> = v.drain(..);
 /// ```
 #[stable(feature = "drain", since = "1.6.0")]
 pub struct Drain<
@@ -114,9 +114,7 @@ impl<'a, T, A: Allocator> Drain<'a, T, A> {
             let unyielded_ptr = this.iter.as_slice().as_ptr();

             // ZSTs have no identity, so we don't need to move them around.
-            let needs_move = mem::size_of::<T>() != 0;
-
-            if needs_move {
+            if !T::IS_ZST {
                 let start_ptr = source_vec.as_mut_ptr().add(start);

                 // memmove back unyielded elements
@@ -199,7 +197,7 @@ impl<T, A: Allocator> Drop for Drain<'_, T, A> {
             }
         }

-        let iter = mem::replace(&mut self.iter, (&mut []).iter());
+        let iter = mem::take(&mut self.iter);
         let drop_len = iter.len();

         let mut vec = self.vec;
diff --git a/rust/alloc/vec/drain_filter.rs b/rust/alloc/vec/drain_filter.rs
index 4b019220657d..09efff090e42 100644
--- a/rust/alloc/vec/drain_filter.rs
+++ b/rust/alloc/vec/drain_filter.rs
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: Apache-2.0 OR MIT

 use crate::alloc::{Allocator, Global};
-use core::mem::{self, ManuallyDrop};
+use core::mem::{ManuallyDrop, SizedTypeProperties};
 use core::ptr;
 use core::slice;

@@ -18,7 +18,7 @@ use super::Vec;
 /// #![feature(drain_filter)]
 ///
 /// let mut v = vec![0, 1, 2];
-/// let iter: std::vec::DrainFilter<_, _> = v.drain_filter(|x| *x % 2 == 0);
+/// let iter: std::vec::DrainFilter<'_, _, _> = v.drain_filter(|x| *x % 2 == 0);
 /// ```
 #[unstable(feature = "drain_filter", reason = "recently added", issue = "43244")]
 #[derive(Debug)]
@@ -98,9 +98,7 @@ where
             unsafe {
                 // ZSTs have no identity, so we don't need to move them around.
-                let needs_move = mem::size_of::<T>() != 0;
-
-                if needs_move && this.idx < this.old_len && this.del > 0 {
+                if !T::IS_ZST && this.idx < this.old_len && this.del > 0 {
                     let ptr = this.vec.as_mut_ptr();
                     let src = ptr.add(this.idx);
                     let dst = src.sub(this.del);
diff --git a/rust/alloc/vec/into_iter.rs b/rust/alloc/vec/into_iter.rs
index 34a2a70d6ded..aac0ec16aef1 100644
--- a/rust/alloc/vec/into_iter.rs
+++ b/rust/alloc/vec/into_iter.rs
@@ -13,6 +13,7 @@ use core::iter::{
 };
 use core::marker::PhantomData;
 use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties};
+use core::num::NonZeroUsize;
 #[cfg(not(no_global_oom_handling))]
 use core::ops::Deref;
 use core::ptr::{self, NonNull};
@@ -109,7 +110,7 @@ impl<T, A: Allocator> IntoIter<T, A> {
     /// ```
     /// # let mut into_iter = Vec::<u8>::with_capacity(10).into_iter();
     /// let mut into_iter = std::mem::replace(&mut into_iter, Vec::new().into_iter());
-    /// (&mut into_iter).for_each(core::mem::drop);
+    /// (&mut into_iter).for_each(drop);
     /// std::mem::forget(into_iter);
     /// ```
     ///
@@ -215,7 +216,7 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> {
     }

     #[inline]
-    fn advance_by(&mut self, n: usize) -> Result<(), usize> {
+    fn advance_by(&mut self, n: usize) -> Result<(), NonZeroUsize> {
         let step_size = self.len().min(n);
         let to_drop = ptr::slice_from_raw_parts_mut(self.ptr as *mut T, step_size);
         if T::IS_ZST {
@@ -229,10 +230,7 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> {
         unsafe {
             ptr::drop_in_place(to_drop);
         }