<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/block, branch v3.18.4</title>
<subtitle>Clone of https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git</subtitle>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/'/>
<entry>
<title>genhd: check for int overflow in disk_expand_part_tbl()</title>
<updated>2015-01-16T14:59:52+00:00</updated>
<author>
<name>Jens Axboe</name>
<email>axboe@fb.com</email>
</author>
<published>2014-11-19T20:06:22+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=ee9b142838a989550587a27fb3bb8ebbe8ab6fba'/>
<id>ee9b142838a989550587a27fb3bb8ebbe8ab6fba</id>
<content type='text'>
commit 5fabcb4c33fe11c7e3afdf805fde26c1a54d0953 upstream.

We can get here from blkdev_ioctl() -&gt; blkpg_ioctl() -&gt; add_partition()
with a user-passed partno value. If we pass in 0x7fffffff, the
new target in disk_expand_part_tbl() overflows the 'int' and we
access beyond the end of ptbl-&gt;part[], and even write to it when we
do the rcu_assign_pointer() to assign the new partition.

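The overflow can be sketched outside the kernel (a Python model of C
'int' wrap-around; the helper names are illustrative, not kernel code):

```python
def to_int32(x):
    # wrap to a signed 32-bit value, as C 'int' arithmetic does
    x = x % 2**32
    return x - 2**32 if x >= 2**31 else x

def partno_is_valid(partno):
    # the guard added by the patch, sketched: growing the table to
    # partno + 1 entries must not wrap a signed 32-bit 'int'
    return to_int32(partno + 1) > 0

print(partno_is_valid(0x7fffffff))   # False: 0x7fffffff + 1 wraps negative
```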
Reported-by: David Ramos &lt;daramos@stanford.edu&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>blk-mq: Fix uninitialized kobject at CPU hotplugging</title>
<updated>2015-01-16T14:59:48+00:00</updated>
<author>
<name>Takashi Iwai</name>
<email>tiwai@suse.de</email>
</author>
<published>2014-12-10T15:38:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=fa5e4747af360dc65fdded3cc0ff1a6b8c227a71'/>
<id>fa5e4747af360dc65fdded3cc0ff1a6b8c227a71</id>
<content type='text'>
commit 06a41a99d13d8e919e9a00a4849e6b85ae492592 upstream.

When a CPU is hotplugged, the current blk-mq spews a warning like:

  kobject '(null)' (ffffe8ffffc8b5d8): tried to add an uninitialized object, something is seriously wrong.
  CPU: 1 PID: 1386 Comm: systemd-udevd Not tainted 3.18.0-rc7-2.g088d59b-default #1
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_171129-lamiak 04/01/2014
   0000000000000000 0000000000000002 ffffffff81605f07 ffffe8ffffc8b5d8
   ffffffff8132c7a0 ffff88023341d370 0000000000000020 ffff8800bb05bd58
   ffff8800bb05bd08 000000000000a0a0 000000003f441940 0000000000000007
  Call Trace:
   [&lt;ffffffff81005306&gt;] dump_trace+0x86/0x330
   [&lt;ffffffff81005644&gt;] show_stack_log_lvl+0x94/0x170
   [&lt;ffffffff81006d21&gt;] show_stack+0x21/0x50
   [&lt;ffffffff81605f07&gt;] dump_stack+0x41/0x51
   [&lt;ffffffff8132c7a0&gt;] kobject_add+0xa0/0xb0
   [&lt;ffffffff8130aee1&gt;] blk_mq_register_hctx+0x91/0xb0
   [&lt;ffffffff8130b82e&gt;] blk_mq_sysfs_register+0x3e/0x60
   [&lt;ffffffff81309298&gt;] blk_mq_queue_reinit_notify+0xf8/0x190
   [&lt;ffffffff8107cfdc&gt;] notifier_call_chain+0x4c/0x70
   [&lt;ffffffff8105fd23&gt;] cpu_notify+0x23/0x50
   [&lt;ffffffff81060037&gt;] _cpu_up+0x157/0x170
   [&lt;ffffffff810600d9&gt;] cpu_up+0x89/0xb0
   [&lt;ffffffff815fa5b5&gt;] cpu_subsys_online+0x35/0x80
   [&lt;ffffffff814323cd&gt;] device_online+0x5d/0xa0
   [&lt;ffffffff81432485&gt;] online_store+0x75/0x80
   [&lt;ffffffff81236a5a&gt;] kernfs_fop_write+0xda/0x150
   [&lt;ffffffff811c5532&gt;] vfs_write+0xb2/0x1f0
   [&lt;ffffffff811c5f42&gt;] SyS_write+0x42/0xb0
   [&lt;ffffffff8160c4ed&gt;] system_call_fastpath+0x16/0x1b
   [&lt;00007f0132fb24e0&gt;] 0x7f0132fb24e0

This is indeed because of an uninitialized kobject for blk_mq_ctx.
The blk_mq_ctx kobjects are initialized in blk_mq_sysfs_init(), but it
loops over hctx_for_each_ctx(), i.e. it initializes them only for
online CPUs.  Thus, when a CPU is hotplugged, the ctx for the newly
onlined CPU is registered without initialization.

This patch fixes the issue by initializing all the ctx kobjects
belonging to each queue.

Bugzilla: https://bugzilla.novell.com/show_bug.cgi?id=908794
Signed-off-by: Takashi Iwai &lt;tiwai@suse.de&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>blk-mq: Fix a race between bt_clear_tag() and bt_get()</title>
<updated>2015-01-16T14:59:48+00:00</updated>
<author>
<name>Bart Van Assche</name>
<email>bvanassche@acm.org</email>
</author>
<published>2014-12-09T15:58:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=3a6d400572ee7c6f6a82d1c385fcaefebd6062fc'/>
<id>3a6d400572ee7c6f6a82d1c385fcaefebd6062fc</id>
<content type='text'>
commit c38d185d4af12e8be63ca4b6745d99449c450f12 upstream.

What we need is the following two guarantees:
* Any thread that observes the effect of the test_and_set_bit() by
  __bt_get_word() also observes the preceding addition of 'current'
  to the appropriate wait list. This is guaranteed by the semantics
  of the spin_unlock() operation performed by prepare_and_wait().
  Hence the conversion of test_and_set_bit_lock() into
  test_and_set_bit().
* The wait lists are examined by bt_clear() after the tag bit has
  been cleared. clear_bit_unlock() guarantees that any thread that
  observes that the bit has been cleared also observes the store
  operations preceding clear_bit_unlock(). However,
clear_bit_unlock() does not prevent the wait lists from being
examined before the tag bit is cleared. Hence the addition of a
memory barrier between clear_bit() and the wait list examination.

Signed-off-by: Bart Van Assche &lt;bvanassche@acm.org&gt;
Cc: Christoph Hellwig &lt;hch@lst.de&gt;
Cc: Robert Elliott &lt;elliott@hp.com&gt;
Cc: Ming Lei &lt;ming.lei@canonical.com&gt;
Cc: Alexander Gordeev &lt;agordeev@redhat.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>blk-mq: Avoid that __bt_get_word() wraps multiple times</title>
<updated>2015-01-16T14:59:48+00:00</updated>
<author>
<name>Bart Van Assche</name>
<email>bvanassche@acm.org</email>
</author>
<published>2014-12-09T15:58:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=d04e14ab4713a186700ef70b2e4d994618cbc64a'/>
<id>d04e14ab4713a186700ef70b2e4d994618cbc64a</id>
<content type='text'>
commit 9e98e9d7cf6e9d2ec1cce45e8d5ccaf3f9b386f3 upstream.

If __bt_get_word() is called with last_tag != 0, and the first
find_next_zero_bit() fails, and after the wrap-around the
test_and_set_bit() call fails while find_next_zero_bit() succeeds,
and the next test_and_set_bit() call fails and the subsequent
find_next_zero_bit() does not find a zero bit, then another
wrap-around will occur. Avoid this by introducing an additional
local variable.

Signed-off-by: Bart Van Assche &lt;bvanassche@acm.org&gt;
Cc: Christoph Hellwig &lt;hch@lst.de&gt;
Cc: Robert Elliott &lt;elliott@hp.com&gt;
Cc: Ming Lei &lt;ming.lei@canonical.com&gt;
Cc: Alexander Gordeev &lt;agordeev@redhat.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>blk-mq: Fix a use-after-free</title>
<updated>2015-01-16T14:59:48+00:00</updated>
<author>
<name>Bart Van Assche</name>
<email>bvanassche@acm.org</email>
</author>
<published>2014-12-09T15:57:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=b041392d4ff7e326ef472b42720755ae39e7cec7'/>
<id>b041392d4ff7e326ef472b42720755ae39e7cec7</id>
<content type='text'>
commit 45a9c9d909b24c6ad0e28a7946e7486e73010319 upstream.

blk-mq users are allowed to free the memory request_queue.tag_set
points at after blk_cleanup_queue() has finished but before
blk_release_queue() has started. This can happen e.g. in the SCSI
core, which embeds the tag_set structure in a SCSI host structure.
The SCSI host structure is freed by scsi_host_dev_release(), which
is called after blk_cleanup_queue() has finished but can run before
blk_release_queue().

This means that it is not safe to access request_queue.tag_set from
inside blk_release_queue(). Hence remove the blk_sync_queue() call
from blk_release_queue(). This call is not necessary - outstanding
requests must have finished before blk_release_queue() is
called. Additionally, move the blk_mq_free_queue() call from
blk_release_queue() to blk_cleanup_queue() to avoid that struct
request_queue.tag_set gets accessed after it has been freed.

This patch avoids that the following kernel oops can be triggered
when deleting a SCSI host for which scsi-mq was enabled:

Call Trace:
 [&lt;ffffffff8109a7c4&gt;] lock_acquire+0xc4/0x270
 [&lt;ffffffff814ce111&gt;] mutex_lock_nested+0x61/0x380
 [&lt;ffffffff812575f0&gt;] blk_mq_free_queue+0x30/0x180
 [&lt;ffffffff8124d654&gt;] blk_release_queue+0x84/0xd0
 [&lt;ffffffff8126c29b&gt;] kobject_cleanup+0x7b/0x1a0
 [&lt;ffffffff8126c140&gt;] kobject_put+0x30/0x70
 [&lt;ffffffff81245895&gt;] blk_put_queue+0x15/0x20
 [&lt;ffffffff8125c409&gt;] disk_release+0x99/0xd0
 [&lt;ffffffff8133d056&gt;] device_release+0x36/0xb0
 [&lt;ffffffff8126c29b&gt;] kobject_cleanup+0x7b/0x1a0
 [&lt;ffffffff8126c140&gt;] kobject_put+0x30/0x70
 [&lt;ffffffff8125a78a&gt;] put_disk+0x1a/0x20
 [&lt;ffffffff811d4cb5&gt;] __blkdev_put+0x135/0x1b0
 [&lt;ffffffff811d56a0&gt;] blkdev_put+0x50/0x160
 [&lt;ffffffff81199eb4&gt;] kill_block_super+0x44/0x70
 [&lt;ffffffff8119a2a4&gt;] deactivate_locked_super+0x44/0x60
 [&lt;ffffffff8119a87e&gt;] deactivate_super+0x4e/0x70
 [&lt;ffffffff811b9833&gt;] cleanup_mnt+0x43/0x90
 [&lt;ffffffff811b98d2&gt;] __cleanup_mnt+0x12/0x20
 [&lt;ffffffff8107252c&gt;] task_work_run+0xac/0xe0
 [&lt;ffffffff81002c01&gt;] do_notify_resume+0x61/0xa0
 [&lt;ffffffff814d2c58&gt;] int_signal+0x12/0x17

Signed-off-by: Bart Van Assche &lt;bvanassche@acm.org&gt;
Cc: Christoph Hellwig &lt;hch@lst.de&gt;
Cc: Robert Elliott &lt;elliott@hp.com&gt;
Cc: Ming Lei &lt;ming.lei@canonical.com&gt;
Cc: Alexander Gordeev &lt;agordeev@redhat.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>blk-mq: use 'nr_cpu_ids' as highest CPU ID count for hwq &lt;-&gt; cpu map</title>
<updated>2015-01-16T14:59:48+00:00</updated>
<author>
<name>Jens Axboe</name>
<email>axboe@fb.com</email>
</author>
<published>2014-11-24T22:02:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=00de3a6421d5ec132d8f91f450297e06c273b8cc'/>
<id>00de3a6421d5ec132d8f91f450297e06c273b8cc</id>
<content type='text'>
commit a33c1ba2913802b6fb23e974bb2f6a4e73c8b7ce upstream.

We currently use num_possible_cpus(), but that breaks on sparc64 where
the CPU ID space is discontiguous. Use nr_cpu_ids as the highest CPU ID
instead, so we don't end up reading from invalid memory.

Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>block: fix regression where bio_integrity_process uses wrong bio_vec iterator</title>
<updated>2014-12-02T15:15:21+00:00</updated>
<author>
<name>Darrick J. Wong</name>
<email>darrick.wong@oracle.com</email>
</author>
<published>2014-11-26T01:40:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=594416a72032684792bb22510c538098db10b750'/>
<id>594416a72032684792bb22510c538098db10b750</id>
<content type='text'>
bio integrity handling is broken on a system with LVM layered atop a
DIF/DIX SCSI drive because device mapper clones the bio, modifies the
clone, and sends the clone to the lower layers for processing.
However, the clone bio has bi_vcnt == 0, which means that when the sd
driver calls bio_integrity_process to attach DIX data, the
for_each_segment_all() call (which uses bi_vcnt) returns immediately
and random garbage is sent to the disk on a disk write.  The disk of
course returns an error.

Therefore, teach bio_integrity_process() to use bio_for_each_segment()
to iterate the bio_vecs, since the per-bio iterator tracks which
bio_vecs are associated with that particular bio.  The integrity
handling code is effectively part of the "driver" (it's not the bio
owner), so it must use the correct iterator function.

v2: Fix a compiler warning about abandoned local variables.  This
patch supersedes "block: bio_integrity_process uses wrong bio_vec
iterator".  Patch applies against 3.18-rc6.

Signed-off-by: Darrick J. Wong &lt;darrick.wong@oracle.com&gt;
Acked-by: Martin K. Petersen &lt;martin.petersen@oracle.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
</content>
</entry>
<entry>
<title>block: blk-merge: fix blk_recount_segments()</title>
<updated>2014-11-11T23:24:15+00:00</updated>
<author>
<name>Ming Lei</name>
<email>tom.leiming@gmail.com</email>
</author>
<published>2014-11-11T16:15:41+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=7f60dcaaf91e911002007c7ae885ff6ef0f36c0d'/>
<id>7f60dcaaf91e911002007c7ae885ff6ef0f36c0d</id>
<content type='text'>
For a cloned bio, bio-&gt;bi_vcnt can't be used at all, and we have
to resort to bio_segments() to figure out how many segments there
are in the bio.

Signed-off-by: Ming Lei &lt;tom.leiming@gmail.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
</content>
</entry>
<entry>
<title>scsi: Fix more error handling in SCSI_IOCTL_SEND_COMMAND</title>
<updated>2014-11-10T22:41:47+00:00</updated>
<author>
<name>Tony Battersby</name>
<email>tonyb@cybernetics.com</email>
</author>
<published>2014-11-10T22:40:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=92697dc9471b8655a2f1203fa351bc37b2d46a26'/>
<id>92697dc9471b8655a2f1203fa351bc37b2d46a26</id>
<content type='text'>
Fix an error path in SCSI_IOCTL_SEND_COMMAND that calls
blk_put_request(rq) on an invalid IS_ERR(rq) pointer.

Fixes: a492f075450f ("block,scsi: fixup blk_get_request dead queue scenarios")
Signed-off-by: Tony Battersby &lt;tonyb@cybernetics.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
</content>
</entry>
<entry>
<title>blk-mq: make mq_queue_reinit_notify() freeze queues in parallel</title>
<updated>2014-11-04T21:49:31+00:00</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2014-11-04T18:52:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.exis.tech/linux.git/commit/?id=f3af020b9a8d298022b811a19719df0cf461efa5'/>
<id>f3af020b9a8d298022b811a19719df0cf461efa5</id>
<content type='text'>
q-&gt;mq_usage_counter is a percpu_ref which is killed and drained when
the queue is frozen.  On a CPU hotplug event, blk_mq_queue_reinit(),
which involves freezing the queue, is invoked on all existing queues.
Because percpu_ref killing and draining involve a RCU grace period,
doing the above on one queue after another may take a long time if
there are many queues on the system.

This patch splits out initiation of freezing and waiting for its
completion, and updates blk_mq_queue_reinit_notify() so that the
queues are frozen in parallel instead of one after another.  Note that
freezing and unfreezing are moved from blk_mq_queue_reinit() to
blk_mq_queue_reinit_notify().

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reported-by: Christian Borntraeger &lt;borntraeger@de.ibm.com&gt;
Tested-by: Christian Borntraeger &lt;borntraeger@de.ibm.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
</content>
</entry>
</feed>
